95th Percentile Billing - Aggregate Interfaces
Hi, I need to pull a report on 95th percentile usage of internet transit for my organization. We document this by putting a description on each interface of our routers/switches/firewalls in the format [SID#XXXXX], where XXXXX is replaced with the five-digit number of the client's subscription ID. We need to pull a report for all interfaces with the same code and add them together to find the aggregate internet transit for each of our customers. Would someone be able to assist us? Thanks, Adam
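For anyone tackling something similar, here is a minimal sketch of the aggregation math in PowerShell. It assumes you have already exported per-interface throughput samples (for example via the REST API or a report) into a CSV; the file path and column names (Description, Timestamp, Mbps) are hypothetical:

# Minimal sketch: group interface samples by their [SID#XXXXX] tag, sum per timestamp,
# then take the 95th percentile of each customer's aggregate series.
$samples = Import-Csv 'C:\exports\interface_samples.csv'   # hypothetical export
$tagged = $samples | Where-Object { $_.Description -match '\[SID#(\d{5})\]' } |
    Select-Object @{ n = 'SID'; e = { [regex]::Match($_.Description, '\[SID#(\d{5})\]').Groups[1].Value } }, Timestamp, Mbps
foreach ($customer in ($tagged | Group-Object SID)) {
    # Sum this customer's interfaces at each timestamp to build the aggregate series...
    $aggregate = $customer.Group | Group-Object Timestamp |
        ForEach-Object { ($_.Group | ForEach-Object { [double]$_.Mbps } | Measure-Object -Sum).Sum } |
        Sort-Object
    # 95th percentile = value at the 95% position of the sorted aggregate samples...
    $index = [Math]::Ceiling($aggregate.Count * 0.95) - 1
    "SID#$($customer.Name): $($aggregate[$index]) Mbps"
}

The key point is to sum the interfaces first and percentile second; taking the 95th percentile per interface and then summing would overstate the aggregate.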
API Groovy HttpPatch?
Is it possible to do an HttpPatch, or to use the PATCH verb, when updating devices? I looked at one of the LM pages on updating devices with the API and, as usual, most of the examples were in Python; PUT did have one Groovy example, but PATCH did not. I've seen mention somewhere that at some point PATCH would be supported, so I'm wondering if it is or not. I ended up getting my script working with a GET (so I don't lose all my device custom settings, etc.), then changing the autoBalancedCollectorGroupId value, and then doing a PUT. It worked, and I didn't lose any of my custom properties. And I have a working filter all set up to run it against a target set of devices. But still, I would rather just patch the fields I want without risk. I'm wondering if that's possible, or if GET/tweak/PUT is still the main go-to?
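A hedged PowerShell sketch of what a PATCH would look like, assuming your API version accepts PATCH on /device/devices/{id} (check your portal's REST documentation first). The signing is the same LMv1 pattern used in the External Alerting script further down this page; the credentials and device ID are placeholders:

# Minimal sketch: PATCH a single field on a device using LMv1 auth.
$accessId = 'xxx'; $accessKey = 'yyy'; $company = 'portalname'; $deviceId = 123
$httpVerb = 'PATCH'
$resourcePath = "/device/devices/$deviceId"
$data = '{"autoBalancedCollectorGroupId":2}'   # only the field(s) to change
$epoch = [Math]::Round((New-TimeSpan -Start (Get-Date '1/1/1970') -End (Get-Date).ToUniversalTime()).TotalMilliseconds)
$requestVars = $httpVerb + $epoch + $data + $resourcePath
$hmac = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key = [Text.Encoding]::UTF8.GetBytes($accessKey)
$hex = ([System.BitConverter]::ToString($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($requestVars))) -replace '-').ToLower()
$signature = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($hex))
$headers = @{ 'Authorization' = "LMv1 $accessId`:$signature`:$epoch"; 'Content-Type' = 'application/json'; 'X-Version' = 3 }
Invoke-RestMethod -Uri "https://$company.logicmonitor.com/santaba/rest$resourcePath" -Method Patch -Headers $headers -Body $data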
LM Logs multiple capture group parsing
OK, this is cool. I have some log data that has structured data in it (some text, then a Python list of strings). I had started building out a parse statement for each member of the list, then thought I'd try just making multiple capture groups and naming multiple variables after the 'as' keyword. Turns out it completely works. It parses each capture group into the corresponding column with a single parse statement. I was halfway through writing out a feature request when I figured I'd give it a try, only to discover that it completely works. Nice job, LM Logs guys.
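For anyone searching later, a hedged illustration of the pattern in an LM Logs query; the log format and field names here are entirely hypothetical:

... | parse /task=(\w+) result=(\w+) duration=(\d+)/ as task, result, duration

Each capture group lands in its own named column (task, result, duration) from the single parse statement.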
Has anybody noticed the flaw in LogSource logic?
So LogSources have a few purposes:

1. They allow you to filter out certain logs. I'm not sure of the use case here, since the LogSource is explicitly including logs. Maybe the point is to allow you to exclude certain logs that have sensitive data. No masking of data, just ignore the whole log. It's not clear if the ignored logs are still in LM Logs or if they get dumped entirely.
2. They allow you to add tags to logs. This is actually pretty cool. You can parse out important info from the log or add device properties to the logs. This means you can add device properties to the log that can be displayed as a separate column, or even filtered upon. Each of our devices has a customer property on it, which means I can add that property to each log and search or filter by customer. Device type, ssh.user, SKU, serial number, IP address; the list is literally endless.
3. They allow you to customize which device the logs get mapped to. You can specify that the incoming log should be matched to a device via IP address, or hostname, or FQDN, or something else. The documentation on this isn't exactly clear, but that actually doesn't matter because...

The LogSources apply to logs from devices that match the LogSource's AppliesTo. Which means the logs need to already be mapped to the device. Then the LogSource can map the logs to a certain device. Do you see the flaw? How is a log going to be processed by a LogSource so it can be properly mapped to a device, if the LogSource won't process the log unless it matches the AppliesTo, which references the device, to which the logs haven't yet been mapped? LogSources should not apply to devices. They should apply to Log Pipelines.
param not found error in datapoint mapping
Hello, I am new to LogicMonitor and need some help. I was working on creating a datasource where the Groovy script returns line-based output like:

QueueDepth.Queue_Name_1=24
QueueDepth.Queue_Name_2=20

The datapoints are created like:

Queue_Name_1 → Key: ##WILDVALUE##.Queue_Name_1
Queue_Name_2 → Key: ##WILDVALUE##.Queue_Name_2

However, when the data gets polled, it is unable to retrieve it:

NaN param not found in output - (method=namevalue, param=.Queue_Name_1)

Can someone please let me know what I am missing here? Regards, Shivam
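Worth noting for anyone hitting the same error: the leading dot in param=.Queue_Name_1 suggests ##WILDVALUE## is resolving to an empty string, i.e. the instance being polled has no wildvalue matching the QueueDepth prefix in the script output. A hedged sketch of how the pieces should line up for a multi-instance script datasource (the instance name is hypothetical):

# Active Discovery output: wildvalue##displayname
QueueDepth##QueueDepth

# Collection output: <instance wildvalue>.<datapoint name>=<value>
QueueDepth.Queue_Name_1=24
QueueDepth.Queue_Name_2=20

The datapoint key ##WILDVALUE##.Queue_Name_1 then expands to QueueDepth.Queue_Name_1 and matches the output. If the datasource is single-instance (no Active Discovery), ##WILDVALUE## has nothing to expand to, so the keys would need to be written literally as QueueDepth.Queue_Name_1 instead.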
New Ubiquiti Modules
Having recently switched my home network from a hodgepodge of APs, switches & an Untangle firewall to all Ubiquiti gear, I wanted to bring more of the data I was seeing in the UniFi web console to LogicMonitor. The core Ubiquiti modules are a good starting point but there's room to grow. Those using Ubiquiti networking gear may know that monitoring their equipment can be a challenge. Ubiquiti makes SNMP available for their access points (APs), switches, and older equipment, but not for their UniFi controllers, which can be particularly frustrating for users of their Dream systems like the popular Dream Machine (UDM) Pro, which functions as a controller, router/firewall, and other systems such as VoIP and camera NVR. Ubiquiti has a robust REST API for their UniFi controllers, but it's undocumented. I started digging into those APIs, watching what calls were made when clicking on various parts of the local UniFi web console, and building modules that provide much greater visibility, including new metrics, alert & event logs (requires LM Logs), topology, properties & more. These are now available in LM Exchange as the "Ubiquiti UniFi" package. I'm also attaching a new dashboard that leverages these new modules.

NOTE: Unlike the core modules, the ones I'm providing here are just community-supported at this time. These have been tested against a UDM Pro, a UDR, a UniFi Switch, and various UniFi APs, but I'm sure there's still room for improvement. If you make useful modifications please post them here for everyone's benefit! Also note that these are currently coded to monitor via local controllers, not the UniFi cloud portal. Some of these modules expand upon the core modules, in which case I've appended "_extended" to the name. All these newer modules will appear under a "Ubiquiti" group, whereas the original core versions are ungrouped. If you're satisfied with the data coming from those extended versions, you can disable the core versions if you'd like to avoid redundant polling.

New modules in the "Ubiquiti UniFi" package...

Datasources:
Ubiquiti_UniFi_AccessPoints_extended: tracks many new metrics such as "satisfaction" scores plus added discovery of more AP properties.
Ubiquiti_UniFi_Alerts_LMLogs: (requires LM Logs) ingests UniFi alerts into LM Logs via the UniFi API.
Ubiquiti_UniFi_Clients_Wired_extended: better identification of clients, including IP.
Ubiquiti_UniFi_Clients_Wireless_extended: much better identification of clients, including IP; plus tracking of "satisfaction" scores.
Ubiquiti_UniFi_Events_LMLogs: (requires LM Logs) ingests UniFi events into LM Logs via the UniFi API.
Ubiquiti_UniFi_Sites_extended: better device classification.
Ubiquiti_UniFi_Switches_extended: added better device classification plus tracking of "satisfaction" scores.
Ubiquiti_UniFi_UDM: monitors metrics from Ubiquiti Dream Machines (UDMs)/UDM Pros and other UniFi controllers.
Ubiquiti_UniFi_UDM_DPI: monitors traffic by high-level deep-packet inspection (DPI) categories.
Ubiquiti_UniFi_UDM_Ports: monitors traffic & status of individual switch ports on UDMs and other UniFi hardware controllers.
Ubiquiti_UniFi_UDM_Storage: discovers & monitors capacity of filesystems on UDMs and other UniFi hardware controllers.

PropertySources:
addCategory_Ubiquiti_UniFi_UDM: adds several new properties for UDMs and other UniFi hardware controllers, including ERIs for topology.
TopologySources:
Ubiquiti_UniFi_Wireless: creates topology maps for clients & managed devices. NOTE: this replaces the core version. The name is a bit of a misnomer as this version also maps wired clients.

ConfigSources:
Ubiquiti_UniFi_Configs: (experimental) grabs the latest config from your UniFi controller. Note: Unlike the other modules in the package, this will require the UniFi user have admin privileges.

To enable...
1. Install the package into your portal by clicking the Exchange tab in the left-hand nav-bar and...
   a. clicking the Public Repository tab,
   b. ensuring "Community" is enabled in the Status filter (I recommend leaving the 'Expand Packages' toggle off to simplify searching),
   c. searching for "ubiquiti" and installing the "Ubiquiti UniFi" package.
2. If you're not already monitoring your local UniFi controller (including UDMs, UDRs, etc.) then follow the instructions in this support article: https://www.logicmonitor.com/support/ubiquiti-unifi-network-monitoring

Once installed you'll see the new modules appear under your UniFi controller(s) in the "Ubiquiti" group to distinguish them from the core modules. If you'd like to install a dashboard that leverages many of these new modules, you can download it here. Once downloaded, in your portal go to Dashboards, expand the dashboard list and click Add > From File.

Some examples of what's new...
Zoomed view of the new dashboard mentioned above...
Additional info about managed devices & connected clients...
System metrics for a UDM Pro...
Some deep packet inspection (DPI) metrics of various high-level traffic categories from a UDM Pro...
Capture of "satisfaction" scores...
Topology mapping (in this case, controller to switches to APs & clients)...
Example event logs being ingested via API...
Is there a Windows equivalent of the 'VMware_vCenter_HostPerformance' DataSource?
I've had a look but I can't see one. So before I create one... In my VMware dashboard I have CPU and memory combined in a single widget (to more closely resemble Windows Task Manager), because both stats are available in this DataSource. I'd like to do the same for a Windows OS widget, but without a DataSource that exposes both CPU and memory we can't. Thanks
How to return text results from an SQL query
Hello, I'm relatively new to LM and have been tasked with setting up a datasource that will show all the values in one column returned by a script in MySQL. These values will all be job names, i.e. report_job, send_job, etc. My understanding is the JDBC collection only returns numerical values? If this is the case, is there any sort of documentation or a "beginners guide" on how I can go about getting something set up where the data returned will be my list of job names based on certain criteria? Thanks in advance. Jonathan
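One common approach is a scripted datasource whose Active Discovery emits each text value as an instance; numeric datapoints can then be collected per instance. A minimal sketch of the discovery side in PowerShell, assuming the job names have already been fetched with whatever MySQL client you prefer (the query and names below are hypothetical):

# Hypothetical: job names previously fetched from MySQL, e.g.
#   SELECT job_name FROM jobs WHERE enabled = 1;
$jobNames = @('report_job', 'send_job')

# LogicMonitor script Active Discovery output format: wildvalue##displayname
foreach ($job in $jobNames) {
    Write-Host "$job##$job"
}

The job names themselves then appear as instance names in the resource tree, which is usually what people want when the "data" is really a list of strings.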
Alert List Dashboard, not a Widget on a Dashboard
Does anyone else have a Dashboard that contains just a single widget, an Alert List widget, to show every alert? It's really annoying having the double scrollbar. We should have a special Dashboard that is an Alert List to avoid this problem.
Using Postman to create multiple Websites via API & CSV?
Hi, I'm testing out creating websites (or resources) via the API. I have a standard POST working in Postman just fine. However, when I then try to do the same thing via a CSV file in the Runner section, I'm getting 401 Authorization errors. When I look at the data that's being sent, it looks the same as what's sent when I run it manually. Is there something special I need to do when running a POST via the Runner vs the manual Send command? Thanks.
Palo Alto application data missing from Netflow
We have been able to get NetFlow data working for a Palo Alto PA-820 firewall, but we are not seeing the application data show up. Does anyone have any suggestions on next steps we could take? Here is what has been done so far:

A NetFlow profile has been configured on the Palo Alto side and assigned to the interface, including selecting the PAN-OS Field Types to get the App-ID and User-ID (https://docs.paloaltonetworks.com/pan-os/11-0/pan-os-admin/monitoring/netflow-monitoring/configure-netflow-exports)

NBAR has been enabled on the collector:

# enable netflow support for NBAR, IPV6 and Multicast fields
netflow.nbar.enabled=true
# enable netflow support for IPV6 fields
netflow.ipv6.enabled=true

The collector version is 34.003. We're seeing everything we expect except the app & systems data on the Traffic tab for the device. Any thoughts on what we might be missing? Thank you. :-)
Example script for automated alert actions via External Alerting
Below is a PowerShell script that's a handy starting point if you want to trigger actions based on specific alert types. In a nutshell, it takes a number of parameters from each alert and has a section of if/else statements where you can specify what to do based on the alert. It leverages LogicMonitor's External Alerting feature, so the script runs local to whatever Collector(s) you configure it on. I included a couple of example actions for pinging a device and for restarting a service. It also includes some handy (optional) functions for logging, as well as attaching a note to the alert in LogicMonitor.

NOTE: this script is provided as-is and you will need to customize it to suit your needs. Automated actions are something that must be approached with careful planning and caution!! LogicMonitor cannot be responsible for inadvertent consequences of using this script.

If you want to try it out, here's how to get started:
1. Update the variables in the appropriate section near the top of the script with optional API credentials and/or log settings. Also change any of the if/elseif statements (starting around line #95) to suit your needs.
2. Save the script onto your Collector server. I named the file "alert_central.ps1" but feel free to call it something else. Make note of its full path (ex: "C:\scripts\alert_central.ps1"). NOTE: it's not recommended to place it under the Collector's agent/lib directory (typically "C:\Program Files (x86)\LogicMonitor\Agent\lib") since that location can be overwritten by collector upgrades.
3. In your LogicMonitor portal go to Settings, then External Alerting.
4. Click the Add button.
5. Set the 'Groups' field as needed to limit the actions to alerts from any appropriate group of resources. (Be sure the group's devices would be reachable from the Collector running the script.)
6. Choose the appropriate Collector in the Collector field.
7. Set Delivery Mechanism to "Script".
8. Enter the name you saved the script as (in step #2) in the Script field (ex. "alert_central.ps1").
9. Paste the following into the Script Command Line field (NOTE: if you add other parameters here then be sure to also add them to the 'Param' line at the top of the script):

"##ALERTID##" "##ALERTSTATUS##" "##LEVEL##" "##HOSTNAME##" "##SYSTEM.SYSNAME##" "##DSNAME##" "##INSTANCE##" "##DATAPOINT##" "##VALUE##" "##ALERTDETAILURL##" "##DPDESCRIPTION##"

Example of the completed Add External Alerting dialog
10. Click Save.

This uses LogicMonitor's External Alerting feature, so there are some things to be aware of:
- Since the script is called for every alert, the section of if/then statements at the bottom of the script is important for filtering which specific alerts you want to take action on.
- The Collector(s) oversee the running of the script, so be conscious of any additional overhead the script actions may cause.
- It could take up to 60 seconds for the script to trigger from the time the alert comes in.
- This example is a PowerShell script, so it's best suited for Windows-based collectors, but it could certainly be re-written as a shell script for Linux-based collectors.

Here's a screenshot of a cleared alert where the script auto-restarted a Windows service and attached a note based on its actions.
Example note the script added to the alert reflecting the automated action that was taken

Below is the PowerShell script:

# ----
# This PowerShell script can be used as a starting template for enabling
# automated remediation for alerts coming from LogicMonitor.
# In LogicMonitor, you can use the External Alerting feature to pass all alerts
# (or for a specific group of resources) to this script.
# ----
# To use this script:
# 1. Update the variables in the appropriate section below with optional API and log settings.
# 2. Drop this script onto your Collector server under the Collector's agent/lib directory.
# 3. In your LogicMonitor portal go to Settings, then click External Alerting.
# 4. Click the Add button.
# 5. Set the 'Groups' field as needed to limit the actions to a specific group of resources.
# 6. Choose the appropriate Collector in the 'Collector' field.
# 7. Set 'Delivery Mechanism' to "Script"
# 8. Enter "alert_central.ps1" in the 'Script' field.
# 9. Paste the following into the 'Script Command Line' field:
#    "##ALERTID##" "##ALERTSTATUS##" "##LEVEL##" "##HOSTNAME##" "##SYSTEM.SYSNAME##" "##DSNAME##" "##INSTANCE##" "##DATAPOINT##" "##VALUE##" "##ALERTDETAILURL##" "##DPDESCRIPTION##"
# 10. Click Save.

# The following line captures alert information passed from LogicMonitor (defined in step #9 above)...
Param ($alertID = "", $alertStatus = "", $severity = "", $hostName = "", $sysName = "", $dsName = "", $instance = "", $datapoint = "", $metricValue = "", $alertURL = "", $dpDescription = "")

###--- SET THE FOLLOWING VARIABLES AS APPROPRIATE ---###

# OPTIONAL: LogicMonitor API info for updating alert notes (the API user will need "Acknowledge" permissions)...
$accessId = ''
$accessKey = ''
$company = ''

# OPTIONAL: Set a filename in the following variable if you want specific alerts logged. (example: "C:\lm_alert_central.log")...
$logFile = ''

# OPTIONAL: Destination for syslog alerts...
$syslogServer = ''

###############################################################
## HELPER FUNCTIONS (you likely won't need to change these) ##

# Function for logging the alert to a local text file if one was specified in the $logFile variable above...
Function LogWrite ($logstring = "") {
    if ($logFile -ne "") {
        $tmpDate = Get-Date -Format "dddd MM/dd/yyyy HH:mm:ss"
        # Using a mutex to handle file locking if multiple instances of this script trigger at once...
        $LogMutex = New-Object System.Threading.Mutex($false, "LogMutex")
        $LogMutex.WaitOne() | out-null
        "$tmpDate, $logstring" | out-file -FilePath $logFile -Append
        $LogMutex.ReleaseMutex() | out-null
    }
}

# Function for attaching a note to the alert...
function AddNoteToAlert ($alertID = "", $note = "") {
    # Only execute this if the appropriate API information has been set above...
    if ($accessId -ne '' -and $accessKey -ne '' -and $company -ne '') {
        # Encode the note...
        $encodedNote = $note | ConvertTo-Json
        # API and URL request details...
        $httpVerb = 'POST'
        $resourcePath = '/alert/alerts/' + $alertID + '/note'
        $url = 'https://' + $company + '.logicmonitor.com/santaba/rest' + $resourcePath
        $data = '{"ackComment":' + $encodedNote + '}'
        # Get current time in milliseconds...
        $epoch = [Math]::Round((New-TimeSpan -start (Get-Date -Date "1/1/1970") -end (Get-Date).ToUniversalTime()).TotalMilliseconds)
        # Concatenate general request details...
        $requestVars_00 = $httpVerb + $epoch + $data + $resourcePath
        # Construct signature...
        $hmac = New-Object System.Security.Cryptography.HMACSHA256
        $hmac.Key = [Text.Encoding]::UTF8.GetBytes($accessKey)
        $signatureBytes = $hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($requestVars_00))
        $signatureHex = [System.BitConverter]::ToString($signatureBytes) -replace '-'
        $signature = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($signatureHex.ToLower()))
        # Construct headers...
        $auth = 'LMv1 ' + $accessId + ':' + $signature + ':' + $epoch
        $headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
        $headers.Add("Authorization",$auth)
        $headers.Add("Content-Type",'application/json')
        # Make request to add note..
        $response = Invoke-RestMethod -Uri $url -Method $httpVerb -Body $data -Header $headers
        # Change the following if you want to capture API errors somewhere...
        # LogWrite "API call response: $response"
    }
}

function SendTo-SysLog ($IP = "", $Facility = "local7", $Severity = "notice", $Content = "Your payload...", $SourceHostname = $env:computername, $Tag = "LogicMonitor", $Port = 514) {
    switch -regex ($Facility) {
        'kern' {$Facility = 0 * 8 ; break }
        'user' {$Facility = 1 * 8 ; break }
        'mail' {$Facility = 2 * 8 ; break }
        'system' {$Facility = 3 * 8 ; break }
        'auth' {$Facility = 4 * 8 ; break }
        'syslog' {$Facility = 5 * 8 ; break }
        'lpr' {$Facility = 6 * 8 ; break }
        'news' {$Facility = 7 * 8 ; break }
        'uucp' {$Facility = 8 * 8 ; break }
        'cron' {$Facility = 9 * 8 ; break }
        'authpriv' {$Facility = 10 * 8 ; break }
        'ftp' {$Facility = 11 * 8 ; break }
        'ntp' {$Facility = 12 * 8 ; break }
        'logaudit' {$Facility = 13 * 8 ; break }
        'logalert' {$Facility = 14 * 8 ; break }
        'clock' {$Facility = 15 * 8 ; break }
        'local0' {$Facility = 16 * 8 ; break }
        'local1' {$Facility = 17 * 8 ; break }
        'local2' {$Facility = 18 * 8 ; break }
        'local3' {$Facility = 19 * 8 ; break }
        'local4' {$Facility = 20 * 8 ; break }
        'local5' {$Facility = 21 * 8 ; break }
        'local6' {$Facility = 22 * 8 ; break }
        'local7' {$Facility = 23 * 8 ; break }
        default {$Facility = 23 * 8 } #Default is local7
    }

    switch -regex ($Severity) {
        '^(ac|up)' {$Severity = 1 ; break } # LogicMonitor "active", "ack" or "update"
        '^em' {$Severity = 0 ; break } #Emergency
        '^a' {$Severity = 1 ; break } #Alert
        '^c' {$Severity = 2 ; break } #Critical
        '^er' {$Severity = 3 ; break } #Error
        '^w' {$Severity = 4 ; break } #Warning
        '^n' {$Severity = 5 ; break } #Notice
        '^i' {$Severity = 6 ; break } #Informational
        '^d' {$Severity = 7 ; break } #Debug
        default {$Severity = 5 } #Default is Notice
    }

    $pri = "<" + ($Facility + $Severity) + ">"

    # Note that the timestamp is local time on the originating computer, not UTC.
    if ($(get-date).day -lt 10) { $timestamp = $(get-date).tostring("MMM d HH:mm:ss") } else { $timestamp = $(get-date).tostring("MMM dd HH:mm:ss") }

    # Hostname does not have to be in lowercase, and it shouldn't have spaces anyway, but lowercase is more traditional.
    # The name should be the simple hostname, not a fully-qualified domain name, but the script doesn't enforce this.
    $header = $timestamp + " " + $sourcehostname.tolower().replace(" ","").trim() + " "

    # Cannot have non-alphanumerics in the TAG field or have it be longer than 32 characters.
    if ($tag -match '[^a-z0-9]') { $tag = $tag -replace '[^a-z0-9]','' } # Simply delete the non-alphanumerics
    if ($tag.length -gt 32) { $tag = $tag.substring(0,31) } # and truncate at 32 characters.

    $msg = $pri + $header + $tag + ": " + $content

    # Convert message to array of ASCII bytes.
    $bytearray = $([System.Text.Encoding]::ASCII).getbytes($msg)

    # RFC3164 Section 4.1: "The total length of the packet MUST be 1024 bytes or less."
    # "Packet" is not "PRI + HEADER + MSG", and IP header = 20, UDP header = 8, hence:
    if ($bytearray.count -gt 996) { $bytearray = $bytearray[0..995] }

    # Send the message...
    $UdpClient = New-Object System.Net.Sockets.UdpClient
    $UdpClient.Connect($IP,$Port)
    $UdpClient.Send($ByteArray, $ByteArray.length) | out-null
}

# Empty placeholder for capturing any note we might want to attach back to the alert...
$alertNote = ""

# Placeholder for whether we want to capture an alert in our log. Set to true if you want to log everything.
$logThis = $false

###############################################################
## CUSTOMIZE THE FOLLOWING AS NEEDED TO HANDLE SPECIFIC ALERTS FROM LOGICMONITOR...

# Actions to take if the alert is new or re-opened (note: status will be "active" or "clear")...
if ($alertStatus -eq 'active') {
    # Perform actions based on the type of alert...

    # Ping alerts...
    if ($dsName -eq 'Ping' -and $datapoint -eq 'PingLossPercent') {
        # Insert action to take if a device becomes unpingable. In this example we'll do a verification ping & capture the output...
        $job = ping -n 4 $sysName
        # Restore line feeds to the output...
        $job = [string]::join("`n", $job)
        # Add ping results as a note on the alert...
        $alertNote = "Automation script output: $job"
        # Log the alert...
        $logThis = $true

    # Restart specific Windows services...
    } elseif ($dsName -eq 'WinService-' -and $datapoint -eq 'State') {
        # List of Windows Services to match against. Only if one of the following are alerting will we try to restart it...
        $serviceList = @("Print Spooler","Service 2")
        # Note: The PowerShell "-Contains" operator is exact in its matching. Replace it with "-Match" for a looser match.
        if ($serviceList -Contains $instance) {
            # Get an object reference to the Windows service...
            $tmpService = Get-Service -DisplayName "$instance" -ComputerName $sysName
            # Only trigger if the service is still stopped...
            if ($tmpService.Status -eq "Stopped") {
                # Start the service...
                $tmpService | Set-Service -Status Running
                # Capture the current state of the service as a note on the alert...
                $alertNote = "Attempted to auto-restart the service. Its new status is " + $tmpService.Status + "."
            }
            # Log the alert...
            $logThis = $true
        }

    # Actions to take if a website stops responding...
    } elseif ($dsName -eq 'HTTPS-' -and $datapoint -eq 'CantConnect') {
        # Insert action here to take if there's a website error...
        # Example of sending a syslog message to an external server...
        $syslogMessage = "AlertID:$alertID,Host:$sysName,AlertStatus:$alertStatus,LogicModule:$dsName,Instance:$instance,Datapoint:$datapoint,Value:$metricValue,AlertDescription:$dpDescription"
        SendTo-SysLog $syslogServer "" $severity $syslogMessage $hostName "" ""
        # Attach a note to the LogicMonitor alert...
        $alertNote = "Sent syslog message to " + $syslogServer
        # Log the alert...
        $logThis = $true
    }
}

###############################################################
## Final functions for backfilling notes and/or logging as needed
## (you likely won't need to change these)

# Section that updates the LogicMonitor alert if 'alertNote' is not empty...
if ($alertNote -ne "") {
    AddNoteToAlert $alertID $alertNote
}

if ($logThis) {
    # Log the alert (only triggers if a filename is given in the $logFile variable near the top of this script)...
    LogWrite "$alertID,$alertStatus,$severity,$hostName,$sysName,$dsName,$instance,$datapoint,$metricValue,$alertURL,$dpDescription"
}
Anyone know of a way to monitor for Snapshot age on a Hyper-V machine?
We have some checks that monitor our VMware system for snapshots. Once they are over 72 hours old, we get an alert. I haven't been able to find a way to do the same thing for our Hyper-V servers. Does anyone happen to know if that's possible? I didn't see anything in the Exchange, but just thought I'd check in case someone knew of anything. I can pull the data with PowerShell via the Get-VM | Get-VMSnapshot command. Not sure if that's usable in LM somehow. Thanks.
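That PowerShell output can feed a scripted datasource: discover each snapshot as an instance, then collect its age in hours and put the alert threshold (e.g., > 72) on the datapoint. A minimal sketch of the collection side, assuming the collector host has the Hyper-V PowerShell module and rights on the target (##SYSTEM.SYSNAME## is the standard token for the monitored host):

# Minimal collection sketch: one AgeHours datapoint per snapshot instance.
$snapshots = Get-VM -ComputerName '##SYSTEM.SYSNAME##' | Get-VMSnapshot
foreach ($snap in $snapshots) {
    $ageHours = [Math]::Round(((Get-Date) - $snap.CreationTime).TotalHours, 1)
    # Key each line as <wildvalue>.<datapoint>, where the wildvalue matches what discovery emitted...
    Write-Host "$($snap.VMName)_$($snap.Name).AgeHours=$ageHours"
}

The discovery script would emit the same VMName_SnapshotName wildvalues in wildvalue##displayname form so the keys line up.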
Oracle jdbc JAR file update
LogicMonitor's collector utilizes an outdated version of the Oracle JDBC jar file. It's essential to upgrade to the most recent version available in the Maven repository to take advantage of new secure database connection types. However, users should note a significant change in behavior with the new jar: while the old version automatically closed abandoned Oracle database connections, the new version does not, potentially leading to an excessive number of open connections. This surge in open connections can overload and crash an Oracle server where connections aren't limited by user. Therefore, clients must either ensure that customizations explicitly close database connections or adjust their server settings to impose limits on the number of concurrent open connections. All of the newest LogicMonitor datasources properly close connections, but some of the older modules did not. LogicMonitor has created a module to test for this problem and alert if it occurs: Oracle_Database_MonitorUser will keep track of the number of connections in use by the monitoring user and alert if the number of connections is too high. This update is scheduled for collector 35.400. Make sure this module is installed before upgrading to collector 35.400, and monitor your database connections before rolling this out to general release.
Issue in capturing historical data
Hi, I have an issue capturing historical data. I am getting this error:

Response Status: 200
Response Body: b'{"data":null,"errmsg":"Invalid time range. Start time must be before current time","status":1007}'
No 'values' field found in the 'data' section of the JSON response.
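One hedged guess worth checking: if the start/end parameters were built in milliseconds where the endpoint expects epoch seconds, the start time lands far in the future, which would produce exactly this message. A quick sanity check in PowerShell:

# Epoch in seconds vs. milliseconds: if your 'start' value has 13 digits,
# it's milliseconds and may be interpreted as a date far in the future.
$epochSeconds = [Math]::Round((New-TimeSpan -Start (Get-Date '1/1/1970') -End (Get-Date).ToUniversalTime()).TotalSeconds)
$epochMillis  = [Math]::Round((New-TimeSpan -Start (Get-Date '1/1/1970') -End (Get-Date).ToUniversalTime()).TotalMilliseconds)
"seconds: $epochSeconds (10 digits), milliseconds: $epochMillis (13 digits)"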
Meraki Cellular Gateways and Sensors
We're planning R&D that "aims" to monitor Meraki MG Cellular Gateways and MT Sensors and to give them their own Topology Map graphics. Please DM me if you use either of these types of Meraki devices and would like to participate in the R&D process.
Can I set a different threshold for C drive from D or E drive space?
Hi, our Windows servers have the standard alerting based on Volume Capacity. When Used Percent gets to 90% we get a warning, at 95% an error, etc. How could we change this so that the C drive uses a different set of thresholds than the D drive? Can we do that within the basic DataSource, or would we have to have one DataSource that just checks C and one that just checks D? I don't want to clutter everything up, but if this is easy to do it would come in handy. Thanks.
Cisco Meraki Environmental Sensor Monitoring
Today, new modules are available in LM Exchange to monitor Cisco Meraki MT-Series Environmental Sensors. These fully support Resource Explorer and include a new (IoT) Sensor Topology Graphic. If you are subscribed, these devices count toward the Wireless Access Points SKU.
Any easy way to delete a Step from a website check?
Hi, I have a couple hundred websites where I need to delete the 2nd Step ("name": "__step1"). There doesn't seem to be a built-in way to work with Steps directly from the API. I'd have to read in the site, then figure out how to remove the info for the 2nd Step, then write it back, or something like that. Does anyone know how hard this might be to do? I'm using PowerShell and can pull the data in just fine. It's the editing and rewriting it back up that I'm not sure how to do. If it was just a couple of sites I wouldn't even worry about the API. But since it's 250+, I thought it might help. But if it's going to be really complicated, it might be faster to just brute-force through them manually. Thanks.
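A hedged sketch of the read-modify-write loop, assuming the website object exposes its steps as a steps array on /website/websites/{id}. Invoke-LMRequest below is a hypothetical stand-in for whatever signed GET/PUT wrapper you already have (the LMv1 signing shown elsewhere on this page works):

# Hypothetical loop: strip the step named '__step1' from each website and PUT the object back.
foreach ($id in $websiteIds) {
    # Read the full website object...
    $site = Invoke-LMRequest -Method GET -Path "/website/websites/$id"   # hypothetical helper
    # Drop the unwanted step and keep the rest...
    $site.steps = @($site.steps | Where-Object { $_.name -ne '__step1' })
    # Write the whole object back...
    Invoke-LMRequest -Method PUT -Path "/website/websites/$id" -Body ($site | ConvertTo-Json -Depth 10)
}

Since PUT replaces the whole object, test against one site first to confirm no other fields get dropped in the round-trip.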
Can I monitor vCenter tags and create an alert if a computer doesn't have one?
Hi, we use vCenter to manage our VMs. We have the hosts in LM. We currently have a process where we get an email every morning listing VMs that don't have any tags. We use tags to manage backup schedules and such, so not having any tags is bad. Anyway, I'm wondering if that's something we could use LM to monitor. I don't need to confirm what the tag is; I just need to know if any VM doesn't have any tags at all. Is that something we can do with the built-in checks LM does, or is that something that would have to be created by hand? Thanks.
Letting non-admin users make and manage their own dynamic groups
We've got an environment that is maturing into a space where our SMEs now want to take some ownership of their device organization and alert tuning. Part of that has butted up against dynamic group membership, but unfortunately (and understandably) that can only be done by people who can manage all groups, since obviously you could just make a dynamic group under a group to which you have access whose AppliesTo is just "true()" and thus gain access to every device in the org. We have been considering ways to facilitate access for these SMEs so that they can manage their own device groups, without giving them so much power that they could accidentally delete or silence a bunch of devices they don't own through a mistake in their AppliesTo logic. We'd really prefer not to just give these SMEs blanket Manage access, but we'd also like to avoid a paradigm where they have to come to us to have every single dynamic group created for them. We've been considering Terraform, granting each team access to their own static group and letting them make subgroups inside of it, and adding what is effectively an AppliesTo prepend that is "belongsToThisTeam() &&" + "whatever their AppliesTo is." This, unfortunately, would require them to know about and remember to use whatever custom module we'd build to add that prepend. Furthermore, the way parent groups are set up, we'd have little to no way to restrict them to putting their new groups under the ones to which they rightfully have access. Has anyone else come up against these hurdles and figured out a way, or done some thinking on how to facilitate dynamic group management for specific teams without giving them the keys to the kingdom?
SNMP collector performance: SHA/AES vs MD5/DES
Is there a significant difference in collector processing load or overhead that would impact performance when switching from MD5/DES to SHA/AES? https://www.logicmonitor.com/support/collectors/collector-overview/collector-capacity I was looking at the collector capacity page, and while obviously v3 is more of a burden on the collector than v2, I don't see anything about MD5/DES vs SHA/AES. I'm wondering if we can simply change a collector's SNMP properties and assume a fairly (but not overly) loaded collector can handle this? Or is that too big a chance to take? Anyone have any experience with this?
LogicMonitor ServiceNow integration issue
Hi, I have followed the documentation for setting up LogicMonitor with ServiceNow, but when I test the alert delivery from the alert routing page, I get an incident created in ServiceNow without any of the details that should be included in the HTTP payload. Also, the alert does not show up in the integration log. Anyone have any suggestions or experienced similar issues?
Time-based escalation chains with 8-5 M-F and the inverse?
I'm trying to set up an escalation chain for a group where it follows one behavior from 8-5 Monday through Friday, but within the same escalation chain we want anything OUTSIDE that time to follow another behavior. Unfortunately, I see three problems:

1. It looks like I'm going to have to make four different subchains: a) weekends, b) midnight to 8 am, c) 8 am to 5 pm, and d) 5 pm to midnight.
2. It looks like LM won't let me select 24:00 to make sure the gap is closed, so I'm worried about alerts that happen between 23:45 and 24:00 (00:00 the next day).
3. It also bugs me that, technically, the way I'm doing it below, I have two subchains that apply to the 8:00 minute and the 17:00 minute (5 pm), as each of those moments is included in subchains that overlap on that one minute.

Does anyone have a better solution to this? What do I do about #2 above in particular? Thanks!
Filter widget based on datapoint values
Hi, I suppose this follows on from my other message about combining CPU and memory in the same table widget: can these widgets be filtered (not dynamically/on demand, but statically)? Take this example: can I have another table that only shows, for example, resources whose CPU OR Memory OR Uptime is beyond a certain threshold, e.g. 'where CPU > X OR Memory > Y OR Uptime > Z'? Then, expanding on that, I'm aiming for a dedicated dashboard highlighting 'at risk' or 'concerning' resources, so ideally this filter could be done at the dashboard level (e.g. with a 'filter' token), resulting in widgets on that dashboard inheriting the filter. Thanks
invalid Output format - root element of output JSON should be "data"
What does the below message mean, given the following setting?

[MSG] [WARN] [collector-task-thread-collector.batchscript-batchscript-28-1::collector.batchscript:Task:4218:XXX.XXX.XXX.XXX:batchscript:SAP Cloud Connector Memory Status::18] [BatchScriptSlaveTask._prepareOutput:155] invalid Output format - root element of output JSON should be "data", CONTEXT= The output is {"physicalKB":{"total":16469532,"CloudConnector":959296,"others":6477020,"free":9033216},"virtualKB":{"total":41635356,"CloudConnector":4639480,"others":5910204,"free":31085672},"cloudConnectorHeapKB":{"total":4128768,"used":209168,"free":3919600},"version":1}
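The warning is literal: for batchscript collection with JSON output, the collector expects the top-level key to be "data", with instance wildvalues nested beneath it. A hedged sketch of the expected shape for the output above, assuming physicalKB, virtualKB, etc. are the instance wildvalues:

{
  "data": {
    "physicalKB":  {"total": 16469532, "CloudConnector": 959296, "others": 6477020, "free": 9033216},
    "virtualKB":   {"total": 41635356, "CloudConnector": 4639480, "others": 5910204, "free": 31085672},
    "cloudConnectorHeapKB": {"total": 4128768, "used": 209168, "free": 3919600}
  }
}

In other words, the script's existing output just needs to be wrapped in a {"data": ...} envelope before printing.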
Monitoring DAS Environment and Creating Custom Data Sources in Logic Monitor
Hi, I am currently working on monitoring the DAS environment within LogicMonitor, and we've obtained a MIB file from the product team. We've incorporated a single device for preliminary testing. Could someone assist me in developing a custom datasource for one or two specific datapoints? I've experimented with the OID for certain aspects such as ChannelStatus, and I'm receiving responses when conducting SNMP Walk and SNMP Get operations.
Access to Group or Device properties from a text widget
I'm looking for a way to display custom properties from an arbitrary group. We have a dashboard that shows widgets from a specific client group using the group/device pulldown text widget built by @Kevin Ford: Dynamic Dashboards | Community (logicmonitor.com). I'm trying to figure out how to get the custom properties set at the group level to display info about that particular client for our staff to have easy reference to. Something like ##defaultResourceGroup##/##example.property##. We have our clients' groups tagged with specific contact and support links (KB, Azure Portal, etc.) that would be useful to have direct access to.
How to set up Splunk with multiple IIQ SailPoint environments with Splunk
Observing the code in the Python scripts, it appears that the Splunk TA does not support multiple environments, despite what the Splunk documentation on this website claims. SailPoint IIQ version: 8.1p3; Splunk version: 8.0.9; TA version: 2.0.5. Upon examining the Python code known as the Splunk Plugin, which allows Splunk to read data from SailPoint, I discovered the following details: The plugin directory is Splunk/etc/apps/Splunk_TA_sailpoint, from which the plugin gets its files. The file that drew my attention was Splunk/etc/apps/Splunk_TA_sailpoint/bin/input_module_sailpoint_identityiq_auditevents.py.
How to set up Splunk with multiple IIQ SailPoint environments with Splunk TA configuration using: SailPoint Adaptive Response
I noticed that the Splunk documentation on this site says that this should support multiple environments; looking at the code in the Python scripts, though, it looks like it doesn't. SailPoint IIQ version: 8.1p3; Splunk version: 8.0.9; TA version: 2.0.5. After reviewing the Splunk Plugin code (the Python code which Splunk uses to read data from SailPoint), I noticed the following bits of information: Splunk/etc/apps/Splunk_TA_sailpoint is the plugin directory from which the plugin derives its files. Splunk/etc/apps/Splunk_TA_sailpoint/bin/input_module_sailpoint_identityiq_auditevents.py is the file in question that caught my attention.
Graphs Device View vs Overview default?
As a portal admin on my site, I admit I very infrequently look at the Graphs tabs; I'm normally digging about in raw data or alert rules or datasource definitions, etc. It was brought to my attention that a customer is unhappy that when they go to their Graphs tab, they get some device view by default, and not the overview. They hate having to switch it. When did this start? I guess I should read the release notes closer. Is there a way to change it back? Or are we stuck with the "Device" view being default?
BigNumber Widget or something similar that will use the Dashboard Timescale?
We had one of our Devs set up a self-service daemon to provision some "things" and we are querying those numbers with an API. We currently have some hard-coded (app side) data for Successful/Failed per hour and per 24 hours, and we are using the Big Number widgets. We also have some graphs that accurately display this same data on a time scale based on the dashboard time setting. Now for my question: for the life of me, I cannot figure out a way to get the Dashboard Time Scale setting to work with anything else outside of a graph. You'd think using the SUM or MAX aggregate function would work, but it doesn't. Maybe it's the way the data is coming over from this custom app. The people looking at this data are complaining that they don't want to do the math off the graph to figure out how many successes/failures happen in a given time period.
Plugin update for LogicMonitor to ServiceNow Integration
Hello, our ServiceNow admin recently upgraded our ServiceNow dev instance to the newest release (Washington DC): https://docs.servicenow.com/bundle/washingtondc-release-notes/page/release-notes/family-release-note... While attempting to retest the LogicMonitor to ServiceNow integration, we learned the LogicMonitor plugin appears to be incompatible with the latest version. Does anyone know if this will be addressed? Thanks.
Trouble Authenticating to LogicMonitor REST API from ServiceNow
I am trying to convert a PowerShell script to run from ServiceNow and found "Using REST API from ServiceNow Scripting" from two years ago. Since then, ServiceNow has added GlideDigest() which, among other things, should allow me to create a message digest from a string using the SHA256 algorithm, with the result being a Base64 string. However, I am getting back:

"{"errorMessage":"Authentication failed","errorCode":1401,"errorDetail":null}"

The PowerShell script looks like this:

[string]$sandboxaccessid = 'abc'
[securestring]$sandboxaccesskey = '123' | ConvertTo-SecureString -AsPlainText -Force
[string]$AccountName = 'portalname'
[int]$Id = 2
$httpVerb = "GET"
$resourcePath = "/device/devices/$id"
$AllProtocols = [System.Net.SecurityProtocolType]'Tls11,Tls12'
[System.Net.ServicePointManager]::SecurityProtocol = $AllProtocols
$time = (Get-Date).ToUniversalTime()
$epoch = [Math]::Round((New-TimeSpan -Start (Get-Date -Date "1/1/1970") -End $time).TotalMilliseconds)
$requestVars = $httpVerb + $epoch + $resourcePath
$hmac = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key = [Text.Encoding]::UTF8.GetBytes([System.Runtime.InteropServices.Marshal]::PtrToStringAuto(([System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($sandboxaccesskey))))
$signatureBytes = $hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($requestVars))
$signatureHex = [System.BitConverter]::ToString($signatureBytes) -replace '-'
$signature = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($signatureHex.ToLower()))
$headers = @{
    "Authorization" = "LMv1 $sandboxaccessid`:$signature`:$epoch"
    "Content-Type" = "application/json"
    "X-Version" = 3
}
$url = "https://$AccountName.logicmonitor.com/santaba/rest$resourcePath"
$response = Invoke-RestMethod -Uri $url -Method $httpVerb -Header $headers -ErrorAction Stop
$response

And the JavaScript looks like this (at the moment):

var ACCESS_ID = 'abc';
var ACCESS_KEY = '123';
var ACCOUNT_NAME = 'portalname';
var resourcePath = '/device/devices';
var time = new Date().getTime();
var epoch = Math.round((time - new Date("1/1/1970").getTime()));
var id = 2;
var requestVars = 'GET' + epoch + resourcePath;
// Compute the HMACSHA256 hash using GlideDigest
var gd = new GlideDigest();
var signature = gd.getSHA256Base64(ACCESS_KEY, requestVars);
// Remove hyphens from the signature
signature = signature.replace(/-/g, '');
var token = 'LMv1 ' + ACCESS_ID + ':' + signature + ':' + epoch;
var httpRequest = new GlideHTTPRequest('https://' + ACCOUNT_NAME + '.logicmonitor.com/santaba/rest/device/devices/' + id);
httpRequest.addHeader('Content-Type', 'application/json');
httpRequest.addHeader('Authorization', token);
httpRequest.addHeader('X-Version', '3');
var response = httpRequest.get();
gs.info('Response status code: ' + response.getStatusCode());
gs.info('Devices = ' + response.body);

Anyone know how to make this authentication work, without the custom convertByteArrayToHex() utility?
Table style widget + values from PropertySource - possible?
Hi, I'm just fiddling now, but I had a thought about exposing Windows resources' Windows Update info. The only way I've been able to do this is with an Alert style widget, but it doesn't look brilliant (especially the column names). I have a custom PropertySource that retrieves that info. What I would like to do is use the standard Table style widget instead, but that seems only possible with a DataSource, not a PropertySource. And then have the 'Show colour bars' option turned on and colourise the auto.os.latest_update.result_code and auto.os.latest_update.error_code columns when they meet certain criteria. Also, if the install date is over six weeks ago then it's likely to not have the latest update. And then additionally, show any matching resources that have those two columns as specific values in the normal Alerts List widget, e.g. 'Windows Update failed installation' or 'Windows Update succeeded with errors', etc. Thanks
I need to alert for 20 consecutive failed logon attempts within a 30 minute time period
We have a team that would like to get alerted on 20 consecutive failed logon attempts from a single account on any of our SQL servers in a 30 minute time frame. I started out using the eventsource for errors in the security event log and set it to watch for EventID 4625. I am not very savvy with Groovy and am now looking at setting this up with a PowerShell script via a new DataSource, but I am having some trouble with it. If anyone has any ideas on how to best script this, I would greatly appreciate the help!
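A hedged PowerShell starting point for the datasource side. Note this counts 4625 failures per account in the trailing 30 minutes rather than enforcing strict consecutiveness, which is usually close enough for alerting; it assumes the collector account can read the Security log on the target:

# Count failed logons (Event ID 4625) per account over the last 30 minutes.
$events = Get-WinEvent -ComputerName '##SYSTEM.SYSNAME##' -FilterHashtable @{
    LogName   = 'Security'
    Id        = 4625
    StartTime = (Get-Date).AddMinutes(-30)
} -ErrorAction SilentlyContinue

# Group by the target account; for 4625 events, TargetUserName is typically Properties index 5.
$byAccount = $events | Group-Object { $_.Properties[5].Value }
foreach ($acct in $byAccount) {
    # One datapoint line per account; set the alert threshold (>= 20) on FailedLogons...
    Write-Host "$($acct.Name).FailedLogons=$($acct.Count)"
}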
'Instance not found' output manipulation
Hi, is it possible to have 'Instance not found' be shown as something else? I know I can do some logical stuff with a virtual datapoint, but I'm not sure what syntax to use in the expression specifically for 'Instance not found'; this doesn't appear to be treated the same as NaN using 'un'. Thanks
Domain joined collector polling non-domain joined device
Hi, I've been trying to resolve this for a few days now, but no luck from an LM point of view. The collector is domain joined; the Veeam server is not domain joined. I've been through all the config guides for WMI, DCOM, PSRemoting, etc., and from the collector itself I can now successfully get to the Veeam host using wbemtest with the recurse option, and with Enter-PSSession. However, LM reports an error. I've run the commands to check the WMI repository state and that comes back as OK. The Add Device Wizard did initially prompt for creds (but that's because of my overall hierarchy setup, so is expected), but on providing the creds for the local account it then succeeded with no other errors. Then, when it tries Active Discovery, that error pops up. What else am I missing? This specific code doesn't have much info available from a Google search; if I include LogicMonitor in the search I get one result, and that's the general LM home page "get started" result. Thanks
PATCH website custom property via API
Anyone have some PowerShell that will update the custom properties of a Website (ping check)? I have been fiddling around and the following code will update properties like name or host (for example), but will not add a custom property. The code runs without error and returns the website as expected, but the change is not reflected in the portal.

[string]$AccessId = ''
[securestring]$AccessKey = ('' | ConvertTo-SecureString -AsPlainText -Force)
[string]$AccountName = ''
[int]$Id = '' #website id
[hashtable]$PropertyTable = @{
    'propertyName' = 'test'
}
[string]$httpVerb = 'PATCH'
[string]$resourcePath = "/website/websites"
$AllProtocols = [System.Net.SecurityProtocolType]'Tls11,Tls12'
[System.Net.ServicePointManager]::SecurityProtocol = $AllProtocols
$resourcePath += "/$Id"
$data = $PropertyTable | ConvertTo-Json -Depth 6
$enc = [System.Text.Encoding]::UTF8
$encdata = $enc.GetBytes($data)
$url = "https://$AccountName.logicmonitor.com/santaba/rest$resourcePath"
$epoch = [Math]::Round((New-TimeSpan -start (Get-Date -Date "1/1/1970") -end (Get-Date).ToUniversalTime()).TotalMilliseconds)
$requestVars = $httpVerb + $epoch + $data + $resourcePath
$hmac = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key = [Text.Encoding]::UTF8.GetBytes([System.Runtime.InteropServices.Marshal]::PtrToStringAuto(([System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($AccessKey))))
$signatureBytes = $hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($requestVars))
$signatureHex = [System.BitConverter]::ToString($signatureBytes) -replace '-'
$signature = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($signatureHex.ToLower()))
$headers = @{
    "Authorization" = "LMv1 $accessId`:$signature`:$epoch"
    "Content-Type" = "application/json"
    "X-Version" = 3
}
$response = Invoke-RestMethod -Uri $url -Method $httpVerb -Header $headers -Body $encdata -ErrorAction Stop
$response
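A hedged guess at the cause: website custom properties are typically carried in the object's properties array (a list of name/value pairs), so a top-level "propertyName" key may simply be ignored. Under that assumption, the body would be built like this instead:

# Hypothetical fix: send custom properties as the 'properties' array the API expects...
[hashtable]$PropertyTable = @{
    'properties' = @(
        @{ 'name' = 'propertyName'; 'value' = 'test' }
    )
}
# $data then serializes to {"properties":[{"name":"propertyName","value":"test"}]}
$data = $PropertyTable | ConvertTo-Json -Depth 6

Also be aware that a PATCH of an array field may replace the whole array, so include any existing custom properties you want to keep.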
New Modules for APC Netshelter Rack PDU Advanced 10000 Series
I'm pleased to announce that we just launched new modules for the APC Netshelter Rack PDU Advanced 10000 Series power distribution units that now include per-outlet power measurements. Vendor reference: https://www.apc.com/us/en/product-range/86360147-netshelter-rack-pdu-advanced/
New Alert Threshold Options In Portal 200
We were reading through the new alert threshold options coming up and we are very excited. This will solve a ton of our problems and allow us to be more agile for our clients. Being able to control the alert interval at various levels now, and controlling "no data", is awesome! One issue, though: the continued bad design around UI v4. I use 4K monitors at 100%. Why in the world is the threshold adjustment so tiny and hard to work with? Now, truthfully, I don't always go around maximizing my browser windows, but still. This feels like so much wasted space.
SQL Server DataSources - Updates
Hi, I did a DataSource update the other day and it looks like the built-in SQL Server dashboard is now broken, because the updated DataSources now have one of these:

hasCategory("MSSQL") && !hasCategory("SQL_Node")
hasCategory("MSSQL") && !hasCategory("WSFC_VNN")

I'm guessing the !hasCategory bit is new/changed, because previously my SQL AG servers were shown in the built-in dashboards and now they're not. Only non-AG SQL servers are shown. Is this intentional? Thanks