Recent Discussions
Arista Campus Switch Power Supply State
We recently discovered that Arista campus switches are either just not populating the OID for power supply states, or are using a different one, so in my API-first approach to monitoring I decided to use the Arista EOS API to grab the power supply data. Below are the discovery and collection scripts along with some of the key fields for a DataSource. You will notice I identify the Arista campus switches by model number using an if-not statement, "if ( !(model.startsWith("CCS-")))", which drops all the DC switches out of the script. As always, I appreciate any feedback and ways to improve on these things. Have a great day.

Note: This is obviously shared on a best-efforts basis, but we are using it across all of our campus switches and it works great (we found three switches with PSU issues on top of the one we already knew had a failed PSU).

Applies to: hasCategory("Arista")

Description: This checks the state of each power supply and returns the following:
1 = Ok
0 = Down
This replaces the Arista SNMP PSU DataSource, which returns a null value for the campus switches.

Discovery Script:

/* Script Name: Campus Power Supply Discovery */
import com.santaba.agent.groovyapi.http.*;
import com.santaba.agent.util.Settings
import groovy.json.JsonOutput
import groovy.json.JsonSlurper
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// Gather host properties
username = hostProps.get("ssh.user");
password = hostProps.get("ssh.pass");
host = hostProps.get("auto.network.names");
model = hostProps.get("auto.endpoint.model")
vendor = hostProps.get("auto.endpoint.manufacturer")

// Only discover the device if the vendor is Arista Networks
if ( !(vendor == "Arista Networks")) {
    println "Do not need to discover as not an Arista Device"
    return(0)
}

// Only continue if the model is a campus (CCS-) switch; this drops the DC switches
if ( !(model.startsWith("CCS-"))) {
    println "Not a Campus Switch"
    return(0)
}

// Build the authentication JSON to send to the switch
def authJson = JsonOutput.toJson([username: username, password: password]);

// Build the login URL
url = "https://" + host + "/login";

// Make the call
httpClient = Client.open(host, 443);
def response = httpClient.post(url, authJson, ["Content-Type": "application/json"]);
if ( !(httpClient.getStatusCode() =~ /200/)) {
    println "Failed to retrieve data " + httpClient.getStatusCode()
    return(1)
}

// Get the Set-Cookie header so we can extract the session string
String sessionData = httpClient.getHeader("Set-Cookie");
// Remove everything from the first semicolon onwards, leaving "Session=<value>"
def firstPass = sessionData.split(';')[0];
// Remove everything before and including the "=", leaving just the session value
def session = firstPass.split('=')[1];

// Build the command JSON, URL and header needed to gather the data
def GetStatus = JsonOutput.toJson([jsonrpc: "2.0", method: "runCmds", params: ["version": 1, cmds: ["show environment power"], format: "json"], "id": "1"])
def powerUrl = "https://" + host + "/command-api";
def powerHeader = ["Cookie": session, "Content-Type": "application/json"]

// Now we can retrieve the data
//httpClient = Client.open(host, 443)
def powerData = httpClient.post(powerUrl, GetStatus, powerHeader);
if ( !(httpClient.getStatusCode() =~ /200/)) {
    println "Failed to retrieve data " + httpClient.getStatusCode()
    return 1
}
String powerContent = httpClient.getResponseBody()
def powerTable = new JsonSlurper().parseText(powerContent)

def encoded_instance_props_array = []
def wildvalue = ""
def wildalias = ""
def description = ""
//println powerTable

powerTable.result[0].powerSupplies.each { psuNo, powerSupply ->
    wildvalue = "Power_Supply_" + psuNo
    wildalias = "${wildvalue}:${powerSupply.modelName}"
    description = "${wildvalue}/${powerSupply.modelName}"
    def instance_props = [
        "auto.power_supply": "Power Supply " + psuNo,
        "auto.power_model_name": powerSupply.modelName
    ]
    encoded_instance_props_array = instance_props.collect() { property, value ->
        URLEncoder.encode(property.toString()) + "=" + URLEncoder.encode(value.toString())
    }
    println "${wildvalue}##${wildalias}##${description}####${encoded_instance_props_array.join("&")}"
}
return 0

Collection Script:

/* Script Name: Campus Power Supply Collection */
import com.santaba.agent.groovyapi.http.*;
import com.santaba.agent.util.Settings
import groovy.json.JsonOutput
import groovy.json.JsonSlurper
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// Gather host properties
username = hostProps.get("ssh.user");
password = hostProps.get("ssh.pass");
host = hostProps.get("auto.network.names");
model = hostProps.get("auto.endpoint.model")
vendor = hostProps.get("auto.endpoint.manufacturer")

// Only run collection if the vendor is Arista Networks
if ( !(vendor == "Arista Networks")) {
    println "Do not need to discover as not an Arista Device"
    return(0)
}

// Only continue if the model is a campus (CCS-) switch; this drops the DC switches
if ( !(model.startsWith("CCS-"))) {
    println "Not a Campus Switch"
    return(0)
}

// Build the authentication JSON to send to the switch
def authJson = JsonOutput.toJson([username: username, password: password]);

// Build the login URL
url = "https://" + host + "/login";

// Make the call
httpClient = Client.open(host, 443);
def response = httpClient.post(url, authJson, ["Content-Type": "application/json"]);
if ( !(httpClient.getStatusCode() =~ /200/)) {
    println "Failed to retrieve data " + httpClient.getStatusCode()
    return(1)
}

// Get the Set-Cookie header so we can extract the session string
String sessionData = httpClient.getHeader("Set-Cookie");
// Remove everything from the first semicolon onwards, leaving "Session=<value>"
def firstPass = sessionData.split(';')[0];
// Remove everything before and including the "=", leaving just the session value
def session = firstPass.split('=')[1];

// Build the command JSON, URL and header needed to gather the data
def GetStatus = JsonOutput.toJson([jsonrpc: "2.0", method: "runCmds", params: ["version": 1, cmds: ["show environment power"], format: "json"], "id": "1"])
def powerUrl = "https://" + host + "/command-api";
def powerHeader = ["Cookie": session, "Content-Type": "application/json"]

// Now we can retrieve the data
//httpClient = Client.open(host, 443)
def powerData = httpClient.post(powerUrl, GetStatus, powerHeader);
if ( !(httpClient.getStatusCode() =~ /200/)) {
    println "Failed to retrieve data " + httpClient.getStatusCode()
    return 1
}
String powerContent = httpClient.getResponseBody()
def powerTable = new JsonSlurper().parseText(powerContent)

def encoded_instance_props_array = []
def wildvalue = ""
def wildalias = ""
def description = ""

powerTable.result[0].powerSupplies.each { psuNo, powerSupply ->
    wildvalue = "Power_Supply_" + psuNo
    wildalias = "Power_Supply_" + psuNo + ":${powerSupply.modelName}"
    psuRawState = "${powerSupply.state}"
    if (psuRawState == "ok") {
        psuState = 1
    } else {
        psuState = 0
    }
    println "${wildvalue}.psuState=${psuState}"
}
return 0
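For anyone adapting this, it may help to see the shape of the data the scripts walk and what they print. The sample below is only an illustration based on what the scripts themselves reference (result[0].powerSupplies, modelName, state); the real eAPI output of "show environment power" varies by EOS version and platform, and the model name and state values here are made up:

Trimmed, illustrative eAPI response body:
{
  "jsonrpc": "2.0",
  "id": "1",
  "result": [
    {
      "powerSupplies": {
        "1": { "modelName": "PWR-EXAMPLE-AC", "state": "ok" },
        "2": { "modelName": "PWR-EXAMPLE-AC", "state": "failed" }
      }
    }
  ]
}

Discovery output (one instance line per PSU):
Power_Supply_1##Power_Supply_1:PWR-EXAMPLE-AC##Power_Supply_1/PWR-EXAMPLE-AC####auto.power_supply=Power+Supply+1&auto.power_model_name=PWR-EXAMPLE-AC

Collection output (one datapoint line per PSU; anything other than "ok" maps to 0):
Power_Supply_1.psuState=1
Power_Supply_2.psuState=0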
SteveBamford · 5 days ago · Neophyte · 22 Views · 0 likes · 5 Comments

Adding Data sources to LMExchange

Hi all, I didn't think this particularly warranted a support call, but have I missed something? I've tried to find how to submit a DataSource for review in LM Exchange and I can't seem to work out how to do it. I have org admin rights, so it's not a permissions issue, and I did double-check: there is a Publish button that is ticked. Does that mean all DataSources I write will be auto-published, or just that they can be published? Thanks in advance.

SteveBamford · 5 days ago · Neophyte · 25 Views · 0 likes · 1 Comment

Duplicate Tickets Generated for Alert
We're getting duplicate tickets generated for each Host Status alert and can't quite figure out what is causing it. Same Alert ID... when the alert closes, one ticket closes and the other doesn't. I've seen this in the past with having separate alert rules for error vs. critical... the association in ##EXTERNALTICKETID## only respects the most recent ticket created per integration. I've started at the DataSource's thresholds and made my way through the alert -> rule -> escalation -> integration; everything suggests it should only create one ticket per alert. We have thresholds defined at the /Clients group level in most cases, so we're not touching the DataSource itself to make those changes and updating is easier. I've verified that only Critical creates tickets there.

Solved · Cole_McDonald · 11 days ago · Professor · 31 Views · 0 likes · 5 Comments

HP Aruba 6000 switch Support?
Good afternoon, it seems that the newest switch chassis from Aruba/HP isn't playing too nicely with LM at the moment. The same DataSources that were useful for the previous HP switches don't appear to work for the newer 6000-series devices (we have a 6000 and a few 6500s). Specifically, the memory SNMP query seems to poll no data; luckily, the default SNMP CPU DataSource does work. Has anyone run into this themselves?

Jordan-Eil · 12 days ago · Neophyte · 79 Views · 1 like · 2 Comments

Linux Collector setup
Hi fellow monitor champs, I have a question regarding installing Linux collectors. Our engineers are complaining that they can't detect the collector when installing it and adding it to the portal. The collector is visible under Collectors, but if we add the machine it does not work (see Adding Collector - LogicMonitor). Do we need additional steps, and if so, which exact ones? Thanks in advance!

Admine · 12 days ago · Neophyte · 23 Views · 0 likes · 1 Comment

HPE Alltera Storage
Hello community, does anyone know if there is a module for monitoring HPE Alltera devices?

phakesley · 13 days ago · Neophyte · 12 Views · 0 likes · 0 Comments

Can't install collectors on Windows core DC's
As it says on the tin: we get an error installing the watchdog service. On separate customers' core DCs, different networks, different proxies, the same error: "Failed to install watchdog." Browser access to LM works and we're installing as SYSTEM. The proxies do not require auth and we're installing with domain admin rights. We've had/have support tickets open but we haven't been able to resolve this. Anybody got any ideas?

Andy_C · 20 days ago · Neophyte · 109 Views · 0 likes · 10 Comments

Custom Threshold export
I'm trying to automate the export of all custom ("INSTANCE") alert thresholds from a specific LogicMonitor device group (org.org.devices) using PowerShell and the LogicMonitor REST API. I want to get a total count and, optionally, a breakdown per device. I have the following script, which:

- Authenticates using the LMv1 HMAC scheme
- Finds the device group by its full path
- Recursively retrieves all subgroups and devices
- Fetches datasources and instances for each device
- Counts thresholds where thresholdSource == 'INSTANCE'
- Runs in parallel across devices for performance

Here is the current PowerShell script:

param (
    [string]$AccessId,
    [string]$AccessKey,
    [string]$Company,
    [string]$GroupPath,
    [int]   $MaxThreads = 10
)

function Invoke-LmApi {
    param(
        [string]$Method,
        [string]$ResourcePath,
        [string]$QueryParams = '',
        [string]$Body = ''
    )
    $epoch = [math]::Round(((Get-Date).ToUniversalTime() - [datetime]'1970-01-01').TotalMilliseconds)
    $toSign = "$Method$epoch$Body$ResourcePath"
    $hmac = New-Object System.Security.Cryptography.HMACSHA256
    $hmac.Key = [Text.Encoding]::UTF8.GetBytes($AccessKey)
    $hash = $hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($toSign))
    $signature = [Convert]::ToBase64String($hash)
    $auth = "LMv1 ${AccessId}:${signature}:${epoch}"
    $url = "https://$Company.logicmonitor.com/santaba/rest$ResourcePath$QueryParams"
    return Invoke-RestMethod -Uri $url -Method $Method -Headers @{ Authorization = $auth }
}

function Get-GroupIdByPath {
    param([string]$Path)
    $resp = Invoke-LmApi -Method GET -ResourcePath '/device/groups' -QueryParams "?filter=fullPath:$Path&size=1"
    if (-not $resp.data.items) { throw "Group '$Path' not found" }
    return $resp.data.items[0].id
}

function Count-CustomThresholds {
    param([string]$RootPath)
    $rootId = Get-GroupIdByPath -Path $RootPath
    $groupIds = @($rootId)
    # TODO: collect subgroups, devices and count thresholds in parallel
}

Count-CustomThresholds -RootPath "$Company/$GroupPath"

However, I'm having trouble getting the script to reliably find my target group and export the counts. It either reports "Group not found" or hangs/processes slowly.

What I've tried so far:
- Switching between filtering on name: vs fullPath:
- Adjusting group path prefixes (with/without company)
- Serial vs parallel loops
- Verifying the group exists via manual API calls

Questions:
- What is the recommended way to filter by device group path (fullPath) using LogicMonitor's API?
- Are there any pitfalls in the LMv1 authentication or parallel Invoke-RestMethod calls I should avoid?
- How can I streamline this script to reliably export a summary of custom thresholds with minimal API calls?

Any guidance would be greatly appreciated. Thanks in advance!
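One detail worth ruling out for the "Group not found" symptom is the filter value itself: a fullPath that contains spaces or slashes generally needs to be wrapped in double quotes, and the whole filter URL-encoded before it goes on the query string. Below is a minimal sketch of that idea, reusing the Invoke-LmApi function above; the group path is a made-up example, and the exact quoting rules should be verified against LogicMonitor's REST API documentation:

# Sketch only: assumes the device-group "filter" parameter accepts a double-quoted value.
$groupPath = 'Customers/Acme/Network Devices'        # hypothetical example path
$filterRaw = 'fullPath:"{0}"' -f $groupPath          # quote the value so spaces and slashes survive
$filterEnc = [uri]::EscapeDataString($filterRaw)     # URL-encode the filter expression for the query string
$resp = Invoke-LmApi -Method 'GET' -ResourcePath '/device/groups' -QueryParams "?size=1&filter=$filterEnc"

if ($resp.data.items) {
    "Found group id $($resp.data.items[0].id) for '$groupPath'"
} else {
    Write-Warning "Group '$groupPath' not found - check the group's fullPath exactly as it appears in the portal"
}

Note that a group's fullPath normally does not include the portal/company name, which may be worth double-checking given the "$Company/$GroupPath" root used above.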
Admine · 25 days ago · Neophyte · 145 Views · 0 likes · 2 Comments

Ubiquiti Unifi 'Source Errors

We're having some difficulties getting the UniFi 'Sources to properly complete their Active Discovery scripts, which leads to building thread counts in the collector, which leads to collector service restarts... (Screenshot: ScriptADTasks over 7 days; red = failures.) I've been chasing the issues (some of it is the 'Sources' appliesTo not properly targeting devices based on the gathered SNMP initial discovery properties) and have not quite found the smoking gun, as the error I'm given doesn't directly point to the issue. Running AD Test from the DataSource "Ubiquiti_UniFi_Security_Gateways" against a UXG Pro device gives me this error: "Text must not be null or empty". It doesn't identify which text is needed, and the line numbers mentioned don't seem to relate directly to the DataSource's AD code line numbers.

Cole_McDonald · 26 days ago · Professor · 37 Views · 0 likes · 6 Comments

RestAPI Alerts access to ExternalTicketID
Has anyone figured out how to get at the ##ExternalTicketID## programmatically at all? Not having access to that is driving me to distraction. It's in the DB somewhere, but we can't get to it to help automate our workflows and toolsets. Right now I'm troubleshooting our ConnectWise integration and have to relate 4637 integration log entries to tickets manually, one by one. Having this internal variable only exposed in the Alerts view is hobbling our ability to build and troubleshoot our integrated systems.

Solved · Cole_McDonald · 2 months ago · Professor · 281 Views · 0 likes · 21 Comments