Recent Discussions
Dell ECS Network Statistics Version 3.6+ Flux Query
Dell made some changes to their ECS offering in version 3.6: system-level statistics such as CPU, memory, and network were removed from the dashboard API, and Flux queries must now be used to retrieve the data. Below are the discovery and collection scripts for the network-level statistics, which use a Flux query to retrieve the relevant metrics. Important note: this is for all versions of the Dell EMC ECS solution above 3.6; all earlier versions are fully supported by the existing LogicMonitor out-of-the-box packages.

The network statistics will return "0" values from time to time; I am still troubleshooting this. As with the previous script, I have found a minimum collection interval of 5 minutes works best (a possible query-side mitigation is sketched after the collection script below). Due to the 20,000 character limitation on a post, the CPU and memory stats can be found here.

Discovery Script:

```groovy
/*******************************************************************************
 * Dell ECS Network Interface Discovery script.
 ******************************************************************************/
import groovy.json.JsonSlurper
import groovy.json.JsonOutput
import groovy.json.JsonBuilder
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit
import com.santaba.agent.groovyapi.http.*
import com.santaba.agent.util.Settings

hostname = hostProps.get("system.hostname")
user = hostProps.get("ecs.user")
pass = hostProps.get("ecs.pass")
collectorplatform = hostProps.get("system.collectorplatform")
debug = false

def success = false
def token = login()

if (token) {
    // Retrieve network data for all nodes
    def encoded_instance_props_array = []
    def NETresponse = getNetwork(token)

    // Work through the table to retrieve the node name and id and build out
    // the instance-level properties.
    if (debug) println "Net Response: " + NETresponse
    def NETJson = new JsonSlurper().parseText(NETresponse)
    if (debug) println "\n\n Network Values: " + NETJson.Series.Values[0]

    NETJson.Series.Values[0].each { ifaceEntry ->
        def nodeId = ifaceEntry[9]
        def nodeName = ifaceEntry[7]
        def nodeIfaceName = ifaceEntry[8]
        wildvalue = "${nodeId}-${nodeIfaceName}"
        wildalias = "${nodeName}-${nodeIfaceName}"
        description = "${nodeId}/${nodeIfaceName}"
        def instance_props = [
            "auto.node.id"             : nodeId,
            "auto.node.name"           : nodeName,
            "auto.node.interface"      : nodeIfaceName,
            "auto.node.interface.speed": ifaceEntry[4]
        ]
        encoded_instance_props_array = instance_props.collect() { property, value ->
            URLEncoder.encode(property.toString()) + "=" + URLEncoder.encode(value.toString())
        }
        println "${wildvalue}##${wildalias}##${description}####${encoded_instance_props_array.join("&")}"
    }
} else if (debug) {
    println "Bad API response: no token returned from login"
}
return 0

def login() {
    if (debug) println "in login"
    // Fetch a new token using Basic authentication and return it
    if (debug) println "Checking provided ${user} creds at /login.json..."
    def userCredentials = "${user}:${pass}"
    def basicAuthStringEnc = new String(Base64.getEncoder().encode(userCredentials.getBytes()))
    def loginUrl = "https://${hostname}:4443/login.json".toURL()
    def loginConnection = loginUrl.openConnection()
    loginConnection.setRequestProperty("Authorization", "Basic " + basicAuthStringEnc)
    def loginResponseBody = loginConnection.getInputStream()?.text
    def loginResponseCode = loginConnection.getResponseCode()
    def loginResponseToken = loginConnection.getHeaderField("X-SDS-AUTH-TOKEN")
    if (debug) println loginResponseCode
    if (loginResponseCode == 200 && loginResponseToken) {
        if (debug) println "Retrieved token: ${loginResponseToken}"
        return loginResponseToken
    } else {
        println "STATUS CODE:\n${loginResponseCode}\n\nRESPONSE:\n${loginResponseBody}"
        println "Unable to fetch token with ${user} creds at /login.json"
    }
    println "Something unknown went wrong when logging in"
}

def getNetwork(token) {
    def slurper = new JsonSlurper()
    def dataUrl = "https://" + hostname + ":4443/flux/api/external/v2/query"
    if (debug) println "Trying to fetch data from ${dataUrl}"
    def flux = 'from(bucket:\"monitoring_op\") |> range(start: -5m) |> filter(fn: (r) => r._measurement == \"net\" and r._field == \"speed\")'
    def jsonBody = groovy.json.JsonOutput.toJson(["query": flux])
    if (debug) println "Raw JSON Body " + jsonBody
    if (debug) println "Json Body " + JsonOutput.prettyPrint(jsonBody) + " Type " + jsonBody.getClass()
    def dataHeader = ["X-SDS-AUTH-TOKEN": token, "Content-Type": "application/json", "Accept": "application/json"]
    if (debug) println("Sent Header: " + dataHeader)
    // Now we can retrieve the data.
    def httpClient = Client.open(hostname, 4443)
    httpClient.post(dataUrl, jsonBody, dataHeader)
    if (!(httpClient.getStatusCode() =~ /200/)) {
        println "Failed to retrieve data " + httpClient.getStatusCode()
        println "Header: " + httpClient.getHeader
        return (1)
    }
    String dataContent = httpClient.getResponseBody()
    if (debug) println "Status Code " + httpClient.getStatusCode()
    if (debug) println "Data in response Body " + dataContent
    return dataContent
}
```
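For reference, each line the discovery script prints follows LogicMonitor's BatchScript active discovery format (wildvalue##wildalias##description####encodedProperties). With hypothetical node and interface values, one output line would look like:

```text
10.0.0.1-eth0##ecs-node-01-eth0##10.0.0.1/eth0####auto.node.id=10.0.0.1&auto.node.name=ecs-node-01&auto.node.interface=eth0&auto.node.interface.speed=10000
```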
Collection Script:

```groovy
/*******************************************************************************
 * Dell ECS Flux Query Network Statistics
 ******************************************************************************/
import groovy.json.JsonSlurper
import groovy.json.JsonOutput
import groovy.json.JsonBuilder
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit
import com.santaba.agent.groovyapi.http.*
import com.santaba.agent.util.Settings

hostname = hostProps.get("system.hostname")
user = hostProps.get("ecs.user")
pass = hostProps.get("ecs.pass")
collectorplatform = hostProps.get("system.collectorplatform")
debug = false

def success = false
def token = login()

if (token) {
    // Retrieve network data for all nodes
    def encoded_instance_props_array = []
    def NETresponse = getNetwork(token)

    // Work through the table and print one datapoint line per interface entry.
    if (debug) println "Net Response: " + NETresponse
    def NETJson = new JsonSlurper().parseText(NETresponse)
    if (debug) println "\n\n Net Values: " + NETJson.Series.Values[0]

    NETJson.Series.Values[0].each { ifaceEntry ->
        def nodeId = ifaceEntry[9]
        def nodeName = ifaceEntry[7]
        def nodeIfaceName = ifaceEntry[8]
        // Get the _field value so we know which metric we are collecting.
        def fieldName = ifaceEntry[5]
        def fieldValue = ifaceEntry[4]
        wildvalue = "${nodeId}-${nodeIfaceName}"
        wildalias = "${nodeName}-${nodeIfaceName}"
        description = "${nodeName}/${nodeIfaceName}"
        println "${wildvalue}.${fieldName}=${fieldValue}"
    }
} else if (debug) {
    println "Bad API response: no token returned from login"
}
return 0

def login() {
    if (debug) println "in login"
    // Fetch a new token using Basic authentication and return it
    if (debug) println "Checking provided ${user} creds at /login.json..."
    def userCredentials = "${user}:${pass}"
    def basicAuthStringEnc = new String(Base64.getEncoder().encode(userCredentials.getBytes()))
    def loginUrl = "https://${hostname}:4443/login.json".toURL()
    def loginConnection = loginUrl.openConnection()
    loginConnection.setRequestProperty("Authorization", "Basic " + basicAuthStringEnc)
    def loginResponseBody = loginConnection.getInputStream()?.text
    def loginResponseCode = loginConnection.getResponseCode()
    def loginResponseToken = loginConnection.getHeaderField("X-SDS-AUTH-TOKEN")
    if (debug) println loginResponseCode
    if (loginResponseCode == 200 && loginResponseToken) {
        if (debug) println "Retrieved token: ${loginResponseToken}"
        return loginResponseToken
    } else {
        println "STATUS CODE:\n${loginResponseCode}\n\nRESPONSE:\n${loginResponseBody}"
        println "Unable to fetch token with ${user} creds at /login.json"
    }
    println "Something unknown went wrong when logging in"
}

def getNetwork(token) {
    def slurper = new JsonSlurper()
    def dataUrl = "https://" + hostname + ":4443/flux/api/external/v2/query"
    if (debug) println "Trying to fetch data from ${dataUrl}"
    def flux = 'from(bucket:\"monitoring_op\") |> range(start: -5m) |> filter(fn: (r) => r._measurement == \"net\")'
    def jsonBody = groovy.json.JsonOutput.toJson(["query": flux])
    if (debug) println "Raw JSON Body " + jsonBody
    if (debug) println "Json Body " + JsonOutput.prettyPrint(jsonBody) + " Type " + jsonBody.getClass()
    def dataHeader = ["X-SDS-AUTH-TOKEN": token, "Content-Type": "application/json", "Accept": "application/json"]
    if (debug) println("Sent Header: " + dataHeader)
    // Now we can retrieve the data.
    def httpClient = Client.open(hostname, 4443)
    httpClient.post(dataUrl, jsonBody, dataHeader)
    if (!(httpClient.getStatusCode() =~ /200/)) {
        println "Failed to retrieve data " + httpClient.getStatusCode()
        println "Header: " + httpClient.getHeader
        return (1)
    }
    String dataContent = httpClient.getResponseBody()
    if (debug) println "Status Code " + httpClient.getStatusCode()
    if (debug) println "Data in response Body " + dataContent
    return dataContent
}
```
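Regarding the intermittent "0" values noted above: one avenue worth trying, as a sketch rather than a tested fix, is to have Flux return only the newest point per series instead of every point in the 5-minute window, assuming the ECS Flux endpoint supports the standard last() function. The query string in getNetwork() would then become:

```groovy
// Hypothetical variant of the collection query: take only the most recent
// point per series in the window, which may avoid picking up empty/zero rows.
def flux = 'from(bucket:\"monitoring_op\") |> range(start: -5m) |> filter(fn: (r) => r._measurement == \"net\") |> last()'
```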
Data Source Configuration: In both cases the "Applies To" setting is hasCategory("EMC_ECS_Cluster"). The discovery schedule is daily, the collection schedule (as stated) is every 5 minutes, and collection is configured as a BatchScript in both.

Additional comments and notes: One of the biggest challenges in moving from the dashboard API to the Flux API was an HTTP 401 response. Initially I thought the Flux query was at fault, but it turned out to be the saving of the token to a file, as done in the original data sources; once I removed this and matched the approach of my Python script, which worked without issue, the problem was resolved. I have an additional request for latency statistics; I will share these in a separate post once done. Hope this helps.

SteveBamford · 4 days ago

Dell ECS System Level Statistics Data Sources
Dell made some changes to their ECS offering in version 3.6: system-level statistics such as CPU, memory, and network were removed from the dashboard API, and Flux queries must now be used to retrieve the data. Below are two sets of discovery and collection scripts, one for CPU and memory and one for network-level statistics, which use Flux queries to retrieve the relevant metrics. Important note: this is for all versions of the Dell EMC ECS solution above 3.6; all earlier versions are fully supported by the existing LogicMonitor out-of-the-box packages.

CPU and Memory Statistics: The following discovery and collection scripts retrieve the CPU and memory statistics from the Flux query API. I would recommend keeping the collection frequency at 5 minutes.

Discovery Script:

```groovy
/*******************************************************************************
 * Dell ECS Flux Query CPU and Memory Discovery Script
 ******************************************************************************/
import groovy.json.JsonSlurper
import groovy.json.JsonOutput
import groovy.json.JsonBuilder
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit
import com.santaba.agent.groovyapi.http.*
import com.santaba.agent.util.Settings

hostname = hostProps.get("system.hostname")
user = hostProps.get("ecs.user")
pass = hostProps.get("ecs.pass")
collectorplatform = hostProps.get("system.collectorplatform")
debug = false

def success = false
def token = login()

if (token) {
    // Retrieve data for all nodes for CPU and Memory
    def encoded_instance_props_array = []
    // Use the flux call for getting the CPU to retrieve the node information.
    // Future enhancement: find a call for just the nodes rather than the metrics call.
    def CPUresponse = getNode(token)
    if (debug) println "CPU Response: " + CPUresponse

    // Work through the table to retrieve the node name and id and build out
    // the instance-level properties.
    def CPUJson = new JsonSlurper().parseText(CPUresponse)
    if (debug) println "\n\n CPU Values: " + CPUJson.Series.Values[0]

    CPUJson.Series.Values[0].each { nodeEntry ->
        if (debug) println "In Table"
        if (debug) println "Node Data " + nodeEntry
        def nodeId = nodeEntry[9]
        def nodeName = nodeEntry[8]
        wildvalue = nodeId
        wildalias = nodeName
        description = "${nodeId}/${nodeName}"
        def instance_props = [
            "auto.node.id"  : nodeId,
            "auto.node.name": nodeName
        ]
        encoded_instance_props_array = instance_props.collect() { property, value ->
            URLEncoder.encode(property.toString()) + "=" + URLEncoder.encode(value.toString())
        }
        println "${wildvalue}##${wildalias}##${description}####${encoded_instance_props_array.join("&")}"
    }
} else if (debug) {
    println "Bad API response: no token returned from login"
}
return 0

def login() {
    if (debug) println "in login"
    // Fetch a new token using Basic authentication and return it
    if (debug) println "Checking provided ${user} creds at /login.json..."
    def userCredentials = "${user}:${pass}"
    def basicAuthStringEnc = new String(Base64.getEncoder().encode(userCredentials.getBytes()))
    def loginUrl = "https://${hostname}:4443/login.json".toURL()
    def loginConnection = loginUrl.openConnection()
    loginConnection.setRequestProperty("Authorization", "Basic " + basicAuthStringEnc)
    def loginResponseBody = loginConnection.getInputStream()?.text
    def loginResponseCode = loginConnection.getResponseCode()
    def loginResponseToken = loginConnection.getHeaderField("X-SDS-AUTH-TOKEN")
    if (debug) println loginResponseCode
    if (loginResponseCode == 200 && loginResponseToken) {
        if (debug) println "Retrieved token: ${loginResponseToken}"
        return loginResponseToken
    } else {
        println "STATUS CODE:\n${loginResponseCode}\n\nRESPONSE:\n${loginResponseBody}"
        println "Unable to fetch token with ${user} creds at /login.json"
    }
    println "Something unknown went wrong when logging in"
}

def getNode(token) {
    def slurper = new JsonSlurper()
    def dataUrl = "https://" + hostname + ":4443/flux/api/external/v2/query"
    if (debug) println "Trying to fetch data from ${dataUrl}"
    //def flux = 'from(bucket:\"monitoring_op\") |> range(start: -5m) |> filter(fn: (r) => r._measurement == \"cpu\" and r.cpu == \"cpu-total\" and r._field == \"usage_idle\" and r.host == \"'+hostname+'\")'
    def flux = 'from(bucket:\"monitoring_op\") |> range(start: -5m) |> filter(fn: (r) => r._measurement == \"cpu\" and r.cpu == \"cpu-total\" and r._field == \"usage_idle\")'
    def jsonBody = groovy.json.JsonOutput.toJson(["query": flux])
    if (debug) println "Raw JSON Body " + jsonBody
    if (debug) println "Json Body " + JsonOutput.prettyPrint(jsonBody) + " Type " + jsonBody.getClass()
    def dataHeader = ["X-SDS-AUTH-TOKEN": token, "Content-Type": "application/json", "Accept": "application/json"]
    if (debug) println("Sent Header: " + dataHeader)
    // Now we can retrieve the data.
    def httpClient = Client.open(hostname, 4443)
    httpClient.post(dataUrl, jsonBody, dataHeader)
    if (!(httpClient.getStatusCode() =~ /200/)) {
        println "Failed to retrieve data " + httpClient.getStatusCode()
        println "Header: " + httpClient.getHeader
        return (1)
    }
    String dataContent = httpClient.getResponseBody()
    if (debug) println "Status Code " + httpClient.getStatusCode()
    if (debug) println "Data in response Body " + dataContent
    //return slurper.parseText(dataContent)
    return dataContent
}
```

Collection Script:

```groovy
/*******************************************************************************
 * Dell ECS Flux Query CPU and Memory
 ******************************************************************************/
import groovy.json.JsonSlurper
import groovy.json.JsonOutput
import groovy.json.JsonBuilder
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit
import com.santaba.agent.groovyapi.http.*
import com.santaba.agent.util.Settings

hostname = hostProps.get("system.hostname")
user = hostProps.get("ecs.user")
pass = hostProps.get("ecs.pass")
collectorplatform = hostProps.get("system.collectorplatform")
debug = true

def success = false
def token = login()

if (token) {
    // Retrieve data for all nodes for CPU and Memory
    def encoded_instance_props_array = []
    def CPUresponse = getCPU(token)
    def MEMresponse = getMemory(token)
    if (debug) println "CPU Response: " + CPUresponse
    if (debug) println "Mem Response: " + MEMresponse

    // Process CPU metrics
    def CPUJson = new JsonSlurper().parseText(CPUresponse)
    if (debug) println "\n\n CPU Values: " + CPUJson.Series.Values[0]
    CPUJson.Series.Values[0].each { nodeEntry ->
        if (debug) println "Node Data " + nodeEntry
        def idleCPU = Float.valueOf(nodeEntry[4])
        def usedCPU = 100 - idleCPU
        def nodeId = nodeEntry[9]
        def nodeName = nodeEntry[8]
        wildvalue = nodeId
        wildalias = nodeName
        description = "${nodeId}/${nodeName}"
        println "${wildvalue}.idle_cpu=${idleCPU}"
        println "${wildvalue}.used_cpu=${usedCPU}"
    }

    // Process memory metrics
    def MEMJson = new JsonSlurper().parseText(MEMresponse)
    if (debug) println "\n\n Mem Values: " + MEMJson.Series.Values[0]
    MEMJson.Series.Values[0].each { nodeEntry ->
        def fieldValue = nodeEntry[4]
        def fieldName = nodeEntry[5]
        def nodeId = nodeEntry[8]
        def nodeName = nodeEntry[7]
        wildvalue = nodeId
        wildalias = nodeName
        description = "${nodeId}/${nodeName}"
        println "${wildvalue}.${fieldName}=${fieldValue}"
    }
} else if (debug) {
    println "Bad API response: no token returned from login"
}
return 0

def login() {
    if (debug) println "in login"
    // Fetch a new token using Basic authentication and return it
    if (debug) println "Checking provided ${user} creds at /login.json..."
    def userCredentials = "${user}:${pass}"
    def basicAuthStringEnc = new String(Base64.getEncoder().encode(userCredentials.getBytes()))
    def loginUrl = "https://${hostname}:4443/login.json".toURL()
    def loginConnection = loginUrl.openConnection()
    loginConnection.setRequestProperty("Authorization", "Basic " + basicAuthStringEnc)
    def loginResponseBody = loginConnection.getInputStream()?.text
    def loginResponseCode = loginConnection.getResponseCode()
    def loginResponseToken = loginConnection.getHeaderField("X-SDS-AUTH-TOKEN")
    if (debug) println loginResponseCode
    if (loginResponseCode == 200 && loginResponseToken) {
        if (debug) println "Retrieved token: ${loginResponseToken}"
        return loginResponseToken
    } else {
        println "STATUS CODE:\n${loginResponseCode}\n\nRESPONSE:\n${loginResponseBody}"
        println "Unable to fetch token with ${user} creds at /login.json"
    }
    println "Something unknown went wrong when logging in"
}

def getCPU(token) {
    def slurper = new JsonSlurper()
    def dataUrl = "https://" + hostname + ":4443/flux/api/external/v2/query"
    if (debug) println "Trying to fetch data from ${dataUrl}"
    def flux = 'from(bucket:\"monitoring_op\") |> range(start: -5m) |> filter(fn: (r) => r._measurement == \"cpu\" and r.cpu == \"cpu-total\" and r._field == \"usage_idle\")'
    def jsonBody = groovy.json.JsonOutput.toJson(["query": flux])
    if (debug) println "Raw JSON Body " + jsonBody
    if (debug) println "Json Body " + JsonOutput.prettyPrint(jsonBody) + " Type " + jsonBody.getClass()
    def dataHeader = ["X-SDS-AUTH-TOKEN": token, "Content-Type": "application/json", "Accept": "application/json"]
    if (debug) println("Sent Header: " + dataHeader)
    // Now we can retrieve the data.
    def httpClient = Client.open(hostname, 4443)
    httpClient.post(dataUrl, jsonBody, dataHeader)
    if (!(httpClient.getStatusCode() =~ /200/)) {
        println "Failed to retrieve data " + httpClient.getStatusCode()
        println "Header: " + httpClient.getHeader
        return (1)
    }
    String dataContent = httpClient.getResponseBody()
    if (debug) println "Status Code " + httpClient.getStatusCode()
    if (debug) println "Data in response Body " + dataContent
    return dataContent
}

def getMemory(token) {
    def slurper = new JsonSlurper()
    def dataUrl = "https://" + hostname + ":4443/flux/api/external/v2/query"
    if (debug) println "Trying to fetch data from ${dataUrl}"
    //def flux = 'from(bucket:\"monitoring_op\") |> range(start: -5m) |> filter(fn: (r) => r._measurement == \"mem\" and r._field == \"available_percent\")'
    def flux = 'from(bucket:\"monitoring_op\") |> range(start: -5m) |> filter(fn: (r) => r._measurement == \"mem\")'
    def jsonBody = groovy.json.JsonOutput.toJson(["query": flux])
    if (debug) println "Raw JSON Body " + jsonBody
    if (debug) println "Json Body " + JsonOutput.prettyPrint(jsonBody) + " Type " + jsonBody.getClass()
    def dataHeader = ["X-SDS-AUTH-TOKEN": token, "Content-Type": "application/json", "Accept": "application/json"]
    if (debug) println("Sent Header: " + dataHeader)
    // Now we can retrieve the data.
    def httpClient = Client.open(hostname, 4443)
    httpClient.post(dataUrl, jsonBody, dataHeader)
    if (!(httpClient.getStatusCode() =~ /200/)) {
        println "Failed to retrieve data " + httpClient.getStatusCode()
        println "Header: " + httpClient.getHeader
        return (1)
    }
    String dataContent = httpClient.getResponseBody()
    if (debug) println "Status Code " + httpClient.getStatusCode()
    if (debug) println "Data in response Body " + dataContent
    return dataContent
}
```
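One fragility shared by these scripts is the hard-coded column positions (nodeEntry[4], nodeEntry[8], and so on), which will silently shift if the shape of the Flux response changes. If the parsed response exposes column names alongside the values (the Series/Values shape suggests a parallel Columns array, though I have not confirmed this against the ECS payload), the indices could be resolved by name instead. A sketch, where the column names "_value" and "host" are assumptions based on standard Flux output:

```groovy
// Hypothetical sketch: resolve column positions by name instead of hard-coding.
// Assumes the parsed response carries a Columns array parallel to Values, and
// that the columns of interest are named "_value" and "host" (unverified).
def columns = CPUJson.Series.Columns[0]
def idx = { String name -> columns.indexOf(name) }

CPUJson.Series.Values[0].each { nodeEntry ->
    def idleCPU = Float.valueOf(nodeEntry[idx("_value")].toString())
    def nodeName = nodeEntry[idx("host")]
    println "${nodeName}.idle_cpu=${idleCPU}"
    println "${nodeName}.used_cpu=${100 - idleCPU}"
}
```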
Networking Statistics: See https://community.logicmonitor.com/discussions/lm-exchange/dell-ecs-network-statistics-version-3-6-flux-query/19817 for the network statistics, as I ran out of space.

Additional comments and notes: One of the biggest challenges in moving from the dashboard API to the Flux API was an HTTP 401 response. Initially I thought the Flux query was at fault, but it turned out to be the saving of the token to a file, as done in the original data sources; once I removed this and matched the approach of my Python script, which worked without issue, the problem was resolved. I have an additional request for latency statistics; I will share these in a separate post once done. Hope this helps.

SteveBamford · 4 days ago

How to Create a Dashboard Widget for "Sensitive" Windows Servers?
Hi Community, I'm looking for best practices to create a dashboard widget that highlights Windows servers which are more "problematic" or sensitive—for example, servers that frequently trigger CPU, Memory, or Disk alerts.

Goal:
- Identify servers with high alert frequency or severe resource issues.
- Display them in a widget so they stand out for quick troubleshooting.

Checkpoint Power Supplies - 6000-XL
Not sure where else to post this or how to get my update into the repo. To support the 6000-XL and some other variants, on the DataSource "Checkpoint Power Supplies", modify the datapoint 'PowerSupplyStatus' to: Up|Present

MonitoringLife · 2 months ago

Dell ECS Flux API DataSource
I am currently working on replacing the DataSource elements for CPU and memory for the Dell ECS S3 platform we use, but have hit a problem in building the JSON. I can get this to work in Python, but I am really struggling in Groovy, where I get a 400 Bad Request that points to a syntax problem.

Here is the JSON I am sending as I would write it in Python, which works:

```python
sendData = {"query":"from(bucket:\"monitoring_op\") |> range(start: -5m) |> filter(fn: (r) => r._measurement == \"cpu\" and r.cpu == \"cpu-total\" and r._field == \"usage_idle\" and r.host == \""+hostname+"\")"}
```

Here it is in Groovy format, which has been through a few iterations:

```groovy
dataString = ["query":'from(bucket:\"monitoring_op\") |> range(start: -5m) |> filter(fn: (r) => r._measurement == \"cpu\" and r.cpu == \"cpu-total\" and r._field == \"usage_idle\" and r.host == \"'+hostname+'\")']
```

The actual code is below; it is a copy of one of the original ECS DataSources, with the getApi element adapted to use the Flux API as opposed to the dashboard API elements.

```groovy
import groovy.json.JsonSlurper
import groovy.json.JsonOutput
import groovy.json.JsonBuilder
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit
import com.santaba.agent.groovyapi.http.*
import com.santaba.agent.util.Settings

hostname = [Removed]
user = [Removed]
pass = [Removed]
collectorplatform = "linux"

// Temp lines
println "Successfully read params"
def nodeid = [Removed]
// Temp lines end

if (nodeid == null) {
    println nodeid
    println "auto.emc.ecs.node.nodeid is not set! Please see Technical Notes."
    return 1
}

debug = true
def success = false
def token = login()

//Temp Lines
println "out of login"
println token
// End Templines

if (token) {
    def response = getApi(token)
    println "Response below"
    println response
} else if (debug) {
    println "Bad API response: ${response}"
}
return success ? 0 : 1

def login() {
    println "in login"
    File tokenCacheFile
    if (collectorplatform == 'windows') tokenCacheFile = new File("emc_ecs_tokens" + '\\' + hostname + "-emc_ecs_tokens.txt")
    if (collectorplatform == 'linux') tokenCacheFile = new File("emc_ecs_tokens" + '/' + hostname + "-emc_ecs_tokens.txt")
    if (debug) println "Token cache filename is: ${tokenCacheFile}"
    // If we have a non-empty readable token cache file, then extract and return the token
    if (tokenCacheFile.exists() && tokenCacheFile.canRead() && tokenCacheFile.readLines().size() == 1 && !tokenCacheFile.readLines()[0].contains("null")) {
        if (debug) println "Token cache file exists and is non-empty"
        def cachedToken = tokenCacheFile.readLines()[0]
        if (cachedToken) {
            if (debug) println "Extracted token from cache file: ${cachedToken}"
            return cachedToken
        }
    } else if (!tokenCacheFile.exists()) {
        // token cache file does not exist, create it
        if (debug) println "Token cache file does not exist, creating..."
        new File("emc_ecs_tokens").mkdir()
        tokenCacheFile.createNewFile()
    } else if (tokenCacheFile.text != '') {
        // malformed token cache file
        println "Bad token file: ${tokenCacheFile.readLines()}\nClearing..."
        tokenCacheFile.text = ''
    } else if (debug && tokenCacheFile.text == '') {
        // token cache file has been cleared, proceed and rebuild
        println "Session token file is cleared. Rebuilding..."
    }
    // Fetch new token using Basic authentication, set in cache file and return
    if (debug) println "Checking provided ${user} creds at /login.json..."
    def userCredentials = "${user}:${pass}"
    def basicAuthStringEnc = new String(Base64.getEncoder().encode(userCredentials.getBytes()))
    def loginUrl = "https://${hostname}:4443/login.json".toURL()
    def loginConnection = loginUrl.openConnection()
    loginConnection.setRequestProperty("Authorization", "Basic " + basicAuthStringEnc)
    def loginResponseBody = loginConnection.getInputStream()?.text
    def loginResponseCode = loginConnection.getResponseCode()
    def loginResponseToken = loginConnection.getHeaderField("X-SDS-AUTH-TOKEN")
    println loginResponseCode
    if (loginResponseCode == 200 && loginResponseToken) {
        if (debug) println "Retrieved token: ${loginResponseToken}"
        tokenCacheFile << loginResponseToken
        if (debug) println "Set token in cache file"
        return loginResponseToken
    } else {
        println "STATUS CODE:\n${loginResponseCode}\n\nRESPONSE:\n${loginResponseBody}"
        println "Unable to fetch token with ${user} creds at /login.json"
    }
    println "Something unknown went wrong when logging in"
}

def getApi(token, alreadyFailed = false) {
    def dataUrl = "https://" + hostname + ":4443/flux/api/external/v2/query"
    if (debug) println "Trying to fetch data from ${dataUrl}..."
    //def GetStatus = JsonOutput.toJson([query:'from(bucket:\"monitoring_op\") |> range(start: -5m) |> filter(fn: (r) => r._measurement == \"cpu\" and r.cpu == \"cpu-total\" and r._field == \"usage_idle\" and r.host == \"${hostname}\")'])
    dataString = ["query": 'from(bucket:\"monitoring_op\") |> range(start: -5m) |> filter(fn: (r) => r._measurement == \"cpu\" and r.cpu == \"cpu-total\" and r._field == \"usage_idle\" and r.host == \"' + hostname + '\")']
    def GetStatus = JsonOutput.toJson(dataString)
    println GetStatus
    println "GetStatus Class"
    println GetStatus.getClass()
    def dataHeader = ['X-SDS-AUTH-TOKEN': token, 'Content-Type': 'application/json', 'accept': 'application/json']
    // Now we can retrieve the data.
    httpClient = Client.open(hostname, 443)
    def dataData = httpClient.post(dataUrl, GetStatus, dataHeader)
    if (!(httpClient.getStatusCode() =~ /200/)) {
        println "Failed to retrieve data " + httpClient.getStatusCode
        return (1)
    }
    String dataContent = httpClient.getResponseBody()
    println httpClient.getStatusCode
    println dataContent
    return new JsonSlurper().parseText(dataContent)
}
```

Thanks in advance for any help.
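For comparison, the working scripts earlier on this page build the same body with JsonOutput.toJson on a map, but they open the santaba HTTP client on port 4443, matching the :4443 in the data URL, whereas getApi above opens port 443. A minimal sketch of that pattern (hostname, dataUrl, and dataHeader as defined in the script above):

```groovy
// Pattern used by the working ECS Flux scripts above: single-quoted Groovy
// string with escaped inner quotes, hostname concatenated in, then serialized.
def flux = 'from(bucket:\"monitoring_op\") |> range(start: -5m) |> filter(fn: (r) => r._measurement == \"cpu\" and r.cpu == \"cpu-total\" and r._field == \"usage_idle\" and r.host == \"' + hostname + '\")'
def jsonBody = groovy.json.JsonOutput.toJson(["query": flux])
def httpClient = Client.open(hostname, 4443)   // port matches the URL, not 443
httpClient.post(dataUrl, jsonBody, dataHeader)
```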
Solved · SteveBamford · 2 months ago

Best Practices for API Calls in a datasource

Hi all, Possibly the most random question of the week: when working on datasources that utilize API calls, what would you say is the maximum number of calls to make in one datasource? Typically I have worked with one data-retrieval call per datasource.

Why the question? Dell have withdrawn a number of fields from the dashboard API in Dell ECS, which means metrics such as CPU and memory now need to be retrieved from the provided Flux API, amongst a few other metrics which I may or may not need to provide to our infrastructure team. To do this it looks like I may need to generate at least two Flux queries, one for CPU and one for memory, which will result in two API calls. So would you create a single datasource for each metric, or make both calls within one datasource so you have a global stats datasource for this sort of information (a skeleton of the latter is sketched below)? Thanks in advance for your input.
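For what it's worth, the CPU and Memory collection script earlier on this page already follows the second approach: one BatchScript collection that issues both Flux queries and prints all datapoints together. Reduced to a skeleton, where processCPU and processMemory are hypothetical stand-ins for that script's two parsing loops:

```groovy
// Skeleton of the two-calls-in-one-datasource pattern from the CPU and Memory
// collection script above. processCPU/processMemory are hypothetical helpers.
def token = login()
if (token) {
    processCPU(getCPU(token))
    processMemory(getMemory(token))
}
return 0
```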
SteveBamford · 2 months ago

Seeking feedback on Nutanix monitoring

We are starting to monitor Nutanix environments in our datacenter, and I've downloaded all the LM modules, so they are ready to use. I'm looking for any success stories and feedback from users, because as of now I can get SNMP for system stats, but nothing from the Nutanix modules themselves. Within Prism we added an SNMP user, and the v3 creds are in the LM resource. It appears the SNMP service needs to be restarted after configuring a user. This is a reference we've used so far: https://portal.nutanix.com/page/documents/kbs/details?targetId=kA0600000008bAECAY#Heading_B

JaredM · 2 months ago

How to handle unnecessary active alerts
Dear LM community, I'm looking for the best practice for handling unnecessary active alerts in LogicMonitor. As far as I understand, we can acknowledge, put into SDT, escalate, adjust alert thresholds (instance thresholds), or even group instances with custom alerting rules. However, it doesn't seem possible to simply remove an active alert once it's triggered (please correct me if I am mistaken). Each of these approaches has downsides; for example, grouping interfaces to suppress alerts may cause us to miss new alerts later if a port becomes active again. What is the recommended way to deal with such unnecessary alerts, in this case inactive network interfaces that are alerting but are expected to stay down? Thank you in advance for your input!

Clark_Kent · 2 months ago

Meraki Switch Stack vs Cisco Switch Stack
I apologize if this topic has already been addressed—I was unable to locate any relevant discussions. I'm encountering a challenge with how LogicMonitor Topology represents Meraki stacked switches, particularly in contrast to its handling of Cisco stacked switches.

When LogicMonitor discovers Cisco switches configured in a stack, it identifies the stack as a single logical entity, aggregating multiple serial numbers and hardware components. This behavior aligns with Cisco IOS, which presents the stack as a unified system. As a result, LogicMonitor's topology mapping treats the stack as a single node, simplifying both visualization and monitoring.

Meraki, however, takes a different approach. The Meraki cloud platform recognizes individual switches as members of a stack, and because of this (I believe) LogicMonitor treats each switch as a distinct device. Consequently, topology maps generated by LogicMonitor show individual connections between each switch in a stack, rather than representing the stack as a cohesive unit. This leads to fragmented and often impractical topology views.

Manual topology mapping is not a viable option in my environment. Has anyone found a method or workaround to reconcile this issue?

billbianco · 2 months ago

Example scripts
Hi community, I'm running into a limitation with reporting on Scheduled Downtime (SDT) in LogicMonitor. Right now, I'm able to pull alerts that occurred during SDTs, but I cannot generate a single report that shows all historical SDTs across all my resources/devices. Is there any way to generate such a historical SDT report? Does someone have a script or code to share to get that through the API? Thanks in advance!

Admine · 3 months ago
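A minimal Groovy sketch of the call pattern for listing SDTs through LogicMonitor's REST API (GET /sdt/sdts with LMv1 authentication). The portal name, accessId, and accessKey are placeholders, and this endpoint returns the SDTs the portal currently holds; whether expired SDTs are included is exactly the open question above, so treat it only as a starting point:

```groovy
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Placeholder portal and API-token credentials.
def account   = "yourportal"
def accessId  = "YOUR_ACCESS_ID"
def accessKey = "YOUR_ACCESS_KEY"

def resourcePath = "/sdt/sdts"
def epoch = System.currentTimeMillis().toString()

// LMv1 signature: base64(hex(hmac-sha256(verb + epoch + body + resourcePath)))
def requestVars = "GET" + epoch + "" + resourcePath
Mac mac = Mac.getInstance("HmacSHA256")
mac.init(new SecretKeySpec(accessKey.getBytes("UTF-8"), "HmacSHA256"))
def hexDigest = mac.doFinal(requestVars.getBytes("UTF-8")).encodeHex().toString()
def signature = hexDigest.getBytes("UTF-8").encodeBase64().toString()

// Query parameters are not part of the signed resource path.
def url = "https://${account}.logicmonitor.com/santaba/rest${resourcePath}?size=1000".toURL()
def conn = url.openConnection()
conn.setRequestProperty("Authorization", "LMv1 ${accessId}:${signature}:${epoch}")
conn.setRequestProperty("X-Version", "3")
println conn.inputStream.text   // JSON document listing the SDTs
```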