Recent Discussions
Duplicate Tickets Generated for Alert
We're getting duplicate tickets generated for each Host Status alert. Can't quite figure out what is causing it. Same Alert ID... when the alert closes, one ticket closes, the other doesn't. I've seen this in the past with having separate alert rules for error vs. critical. The association in ##EXTERNALTICKETID## only respects the most recent ticket created per integration. I've started at the DataSource's thresholds and made my way through the alert -> rule -> escalation -> integration; everything suggests it should only create one ticket per alert. We have thresholds defined at the /Clients group level in most cases so we're not touching the DS itself to make those changes, which makes updating easier. I've verified that only Critical creates tickets there.

Solved · Cole_McDonald · 2 months ago · Professor · 42 Views · 0 likes · 5 Comments
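For what it's worth, the "separate rules for error vs. critical" theory from the duplicate-tickets post above can be sanity-checked on paper. A rough sketch, with simplified rule shapes that are stand-ins for LogicMonitor's actual alert-rule schema, not the API's field names:

```python
# Sketch of how one alert can open two tickets: LogicMonitor applies only the
# highest-priority matching rule, but an alert that escalates (error -> critical)
# is re-evaluated at the new severity and can land on a *different* rule.
# Rule dicts below are simplified stand-ins, not the real alert-rule schema.

def first_matching_rule(rules, severity):
    """Return the highest-priority rule matching this severity (lower = higher)."""
    matches = [r for r in rules if r["severity"] in (severity, "all")]
    return min(matches, key=lambda r: r["priority"]) if matches else None

def tickets_opened(rules, severity_path):
    """Walk the severities an alert passes through; a new rule firing the
    integration means a new ticket for the same Alert ID."""
    seen_rules, tickets = [], []
    for severity in severity_path:
        rule = first_matching_rule(rules, severity)
        if rule and rule["name"] not in seen_rules:
            seen_rules.append(rule["name"])
            tickets.append((rule["name"], rule["integration"]))
    return tickets
```

If `tickets_opened(rules, ["error", "critical"])` returns two entries pointing at the same integration, that would reproduce the one-closes-one-doesn't symptom, since only the most recent ticket stays associated via ##EXTERNALTICKETID##.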
Arista Campus Switch Power Supply State

We recently discovered that Arista Campus switches are either just not populating the OID for power supply states, or are using a different one, so in my API-first approach to monitoring I decided to utilize the Arista EOS API to grab the power supply data. Below are the discovery and collection scripts, along with some of the key fields for a DataSource. You will notice I am using the model number to identify the Arista campus switches with a negated check, "if ( !(model.startsWith("CCS-")))", which will drop all the DC switches out of the script. As always, I appreciate any feedback and ways to improve on these things. Have a great day.

Note: This is obviously shared on a best-efforts basis, but we are using this across all of our campus switches and it works great (we found 3 switches with PSU issues on top of the one we knew had a failed PSU).

Applies to: hasCategory("Arista")

Description: Checks the state of each power supply and returns 1 = Ok, 0 = Down. This replaces the Arista SNMP PSU DataSource, which returns a null value for the campus switches.
Discovery Script:

```groovy
/* Script Name: Campus Power Supply Discovery */
import com.santaba.agent.groovyapi.http.*
import groovy.json.JsonOutput
import groovy.json.JsonSlurper

// Gather host properties
username = hostProps.get("ssh.user")
password = hostProps.get("ssh.pass")
host     = hostProps.get("auto.network.names")
model    = hostProps.get("auto.endpoint.model")
vendor   = hostProps.get("auto.endpoint.manufacturer")

// Only need to discover the device if the vendor is Arista Networks
if (vendor != "Arista Networks") {
    println "Do not need to discover as not an Arista device"
    return 0
}

// Only the campus models (CCS-*) need this; drop all the DC switches
if ( !(model.startsWith("CCS-"))) {
    println "Not a Campus Switch"
    return 0
}

// Build authentication JSON to send to the switch
def authJson = JsonOutput.toJson([username: username, password: password])
url = "https://" + host + "/login"

// Make the login call
httpClient = Client.open(host, 443)
def response = httpClient.post(url, authJson, ["Content-Type": "application/json"])
if ( !(httpClient.getStatusCode() =~ /200/)) {
    println "Failed to retrieve data " + httpClient.getStatusCode()
    return 1
}

// Extract the session string from the Set-Cookie header:
// keep everything before the first ';', then everything after the '='
String sessionData = httpClient.getHeader("Set-Cookie")
def firstPass = sessionData.split(';')[0]
def session = firstPass.split('=')[1]

// Build the command JSON, URL and header to gather the data
def GetStatus = JsonOutput.toJson([
    jsonrpc: "2.0",
    method : "runCmds",
    params : ["version": 1, cmds: ["show environment power"], format: "json"],
    "id"   : "1"
])
def powerUrl = "https://" + host + "/command-api"
def powerHeader = ["Cookie": session, "Content-Type": "application/json"]

// Now we can retrieve the data
def powerData = httpClient.post(powerUrl, GetStatus, powerHeader)
if ( !(httpClient.getStatusCode() =~ /200/)) {
    println "Failed to retrieve data " + httpClient.getStatusCode()
    return 1
}

String powerContent = httpClient.getResponseBody()
def powerTable = new JsonSlurper().parseText(powerContent)

def encoded_instance_props_array = []
powerTable.result[0].powerSupplies.each { psuNo, powerSupply ->
    def wildvalue = "Power_Supply_" + psuNo
    def wildalias = "${wildvalue}:${powerSupply.modelName}"
    def description = "${wildvalue}/${powerSupply.modelName}"
    def instance_props = [
        "auto.power_supply"    : "Power Supply " + psuNo,
        "auto.power_model_name": powerSupply.modelName
    ]
    encoded_instance_props_array = instance_props.collect { property, value ->
        URLEncoder.encode(property.toString()) + "=" + URLEncoder.encode(value.toString())
    }
    println "${wildvalue}##${wildalias}##${description}####${encoded_instance_props_array.join("&")}"
}
return 0
```

Collection Script:

```groovy
/* Script Name: Campus Power Supply Collection */
import com.santaba.agent.groovyapi.http.*
import groovy.json.JsonOutput
import groovy.json.JsonSlurper

// Gather host properties
username = hostProps.get("ssh.user")
password = hostProps.get("ssh.pass")
host     = hostProps.get("auto.network.names")
model    = hostProps.get("auto.endpoint.model")
vendor   = hostProps.get("auto.endpoint.manufacturer")

// Only need to collect from the device if the vendor is Arista Networks
if (vendor != "Arista Networks") {
    println "Do not need to collect as not an Arista device"
    return 0
}

// Only the campus models (CCS-*) need this; drop all the DC switches
if ( !(model.startsWith("CCS-"))) {
    println "Not a Campus Switch"
    return 0
}

// Build authentication JSON to send to the switch
def authJson = JsonOutput.toJson([username: username, password: password])
url = "https://" + host + "/login"

// Make the login call
httpClient = Client.open(host, 443)
def response = httpClient.post(url, authJson, ["Content-Type": "application/json"])
if ( !(httpClient.getStatusCode() =~ /200/)) {
    println "Failed to retrieve data " + httpClient.getStatusCode()
    return 1
}

// Extract the session string from the Set-Cookie header
String sessionData = httpClient.getHeader("Set-Cookie")
def firstPass = sessionData.split(';')[0]
def session = firstPass.split('=')[1]

// Build the command JSON, URL and header to gather the data
def GetStatus = JsonOutput.toJson([
    jsonrpc: "2.0",
    method : "runCmds",
    params : ["version": 1, cmds: ["show environment power"], format: "json"],
    "id"   : "1"
])
def powerUrl = "https://" + host + "/command-api"
def powerHeader = ["Cookie": session, "Content-Type": "application/json"]

// Now we can retrieve the data
def powerData = httpClient.post(powerUrl, GetStatus, powerHeader)
if ( !(httpClient.getStatusCode() =~ /200/)) {
    println "Failed to retrieve data " + httpClient.getStatusCode()
    return 1
}

String powerContent = httpClient.getResponseBody()
def powerTable = new JsonSlurper().parseText(powerContent)

powerTable.result[0].powerSupplies.each { psuNo, powerSupply ->
    def wildvalue = "Power_Supply_" + psuNo
    def psuRawState = "${powerSupply.state}"
    // 1 = Ok, 0 = anything else
    def psuState = (psuRawState == "ok") ? 1 : 0
    println "${wildvalue}.psuState=${psuState}"
}
return 0
```

Solved · SteveBamford · 2 months ago · Neophyte · 38 Views · 0 likes · 6 Comments
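For anyone who wants to poke at the same eAPI call outside the collector, here's a minimal stdlib Python sketch of the JSON-RPC payload and the PSU state mapping, mirroring the Groovy scripts in the Arista post above. The host, session cookie, and URL in the commented section are placeholders:

```python
import json

def eapi_payload(cmds, req_id="1"):
    """Build the JSON-RPC body Arista eAPI expects for runCmds, version 1."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "runCmds",
        "params": {"version": 1, "cmds": cmds, "format": "json"},
        "id": req_id,
    })

def psu_states(power_result):
    """Map each PSU to 1 (ok) / 0 (anything else), like the collection script."""
    return {
        "Power_Supply_" + psu_no: 1 if psu["state"] == "ok" else 0
        for psu_no, psu in power_result["powerSupplies"].items()
    }

# Live-call sketch (placeholders; requires the session cookie from /login):
#   import urllib.request
#   req = urllib.request.Request(
#       "https://<switch>/command-api",
#       data=eapi_payload(["show environment power"]).encode(),
#       headers={"Content-Type": "application/json", "Cookie": "<session>"})
#   body = json.loads(urllib.request.urlopen(req).read())
#   print(psu_states(body["result"][0]))
```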
Linux Collector setup

Hi fellow monitor champs,

I have a question about installing Linux collectors. Our engineers are complaining that they can't detect the collector when installing it and adding it to the portal. The collector is visible under Collectors, but when we add the machine itself as a resource, it does not work.

Adding Collector - LogicMonitor

Do we need additional steps? And if yes, which exact ones? Thanks in advance!

Solved · Admine · 2 months ago · Neophyte · 56 Views · 0 likes · 3 Comments
RestAPI Alerts access to ExternalTicketID

Has anyone figured out how to get at ##ExternalTicketID## programmatically at all? Not having access to it is driving me to distraction. It's in the DB somewhere, but we can't get to it to help automate our workflows and toolsets. Right now, I'm troubleshooting our ConnectWise integration and have to relate 4637 integration log entries to tickets manually, one by one. This internal variable is only exposed in the Alerts view, which hobbles our ability to build and troubleshoot our integrated systems.

Solved · Cole_McDonald · 3 months ago · Professor · 299 Views · 0 likes · 21 Comments
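Until the field is exposed via REST, one workaround for the ExternalTicketID problem above is to correlate from the ticketing side instead, assuming your integration template embeds ##ALERTID## in the ticket summary. A sketch, where the ticket shape is a simplified stand-in (not the ConnectWise schema) and the id pattern is an assumption you should adjust to whatever your portal actually writes:

```python
import re

# LogicMonitor alert ids typically look like "LMS12345"; adjust this pattern
# to match what your integration template actually puts in the summary.
ALERT_ID_RE = re.compile(r"\b(LM[A-Z]?\d+)\b")

def map_alerts_to_tickets(tickets, pattern=ALERT_ID_RE):
    """Given ticket dicts with 'id' and 'summary', return alert_id -> [ticket ids].

    Two tickets mapping to one alert id is exactly the duplicate case worth
    flagging; zero tickets for an alert means the integration never fired."""
    mapping = {}
    for ticket in tickets:
        for alert_id in pattern.findall(ticket["summary"]):
            mapping.setdefault(alert_id, []).append(ticket["id"])
    return mapping
```

Running this over an export of the ticket board at least turns "4637 log entries, one by one" into a single pass.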
Windows Patching Dashboard

Hi all,

We're looking to build a comprehensive Windows patching dashboard in LogicMonitor to support compliance, vulnerability management, and visibility across our customer environments (we're an MSP). We currently monitor patching via the WinUpdate_PatchStatus DataSource, but we'd like to expand that with more widgets and deeper insights.

Host-level metrics we want:
- Pending updates count
- Failed updates count
- Last successful update time
- Reboot required (true/false)
- Recent installed or pending KBs (if possible)

Dashboard-wide summary widgets:
- Top 10 hosts with most pending updates
- Percentage of Windows servers that are fully patched
- Pie chart: compliant vs pending vs failed
- Compliance trends over time
- Breakdown by group, tag, or customer

Nice to have:
- Table view showing last 5 patches per server
- Alert integration (e.g., warning if failed updates > X)
- Multi-tenant filters using tags like env=prod or customer=x
- Reusable dashboard layout for other clients or environments

What we already have in place:
- WinUpdate_PatchStatus active
- Proper WMI permissions & Collector access
- Basic auto properties like auto.updatecount, auto.lastupdate

Looking for:
- Dashboard JSON exports with any of the above
- Custom DataSources (PowerShell-based?) to enrich with KBs
- General tips on patching visibility and compliance via LogicMonitor

Would appreciate anything you can share; we'll happily post our version once we finalize it!

Thanks in advance!

Admine
LM certified Monitoring Professional

Solved · Admine · 4 months ago · Neophyte · 227 Views · 1 like · 6 Comments
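Not a dashboard export, but the summary-widget math behind the patching dashboard wishlist above is simple enough to prototype before committing to widgets. A sketch of the compliance rollup; the per-host dict shape is hypothetical (your auto properties would feed it):

```python
def compliance_summary(hosts):
    """hosts: list of {'name', 'pending', 'failed'} dicts (hypothetical shape).

    Returns the three buckets behind the pie chart plus percent fully patched."""
    compliant = [h for h in hosts if h["pending"] == 0 and h["failed"] == 0]
    failed = [h for h in hosts if h["failed"] > 0]
    pending = [h for h in hosts if h["pending"] > 0 and h["failed"] == 0]
    pct = round(100.0 * len(compliant) / len(hosts), 1) if hosts else 0.0
    return {"compliant": len(compliant), "pending": len(pending),
            "failed": len(failed), "pct_patched": pct}

def top_pending(hosts, n=10):
    """Top-N hosts by pending update count, for the table or bar widget."""
    return sorted(hosts, key=lambda h: h["pending"], reverse=True)[:n]
```

The same buckets map directly to datapoints on a rolled-up DataSource, so the pie chart and "percent patched" widgets become Big Number / Pie widgets over those values.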
Building Dynamic Groups using Powershell Batchscript DataSources

I'm looking for a way to use the "Description" field I'm collecting when building instances from a batchscript DataSource. The current output I'm using in Active Discovery writes WildAlias, WildValue, Description:

$($proc.processname)##$($svcmatch.displayname)##$toolname

I want $toolname to drive instance grouping. I see mechanisms for using the other two, but altering those doesn't fit the use case I need for these. The support docs for instance grouping and for Active Discovery don't provide quite enough info to figure out what they're instructing without a bunch of experimentation (which is probably how I'll end up sorting this out if someone hasn't done this already). For instance (pun!), this refers to dynamic groups ( dynamicGroup="^(.d.)" )... but does it only evaluate the regex based on the WildAlias? Instance Groups | LogicMonitor

Solved · Cole_McDonald · 6 months ago · Professor · 85 Views · 0 likes · 1 Comment
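One pattern that sidesteps the dynamicGroup regex question above is to emit the tool name as an instance-level property in the discovery output (the `####key=value` syntax from the Active Discovery line format) and group on that property instead. A Python sketch of the line builder; the property name `auto.toolname` is my own choice, not a reserved LogicMonitor name, and the values are made up:

```python
from urllib.parse import quote

def discovery_line(wildvalue, wildalias, description, props):
    """Build an Active Discovery output line in the documented format:
    wildvalue##wildalias##description####url-encoded instance properties."""
    encoded = "&".join(f"{quote(k)}={quote(str(v))}" for k, v in props.items())
    return f"{wildvalue}##{wildalias}##{description}####{encoded}"

line = discovery_line("chrome", "Google Chrome", "Browser tooling",
                      {"auto.toolname": "Browser Tools"})
# -> "chrome##Google Chrome##Browser tooling####auto.toolname=Browser%20Tools"
```

The PowerShell equivalent is the same string concatenation on your existing `$toolname` variable; once each instance carries the property, grouping can key off it rather than off the WildAlias.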
Alert/Alerts API endpoint

Hello everyone,

I am trying to get the list of all alerts through the API. The endpoint I am using is https://companyname.logicmonitor.com/santaba/rest/alert/alerts?v=3

Through this endpoint I am only able to get uncleared alerts. The issue is that I want all alerts, including the ones that have been cleared, for reporting purposes. I would really appreciate it if anyone could help me with this.

Thank you,
Mnaish

Solved · 150 Views · 0 likes · 4 Comments
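For the question above: by default that endpoint returns only uncleared alerts; I believe adding a `filter=cleared:"*"` query parameter returns cleared ones too (worth verifying against the REST API docs for your portal version). A sketch of the parameters plus offset/size pagination, where `fetch` is a stand-in for your authenticated HTTP GET:

```python
def alert_query_params(include_cleared=True, size=250):
    """Query params for /santaba/rest/alert/alerts (v3). The cleared:"*"
    filter is believed to return cleared and uncleared alerts; verify
    against your portal before relying on it."""
    params = {"v": 3, "size": size, "sort": "-startEpoch"}
    if include_cleared:
        params["filter"] = 'cleared:"*"'
    return params

def fetch_all_alerts(fetch, size=250):
    """Page through results with offset/size until a short page arrives.

    `fetch(offset, size)` is a placeholder for your authenticated HTTP GET
    returning the page's items list."""
    alerts, offset = [], 0
    while True:
        page = fetch(offset, size)
        alerts.extend(page)
        if len(page) < size:
            return alerts
        offset += size
```

Pagination matters here because a reporting pull over cleared alerts can easily exceed the per-request size cap.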
Monitoring of Veeam is not reliable

Has anyone successfully monitored Veeam with LM? From what I am seeing, out of the box every single module is unreliable. The scripts run probably 50-60% of the time; otherwise they just fail with the error: "Veeam Powershell snap-in was loaded from an incorrect location." This results in numerous "No data" responses, which leads to issues with alerts not clearing when they should, or not opening when they should. That cascades into our ticketing system and causes further confusion. PowerShell works fine locally on any of the servers in question; scripts that utilize Veeam's PowerShell module that I push from our management tool also work fine. It just seems to be LogicMonitor that has issues with reliability. In these instances, the collector is installed directly on the Veeam host. Veeam forums indicate this may be due to the snap-in installation being corrupted; however, I have manually verified it is all correct on a handful of servers and the issue persists. Plus, it works locally and via my pushed scripts from a different tool. We are monitoring 44 Veeam servers (all on the latest version) and all of them seem to have this reliability issue, making it hard to believe that the installation could be goofed on every single one of them. All LM-supplied default scripts utilize:

Add-PSSnapin -Name VeeamPSSnapIn -WarningAction SilentlyContinue -ErrorAction SilentlyContinue

I tested by manually running the script from a debug window, but changing from Add-PSSnapin to using Import-Module and referencing the Veeam dll file; the intermittent "incorrect location" error persists. In another test, I removed all references to Add-PSSnapin and surprisingly, it still works about 50% of the time. By all findings, I only get this error when running collection scripts from LogicMonitor; I have yet to see it locally. Anybody else noticing the same thing?

Solved · tk_baha · 8 months ago · Neophyte · 239 Views · 0 likes · 1 Comment
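One hypothesis worth testing for the Veeam issue above: "loaded from an incorrect location" is classically a 32-/64-bit mismatch, and a collector task won't necessarily launch the same PowerShell image as your local sessions and RMM pushes do. A 32-bit process on 64-bit Windows gets System32 silently redirected to SysWOW64, so the only route to the 64-bit engine is the `sysnative` alias. This is a hypothesis, not a confirmed root cause; the sketch below just captures the path-selection logic using the standard Windows paths:

```python
def powershell_path(process_is_32bit, os_is_64bit):
    """Pick the powershell.exe that matches the Veeam snap-in's bitness.

    From a 32-bit process on 64-bit Windows, System32 is redirected to
    SysWOW64, so 'sysnative' is the only way to reach 64-bit powershell.exe."""
    if process_is_32bit and os_is_64bit:
        return r"C:\Windows\sysnative\WindowsPowerShell\v1.0\powershell.exe"
    return r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe"
```

If the intermittency tracks which image the collector happens to spawn, wrapping the Veeam scripts so they always relaunch under the sysnative path would be a cheap experiment.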
Least Privilege's script to set permissions on Services for Non Admin account.

With the new security push, we're being asked to use non-admin accounts. If anyone would like it, I have a script that can run on Domain servers and one for Workgroup servers. It iterates through all services and applies the correct SDDL for a least-privilege account. Extract the files to c:/temp, add your list of servers (or, for a workgroup, add the single server to serverlist.txt), and then run RunScript.ps1.

You'll need a local admin account to run it for a Workgroup server.
You'll need a DA account to run it for a list of Domain servers.

PM me if you are interested ;)

Solved · Barb · 10 months ago · Advisor · 243 Views · 3 likes · 5 Comments
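For anyone comparing notes on the least-privilege script above before PMing: the core of this kind of tooling is appending an access-allowed ACE for the service account's SID to each service's DACL. A sketch of the SDDL edit; the rights string CCLCSWRPWPDTLOCRRC (query/start/stop/status rights) is the commonly used grant for this scenario, but verify it against your own security baseline:

```python
def add_service_ace(sddl, sid, rights="CCLCSWRPWPDTLOCRRC"):
    """Insert an access-allowed ACE for `sid` into the DACL of a service SDDL.

    The ACE must land inside the D: (DACL) section, before any S: (SACL)
    section, or the resulting string will be rejected when applied."""
    ace = f"(A;;{rights};;;{sid})"
    if ace in sddl:
        return sddl            # already granted; keep the operation idempotent
    if "S:" in sddl:
        dacl, sacl = sddl.split("S:", 1)
        return f"{dacl}{ace}S:{sacl}"
    return sddl + ace
```

On the PowerShell side this pairs with `sc.exe sdshow <service>` to read each descriptor and `sc.exe sdset <service> <sddl>` to write the edited one back, which is presumably what the per-service loop in the shared script does.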