SDTing via API, one source works, the other does not
So I've been working on an LM/Jira JSM integration lately. The native functions did not work for us, and to be quite honest I cannot remember the details, but suffice to say that I have a working Custom HTTP Delivery with three URLs for the Active, ACK, and Clear alert functions. Now the goal is an automation action on the Jira side to SDT a specific host.

I began by building and testing the call in Postman. Works like a champ: I can add a DeviceSDT with a comment, using the deviceDisplayName, for a period of 1 hour. Exactly what I want in Jira. I built out the automation rule; it's a manual action with no user input. It should add a DeviceSDT to a variable {{deviceHostname}}, with variables that define epoch time in ms for {{now}} and an endDateTime of 1 hour later, again in epoch ms format. I test it and nothing happens; Jira reports back an HTTP 400. Odd, so I run the same thing in Postman and it works. OK, remove the variables and replace them with static content, mirroring exactly what's in the Postman payload. Still a 400 error.

I stumbled across https://webhook.site (which is fabulous for this sort of thing, BTW). Nope: the call/payload from Jira is 100% identical to Postman's. Same Bearer token, and mostly the same headers: Content-Type, Accept-Encoding, Accept, Connection, and User-Agent. Postman does add a couple that are unique to it, but I have not found any doc on what headers (if any) other than Authorization are required for this SDT call. So does anyone here have any suggestions? I do have a case open with LM support as well as Atlassian. Thanks!
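For anyone comparing what the two clients send, here is a minimal sketch (in Python, function name is mine) of the one-time DeviceSDT payload the automation should produce. It assumes, per my reading of the LM REST docs, that startDateTime/endDateTime are epoch milliseconds; that is worth double-checking since it is a common source of 400s:

```python
import json
import time

def build_device_sdt(display_name, duration_minutes=60, now_ms=None):
    """Build a one-time DeviceSDT payload for LM's SDT endpoint.
    Assumes (hedged) that start/end are epoch *milliseconds*."""
    start = now_ms if now_ms is not None else int(time.time() * 1000)
    return {
        "type": "DeviceSDT",
        "sdtType": 1,  # 1 = one-time
        "deviceDisplayName": display_name,
        "comment": f"SDT from Jira automation for {display_name}",
        "startDateTime": start,
        "endDateTime": start + duration_minutes * 60 * 1000,
    }

# Example: fixed start time so the output is reproducible.
print(json.dumps(build_device_sdt("host1.domain.net", now_ms=1714753638000), indent=2))
```

Dumping both the Jira-rendered body and the Postman body through a builder like this makes character-level differences (quoting, ms vs. seconds, string vs. integer fields) easy to spot.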
Customizing Alert Message for Integrations - Question.

Hi. Has anyone figured out a way to build your own alert message for a datapoint? I know you can use the 'Customized' option in a DataPoint's Alert Message, or modify a message template, but those do not affect the ##MESSAGE## tag used in Integrations. My workaround was to create separate integrations for the specific DataSources I care about and drop the ##MESSAGE## tag altogether. Am I stupid, or is there no easy way to do it?
Collectors: Open Telemetry vs. LM Collector

Hi, we're looking to create a networking "appliance" that we can send to small locations, like a retail store. Among other things, this appliance will allow us to do some gateway-to-gateway tunneling for user support, but in addition we want to include key devices in monitoring as well. Originally, we were thinking that we would do an LM Collector install in a container on the device (in this case, we were experimenting with a MikroTik). The problem is that the LM Collector supports Linux on 64-bit AMD installs, but not 64-bit ARM installs, which many (most?) appliance types would use. OpenTelemetry, however, can be used in a 64-bit ARM container. On the surface, it appeared that we could maybe use an OpenTelemetry collector for basic stuff instead of the LM Collector. But the closest I get to a comparison of what the OpenTelemetry collector can do, compared to the LM Collector, is that the OpenTelemetry collector collects "telemetry" and the LM Collector collects "metrics"... which are, of course, entirely useless descriptions if you don't already know what they are doing. :)

I understand that they are not interchangeable, but in the end, I'm trying to determine a few things:

1. If not interchangeable, what should each be used for?
2. Can the OpenTelemetry collector be used for the most basic monitoring?
3. Any other ideas for a Linux 64-bit ARM based collector?

Since every time I ask anything close to this question to support, I get pointed to the links about the OpenTelemetry collector, let me pre-empt that (to avoid it in the answer) and say that we've looked at these links:

https://www.logicmonitor.com/support/adding-an-opentelemetry-collector
https://www.logicmonitor.com/support/configurations-for-opentelemetry-collector-container-installation
https://www.logicmonitor.com/support/opentelemetry-collector-for-logicmonitor-overview
https://www.logicmonitor.com/support/opentelemetry-collector-installation-from-contrib-distribution
https://www.logicmonitor.com/support/opentelemetry-collector-installation-from-logicmonitor-wizard

and they don't answer the questions above, as far as we can tell. :)
Website Downtimes via API

Hi, I'd like to use the API to gather information about the availability of my websites. I have to gather data for monthly and yearly SLA calculations, which feature the exact downtimes of each website. With these requirements, I don't think the graph data can deliver accurate enough data, so I'm looking into the /website/websites/{webservice_id}/checkpoints/{checkpoint_id}/data API (Getting Data | LogicMonitor). The API works fine, but obviously getting all the status codes for every check (every minute) for a whole month, or even a year, takes a long time and produces a lot of data that I have to crunch. For me, it would be sufficient to get only the data for downtimes, i.e. every status report which is not 1.0. Does anyone know if the API has such a feature? I already ask for only the datapoints I need, but this would still require some additional filtering:

```python
params = {
    'start': start,
    'end': end,
    'format': 'json',
    'datapoints': 'overallStatus',
}
```
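If it turns out there is no server-side filter for this, a client-side pass over the returned timestamp/value arrays is cheap compared to the download itself. A sketch with made-up sample data (function name is mine):

```python
def downtime_samples(timestamps, status_values, up_value=1.0):
    """Keep only the (timestamp, value) pairs whose overallStatus is not
    the 'up' value, i.e. the downtime samples."""
    return [(ts, v) for ts, v in zip(timestamps, status_values) if v != up_value]

# Example with fabricated data: one bad sample at 1700000120.
samples = downtime_samples(
    [1700000060, 1700000120, 1700000180],
    [1.0, 0.0, 1.0],
)
print(samples)  # → [(1700000120, 0.0)]
```

Paging through the month in day-sized start/end windows and filtering each response this way keeps the data you actually store down to the outage records only.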
Monitoring Top 10 processes on Windows and Linux platforms

I'm exploring monitoring the top 10 processes by resource usage on Linux and Windows systems. I'm aware there are modules available to monitor specific processes, but I was curious whether anyone has had the same requirement and created a module. My thinking is to create instances for each process in the top 10, but right now I am still busy brainstorming...
Correctly adding one-time SDT via API?

So I'm trying to add a one-time device SDT in Postman, and I keep getting an error return of:

endDateTime should be a future time

I've tried incrementing the end time value by 100ms, then by a few thousand, and lastly I jumped probably a year ahead; nothing works. Here is the payload:

```json
{
    "type": "DeviceSDT",
    "sdtType": "1",
    "comment": "test sdt from Postman",
    "deviceDisplayName": "host1.domain.net",
    "startDateTime": "1714753638",
    "endDateTime": "1724755900"
}
```

The example payloads, while in a Python script, are the same as mine. I just don't get it.
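One thing worth checking (hedged, since I can't see the account): as I read the LM REST docs, startDateTime/endDateTime are epoch milliseconds, and the 10-digit values in the payload above look like epoch seconds. Read as milliseconds, 1724755900 falls in January 1970, which would explain "endDateTime should be a future time". A quick sanity check:

```python
import datetime

# Interpreting the payload's endDateTime as milliseconds lands in 1970:
end_value = 1724755900
as_ms = datetime.datetime.fromtimestamp(end_value / 1000, tz=datetime.timezone.utc)
print(as_ms)  # 1970-01-20 23:05:55.900000+00:00

def to_epoch_ms(dt):
    """Convert an aware datetime to an epoch-millisecond integer."""
    return int(dt.timestamp() * 1000)

# What the end time probably should look like (13 digits):
print(to_epoch_ms(datetime.datetime(2024, 8, 27, 12, 0, tzinfo=datetime.timezone.utc)))
```

If that is the cause, multiplying both values by 1000 (and sending them as integers rather than strings) should get past the validation.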
REST API 503 errors

Hello everyone. We have been using various PowerShell scripts for years and have never had any problems with them. Now, since Monday of last week, we are getting random 503 "server is busy" HTTP errors on some GET queries such as Devices, Device DataSources, Dashboards, etc. There is no recognizable pattern to the errors. Does this problem also occur for others using script automation via the REST API (it occurs in both v2 and v3)? I have submitted a support request to LM, but progress is slow... Have a nice day everyone. Dorian
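Until the root cause is found, a common way to ride out intermittent 503s is to retry with exponential backoff and jitter. A language-neutral sketch in Python (a PowerShell version would wrap Invoke-RestMethod the same way; all names here are mine, and a RuntimeError stands in for the HTTP 503 your client raises):

```python
import random
import time

def with_retries(call, max_attempts=5, base_delay=1.0):
    """Retry a zero-argument callable that raises on a 503, doubling the
    delay each attempt and adding a little jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except RuntimeError:  # stand-in for an HTTP 503 exception
            if attempt == max_attempts:
                raise
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.25)
            time.sleep(delay)

# Demo: fails twice with a fake 503, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("503 server busy")
    return "ok"

print(with_retries(flaky, base_delay=0.01))  # → ok
```

This doesn't fix the server-side issue, but it keeps long-running automation from dying on a single transient response.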
Training

Guys, there is a training option once we log in to our LogicMonitor account. When I click on it, it says to create a new training account for my email ID. I wanted to know whether the training account is free or chargeable. We have an enterprise package with LogicMonitor. We just want to go through the various training videos; we don't want to sit the certifications. So will there be any cost? Thanks.
Nice little hint! with LM Config

In light of the recent Palo CVE and the need to check logs and monitor for IOCs, we used LM Config with an expect script (grep is not available over the API). It would only pull the logs if they matched an IOC, and alert us! A simple task, but a handy use case for LM Config. Palo will not let you send those logs to a remote syslog, otherwise we would have gone the SIEM path.
Info and Overview resource tabs

I'd like to know what is planned to make the Overview and Info tabs for resources easier to use and more useful. My experience with the Info tab is that there is just too much information on it for my day-to-day tasks. It's all useful to have, but the details I need most are spread out through a very long list, and I honestly don't remember what their property names are most of the time. Is there something in the works to make this easier? If I could pin the properties that are most important to me to the top of the screen, that'd be helpful. Being able to put them in a table on the Overview tab would also be very helpful. Right now, I'm not really using the Overview tab much, since I can't put what would be useful on it. To me these seem like they should be device-level dashboards. Are there any plans to add more functionality to this tab? I'd need more tabular data instead of just graphs (important details from the Info tab, instance-level info from various datasources, etc.).
Linux collectors for Windows servers

A year or so ago there was talk of a Linux-based collector in testing that could monitor Windows servers too; is this still planned? We would like to have a minimum number of Windows servers, but as long as we have any number of Windows servers, we will always need to also have Windows-based Collectors.
Issues with Set-LMWebsiteGroup

I'm working with the lm-powershell-module. I'm able to get all my website information and the group IDs, but when I try to update the group information so I can bulk-move websites, nothing seems to happen. I have tried many different ways, such as:

```powershell
Set-LMWebsiteGroup -Name 'CNVPABACUSS252/Servicing/Loan' -ParentGroupName 'Production/Internal/DevOps/Individual Nodes/Abacus Sync'
Set-LMWebsiteGroup -Name $website.name -ParentGroupName 'Production/Internal/DevOps/Individual Nodes/Abacus Sync' -ParentGroupId 77
Set-LMWebsiteGroup -Id 3526 -ParentGroupId 77
```

I'm sure my syntax is not correct somewhere, but I'm drawing a blank on what it is. Thanks for your help.
SNMP Interface Status=No Such Instance currently exists at this OID

Alright, so I'm a little surprised LM doesn't seem to handle this natively. When a device has a logical interface configured on it and you then delete the logical interface, LM is expecting a response (I think) of 6, which means the component is removed, or something along those lines. However, it appears the network appliance community thinks the right answer is this one: "No Such Instance currently exists at this OID." I tried this on multiple Juniper routers and on a Fortigate, and they all agree this is the right way to do it. Has anyone else seen this problem, and is there a reasonable way to not alarm when interfaces are deleted? I tried re-discovering the device. I don't really want to delete it and re-add it...
DataSource with PowerShell script

Hi everyone, I would like to monitor AD Sync. The collector is on a Linux server, and the application is on a Windows one. I am trying to create a DataSource based on a PowerShell script, but I found out I would have to set the collector's config with the PowerShell execution policy "unrestricted". I don't want to do that for security reasons, and I'm not sure what else I can do to get that info. I would appreciate it if someone could give me some ideas. Thanks.
Which PropertySource tags categories with "snmp"?

I've got some devices that are definitely "snmp" devices, but LM isn't tagging them with "snmp" in system.categories. I can manually do it, or set it on a folder for them to inherit, but I am in a situation where I'll need to mix them with some other devices, and I don't want the others to inherit that, just because I want to leave them untouched. This raised the question as to where "snmp" gets set in system.categories... I expected to find it in the device basicInfo or something similar, but I don't think it gets set there. I've looked through tons of PropertySource code and I'm just not finding it. Could someone fast-track me? I'll probably need to write a custom one that tags just these devices somehow, but I need to see what the typical logic is for applying it. Thanks!
How do I configure an alert, for a specific Instance name?

For the life of me I can't figure out how to create the type of alert our org needs. I have a datasource named "Event Log Errors v3". It generates a ton of different instances, and it's applied to any IsWindows() systems, of which we have over 10,000. I need to set an alarm for a specific instance name: if that instance name is seen with a value over X over X, then I want an alarm triggered, on any system. I don't want to have to configure this a bazillion times (at the individual resource level); the alert should be uniform for all systems. I can't set an alarm on the DataSource itself, as there is one datapoint right now that uses ##WildValue##.COUNT as its key, so setting a threshold there would set the same threshold for all the hundreds of instances created by this DataSource and blow up our help desk. I can't set it at the dynamic group level either, because there I can pick the DataSource but can't seem to specify a specific instance name from within it; it gives me the generic "count" entry. What am I missing here? Any suggestions? I chatted with support but ended up with a slew of doc page links that send me down rabbit holes or confuse me even more. This should be pretty simple, one would think...
ESX Host Services?

I'm looking for a way to monitor ESX host services. Right now I have a script that I believed to be working, but after spot-checking a few hosts it seems to not be totally accurate. Or I just don't fully understand the backend on a host as far as the services go versus the data I am collecting. My datasource is reporting services to be Running on hosts that clearly state certain services are Stopped.

Active Discovery:

```groovy
import com.santaba.agent.groovyapi.esx.ESX
import com.vmware.vim25.mo.InventoryNavigator

def hostname = hostProps.get("system.hostname")
def user = hostProps.get("esx.user")
def pass = hostProps.get("esx.pass")
def custom_url = hostProps.get("esx.url")
def display_name = hostProps.get("system.displayname")

// Connect to the ESX service
def url = custom_url ?: "https://${hostname}/sdk"
def svc = new ESX()
svc.open(url, user, pass, 10 * 1000) // Timeout in 10 seconds

// Get the service instance and root folder
def si = svc.getServiceInstance()
def rootFolder = si.getRootFolder()

// Search for managed entities of type 'HostSystem'
def hosts = new InventoryNavigator(rootFolder).searchManagedEntities("HostSystem")

// Iterate over each host system
hosts.each { host ->
    // Get the services for the current host
    def services = host.getConfig().getService()
    // Iterate over each service
    services.each { service ->
        // Get the label and key of each service
        def labels = service?.service?.label
        def keys = service?.service?.key
        // If label is an array, print each element on its own line
        if (labels instanceof List) {
            labels.eachWithIndex { singleLabel, index ->
                println "${keys[index]}##${keys[index]}##${singleLabel}"
            }
        } else {
            // If label is not an array, print it normally
            //println "Service Label: ${labels}, Key: ${keys}"
        }
    }
}

// Close the ESX service connection
svc.close()
```

Data Collection:

```groovy
import com.santaba.agent.groovyapi.esx.ESX
import com.vmware.vim25.mo.InventoryNavigator

def hostname = hostProps.get("system.hostname")
def user = hostProps.get("esx.user")
def pass = hostProps.get("esx.pass")
def custom_url = hostProps.get("esx.url")
def display_name = hostProps.get("system.displayname")

// Connect to the ESX service
def url = custom_url ?: "https://${hostname}/sdk"
def svc = new ESX()
svc.open(url, user, pass, 10 * 1000) // Timeout in 10 seconds

// Get the service instance and root folder
def si = svc.getServiceInstance()
def rootFolder = si.getRootFolder()

// Search for managed entities of type 'HostSystem'
def hosts = new InventoryNavigator(rootFolder).searchManagedEntities("HostSystem")

// Iterate over each host system
hosts.each { host ->
    // Get the services for the current host
    def services = host.getConfig().getService()
    // Iterate over each service
    services.each { service ->
        // Get the label and key of each service
        def labels = service?.service?.label
        def keys = service?.service?.key
        // NOTE: the original line below is the likely accuracy bug. With the
        // spread, service?.service?.running returns a *list* of booleans, and
        // any non-empty list is truthy in Groovy, so status was always "1"
        // (Running). Compute the status per service index instead, as below.
        //def status = service?.service?.running ? "1" : "0"
        // If label is an array, print each element on its own line
        if (labels instanceof List) {
            labels.eachWithIndex { singleLabel, index ->
                def wildvalue = "${keys[index]}"
                def status = service.service[index].running ? "1" : "0"
                println "${wildvalue}.status=${status}"
            }
        } else {
            // If label is not an array, print it normally
            //println "Service Label: ${labels}, Key: ${keys}"
        }
    }
}

// Close the ESX service connection
svc.close()
```
What not to do when developing code

FYI, this shows a level of ignorance when it comes to troubleshooting and the usability of return codes:

```groovy
if (!organization) {
    println "Organization ID missing; device property meraki.api.org must be set."
    return 1
}
if (!network) {
    println "Network ID missing; device property meraki.api.network must be set."
    return 1
}
if (!serial) {
    println "Serial missing; device property auto.endpoint.serial_number or meraki.serial must be set."
    return 1
}
if (!productType) {
    println "Serial missing; device property auto.meraki.productype or meraki.productType must be set."
    return 1
}
```

Each different return statement could have a different non-zero return code (yes, it's an integer!). By returning a different value for each problem, the troubleshooter can identify exactly what the issue is just by looking at the return code. This suggests the developer is laboring under the delusion that a return code can only be 0 or 1.

While we're at it, this also evidences a lack of coding-performance knowledge:

```groovy
def category = it.category
def clientDescription = it.clientDescription
def clientId = it.clientId
def clientMac = it.clientMac
def description = it.description
def deviceName = it.deviceName
def deviceSerial = it.deviceSerial
def eventData = it.eventData.toString()
def networkId = it.networkId
def occurredAt = it.occurredAt
def type = it.type

def events = [:]
events.put("message", description)
events.put("category", category)
events.put("clientDescription", clientDescription)
events.put("clientId", clientId)
events.put("clientMac", clientMac)
events.put("description", description)
events.put("deviceName", deviceName)
events.put("deviceSerial", deviceSerial)
events.put("eventData", eventData)
events.put("networkId", networkId)
events.put("occurredAt", occurredAt)
events.put("type", type)
```

There's no point in defining a variable and storing it in memory only to use it once. If you're going to use it twice, it makes sense, but only barely. You're just wasting memory.

Instead, do this:

```groovy
def events = [
    "message"          : it.description,
    "category"         : it.category,
    "clientDescription": it.clientDescription,
    "clientId"         : it.clientId,
    "clientMac"        : it.clientMac,
    "description"      : it.description,
    "deviceName"       : it.deviceName,
    "deviceSerial"     : it.deviceSerial,
    "eventData"        : it.eventData.toString(),
    "networkId"        : it.networkId,
    "occurredAt"       : it.occurredAt,
    "type"             : it.type
]
```
Why are websites missing so many features and treated differently than computers?

Every time I try to do something with websites I just get angry. I can't change settings at the group level like I can with resources. I can't easily change thresholds or anything. The Info tab is blank, so I can't see any detail about a website. If I try to use the API, the formatting is all different, so I can't get any tree/group information to filter results. Everything about how websites differ from resources just sucks. Why is this?

Is there any way to pull a list of all the websites under a certain folder in the tree? I can't find any way to do that. I can't even pull a list of all of them and then filter by a Group field, because it only shows the immediate group and nothing about the higher-level groups it's in. I need to check all the sites under a particular top-level folder to make sure the alerting is the same. There appears to be absolutely no way to do this without manually going to each site and checking it by hand. Thanks.
Remote DB connections via Oracle SQLnet

While LogicMonitor seems determined that it needs to "connect" to a remote host to gather data, in the case of Oracle DB up/down testing it's much easier to just perform a remote connection via SQLnet. Has anyone been able to build remote database login calls from the collectors? We created a Groovy script from a bash script to connect via Oracle SQLnet on the Collector, but haven't been able to integrate that into the Collector server. The idea being: we'll add these scripts for each database on each server using our Oracle LDAP DB names, which resolve fine in our SQLnet script. Do a simple connect, verify, and return OK / not OK. This will help us resolve the issues with LogicMonitor failing to understand Oracle RAC and PDB databases. Thanks, Ben
Trial Account

I'm not sure what's happening; maybe you're not interested in new customers anymore, but I've been trying to set up a trial for almost a week now with no response from LM whatsoever. I filled out and submitted the form last week, and I called your sales line several times this week, but nobody reached out to me and all calls went to voicemail. It seems bizarre to me that the trial option is locked behind sales while the sales process itself seems to be completely dysfunctional. Any help here would be greatly appreciated before I completely give up and move on to something else. Thank you!
Container-based Collector - DNS issues in 35.001

We have found that our 35.001 collector, based on logicmonitor/collector:latest, has an /etc/resolv.conf using Google (8.8.8.8 / 8.8.4.4), as well as a domain setting. This seems to be set at install time, based on the /etc/resolv.conf file timestamps. This prevents intra-cluster DNS lookups. Has anyone else experienced this?
Monitoring for file updates

We have a Windows share that contains a number of document templates. I would like to be alerted if/when any of these templates is updated, and get an alert containing the name of the updated template. (It is reasonable for these templates to be updated, but it can cause issues if this is not expected.) Is this possible? Is there an existing datasource that could do this?
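I'm not aware of an out-of-the-box module for exactly this, but the core check is simple to script: each poll, report any template whose modification time has advanced past the last poll. A sketch in Python (the suffix list and function name are mine; the same logic translates to PowerShell or Groovy running on a collector with access to the share):

```python
import os

def modified_since(root, cutoff_epoch, suffixes=(".dotx", ".docx", ".xltx")):
    """Return paths under 'root' whose mtime is newer than 'cutoff_epoch'.
    'suffixes' narrows the scan to template-like files (illustrative only)."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.lower().endswith(suffixes):
                path = os.path.join(dirpath, name)
                if os.path.getmtime(path) > cutoff_epoch:
                    hits.append(path)
    return hits
```

Persisting the cutoff between polls (e.g. as a property or a state file) and alerting whenever the returned list is non-empty gives you the "template X was updated" alert with the file name in the message.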
storage monitoring

We have Hitachi Vantara and IBM enterprise storage solutions. LM does not have datasources for either manufacturer. Does anyone else have these storage solutions and monitoring in place for them with LogicMonitor? I also don't see anything for them in the community datasources. Competitors like SolarWinds support both OOTB.
Process monitoring (Linux)

I have a requirement to monitor Linux processes, and am really hoping that someone has done something similar. Initially I thought the datasource "LinuxNewProcesses_byProperty" would meet the requirements, as you just need to specify the process(es) to monitor. However, I need a solution that would allow me to almost concatenate processes: a resource could exist in multiple groups, where group one may configure monitoring for process x, but another group may configure monitoring for process y. Effectively it shouldn't overwrite the property field, just add to it. The one way I can think of doing this is multiple datasources, but that seems very, very clunky. Alternatively, a combination of group properties and using the API to build the contents of another property... To make it just a little bit more interesting, the processes MUST be running on the servers, so if they are not found during discovery, an alert must be raised.
LM Tokens can they be broken down

Hi, we have a ServiceNow integration and use ##EXTERNALTICKETID## in our alert views and dashboards. However, I believe tokens are clunky and not particularly granular. For example, ##EXTERNALTICKETID## also contains the integration name as part of the token, something we do not want; all we need is the TicketId as per the JSON payload. In addition, it is not possible to rename the column header in alerts and dashboards, which makes things look untidy. Would it be possible to have something like:

##EXTERNALTICKETID.TICKET## with the ability to have a DISPLAYNAME option
##EXTERNALTICKETID.INTEGRATION## with the ability to have a DISPLAYNAME option
Creating subgroup with a dynamic query

I am trying to create more than one subgroup at once. For example, I created a group to filter all devices by site. I tried to use just one query to create a subgroup for each site, but I couldn't. Is it possible, or should I use the SDK? Thanks!
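As far as I know, a single dynamic-group query can't fan out into multiple subgroups by itself; each subgroup needs its own appliesTo expression, so scripting the creation is the usual route. A hedged sketch that builds one group definition per site (the property name "location", the site list, and the function name are all my assumptions; each dict would then be POSTed to the device-groups API endpoint):

```python
def site_subgroups(parent_id, sites, prop="location"):
    """Build one dynamic-group payload per site, each with an appliesTo
    expression matching devices whose 'prop' equals the site name."""
    return [
        {
            "name": site,
            "parentId": parent_id,
            "appliesTo": f'{prop} == "{site}"',
        }
        for site in sites
    ]

# Example: two hypothetical sites under parent group 123.
for g in site_subgroups(123, ["NYC", "LON"]):
    print(g["name"], "->", g["appliesTo"])
```

Looping the POSTs over the output (via the SDK or plain REST calls) gets you all subgroups in one run, and re-running is idempotent if you check for existing names first.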
Module Toolbox AppliesTo IDE

I have a new DS to build, and I decided, for the first time, to try to build it in the Module Toolbox instead of the UIv3 editor.

I guess "Resource Label" is the display name of the module. That's confusing, because it's not the name of the resource this will be on. I can see that it's the label the module will show up with under the resource, but it's not the label of the resource; it's the label of the module.

Technical notes: I guess this now supports markup. Which markup? Hypertext markup? Extensible markup? Why not markdown (or does it mean markdown when it says markup)?

My big problem is with the "IDE". Only developers would think the word "IDE" makes more sense than "wizard". Most of them are Java/Groovy developers who actually need an IDE to develop in a language as overly complicated as Java/Groovy. This thing is not an IDE but a field picker. Functionality that used to exist is no longer there. When I'm developing a datasource, I usually limit the first runs to one device. I opened the "IDE" hoping to find a way to search for the device and limit it to that device. I could do that really easily in the old UI; I don't even know where to start with this new "IDE".

The "IDE" does not auto-complete properties. Even if I start typing out "system.display", it doesn't suggest a complete property name. Once I get "system.displayname == " into the AppliesTo, it doesn't suggest display names to choose from. I know LM knows how to do this, because they do it with the LM Logs query window.

Why is there a big help section in the middle of this "IDE" describing what the "true()" convenience function does? I didn't select it, and I'm not using it in my AppliesTo.

Why is the "IDE" so big? Why can't it pop out in a drawer from the left side? I was worried that the cancel button might cancel the progress I've made on the DS so far. Speaking of the cancel button, why is there no "you'll lose the progress on your AppliesTo if you cancel; are you sure you want to cancel?" warning?

Why are we still choosing the collection mechanism type (batchscript in this case) before getting to the collection settings? Why is the discovery group method selected before instances even exist? Did someone actually say, "it makes more sense to go through the effort of moving this above the discovery arguments"?

Why are the results for testing active discovery still not shown in groups?

I hit save before putting in a name/resource label. It marked them as red, but didn't scroll up to them; it looked like nothing happened when I hit save.
DisplayName max length? Docs for other details?

I am trying to find out what the maximum displayName length is for a resource in LogicMonitor. I went to the v3 API and picked POST for /device/devices (for making a new device), hoping that under the Model subtab I would see some details. I DO in fact see that displayName is a string, but the blue text on "string" isn't a hyperlink. So I'm wondering what the max length is for displayName, but also, more generally, is there something in the documentation I've missed that would explicitly cover these things? I am a bit UI-challenged at times, so apologies if I've missed something obvious. Usually the more obvious something is, the more my brain ignores it, due to decades of subconscious self-training to ignore advertising. :) Thanks!
Dear Community, We've been endeavouring to monitor unread emails using the Email Ingest Eventsource. However, I encountered a Java error ("Unable to resolve the class") during implementation. Upon investigating, I learned that JAVAX isn't supported, as mentioned by a member of the LM Team on the Community page. It appears we might be utilizing an older version of the email ingest that relies on "jakarta". We made minimal adjustments, primarily switching from email.user to apc.email.user and updating the password. Besides these modifications, no other changes were made. We did this changes because we are testing this on APC UPS Devices. Upon executing the script, it successfully identified the emails, and we applied the event source to all devices. However, we've encountered issues with triggering alerts and displaying data under the event source. Interestingly, upon application, the script reads the emails and marks them as read in the email inbox. Below is the code we're currently using, and while it does generate output, we're seeking assistance in implementing alert triggers. Any guidance on this matter would be greatly appreciated. Thank you /******************************************************************************* © 2007-2020 - LogicMonitor, Inc. All rights reserved. 
*******************************************************************************/ import jakarta.mail.* import jakarta.mail.internet.* import jakarta.mail.search.* import groovy.json.JsonBuilder import groovy.json.JsonOutput import groovy.json.JsonSlurper def debug = false def deleteProcessedEmails = false def imapHost = hostProps.get("apc.imap.host") // Required def imapType = hostProps.get("apc.imap.type") // Optional def emailUser = hostProps.get("apc.email.user") // Required def emailPass = hostProps.get("apc.email.pass") // Required def msgSubject = hostProps.get("apc.email.subject") // Optional, can include a propertyname (example: "##system.hostname##") for dynamic replacement def deleteProcessedEmailProp = hostProps.get("email.deleteProcessed") // Optional def emailInbox = hostProps.get("apc.email.folder") ?: "Inbox" // Example: "Inbox/Errors" def hostName = hostProps.get("system.displayname") // Default our email subject if one wasn't specified via property... if (!msgSubject) { msgSubject = "Incident Notification" } // See if a property was specified in our subject line... def dynamicProp = msgSubject =~ ".*#{2}(.+)#{2}.*"; if (dynamicProp.size() > 0) { def dynamicPropName = dynamicProp[0][1]; def dynamicPropValue = "" // Allow simply specifying ##hostname## to replace with system.hostname... if (dynamicPropName.toLowerCase() == "hostname") { dynamicPropValue = hostName // Otherwise, grab the property named (example: "Alert Test ##aws.instanceid##")... } else { dynamicPropValue = hostProps.get(dynamicPropName) } if (dynamicPropValue != "") { msgSubject = msgSubject.replaceAll("##" + dynamicPropName + "##", dynamicPropValue) } } // Default is to NOT delete processed emails (will just mark them as read)... 
if (deleteProcessedEmailProp && deleteProcessedEmailProp.toLowerCase() == "true") { deleteProcessedEmails = true } def protocolDebug = true if (debug) { println "imapHost=${imapHost}" println "imapType=${imapType}" println "emailUser=${emailUser}" println "emailFolder=${emailInbox}" println "msgSubject=${msgSubject}" println "deleteProcessedEmails=${deleteProcessedEmails}" // println "emailAddr=${emailAddr}" } def eventList = []; // Do we have all required params?... if (!imapHost || !emailUser || !emailPass) { debugOutput(debug, "errorCode=-1") return 0 } protoDebugFile = new PrintStream('../logs/' + hostName + '-APCEmailAlert-protocol.log') scriptDebugFile = new File('../logs/' + hostName + '-APCEmailAlert-debug.log') scriptDebugFile.delete() /* * IMAP Setup */ imapProps = System.getProperties() if (imapType == "SSL") { store_type = "imaps" imapProps.setProperty("mail.imap.socketFactory.class", "javax.net.ssl.SSLSocketFactory") imapProps.setProperty("mail.imap.socketFactory.fallback", "false") imapProps.setProperty("mail.imap.socketFactory.port", "993") imapProps.setProperty("mail.imaps.ssl.trust", "*") imapProps.setProperty("mail.imaps.ssl.checkserveridentity", "false") } else if (imapType == "TLS") { store_type = "imap" imapProps.setProperty("mail.imap.starttls.enable", "true") imapProps.setProperty("mail.imap.socketFactory.port", "143") } else { store_type = "imap" imapProps.setProperty("mail.imap.socketFactory.port", "143") imapProps.setProperty("mail.imap.socketFactory.class", "javax.net.SocketFactory") } imapProps.setProperty("mail.store.protocol", store_type) imapSession = Session.getInstance(imapProps, null) imapSession.setDebug(protocolDebug) imapSession.setDebugOut(protoDebugFile) imap = imapSession.getStore(store_type) /* * Connect to IMAP */ def tries = 5 while (tries > 0) { tries-- try { // scriptDebugFile.append("Attempting IMAP Connection\n") debugOutput(debug, "Attempting IMAP Connection") imap.connect(imapHost, emailUser, emailPass) tries = -1 } 
    catch (Exception e) {
        error_string = "IMAP Connection Error: " + e.message + "\n"
        // scriptDebugFile.append(error_string)
        debugOutput(debug, error_string)
        println(error_string)
        sleep(1000)
    }
}

// Were we able to connect to imap server?...
if (tries == -1) {
    // Yes. Log some debugging...
    // scriptDebugFile.append("IMAP Connection Successful\n")
    debugOutput(debug, "IMAP Connection Successful")
} else {
    // No. Exit...
    debugOutput(debug, "errorCode=3")
    protoDebugFile.close()
    return 0
}

/*
 * Search inbox for the message we created earlier
 */
// inbox = imap.getFolder("Inbox")
inbox = imap.getFolder(emailInbox)

// Search for unread messages containing our subject (Note: our search term is converted to lowercase above)...
subjectTerm = new SubjectTerm(msgSubject)
unreadTerm = new FlagTerm(new Flags(Flags.Flag.SEEN), false)
search_term = new AndTerm(subjectTerm, unreadTerm)
foundMessage = false

try {
    // Open the inbox and do the search...
    inbox.open(Folder.READ_WRITE)
    // scriptDebugFile.append("Initiating new inbox search on subject \"${search_term}\"\n")
    debugOutput(debug, "Initiating new inbox search on subject \"${search_term}\"")
    searchResults = inbox.search(search_term)
    // Message messages[] = inbox.search(new FlagTerm(new Flags(Flags.Flag.SEEN), false));
    foundCount = searchResults.size()
    debugOutput(debug, "foundCount: ${foundCount}")

    // Did we find any messages?...
    if (foundCount == 0) {
        // No -- sleep for a bit...
        // scriptDebugFile.append(" - search found no messages -- sleeping\n")
        debugOutput(debug, " - search found no messages -- sleeping")
        sleep(1000)
    } else {
        // Yes. Iterate through the search results array...
        // scriptDebugFile.append(" - search found ${foundCount} messages\n")
        debugOutput(debug, " - search found ${foundCount} messages")
        // Iterate through results...
        searchResults.each { imap_message ->
            foundSubject = imap_message.getSubject()
            foundSubject = foundSubject.replaceAll(",", "")
            foundSubject = foundSubject.replaceAll("\\(", "")
            foundSubject = foundSubject.replaceAll("\\)", "")

            // Does the subject of this message match the subject of the message we sent earlier?...
            if (foundSubject =~ msgSubject) {
                // Yes -- exit the loop...
                tries = -1
                foundMessage = true
                // scriptDebugFile.append(" - located target message \"${foundSubject}\"\n")
                debugOutput(debug, " - located target message \"${foundSubject}\"")

                emailBody = getText(imap_message).trim()
                emailBody = emailBody.replaceAll("\\r|\\n", "")
                emailBody = emailBody.replaceAll("\\<.*?\\>", "")
                emailBody = emailBody.replaceAll("font-family:[^;']*(;)", "")
                emailBody = emailBody.replaceAll("font-size:[^;']*(;)", "")
                emailBody = emailBody.trim().replaceAll("\\s+", " ")

                // From epoch to human readable...
                String date = new java.text.SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss zzz ").format(new Date());

                eventList.push([
                    "happenedOn": date.toString(),
                    "severity"  : "Critical",
                    "message"   : foundSubject,
                    "source"    : hostName
                ])
            }

            // Flag any messages that matched this search for deletion...
            if (deleteProcessedEmails) {
                imap_message.setFlag(Flags.Flag.DELETED, true)
                // scriptDebugFile.append(" - delete flag set on \"${foundSubject}\"\n")
                debugOutput(debug, " - delete flag set on \"${foundSubject}\"")
            }
            imap_message.setFlag(Flags.Flag.SEEN, true)
        }

        // Expunge the deleted messages...
        if (deleteProcessedEmails) {
            // scriptDebugFile.append(" - expunging deleted message(s)\n")
            debugOutput(debug, " - expunging deleted message(s)")
            inbox.expunge()
        }
    }

    // Close the mailbox...
    inbox.close(true)
    // scriptDebugFile.append(" - inbox closed\n")
    debugOutput(debug, " - inbox closed")
} catch (Exception e) {
    // Something blew up...
    error_string = "IMAP Inbox Search Error: " + e.message
    // scriptDebugFile.append(error_string + "\n")
    debugOutput(debug, error_string)
    println(error_string)
    // Because something blew up, let's make sure our inbox is closed...
    inbox.close(true)
    sleep(1000)
}

// Did we find the message?...
if (!foundMessage) {
    // No. Close and exit...
    debugOutput(debug, "errorCode=4")
    protoDebugFile.close()
    return 0
} else {
    def events = [events: eventList];
    println(new JsonBuilder(events).toPrettyString());
    // def jsonOut = JsonOutput.toJson(eventsMap)
    // println JsonOutput.prettyPrint(jsonOut)
}

// Close the IMAP connection...
// scriptDebugFile.append("Closing IMAP Connection\n")
debugOutput(debug, "Closing IMAP Connection")
imap.close()

// Everything went well...
debugOutput(debug, "errorCode=0")
return 0

// Capture/log debug output if flagged to do so...
def debugOutput(Boolean debug, String message) {
    if (debug) {
        println message
        scriptDebugFile.append("${message}\n")
    }
}

// Credit to Mike Suding for the following function...
def getText(Part p) {
    if (p.isMimeType("text/*")) {
        String s = (String) p.getContent();
        textIsHtml = p.isMimeType("text/html");
        return s;
    }
    if (p.isMimeType("multipart/alternative")) {
        // Prefer html over plain text...
        Multipart mp = (Multipart) p.getContent();
        String text = null;
        for (int i = 0; i < mp.getCount(); i++) {
            Part bp = mp.getBodyPart(i);
            if (bp.isMimeType("text/plain")) {
                if (text == null) text = getText(bp);
                continue;
            } else if (bp.isMimeType("text/html")) {
                String s = getText(bp);
                if (s != null) return s;
            } else {
                return getText(bp);
            }
        }
        return text;
    } else if (p.isMimeType("multipart/*")) {
        Multipart mp = (Multipart) p.getContent();
        for (int i = 0; i < mp.getCount(); i++) {
            String s = getText(mp.getBodyPart(i));
            if (s != null) return s;
        }
    }
    return "";
}

Azure Stack HCI resources don't have storage, memory, disk or cluster metrics
We have a customer with an Azure Stack HCI cluster deployed a few months ago. For those not familiar, this is basically a customised Windows Server Core environment that runs Hyper-V VMs and some Azure-specific workloads on-premises. The virtualised workloads are all added as resources using a locally-deployed collector (on the Windows jumphost, if it matters) and they all show CPU, Disks, Interfaces, Processes... everything you'd expect for Windows hosts. We've added the two nodes as Resources, but we don't see any detailed metrics - only Host Status (DNS), HTTP and Ping. I also added the FQDN for the cluster management point / VNN, and it has the same minimal detail as the individual cluster nodes.

There are quite a few valid/correct properties recorded for the systems - some examples:

system.domain (customer AD domain)
system.ips (all IP addresses for all interfaces)
system.model (correctly identifies vendor and server model, presumably from WMI)
system.sysinfo ("Microsoft Azure Stack HCI")
system.sysname (hostname)
system.systemtype ("x64-based PC")

Is there something else I need to do to have this system monitored? I'm rushing because we nearly had a CSV run out of space - we thought it was monitored, and we were wrong.

Seeing Failed To Parse ServiceNow Integration
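This thread reports that the LogicMonitor-to-ServiceNow integration sometimes substitutes "-2" for ##EXTERNALTICKETID##, after which the clear event cannot auto-close the incident. A hedged, minimal sketch of a defensive guard (the helper name and the treatment of negative values as LM parse-failure sentinels are my assumptions, not documented LogicMonitor behavior): validate the external ticket ID before acting on a Cleared alert.

```python
# Hypothetical guard for the LM -> ServiceNow integration discussed in this
# thread. Assumption: a negative numeric value such as "-2" means LM failed
# to parse the ticket number from the ServiceNow create response, so there
# is no real incident ID to close against.

def is_usable_ticket_id(external_ticket_id: str) -> bool:
    """Return True only when the ID is non-empty and not an error sentinel."""
    ticket = (external_ticket_id or "").strip()
    if not ticket:
        return False
    # Negative codes like "-2" indicate a failed parse, not a real incident.
    if ticket.lstrip("-").isdigit() and int(ticket) < 0:
        return False
    return True

print(is_usable_ticket_id("INC0012345"))  # True
print(is_usable_ticket_id("-2"))          # False
```

A guard like this could live in a ServiceNow inbound script or middleware, so bursty alert storms that trigger the failed parse do not leave incidents half-closed.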
We are sometimes seeing “-2” as the ##EXTERNALTICKETID## when sending alerts to ServiceNow. When this happens and the alert clears, it does not auto-close the ticket in SNOW. Has anyone seen this, or have any idea how to fix it? It seems to happen when there is an event where multiple alerts generate and send to SNOW at once.

Datasource that outputs text and an error code?
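For the "text plus error code" question in this thread: LM scripted datasources parse key=value lines from script output into numeric datapoints, but free-form text cannot itself be a datapoint. One pattern is to split the script's combined output, print the code as a datapoint line, and route the message elsewhere (for example into an instance description or a log). A sketch of the split, with illustrative names and an assumed "message then trailing code" output convention:

```python
# Hedged sketch: assumes the Linux script emits its variable message text
# followed by a trailing numeric error code. The function names and output
# convention here are illustrative, not an LM API.

def split_script_output(raw: str) -> tuple[int, str]:
    """Treat the last whitespace-separated token as the error code,
    everything before it as the variable message text."""
    parts = raw.strip().rsplit(None, 1)
    message, code = (parts[0], parts[1]) if len(parts) == 2 else ("", parts[0])
    return int(code), message

code, message = split_script_output("backup dir /var/backups is missing 2")
print(f"errorCode={code}")  # the key=value line LM would ingest as a datapoint
```

The message half can then drive the alert text (e.g. via a token in the alert message) while the code half carries the thresholds.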
I have a requirement to create datasources that output text and an error code. We have scripts that run on Linux which output text (the actual text varies depending on what the problem is, and may contain other variable text such as a directory name) and an error code. The error code is easy enough to have as a datapoint, but I am struggling to find a solution for the variable text. I initially thought of creating instances, but they would only be added/deleted at specific intervals. What is the best way to handle variable text as an output AND values as datapoints with thresholds?

Is anyone monitoring a parent process and its child processes on Windows platforms?
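For the parent/child process question in this thread's title: on Windows, WMI's Win32_Process class exposes both ProcessId and ParentProcessId, so a scripted multi-instance datasource could group children under each parent. The grouping logic is shown here as a pure function over (pid, ppid, name) rows; wiring it to real WMI output is left out, and the function name is illustrative.

```python
# Sketch of grouping child processes under parents, given rows shaped like
# the ProcessId / ParentProcessId / Name columns of Win32_Process.

from collections import defaultdict

def children_by_parent(rows):
    """rows: iterable of (pid, ppid, name). Returns {parent_pid: [child names]}."""
    by_parent = defaultdict(list)
    for pid, ppid, name in rows:
        by_parent[ppid].append(name)
    return dict(by_parent)

rows = [(100, 1, "services.exe"), (200, 100, "svchost.exe"), (201, 100, "svchost.exe")]
print(children_by_parent(rows)[100])  # child process names under PID 100
```

From this mapping, either modelling (child name as instance, parent as description) or the simpler child-count-per-parent approach falls out with one more loop.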
I initially thought about doing this as a multi-instance datasource where each instance would be the name of the child process, and the description would be the name of the parent process. But sadly I could not get that working. I even tried what I thought was the simpler approach of having the parent process as the instance name and just a count of child processes - still no go.

Sort and/or filter a Table Widget
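The table widget itself will not persist a sort or apply an "uptime under 7 days" filter, but the same view can be computed outside the widget. A sketch of the filter as a plain function; in practice the device-to-uptime mapping would come from the LM REST API or a report export, and that plumbing (plus the function name) is an assumption:

```python
# The "recently rebooted" view this thread asks for: keep only devices with
# uptime at or below a threshold, sorted so the freshest reboot comes first.

def recently_rebooted(uptime_days_by_device: dict, max_days: float = 7.0):
    """Return (device, uptime_days) pairs under the threshold, lowest first."""
    recent = {d: u for d, u in uptime_days_by_device.items() if u <= max_days}
    return sorted(recent.items(), key=lambda item: item[1])

servers = {"web01": 120.4, "db02": 1.2, "app03": 6.9}
print(recently_rebooted(servers))  # db02 first, then app03; web01 excluded
```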
Hi there, I have a table widget which has the resource name as the row and system uptime (days) as the column. The idea is that my team can easily identify those servers which have been recently rebooted. Although the widget shows all the devices, I need it to sort by uptime so the most recently rebooted is shown at the top, without having to keep clicking on the column name. But when I change the dashboard and revert back to the one with this table widget, the sort column changes back to the default. The other thing that would be useful is to apply a filter so only servers with an uptime of 7 days or less would be shown. I cannot see any obvious way to do this. Any ideas? Thanks.

Bypass or update logic to clear alert (LogicMonitor to ServiceNow Integration)
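One way to frame this thread's "don't clobber the engineer's resolution notes" problem: build the Cleared-status payload conditionally, so nothing is sent when the incident is already resolved. A hedged sketch - the state values and close_notes field follow common ServiceNow incident columns but should be verified against your instance, and the "already resolved" input would really come from a GET on the incident first:

```python
# Assumption: ServiceNow incident state "6" = Resolved, "7" = Closed (typical
# defaults, but instance-specific). Field and function names are illustrative.

RESOLVED_STATES = {"6", "7"}

def build_clear_payload(current_state: str) -> dict:
    """Only push close details when the incident is not already resolved."""
    if current_state in RESOLVED_STATES:
        return {}  # engineer already resolved it -- send nothing to overwrite
    return {"state": "6",
            "close_notes": "Auto-resolved: alert cleared in LogicMonitor"}

print(build_clear_payload("6"))  # {} -> skip the update entirely
print(build_clear_payload("2"))  # full close payload for an in-progress incident
```

Since LM's HTTP Delivery payload is static, this conditional logic would live on the ServiceNow side (an inbound REST script or business rule) rather than in LM itself.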
Hello. I recently executed a use case where the following steps occurred:

Alert triggers in LogicMonitor and creates an incident in ServiceNow
The assigned team who works the incident assigns it to the appropriate team/team member
The team/team member remediates the alert, adds their comments to the incident, and resolves the incident
Once LogicMonitor sees the alert has been remediated, it makes the HTTP REST call to resolve the incident

What's happening currently is that although the user resolves the incident, LogicMonitor will still proceed with resolving it based on the alert status "Cleared". When that happens, the predefined values from the payload replace the information provided by the user. I understand that if I delete that alert status under HTTP Delivery, this would ultimately resolve my issue. Another alternative would be to remove the key-value pair from the payload that updates the Resolution Notes section of the incident. Is there a way for LogicMonitor to recognize that the incident already has a resolved status, and not proceed to update it, without me having to remove the Cleared status? Thanks.

Report/dashboard which contains groups that do not contain a specific resource?
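The "groups that do not contain a resource starting with sw0" question in this thread's title reduces to a small set operation. The group-to-resources mapping shown here is illustrative; in practice it would be assembled via the LM REST API (listing device groups and each group's devices) or a grouped report export:

```python
# Core logic for this thread's question: which groups have no resource whose
# name starts with a given prefix? Function name is illustrative.

def groups_missing_prefix(resources_by_group: dict, prefix: str = "sw0"):
    return sorted(g for g, names in resources_by_group.items()
                  if not any(n.startswith(prefix) for n in names))

groups = {"SiteA": ["sw01-core", "rtr01"], "SiteB": ["rtr02"], "SiteC": []}
print(groups_missing_prefix(groups))  # SiteB and SiteC lack an sw0* resource
```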
Is it possible to create a report/dashboard that contains groups that do not contain a specific resource? For example, I need to find all groups that do not contain resources starting with sw0.

cisco wlc 9800 monitoring
Hello community, any experience with monitoring the 9800 and its APs? I am using another tool and considering LM, since I used it in the past. Are the modules for Cisco wireless any good? I am interested in monitoring mesh link and AP radio interface metrics. The 9800 does not respond to many queries from the Cisco MIB files; I assume they just repacked old MIB files, and now there is no difference between the 9800 and old AireOS... Let me know if LM overcomes this. Cheers, M

Cisco IPSec Aggregate Tunnels- False Alerts?
Hi, we have a new deployment of LogicMonitor and we are seeing what I believe to be false alerts on our Cisco IPSec Aggregate Tunnels for a Cisco router. The alert only shows for some tunnels and not others. The tunnels with the critical alert have value=2, and the threshold is 1 1 1.

Alert Message: LMD6833854 critical - Router_Name Cisco IPSec Aggregate Tunnels-Tunnel x.x.x.x -> x.x.x.x TunnelStatus ID: LMD6833854 Cisco IPSec Tunnel x.x.x.x -> x.x.x.x on Router_Name seems to have dropped or restarted, placing the tunnel into critical state. This started at 2024-04-05 09:08:39 EDT, -- or 0h 41m ago.

But if we look at the raw data on that tunnel, there are no InDropPkts or OutDropPkts, and there is throughput. Any thoughts on why these critical alerts are showing up when there seems to be no issue with the router or the specific tunnel? Thanks

Data for groovy script in complex datapoint
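This thread is about needing a base-10 logarithm where the complex-datapoint expression only exposes a generic log function. The change-of-base identity, log10(x) = ln(x) / ln(10), often removes the need for a separate Groovy collection pass entirely: if the expression's log is natural log, divide by ln(10) ≈ 2.302585; if it already returns base 10, no conversion is needed (worth testing with a known value like 1000, which should yield 3). Demonstrated with Python's math module:

```python
# Change-of-base identity for this thread's log10 question. Whether LM's
# expression log() is natural or base-10 must be verified empirically; this
# shows the conversion for the natural-log case.

import math

def log10_via_change_of_base(x: float) -> float:
    return math.log(x) / math.log(10)

print(log10_via_change_of_base(1000))  # ~3.0, within floating-point error
```

If a Groovy script is still preferred (e.g. to reuse values already collected in "Collector Attributes"), Math.log10 does the same thing directly.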
Hi, I have a question regarding using data in a complex datapoint. Long story short - I need to use a base-10 logarithm for some calculation. In a complex datapoint I can use only the log function, so my idea is to use a Groovy script, where I can use Math.log10. The DS uses SCRIPT as the discovery method, and another Groovy script collects attributes/data. I would like to use data which is displayed in normal datapoints (##WILDVALUE##.something). Is there any smart way to process/get that data in a complex-datapoint Groovy script? (Without polling the device again, as that is already done in "Collector Attributes".)

Tips or Tweaks for controlling when a daily ConfigSource runs?
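One workaround discussed in this thread - run the ConfigSource hourly but only act during a chosen hour - can be reduced to a one-line gate at the top of the collection script. Sketched here in Python (a Groovy ConfigSource would do the same with Calendar or LocalTime); the function name and configurable target hour are illustrative:

```python
# Hourly-schedule gate for this thread's "daily ConfigSource at a chosen
# time" problem: the script runs every hour but collects only at 22:xx.

from datetime import datetime

def should_collect(now: datetime, target_hour: int = 22) -> bool:
    """True only during the one hour of the day the config should be pulled."""
    return now.hour == target_hour

print(should_collect(datetime(2024, 4, 5, 22, 47)))  # True
print(should_collect(datetime(2024, 4, 5, 10, 5)))   # False
```

Making the target hour a parameter (or a resource property) means the window can move later without editing the script logic itself.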
I have a ConfigSource that, under the hood, is really doing some other task - a coding task - and it used to conveniently run every day at 22:47 or something like that, but because of reasons, now it runs at 10:05 every morning. This is some ancient script someone else wrote in Python, and it runs on a collector. I'll probably convert it to Groovy and have it run hourly, but only act when the hour digit on the time is 22, or something wacky. But short of that, it did have me wonder: are there any tricky ways to change when a daily ConfigSource runs? From what I can tell, they run at the time they were made - or, I guess, at the last time the collector went down because of a domain password problem :) *cough*. I could just try a meaningless edit on the ConfigSource and see if that does it. I don't think bouncing the collector service will do it, because on rare occasions the service restarts at a time other than its normal time, and that doesn't change CS run times. Just wondering if anyone has any tips or tricks? I'll probably just rewrite this to run hourly but act only on a specific hour of the day; I'm just fishing for ideas. I really wish we could schedule ConfigSources like we can reports. Thanks!

Dynamic Topology Mapping - ISIS only
Is there a way to create a dynamic map depicting only the IGP? By default, it appears LM wants to create a map with both the BGP adjacencies and the IGP adjacencies shown. Basically I want to keep LM from utilizing the discovered BGP adjacencies to draw the map. Is there a way to do this?

Tesla Motors LogicModule Suite
I previously published a datasource for Tesla Motors Battery Statistics - which presents compelling vehicle battery and charging information that is fetched from the Tesla REST API. To complement those efforts, I've written a few other Tesla Motors LogicModules that return a variety of different, but still interesting, datapoints - including a ConfigSource that displays configuration information about the vehicle itself (are the doors locked? Is the sunroof open?)

The following is a list of all the Tesla Motors LogicModules now available (see the above-linked post for additional info on how this all works):

DataSource 'Battery Statistics' tracks battery and charger performance and health metrics (see Tesla Motors Battery Statistics, previously posted to the Exchange but included here for the sake of keeping everything together). The datasource name is TeslaMotors_BatteryStatistics and has lmLocator DXLLKY.
DataSource 'Climate Statistics' tracks inside and outside temperatures, as well as driver and passenger temperature settings. The datasource name is TeslaMotors_ClimateStatistics and has lmLocator YZRWXC.
ConfigSource 'Car Configuration' collects textual configuration data, cleans it up and makes it easily readable (screenshot attached). The configsource name is TeslaMotors_Configuration and has lmLocator GRY9AE.
DataSource 'Location Data' tracks compass heading, latitude and longitude, and power. The datasource name is TeslaMotors_LocationData and has lmLocator AYWYWA.
DataSource 'Odometer Reading' does exactly what you might expect. The datasource name is TeslaMotors_BatteryStatistics and has lmLocator HHJRD6.

Tesla Motors Battery Statistics
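One complex datapoint this Tesla series mentions, "Average Energy Cost per Mile", is simple arithmetic over collected datapoints, written out here as a sketch. The inputs (energy added per charging session, odometer delta, tariff as a user-supplied property) and the function name are illustrative:

```python
# Sketch of the cost-per-mile complex datapoint: energy taken on, times the
# tariff, divided by distance covered. Guard against the zero-distance case
# that occurs between charging sessions.

def cost_per_mile(kwh_added: float, price_per_kwh: float, miles_driven: float) -> float:
    if miles_driven <= 0:
        return 0.0  # avoid divide-by-zero when the car hasn't moved
    return (kwh_added * price_per_kwh) / miles_driven

print(round(cost_per_mile(kwh_added=50, price_per_kwh=0.14, miles_driven=175), 4))  # 0.04
```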
UPDATE: Check out the full suite of Tesla LogicModules in a more recent community post - including statistics for battery/charging, climate, location, odometer, and configuration.

Tesla Motors provides owners of their vehicles with a web portal - and an HTTP REST API - that can be used to retrieve vehicle performance and configuration data (in JSON format). Using the embedded Groovy scripting functionality of a LogicMonitor DataSource, we can query the Tesla API and bring that data back into LogicMonitor. Once we have the data, we can display it in meaningful ways (a dashboard for our dashboards), or perform calculations to create complex datapoints - like tracking "Average Energy Cost per Mile." Assuming that you're not getting the electricity for free at a Tesla Supercharger station, that is...

A little Googling will assist with the API key retrieval - there are a couple of scripts and/or cURL commands that can be used to facilitate this process. The datasource name is TeslaMotors_BatteryStatistics and has lmLocator DXLLKY. Screenshot of some included graphs:

REST API Device SLA Widget
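For this thread's device-SLA-widget question, one debugging approach is to replay exactly what the UI sends: create the widget in the UI, GET its JSON back via the API, and diff that against the payload your automation builds. A hedged sketch of the payload side - the displayPercentageBar key comes from inspecting the UI's own request, not from documented API schema, so treat the whole payload shape as an assumption to verify:

```python
# Hypothetical payload builder for a device SLA widget. The "deviceSLA" type
# and "displayPercentageBar" key mirror what the UI appears to send; confirm
# both by fetching a UI-created widget's JSON and comparing field for field.

import json

def build_sla_widget_config(title: str, show_bar: bool = True) -> str:
    widget = {
        "type": "deviceSLA",
        "name": title,
        "displayPercentageBar": show_bar,  # undocumented; mirrors the UI call
    }
    return json.dumps(widget)

payload = build_sla_widget_config("Monthly SLA", show_bar=True)
print("displayPercentageBar" in payload)  # True
```

If the API silently drops the field, that diff will also reveal whether the UI nests it inside another object (e.g. a per-widget config sub-structure) rather than at the top level.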
Hi Portal, I am currently automating our SLA reporting and have come across the following issue. When I create a device SLA widget via the REST API, I do not have the option "displayPercentageBar". When I inspect the request in Chrome, however, I can see this setting. Yet when I set it via the REST API, nothing happens. Has anyone ever had to deal with this? Greetings, Dorian

Do you use the New UIv4? Any lost functionality?
I’m curious to what degree other power users here actually use the new UI. I took it as a challenge this week to start using it, since it seems LM is committed to it. I’m not a fan, but I’m not sure how much of that is just because it’s new. I struggle to find some things. It took me way too long to find out where the collector debug option was hidden. And I still can’t find the “instances” count in the columns of information where it used to be, and I don’t see it as a column I can add. So I’m just wondering: do you all like the new UI? Do you use it? Are there important places where you have lost functionality or information? Thanks!