Issue in auto refreshing EC2 properties
I'm encountering an issue with the EC2 instances in my Windows Auto Scaling group (ASG). Whenever a new instance is added, certain properties from a dynamic group—most notably “wmi.user” and “wmi.pass”—are applied. However, by the time the instance is registered in LogicMonitor, WMI isn't immediately available because some automations are still configuring the WMI credentials on the host. A few minutes later, WMI starts working on the host, and I can successfully test it using wbemtest from the local collector. However, the LogicMonitor portal still shows that WMI is not responding. Interestingly, when I use the collector's debug console and explicitly specify the WMI credentials, it pulls the information successfully; if I don't specify the credentials manually, it fails. The only way to resolve this is by manually running "refresh properties", after which WMI starts working without any other changes. I'm trying to figure out whether there's a way to automatically force a properties refresh every 15 minutes to ensure everything works as expected without manual intervention.
(Naveenkumar, 2 days ago)
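If a scheduled refresh turns out to be the only workable option, one approach is a small cron job that calls the LogicMonitor REST API for the affected devices. The sketch below is a minimal Python illustration, assuming a v3 bearer-token API credential and assuming the scheduleAutoDiscovery endpoint also causes properties to be re-evaluated (worth confirming with support before relying on it); the portal name, token, filter syntax, and group ID are placeholders, not values from the original post.

# Hypothetical sketch: periodically nudge LogicMonitor to re-discover a set of
# devices. Endpoint behaviour and filter syntax should be verified against the
# LM REST API docs for your portal version.
import requests

PORTAL = "yourportal"            # placeholder
BEARER_TOKEN = "lmb_xxxxxxxx"    # placeholder LM API bearer token
BASE = f"https://{PORTAL}.logicmonitor.com/santaba/rest"
HEADERS = {
    "Authorization": f"Bearer {BEARER_TOKEN}",
    "X-Version": "3",
    "Content-Type": "application/json",
}

def devices_in_group(group_id: int) -> list[int]:
    """Return device IDs in a resource group (assumed filter syntax)."""
    resp = requests.get(
        f"{BASE}/device/devices",
        headers=HEADERS,
        params={"filter": f"hostGroupIds~{group_id}", "size": 1000, "fields": "id"},
        timeout=30,
    )
    resp.raise_for_status()
    return [d["id"] for d in resp.json().get("items", [])]

def refresh(device_id: int) -> None:
    """Ask LM to re-run discovery for one device (assumed endpoint)."""
    resp = requests.post(
        f"{BASE}/device/devices/{device_id}/scheduleAutoDiscovery",
        headers=HEADERS, timeout=30,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    for dev_id in devices_in_group(123):   # 123 = hypothetical ASG resource group ID
        refresh(dev_id)

Run from cron (or a scheduled task) every 15 minutes, this would approximate the manual "refresh properties" workaround until the underlying timing issue is addressed.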
User defined "host dead" status

There are two ideas that I need help maturing before talking to LM about them. Both have to do with how LM uses server-side logic to declare a device dead. We need the ability to designate what metric declares a device as dead/undead, and when.

What: We have several customers who have devices, usually UPSs, at remote sites all connected to a Meraki switch. The collector is not at the remote site, but connects over a VPN tunnel, which may be torn down due to inactivity or could be flaky for any other reason. When the VPN tunnel goes down, the devices alert that they have gone down. We have added monitoring to the tunnel and also get alerted when it goes down. However, we'd like to prevent the host down alerts when the only problem is that the VPN tunnel is down. RCA (or recently renamed DAM?) would likely solve this, but defining that mapping manually or through a TopologySource is not scalable (plus visibility into the RCA logic has never been good). Luckily, Meraki has an API where we can query the status of devices connected to the switch. During a tunnel outage, this API data shows that the device is still connected to the switch and online. Since it's a UPS, that's sufficient. We've built the DataSource required to monitor the devices via the Meraki API. However, since it's a scripted DataSource, it doesn't reset the idleInterval. (Insert link here to a really good training or support doc explaining how idleInterval works.) Since none of the qualifying DataSources are working on the UPS during the VPN outage, the idleInterval eventually climbs high enough to trigger a host down alert. When the host is declared down, other alerts, like the alerts from this new Meraki Client Device Status DS, are suppressed.

How can this be remedied? First, we need the supported and documented ability to use the successful execution of a collection script to reset the idleInterval. I know this is possible today, as I've seen it in several of LM's modules, but I've never seen official documentation on how to do it. LM is probably worried someone will add it to all their scripts, which wouldn't be the right thing to do, but I know I'm not the only one who needs it.

Second, I need control over the server-side logic that determines when the idleInterval declares a device dead. In the example above, we get a slew of host down alerts when the VPN tunnel goes down. However, usually within a few minutes, the VPN gets re-established, the collector re-establishes connectivity to the device, and the idleInterval resets, thus clearing the alerts. With a normal datapoint, I'd just lengthen the alert trigger interval for the idleInterval datapoint. This would mean that the device would have to be down for 15 minutes, 20 minutes, however long I want, before generating the alert. What's great is that we can now do that at the group level, so I can target these devices specifically and not alert on them unless they've been down for a truly unacceptable amount of time (i.e. not just a VPN going down and coming right back up). However, the idleInterval datapoint is an odd one. Two things happen. One happens when you surpass the threshold defined on the datapoint. I can't remember what the default is, but in my portal that's > 300, or 5 minutes. At 6 minutes, server-side logic, which has been inspecting the idleInterval, decides that the device is down, which has implications for suppressing other alerts on the device.

As far as I can tell, lengthening the alert trigger interval on the idleInterval datapoint has no effect if the window would exceed the 6 minutes that the server-side logic uses to declare the device down.

What do we need? We need the ability to set the amount of time that the server-side logic uses to declare the device down, and we need to be able to set that for some devices and not others. So we need to be able to set it globally, at the group level, and at the device level. Preferably this could be set via the alert trigger interval on the idleInterval datapoint, since this mechanism already exists globally and at the group and device levels. Knowing that this could be a confusing way of defining it (since it's measured in poll cycles, not minutes/seconds), it could alternatively be done as a special property on the group(s)/device(s). I'm interested in hearing your thoughts, even if you are an LMer.
(Anonymous, 2 days ago)
Bug Report: Editing Alert Rules broken

To reproduce:
1. Create an alert rule at priority 1 that ONLY filters on a resource property: "change.me" = "true", sending to NoEscalationChain.
2. Edit it, and change the name of the filtered property to "i.am.changed", again with the value "true".
3. Observe that the UI message lets you know that the change has succeeded.

Expected behaviour: Editing the Alert Rule again, the property name should be "i.am.changed".
Actual behaviour: The property name is still "change.me".
(David_Bond, 2 days ago)
Dynamic Dashboard Filters for Text Widget

My apologies if this has been covered already, but I have been searching the forum and haven't seen this topic. I am building a text widget that performs an API call to display data from another system. This is important as I am attempting to put all information for a facility into a single pane of glass. Is there a way to pass the Filter value at the top of the dashboard to the text widget so I can use it in my JavaScript call?
(billbianco, 2 days ago)
API v3 Python Patch on user 403 forbidden

I have some old Python code (that I didn't write) that uses v1 of the API and does a PATCH with a super minimal patch data block, just the value needed. Personally, I have Groovy code that does some patching using the retrieved user in a Map object; I make a minimal map with just the stuff v3 requires that I set (which v1 didn't), like roles and password and so on, and I managed to get that working with Groovy and v3. But I'm in a circumstance where I have to use Python and PATCH a user, and for the life of me, I keep getting a 403 Forbidden error. I've found several examples online, and I believe I've got everything set up correctly; the code is mostly similar to the old working v1 code, except it has the changes that v3 needs. But I get a 403 error (Forbidden). I know the API token has rights to update the user (though it is still attached to a user with an administrator role, but I don't think that's the issue; I have Groovy code using the same API token). I hate to ask people to look at code, but is there anything obviously wrong here that I'm missing for Python and v3?

import time
import hmac
import hashlib
import base64

http_verb = 'PATCH'
resource_path = '/setting/admins/' + str(user_id)
patch_data = '{"roles":[{"name":"myrole"}],"email":"what@whatever.what","username":"blah","password":"meh","apionly": true}'
queryParam = '?changePassword=false&validationOnly=false'
url = 'https://' + Company + '.logicmonitor.com/santaba/rest' + resource_path + queryParam

# LMv1 signature: verb + epoch + body + resource path (query string excluded)
epoch = str(int(time.time() * 1000))
requestVars = http_verb + epoch + patch_data + resource_path
hmac1 = hmac.new(AccessKey.encode(), msg=requestVars.encode(), digestmod=hashlib.sha256).hexdigest()
signature = base64.b64encode(hmac1.encode())
auth = 'LMv1 ' + AccessId + ':' + signature.decode() + ':' + epoch

headers = {'Content-Type': 'application/json', 'Authorization': auth, 'X-Version': '3'}
response_patch = lm_session.patch(url, data=patch_data, headers=headers)
return response_patch

Thanks!
(Lewis_Beard, 2 days ago, Solved)
Creating a Custom Module based on OIDs?

Hi, we have some IBM MQ devices that we want to monitor. We found some MQ items in the LM Repository, but those are for monitoring the MQ application. We also need to monitor the device for CPU and memory. I was given the following information and told we should see about monitoring them. I'm not sure if I can modify something to get started or if we'd have to create something from scratch. I'm not familiar with creating anything like this and am hoping someone can point me to something similar I can use and modify, or help me figure out how to create this from scratch. Thanks!

Below are the numeric OIDs and their respective full details. They show the CPU usage and also the load at three different intervals: 1 min, 5 min, and 15 min.

.1.3.6.1.4.1.14685.4.1.521.1.0 = Gauge32: 2 %
.1.3.6.1.4.1.14685.4.1.521.2.0 = STRING: 0.14 %
.1.3.6.1.4.1.14685.4.1.521.3.0 = STRING: 0.31 %
.1.3.6.1.4.1.14685.4.1.521.4.0 = STRING: 0.34 %
IBM-MQ-APPLIANCE-STATUS-MIB::mqStatusSystemCpuStatusCpuUsage.0 = Gauge32: 3 %
IBM-MQ-APPLIANCE-STATUS-MIB::mqStatusSystemCpuStatusCpuLoadAvg1.0 = STRING: 0.27 %
IBM-MQ-APPLIANCE-STATUS-MIB::mqStatusSystemCpuStatusCpuLoadAvg5.0 = STRING: 0.34 %
IBM-MQ-APPLIANCE-STATUS-MIB::mqStatusSystemCpuStatusCpuLoadAvg15.0 = STRING: 0.34 %

And below are the filesystem monitors. Can we set alerts if Free is less than 50% of Total for all three different readings?
- Total encrypted / Free encrypted
- Total temporary / Free temporary
- Total internal / Free internal

.1.3.6.1.4.1.14685.4.1.29.1.0 = Gauge32: 23223 Mbytes
.1.3.6.1.4.1.14685.4.1.29.2.0 = Gauge32: 29857 Mbytes
.1.3.6.1.4.1.14685.4.1.29.5.0 = Gauge32: 4036 Mbytes
.1.3.6.1.4.1.14685.4.1.29.6.0 = Gauge32: 4096 Mbytes
.1.3.6.1.4.1.14685.4.1.29.7.0 = Gauge32: 3071 Mbytes
.1.3.6.1.4.1.14685.4.1.29.8.0 = Gauge32: 3072 Mbytes
IBM-MQ-APPLIANCE-STATUS-MIB::mqStatusFilesystemStatusFreeEncrypted.0 = Gauge32: 23223 Mbytes
IBM-MQ-APPLIANCE-STATUS-MIB::mqStatusFilesystemStatusTotalEncrypted.0 = Gauge32: 29857 Mbytes
IBM-MQ-APPLIANCE-STATUS-MIB::mqStatusFilesystemStatusFreeTemporary.0 = Gauge32: 4036 Mbytes
IBM-MQ-APPLIANCE-STATUS-MIB::mqStatusFilesystemStatusTotalTemporary.0 = Gauge32: 4096 Mbytes
IBM-MQ-APPLIANCE-STATUS-MIB::mqStatusFilesystemStatusFreeInternal.0 = Gauge32: 3071 Mbytes
IBM-MQ-APPLIANCE-STATUS-MIB::mqStatusFilesystemStatusTotalInternal.0 = Gauge32: 3072 Mbytes
(Kelemvor, 3 days ago)
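In LogicMonitor, the request above would normally be met with a standard SNMP-collection DataSource (one datapoint per OID, plus a complex datapoint for percent free). The short Python script below is not an LM module; it is only a sanity check of the "alert when Free is less than 50% of Total" arithmetic against the sample values quoted in the post, so the threshold logic is unambiguous before it is copied into a datapoint expression.

# Quick sanity check of the "alert when Free < 50% of Total" rule against the
# sample SNMP values above. In LogicMonitor this arithmetic would live in a
# complex datapoint; this standalone script only verifies the logic.
filesystems = {
    # name: (free_mb, total_mb) -- values copied from the walk above
    "encrypted": (23223, 29857),
    "temporary": (4036, 4096),
    "internal":  (3071, 3072),
}

THRESHOLD_PCT = 50.0  # alert when free space drops below half of total

for name, (free_mb, total_mb) in filesystems.items():
    pct_free = 100.0 * free_mb / total_mb
    state = "ALERT" if pct_free < THRESHOLD_PCT else "ok"
    print(f"{name}: {free_mb}/{total_mb} MB free ({pct_free:.1f}%) -> {state}")

With the sample values, all three filesystems are well above 50% free, so none of them would alert.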
Any way to automate tasks via alerts?

Hi, I know LM doesn't support taking any action when there's an alert. However, I'm wondering if anyone has any neat ideas on how to accomplish something by way of some other automation program. Maybe Power Automate? Here's what I'm thinking. I have a module that monitors a service to see if it's running or not. If it's not, it generates an alert. This seems like the most basic thing to automate, because all I'd want to do is run the Start-Service command to start it back up. So, I'm wondering if I can have the alert send an email to a certain address. That address could then watch for a certain email to arrive, parse out the server name and service name, and feed them into Start-Service -ComputerName xxx -Name yyy. Has anyone looked into doing anything like that, and did you have any success? Thanks.
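The parsing step in that workflow is simple enough to sketch. The snippet below is a minimal Python illustration, assuming the alert email subject has been templated (for example in the email integration's message template) to something like "ServiceDown host=SERVER01 service=Spooler"; the subject format, host, and service names are made up for the example, and the actual restart would still be issued by whatever automation tool (Power Automate, a runbook, etc.) picks the values up.

# Minimal sketch: extract the server and service names from a templated
# LogicMonitor alert email subject so an automation tool can act on them.
# The subject format below is an assumption, not an LM default.
import re

SUBJECT_PATTERN = re.compile(r"ServiceDown host=(?P<host>\S+) service=(?P<service>\S+)")

def parse_alert_subject(subject: str):
    """Return (host, service) if the subject matches, else None."""
    match = SUBJECT_PATTERN.search(subject)
    if not match:
        return None
    return match.group("host"), match.group("service")

if __name__ == "__main__":
    example = "ServiceDown host=SERVER01 service=Spooler"   # hypothetical subject
    parsed = parse_alert_subject(example)
    if parsed:
        host, service = parsed
        # The automation would now run something equivalent to:
        #   Start-Service -ComputerName <host> -Name <service>
        print(f"Would restart '{service}' on '{host}'")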
Dell hosts and iDRAC combined

"tie idrac monitoring to its host, not a separate device | LogicMonitor - 5620"

I saw this old post and have a similar ask. I'm trying to figure out how to get iDRAC alerts combined with the host. From what I can tell, Dell OME monitoring is simply accepting iDRAC SNMP traps, and I can probably copy those same SNMP traps to LogicMonitor collectors. For tying it together with the host, I'm wondering if there is a way to add the iDRAC IP as an additional IP on the host itself. Will that mess up other things LogicMonitor is trying to do? Will that provide enough info for LogicMonitor to tie an iDRAC SNMP trap for hardware failures to the host itself?
(gdavid16, 3 days ago)
Reporting on Alerts and SDTs

Hi all, I am having an issue trying to generate a report of alerts that are generated outside of SDT. I have found that the 'In SDT' field is only populated while the alert is outstanding, so any alerts that have cleared do not have the 'In SDT' field set to Y. As a result, I am finding it impossible to screen out alerts generated during SDT; my (incorrect) assumption was that 'In SDT' would do this. My requirement is to be able to generate trend data on alerts that are relevant to particular teams/escalation chains, to say (for example) that the Linux team had 10 Critical alerts last month vs 50 the previous month. Unless I can screen out the alerts that are generated during SDT, these may all have been expected and require no action, so the data is meaningless. I was pointed towards the Alert dashboard, as this has fields for alert suppression type, but this does not seem to be populated either, or is similarly cleared when the alert clears. Has anyone else found an appropriate way of reporting on alerts that screens these out?
(Tim_OShea, 3 days ago)
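One workaround some teams use is to pull the alert history straight from the REST API rather than the report engine, and bucket the records client-side. The sketch below is a rough Python illustration; the SDT flag field name (sdted), the filter syntax (including the cleared:"*" clause for bringing back cleared alerts), and the portal/token values are all assumptions to verify against the alerts API documentation for your portal.

# Rough sketch: pull alerts from the LM REST API and split them by whether
# they were raised inside SDT. Field names and filter syntax are assumptions.
import requests

PORTAL = "yourportal"          # placeholder
BEARER_TOKEN = "lmb_xxxxxxxx"  # placeholder API bearer token
URL = f"https://{PORTAL}.logicmonitor.com/santaba/rest/alert/alerts"
HEADERS = {"Authorization": f"Bearer {BEARER_TOKEN}", "X-Version": "3"}

def fetch_alerts(start_epoch: int, end_epoch: int) -> list[dict]:
    """Page through alerts raised in the given window (cleared ones included)."""
    items, offset, size = [], 0, 250
    while True:
        params = {
            "filter": f'startEpoch>:{start_epoch},startEpoch<:{end_epoch},cleared:"*"',
            "offset": offset,
            "size": size,
        }
        resp = requests.get(URL, headers=HEADERS, params=params, timeout=30)
        resp.raise_for_status()
        batch = resp.json().get("items", [])
        items.extend(batch)
        if len(batch) < size:
            return items
        offset += size

def split_by_sdt(alerts: list[dict]):
    in_sdt = [a for a in alerts if a.get("sdted")]
    outside_sdt = [a for a in alerts if not a.get("sdted")]
    return in_sdt, outside_sdt

if __name__ == "__main__":
    alerts = fetch_alerts(1717200000, 1719800000)  # example epoch-second range
    sdt, actionable = split_by_sdt(alerts)
    print(f"{len(actionable)} actionable alerts, {len(sdt)} suppressed by SDT")

Whether the SDT flag survives on cleared alert records is exactly the open question in this post, so treat the split as something to verify against a known SDT window before trusting the trend numbers.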
Feature Request: Alerts Page Deep Links

On the Alerts page, in order to communicate a filter to another user, I would like to simply pick up a deep link from the browser navigation bar and send it to a colleague in Teams/email. This should be pretty easy to implement and would provide significant value to operations teams. Example implementation:
(David_Bond, 4 days ago)
LogicMonitor.Api nuget (C#/.NET library) v211 support and breaking changes

The LogicMonitor.Api nuget package for C#/.NET developers now has v211 support and some breaking changes. Don't worry, we're still open source (contributions welcome) and the project team is here (and on GitHub) to help you transition your code. We (the authors) have made the decision to lean fully into the renaming of "Device" to "Resource" throughout the library. This includes classes like DeviceDataSource, which now becomes ResourceDataSource. This work is mostly complete, though there will be a long tail of obscure properties which may take us a little longer to get through. We have made good use of the well-IDE-supported [Obsolete] attribute to help your migration. For example, the Device class now looks like this, with all the POCO code moved to Resource.cs:

namespace LogicMonitor.Api.Devices;

/// <summary>
/// Obsolete
/// </summary>
[Obsolete("Use Resource instead", true)]
public class Device : Resource;

Note the use of error=true. We have made the decision to force the upgrade, ensuring a clean and EARLY migration experience, with Visual Studio giving you hints with strikethrough and IntelliSense hints like this:

This concept extends to properties:

...and methods:

We understand that there is some refactoring to do. Our flagship Magic Suite systems of products only took an hour to migrate, and we made thousands of changes. A combination of the Obsolete hints and good IDE-fu should make your project a breeze to upgrade also. Shout here if you need help.
(David_Bond, 5 days ago)
Orphaned tickets when an alert changes severity...

Hi, I came in today and had a bunch of leftover tickets from the weekend. They were all for low disk space, since we had patching on Saturday. I know all the servers are working fine, so I wasn't sure why I had these alerts. After some research, I think LM has a slight issue in how it interacts with integrations. I'm specifically using Zendesk, but I'm not sure if that matters. Here's what happened:

1. An Error alert came in for a server with low disk space. LM used our ZD integration and fired the Active line, which creates a ticket in ZD for us.
2. Six minutes later, the disk space dropped below the next threshold and became a Critical. LM used our ZD integration and fired the Active line again. Now we had two tickets for the same alert: one for the Error and one for the Critical.
3. Later on, the alert cleared. LM used our ZD integration and fired the Clear line, which closes the ticket in ZD for us. However, it only fired this line once, for the Critical alert, which closed the second ticket that had been created. This left the Error ticket still open and now orphaned.

The subject line of the tickets has the severity in it, so it's right that it created a second ticket when the alert went Critical. However, it should have, either then or when it closed the alert, fired two Clear integrations to close out both of the tickets that had been created. Has anyone else ever noticed anything like this? I'm not sure if we have something broken, or if this has always been like this, or what. Thanks.
Bug Report: LogicModule Microsoft_SQLServer_GlobalPerformance

Active Discovery script: at the end, line 22 is missing ?: '.' ...meaning that it should be:

def jdbcConnectionString = hostProps.get("mssql.${instanceName.trim()}.mssql_url") ?: hostProps.get("auto.${instanceName.trim()}.mssql_url") ?: '.'
(David_Bond, 9 days ago)
UIv4 Lacks Parity

Thought I'd give UIv4 dashboards another try today. Lasted less than 30 seconds. Ops notes used to be visible on line graph widgets. They're not there anymore. You can get them to show up if you select the "show ops notes" button; it pops up a drawer from the bottom, covering up part of my dashboard. I can minimize it, but now I have a big bar of wasted space covering up part of my dashboard.
(Anonymous, 10 days ago)
Resource Tree Deprecation?

I missed the recent virtual Roadmap Roadshow. When reviewing the summary notes from the session, I noticed the following statement regarding the resource tree:

"Resource Explorer: Say goodbye to the traditional resource tree! Our new Resource Explorer is designed for modern environments like Cloud and Kubernetes. It leverages metadata and tagging to help you quickly find and visualize resources."

I don't see how Resource Explorer can provide feature parity when it comes to filtering, setting custom/inherited properties, grouping devices, API query refinement, etc. If we lose this functionality, the product loses half its value. Is there anyone in the community who sat in on this session and can provide some more context?
(Drelo, 10 days ago, Solved)
Excessive SNMP requests with a community string I am not using

I have some switches that are getting hammered by a few of my collectors and I can't figure out why. The logs on them are full of this message:

snmp: ST1-CMDR: Security access violation from <Collector IP> for the community name or user name : public (813 times in 60 seconds)

I don't have "public" set for this set of switches anywhere, and it is coming from my collectors. I don't have any NetScans for the subnet they are on. In my portal everything looks normal for these switches. I'm not sure what else to be looking at to figure this out; does anyone have any thoughts? Thank you!
(pgordon, 10 days ago)
Can I monitor a Linux Process by something other than the name?

Hi, we have some servers with a certain process we need to monitor. It's a Java process that runs a specific JAR file. This is how it shows up in the "Add Other Monitoring" area. I have a LogicModule to monitor Java processes already, but it goes off the name. The name of this one is just "java", but there could be lots of other things running in Java. I need to be able to find just the one with the imqbroker.jar part in its command line. Unfortunately, I don't know how to do that. Can it be done in LM easily so I can apply this to a bunch of servers? Thanks for any help.
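For what it's worth, the matching itself is straightforward once you key off the full command line rather than the process name. The sketch below shows the idea in Python by parsing ps output for a given JAR name; a LogicMonitor implementation would more likely live in a Groovy or PowerShell scripted DataSource (or lean on the existing Linux process modules), so treat this purely as an illustration of the match condition, with imqbroker.jar taken from the post.

# Illustration only: find Java processes whose command line contains a given
# JAR (e.g. imqbroker.jar), since the bare process name ("java") is ambiguous.
import subprocess

def find_processes_by_cmdline(needle: str) -> list[tuple[int, str]]:
    """Return (pid, command line) for every process whose args contain `needle`."""
    out = subprocess.run(
        ["ps", "-eo", "pid=,args="],   # pid and full command line, no headers
        capture_output=True, text=True, check=True,
    ).stdout
    matches = []
    for line in out.splitlines():
        pid_str, _, args = line.strip().partition(" ")
        if needle in args:
            matches.append((int(pid_str), args.strip()))
    return matches

if __name__ == "__main__":
    procs = find_processes_by_cmdline("imqbroker.jar")
    # A monitoring script would report a count or up/down datapoint here.
    print(f"{len(procs)} matching process(es)")
    for pid, cmd in procs:
        print(pid, cmd)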
New Support Portal Suggestions

1. Can the default sort order display the most recent tickets at the top? I do not need to see my oldest tickets at the top.
2. Can a few date columns be added (Date Opened, Date Updated, Date Resolved)? Then I want to be able to add them to, or hide them from, my list view.
3. Can the ticket comments, correspondence, and history be included in the email notification I receive when an update to a ticket has been made? I think it's important to be able to correspond on tickets via email. Can we have that back?
4. For organizational tickets, I see a contact name field, but in my portal they are all blank. Will those be populated?
(Shack, 11 days ago)
Collecting a very large number of datapoints

I have a need to collect data about CPU P-levels in VMware hosts. The way that VMware is structured in LogicMonitor relies on a vCenter server, and then all of its hosts are created in Active Discovery. There does not seem to be a way to create a datasource that uses those AD-found hosts as target resources. So I have a script that hits my vCenter and loops through each of a few dozen hosts, each of which has around 80 CPUs, each of which has around 16 P-levels. When you multiply that all up, that's about 30,000 instances.

The script runs in the debug environment in about 20 seconds and completes without error, but the output is truncated and ends with "(data truncated)". When I try to "Test Active Discovery", it just spins and spins, never completing. I've waited up to about 20 minutes, I think. It seems likely that this is too much data for LogicMonitor to deal with all at once. However, I don't seem to have the option to be more precise in my target for the script. It would make more logical sense to collapse some of these instances down to multiple datapoints in fewer instances, but there isn't a set number of P-levels per CPU, and there isn't a set number of CPUs per host, so I don't see any way to do that. There doesn't seem to be any facility to collect this data in batches. What can I do?
Grouping name in graph title

I have a module that creates instances via Active Discovery and groups them based on an instance-level property, "auto.host", which is being assigned via Active Discovery. When applied to a resource, the module name shows up below the resource, the groups appear as children of the module, and then the individual instances show up as children of the groups. The module also has overview graphs that can be viewed by clicking on the group name. But you also get to see all of those graphs when you click on the module name inside the resource, and all of these graphs have the same title, which makes it very hard to figure out which group each graph is for. Is there a way to apply the group name (or some other group-level information) to the graph titles? I've tried adding tokens, but very few seem to work. In particular, I've tried to use the same property that's being used to create the group, and it just shows up as the literal string "##AUTO.HOST##". The only other way I can think of to identify these graphs is to change the actual instance names to include the group names, but that would require hovering over the graph to see.
(wfaulk, 12 days ago)
WinSQLServices DataSource

For the WinSQLServices DataSource, do I need specific permissions? The LogicMonitor documentation only mentions the SQL Server (MSSQLSERVER) service, but what about adding other services such as SQL Server Agent, SQL Browser, or SSIS under that DataSource? The SQL Server (MSSQLSERVER) service requires the permissions specified in the documentation. Would adding other SQL services require more permissions or any changes in the way they're added? For example, would they need to be part of a specific security/admin group, or run under a service account or managed account?
(lucjad, 12 days ago)
RabbitMQ monitoring

I'm trying to get the modules for RabbitMQ up and running, but I can't find anything recent on this. Wondering if there's anything new on RabbitMQ modules... or if anyone has done it and can share some details on your experience. I appreciate any input on this. Thanks.
(Spike0, 13 days ago)
Bug Report: Ops Notes broken in new UI

Steps to reproduce:
1. Create a Resource Group Ops Note in the new UI via a Resource belonging to multiple Resource Groups.
2. Note that the Ops Note does NOT appear.
3. Toggle back to the old UI.
4. Note that the Ops Note DOES appear in the old UI.
5. Toggle back to the new UI.
6. Note that the Ops Note still does NOT appear.
(David_Bond, 13 days ago)
Bug Report: Cisco_NTP DS lacks proper error handling output

If you're getting no data on Cisco NTP and you can't tell why, even after running Poll Now or running the script in debug, it's because some helpful information wasn't included in the catch block of the script (at least for NX-OS). Today I was trying to troubleshoot why we were getting no data on Cisco NTP for a Nexus device. That device happens to have 10+ peers (this is important later on). Running the script in Poll Now/debug would only result in this:

Something went wrong: java.io.IOException: End of stream reached, no match found

This is a common thing with expect scripts. The script issued a command and is waiting for a response from the device that matches the prompt. If we were to get a response matching the prompt, it means the device has received the command, executed it, put the output to stdout, and returned to the prompt; the device is ready for the next command. However, this error means that the device responded with something that doesn't eventually match the prompt; the device is not waiting for the next command, it's waiting for something else. The script is waiting for the prompt and the device is waiting for something else. The script waited an appropriate amount of time, then executed the catch block, outputting the helpful message above.

It would be nice if the catch block output more so we could tell what's going on. How about it outputs the entirety of the SSH session? That would be helpful. Go to line 438 and insert the following in the catch block:

println("==================== SSH SESSION TRANSCRIPT ====================")
println(ssh.stdout())
println("==================== END SESSION TRANSCRIPT ====================")

After adding this, when I executed the script in the debug console, I got this output:

$ !groovy
agent has fetched the task, waiting for response
agent has fetched the task, waiting for response
agent has fetched the task, waiting for response
agent has fetched the task, waiting for response
agent has fetched the task, waiting for response
agent has fetched the task, waiting for response
returns 0
output: Something went wrong: java.io.IOException: End of stream reached, no match found
[MOTD Omitted]
sxxxxxxxxxxxxx1# show ntp peer-status
Total peers : 11
* - selected for sync, + - peer mode(active), - - peer mode(passive), = - polled in client mode
remote            local        st  poll  reach  delay    vrf
--------------------------------------------------------------------------------
=xxxx:xx:xxxx::x  ::           16  64    0      0.00000
=xxx.xxx.xxx.xx   x.x.x.x      2   64    0      0.07820  default
=x.x.x.x          x.x.x.x      2   64    0      0.04123  default
=x.x.x.x          x.x.x.x      2   64    377    0.07166  default
=x.x.x.x          x.x.x.x      2   64    377    0.07191  default
*x.x.x.x          x.x.x.x      1   64    377    0.05200  default
=x.x.x.x          x.x.x.x      4   64    337    0.01245  default
+x.x.x.x          x.x.x.x      16  64    0      0.00000  default
.[7m--More--.[m

In this case, the device was waiting for me to press the Enter or space key, because the last line of the output was "--More--". It took overly long to find this out because the module needs that one thing in the catch block to help troubleshoot.
(Anonymous, 13 days ago)
An Error Occurred Click to Rerender

Anyone else get these when just navigating around in the new UI?

Error: An error occured while selecting the store state: V.Z(...) is not a function.
at https://static-prod.logicmonitor.com/sbui209-1/v4/bundle.js?v=239804:1:5308280
at https://static-prod.logicmonitor.com/sbui209-1/v4/bundle.js?v=239804:1:5413576
at Y (https://static-prod.logicmonitor.com/sbui209-1/v4/bundle.js?v=239804:1:5415367)
at span
at https://static-prod.logicmonitor.com/sbui209-1/v4/vendor.js?v=239804:2:4266099
at https://static-prod.logicmonitor.com/sbui209-1/v4/vendor.js?v=239804:2:4589889
at https://static-prod.logicmonitor.com/sbui209-1/v4/vendor.js?v=239804:2:2857786
at div
at re (https://static-prod.logicmonitor.com/sbui209-1/v4/bundle.js?v=239804:1:5416713)
at div
at S (https://static-prod.logicmonitor.com/sbui209-1/v4/vendor.js?v=239804:2:8212942)
at b (https://static-prod.logicmonitor.com/sbui209-1/v4/bundle.js?v=239804:1:4849267)
at _v (https://static-prod.logicmonitor.com/sbui209-1/v4/bundle.js?v=239804:1:6031090)
at div
at https://static-prod.logicmonitor.com/sbui209-1/v4/bundle.js?v=239804:1:6034301
at S (https://static-prod.logicmonitor.com/sbui209-1/v4/vendor.js?v=239804:2:8212942)
at Fv (https://static-prod.logicmonitor.com/sbui209-1/v4/bundle.js?v=239804:1:6033125)
at b (https://static-prod.logicmonitor.com/sbui209-1/v4/bundle.js?v=239804:1:5615285)
at wa
at Aa (https://static-prod.logicmonitor.com/sbui209-1/v4/5019.4b4a7992f68226ffb816.js:1:69737)
at w (https://static-prod.logicmonitor.com/sbui209-1/v4/vendor.js?v=239804:2:8022315)
at k (https://static-prod.logicmonitor.com/sbui209-1/v4/vendor.js?v=239804:2:8023407)
at main
at section
at div
at div
at main
at y (https://static-prod.logicmonitor.com/sbui209-1/v4/3078.54f [GOES ON.....]
(Drew_Hawkins, 16 days ago)
Bug Bounty Program

Who here would be interested in LogicMonitor setting up a bug bounty program? I'm sure I speak for many... we don't mind spending our time reporting bugs, but it would be nice to get some kind of recognition. Maybe a red hat: "Making Observability Fantastic Oncemore".
(David_Bond, 16 days ago)
LogicModules Toolbox Broken in Version 210

For a multi-instance DataSource, when trying to test the collection script, the error message shown appears. This should be fixed before version 210 is released. Exporting the DataSource and importing it into 209 results in a DataSource with collection testing working as expected.
(DagW, 17 days ago)
Module Toolbox z-index issue

A z-index fix is needed for the "Module Toolbox" page. See screenshot.
(David_Bond, 18 days ago, Solved)
Performance averages over time

Hi, I'm currently looking for a way to get averages on certain datapoints and make that data available via the API, e.g. average CPU busy per hour, per device. I've played around with the following approaches, but none of them are really what I'm after:

- Looping through datasources, getting devices, and looping through instances via "/device/devices/{deviceId}/devicedatasources/{hdsId}/instances/{id}/data". This approach is too cumbersome for the number of devices we have, combined with the other averages we want to pull.
- Creating a graph widget and getting the data via the /dashboard/widgets/{widgetId}/data endpoint. This would work if there weren't a maximum of 100 devices in the widget and if deviceIds were clearly linked to the graph data.
- Generating a trend report and grabbing the link via /report/links/{reportId}. I haven't written this approach off yet, but the available formats and the expiry on the generated link aren't ideal.

Dynamic table data is great via the /dashboard/widgets/{widgetId}/data endpoint, but I cannot work out a way to create a datapoint that has an hourly average value. I'm not sure if it's possible to do something like grab the last x records for a datasource/datapoint on a device and present them in a different datasource/datapoint that could be used in a table widget, or to set a custom property value on the device?
(Drelo, 19 days ago)
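The first approach can sometimes be made workable by narrowing each per-instance call to a single datapoint and doing the hourly averaging client-side. The sketch below is a rough Python illustration against the data endpoint quoted above; the portal, token, IDs, and datapoint name ("CPUBusyPercent") are placeholders, authentication is assumed to use a v3 bearer token, and the response shape (dataPoints/time/values arrays with epoch-millisecond timestamps) is an assumption to verify against your portal.

# Rough sketch: pull raw values for one instance from the data endpoint quoted
# above and average them per hour. IDs, datapoint name, and token are placeholders.
from collections import defaultdict
import requests

PORTAL = "yourportal"
BEARER_TOKEN = "lmb_xxxxxxxx"
BASE = f"https://{PORTAL}.logicmonitor.com/santaba/rest"
HEADERS = {"Authorization": f"Bearer {BEARER_TOKEN}", "X-Version": "3"}

def hourly_averages(device_id, hds_id, instance_id, datapoint, start, end):
    """Return {hour_epoch: average} for one datapoint over [start, end] (epoch seconds)."""
    url = (f"{BASE}/device/devices/{device_id}/devicedatasources/{hds_id}"
           f"/instances/{instance_id}/data")
    resp = requests.get(url, headers=HEADERS,
                        params={"start": start, "end": end, "datapoints": datapoint},
                        timeout=30)
    resp.raise_for_status()
    payload = resp.json()
    dp_index = payload["dataPoints"].index(datapoint)

    buckets = defaultdict(list)
    for ts, row in zip(payload["time"], payload["values"]):
        value = row[dp_index]
        if value in (None, "No Data"):
            continue
        hour = (ts // 1000 // 3600) * 3600      # timestamps are epoch milliseconds
        buckets[hour].append(float(value))
    return {hour: sum(vals) / len(vals) for hour, vals in sorted(buckets.items())}

if __name__ == "__main__":
    averages = hourly_averages(123, 456, 789, "CPUBusyPercent", 1719700000, 1719786400)
    for hour, avg in averages.items():
        print(hour, round(avg, 2))

It still means one API call per instance, so it only helps if the device list can be scoped down; it doesn't solve the "present the average back inside LM" half of the question.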
Collector Server on Azure VM

I have a collector server installed on a virtual machine in Azure. I added the collector successfully to LogicMonitor, but I can't add the other 3 servers; the collector can't communicate with the other hosts. Here is the environment:

Collector Server on Azure VM
  |
  Server1 and Server2 --> vnet1
  |
  | peered
  |
  Server3 --> vnet2

The servers are peered between two networks that are in two different regions, and I still have a problem. Does that mean I have to add two collectors to LogicMonitor, one per virtual network/region? Ping/Host Status shows a critical error in LogicMonitor; the collector can't communicate with the other servers I want to add. All ports are enabled, the firewall has been checked, and permissions are correct. (The documentation says there must be at least one collector per CSP account, one per region, and one per VPC.) Is there any way to set this up without adding a second collector? Why wouldn't the collector communicate with either of the vnets?
(lucjad, 19 days ago)
Training Redirect page is broken

From here: https://academy.logicmonitor.com/page/all-courses

Clicking on (e.g.) https://academy.logicmonitor.com/recorded-webinar-anatomy-of-a-datasource-http?reg=1

...takes you to...

https://www.logicmonitor.com/training-redirect?next=%2Fcheckout%2F2e0vcg6xlaq71

Typing your portal in and clicking Continue just takes you back to your portal. Please fix.
(David_Bond, 19 days ago)
If a collector fails to a down collector, will it then fail to that collector's backup?

Hi, we have three collectors that can all see the same items. Normally we've had them set up so that 1 and 2 fail over to each other, and 3 fails over to 1. However, if we ever lose two at a time, we'd be stuck. If I set them up so 1 -> 2, 2 -> 3, 3 -> 1, and we lose 1 and 2, what happens? The devices on 2 will fail over to 3 just fine. What happens with the devices on 1? Do they try to fail over to 2, realize that 2 is down, and then fail over to 3? Or do they just go to 2 and then stop? Thanks
Issues automating Least Privilege at scale

I'm working through how to implement the least-privilege "Windows_NonAdmin_Config" script in 100+ environments. In at least two of them, the LM service account we have is the only one with enough admin credentials to change the account to non-admin. I'm testing in our own internal systems to make sure I can get it to work. In my first go of it, as both the LM service account and using my own admin creds in our environment, I'm getting errors. Has anyone else seen this? I'm going to keep chipping away at it, as I'd like to come up with a purely LM solution to the shift, due to the scale of the effort in our MSP environment. We do have ConnectWise Automate to utilize if I can't get this working, but right now I can't even get it going using the instructions provided, directly on the VM (in a console window, using 'enter-pssession 127.0.0.1 -credential (get-credential)' to get a session with admin privileges).
I need to alert for 20 consecutive failed logon attempts within a 30 minute time period

We have a team that would like to get alerted on 20 consecutive failed logon attempts from a single account on any of our SQL servers within a 30-minute time frame. I started out using the EventSource for errors in the Security event log and set it to watch for Event ID 4625. I am not very savvy with Groovy and am now looking at setting this up with a PowerShell script via a new DataSource, but I am having some trouble with it. If anyone has any ideas on how best to script this, I would greatly appreciate the help!
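On the collector this would typically end up as a PowerShell or Groovy script reading the Security log, but it may help to pin down the detection rule itself first. The Python sketch below only evaluates the "20 consecutive failures from one account within a rolling 30-minute window" condition over a list of (timestamp, account, event ID) tuples; the event data is synthetic, and treating a 4624 success as a streak reset is an assumption about the intended behaviour rather than anything stated in the post.

# Sketch of the alert rule only: 20 consecutive failed logons (Event ID 4625)
# from one account within a rolling 30-minute window, with a successful logon
# (4624) from that account resetting the streak. Input events are synthetic;
# a real DataSource would read them from the Windows Security log.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=30)
THRESHOLD = 20

def accounts_to_alert(events):
    """events: iterable of (timestamp, account, event_id), assumed time-ordered."""
    recent_failures = defaultdict(deque)   # account -> timestamps of consecutive failures
    flagged = set()
    for ts, account, event_id in events:
        if event_id == 4624:               # success breaks the consecutive streak
            recent_failures[account].clear()
            continue
        if event_id != 4625:
            continue
        streak = recent_failures[account]
        streak.append(ts)
        # Keep only failures inside the 30-minute window ending at this event.
        while streak and ts - streak[0] > WINDOW:
            streak.popleft()
        if len(streak) >= THRESHOLD:
            flagged.add(account)
    return flagged

if __name__ == "__main__":
    base = datetime(2024, 1, 1, 12, 0, 0)
    demo = [(base + timedelta(minutes=i), "svc_sql", 4625) for i in range(25)]
    print(accounts_to_alert(demo))   # {'svc_sql'}

The same windowed-deque logic translates fairly directly to PowerShell over Get-WinEvent results, with the script emitting a per-account count as the datapoint to threshold on.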
LogicMonitor Device Provisioning Workflow Diagram?

Does LogicMonitor have a high-level diagram of the order of operations for provisioning hosts to LogicMonitor? I'm working on my own to detail specifics related to our own internal design, but I have thus far been unable to locate said diagram from LogicMonitor's side. In particular, I'd like to see something that shows "Once a device is provisioned, it is evaluated for SNMP reachability/sysOID mappings, then PropertySources, then DataSources/EventSources are matched based upon relevant properties." Is anyone aware of such a chart from LogicMonitor?
(AustinC, 23 days ago, Solved)
Setting SDTs for the weekend

To set SDTs for "the weekend", it's currently necessary to set three separate SDTs. This is because it's not possible to set the number of hours AND it's not possible to set a 24-hour period. So you have to set ones from:

00:00 Sat -> 00:59 Sat
00:59 Sat -> 00:59 Sun
00:59 Sun -> 00:00 Mon

This seems like a fairly basic requirement: "Set a weekend SDT". Fix please? While I'm ranting... what's with the AM/PM nonsense on time selectors?! Anyone who's configuring LM will prefer the 24-hour clock.
(David_Bond, 24 days ago)
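Until the UI catches up, one workaround is to create the weekend window as a single one-time SDT through the REST API, which accepts arbitrary start and end timestamps. The sketch below is a rough Python illustration; the /sdt/sdts endpoint and the field names (type, sdtType, deviceGroupId, startDateTime/endDateTime in epoch milliseconds) are based on the documented SDT resource but should be double-checked against your portal's API docs, and the token and group ID are placeholders.

# Rough sketch: create one SDT covering Saturday 00:00 through Monday 00:00
# via the REST API instead of three separate UI entries. Endpoint and field
# names are assumptions to verify; token and group ID are placeholders.
from datetime import datetime, timedelta
import requests

PORTAL = "yourportal"
BEARER_TOKEN = "lmb_xxxxxxxx"
URL = f"https://{PORTAL}.logicmonitor.com/santaba/rest/sdt/sdts"
HEADERS = {"Authorization": f"Bearer {BEARER_TOKEN}", "X-Version": "3",
           "Content-Type": "application/json"}

def next_weekend_window(now: datetime):
    """Return (Saturday 00:00, Monday 00:00) as epoch milliseconds, local time."""
    days_until_sat = (5 - now.weekday()) % 7          # Monday=0 ... Saturday=5
    start = (now + timedelta(days=days_until_sat)).replace(
        hour=0, minute=0, second=0, microsecond=0)
    end = start + timedelta(days=2)
    return int(start.timestamp() * 1000), int(end.timestamp() * 1000)

def create_weekend_sdt(device_group_id: int, comment: str = "Weekend SDT"):
    start_ms, end_ms = next_weekend_window(datetime.now())
    body = {
        "type": "DeviceGroupSDT",     # assumed type name for a resource-group SDT
        "sdtType": 1,                 # assumed value for a one-time SDT
        "deviceGroupId": device_group_id,
        "startDateTime": start_ms,
        "endDateTime": end_ms,
        "comment": comment,
    }
    resp = requests.post(URL, headers=HEADERS, json=body, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(create_weekend_sdt(123))    # 123 = hypothetical device group ID

Scheduled weekly, this gives a single 48-hour window rather than three stitched-together SDTs, though it obviously doesn't excuse the UI limitation.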
Feature Request: Improved LogicModule IDE

With a very simple change (seriously, low dev effort), LogicMonitor could VASTLY improve the LogicModule development experience. Please implement the red box to avoid repeated Resource selection, and add STDOUT, STDERR, and return code output boxes.
(David_Bond, 25 days ago)
Adding Weather to Map Widgets

LogicMonitor's Map widgets are a great and easy way to plot resources/groups geographically, including their status. A question that comes up occasionally is whether it's possible to show weather information on top of these maps. While there's currently no native option to show weather on a Map widget, it is possible to inject a weather layer onto an existing map with a bit of JavaScript. Below is a link to a sample dashboard that can insert various types of weather info onto Map widgets. Simply save the linked JSON file to your local workstation, then in your LogicMonitor portal go to Dashboards and click Add > From File.

Dynamic_Weather_Overlay.json

The magic happens in JavaScript embedded in the source of the Text widget. Feel free to explore the source code by entering the Text widget's Configure dialog and clicking the 'Source' button. In typical overkill fashion, I included the option for several different types of weather information. The script looks for the following text (regardless of case) in the Map widget's title and adds the appropriate weather/info layer:

- "Radar" or "Precip"
- "NEXRAD Base"
- "NEXRAD Echo Tops"
- "MRMS"
- "Temperature" (OpenWeatherMap.org API key required)
- "Wind Speed" (OpenWeatherMap.org API key required)
- "Cloud Cover" or "Satellite" (OpenWeatherMap.org API key required)
- "fire" (for including perimeters of active wildfires)

Prerequisites

If you want to use one of the map types noted above as needing an API key (the other types use free APIs that don't require a key), you'll need to register for a free account on OpenWeatherMap.org. Once you've obtained an API key, just add a new dashboard token named 'OpenWeatherAPIKey' and paste your key into its value field. Alternatively, you can also hard-code the key directly in the 'openWeatherMapsAPIKey' variable near the top of the script. The weather overlays should auto-update when the widgets perform their regular timed refresh. For instance, new radar imagery is made available every 10 minutes and will update automatically.

Weather sources currently defined within this script:

- RainViewer.com - Excellent source of global weather imaging data. Updates approx. every 10 minutes. Used by the script for radar/precipitation maps.
- Open Geospatial Consortium - Hosted by Iowa State University, an excellent free source of weather data. Since it sources data from the US National Weather Service, its data covers just the US and Canada. Used by the script for NEXRAD and MRMS data.
- OpenWeatherMap.org - Good source for some weather data such as wind speed, temperature, and cloud cover. Requires use of an API key, which is available for free.
- National Interagency Fire Center - For data about active US wildfires.

Known Issues: When switching to a different dashboard containing a Map widget, it's possible weather may still be visible on the new dashboard. If that happens, just refresh the page.
LogicMonitor Integration with ServiceDesk Plus

Has anyone been able to get a custom HTTP delivery integration for alerts set up with ServiceDesk Plus? I found a very old article using the V1 API that doesn't work anymore. I'm trying to get it set up with the V3 API, but I keep getting an error when using key/value pairs, and raw JSON is not supported by ServiceDesk Plus anymore.
(RVanHouten, 30 days ago)
Testing LogicModules without updating?

I looked around in the LogicModules at the BGP- LogicModule (datasource) because I plan to update the current one we have (older) to the latest. I was hoping that I could go to the Exchange page and test it in the UI (changing but not saving) or something, but I guess that's a security risk and wouldn't be allowed. I then tried to look at the update/diff stage of the update on the My Toolbox side, hoping I could actually run the code from there somehow with an alteration. Again, I guess there is a security concern. I did use the copy icon on the update (right) side to export the data, and I tried to edit it with a new name and import it, but it's not valid for import.

So then I went to our test portal (we have a separate one for testing), updated to the latest over there, and then did an export from a fully installed one. I was able to edit that one by changing the names and display names, wiping out the metadata block, changing the file export name field, and swapping the AppliesTo to false(), and then I validated I could import it BACK into the test server without breaking the one I updated (the real one). Since this worked, I then imported this FAKE-BUT-UPDATED one into our real portal as well. Once it was imported and saved, THEN I was able to test against real devices by changing the AppliesTo in the UI but never saving, just running. All was well on all the servers I tested on, so I feel good about doing the update.

But WOW. Is there an easier way? I wish I could test the updated code before actually updating, without this circuitous approach. Am I missing something obvious? Is there a better way?
(Lewis_Beard, 31 days ago)