Recent Discussions
Dependent Alert Mapping - defining the originating alert
In the current Dependent Alert Mapping feature, alert suppression only takes place if an upstream device has an alert on the Ping or Host Status datasources for overall reachability. In many cases the upstream device will not be completely down, but will instead have an alert for an interface or BGP peer going down. This still impacts downstream devices, but Dependent Alert Mapping does not work correctly in these cases because the upstream device was not technically "down". I would like to request that the Dependent Alert Mapping feature give us the option to allow other types of alerts to be considered when it determines which alert is "Originating" vs "Dependent". Ideally we would be able to define this when we are defining entry points: we could keep using only Ping or Host Status like the feature does today, or we could select our own datasources for determining what the "Originating" alert is. This would allow us to get a lot more granular and make the feature far more flexible and powerful than it is today.

Matt_Whitney, 3 days ago

Dashboard Interlinking
Hi Team,

Is there a feature available to interlink dashboards? As per our requirement, when clicking on a device in the dashboard, it currently redirects to the resource tree info page. However, we need it to redirect to the graphs page instead.

If this isn't feasible, our alternative is to redirect to a new dashboard, passing the device name as a token.

Regards,
Sreekanth

Allow Read-Only Accounts To Stay Logged in
We use multiple TVs as dashboard displays. We currently have individual widgets on them, but we would like to display full dashboards on the TVs. We were informed by an engineer that we cannot select specific accounts to stay logged in, only all accounts. We would like the option to have a read-only account stay logged in so we can display dashboards.

phakesley, 19 days ago
Windows agent-based monitoring
By far the most time-consuming and difficult part of onboarding a customer into our managed services is getting servers onboarded. We advise customers to use Group Policy to push out settings for Windows Firewall and to add the monitoring users to some groups. We also advise using it to run a PowerShell script on each server to implement least-privilege access. This is based largely on the same code LM uses in the Non-Admin script, with a bunch of extra features we've added ourselves. Getting this GPO implemented often requires jumping through a lot of change control and security-related questions. In larger organisations, it often requires several levels of approval before it can be implemented.

Then we have the non-domain-joined machines, which need all of this run manually. It's often not practical to install collectors in each subnet, so we also need to get a whole host of firewall rules opened up across the network to allow the collectors to reach each server to be monitored. Again, this is time-consuming to explain, get approved and get implemented. In cases where we need to monitor applications, like SQL Server, this requires more ports being opened - sometimes custom ports, depending on how it's been set up - and additional permissions for user accounts, which again we always try to grant with 'least priv' in mind.

I know LM is all about "agentless", but it would be great if we could have a lightweight agent for Windows (and maybe Linux) that could collect data for a machine and then forward it to a local collector for onward transmission to LM. It would reduce the need for a lot of network changes. I'm not sure if it could also reduce the need for running scripts etc. for least-privilege access - possibly not, as I guess running the agent as SYSTEM wouldn't be ideal either.
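As an aside, when working through those firewall requests, a small reachability probe can confirm whether a collector host can actually reach each server on the needed ports before monitoring is configured. This is a minimal sketch; the hostname is a placeholder, and the port list (RPC endpoint mapper for WMI, SMB, default SQL Server) is only an example - the real set depends on the environment:

```python
import socket

def check_tcp_ports(host, ports, timeout=2.0):
    """Attempt a TCP connect to each port and report which are reachable."""
    results = {}
    for port in ports:
        try:
            # create_connection performs DNS resolution plus a TCP handshake
            with socket.create_connection((host, port), timeout=timeout):
                results[port] = True
        except OSError:
            results[port] = False
    return results

# Example usage (hypothetical host; 135 = RPC/WMI, 445 = SMB, 1433 = SQL Server):
#   check_tcp_ports("server01.example.com", [135, 445, 1433])
#   -> {135: True, 445: True, 1433: False} if only SQL is still blocked
```

This only proves TCP reachability; WMI additionally uses dynamic high ports, so a clean result here doesn't guarantee WMI will work end to end.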
I'm talking out loud here a little, but I'm wondering if others have similar frustrations with getting servers into monitoring.

Dave_Lee, 25 days ago

Active Discovery Filters with non-existent properties
I've been looking for an easy way to exclude instances by manually creating properties like ##exclude.volumes## or ##exclude.replicationjobs## which can be referenced in Active Discovery filters. Normally you can use something like NotContain or RegexNotMatch (i.e. "Name NotContain ##exclude.volumes##"), but that only works if the property exists. If it doesn't exist (which yields an empty string), the filter ends up excluding everything - which means you can't just add the property when you need to exclude something. I tried several different ways to write regex to make this work, but I don't think it's possible (though I'm open to hearing a way!).

Note that this needs to account for instances that can change automatically, like temporary drive mounts, so just disabling the instances manually wouldn't be enough. I also don't really want to set a "default" value for all these possible exclude properties at the root level. And I wanted something that can be easily modified on any existing multi-instance DataSource, including common built-in ones and DataSources that just use WMI or SNMP without scripting. I could clone the existing DataSource and switch to a scripted Active Discovery to deal with it in code, but I already have so many DataSources running in parallel with official ones, and a simple filter change would be better.

So what I would like to see, and I'm sure it's come up before, is for Active Discovery filters to support nested AND, so a filter could contain something like (##exclude.volumes## Exists AND Name RegexNotMatch ##exclude.volumes##). That would provide a way to do what I'm looking for. Something that might be easier but messier is a new option like NotContainUnlessBlank that ignores blank values, although that sounds messy and confusing. I'm open to any suggestions for workarounds if anyone has any.

Mike_Moniz, 2 months ago

Service nesting and topology mapping
I would like to see services able to be nested, with relationships based on the nesting and alert settings on those relationships. For instance, we have products that contain multiple components; each is monitored, and using group structure we can kind of do root cause analysis. But if we could use services with maps and build relationships between them, we could, for instance, see that a website goes down because the log disk fails on a SQL server. From a user perspective, the service is the website; from a tech perspective, it is the whole chain. If we could use services to include the website check, the web server, the database and the hardware stack, each connected as a service depending on the next and shown on a map, it would point you directly to the root cause. I think LM is lacking here, in my opinion.

Alert Exports : Please add export to CSV option
From the Alerts page, I can build my custom alert view (filtered on datasource, group, etc.) and select the time period. Then... I need to take a screenshot to send it to someone, or start the work over again with an Alert Report. Can we get an export to CSV/PDF option?

ryang, 2 months ago

Collector configuration management - API/Ansible
Hi, we recently had to change the collector cache to a new baseline, which needed to be applied to every collector. To avoid slow and repetitive actions, we tried using APIs/Ansible, but to my understanding the following API calls don't support "Agent config" changes:

/setting/collector/groups/{id}
/setting/collector/collectors/{id}

We also tried using the Ansible lm_collector module: https://galaxy.ansible.com/ui/repo/published/logicmonitor/integration/content/module/lm_collector/

Virgil_Soulie, 2 months ago
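For the scripting side of a change like this, LogicMonitor's documented LMv1 token scheme (HMAC-SHA256 over verb + timestamp + body + resource path, hex digest, base64-encoded) can be built directly, and the collector list fetched so an update can be looped over every collector. This is a stdlib-only sketch; the portal name and credentials are placeholders, and whether a given agent-config field is actually writable through these endpoints is exactly the open question in this thread:

```python
import base64
import hashlib
import hmac
import json
import time
import urllib.request

def lmv1_auth(verb, resource_path, data, access_id, access_key, epoch_ms=None):
    """Build a LogicMonitor LMv1 Authorization header value for one request."""
    epoch = str(epoch_ms if epoch_ms is not None else int(time.time() * 1000))
    # Signature = base64( hex( HMAC-SHA256(key, verb + epoch + body + path) ) )
    msg = verb + epoch + data + resource_path
    digest = hmac.new(access_key.encode(), msg.encode(), hashlib.sha256).hexdigest()
    signature = base64.b64encode(digest.encode()).decode()
    return f"LMv1 {access_id}:{signature}:{epoch}"

def get_collectors(portal, access_id, access_key):
    """Fetch all collectors so a config change can be applied to each in turn."""
    path = "/setting/collector/collectors"  # query string is excluded from the signature
    url = f"https://{portal}.logicmonitor.com/santaba/rest{path}?size=1000"
    req = urllib.request.Request(url, headers={
        "Authorization": lmv1_auth("GET", path, "", access_id, access_key),
        "Content-Type": "application/json",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("items", [])

# Example usage (placeholder portal and credentials):
#   for c in get_collectors("example", "ACCESS_ID", "ACCESS_KEY"):
#       print(c["id"], c["hostname"])
```

If the endpoints turn out not to expose the agent.conf fields you need, the same loop at least identifies every collector that still needs the manual change.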