Can you compare and contrast Dexda's capabilities with PagerDuty's alert grouping feature
We currently send alerts to PagerDuty because it has really good ML-based alert grouping; from there they go on to our ticketing system. It looks like Dexda might be able to replace that capability, which would simplify things greatly on our side. It seems like grouped alerts are called “episodes”. Will episodes be routable the same way alerts are routed through alert rules?

In PD, to train the ML engine that some alerts belong together, all we have to do is merge two “incidents” (PD’s equivalent of a Dexda episode). That not only combines all the alerts from both groups into one group, it also trains the ML to do better next time. Will Dexda have a similar capability?

Likewise, to train the ML engine that some alerts DON’T belong together, all we have to do is split one or more alerts out of an incident into separate incidents. This not only ungroups the alerts into separate groups, it also trains the ML to do better next time. Will Dexda have a similar capability?

“Dexda will automatically re-cluster alerts when it identifies a more optimal clustering option” - does this mean it will change the grouping of alerts it has already grouped?

How is multi-tenancy handled? There’s an issue with the tenant ID currently that’s making it undefined for all our alerts. I don’t mind using tenant ID as long as we can get that issue fixed.
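For context, the merge-to-train step I described above is just an API call against PagerDuty today. A minimal sketch, assuming PD’s standard REST merge endpoint and placeholder token/incident IDs:

```python
import requests

API_TOKEN = "u+XXXXXXXXXXXXXXXX"  # placeholder PD REST API token
TARGET = "PT4KHLK"                # placeholder: incident that survives the merge
SOURCES = ["PQ9N5FH"]             # placeholder: incident(s) folded into it

# Merging incidents regroups the alerts AND feeds PD's grouping ML a
# "these belong together" signal; that's the capability I'm asking about.
resp = requests.put(
    f"https://api.pagerduty.com/incidents/{TARGET}/merge",
    headers={
        "Authorization": f"Token token={API_TOKEN}",
        "Content-Type": "application/json",
        "From": "oncall@example.com",  # PD requires a requester email header
    },
    json={"source_incidents": [{"id": i, "type": "incident_reference"} for i in SOURCES]},
)
resp.raise_for_status()
```

The question is whether Dexda episodes will expose an equivalent merge/split operation (UI or API) that also feeds the clustering model.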
Info and Overview resource tabs

I’d like to know what is planned to make the Overview and Info tabs for resources easier to use and more useful. My experience with the Info tab is that there is just too much information on it for my day-to-day tasks. It’s all useful to have, but the details I need most are spread out through a very long list, and I honestly don’t remember most of their property names. Is there something in the works to make this easier? If I could pin the properties that matter most to me to the top of the screen, that’d be helpful. Being able to put them in a table on the Overview tab would also be very helpful.

Right now, I’m not really using the Overview tab much since I can’t put what would be useful on it. To me, these seem like they should be device-level dashboards. Are there any plans to add more functionality to this tab? I’d need more tabular data instead of just graphs (important details from the Info tab, instance-level info from various datasources, etc.).
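In the meantime I approximate the “pinned properties” view with a small script against the REST API; a rough sketch, assuming a v3 Bearer token and using stand-in property names:

```python
import requests

PORTAL = "acme"      # hypothetical portal name
TOKEN = "lmb_XXXX"   # assumed Bearer API token
DEVICE_ID = 123      # placeholder device ID
PINNED = ["system.hostname", "system.sysinfo", "location"]  # props I'd pin

resp = requests.get(
    f"https://{PORTAL}.logicmonitor.com/santaba/rest/device/devices/{DEVICE_ID}/properties",
    headers={"Authorization": f"Bearer {TOKEN}", "X-Version": "3"},
)
resp.raise_for_status()
props = {p["name"]: p["value"] for p in resp.json()["items"]}

# Print only the pinned properties as a small two-column table.
for name in PINNED:
    print(f"{name:<20} {props.get(name, '(not set)')}")
```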
Dynamic Groups for Websites

Why don’t we have Dynamic Groups for Websites? We have chosen to group our websites in client-specific groups. However, there are times when we are performing maintenance across a product which impacts multiple clients. We would love to have the ability to schedule downtime across multiple clients using a Dynamic Group.
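As a stopgap, we script the multi-client downtime. A hedged sketch, assuming the /sdt/sdts endpoint accepts a WebsiteGroupSDT type (the field names here are my best guess from the device-group equivalent) and using made-up group IDs:

```python
import time
import requests

PORTAL = "acme"           # hypothetical portal name
TOKEN = "lmb_XXXX"        # assumed Bearer API token
GROUP_IDS = [10, 11, 12]  # placeholder website group IDs under maintenance

start = int(time.time() * 1000)   # SDT times are epoch milliseconds
end = start + 2 * 60 * 60 * 1000  # two-hour maintenance window

for gid in GROUP_IDS:
    resp = requests.post(
        f"https://{PORTAL}.logicmonitor.com/santaba/rest/sdt/sdts",
        headers={"Authorization": f"Bearer {TOKEN}", "X-Version": "3"},
        json={
            "type": "WebsiteGroupSDT",  # assumed SDT type name
            "websiteGroupId": gid,      # assumed field name
            "sdtType": 1,               # one-time SDT
            "startDateTime": start,
            "endDateTime": end,
            "comment": "Product maintenance across clients",
        },
    )
    resp.raise_for_status()
```

A Dynamic Group would let us skip maintaining the GROUP_IDS list by hand, which is the point of the request.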
Event-driven Ansible

Would like to know more about the architecture for this. I assume this would integrate with Ansible Tower/AWX? Is there another component involved? Do we need a Lambda function to tie it all together? What about multi-tenancy? We’ll have some customers we use this with and some we won’t. Can we route this through alert rules to determine which alerts go to which Ansible Tower/AWX?
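For anyone else digging in, my current understanding is that ansible-rulebook is the extra component: it listens to an event source and launches Tower/AWX job templates. A minimal rulebook sketch, assuming a generic webhook source and a made-up alert payload and job template name:

```yaml
# Assumed flow: the alert source POSTs JSON to the webhook source below;
# matching events launch a job template in AWX/Tower.
- name: React to monitoring alerts
  hosts: all
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Restart the failing service
      condition: event.payload.alertType == "serviceDown"  # made-up payload field
      action:
        run_job_template:
          name: restart-service   # hypothetical AWX job template
          organization: Default
```

Multi-tenant routing could then come down to pointing different alert rules at different webhook endpoints (or filtering in the rulebook conditions), which is exactly what I want confirmed.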
Managing credentials

I am currently onboarding and moving off another monitoring platform that wasn’t working for my organization. However, it did have a credential manager feature that was exceptionally useful. It would track SNMP strings and credentials across the platform, and I could make changes to a set of credentials in the credential manager and have it carry those changes to all the appropriate devices. I could also run reports on which devices were using which strings or credentials. I know that LogicMonitor will allow for easy changes via inheritance, but my environment is made up of a variety of organizations, and there frequently will be cases where the inheritance model does not work for us.
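Until something native exists, I approximate the “which devices use which strings” report with the REST API; a rough sketch, assuming a Bearer token and that the community string is visible in each device’s custom/inherited properties (sensitive values may come back masked):

```python
from collections import defaultdict

import requests

PORTAL = "acme"     # hypothetical portal name
TOKEN = "lmb_XXXX"  # assumed Bearer API token

resp = requests.get(
    f"https://{PORTAL}.logicmonitor.com/santaba/rest/device/devices",
    headers={"Authorization": f"Bearer {TOKEN}", "X-Version": "3"},
    params={"size": 1000, "fields": "displayName,customProperties,inheritedProperties"},
)
resp.raise_for_status()

# Group devices by their effective snmp.community value. Note: LM may
# mask credential-like property values in API responses.
by_community = defaultdict(list)
for dev in resp.json()["items"]:
    props = {p["name"]: p["value"]
             for p in dev.get("inheritedProperties", []) + dev.get("customProperties", [])}
    by_community[props.get("snmp.community", "(none)")].append(dev["displayName"])

for community, devices in sorted(by_community.items()):
    print(f"{community}: {len(devices)} device(s)")
```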
Reusing lmaccess.id and lmaccess.key

Will LM take any steps toward encouraging or enforcing unique API tokens for different integrations/applications/purposes? Currently, many of the LM datasources use lmaccess.id and lmaccess.key when API access is needed (portal metrics, Okta logs, etc.). What many other API providers do is require what they call “application registration”, which demands a different set of credentials for each different use. Not only does this appeal to the more security minded, but it also makes troubleshooting easier and reduces the risk of credential expiration due to overuse.

Troubleshooting is easier because each integration/datasource/script/tool has its own set of credentials, helping identify where the cause of problems may be and limiting the blast radius of any issues.

With everything using the same set of credentials, the security profile is expanded. With the ability to reuse one set of credentials, an admin will be tempted to just give full admin rights to this credential set (the UI makes this more difficult, but it’s not very hard at all). This means that the same set of RW credentials may be present in many different locations, making it easier to move laterally once one location is exploited.

Also, if everything is using the same set of creds, it’s possible there would be enough usage that rate limiting would be enforced, causing unintended behavior and/or failures. It’s also possible that, while configuring a new integration with the API credentials, a fat-fingered key could cause the token to be expired. That would mean everything using that token stops working until the token is reset, and if the token had to be regenerated, the new token would have to be redistributed to all the applications.
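For reference, nothing technically stops us from minting one token pair per integration today; the LMv1 signing scheme is identical no matter which pair is used. A minimal sketch of the signing step, with placeholder IDs and keys:

```python
import base64
import hashlib
import hmac
import time

def lmv1_auth(access_id: str, access_key: str, verb: str, resource_path: str, body: str = "") -> str:
    """Build the LMv1 Authorization header for a single request."""
    epoch = str(int(time.time() * 1000))  # epoch milliseconds
    msg = verb + epoch + body + resource_path
    # Per LM's docs: HMAC-SHA256 hex digest, then base64 of that hex string.
    digest = hmac.new(access_key.encode(), msg.encode(), hashlib.sha256).hexdigest()
    signature = base64.b64encode(digest.encode()).decode()
    return f"LMv1 {access_id}:{signature}:{epoch}"

# One placeholder token pair per integration, so a leaked, expired, or
# rate-limited token only affects the one thing that uses it.
portal_metrics_auth = lmv1_auth("id_portalmetrics", "key_1", "GET", "/device/devices")
okta_logs_auth = lmv1_auth("id_oktalogs", "key_2", "GET", "/device/devices")
```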
Exportable/printable reporting

LogicMonitor is great at displaying data on a screen, but not so much at exporting or printing it. I’m super grateful for all the hard work my POV team put in to help me find some solutions. They really thought outside the box, and we found some options to get the data out of the system that will work for us in the short term.

I want to know about some of the specific goals for reporting and exporting that information from LM. I support an ecosystem that is still very reliant on having paper copies of reporting data, and having the ability to email PDF/CSV reports directly from the platform is very important to us. Even something like the ability to put a report file into Google Drive or OneDrive on a schedule would be nice. Having this be an automated process would be feature parity with the monitoring platform we are leaving.
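The short-term workaround we landed on is a scheduled script that pulls data and writes a file into a folder synced to Google Drive/OneDrive. A rough sketch, assuming a Bearer token and using the alerts endpoint as the example export:

```python
import csv

import requests

PORTAL = "acme"     # hypothetical portal name
TOKEN = "lmb_XXXX"  # assumed Bearer API token

resp = requests.get(
    f"https://{PORTAL}.logicmonitor.com/santaba/rest/alert/alerts",
    headers={"Authorization": f"Bearer {TOKEN}", "X-Version": "3"},
    params={"size": 1000},
)
resp.raise_for_status()

# Flatten a few fields into a CSV that can be printed or dropped into a
# synced cloud-storage folder by cron/Task Scheduler.
with open("alerts_report.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "severity", "resource", "datapoint", "startEpoch"])
    for a in resp.json()["items"]:
        writer.writerow([a["id"], a["severity"], a["monitorObjectName"],
                         a["dataPointName"], a["startEpoch"]])
```

Native emailed PDF/CSV delivery on a schedule would still be the real fix; this just buys us time.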
Templates for Web Checks

One feature that makes LogicMonitor so powerful is the combination of Auto Discovery and the AppliesTo feature on LogicModules. Except that this is entirely missing for Web Checks. Several of our products are web-based, and we need to monitor many instances of these products across many client environments. We would like to be able to define a Web Check “template” that uses variables in pre-defined steps, which can be cloned/deployed for each client. Even better, I’d like to specify an FQDN or base URL and have LogicMonitor perform Auto Discovery to decide which Web Check is appropriate for this instance.
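We fake the template part with a small script for now; a hedged sketch, assuming the /website/websites endpoint accepts a basic webcheck body (the step schema here is my best guess) and using made-up client domains:

```python
import requests

PORTAL = "acme"     # hypothetical portal name
TOKEN = "lmb_XXXX"  # assumed Bearer API token

# The "template": one definition, cloned once per client domain.
CLIENT_DOMAINS = ["clienta.example.com", "clientb.example.com"]

for domain in CLIENT_DOMAINS:
    resp = requests.post(
        f"https://{PORTAL}.logicmonitor.com/santaba/rest/website/websites",
        headers={"Authorization": f"Bearer {TOKEN}", "X-Version": "3"},
        json={
            "type": "webcheck",
            "name": f"Product login check - {domain}",
            "domain": domain,
            # Minimal single step; a real template would carry the full
            # pre-defined step list with per-client variables substituted.
            "steps": [{"url": "/login", "HTTPMethod": "GET", "statusCode": "200"}],
        },
    )
    resp.raise_for_status()
```

What I'm asking for is this, but native: variables in the steps plus Auto Discovery off an FQDN/base URL.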
Making instance groups more useful

My environment relies heavily on monitoring different groups of instances. At this time, I’m not able to use these instance groups in alert rules or reporting, and this has been a pain point for my team. I’ve also tried grouping the instances with the same monitoring needs into a service, but a service won’t accept the number of instances I need to keep together. I would love to see instance groups gain more utility and be usable in most of the same ways that resource groups can be used. Is this something that will be implemented any time soon?
Data from logs

First there was LogSources, which sounded great until I heard what their goal was. Then there was Logs Query Tracking, which also sounded like it would meet my need perfectly, until I saw that the only metrics that came back were log count and anomaly count. Is there anything coming that will let me pull numbers out of my logs?

I have several logs that occur very regularly and contain numbers. I’ve easily built parsing into my saved queries that pulls these numbers out into individual columns. When will I be able to create a log datasource that lets me put in a log query with parsing and map the parsed columns to datapoints (like any other datasource)?
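For illustration, this is the kind of parse-to-datapoint mapping I mean; a toy sketch with a made-up log line format:

```python
import re

# Made-up log format: "job=nightly-sync duration_ms=4521 rows=18234"
LOG_PATTERN = re.compile(
    r"job=(?P<job>\S+) duration_ms=(?P<duration_ms>\d+) rows=(?P<rows>\d+)"
)

def parse_datapoints(log_line: str) -> dict:
    """Map parsed columns to numeric datapoints, like a datasource would."""
    m = LOG_PATTERN.search(log_line)
    if not m:
        return {}
    return {"duration_ms": int(m.group("duration_ms")), "rows": int(m.group("rows"))}

print(parse_datapoints("job=nightly-sync duration_ms=4521 rows=18234"))
# -> {'duration_ms': 4521, 'rows': 18234}
```

A log-driven datasource would essentially be this, hosted by the platform: my saved query does the parsing, and LM maps the columns to datapoints.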