Contributions

Re: Reports as body of email instead of attachment

I agree that this would be a really great built-in feature. In general, I find LogicMonitor's reporting capabilities, outside of graphing, to be really lacking. I've written scripts to retrieve the data I need for reports via the API, and I have some jobs that dump data to a SQL database specifically for reporting purposes. It can be done, but it requires a lot more effort than it should.

Re: Radical Suggestion for Web Sites

Seconded - this is a great idea.

Re: Ad-hoc script running

In my opinion, an Ops Note (set specifically for that device) or a note on the alert itself would be great. What I like about the Ops Note is that it becomes visible on all of the device's graphs, which should coincide with the alert trigger time. What I like about putting a note on the alert is that it would be captured at the time the alert is generated, and in the context of that alert. I don't really like the idea of creating a separate Data/Config/PropertySource for this - if your datasource has to be configured separately (i.e., for thresholds) from the one generating the alert, then you're not capturing the point in time when the alert condition was triggered. How do you handle custom thresholds? Would you be putting in an Ops Note for as long as the condition exists, as often as the datasource runs? There are ways to accomplish this in a customized fashion, but it would be so much easier and more helpful if LogicMonitor could automatically trigger a "post-alert action" or something. There could be 'canned' actions (like getting processes/users/memory for high CPU alerts) and ones customized via Groovy/PowerShell integrations. Just my 2¢.

Ad-hoc script running

Often when an alert pops up, I find myself running some very common troubleshooting tools to quickly gather more info. It would be nice to get that info quickly and easily without having to go to other tools when an alert occurs. For example - right now, when we get a high CPU alert, the first thing I do is run pslist -s \\computername (PsTools are so awesome) and psloggedon \\computername to see who's logged in at the moment. I know it's possible to create a datasource to discover all active processes and retrieve CPU/memory/disk metrics specific to a given process, but the processes on a given server might change pretty frequently, so you'd have to run Active Discovery often. It just doesn't seem like the best approach, and most of the time I don't care what's running on the server - I only need to know "in the moment." A way to run a script via a button for a given datasource would be a really cool feature. Maybe the datasource could hold a "gather additional data" or metadata script; the script could then be invoked manually on an alert or datasource instance. I.e., when an alert occurs, you could click a button on the alert called "gather additional data" or something, which would run the script and produce a small box or window with the output. The ability to run it periodically (every 15 seconds or 5 minutes, etc.) would also be useful. This would give a NOC the ability to troubleshoot a bit more, or provide some additional context around an alert, without everyone having to know a bunch of tools or have administrative access to a server.
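To make the idea concrete, here's a rough sketch of the kind of "gather additional data" script I'd want attached to a high-CPU alert. This is only a minimal PowerShell illustration, assuming the collector can reach the device over WMI; $computer is a placeholder for whatever device triggered the alert, and the two WMI queries are rough stand-ins for what pslist and psloggedon show.

# Hypothetical "gather additional data" script for a high-CPU alert.
$computer = 'SERVER01'  # placeholder for the device that triggered the alert

# Top 10 processes by working set - a rough stand-in for pslist output
Get-WmiObject Win32_Process -ComputerName $computer |
    Sort-Object WorkingSetSize -Descending |
    Select-Object -First 10 Name, ProcessId,
        @{ n = 'WorkingSetMB'; e = { [math]::Round($_.WorkingSetSize / 1MB, 1) } }

# Who's logged on at the console - a rough stand-in for psloggedon
Get-WmiObject Win32_ComputerSystem -ComputerName $computer |
    Select-Object Name, UserName

In the feature as proposed, output like this would just appear in a window on the alert, instead of someone having to RDP in or run PsTools by hand.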
Re: Multiple All Day Thresholds

17 hours ago, Joe Williams said:
"Hate to bring up a dead thread, but this doesn't work when you need the actual datapoint. In our case there is a range that can be returned - say 1-7. 2-5 are "OK" returned values; 1, 6 & 7 are bad returned values, but each one of those values tells us what is wrong."

Couldn't you still use an expression for this scenario?

if(eq(datapoint,1)||eq(datapoint,6)||eq(datapoint,7),datapoint,0)

Then you can set your alert threshold to warning/error/critical when > 0. It would still give you the numerical value indicating what the problem is (including the bad value 1, which is why the "OK" branch should return 0 rather than 1), but it would return 0 (or whatever other non-alerting number you want) when the values are "OK" values.

Re: SystemSTARTUpTimeGreaterthan28days

What I've done in these instances is just clone the complex datapoint or create a new one - UptimeSeconds_long, for example - or, better yet, make an UptimeDays datapoint by adding a conversion to the calculation (divide the seconds value by 86400) so you don't have to remember how many seconds are in 28 days. (I do this with a lot of the base LM datasources too, converting bytes to GB, just to make them quickly understandable.) This way you're not polling the server multiple times for the same data, as would happen if you cloned the whole datasource.

Re: More advanced methods of authentication for Website Checks

Thanks for that tip - I hadn't come across those, so I will review them. I'd still love to see an easier, quicker way to implement an authenticated website check.

More advanced methods of authentication for Website Checks

We would like the ability to easily authenticate into SaaS products where you have to enter a username and password and then click the "Login" button or hit Enter. These days ADFS and SAML are becoming more popular, and vendors are just redirecting the login page straight to our own SSO site. So the website checks never trigger alerts, because you can always get to the SSO login page - but beyond that point the application may be broken, which is really what we want to monitor. Is that something in the pipeline for website checks in LogicMonitor? If not, it would be great to see. Maybe there could be some way to identify the username and password fields on the page, with the credentials stored in properties, and the ability to define whether to click a Login button or send an "Enter" key press.
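For illustration, here's roughly the login step I'm imagining, as a PowerShell sketch. It assumes a plain HTML form whose fields are named "username" and "password" (both names, along with $url/$user/$pass and the "Dashboard" success string, are placeholders), and it leans on the form parsing Invoke-WebRequest does in Windows PowerShell 5.1; a real ADFS/SAML flow with JavaScript redirects would need more than this.

# Hypothetical form-based login check; $url, $user, and $pass would come from properties.
$r = Invoke-WebRequest -Uri $url -SessionVariable session

# Fill in the first form on the page (field names are assumptions)
$form = $r.Forms[0]
$form.Fields['username'] = $user
$form.Fields['password'] = $pass

# Submit the form - the equivalent of clicking "Login" or pressing Enter.
# Note: naively joining $url and the form action only works for relative actions.
$login = Invoke-WebRequest -Uri ($url + $form.Action) -WebSession $session -Method Post -Body $form.Fields

# Fail the check if the post-login page doesn't contain the string we expect
if ($login.Content -notmatch 'Dashboard') { Write-Host 'login check failed' }

The point is that LogicMonitor could do the field identification and submission for us inside a website check - we'd just supply the property names for the credentials.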
Re: Log Files

Are you using PowerShell for this? If so, something like this should work:

$logfileExists = 0
if (Test-Path D:\path\to\logfile.txt) {
    $logfileExists = 1
}
elseif (Test-Path E:\path\to\logfile.txt) {
    $logfileExists = 1
}
else {
    $logfileExists = 0
}
$outString = "logfileExists=" + $logfileExists
Write-Host $outString

Otherwise, you can enumerate the drives and check each one:

$drives = Get-WmiObject Win32_LogicalDisk | Select-Object DeviceID
foreach ($drive in $drives) {
    # do a Test-Path against the path on each drive
    if (Test-Path "$($drive.DeviceID)\path\to\logfile.txt") {
        $logfileExists = 1
    }
}

Hope that helps.

Re: Disable Active Discovery at the Device and Parent Folder level

I'd forgotten about auto-properties tasks - those were the actual culprit causing some of our UPSs to trigger these "unauthorized access" email notifications. Unfortunately, there doesn't appear to be a way to disable them. I would really like a switch, checkbox, radio buttons - whatever - to easily choose which tasks (auto properties, active discovery, data collection) run against a given device.