How to redirect the output of a Groovy script to the collector log file?
In my Groovy script, I want to redirect the script's output into the collector's log file. What should the Groovy code be to do this? Can anyone help me here?
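One approach (a minimal sketch, not an official LogicMonitor API): have the script append its own timestamped lines to a file in the collector's logs directory. The relative path and file name below are assumptions; adjust them to your collector install.

```groovy
// Minimal sketch: append script output to a file alongside the collector logs.
// The path and file name are assumptions -- adjust to your collector install.
import java.text.SimpleDateFormat

File logFile = new File('../logs/groovy_script.log')  // hypothetical file name

def logToCollector = { String msg ->
    String ts = new SimpleDateFormat('yyyy-MM-dd HH:mm:ss').format(new Date())
    logFile << "${ts} ${msg}\n"  // << appends rather than overwrites
}

logToCollector('script started')
// ... existing script logic; call logToCollector() wherever you would println ...
logToCollector('script finished')
```

Writing to a separate file avoids interleaving with the collector's own wrapper logs while still keeping everything in one place.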
sending Windows syslogs to Logicmonitor

I know this is going to be a duh moment, but back in our Proof of Concept we set up 2-3 Windows boxes to send logs to LogicMonitor so they could be parsed in the Logs section of the GUI. I cannot for the life of me find in the documentation, or remember, how we set it up. The only thing I can see is that we have System.pushmodules = logusage. It won't let you add that property manually, so I'm guessing it's just hidden somewhere else in the GUI.
☁️ Monitor Azure Resource Events with LogicMonitor Logs

I have a strong preference for Microsoft Azure due to its exceptional capabilities! I recently wrote a blog post showcasing how to bring your resource events to the LogicMonitor platform. This way, you can set up alerts for critical business operations, such as when a new user is added to your Active Directory (Entra), or when a file is deleted from your blob storage. I hope you find it as helpful as I did!

Monitor Azure Resource Events with LogicMonitor Logs

Do you use LogicMonitor or any other monitoring platform to address unique use cases? Share your stories with us!
Can I monitor a JSON file? Example included.

Hi,

We have a script that runs and creates an output like the file attached. We need to be able to parse this file and look at the "replication" and "counts_match" fields and alert if we don't find certain criteria. Can LM do that? I think that LM can only access files directly if they are on a collector, so we'd make sure this file ends up there.

Thanks. I guess I can't attach a file, so here's what it looks like:

```json
{
  "replication": [
    { "db_name": "db1 ", "replication": "running ", "local_count": "12054251", "remote_count": "8951389", "counts_match": "false" },
    { "db_name": "db2 ", "replication": "running ", "local_count": "0", "remote_count": "0", "counts_match": "true" },
    { "db_name": "db3 ", "replication": "running ", "local_count": "0", "remote_count": "0", "counts_match": "true" },
    { "db_name": "db4 ", "replication": "running ", "local_count": "97", "remote_count": "97", "counts_match": "true" },
    { "db_name": "db5 ", "replication": "running ", "local_count": "0", "remote_count": "0", "counts_match": "true" }
  ]
}
```
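One way this could be done is with a scripted DataSource running on the collector. Below is a minimal Groovy sketch under that assumption; the file path is hypothetical, and the key=value output lines would be mapped to datapoints (a threshold of healthy != 1 would then drive the alert).

```groovy
// Minimal sketch: parse the JSON on the collector and emit one value per DB.
// The file path is hypothetical -- point it at wherever your script writes the file.
import groovy.json.JsonSlurper

def data = new JsonSlurper().parse(new File('C:/scripts/replication_status.json'))

data.replication.each { db ->
    String name = db.db_name.trim()  // values in the sample carry trailing spaces
    // 1 = replication running and counts match; 0 = something to alert on.
    int healthy = (db.replication.trim() == 'running' &&
                   db.counts_match.trim() == 'true') ? 1 : 0
    println "${name}.healthy=${healthy}"
}
```

With the sample file above, this would print db1.healthy=0 (counts don't match) and healthy=1 for db2 through db5.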
LM Logs parser conditional formatting operator

Submitted to LM Feedback under the title "LM Logs parser colorization based on criteria"

As an engineer who is trying to see how certain logs relate to other logs, it would be helpful if I could highlight specific logs in context with other logs by using an advanced search operator to colorize certain logs that meet a certain criterion. For example, I run this query often:

"PagerDuty Ticket Creation" | parse /(.*) (SUMMARY|ERROR|INFO|DEBUG): (.*)/ as Script, Severity, Msg

One of the fields I parse is the Severity, which as you can see can have values of SUMMARY, ERROR, INFO, or DEBUG. It would be nice if I could add an operator to the query that would let me colorize rows based on the value of the parsed Severity column (Severity just in this case; for the general case, any expression on any column). For example, I'd like to run the query:

"PagerDuty Ticket Creation"
| parse /(.*) (SUMMARY|ERROR|INFO|DEBUG): (.*)/ as Script, Severity, Msg
| colorize Severity == "ERROR" as orange
| colorize Severity ~ /SUMMARY|INFO/ as green

The result would be that rows in the table that have a value of "ERROR" would have a background color of orange (a muted orange), and rows that have a value of "SUMMARY" or "INFO" would be colored green. Since the DEBUG logs don't match any colorization operator, they would have the default color of white.

It might be handy if one *or* two colors could be passed, allowing me to change the color of the text and the background, or just the background. It would be OK if I could only choose from a set list of colors, but it would be great if I could specify an RGBA color.
Getting started with Log analysis - useful queries

We at LogicMonitor want to make taking control of your log data easy for analysis, troubleshooting, and identifying trends. In this post, we will share a few helpful queries to get started with LM Logs - which devices are generating log data and easy ways to track overall usage. In future posts, we'll share queries to dive deeper into specific log data types. What type of queries do you want to see? Reply to this post with areas of log analysis or best practices you want.

Not up to date with LM Logs? Check out this blog post highlighting recent improvements and customer stories: A lookback at LM Logs

NOTE: Some assumptions for these queries:

- Each query's results are bound to the time picker value; adjust according to your needs
- * is a wildcard value meaning ALL, which can be replaced by a Resource, Resource Group, Sub-Group, Device by Type, or Meta Data value
- You may need to modify specific queries to match your LM portal

Devices Sending Logs - use this query to easily see which LM-monitored devices are currently ingesting log data into your portal:

* | count by _resource.name | sort by _count desc

Total Number of Devices Sending Logs - the previous query showed which devices are generating logs, while this query identifies the overall number of devices:

* | count by _resource.name | count

Total Volume by Resource Name - this query shows the total volume of log ingestion (as GB) by resource name, with the average, min, and max size per message. The results are sorted by GB descending, but you can modify the operators to identify your own trends:

* | count(_size), sum(_size), max(_size), min(_size) by _resource.name | num(_sum/1000000000) as GB | num(_sum/_count) as avg_size | sort by GB desc

Total Log Usage - this is a helpful query to see your overall log usage for the entire portal:

* | sum(_size) | num(_sum/1000000000) as GB | sort by GB desc

And finally, Daily Usage in Buckets - run this query to see an aggregated view of your daily log usage:

* | beta:bucket(span=24h) | sum (_size) as size_in_bytes by _bucket | num(size_in_bytes/1000000000) as GB | sort by _bucket asc

We hope these help you get started!
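As one more illustration, composed from the same operators shown above (a sketch, not an official example): combining a keyword filter with the first aggregation ranks devices by how many error-mentioning logs they emit:

"error" | count by _resource.name | sort by _count desc

Swap the keyword for whatever string matters in your environment.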
Any way to display LM Logs query results in Dashboard?

Not sure if I am overlooking something... is it possible to put an LM Logs query table showing the results of said query into a Dashboard? The purpose of that would be to have the query updated with the latest results every x minutes. I've tried the Text widget and the HTTP widget and embedded the URL for the log query, but that's not working.
Event Source for log file monitoring

We're looking to have log file monitoring for the *.rpt file extension and SQL log files. LM does not appear to support anything (out of the box) other than .log and .txt. Has anyone done this via script with other file types in Windows? If so, can you share your solution?
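Not an out-of-the-box module, but one scripted approach is sketched below: a collector-side Groovy script that remembers the byte offset it last read and counts pattern matches in whatever new lines have appeared since the previous poll. The paths and the pattern are placeholders.

```groovy
// Sketch: scan new lines of an arbitrary log file (e.g. a *.rpt) for a pattern.
// Paths and pattern are placeholders -- adapt them to your environment.
File logFile    = new File('C:/app/logs/report.rpt')
File offsetFile = new File('C:/app/logs/report.rpt.offset')
def pattern     = ~/ERROR|FATAL/

// Resume where the previous poll left off; restart at 0 if the file rotated.
long offset = offsetFile.exists() ? offsetFile.text.trim().toLong() : 0L
if (offset > logFile.length()) { offset = 0L }

int matches = 0
logFile.withInputStream { input ->
    input.skip(offset)                 // jump past already-seen content
    input.eachLine { line ->
        if (line =~ pattern) { matches++ }
    }
}

offsetFile.text = logFile.length().toString()  // remember progress for next poll
println "matches=${matches}"                   // datapoint: alert when matches > 0
```

The same skeleton works for SQL log files or any other text-based extension, since nothing in it depends on the file name.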
Log Streaming Feature Request

Hi,

Our team has recently run into certain error scenarios at multiple production sites. As of today, we monitor for specific exceptions (via keyword match or regex expression) in LogicMonitor and trigger an alert. This solution has a few drawbacks:

1. It requires us to know ahead of time which specific exception(s) to monitor in each log file (e.g. Tomcat, ActiveMQ)
2. It requires us to download all the logs from each production site that has the issue (some of our customers require VPN/secure access, and it's very inefficient to download these logs from each site to analyze)

Our team then ran a quick log streaming POC and discovered Datadog is one of the vendors that provides a decent log streaming solution (to the cloud) that allows us to search and perform analytics (see https://www.datadoghq.com/log-management/). It would be great if LogicMonitor could implement something similar so we could search these logs in the cloud, enabling faster troubleshooting analysis.

Thanks & Best Regards,
Horace