Introducing Logs for Lunch: Unlock your LM Logs potential during your lunch break!
Join us for an engaging new webinar that will transform how you understand and leverage LM Logs to drive better business outcomes. In this focused, lunch-friendly session, our experts will guide you through maximizing the full potential of your log data.

Why attend? Whether you're new to LM Logs or looking to deepen your expertise, each session delivers practical insights you can implement immediately. Learn how to:
- Slash your Mean Time to Resolution (MTTR) with advanced troubleshooting techniques
- Configure and optimize LM Logs across diverse data sources
- Transform raw log data into actionable intelligence
- Leverage built-in analysis tools to identify patterns and anomalies

What you'll get:
- Live demonstrations of real-world use cases
- Step-by-step setup and configuration guidance
- Interactive Q&A sessions with LM Logs experts
- Best practices for integration with your existing workflows
- Practical tips for immediate implementation

Perfect for:
- IT Operations teams seeking to streamline troubleshooting
- DevOps professionals looking to enhance monitoring capabilities
- Security teams who want better visibility into their log data
- Business leaders evaluating log management solutions
- Current users ready to unlock advanced features

Register now! Our first session will be on January 8, 2025, at 12:00 pm CT. Join our team at "lunch" for an interactive LM Logs discussion and demo to see how our logging solution helps your Ops teams:
- Solve problems faster with anomaly detection
- Simplify and standardize log management
- Use log data proactively to reduce major outages before they happen

Click here to register or follow the link below!
https://logicmonitor.zoom.us/webinar/register/WN_3z4XccEMRg61VUOzkuEUfA

Transform your log management strategy one lunch break at a time. See you there!
LM Logs Alert Tokens

We are looking at expanding into LM Logs, and I am wondering whether there are any other hidden alert tokens. We are looking at what structure of message we can send, and there seems to be a lack of options for LM Logs. Is there a way to pull out some of the log fields as part of the message? The closest we found so far is ##logMetaData##, which gives us a JSON string of our custom fields.

For future parties, here is a little more detail:
- ##alerttype##: logAlert
- ##datapoint##: Log Pipeline Alert Condition Name
- ##datasource##: LM Logs
- ##dsdescription##: Raw Log Value
- ##dsidescription##: Raw Log Value
- ##instance##: Log Pipeline Name
- ##threshold##: Log Pipeline Alerting Condition
- ##logMetaData##: Log metadata fields included in the alert from Pipeline Alert Conditions
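For illustration, a custom alert message built from these tokens might look something like the sketch below. The layout and labels are invented for this example and not validated against a real escalation chain; only the tokens themselves come from the list above:

LM Logs alert (##alerttype##) on ##instance##
Condition: ##datapoint##
Threshold: ##threshold##
Datasource: ##datasource## - ##dsdescription##
Metadata: ##logMetaData##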
LogSource Resource Mapping Confusion

I have a ticket opened, but I'm hoping to get a quicker response here. I am using LogSources, but since we are an MSP with multiple clients, there seems to be an issue: syslogs are being mapped to other clients' devices that have the same IP, because I'm using IP=system.hostname as the mapping. I have even pointed all the duplicate IPs to their respective syslog collectors, and it still maps wrong. Am I doing something wrong, or is the system not smart enough to know that the log came in on this collector and should therefore only be mapped to resources monitored by that collector? Is there a way I can use AND logic with the token mapping, for example _lm.collectorId = system.collectorid? Thanks in advance.
LM Logs - Alerting

Hello,

Just wanted to ask if there is a way to alert on multiple (let's say) IP addresses that appear in similar log messages, without spamming our ticketing system.

Log Message 1 - 1.1.1.1 is down
Log Message 2 - 2.2.2.2 is down
Log Message 3 - 1.1.1.1 is down

I want to be able to alert on 1.1.1.1 being down and suppress duplicate alerts for a day, but if I have an alert query for "is down" then 2.2.2.2 will also get suppressed for a whole day, even though it's a completely separate device/alert. I also can't add all of the, let's say, 50 IP addresses that may alert as their own alert conditions. Is there a way, or is LM Logs too limiting right now?
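For what it's worth, one possible starting point (a rough, untested sketch that borrows the parse syntax shown elsewhere in this community, and assumes the message text is literally "<ip> is down") is to parse the IP out into its own field so each address shows up as a separate value rather than one shared "is down" match:

"is down" | parse /(.*) is down/ as down_ip

Whether a pipeline alert condition can then suppress duplicates per down_ip rather than per query is the part I can't confirm, so treat this purely as the search side of the idea.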
LM Logs multiple capture group parsing

Ok, this is cool. I have some log data that has structured data in it (some text, then a Python list of strings). I had started building out a parse statement for each member of the list, then thought I'd try just making multiple capture groups and naming multiple variables after the as keyword. Turns out it completely works: it parses each capture group into the corresponding column with a single parse statement. I was halfway through writing out a feature request when I figured I'd give it a try, only to discover that it already works. Nice job, LM Logs guys.
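To make the pattern concrete, here's a hypothetical version of that kind of statement. The log text and field names are invented for illustration; the only real point is the multiple capture groups paired with multiple names after as. Given a log like deploy finished: ['web01', 'web02', 'db01'], the query might be:

"deploy finished" | parse /(.*): \['(.*)', '(.*)', '(.*)'\]/ as prefix, host1, host2, host3

Each capture group lands in its own column (prefix, host1, host2, host3) from the single parse statement.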
Has anybody noticed the flaw in LogSource logic?

So LogSources have a few purposes:

1. They allow you to filter out certain logs. I'm not sure of the use case here, since the LogSource is explicitly including logs. Maybe the point is to let you exclude certain logs that contain sensitive data. No masking of data, just ignore the whole log. It's not clear whether the ignored logs are still in LM Logs or get dumped entirely.
2. They allow you to add tags to logs. This is actually pretty cool. You can parse out important info from the log or add device properties to it, which can then be displayed as a separate column or even filtered on. Each of our devices has a customer property on it, which means I can add that property to each log and search or filter by customer. Device type, ssh.user, SKU, serial number, IP address, the list is literally endless.
3. They allow you to customize which device the logs get mapped to. You can specify that the incoming log should be matched to a device via IP address, hostname, FQDN, or something else. The documentation on this isn't exactly clear, but that actually doesn't matter because…

LogSources apply to logs from devices that match the LogSource's AppliesTo, which means the logs need to already be mapped to a device before the LogSource can map them to a device. Do you see the flaw? How is a log going to be processed by a LogSource so it can be properly mapped to a device, if the LogSource won't process the log unless it matches the AppliesTo, which references the device to which the log hasn't yet been mapped?

LogSources should not apply to devices. They should apply to Log Pipelines.
LM Logs parser conditional formatting operator

Submitted to LM Feedback under the title "LM Logs parser colorization based on criteria":

As an engineer who is trying to see how certain logs relate to other logs, it would be helpful if I could highlight specific logs in context with other logs by using an advanced search operator to colorize certain logs that meet a certain criterion. For example, I run this query often:

"PagerDuty Ticket Creation" | parse /(.*) (SUMMARY|ERROR|INFO|DEBUG): (.*)/ as Script, Severity, Msg

One of the fields I parse is the Severity, which as you can see can have values of SUMMARY, ERROR, INFO, or DEBUG. It would be nice if I could add an operator to the query that would let me colorize rows based on the value of the parsed Severity column (Severity just in this case; for the general case, any expression on any column). For example, I'd like to run the query:

"PagerDuty Ticket Creation" | parse /(.*) (SUMMARY|ERROR|INFO|DEBUG): (.*)/ as Script, Severity, Msg | colorize Severity == "ERROR" as orange | colorize Severity ~ /SUMMARY|INFO/ as green

The result would be that rows in the table that have a value of "ERROR" would have a background color of orange (a muted orange), and rows in the table that have a value of "SUMMARY" or "INFO" would be colored green. Since the DEBUG logs don't match any colorization operator, they would have the default color of white. It might be handy if one *or* two colors could be passed, allowing me to change the color of the text and the background, or just the background. It would be ok if I could only choose from a set list of colors, but it would be great if I could specify an RGBA color.
Getting started with Log analysis - useful queries

We at LogicMonitor want to make taking control of your log data easy for analysis, troubleshooting, and identifying trends. In this post, we will share a few helpful queries to get started with LM Logs: which devices are generating log data, and easy ways to track overall usage. In future posts, we'll share queries that dive deeper into specific log data types. What types of queries do you want to see? Reply to this post with the areas of log analysis or best practices you want covered.

Not up to date with LM Logs? Check out this blog post highlighting recent improvements and customer stories: A lookback at LM Logs

NOTE: Some assumptions for these queries:
- Each query's results are bound to the time picker value; adjust according to your needs
- * is a wildcard value meaning ALL, which can be replaced by a Resource, Resource Group, Sub-Group, Device by Type, or Meta Data value
- You may need to modify specific queries to match your LM portal

Devices Sending Logs - use this query to easily see which LM monitored devices are currently ingesting log data into your portal:

* | count by _resource.name | sort by _count desc

Total Number of Devices Sending Logs - the previous query showed which devices are generating logs, while this query identifies the overall number of devices:

* | count by _resource.name | count

Total Volume by Resource Name - this query shows the total volume of log ingestion (as GB) by resource name, with the average, min, and max size per message. The results are sorted by GB descending, but you can modify the operators to identify your own trends.

* | count(_size), sum(_size), max(_size), min(_size) by _resource.name | num(_sum/1000000000) as GB | num(_sum/_count) as avg_size | sort by GB desc

Total Log Usage - this is a helpful query to run to see your overall log usage for the entire portal:

* | sum(_size) | num(_sum/1000000000) as GB | sort by GB desc

And finally, Daily Usage in Buckets - run this query to see an aggregated view of your daily log usage:

* | beta:bucket(span=24h) | sum(_size) as size_in_bytes by _bucket | num(size_in_bytes/1000000000) as GB | sort by _bucket asc

We hope these help you get started!