Sending Windows syslogs to LogicMonitor
I know this is going to be a "duh" moment, but back in our proof of concept we set up 2-3 Windows boxes to send logs to LogicMonitor so they could be parsed in the Logs section of the GUI. I cannot for the life of me find in the documentation, or remember, how we set it up. The only thing I can see is that we have System.pushmodules = logusage. It won't let you add that property manually, so I'm guessing it's just hidden somewhere else in the GUI.

☁️ Monitor Azure Resource Events with LogicMonitor Logs
I have a strong preference for Microsoft Azure due to its exceptional capabilities! I recently wrote a blog post showcasing how to bring your resource events to the LogicMonitor platform. This way, you can set up alerts for critical business operations, such as when a new user is added to your Active Directory (Entra), or when a file is deleted from your blob storage. I hope you find it as helpful as I did!

Monitor Azure Resource Events with LogicMonitor Logs

Do you use LogicMonitor or any other monitoring platform to address unique use cases? Share your stories with us!

Can I monitor a JSON file? Example included.
Hi,

We have a script that runs and creates an output like the file attached. We need to parse this file, look at the "replication" and "counts_match" fields, and alert if we don't find certain criteria. Can LM do that? I think LM can only access files directly if they are on a collector, so we'd make sure this file ends up there. Thanks.

I guess I can't attach a file, so here's what it looks like:

{
  "replication": [
    { "db_name": "db1 ", "replication": "running ", "local_count": "12054251", "remote_count": "8951389", "counts_match": "false" },
    { "db_name": "db2 ", "replication": "running ", "local_count": "0", "remote_count": "0", "counts_match": "true" },
    { "db_name": "db3 ", "replication": "running ", "local_count": "0", "remote_count": "0", "counts_match": "true" },
    { "db_name": "db4 ", "replication": "running ", "local_count": "97", "remote_count": "97", "counts_match": "true" },
    { "db_name": "db5 ", "replication": "running ", "local_count": "0", "remote_count": "0", "counts_match": "true" }
  ]
}

LM Logs parser conditional formatting operator
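Returning to the JSON question above: the alert logic described there (flag any database where counts_match is not true, or where replication is not running) is straightforward to express in a few lines. This is a minimal local sketch in Python for illustration; the exact alert criteria are assumptions, and in LM itself this kind of check would typically live in a script-based DataSource running on the collector that holds the file.

```python
import json

def check_replication(payload: dict) -> list:
    """Return a list of (db_name, reason) tuples for entries that should alert."""
    problems = []
    for entry in payload.get("replication", []):
        # The sample data carries stray trailing spaces in its values, so strip first.
        db = entry["db_name"].strip()
        if entry["replication"].strip() != "running":
            problems.append((db, "replication not running"))
        if entry["counts_match"].strip() != "true":
            problems.append((db, "local/remote counts differ"))
    return problems

# Trimmed version of the sample file from the post:
sample = json.loads("""
{"replication": [
  {"db_name": "db1 ", "replication": "running ", "local_count": "12054251",
   "remote_count": "8951389", "counts_match": "false"},
  {"db_name": "db2 ", "replication": "running ", "local_count": "0",
   "remote_count": "0", "counts_match": "true"}
]}
""")
print(check_replication(sample))  # → [('db1', 'local/remote counts differ')]
```

In a collector script, the returned tuples would be emitted as instance datapoints so thresholds can fire per database rather than per file.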
Submitted to LM Feedback under the title "LM Logs parser colorization based on criteria"

As an engineer trying to see how certain logs relate to other logs, it would be helpful if I could highlight specific logs in context by using an advanced search operator that colorizes logs meeting a certain criterion. For example, I run this query often:

"PagerDuty Ticket Creation" | parse /(.*) (SUMMARY|ERROR|INFO|DEBUG): (.*)/ as Script, Severity, Msg

One of the fields I parse is the Severity, which as you can see can have values of SUMMARY, ERROR, INFO, or DEBUG. It would be nice if I could add an operator to the query that would let me colorize rows based on the value of the parsed Severity column (Severity just in this case; for the general case, any expression on any column). For example, I'd like to run the query:

"PagerDuty Ticket Creation" | parse /(.*) (SUMMARY|ERROR|INFO|DEBUG): (.*)/ as Script, Severity, Msg | colorize Severity == "ERROR" as orange | colorize Severity ~ /SUMMARY|INFO/ as green

The result would be that rows with a value of "ERROR" would have a background color of orange (a muted orange), and rows with a value of "SUMMARY" or "INFO" would be colored green. Since the DEBUG logs don't match any colorize operator, they would keep the default background of white. It might be handy if one or two colors could be passed, allowing me to change the color of the text and the background, or just the background. It would be OK if I could only choose from a set list of colors, but it would be great if I could specify an RGBA color.

Getting started with Log analysis - useful queries
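The colorize operator proposed above doesn't exist in LM Logs today, but anyone post-processing exported log lines locally can approximate the effect with a regex parse plus ANSI escape codes. This is a sketch only: the log format mirrors the example query, and the color mapping uses the nearest standard ANSI backgrounds (yellow standing in for "muted orange").

```python
import re

# Rough local stand-in for: parse /(.*) (SUMMARY|ERROR|INFO|DEBUG): (.*)/ as Script, Severity, Msg
LINE_RE = re.compile(r"(.*) (SUMMARY|ERROR|INFO|DEBUG): (.*)")

ANSI = {"orange": "\033[43m",  # yellow background, closest standard code to orange
        "green": "\033[42m",
        "reset": "\033[0m"}

def colorize(line: str) -> str:
    """Wrap a log line in a background color based on its parsed Severity."""
    m = LINE_RE.match(line)
    if not m:
        return line
    severity = m.group(2)
    if severity == "ERROR":                 # ~ colorize Severity == "ERROR" as orange
        color = ANSI["orange"]
    elif severity in ("SUMMARY", "INFO"):   # ~ colorize Severity ~ /SUMMARY|INFO/ as green
        color = ANSI["green"]
    else:                                   # DEBUG matches nothing, keep default color
        return line
    return f"{color}{line}{ANSI['reset']}"

print(colorize("pd_ticket.py ERROR: API returned 502"))
```

The same two-branch structure generalizes to the "any expression on any column" case by accepting a list of (predicate, color) pairs evaluated in order.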
We at LogicMonitor want to make taking control of your log data easy for analysis, troubleshooting, and identifying trends. In this post, we share a few helpful queries to get started with LM Logs - which devices are generating log data, and easy ways to track overall usage. In future posts, we'll share queries that dive deeper into specific log data types. What types of queries do you want to see? Reply to this post with the areas of log analysis or best practices you want.

Not up to date with LM Logs? Check out this blog post highlighting recent improvements and customer stories: A lookback at LM Logs

NOTE: Some assumptions for these queries:
- Each query's results are bound to the time picker value; adjust according to your needs
- * is a wildcard value meaning ALL, which can be replaced by a Resource, Resource Group, Sub-Group, Device by Type, or Meta Data value
- You may need to modify specific queries to match your LM portal

Devices Sending Logs - use this query to easily see which LM monitored devices are currently ingesting log data into your portal:

* | count by _resource.name | sort by _count desc

Total Number of Devices Sending Logs - the previous query showed which devices are generating logs, while this query identifies the overall number of devices:

* | count by _resource.name | count

Total Volume by Resource Name - this query shows the total volume of log ingestion (as GB) by resource name, with the average, min, and max size per message. The results are sorted by GB descending, but you can modify the operators to identify your own trends:

* | count(_size), sum(_size), max(_size), min(_size) by _resource.name | num(_sum/1000000000) as GB | num(_sum/_count) as avg_size | sort by GB desc

Total Log Usage - a helpful query to see your overall log usage for the entire portal:

* | sum(_size) | num(_sum/1000000000) as GB | sort by GB desc

And finally, Daily Usage in Buckets - run this query to see an aggregated view of your daily log usage:

* | beta:bucket(span=24h) | sum (_size) as size_in_bytes by_bucket | num(size_in_bytes/1000000000) as GB | sort by _bucket asc

We hope these help you get started!

How to redirect the output of the groovy script to the collector log file using groovy script?
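For readers working through the usage queries above, the aggregation they perform is simple: sum _size per resource, divide by 1,000,000,000 to get GB, and sort descending. A quick local sketch makes the arithmetic concrete; the sample records below are invented for illustration, not taken from any portal.

```python
from collections import defaultdict

def usage_by_resource(records):
    """records: iterable of (resource_name, size_in_bytes) pairs.
    Returns [(resource, gb)] sorted by GB descending, mirroring:
    * | sum(_size) by _resource.name | num(_sum/1000000000) as GB | sort by GB desc
    """
    totals = defaultdict(int)
    for name, size in records:
        totals[name] += size          # sum(_size) by _resource.name
    return sorted(((n, b / 1_000_000_000) for n, b in totals.items()),
                  key=lambda pair: pair[1], reverse=True)

# Hypothetical log volume per message source:
sample = [("fw-01", 2_500_000_000), ("web-01", 750_000_000), ("fw-01", 1_500_000_000)]
print(usage_by_resource(sample))  # → [('fw-01', 4.0), ('web-01', 0.75)]
```

Note the queries divide by 10^9 (decimal GB), not 2^30 (GiB), so the figures match how log retention is usually billed.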
In my Groovy script, I want to redirect the script's output into the collector's log file. What Groovy code would do that? Can anyone help me here?

January 2025 Logs for Lunch Recap: Transforming Log Intelligence
We launched our 2025 Logs for Lunch series with a bang, diving deep into how LM Logs is transforming how teams tackle troubleshooting. If you missed it, don't worry - here's a quick recap.

Making Log Troubleshooting Less Painful

Most of us don't exactly jump for joy when we have to dig through logs. But LogicMonitor is changing that game. The standout feature? An AI-powered system that spots unusual patterns automatically - no complex queries needed. This approach has helped organizations reduce their troubleshooting time by up to 80%, significantly improving operational efficiency.

The Demo

The technical demonstration showcased real-world applications, featuring:
- Streamlined alert-to-resolution workflow
- "Show Patterns" feature for identifying recurring issues
- Automated alert creation based on log patterns
- Seamless integration between metrics and logs

The demo walked through diagnosing a web server issue, illustrating how complex problems can be resolved with minimal clicks and without extensive logging expertise.

Q&A

People had questions, and we got answers! Here are the ones that got everyone's attention:

Q: I'm new to this - where should I start?
A: Start with what you know - if you're already monitoring network devices or Windows servers in LogicMonitor, that's your sweet spot. These are usually the easiest to set up and start getting value from right away.

Q: How does pricing work?
A: LM Logs is an add-on to LM Envision, and it's pretty straightforward: you pay based on how much data you're logging and how long you want to keep it. Whether you need 7 days or a full year of retention, they've got you covered.

Q: How do I keep track of usage?
A: There's a neat dashboard that shows your monthly usage, trends, and even which systems are your "top talkers" - super helpful for keeping things under control.

What's Next?

Mark your calendar for the next Logs for Lunch session on February 12th, 2025, at 12 pm CT, where we're tackling troubleshooting wireless networks. Save your spot by registering today. Keep an eye out in the Community for upcoming exciting product launches! Check out our official LM Logs page here for a deeper dive into logs.

February 2025 Logs for Lunch Recap: Network Observability & Wireless Connectivity
Overview

This month's Logs for Lunch session brought together IT professionals to explore Network Observability & Wireless Connectivity, highlighting how LM Logs can streamline troubleshooting and proactive monitoring. Our experts explored real-world use cases, demonstrating how logs provide deeper visibility into network performance, security events, and infrastructure health. Whether you're managing a growing wireless network or optimizing log intelligence, this session was packed with actionable insights to elevate your monitoring strategy.

The Demo

- Making Wireless Networks More Predictable: We explored how log intelligence can help identify and resolve connectivity issues before they impact users.
- Proactive Troubleshooting with LM Logs: Discover how to correlate logs with performance metrics for faster incident resolution and enhanced root cause analysis.
- Security & Compliance Insights: Learn how to leverage log data for better security monitoring, detecting anomalies in network behavior.
- Enhancing Network Observability: Best practices for visualizing wireless connectivity issues with logs and metrics in a single pane of glass.
- Customer Success Stories: Real-world applications showcasing how teams are using LM Logs to optimize network health and troubleshoot at scale.

Q&A

Q: How can LM Logs help with wireless troubleshooting?
A: LM Logs provides real-time insights into network performance, helping to correlate log data with connectivity metrics, device health, and historical trends.

Q: Can LM Logs be used for security monitoring?
A: Absolutely! Logs can highlight unexpected login attempts, firewall policy violations, and network anomalies, making them a key tool for security and compliance teams.

Q: How do I integrate LM Logs with my current monitoring setup?
A: LM Logs works seamlessly with existing dashboards and alerting workflows, allowing you to combine performance metrics, topology maps, and log data in one place.

Q: What's the best way to filter and analyze large volumes of logs?
A: Use log search, filters, and anomaly detection features to pinpoint the most relevant data, reducing noise and making troubleshooting more efficient.

Customer Call-outs

- "The ability to see connectivity issues correlated with logs in real-time is a game-changer."
- "Security monitoring with logs is something we've needed, and this session really showed us how to implement it."
- "We've been struggling with intermittent wireless issues, and now we have a solid strategy to tackle them."

What's Next?

Virtual User Groups: Join us for our first LM Community Virtual User Group series, where you'll hear from fellow LogicMonitor customers about their hybrid observability journey. Register for your preferred region below!
- LM User Group | AMER East - Mar 20
- LM User Group | AMER West - Mar 20
- LM User Group | APAC - Mar 27
- LM User Group | EMEA - Mar 27

Elevate Community Conference: Join us in Dallas, TX; Sydney, AUS; and London, UK, to gain strategic insights, hands-on product experience, and exclusive networking opportunities. Elevate 2025 will showcase the latest innovations in AI-powered observability, empowering enterprises to optimize their modern data centers. Find more details and registration links here!

Stay tuned for more insights and opportunities to enhance your monitoring capabilities with LM Logs. Missed this session? Watch the full recording below ⤵️

Webhook Event Collection & Cisco Meraki
As a Cisco Meraki Strategic Technology Partner, we are always looking for ways to make our integration the best it can be, so you can get the most out of your investments with Cisco and LogicMonitor. Today we kicked off R&D planning for [safe harbor statement] the ability to collect webhook events from Cisco Meraki, with the following objectives:
- Mitigate Cisco Meraki Dashboard API rate limiting.
- Enable [near] real-time alerts for things like camera motion, IoT sensor measurement threshold breach (or automation button press), power supply failure...
- Facilitate sending webhook events from Cisco Meraki to LogicMonitor.

I have the following assumptions:
- Customers want to be alerted on most, but not all, webhook events.
- Customers want to have multiple inbound webhook configurations, e.g. for different tenants/customers or different Cisco Meraki organizations.
- Cisco Meraki is the first, but not the only, platform that customers will want to use to send webhook events to LogicMonitor.

If you had a magic wand and could make such an integration do exactly what you wanted, what would be your number one ask? Thank you!

Better Windows Event Monitoring
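To make the webhook discussion above concrete, here is a minimal sketch of the receiving side: a handler that takes one inbound webhook delivery, checks a shared secret, and decides whether the event type is alert-worthy or ingest-only. This is purely an illustration of the stated assumptions, not a LogicMonitor feature; the sharedSecret and alertType fields follow Meraki's webhook payload format, but the secret value and the set of alertable types are hypothetical.

```python
import json

# Hypothetical per-tenant configuration: which event types should page someone.
ALERTABLE = {"motion_alert", "sensor_alert", "power_supply_down"}
EXPECTED_SECRET = "example-shared-secret"  # assumption; set per inbound webhook config

def handle_webhook(body: bytes) -> tuple:
    """Return (http_status, action) for one inbound webhook delivery."""
    try:
        event = json.loads(body)
    except json.JSONDecodeError:
        return 400, "reject: not JSON"
    # Meraki echoes the shared secret configured on the dashboard side.
    if event.get("sharedSecret") != EXPECTED_SECRET:
        return 403, "reject: bad shared secret"
    alert_type = event.get("alertType", "")
    if alert_type in ALERTABLE:
        return 200, f"alert: {alert_type}"   # e.g. route to an escalation chain
    return 200, "ingest only"                # keep for search, don't page anyone

demo = json.dumps({"sharedSecret": "example-shared-secret",
                   "alertType": "motion_alert"}).encode()
print(handle_webhook(demo))  # → (200, 'alert: motion_alert')
```

Making ALERTABLE and EXPECTED_SECRET a lookup keyed by endpoint path would cover the multiple-inbound-configuration assumption (one configuration per tenant or Meraki organization).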
Hi,

As much as I love the graphs and visuals that LM produces for all sorts of metrics, a big part of our monitoring is keeping an eye on Windows Event Logs, which I have to say LM is not that good at. Adding exceptions is a pain (I now have so many that I often delete them by accident when adding new ones). I have been told several times that this is in the pipeline for the new UI, but it has not been mentioned yet. My first-line guys check our GFI and LM dashboards every morning, and I hear time and again that they prefer the GFI one for looking at Event Log messages. I have even caught them loading GFI onto servers that already have LM on them (costing us twice as much). Is there anything in the pipeline for this? I know it's not a priority for you guys, but I think for a lot of customers it would be.