Has anybody noticed the flaw in LogSource logic?
So LogSources have a few purposes:

- They allow you to filter out certain logs. I'm not sure of the use case here, since the LogSource is explicitly including logs. Maybe the point is to let you exclude logs that contain sensitive data: no masking of the data, just ignore the whole log. It's not clear whether the ignored logs still end up in LM Logs or get dumped entirely.
- They allow you to add tags to logs. This is actually pretty cool. You can parse important info out of the log or add device properties to the logs. This means you can add a device property to each log that can be displayed as a separate column, or even filtered upon. Each of our devices has a customer property on it, so I can add that property to each log and search or filter by customer. Device type, ssh.user, SKU, serial number, IP address; the list is literally endless. (See the example query at the end of this post.)
- They allow you to customize which device the logs get mapped to. You can specify that the incoming log should be matched to a device via IP address, hostname, FQDN, or something else. The documentation on this isn't exactly clear, but that actually doesn't matter, because…

LogSources apply to logs from devices that match the LogSource's AppliesTo. Which means the logs need to already be mapped to a device before the LogSource can map the logs to a certain device. Do you see the flaw? How is a log going to be processed by a LogSource so it can be properly mapped to a device, if the LogSource won't process the log unless it matches the AppliesTo, which references the device to which the logs haven't yet been mapped?

LogSources should not apply to devices. They should apply to Log Pipelines.
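For example, assuming the LogSource adds a customer tag that shows up as a queryable column (the field name and the exact filter syntax here are illustrative, not verified against a portal), a query along these lines would let you slice log volume by customer:

```
customer:"Acme Corp" | count by _resource.name | sort by _count desc
```

That would give a per-resource log count scoped to a single customer; swap in whatever tag name your LogSource actually adds.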
LM Logs multiple capture group parsing

Ok, this is cool. I have some log data that has structured data in it (some text, then a python list of strings). I had started building out a parse statement for each member of the list, then thought I'd try just making multiple capture groups and naming multiple variables after the as keyword. Turns out it completely works: it parses each capture group into the corresponding column with a single parse statement (rough example below). I was halfway through writing out a feature request when I figured I'd give it a try, only to discover that it already works. Nice job, LM Logs guys.
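A minimal sketch of the pattern, using a made-up log format (the field names and regex are illustrative only; the syntax mirrors the parse ... as form used in the other query posts here):

```
"job=" | parse /job=(\S+) duration=(\d+) status=(\S+)/ as Job, Duration, Status
```

Each of the three capture groups lands in its own column (Job, Duration, Status) from the single parse statement.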
LM Logs parser conditional formatting operator

Submitted to LM Feedback under the title "LM Logs parser colorization based on criteria"

As an engineer who is trying to see how certain logs relate to other logs, it would be helpful if I could highlight specific logs in context with other logs by using an advanced search operator to colorize certain logs that meet a certain criterion. For example, I run this query often:

"PagerDuty Ticket Creation" | parse /(.*) (SUMMARY|ERROR|INFO|DEBUG): (.*)/ as Script, Severity, Msg

One of the fields I parse is the Severity, which as you can see can have values of SUMMARY, ERROR, INFO, or DEBUG. It would be nice if I could add an operator to the query that would let me colorize rows based on the value of the parsed Severity column (Severity just in this case; for the general case, any expression on any column). For example, I'd like to run the query:

"PagerDuty Ticket Creation" | parse /(.*) (SUMMARY|ERROR|INFO|DEBUG): (.*)/ as Script, Severity, Msg | colorize Severity == "ERROR" as orange | colorize Severity ~ /SUMMARY|INFO/ as green

The result would be that rows in the table that have a value of "ERROR" would have a background color of orange (a muted orange), and rows in the table that have a value of "SUMMARY" or "INFO" would be colored green. Since the DEBUG logs don't match any colorization operator, they would have the default color of white.

It might be handy if one *or* two colors could be passed, allowing me to change the color of the text and the background, or just the background. It would be ok if I could only choose from a set list of colors, but it would be great if I could specify an RGBA color.
Getting started with Log analysis - useful queries

We at LogicMonitor want to make taking control of your log data easy for analysis, troubleshooting, and identifying trends. In this post, we will share a few helpful queries to get started with LM Logs: which devices are generating log data, and easy ways to track overall usage. In future posts, we'll share queries to dive deeper into specific log data types. What type of queries do you want to see? Reply to this post with the areas of log analysis or best practices you want covered.

Not up to date with LM Logs? Check out this blog post highlighting recent improvements and customer stories: A lookback at LM Logs

NOTE: Some assumptions for these queries:
- Each query's results are bound to the time picker value; adjust according to your needs.
- * is a wildcard value meaning ALL, which can be replaced by a Resource, Resource Group, Sub-Group, Device by Type, or Meta Data value.
- You may need to modify specific queries to match your LM portal.

Devices Sending Logs - use this query to easily see which LM monitored devices are currently ingesting log data into your portal:

* | count by _resource.name | sort by _count desc

Total Number of Devices Sending Logs - the previous query showed which devices are generating logs, while this query identifies the overall number of devices:

* | count by _resource.name | count

Total Volume by Resource Name - this query shows the total volume of log ingestion (as GB) by resource name, with the average, min, and max size per message. The results are sorted by GB descending, but you can modify the operators to identify your own trends:

* | count(_size), sum(_size), max(_size), min(_size) by _resource.name | num(_sum/1000000000) as GB | num(_sum/_count) as avg_size | sort by GB desc

Total Log Usage - this is a helpful query to run to see your overall log usage for the entire portal:

* | sum(_size) | num(_sum/1000000000) as GB | sort by GB desc

And finally, Daily Usage in Buckets - run this query to see an aggregated view of your daily log usage (a scoped variation follows just after this post):

* | beta:bucket(span=24h) | sum(_size) as size_in_bytes by _bucket | num(size_in_bytes/1000000000) as GB | sort by _bucket asc

We hope these help you get started!
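One possible variation (not from the original post) combines the wildcard-replacement tip with the daily bucket query: scope the usage report to resources whose names match a pattern, for example anything containing "prod" (the match string is a placeholder; adjust it to your own naming convention):

```
_resource.name ~ "prod" | beta:bucket(span=24h) | sum(_size) as size_in_bytes by _bucket | num(size_in_bytes/1000000000) as GB | sort by _bucket asc
```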
Introducing Logs for Lunch: Unlock your LM Logs potential during your lunch break!

Join us for an engaging new webinar that will transform how you understand and leverage LM Logs to drive better business outcomes. In this focused, lunch-friendly session, our experts will guide you through maximizing the full potential of your log data.

Why attend? Whether you're new to LM Logs or looking to deepen your expertise, each session delivers practical insights you can implement immediately. Learn how to:
- Slash your Mean Time to Resolution (MTTR) with advanced troubleshooting techniques
- Configure and optimize LM Logs across diverse data sources
- Transform raw log data into actionable intelligence
- Leverage built-in analysis tools to identify patterns and anomalies

What you'll get:
- Live demonstrations of real-world use cases
- Step-by-step setup and configuration guidance
- Interactive Q&A sessions with LM Logs experts
- Best practices for integration with your existing workflows
- Practical tips for immediate implementation

Perfect for:
- IT Operations teams seeking to streamline troubleshooting
- DevOps professionals looking to enhance monitoring capabilities
- Security teams who want better visibility into their log data
- Business leaders evaluating log management solutions
- Current users ready to unlock advanced features

Register now! Our first session will be on January 8, 2025, at 12:00 pm CT. Join our team at "lunch" for an interactive LM Logs discussion and demo to see how our logging solution helps your Ops teams:
- Solve problems faster with anomaly detection
- Simplify and standardize log management
- Use log data proactively to reduce major outages before they happen

Click here to register or follow the link below!

https://logicmonitor.zoom.us/webinar/register/WN_3z4XccEMRg61VUOzkuEUfA

Transform your log management strategy one lunch break at a time. See you there!
LogSource Resource Mapping Confusion

I have a ticket open but I'm hoping to get a quicker response here. I am using LogSources, but since we are an MSP with multiple clients, there seems to be an issue where syslogs are being mapped to other client devices that have the same IP, because I'm using IP=system.hostname as the mapping. I have even pointed all the duplicate IPs to their respective syslog collectors and it still maps wrong. Am I doing something wrong, or is the system not smart enough to know that the log came in on a particular collector and should therefore only be mapped to resources monitored by that collector? Is there a way I can use AND logic with the Token mapping for _lm.collectorId = system.collectorid? Thanks in advance.
Best Practices for Practitioners: Log Query Language, Pipelines, and Alerting

Overview
LogicMonitor's Logs feature provides a robust platform for log management, enabling IT professionals to efficiently ingest, process, and analyze log data. By leveraging advanced query capabilities and customizable processing pipelines, users can gain deep insights into their systems, facilitating proactive monitoring and rapid issue resolution.

Key Principles
- Comprehensive Log Collection: Aggregate logs from diverse sources to ensure a holistic view of your infrastructure.
- Advanced Querying: Utilize LogicMonitor's query language to filter and analyze log data effectively.
- Customizable Processing Pipelines: Design pipelines to filter and route logs based on specific criteria.
- Proactive Alerting: Set up alerts to monitor critical events and anomalies in real time.
- Continuous Optimization: Regularly review and refine log management strategies to align with evolving system requirements.

Logs Features and Methods

Query Language Overview
- Logical Operators: Employ operators ranging from simple AND, OR, and NOT to complex regex expressions to construct precise queries.
- Field Filtering: Filter logs based on specific fields such as resource names, groups, or severity levels.
- Pattern Matching: Use wildcards and regular expressions to match patterns within log messages.

Writing Filtering Queries
- Autocomplete Assistance: Begin typing in the query bar to receive suggestions for available fields and operators.
- Combining Conditions: Craft complex queries by combining multiple conditions to narrow down log results.
- Time Range Specification: Define specific time frames to focus on relevant log data.

Advanced Search Operators
- Comparison Operators: Utilize operators like >, <, >=, and <= to filter numerical data.
- Inclusion Operators: Use : for exact matches and ~ for partial matches within fields.
- Negation Operators: Apply ! and !~ to exclude specific values or patterns from results.
A combined example using these operators appears at the end of this post.

Log Processing Pipelines
- Pipeline Creation: Establish pipelines to define the flow and processing of log data based on set criteria.
- Alert Conditions: Integrate alert conditions within pipelines to monitor for specific events or anomalies.
- Unmapped Resources Handling: Manage logs from resources not actively monitored by associating them with designated pipelines.

Log Alert Conditions
- Threshold Settings: Define thresholds for log events to trigger alerts when conditions are met.
- Severity Levels: Assign severity levels to alerts to prioritize responses appropriately.
- Notification Configuration: Set up notifications to inform stakeholders promptly upon alert activation.

Best Practices

Efficient Query Construction
- Start Broad, Then Refine: Begin with general queries and incrementally add filters to hone in on specific data.
- Leverage Autocomplete: Utilize the query bar's autocomplete feature to explore available fields and operators.
- Save Frequent Queries: Store commonly used queries for quick access and consistency in analysis.

Optimizing Processing Pipelines
- Categorize Log Sources: Group similar log sources to streamline processing and analysis.
- Regularly Update Pipelines: Adjust pipelines to accommodate new log sources or changes in existing ones.
- Monitor Pipeline Performance: Keep an eye on pipeline efficiency to ensure timely processing of log data.

Proactive Alert Management
- Set Relevant Thresholds: Define alert conditions that align with operational baselines to minimize false positives.
- Review Alerts Periodically: Assess alert configurations regularly to ensure they remain pertinent to current system states.
- Integrate with Incident Response: Ensure alerts are connected to incident management workflows for swift resolution.

Implementation Checklist
✅ Aggregate logs from all critical infrastructure components.
✅ Familiarize yourself with LogicMonitor's query language and practice constructing queries.
✅ Design and implement log processing pipelines tailored to organizational needs.
✅ Establish alert conditions for high-priority events and anomalies.
✅ Schedule regular reviews of log management configurations and performance.

Conclusion
Effective log management is pivotal for maintaining robust and secure IT operations. By harnessing LogicMonitor's advanced querying capabilities, customizable processing pipelines, and proactive alerting mechanisms, practitioners can achieve comprehensive visibility and control over their systems. Continuous refinement and adherence to best practices will ensure that log management strategies evolve with organizational growth and technological advancements.

Additional Resources
- Query Language Overview
- Writing a Filtering Query
- Advanced Search Operators
- Logs Search Cheatsheet
- Logs Query Tracking
- Log Processing Pipelines
- Log Alert Conditions
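As a rough, combined illustration of the operators above (the "prod" and "heartbeat" strings are placeholders, and exact spacing and syntax may vary by portal version), the following sketch filters by a partial resource-name match, excludes noisy messages, and aggregates log volume per resource:

```
_resource.name ~ "prod" AND NOT "heartbeat" | count(_size), sum(_size) by _resource.name | num(_sum/1000000000) as GB | sort by GB desc
```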
Best Practices for Practitioners: LM Log Analysis and Anomaly Detection

Overview
LogicMonitor's Log Analysis and Anomaly Detection tools enhance IT infrastructure monitoring by providing real-time insights into system performance and security. These features simplify log inspection, highlight potential issues through sentiment analysis, and detect anomalies to expedite troubleshooting and reduce mean time to resolution (MTTR).

Key Principles
- Implement a comprehensive log collection strategy to ensure logs from all critical systems, applications, and network devices are gathered in a centralized location, providing a truly holistic view of your IT environment.
- Ingest log data efficiently by applying indexing and normalization techniques to structure raw logs, reducing noise and improving analysis accuracy.
- Detect and identify issues early by leveraging real-time analysis with AI and machine learning to identify patterns and anomalies as they occur, enabling proactive troubleshooting.
- Use data visualization tools such as dashboards and reports to present log data intuitively, making it easier to spot trends and anomalies.

Log Analysis Features and Methods
- Sentiment Analysis: LogicMonitor's Log Analysis assigns sentiment scores to logs based on keywords, helping prioritize logs that may indicate potential problems.
- Anomaly Detection: Automatically identifies unique deviations from normal patterns in log data, surfacing previously unknown issues predictively.
- Log Dashboard Widgets: Use Log widgets to filter and visualize log metrics in dashboard views, helping to quickly identify relevant log entries. (An example widget-style query appears at the end of this post.)

Core Best Practices

Data Collection
- Configure log sources to ensure comprehensive data collection across your whole IT infrastructure.
- Regularly review and update log collection configurations to accommodate changes in the environment.

Data Processing
- Implement filtering mechanisms to include only essential log data, optimizing storage and analysis efficiency.
- Ensure sensitive information is appropriately masked or excluded to maintain data security and compliance.

Analysis and Visualization
- Utilize LogicMonitor's AI-powered analysis tools to automatically detect anomalies and assign sentiment scores to log entries.
- Create and customize dashboards using log widgets to visualize log data pertinent to your monitoring objectives.

Performance Optimization
- Regularly monitor system performance metrics to identify and address potential bottlenecks in log processing.
- Adjust log collection and processing parameters to balance system performance with the need for comprehensive log data.

Security
- Implement role-based access controls (RBAC) to restrict log data visibility to authorized personnel only.
- Regularly audit log access and processing activities to ensure compliance with security policies.

Best Practices Checklist

Log Collection and Processing
✅ Ensure all critical log sources are collected and properly configured for analysis.
✅ Apply filters to exclude non-essential logs and improve data relevance.
✅ Normalize and index log data to enhance searchability and correlation.
✅ Regularly review log settings to adapt to system changes.

Anomaly Detection and Analysis
✅ Utilize AI-powered tools to detect anomalies and unusual patterns.
✅ Fine-tune detection thresholds to minimize false positives and missed issues.
✅ Use sentiment analysis to prioritize logs based on urgency.
✅ Correlate anomalies with system events for faster root cause identification.

Visualization and Monitoring
✅ Set up dashboards and widgets to track log trends and anomalies in real time.
✅ Create alerts for critical log events and anomalies to enable quick response.
✅ Regularly review and update alert rules to ensure relevance.

Performance and Optimization
✅ Monitor log processing performance to detect bottlenecks.
✅ Adjust log retention policies to balance storage needs and compliance.
✅ Scale resources dynamically based on log volume and analysis needs.

Security and Compliance
✅ Restrict log access to authorized users only.
✅ Mask or exclude sensitive data from log analysis.
✅ Encrypt log data and audit access regularly for compliance.

Troubleshooting Guide

Common Issues

Incomplete Log Data
- Symptoms: Missing or inconsistent log entries.
- Solutions: Verify log source configurations; ensure network connectivity between log sources and the monitoring system; check for filtering rules that may exclude necessary data.

Performance Degradation
- Symptoms: Delayed log processing; slow system response times.
- Solutions: Assess system resource utilization; optimize log collection intervals and batch sizes; consider scaling resources to accommodate higher data volumes.

False Positives in Anomaly Detection
- Symptoms: Frequent alerts for non-issue events.
- Solutions: Review and adjust anomaly detection thresholds; refine filtering rules to reduce noise; utilize sentiment analysis to prioritize significant events.

Logs Not Correlated to a Resource
- Symptoms: Logs appear in the system but are not linked to the correct resource, making analysis and troubleshooting difficult.
- Solutions: Ensure that log sources are correctly mapped to monitored resources within LogicMonitor. Check that resource properties, such as hostname or instance ID, are properly assigned and match the log entries. Verify that resource mapping rules are configured correctly and consistently applied. If using dynamic environments (e.g., cloud-based instances), confirm that auto-discovery and log ingestion settings align. Review collector logs for errors or mismatches in resource identification.

Monitoring and Alerting
- Set up pipeline alerts for critical events, such as system errors or security breaches, to enable prompt response.
- Regularly review alert configurations to ensure they align with current monitoring objectives and system configurations.

Conclusion
Implementing LogicMonitor's Log Analysis and Anomaly Detection features effectively requires a strategic approach to data collection, processing, analysis, and visualization. By adhering to these best practices, practitioners can enhance system performance monitoring, expedite troubleshooting, and maintain robust security postures within their IT environments.

Additional Resources
- Log Anomaly Detection
- Log Analysis
- Accessing Log Analysis
- Log Analysis Widget
- Filtering Logs Using Log Analysis
- Viewing Logs and Log Anomalies
- Log Analysis Demonstration Video
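For the dashboard-widget use case mentioned above, a simple widget-style query sketch could trend error volume over time. This assumes counting by the bucket field works the same way as the sum-by-bucket pattern shown in the earlier "useful queries" post, and the "ERROR" keyword is only an example; adjust both to your portal:

```
"ERROR" | beta:bucket(span=1h) | count by _bucket | sort by _bucket asc
```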
REGISTER: Logs For Lunch - Network Observability & Wireless Connectivity

Just in time for Valentine's Day, we're sharing the love with a double-feature! Come and learn about "Network Observability and Troubleshooting" and "Monitoring Wireless Connectivity". Join us for a quick and delicious 45-minute "lunchtime" session and discover how to unlock the secrets hidden within your log data.

Learn how to:
- Troubleshoot your network infrastructure like a pro.
- Monitor the performance of your wireless network devices.
- Leverage powerful metrics to drive better business outcomes.

Fall head over heels for the insights that await you. Don't miss out on this Valentine's treat! Register here!
Best Practices for Practitioners: LM Logs Ingestion and Processing

Overview
LogicMonitor's LM Logs provide unified log analysis through algorithmic root-cause detection and pattern recognition. The platform ingests logs from diverse IT environments, identifies normal patterns, and detects anomalies to enable early issue resolution. Proper implementation ensures optimal log collection, processing, and analysis capabilities while maintaining system performance and security.

Key Principles
- Implement centralized log collection systems to unify and ensure comprehensive visibility across your IT infrastructure.
- Establish accurate resource mapping processes to maintain contextual relationships between logs and monitored resources.
- Protect sensitive data through appropriate filtering and security measures before any log transmission occurs.
- Maintain system efficiency by carefully balancing log collection frequency and data volume.
- Deploy consistent methods across similar resource types to ensure standardized log management.
- Cover all critical systems while avoiding unnecessary log collection to optimize monitoring effectiveness.

Log Ingestion Types and Methods

System Logs

Syslog Configuration
- Use LogSource as the primary configuration method.
- Configure port 514/UDP for collection.
- Implement proper resource mapping using system properties.
- Configure filters for sensitive data removal.
- Set up appropriate date/timestamp parsing.

Windows Event Logs
- Utilize LogSource for optimal configuration.
- Deploy the Windows_Events_LMLogs DataSource.
- Configure appropriate event channels and log levels.
- Implement filtering based on event IDs and message content.
- Set up proper batching for event collection.

Container and Orchestration Logs

Kubernetes Logs
- Choose the appropriate collection method: LogSource (recommended), LogicMonitor Collector configuration, or the lm-logs Helm chart implementation.
- Configure proper resource mapping for pods and containers.
- Set up filtering for system and application logs.
- Implement proper buffer configurations.

Cloud Platform Logs

AWS Logs
- Deploy using CloudFormation or Terraform.
- Configure the Lambda function for log forwarding.
- Set up proper IAM roles and permissions.
- Implement log collection for specific services: EC2 instance logs, ELB access logs, CloudTrail logs, CloudFront logs, S3 bucket logs, RDS logs, Lambda logs, and flow logs.

Azure Logs
- Deploy the Azure Function and Event Hub.
- Configure managed identity for resource access.
- Set up diagnostic settings for resources.
- Implement VM logging for both Linux and Windows VM configurations.
- Configure proper resource mapping.

GCP Logs
- Configure PubSub topics and subscriptions.
- Set up the VM forwarder.
- Configure export paths for different log types.
- Implement proper resource mapping.
- Set up appropriate filters.

Application Logs

Direct API Integration
- Utilize the logs ingestion API endpoint.
- Implement proper authentication using LMv1 API tokens.
- Follow payload size limitations.
- Configure appropriate resource mapping.
- Implement error handling and retry logic.

Log Aggregators

Fluentd Integration
- Install and configure fluent-plugin-lm-logs.
- Set up proper resource mapping.
- Configure buffer settings.
- Implement appropriate filtering.
- Optimize performance settings.

Logstash Integration
- Install the logstash-output-lmlogs plugin.
- Configure proper authentication.
- Set up metadata handling.
- Implement resource mapping.
- Configure performance optimization.

Core Best Practices

Collection
- Use LogSource for supported system logs; use cloud-native solutions for cloud services.
- Configure optimal batch sizes and buffer settings.
- Enable error handling and monitoring.
- Implement systematic collection methods across similar resources.

Resource Mapping
- Verify unique identifiers for accurate mapping.
- Maintain consistent naming conventions.
- Test mapping configurations before deployment.
- Document mapping rules and relationships.

Data Management
- Filter sensitive information and non-essential logs.
- Set retention periods based on compliance and needs.
- Monitor storage utilization.
- Implement data lifecycle policies.

Performance
- Optimize batch sizes and intervals.
- Monitor collector metrics.
- Adjust queue sizes for volume.
- Balance load in high-volume environments.

Security
- Use minimal-permission API accounts.
- Secure credentials and encrypt transmission.
- Audit access regularly.
- Monitor security events.

Implementation Checklist

Setup
✅ Map log sources and requirements
✅ Create API tokens
✅ Configure filters
✅ Test initial setup

Configuration
✅ Verify collector versions
✅ Set up resource mapping
✅ Test data flow
✅ Enable monitoring

Security
✅ Configure PII filtering
✅ Secure credentials
✅ Enable encryption
✅ Document controls

Performance
✅ Set batch sizes
✅ Configure alerts
✅ Enable monitoring
✅ Plan scaling

Maintenance
✅ Review filters
✅ Audit mappings
✅ Check retention
✅ Update security

Troubleshooting Guide

Common Issues

Resource Mapping Failures
- Verify property configurations.
- Check collector logs.
- Validate resource existence.
- Review mapping rules.

Performance Issues
- Monitor collector metrics.
- Review batch configurations.
- Check resource utilization.
- Analyze queue depths.

Data Loss
- Verify collection configurations.
- Check network connectivity.
- Review error logs.
- Validate filtering rules.
(A simple verification query appears at the end of this post.)

Monitoring and Alerting
- Set up alerts for: collection failures, resource mapping issues, performance degradation, and security events.
- Regularly monitor: collection metrics, resource utilization, error rates, and processing delays.

Conclusion
Successful implementation of LM Logs requires careful attention to collection configuration, resource mapping, security, and performance optimization. Regular monitoring and maintenance of these elements ensures continued effectiveness of your log management strategy while maintaining system efficiency and security compliance. Follow these best practices to maximize the value of your LM Logs implementation while minimizing potential issues and maintenance overhead. The diversity of log sources and ingestion methods requires a well-planned approach to implementation, considering the specific requirements and characteristics of each source type. Regular review and updates of your logging strategy ensure optimal performance and value from your LM Logs deployment.

Additional Resources
- About Log Ingestion
- Sending Syslog Logs
- Sending Windows Log Events
- Sending Kubernetes Logs and Events
- Sending AWS Logs
- Sending Azure Logs
- Sending GCP Logs
- Sending Okta Logs
- Sending Fluentd Logs
- Sending Logstash Logs
- Sending Logs to Ingestion API
- Log Processing
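As referenced in the Data Loss section above, a quick verification sketch after onboarding a new ingestion source is to scope a query to the resource you expect logs from and confirm the volume is non-zero. The resource name below is a placeholder, and the syntax follows the usage-query patterns shared in the "Getting started" post earlier in this digest:

```
_resource.name:"new-web-01" | sum(_size) | num(_sum/1000000) as MB
```

If this returns nothing over a recent time range, check resource mapping and collection configuration before digging into filters.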