Best Practices for Practitioners: LM Logs Ingestion and Processing
Overview
LogicMonitor's LM Logs provide unified log analysis through algorithmic root-cause detection and pattern recognition. The platform ingests logs from diverse IT environments, identifies normal patterns, and detects anomalies to enable early issue resolution. Proper implementation ensures optimal log collection, processing, and analysis capabilities while maintaining system performance and security.
Key Principles
- Implement centralized log collection to provide unified, comprehensive visibility across your IT infrastructure
- Establish accurate resource mapping processes to maintain contextual relationships between logs and monitored resources
- Protect sensitive data through appropriate filtering and security measures before any log transmission occurs
- Maintain system efficiency by carefully balancing log collection frequency and data volume
- Deploy consistent methods across similar resource types to ensure standardized log management
- Cover all critical systems while avoiding unnecessary log collection to optimize monitoring effectiveness
Log Ingestion Types and Methods
System Logs
Syslog Configuration
- Use LogSource as the primary configuration method
- Configure port 514/UDP for collection
- Implement proper resource mapping using system properties
- Configure filters for sensitive data removal
- Set up appropriate date/timestamp parsing
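To verify the collector's syslog listener end to end, it can help to emit a well-formed test message and confirm it appears in LM Logs mapped to the right resource. The sketch below is a minimal example assuming RFC 5424 framing and the default 514/UDP listener; the hostname field matters because it is a common property used for resource mapping.

```python
import socket
from datetime import datetime, timezone

def build_syslog_message(hostname, app, msg, facility=1, severity=6):
    """Build a minimal RFC 5424 syslog message; PRI = facility * 8 + severity."""
    pri = facility * 8 + severity
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return f"<{pri}>1 {ts} {hostname} {app} - - - {msg}".encode()

def send_test_message(collector_host, port=514):
    """Send one UDP datagram to the collector's syslog listener."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(build_syslog_message("web-01", "myapp", "syslog test"),
                    (collector_host, port))
    finally:
        sock.close()
```

After sending, check that the message lands in LM Logs and is attributed to the expected resource; if it shows up unmapped, revisit the hostname/IP properties used for mapping.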
Windows Event Logs
- Utilize LogSource for optimal configuration
- Deploy Windows_Events_LMLogs DataSource
- Configure appropriate event channels and log levels
- Implement filtering based on event IDs and message content
- Set up proper batching for event collection
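Filtering at the source keeps noisy channels from inflating ingestion volume. A minimal sketch of an allow-list filter combining event IDs and message content; the event IDs and field names here are illustrative, not a recommended production list:

```python
# Example Windows event IDs: 4625 failed logon, 4740 account lockout,
# 7034 service crashed unexpectedly, 6008 unexpected shutdown.
CRITICAL_EVENT_IDS = {4625, 4740, 7034, 6008}
DROP_PATTERNS = ("heartbeat", "test message")  # substrings to suppress

def should_forward(event):
    """Keep only allow-listed event IDs whose message is not known noise."""
    if event.get("event_id") not in CRITICAL_EVENT_IDS:
        return False
    message = event.get("message", "").lower()
    return not any(p in message for p in DROP_PATTERNS)
```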
Container and Orchestration Logs
Kubernetes Logs
- Choose the appropriate collection method:
  - LogSource (recommended)
  - LogicMonitor Collector configuration
  - lm-logs Helm chart implementation
- Configure proper resource mapping for pods and containers
- Set up filtering for system and application logs
- Implement proper buffer configurations
Cloud Platform Logs
AWS Logs
- Deploy using CloudFormation or Terraform
- Configure Lambda function for log forwarding
- Set up proper IAM roles and permissions
- Implement log collection for specific services:
  - EC2 instance logs
  - ELB access logs
  - CloudTrail logs
  - CloudFront logs
  - S3 bucket logs
  - RDS logs
  - Lambda logs
  - Flow logs
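Whichever deployment method you use, a forwarding Lambda receives CloudWatch Logs subscription data base64-encoded and gzipped under `event['awslogs']['data']`. A minimal sketch of the decode step, with the actual forwarding call to LM Logs elided:

```python
import base64
import gzip
import json

def decode_cloudwatch_event(event):
    """Unwrap a CloudWatch Logs subscription delivery and return its log events."""
    raw = base64.b64decode(event["awslogs"]["data"])
    payload = json.loads(gzip.decompress(raw))
    return payload.get("logEvents", [])

def handler(event, context):
    """Hypothetical Lambda entry point: decode, then forward each event."""
    for log_event in decode_cloudwatch_event(event):
        print(log_event["message"])  # replace with a call to the LM Logs ingestion API
```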
Azure Logs
- Deploy Azure Function and Event Hub
- Configure managed identity for resource access
- Set up diagnostic settings for resources
- Implement VM logging:
  - Linux VM configuration
  - Windows VM configuration
- Configure proper resource mapping
GCP Logs
- Configure PubSub topics and subscriptions
- Set up VM forwarder
- Configure export paths for different log types
- Implement proper resource mapping
- Set up appropriate filters
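A Pub/Sub push delivery wraps each exported Cloud Logging entry as base64-encoded JSON inside the message envelope, so a forwarder must unwrap it before filtering and mapping. A minimal sketch of that step:

```python
import base64
import json

def decode_pubsub_message(envelope):
    """Unwrap a Pub/Sub push delivery: the log entry is base64-encoded JSON
    under envelope['message']['data']."""
    data = envelope["message"]["data"]
    return json.loads(base64.b64decode(data))
```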
Application Logs
Direct API Integration
- Utilize the logs ingestion API endpoint
- Implement proper authentication using LMv1 API tokens
- Follow payload size limitations
- Configure appropriate resource mapping
- Implement error handling and retry logic
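As a sketch of the authentication step: the LMv1 scheme signs the HTTP verb, millisecond timestamp, request body, and resource path with HMAC-SHA256. The `/log/ingest` path and the `_lm.resourceId` mapping field below reflect the LM Logs ingestion API as commonly documented; verify both against your portal's current API reference before relying on them.

```python
import base64
import hashlib
import hmac
import json
import time

def lmv1_auth_header(access_id, access_key, http_verb, resource_path, data,
                     epoch_ms=None):
    """Build an LMv1 Authorization header: base64 of the hex HMAC-SHA256
    over verb + timestamp + body + path."""
    epoch_ms = epoch_ms or int(time.time() * 1000)
    request_vars = f"{http_verb}{epoch_ms}{data}{resource_path}"
    digest = hmac.new(access_key.encode(), request_vars.encode(),
                      hashlib.sha256).hexdigest()
    signature = base64.b64encode(digest.encode()).decode()
    return f"LMv1 {access_id}:{signature}:{epoch_ms}"

# Example body: one log line mapped to a resource by hostname.
payload = json.dumps([{
    "message": "application started",
    "_lm.resourceId": {"system.hostname": "web-01"},
}])
header = lmv1_auth_header("ACCESS_ID", "ACCESS_KEY", "POST", "/log/ingest", payload)
```

Send the payload as the POST body with this header; on 429 or 5xx responses, retry with exponential backoff rather than dropping the batch.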
Log Aggregators
Fluentd Integration
- Install and configure fluent-plugin-lm-logs
- Set up proper resource mapping
- Configure buffer settings
- Implement appropriate filtering
- Optimize performance settings
Logstash Integration
- Install logstash-output-lmlogs plugin
- Configure proper authentication
- Set up metadata handling
- Implement resource mapping
- Configure performance optimization
Core Best Practices
Collection
- Use LogSource for supported system logs; cloud-native solutions for cloud services
- Configure optimal batch sizes and buffer settings
- Enable error handling and monitoring
- Implement systematic collection methods across similar resources
Resource Mapping
- Verify unique identifiers for accurate mapping
- Maintain consistent naming conventions
- Test mapping configurations before deployment
- Document mapping rules and relationships
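Mapping failures often surface only as unmapped logs in the portal, so a pre-deployment check that every record carries the properties used for mapping catches them earlier. A minimal sketch; the `_lm.resourceId` field name and the required-keys list are assumptions to adapt to your own mapping rules:

```python
def validate_mapping(records, required_keys=("system.hostname",)):
    """Return (index, missing_keys) for every record whose resource-mapping
    properties are absent or empty, so they can be fixed before deployment."""
    failures = []
    for i, rec in enumerate(records):
        rid = rec.get("_lm.resourceId", {})
        missing = [k for k in required_keys if not rid.get(k)]
        if missing:
            failures.append((i, missing))
    return failures
```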
Data Management
- Filter sensitive information and non-essential logs
- Set retention periods based on compliance and needs
- Monitor storage utilization
- Implement data lifecycle policies
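Sensitive values should be scrubbed before logs leave the source network, not after ingestion. A minimal redaction sketch; the patterns below are illustrative starting points, and a real deployment should cover whatever identifiers fall within its compliance scope:

```python
import re

# Illustrative patterns only; extend for your compliance requirements.
SENSITIVE_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US SSN format
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),        # card-like digit runs
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
]

def scrub(message):
    """Replace sensitive substrings with fixed tokens before transmission."""
    for pattern, token in SENSITIVE_PATTERNS:
        message = pattern.sub(token, message)
    return message
```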
Performance
- Optimize batch sizes and intervals
- Monitor collector metrics
- Adjust queue sizes for volume
- Balance load in high-volume environments
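Batching interacts directly with the ingestion payload limit: batches that are too large get rejected, while batches that are too small waste requests. A sketch of size-aware batching; the byte cap is a hypothetical placeholder and the size accounting is approximate, so check your account's actual payload limit:

```python
import json

MAX_BATCH_BYTES = 512 * 1024  # hypothetical cap; use your account's actual limit

def batch_records(records, max_bytes=MAX_BATCH_BYTES):
    """Yield lists of records whose approximate serialized size stays
    under max_bytes, so no single request exceeds the payload limit."""
    batch, size = [], 2  # 2 bytes for the surrounding "[]"
    for rec in records:
        rec_size = len(json.dumps(rec)) + 2  # +2 approximates the separator
        if batch and size + rec_size > max_bytes:
            yield batch
            batch, size = [], 2
        batch.append(rec)
        size += rec_size
    if batch:
        yield batch
```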
Security
- Use minimal-permission API accounts
- Secure credentials and encrypt transmission
- Audit access regularly
- Monitor security events
Implementation Checklist
Setup
✅ Map log sources and requirements
✅ Create API tokens
✅ Configure filters
✅ Test initial setup
Configuration
✅ Verify collector versions
✅ Set up resource mapping
✅ Test data flow
✅ Enable monitoring
Security
✅ Configure PII filtering
✅ Secure credentials
✅ Enable encryption
✅ Document controls
Performance
✅ Set batch sizes
✅ Configure alerts
✅ Enable monitoring
✅ Plan scaling
Maintenance
✅ Review filters
✅ Audit mappings
✅ Check retention
✅ Update security
Troubleshooting Guide
Common Issues
Resource Mapping Failures
- Verify property configurations
- Check collector logs
- Validate resource existence
- Review mapping rules
Performance Issues
- Monitor collector metrics
- Review batch configurations
- Check resource utilization
- Analyze queue depths
Data Loss
- Verify collection configurations
- Check network connectivity
- Review error logs
- Validate filtering rules
Monitoring and Alerting
Set up alerts for:
- Collection failures
- Resource mapping issues
- Performance degradation
- Security events
Regular monitoring of:
- Collection metrics
- Resource utilization
- Error rates
- Processing delays
Conclusion
Successful implementation of LM Logs requires careful attention to collection configuration, resource mapping, security, and performance optimization. Regular monitoring and maintenance of these elements ensures continued effectiveness of your log management strategy while maintaining system efficiency and security compliance. Follow these best practices to maximize the value of your LM Logs implementation while minimizing potential issues and maintenance overhead.
The diversity of log sources and ingestion methods calls for a well-planned implementation that accounts for the specific requirements and characteristics of each source type. Regular review and updates of your logging strategy ensure continued performance and value from your LM Logs deployment.
Additional Resources
Sending Kubernetes Logs and Events