August Product Power Hour: Edwin AI In Action
Overview

This month’s Product Power Hour was a deep-dive demo of LogicMonitor’s Edwin AI, going beyond the fundamentals to showcase new capabilities, real-world usage patterns, and what’s coming next on the Agentic AI roadmap. The session was packed with live demonstrations, product walkthroughs, and interactive discussions that brought Edwin AI’s intelligent observability features into sharper focus. From alert deduplication to automated investigations and AI-powered root cause suggestions, the session left no doubt about Edwin’s power to reduce noise and accelerate resolution. Attendees gained a clearer understanding of what Edwin AI can do today, as well as what’s possible tomorrow.

Key Highlights

⭐ Next-Level AI Investigations: We went beyond the basics to show how Edwin uses out-of-the-box correlation models and enriched context to pinpoint likely root causes faster.
⭐ Targeted Alert Routing: Discussions explored how Edwin AI’s rapid evolution could lead to support for role-specific alerting during deduplication events—a capability on the product radar.
⭐ Flexible LLM Usage: The demo showcased how Edwin AI leverages both OpenAI and Anthropic via AWS Bedrock, selecting the optimal model for each task to ensure precision and performance.
⭐ Out-of-the-Box & Tunable Models: Attendees learned they don’t have to start from scratch, as Edwin comes with built-in models that can be adjusted to fit your environment.
⭐ Strong Customer Momentum: The session shared how attendees are actively exploring Edwin AI via Proof of Concepts or preparing for a Q4 rollout.

Q&A

Q: Are out-of-the-box correlation models available, or do we build from scratch?
A: Yes, Edwin AI provides pre-built models that can be fine-tuned for your specific needs.

Q: How is customer data handled with LLMs?
A: LogicMonitor does not train on customer data. All data is securely segregated, and models are selected based on the task—nothing is shared across tenants.

Q: Can we filter out alerts based on custom resource properties?
A: Yes, Edwin AI supports filtering logic at ingestion to give you control over what alerts it processes.

Q: Can deduplicated alerts be routed to different stakeholders?
A: This isn’t available yet, but it’s a hot topic and something we’re exploring for future iterations.

Customer Quote Call-outs

🌟 “We’re planning a PoV and are really curious to see how Edwin handles topology-driven data.”
🌟 “Excited about where this is going—we’d love to see automation and self-remediation layered in.”
🌟 “The multi-model approach makes so much sense—great to see task-specific LLMs being used.”

What’s Next

🏕️ Camp LogicMonitor: An Observability Adventure
On August 18th, we kicked off our first Camp LogicMonitor! Join us for this 4-week virtual learning experience designed for LogicMonitor users of all levels. Each week features self-paced lessons, community discussions, and live Campfire Chats with product experts. Earn badges, grow your skills, and score exclusive LogicMonitor swag!
👉 Register now to reserve your spot!

👥 User Groups
Connect in person with other LM users in your city over dinner and real talk. Share wins, swap stories, and grow your network. RSVP today:
Denver - September 10
Stay tuned to our LM Community User Group Hub for upcoming virtual sessions.
Note: As we finalize our speakers, these dates and times may change, but be sure to register for your respective regions above so we can keep you informed!
🪵 Logs for Lunch
September 10: Logs Overages & Reducing MTTR with Cloud Logs

⚡ Product Power Hour
September 18: Cloud Monitoring With Containers
Want to check out previous Product Power Hours? Explore the Product Power Hour Hub in the LM Community!

📚 Badges and Certifications
Earn free, on-demand, digital badges that validate your product knowledge and platform skills. Available badges:
🛡️ Getting Started
🛡️ Collectors
🛡️ Logs
🛡️ AI Ops Adoption
🛡️ Dashboards

Review

If you missed any part of the session or want to revisit the content, we’ve got you covered:
Review the slide deck
Want to dive deeper into this session? Watch the recording below ⬇️

July Product Power Hour Recap: Monitoring Your AI Workloads with LM
Overview

In this edition of Product Power Hour, the LM team explored how LogicMonitor can be used to effectively monitor AI workloads across modern environments. The session walked through best practices for monitoring key components of AI systems—including GPU metrics, model latency, and infrastructure dependencies—using LogicMonitor’s platform. Attendees gained insights into real-world AI observability challenges and how LogicMonitor enables end-to-end visibility into the health of AI services.

Key Highlights

⭐ AI Workload Dashboards: Demonstrated how to build dashboards tailored to AI-specific metrics, including GPU utilization, job runtimes, and inference latency.
⭐ Dynamic Thresholds: Discussed using anomaly detection to set smarter thresholds for variable workloads like training jobs and inference endpoints, helping reduce alert fatigue and improve model reliability by adapting to fluctuating usage patterns (an illustrative sketch of this idea appears just before the Review section below).
⭐ Unified Monitoring: Emphasized LM’s ability to consolidate data across cloud, on-prem, and edge environments—critical for hybrid AI infrastructure.
⭐ Alert Routing + Suppression: Demonstrated how to avoid alert fatigue by using alert tuning and dynamic suppression during scheduled AI retraining windows.

Q&A

Q: Can LogicMonitor monitor GPU metrics out-of-the-box?
A: Yes, LM has native collectors and integrations to pull in GPU metrics from platforms like NVIDIA and cloud providers.

Q: Is LM useful for model observability?
A: While LM focuses on infrastructure-level monitoring, it provides context crucial to understanding model performance issues (e.g., degraded latency tied to resource constraints).

Q: How does alert suppression work during model retraining?
A: You can set up dynamic suppression rules based on job schedules or metadata to avoid false positives during known high-usage periods.

Q: Does LM integrate with tools like PagerDuty or Slack?
A: Yes. These integrations are supported and were demoed live during the session.

Customer Call-outs

🌟 “I can now see infrastructure issues that were hard to diagnose before.”
🌟 “LM’s GPU monitoring capabilities have been helpful for managing cloud costs and performance.”

What’s Next

📚 Badges and Certifications
We’ve launched our new LogicMonitor Badges and Certifications program in LM Academy. Earn free, on-demand, digital badges that validate your product knowledge and platform skills. Available badges:
🛡️ Getting Started
🛡️ Collectors
🛡️ Logs
Launching July 31:
🛡️ AI Ops Adoption

🏕️ Camp LogicMonitor: An Observability Adventure
Join us starting August 18th for this 4-week virtual learning experience designed for LogicMonitor users of all levels. Each week features self-paced lessons, community discussions, and live Campfire Chats with product experts. Earn badges, grow your skills, and score exclusive LogicMonitor swag!
👉 Register now to reserve your spot!

🪵 Logs for Lunch
August 12 – Network Troubleshooting & Getting Started with Logs

⚡ Product Power Hour
August 19 - Edwin AI In Action
Want to check out previous Product Power Hours? Explore the Product Power Hour Hub in LM Community!

👥 User Groups
Connect in person with other LM users in your city over dinner and real talk. Share wins, swap stories, and grow your network. RSVP today:
Salt Lake City - September 9
Denver - September 10
Stay tuned to our LM Community User Group Hub for upcoming virtual sessions.
Note: As we finalize our speakers, these dates and times may change, but be sure to register for your respective regions above so we can keep you informed!
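To make the Dynamic Thresholds highlight above a little more concrete, here is a minimal, generic sketch of a rolling-baseline check in Python. It is purely illustrative and is not LogicMonitor’s anomaly detection implementation; the window size, sensitivity factor, and function names are hypothetical choices made for this example.

```python
# Illustrative only: a generic rolling-baseline threshold, not LogicMonitor's
# anomaly detection algorithm. All names and parameters here are hypothetical.
from collections import deque
from statistics import mean, stdev


def make_dynamic_threshold(window_size: int = 30, sensitivity: float = 3.0):
    """Return a checker that flags a sample as anomalous when it drifts more
    than `sensitivity` standard deviations from the recent rolling baseline."""
    history: deque = deque(maxlen=window_size)

    def check(value: float) -> bool:
        # Warm-up: until the window is full, just record samples and never alert.
        if len(history) < window_size:
            history.append(value)
            return False
        baseline = mean(history)
        spread = stdev(history) or 1e-9  # guard against a perfectly flat signal
        is_anomaly = abs(value - baseline) > sensitivity * spread
        history.append(value)  # the baseline keeps adapting to new usage levels
        return is_anomaly

    return check


# Example: GPU utilization (%) that swings with training jobs. A static "alert
# above 80%" rule would fire constantly; the rolling baseline only flags a
# departure that is unusual relative to recent behavior.
check_gpu = make_dynamic_threshold(window_size=5, sensitivity=3.0)
for sample in [62.0, 65.0, 70.0, 68.0, 74.0, 71.0, 99.0]:
    if check_gpu(sample):
        print(f"Anomalous GPU utilization: {sample}%")
```

The point of this design is that the alerting boundary moves with the recent workload: a GPU that routinely runs hot during scheduled training does not page anyone, while a genuinely unusual departure still does.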
Review

If you missed any part of the session or want to revisit the content, we’ve got you covered:
Review the slide deck here
Want to see the full session? Watch the recording below ⬇️

Next Up in Our Product Power Hour Series: Mastering LM Collectors on March 26th
Are you making the most of LM Collectors to optimize your monitoring strategy? Whether you're fine-tuning performance, troubleshooting common challenges, or preparing for future scalability, our next session in the Product Power Hour series is designed to help you get the most out of your monitoring infrastructure. In this interactive live session, we’ll take a deep dive into best practices, performance optimizations, and upcoming enhancements, ensuring your data collection remains seamless, efficient, and built to scale.

📅 Date: March 26th
🕙 Time: 10 AM CST

Featuring Guest Speakers:
Craig Phelps – Product Manager, LogicMonitor
Barry Ballard – Principal Product Trainer, LogicMonitor

What to Expect

Hosted by the LM Community team and product experts, this session will provide valuable insights into:
✅ Optimizing Collector Performance – Fine-tune configurations to maximize efficiency.
✅ Best Practices & Troubleshooting – Resolve common challenges and improve uptime.
✅ Scaling for Growth – Ensure your monitoring setup is ready for expansion.
✅ What’s Coming in 2025 – Get a sneak peek at upcoming features and enhancements.

Exclusive Power-User Use Case

Beyond best practices, we’ll also hear from one of our top engineers, who will share a real-world success story from a power-user customer. Learn how they’ve optimized their Collector strategy to improve efficiency, scalability, and performance, and how you can apply these insights to your own monitoring environment.

Register Now & Join Us Live!

As part of our ongoing Product Power Hour series, this session is perfect for IT practitioners, engineers, and monitoring professionals looking to optimize their LM Collectors and build a future-proof monitoring strategy. Don’t miss out—register today to secure your spot!
📅 Can’t attend live? No problem! Register anyway, and we’ll send you the full recap so you don’t miss a thing.
🔗 Register Now