October Product Power Hour: App Visibility with ITOps
Overview

October's Product Power Hour took us deep into Application Visibility for ITOps, featuring a hands-on demo of Dynamic Service Insights (DSI) and LM Uptime, the latest innovations designed to simplify service mapping and amplify visibility across complex hybrid environments. The session delivered live demos, customer-driven discussions, and expert insights into how LogicMonitor can dynamically model your business services, link application layers, and accelerate root cause analysis. Attendees walked away ready to bring clarity and efficiency to their service monitoring strategy.

Key Highlights

⭐ Dynamic Service Insights in Action: See how DSI automatically builds services from custom properties, saving time and ensuring continuous accuracy.
⭐ End-to-End Service Modeling: Discover how to link technologies like F5, web servers, app tiers, and databases for true end-to-end visibility.
⭐ Contextual Alerting & Visualization: Learn how aggregated, service-level alerts minimize noise while improving MTTR.
⭐ ITSM Alignment: DSI supports immutable IDs and friendly names, making it easy to sync with your ITSM workflows.
⭐ Community Excitement: Customers shared real-world use cases and early wins, from reducing manual setup to improving collaboration across teams.

Q&A

Q: How does DSI reduce complexity in service setup?
A: DSI dynamically builds and updates services based on defined properties. There is no need for manual service configuration as your environment changes. (A hedged property-tagging sketch appears at the end of this recap.)

Q: Can I model complete end-to-end services (for example, F5 → web → app → database)?
A: Yes. You can create linked service definitions that connect all these components for full-stack visibility. Best practices include defining relationships at each layer using consistent naming conventions and grouping resources under business-critical services for faster troubleshooting.

Q: How can Dynamic Service Insights help correlate low MOS call scores with specific network devices causing jitter, latency, or packet loss? Can this mapping be automated?
A: DSI enables correlation across network and application layers by mapping related metrics to shared service properties. This allows you to associate low-quality voice or collaboration data (such as MOS scores) with the underlying network devices introducing packet loss or latency, providing actionable insights and automated context.

Q: Can alerts roll up to the service level instead of flooding my inbox with device alerts?
A: Yes. DSI surfaces a single contextual alert that aggregates underlying data, giving you clarity and less noise.

Q: How can I align LogicMonitor services with my ITSM system?
A: Use immutable service IDs alongside friendly names. They can coexist to improve reporting and maintain ITSM alignment.

Customer Call-outs

⭐ "OMG I am jumping up and down with joy right now!"
⭐ "This really helps with complexity. We can finally create services dynamically!"
⭐ "We'd love to join the Services Beta and integrate this with our dashboards."
⭐ "Great job, team. Very insightful!"

What's Next

👥 User Groups
Connect with other LM users in your region, share wins, swap stories, and grow your network.

💻 Virtual User Groups:
🌎 AMER East - Dec 2 at 11am EST 🌐 Register here
🌎 AMER West - Dec 2 at 11am PST 🌐 Register here
🌍 EMEA - Dec 3 at 1pm GMT 🌐 Register here
🌏 APAC - Dec 4 at 10am AEDT 🌐 Register here

Note: As we finalize our speakers, these dates and times may change, but be sure to register for your respective regions above so we can keep you informed!
Webinars

🪵 Logs for Lunch - Maximizing LM Logs for Faster Troubleshooting
Nov 12 - register today

⚡ Product Power Hour - Exploring Agents with Edwin AI
Nov 18 - register today

Badges & Certifications

Earn free, on-demand badges that validate your LogicMonitor skills:
🛡️ Getting Started
🛡️ Collectors
🛡️ Logs
🛡️ AI Ops Adoption
🛡️ Dashboards
🛡️ NEW: Service Insights
🛡️ NEW: Alerts
Start learning today on LM Academy

Review & Resources

If you missed any part of the session or want to revisit the content, we've got you covered:
Review the slide deck
Dynamic Service Insights Solution Brief
Want to dive deeper into this session? Watch the recording below ⬇️
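Bonus: as promised in the Q&A above, here is a minimal, illustrative sketch of the "services from properties" idea: tagging several resources with a shared custom property through the LogicMonitor REST API so that a property-based service definition can assemble them. It is a sketch under stated assumptions, not the DSI workflow itself: the bearer-token auth, the /device/devices/{id}/properties endpoint shape, the property name business_service, and the device IDs are all assumptions or invented examples, so check the current REST API and DSI documentation before using anything like it.

```python
"""Illustrative sketch only: tag resources with a shared custom property.

Assumptions (verify against the current LogicMonitor REST API docs):
- the portal accepts bearer-token API auth (otherwise use LMv1 request signing),
- POST /device/devices/{id}/properties adds a custom property,
- "business_service" is just an example property name; use whatever property
  your dynamic service definitions are actually built on.
"""
import requests

PORTAL = "https://yourcompany.logicmonitor.com/santaba/rest"  # hypothetical portal URL
API_TOKEN = "REPLACE_ME"                                       # hypothetical API token


def tag_resource(device_id: int, service_name: str) -> None:
    """Attach a business_service property so a property-based service can include this resource."""
    resp = requests.post(
        f"{PORTAL}/device/devices/{device_id}/properties",
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "X-Version": "3",
            "Content-Type": "application/json",
        },
        json={"name": "business_service", "value": service_name},
        timeout=30,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    # Example: group an F5, a web tier, and a database under one service.
    for device_id in (101, 102, 103):  # hypothetical resource IDs
        tag_resource(device_id, "checkout-service")
```

Once every tier carries the same property value, a property-based service definition can pull the F5, web, app, and database layers into one end-to-end service without further manual configuration.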
September Product Power Hour: Cloud Monitoring with Kubernetes & Containers

Overview

This month's Product Power Hour focused on Cloud Monitoring with Kubernetes and Containers, emphasizing simplified observability across containerized environments. The session demonstrated how LogicMonitor makes monitoring clusters, pods, and workloads easier without overwhelming teams with noise. Through demos, best practices, and customer discussions, attendees gained practical insights into deploying and managing monitoring at scale with AKS, EKS, and beyond.

Key Highlights

⭐ Seamless Enablement: Existing PaaS customers can activate Kubernetes monitoring out of the box, while others can leverage LM's container monitoring license for quick setup.
⭐ Deeper Visibility: Demonstrations showcased how LM provides observability across nodes, pods, and system containers, cutting through complexity without flooding teams with alerts.
⭐ Retention & History: The session clarified how historical K8s data is retained and how teams can use that data for trend analysis and capacity planning.
⭐ Best Practices: Real-world examples illustrated how to configure thresholds effectively to avoid alert fatigue, especially in sandbox or test clusters.

Q&A

Q: How can existing customers enable Kubernetes monitoring?
A: "For an existing PaaS customer you just configure it, as it is included in the PaaS license (1 PaaS license = 1 monitored pod, the existing cloud licensing model). For AKS or EKS monitoring you just configure it in your cloud configuration (check the box). For a non-PaaS customer you need to add the LM Container Monitoring license (old license model)."

Q: I am new to K8s. How do I monitor the critical components of the platform when there are so many system containers?
A: "Focus on the system containers and use the out-of-the-box DataSources as a starting point, then tune thresholds to reduce noise." (A small illustrative script appears at the end of this recap.)

Q: What will be the retention period of history data of K8s monitors?
A: "Retention is handled according to LogicMonitor's standard data retention policies, so you can view history for trending and analysis just like other monitored resources."

What's Next

💻 Level Up Your IT Universe: Next LogicMonitor Innovations Unveiled Webinar
On September 24, discover the latest Edwin AI and LM Envision capabilities, including noise-cutting AI agents, Dynamic Service Insights, and expanded multi-cloud monitoring, designed to move IT from reactive to resilient.
➡️ Register here to save your spot!

🪵 Logs for Lunch
October 8: Logs Overages & Reducing MTTR with Cloud Logs

⚡ Next Product Power Hour
October 29: App Visibility for ITOps

Want to check out previous Product Power Hours? Explore the Product Power Hour Hub in the LM Community!

📚 Badges and Certifications

Earn free, on-demand, digital badges that validate your product knowledge and platform skills. Available badges:
🛡️ Getting Started
🛡️ Collectors
🛡️ Logs
🛡️ AI Ops Adoption
🛡️ Dashboards

Review

If you missed any part of the session or want to revisit the content, we've got you covered:
Review the slide deck
Want to dive deeper into this session? Watch the recording below ⬇️
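Bonus: to make the "focus on the system containers" advice from the Q&A a little more tangible, here is a minimal stand-alone sketch using the official Kubernetes Python client. It simply lists the kube-system pods and flags any that are not running or have been restarting. It is not a LogicMonitor DataSource; the pod and container fields follow the standard Kubernetes API, and the restart threshold of 3 is an arbitrary example value.

```python
"""Minimal sketch: survey kube-system pods to see which system containers need attention.

Uses the official Kubernetes Python client (pip install kubernetes) and your local
kubeconfig; the restart threshold of 3 is an arbitrary example value.
"""
from kubernetes import client, config


def main() -> None:
    config.load_kube_config()  # or config.load_incluster_config() when running inside a cluster
    v1 = client.CoreV1Api()

    pods = v1.list_namespaced_pod(namespace="kube-system").items
    for pod in pods:
        phase = pod.status.phase or "Unknown"
        restarts = sum(cs.restart_count for cs in (pod.status.container_statuses or []))
        # Flag anything that is not Running or that has been restarting repeatedly.
        if phase != "Running" or restarts > 3:
            print(f"ATTENTION  {pod.metadata.name}: phase={phase}, restarts={restarts}")
        else:
            print(f"ok         {pod.metadata.name}")


if __name__ == "__main__":
    main()
```

The handful of control-plane and networking pods this surfaces (API server, CoreDNS, kube-proxy, CNI agents, and so on) are the same components the out-of-the-box DataSources watch, so tuning thresholds there first usually removes most of the noise.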
August Product Power Hour: Edwin AI In Action

Overview

This month's Product Power Hour was a deep-dive demo experience of LogicMonitor's Edwin AI, featuring a next-level opportunity to go beyond the fundamentals. We showcased new capabilities, real-world usage patterns, and what's coming next on the Agentic AI roadmap. The session was packed with live demonstrations, product walkthroughs, and interactive discussions that brought Edwin AI's intelligent observability features into sharper focus. From alert deduplication to automated investigations and AI-powered root cause suggestions, the session left no doubt about Edwin's power to reduce noise and accelerate resolution. Attendees gained a clearer understanding of what Edwin AI can do today, as well as what's possible tomorrow.

Key Highlights

⭐ Next-Level AI Investigations: We went beyond the basics to show how Edwin uses out-of-the-box correlation models and enriched context to pinpoint likely root causes faster.
⭐ Targeted Alert Routing: Discussions explored how Edwin AI's rapid evolution could lead to support for role-specific alert routing during deduplication events, a capability on the product radar.
⭐ Flexible LLM Usage: The demo showcased how Edwin AI leverages both OpenAI and Anthropic via AWS Bedrock, selecting the optimal model for each task to ensure precision and performance.
⭐ Out-of-the-Box & Tunable Models: Attendees learned they don't have to start from scratch, as Edwin comes with built-in models that can be adjusted to fit your environment.
⭐ Strong Customer Momentum: The session shared how attendees are actively exploring Edwin AI via Proof of Concepts or preparing for a Q4 rollout.

Q&A

Q: Are out-of-the-box correlation models available, or do we build from scratch?
A: Yes, Edwin AI provides pre-built models that can be fine-tuned for your specific needs.

Q: How is customer data handled with LLMs?
A: LogicMonitor does not train on customer data. All data is securely segregated, and models are selected based on the task; nothing is shared across tenants.

Q: Can we filter out alerts based on custom resource properties?
A: Yes, Edwin AI supports filtering logic at ingestion to give you control over what alerts it processes. (A toy illustration of property-based filtering and deduplication appears at the end of this recap.)

Q: Can deduplicated alerts be routed to different stakeholders?
A: This isn't available yet, but it's a hot topic and something we're exploring for future iterations.

Customer Quote Call-outs

🌟 "We're planning a PoV and are really curious to see how Edwin handles topology-driven data."
🌟 "Excited about where this is going—we'd love to see automation and self-remediation layered in."
🌟 "The multi-model approach makes so much sense—great to see task-specific LLMs being used."

What's Next

🏕️ Camp LogicMonitor: An Observability Adventure
On August 18th, we kicked off our first Camp LogicMonitor! Join us for this 4-week virtual learning experience designed for LogicMonitor users of all levels. Each week features self-paced lessons, community discussions, and live Campfire Chats with product experts. Earn badges, grow your skills, and score exclusive LogicMonitor swag!
👉 Register now to reserve your spot!

👥 User Groups
Connect in person with other LM users in your city over dinner and real talk. Share wins, swap stories, and grow your network. RSVP today:
Denver - September 10
Stay tuned to our LM Community User Group Hub for upcoming virtual sessions.

Note: As we finalize our speakers, these dates and times may change, but be sure to register for your respective regions above so we can keep you informed!
🪵 Logs for Lunch
September 10: Logs Overages & Reducing MTTR with Cloud Logs

⚡ Product Power Hour
September 18: Cloud Monitoring With Containers

Want to check out previous Product Power Hours? Explore the Product Power Hour Hub in the LM Community!

📚 Badges and Certifications

Earn free, on-demand, digital badges that validate your product knowledge and platform skills. Available badges:
🛡️ Getting Started
🛡️ Collectors
🛡️ Logs
🛡️ AI Ops Adoption
🛡️ Dashboards

Review

If you missed any part of the session or want to revisit the content, we've got you covered:
Review the slide deck
Want to dive deeper into this session? Watch the recording below ⬇️
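Bonus: Edwin AI's ingestion filtering and deduplication are configured inside the product, but as a toy illustration of the underlying idea from the Q&A, the sketch below keeps only alerts whose resource properties match a filter and then collapses duplicates that share the same resource and check. This is not Edwin's actual logic or configuration format; the field names (resource_properties, check, the env filter) are invented purely for the example.

```python
"""Toy illustration only: property-based alert filtering plus simple deduplication.

This is NOT Edwin AI's implementation or configuration format; field names such as
"resource_properties", "check", and the "env" filter are invented for the example.
"""
from collections import defaultdict


def filter_and_dedupe(alerts, required_props):
    """Drop alerts whose resource properties don't match, then collapse duplicates."""
    # Ingestion-style filter: keep only alerts whose resource properties match.
    kept = [
        a for a in alerts
        if all(a.get("resource_properties", {}).get(k) == v for k, v in required_props.items())
    ]

    # Deduplication: group alerts that share the same resource and check into one
    # entry that remembers how many raw alerts it represents.
    grouped = defaultdict(list)
    for a in kept:
        grouped[(a["resource"], a["check"])].append(a)

    return [{**dupes[0], "duplicate_count": len(dupes)} for dupes in grouped.values()]


if __name__ == "__main__":
    raw_alerts = [
        {"resource": "web-01", "check": "cpu", "resource_properties": {"env": "prod"}},
        {"resource": "web-01", "check": "cpu", "resource_properties": {"env": "prod"}},
        {"resource": "web-02", "check": "cpu", "resource_properties": {"env": "dev"}},
    ]
    for alert in filter_and_dedupe(raw_alerts, {"env": "prod"}):
        print(alert)  # one prod alert for web-01/cpu with duplicate_count == 2
```

The point of doing this at ingestion, as discussed in the session, is that downstream correlation and routing only ever see the condensed, relevant stream rather than every raw device alert.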
July Product Power Hour Recap: Monitoring Your AI Workloads with LM

Overview

In this edition of Product Power Hour, the LM team explored how LogicMonitor can be used to effectively monitor AI workloads across modern environments. The session walked through best practices for monitoring key components of AI systems, including GPU metrics, model latency, and infrastructure dependencies, using LogicMonitor's platform. Attendees gained insights into real-world AI observability challenges and how LogicMonitor enables end-to-end visibility into the health of AI services.

Key Highlights

⭐ AI Workload Dashboards: Demonstrated how to build dashboards tailored to AI-specific metrics, including GPU utilization, job runtimes, and inference latency.
⭐ Dynamic Thresholds: Discussed using anomaly detection to set smarter thresholds for variable workloads like training jobs and inference endpoints, helping reduce alert fatigue and improve model reliability by adapting to fluctuating usage patterns.
⭐ Unified Monitoring: Emphasized LM's ability to consolidate data across cloud, on-prem, and edge environments, which is critical for hybrid AI infrastructure.
⭐ Alert Routing + Suppression: Demonstrated how to avoid alert fatigue by using alert tuning and dynamic suppression during scheduled AI retraining windows.

Q&A

Q: Can LogicMonitor monitor GPU metrics out of the box?
A: Yes, LM has native collectors and integrations to pull in GPU metrics from platforms like NVIDIA and cloud providers. (A hedged collection sketch appears at the end of this recap.)

Q: Is LM useful for model observability?
A: While LM focuses on infrastructure-level monitoring, it provides context crucial to understanding model performance issues (e.g., degraded latency tied to resource constraints).

Q: How does alert suppression work during model retraining?
A: You can set up dynamic suppression rules based on job schedules or metadata to avoid false positives during known high-usage periods.

Q: Does LM integrate with tools like PagerDuty or Slack?
A: Yes. These integrations are supported and were demoed live during the session.

Customer Call-outs

🌟 "I can now see infrastructure issues that were hard to diagnose before."
🌟 "LM's GPU monitoring capabilities have been helpful for managing cloud costs and performance."

What's Next

📚 Badges and Certifications
We've launched our new LogicMonitor Badges and Certifications program in LM Academy. Earn free, on-demand, digital badges that validate your product knowledge and platform skills. Available badges:
🛡️ Getting Started
🛡️ Collectors
🛡️ Logs
Launching July 31:
🛡️ AI Ops Adoption

🏕️ Camp LogicMonitor: An Observability Adventure
Join us starting August 18th for this 4-week virtual learning experience designed for LogicMonitor users of all levels. Each week features self-paced lessons, community discussions, and live Campfire Chats with product experts. Earn badges, grow your skills, and score exclusive LogicMonitor swag!
👉 Register now to reserve your spot!

🪵 Logs for Lunch
August 12 – Network Troubleshooting & Getting Started with Logs

⚡ Product Power Hour
August 19 - Edwin AI In Action

Want to check out previous Product Power Hours? Explore the Product Power Hour Hub in LM Community!

👥 User Groups
Connect in person with other LM users in your city over dinner and real talk. Share wins, swap stories, and grow your network. RSVP today:
Salt Lake City - September 9
Denver - September 10
Stay tuned to our LM Community User Group Hub for upcoming virtual sessions.

Note: As we finalize our speakers, these dates and times may change, but be sure to register for your respective regions above so we can keep you informed!
Review

If you missed any part of the session or want to revisit the content, we've got you covered:
Review the slide deck here
Want to see the full session? Watch the recording below ⬇️
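Bonus: LogicMonitor's GPU coverage comes from its own DataSources and cloud integrations, but if you are curious what the raw collection looks like, here is a minimal sketch, promised in the Q&A above, that reads the same kinds of metrics (utilization, memory, temperature) directly from NVIDIA's NVML via the pynvml package. It is a stand-alone illustration, not an LM collector script, and the specific fields reported are just example choices.

```python
"""Stand-alone sketch: read basic NVIDIA GPU metrics via NVML (pip install pynvml).

Illustrates the kind of utilization/memory/temperature data a GPU DataSource would
collect; this is not a LogicMonitor collector script.
"""
import pynvml


def collect_gpu_metrics():
    pynvml.nvmlInit()
    try:
        metrics = []
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # percent busy (GPU, memory controller)
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)          # bytes used / free / total
            temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
            metrics.append({
                "gpu_index": i,
                "gpu_util_percent": util.gpu,
                "mem_used_percent": round(100.0 * mem.used / mem.total, 1),
                "temperature_c": temp,
            })
        return metrics
    finally:
        pynvml.nvmlShutdown()


if __name__ == "__main__":
    for m in collect_gpu_metrics():
        print(m)
```

Numbers like these are exactly where the dynamic-threshold advice from the session pays off: a training job pinning utilization at 100% for hours is normal behavior, so anomaly-based thresholds generate far less noise than fixed ones.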
Next Up in Our Product Power Hour Series: Mastering LM Collectors on March 26th

Are you making the most of LM Collectors to optimize your monitoring strategy? Whether you're fine-tuning performance, troubleshooting common challenges, or preparing for future scalability, our next session in the Product Power Hour series is designed to help you get the most out of your monitoring infrastructure. In this interactive live session, we'll take a deep dive into best practices, performance optimizations, and upcoming enhancements, ensuring your data collection remains seamless, efficient, and built to scale.

📅 Date: March 26th
🕙 Time: 10 AM CST

Featuring Guest Speakers:
Craig Phelps – Product Manager, LogicMonitor
Barry Ballard – Principal Product Trainer, LogicMonitor

What to Expect

Hosted by the LM Community team and product experts, this session will provide valuable insights into:
✅ Optimizing Collector Performance – Fine-tune configurations to maximize efficiency.
✅ Best Practices & Troubleshooting – Resolve common challenges and improve uptime.
✅ Scaling for Growth – Ensure your monitoring setup is ready for expansion.
✅ What's Coming in 2025 – Get a sneak peek at upcoming features and enhancements.

Exclusive Power-User Use Case

Beyond best practices, we'll also hear from one of our top engineers, who will share a real-world success story from a power-user customer. Learn how they've optimized their Collector strategy to improve efficiency, scalability, and performance, and how you can apply these insights to your own monitoring environment.

Register Now & Join Us Live!

As part of our ongoing Product Power Hour series, this session is perfect for IT practitioners, engineers, and monitoring professionals looking to optimize their LM Collectors and build a future-proof monitoring strategy. Don't miss out: register today to secure your spot!

📅 Can't attend live? No problem! Register anyway, and we'll send you the full recap so you don't miss a thing.

🔗 Register Now