New UI Impact Series - Netflow Sankey Graphs
In our new UI, LogicMonitor has enhanced its Netflow monitoring capabilities by introducing Netflow Sankey graphs in the Traffic tab. These visualizations offer an alternative, or a complement, to the traditional table views. Sankey graphs display traffic flow from source to destination, with link widths proportional to the flow quantity. This intuitive visualization allows admins to quickly understand complex network traffic patterns without sifting through rows of data. By providing a clear, visual representation of network traffic flow, Sankey graphs empower admins to make quicker, more informed decisions about network optimization and security. But how does this benefit you? Sankey graphs let you quickly identify unusual traffic patterns, making it easier to spot potential issues like excessive bandwidth consumption or security threats. For instance, an unusually wide link in the graph could quickly reveal an employee streaming high-bandwidth content like Netflix during work hours. This tool saves time and enhances the overall efficiency of network monitoring and troubleshooting. Whether you're conducting root cause analysis, optimizing bandwidth utilization, or identifying potential security breaches, these Sankey graphs provide a powerful new way to visualize and understand your network traffic data. Want to know more about Netflow Sankey Graphs? Check out these resources: Viewing Sankey Charts | Traffic Tab
New UI Impact Series - Datapoint & Log Analysis
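The core idea — link width proportional to total flow volume per source/destination pair — can be sketched in a few lines. The snippet below is illustrative only (the record format and hostnames are hypothetical, not LogicMonitor data structures): it aggregates NetFlow-style records into source-to-destination links and sorts the widest first, which is exactly how an oversized link would surface in a Sankey view.

```python
from collections import defaultdict

# Hypothetical NetFlow-style records: (source, destination, bytes transferred)
flows = [
    ("10.0.0.5", "streaming-cdn.example.com", 9_500_000),
    ("10.0.0.5", "streaming-cdn.example.com", 8_200_000),
    ("10.0.0.7", "intranet.example.com", 120_000),
]

# Each Sankey link's width is proportional to the total bytes on that src->dst pair
links = defaultdict(int)
for src, dst, nbytes in flows:
    links[(src, dst)] += nbytes

# Sort widest links first: unusually wide links stand out immediately
for (src, dst), total in sorted(links.items(), key=lambda kv: -kv[1]):
    print(f"{src} -> {dst}: {total} bytes")
```

A real Sankey renderer would feed these (source, target, value) triples to a charting layer; the aggregation step is the part that makes anomalous flows visually obvious.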
LogicMonitor's new Datapoint Analysis and Log Analysis features are revolutionary tools designed to transform how IT professionals approach troubleshooting. These AI-powered features provide automated insights that significantly reduce the time and effort required to identify and resolve critical issues. By leveraging advanced correlation techniques and sentiment analysis, these tools offer a streamlined approach to problem-solving, allowing teams to quickly pinpoint the root cause of alerts and minimize downtime. Their user-friendly design ensures that even less experienced team members can use them effectively, empowering the entire team and boosting efficiency. Datapoint Analysis stands out with its ability to correlate metrics across various DataSources, producing a comprehensive correlation score. This score is calculated by correlating the alerting datapoint against:
- Other datapoints from the same instance
- The same datapoint on other instances within the same resource
- The same datapoint on other resources that share the same collector ID
This comprehensive approach eliminates hours of manual investigation, instilling confidence in the analysis results. Log Analysis uses sentiment and keyword analysis to distill large volumes of log data into concise, actionable summaries. Its interactive visualizations allow for quick data refinement without complex queries, providing an intuitive interface for log exploration. So, what's the impact for you? Datapoint and Log Analysis significantly reduce the mean time to repair (MTTR) for critical issues by offering rapid access to correlated metrics and summarized log insights. This reduction in resolution time not only improves system availability and performance but also relieves the stress and pressure on IT teams. These tools are seamlessly integrated throughout the LM Envision platform, accessible from alerts, dashboards, graphs, and the Resource Explorer, ensuring that users can leverage these powerful insights at any stage of their troubleshooting process.
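Correlation scoring of this kind is typically built on a measure such as Pearson's correlation coefficient. The sketch below is illustrative only — it is not LogicMonitor's actual scoring algorithm — but it shows the underlying idea: two datapoints that spike together (for example, CPU and request latency) score close to 1.

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length series
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A CPU spike and a latency spike that move together correlate strongly
cpu = [20, 22, 21, 80, 85, 82, 25]
latency = [5, 6, 5, 40, 44, 41, 7]
print(round(pearson(cpu, latency), 3))
```

Ranking candidate datapoints by a score like this across the three comparison sets above is what lets a tool surface "these metrics moved together" without manual graph-by-graph inspection.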
For organizations dealing with complex, distributed systems, these features represent a quantum leap in operational efficiency, enabling teams to maintain high system availability and performance with unprecedented ease and speed. Want to know more about Datapoint & Log Analysis? Check out these resources:
Datapoint Analysis: Datapoint Analysis Overview | Datapoint Analysis Demonstration Video
Log Analysis: Accessing Log Analysis | Log Analysis Overview | Log Analysis Widgets | Log Analysis Demonstration Video
New UI Impact Series - Resource Explorer
LogicMonitor's new Resource Explorer is a game-changing addition to its IT infrastructure monitoring platform. This powerful feature gives users a comprehensive, graphical view of their entire IT ecosystem, allowing for quick identification and resolution of issues across complex environments. Resource Explorer's intuitive interface presents devices as color-coded hexagons, grouped and filtered based on normalized properties such as provider, resource type, and region. This visual approach to resource management enables IT teams to spot patterns and troubleshoot problems more efficiently than ever before. By allowing users to create custom views and share filters, teams can collaborate more effectively and respond to issues faster. For example, if an AWS region experiences problems, Resource Explorer can instantly highlight affected resources, dramatically reducing mean time to repair (MTTR). This feature is particularly valuable for organizations with hybrid or multi-cloud environments, providing a unified view of resources across different providers and locations. Resource Explorer's flexibility empowers IT teams to maintain optimal service, application, and workflow health. The ability to quickly group and visualize resources based on various criteria gives teams unprecedented control over their infrastructure. Whether managing on-premises systems, cloud resources, or a combination of both, Resource Explorer provides the visibility and insights needed to ensure peak performance and rapid issue resolution. This innovative tool is set to revolutionize how IT professionals monitor and manage their complex, modern infrastructures. Want to know more about Resource Explorer? Check out our Resource Explorer starter guide.
Apache Groovy 2 End-of-Life Milestone Public Announcement
In light of the Apache Software Foundation's end of support for Apache Groovy 2.4 (the version currently used by LogicMonitor), and the transition to Apache Groovy 4, LogicMonitor is announcing the upcoming lifecycle milestone dates. Groovy is the primary LogicMonitor language for modules employing script or batch script collection methodologies.
Affected Modules: Script and batch script modules, and NetScan Groovy scripts, that are not Apache Groovy 4 compatible.
Why the Change? Apache Groovy 2, the version LogicMonitor currently uses, is no longer actively supported by the Apache Software Foundation. To maintain support, mitigate security risks, and maintain a strong security posture, LogicMonitor is migrating to Apache Groovy 4.
Apache Groovy 2 - Key Milestone Timeline
Migration Path: To ensure services remain unaffected, verify compatibility with Apache Groovy 4 as soon as possible. Failure to complete this step will result in a total loss of functionality for affected modules.
Compatibility Verification Steps
Official Modules: Where appropriate, LogicMonitor will provide Apache Groovy 4 compatibility updates for official modules. Customers should follow their established change management control processes to implement the updates. For more information, see LogicMonitor Provided Modules Groovy 4 Migration in the product documentation.
Unofficial Modules (customer- or community-created LogicModules): Customers must test any custom or community-scripted LogicModules to ensure compatibility and make the appropriate updates, following their established change management control processes. For more information, see Custom Module Groovy Migration Validation in LogicMonitor's product documentation.
Collector Updates: To minimize disruption to monitoring, customers must verify module compatibility with Apache Groovy 4 before updating Collectors to version 37.100 or later.
Supporting Materials: Modules Management | Collector Release Notes Timeline | Collector Versions | Embedded Groovy Scripting | Scripted Data Collection Overview | Apache Groovy 4: New Features - Release Notes | Known Issues
Frequently Asked Questions (FAQ)
What happens to customized, custom, or community-created modules? Prior to upgrading to Collectors that no longer support Groovy 2, customers will need to test their customized or custom-written modules and make the necessary updates for any compatibility issues. When available, LogicMonitor will provide documentation on how customers can test customized modules, as well as all known issues.
How might I start testing my modules for Apache Groovy 4 compatibility? See Custom Module Groovy Migration Validation. Known issues: https://www.logicmonitor.com/release-notes/v-205-release-notes and https://www.logicmonitor.com/release-notes/v-204-release-notes
How will officially supported modules be impacted by this change? All official LogicMonitor-provided modules will be compatible with Apache Groovy 4 and will be tagged 'groovy4' as they become available.
How can I tell which modules are supported or not? In the 'My Modules Toolbox', under the Support column, modules marked 'official' will continue to be maintained by LogicMonitor. Modules that are no longer maintained will be marked 'deprecated'.
Why did I receive this customer communication, as I usually receive this type of information from LogicMonitor Release Notes? At LogicMonitor, we are customer-obsessed and committed to ensuring a smooth transition during this major change in our Collectors.
Where might I get help if I need more time or resources to test or update custom modules? LogicMonitor Professional Services is available to assist with testing and updating customer LogicModules at a cost.
Your Customer Success Manager (CSM) will need to facilitate a Statement of Work (SOW) agreement covering the proposed LogicModules and the effort needed to make the updates.
What happens if I have made script customizations to officially supported modules? You will need to either update to the verified compatible module versions or verify the compatibility of your customizations.
What happens if I have updated a module to the latest version, but I am still experiencing issues? Please open a ticket with the LogicMonitor Technical Support Team.
What if I have a question that is not covered here? Please reach out to your LogicMonitor Customer Success Manager (CSM) or open a ticket with the LogicMonitor Technical Support Team.
Summer Feature Spotlight
Log Analysis
First, we're revolutionizing Logs with our new Log Analysis feature. This AI-powered tool is like a gold miner for your log data, automatically sifting through thousands of logs to unearth those precious nuggets of insight. With intuitive visual diagrams based on sentiment, level, and keywords, you'll be troubleshooting faster than ever before. Say goodbye to manual queries and hello to AI-guided problem-solving that even your Level 1 support can master! Release Notes
SNMP Traps as Logs
Now, let's talk about bringing the 90s back – but not in the way you might think. Our SNMP Traps as Logs feature is giving those old-school network monitoring tools a modern makeover. We've transformed SNMP traps into logs, eliminating those pesky monitoring gaps and serving up instant network insights on a silver platter. No configuration required – just plug in and watch as your network monitoring strategy gets a serious upgrade. It's like having a time machine that brings the best of the past into your cutting-edge monitoring present! Support Documentation
Role-Based Access Control (RBAC) for LogicModules
Security conscious? We've got you covered with Role-Based Access Control (RBAC) for LogicModules. This feature is like a bouncer for your modules, ensuring only the right people get VIP access. Set granular permissions that give your teams the freedom to monitor what they need without stepping on each other's toes. It's the perfect balance of control and flexibility, letting you tailor access to fit your organization like a glove. Release Notes | Support Documentation
Security Settings and Recommendations
Lastly, we're rolling out the red carpet for Security Settings and Recommendations. Think of it as your personal security concierge, guiding you through the best practices to keep your LogicMonitor portal Fort Knox-level secure.
With a centralized security command center and proactive recommendations, you'll be able to spot and patch potential vulnerabilities faster than you can say "two-factor authentication!" It's like having a crystal ball for your security needs – but way more practical! Release Notes | Support Documentation
That's all we have for you this time in our Summer Feature Spotlight! Be sure to come back next time for more exciting releases! Want to know more about our Summer Product Launch?
- Read our blog to better understand the new features
- Watch, like, and comment on the Log Analysis and SNMP Traps as Logs demo videos on our LogicMonitor YouTube Channel
- Visit our Quarterly Launch webpage to learn more about recent innovations
Note: all listed features are only available in our new User Interface. For more information about toggling, visit our LogicMonitor New UI Documentation.
LogicMonitor's Product Roadmap Roadshow
Are you ready to explore the cutting edge of hybrid observability? LogicMonitor and the LM Community are thrilled to announce our first-ever public Product Roadmap Roadshow! This exclusive online event will give you an insider's look at our vision for the future and how we're revolutionizing the industry.
What to Expect
Our roadshow features four sessions:
LM Envision's Product Overview (September 3 & September 12, 2024)
- Discover our unified experience, layered AI, and hybrid coverage
- Explore our product pillars and key initiatives for 2024
- Get a glimpse of our customer-driven priorities for 2025
The AI Revolution in Monitoring ft. Edwin AI (September 10 & September 11, 2024)
- Uncover the current state of AIOps and our transformative vision
- Learn how Edwin AI is disrupting traditional AIOps
- See real customer success stories and outcomes
- Experience a live demo of Edwin AI in action
Why Attend?
- Be among the first to see our product roadmap
- Gain insights into the future of intelligent monitoring
- Learn how AI is reshaping the industry
- Provide valuable feedback to shape our future developments
Don't miss this opportunity to be part of LogicMonitor's journey to redefine hybrid observability and GenAIxOps. Your voice matters, and we want to hear from you! The future of intelligent monitoring is just around the corner – are you ready to join us? Register below: Virtual Product Roadmap Roadshow
Important Security Announcement
Hello LM Community!
As part of our ongoing commitment to enhancing security for our customers, LogicMonitor will be requiring stronger security measures to protect your account. This page serves as both an announcement and a landing page for the resources you may need. Your account will need to be in compliance with the following security mandates by December 31, 2024, to avoid any disruption to your service. Please review the items below and take appropriate action to ensure a secure experience with LogicMonitor.
Security Mandates
Two-Factor Authentication (2FA)
Summary: Two-factor authentication (2FA) provides an extra layer of security for accessing any LogicMonitor account. With this upcoming change, users not using SSO (Single Sign-On) will be required to use 2FA. In addition to a username and password, users will need to verify their identity using a third-party application such as Authy, an authentication token delivered via SMS/voice, or authentication via email. Quick Reference Guide (QRG) | Supporting Documentation: Two-Factor Authentication
Linux Collectors Least Privilege
Summary: Previously, LogicMonitor required collectors to use root credentials to collect data from the resources they monitored. As part of our commitment to increasing security standards, and in response to feedback from our customer base, the release of GD 36 introduced the capability for collectors to run using non-root credentials. We ask that customers promptly change to non-root credentials, which will improve security and reduce risk. Moving forward, LogicMonitor is requiring all Linux collectors to be migrated to run under non-root users. Our enhanced migration process allows this transition without uninstalling the collector or losing any data. Customers can follow either the prompt-based or silent migration process to complete the transition. See also: Migrating Linux Collectors | Installation Process
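The 2FA mandate above typically relies on time-based one-time passwords (TOTP, RFC 6238) — the mechanism behind authenticator apps such as Authy. As a minimal sketch of how a verifier derives the six-digit code (illustrative only; real deployments should use a vetted library rather than this):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    # RFC 6238: counter = floor(unix_time / step), then RFC 4226 HOTP of that counter
    key = base64.b32decode(secret_b32, casefold=True)
    t = time.time() if for_time is None else for_time
    counter = int(t // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226, section 5.3)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" (base32-encoded below)
# at Unix time 59 yields "287082" for 6 digits
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59))
```

Because the code changes every 30 seconds and is derived from a shared secret, a stolen password alone is no longer enough to log in — which is the point of the mandate.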
Quick Reference Guide (QRG) | Supporting Documentation
Windows Collectors Least Privilege
Summary: Originally, LogicMonitor required Windows collectors to run using administrator (admin) credentials to collect data from the resources they monitored. As part of our commitment to the highest security standards, we have continued to invest in security features and risk mitigation for our customers. We have now extended the capabilities of collectors to use non-administrator (non-admin) credentials for data collection. Moving forward, LogicMonitor is requiring all Windows collectors to be migrated to use non-administrator accounts to monitor their systems. Our enhanced migration process allows this transition without uninstalling the collector or losing any data. Customers can follow the prompt-based migration process to complete the transition to non-administrator accounts. Non-admin essentially means moving away from an excessively privileged account for monitoring.
Update on the Windows Collectors Least Privilege Security Mandate: What happens at the end of the year if collectors are still using administratively privileged accounts for the collector service and query user? Based on our customers' unique environmental landscapes and needs, if a collector is still configured to use accounts with administrative privileges on December 31, 2024, we will not prevent it from communicating with LogicMonitor. We do, however, strongly recommend that you use non-administrator accounts to monitor your systems, as our migration process allows this transition without uninstalling the collector or losing any data. We understand that 100% compliance with a non-admin account is not possible if the technology you're monitoring requires administrative privileges. LogicMonitor is currently reviewing the best options to limit the attack surface in these scenarios while minimizing disruptions.
Once we have refined the solutions that best fit customers' unique circumstances, we will provide ample time for our customers to implement any required changes. The goal of this security mandate is not to dictate how customers manage Windows accounts, but rather to help our customers better adopt security best practices (the principle of least privilege) as they relate to the collector service user and collector query user.
Quick Reference Guides (QRG) | Supporting Documentation / How-To Guides: Migrating Windows Collectors | Setting up non-Admin for WMI on Windows
API Tokens
Summary: Roles within the LogicMonitor platform define the permissions and configurations that determine a user's interactions. The administrator role, in particular, grants permissions across all areas of the platform, enabling administrators to perform any action, including those that are security-sensitive. REST API tokens are used to authenticate requests to the REST API, allowing users to programmatically manage their LogicMonitor resources, dashboards, devices, reports, services, alerts, collectors, datasources, scheduled downtimes (SDTs), and more. To enhance security and maintain the integrity of our systems, we have disabled the ability for customers to create API tokens using LMSupport user accounts or the default administrator role. This restriction helps prevent unauthorized access and potential misuse of elevated permissions, ensuring that API keys are generated and managed within a controlled and secure framework. The combination of out-of-box (OOB) admin permissions with an API token poses significant risks, potentially leading to unauthorized actions and disruptions within a LogicMonitor portal. Therefore, if you have previously created an API token using an LMSupport user account or an administrator role, you need to migrate to a new API token created under a new user or role with appropriate permissions.
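API tokens like these authenticate each request with an HMAC signature rather than a password. The sketch below follows LogicMonitor's documented LMv1 scheme (sign the HTTP verb, a millisecond epoch, the request body, and the resource path with the access key); treat it as illustrative and verify the details against the current REST API documentation before relying on it. The credentials shown are placeholders.

```python
import base64
import hashlib
import hmac
import time

def lmv1_auth_header(access_id, access_key, http_verb, resource_path, data=""):
    # LMv1 signature: base64( hex( HMAC-SHA256(accessKey, verb + epoch_ms + body + path) ) )
    epoch = str(int(time.time() * 1000))
    msg = http_verb + epoch + data + resource_path
    hex_digest = hmac.new(access_key.encode(), msg.encode(), hashlib.sha256).hexdigest()
    signature = base64.b64encode(hex_digest.encode()).decode()
    return f"LMv1 {access_id}:{signature}:{epoch}"

# Example: Authorization header for GET /device/devices (placeholder credentials)
header = lmv1_auth_header("myAccessId", "myAccessKey", "GET", "/device/devices")
print(header.startswith("LMv1 myAccessId:"))
```

Because the signature is bound to the verb, path, and timestamp, a leaked header cannot be replayed indefinitely against other endpoints — but the token itself still carries whatever role permissions it was created under, which is why the mandate restricts admin-role tokens.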
Quick Reference Guide (QRG) | Supporting Documentation: API Tokens | API Best Practices Guide (coming soon!) | LogicMonitor Security Best Practices
If you have any questions or need assistance, please contact our Technical Support Team.
Let's Talk About It: Crowdstrike
So that happened. What was supposed to be an uneventful Friday morning quickly morphed into a nightmare for many engineers, IT admins, and executives alike. This is how we felt about the CrowdStrike outage: LogicMonitor customers were able to quickly identify where the outage originated and used our platform to help speed up the remediation process. Customers without LM weren't so lucky. Show us your favorite meme from the outage that caused catastrophe across the globe!
LogicMonitor Collector Ports to be used while monitoring end-user devices
Review a full list of the protocols and ports required for monitoring. This post provides information about the ports, protocols, use cases, and (where applicable) configuration settings used in general by the LM platform. Using the "<port>/<protocol>" format is a common and standardized way to indicate network ports along with their associated protocols. The ports and protocols are listed below.

Inbound communication:
Port | Protocol | Use Case | Configuration Setting
162 | UDP | SNMP traps received from target devices | eventcollector.snmptrap.address
514 | UDP | Syslog messages received from target devices | eventcollector.syslog.port
2055 | UDP | NetFlow data received from target devices | netflow.ports
6343 | UDP | sFlow data received from target devices | netflow.sflow.ports
7214 | HTTP/Proprietary | Communication from custom JobMonitors to the Collector service | httpd.port
2056 | UDP | JFlow data received from target devices | N/A

Outbound communication:
Port | Protocol | Use Case | Configuration Setting
443 | HTTP/TLS | Communication between the Collector and the LogicMonitor data center (port 443 must be permitted to access LogicMonitor's public IP addresses; if your environment does not allow the Collector to connect directly with the LogicMonitor data centers, you can configure the Collector to communicate through a proxy) | N/A
Other non-privileged | SNMP, WMI, HTTP, SSH, JMX, etc. | Communication between the Collector and target resources assigned for monitoring | N/A

Internal communication:
Port | Protocol | Use Case | Configuration Setting
7211 | Proprietary | Communication between the Watchdog and Collector services and the OS proxy service (sbwinproxy/sblinuxproxy) | sbproxy.port
7212 | Proprietary | Communication from the Watchdog service to the Collector service | agent.status.port
7213 | Proprietary | Communication from the Collector service to the Watchdog service | watchdog.status.port
15003 | Proprietary | Communication between the Collector service and its service wrapper | N/A
15004 | Proprietary | Communication between the Collector service and its service wrapper | N/A

Destination ports:
Port | Protocol | Use Case
135 | TCP | DCOM's initial communication and RPC (Remote Procedure Call) endpoint mapping. DCOM often uses dynamically allocated higher port numbers in the range of 49152 to 65535
22 | TCP | SSH connections
80 | TCP | HTTP connections
443 | TCP | HTTPS (TLS) connections
25 | TCP | SMTP mail delivery
161 | UDP | SNMP polling of target devices
1433 | TCP/UDP | Microsoft SQL Server
1434 | TCP/UDP | The protocol used by port 1434 depends on the application using the port. For example, SQL Server uses TCP for communication with clients, while the SQL Server Browser service uses UDP
1521 | TCP/UDP | Listens for database connections from Oracle clients
3306 | TCP/UDP | MySQL
5432 | TCP | PostgreSQL
123 | UDP | NTP connections to an external time server
445 | TCP | Server Message Block (SMB) protocol over TCP/IP

The LM Collector supports a number of other monitoring protocols, so this list can be extended as necessary.
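To verify that a destination TCP port is actually reachable from a collector host, a quick connection test helps. The sketch below is generic Python, not a LogicMonitor tool, and the hostname is a placeholder; note that UDP ports (such as 161 or 514) cannot be checked this way, since UDP is connectionless.

```python
import socket

def port_open(host, port, timeout=2.0):
    # Attempt a TCP connection; True if the port accepts connections within the timeout
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, and DNS failures
        return False

# Check a few destination TCP ports against a hypothetical monitored host
for port in (22, 443, 1433):
    print(port, port_open("monitored-host.example.com", port))
```

A sweep like this run from the collector itself quickly distinguishes firewall problems from credential or protocol problems when a new resource fails to collect.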
Hopefully, the details shared above clarify which ports and protocols are used by the LM platform. Thanks!
LogicMonitor Security Best Practices
At LogicMonitor we take the protection of customer data and cybersecurity very seriously. Security is a team effort and a partnership between LogicMonitor and our valued customers. Below we have provided our recommended guidance on security best practices and how to keep your LogicMonitor portals secure, including enabling two-factor authentication (2FA).
General Security
LogicMonitor Security corporate site: LogicMonitor's Security corporate site provides resources for customers who are interested in reviewing our security white papers or accessing SOC 2 Type 2 and SOC 3 reports.
Security Best Practices: This comprehensive document offers invaluable security guidance and best practices, which LogicMonitor strongly recommends be diligently followed. It also provides critical insights into how LogicMonitor secures customer accounts, such as regular updates to strong, unique passwords and not sharing account information.
Configuring Multi & Single Sign-On
Single Sign-On Integration Setup Guide: Single Sign-On (SSO) is a powerful mechanism for enforcing robust authentication measures, including 2FA, while simultaneously mitigating the risk of password-related issues. This guide outlines the prerequisites and initial setup steps for SSO, including how to restrict account access to SSO user accounts.
Multi Sign-On Integration Setup Guide: Multi sign-on augments security by requiring multiple authentication factors. This document empowers administrators to add multiple tenants (Identity Providers) and manage users directly from their Identity Provider (IdP).
Microsoft Azure Active Directory (AD) IdP for Single Sign-On (SSO) Setup Guide: Customers interested in using Microsoft Azure Active Directory (AD) as an IdP for SSO will find this guide invaluable. It provides step-by-step instructions for integrating Azure with LogicMonitor.
Additional Tools to Increase Security
Account IP Whitelisting: Customers looking to restrict access to their accounts based on specific IP addresses or subnets can refer to point five (5) in the "Configuring the Portal Settings" section of the document for detailed guidance.
Role-Based Access Control settings: Role-Based Access Controls offer a powerful means of restricting access to security features or entire product sections for specific user groups. This document explains the numerous configurations available at the role level, ensuring that your security posture aligns seamlessly with your business requirements.
Preparing for Two-Factor Authentication (2FA)
Remote Session Access Control: In preparation for implementing 2FA, this document comprehensively explains the access controls available for the Remote Session feature, allowing for enhanced security through customizable access restrictions or feature disabling.
2FA Setup Guide: This guide provides step-by-step instructions on configuring 2FA at various levels. LogicMonitor strongly recommends that customers who are not currently using 2FA, or who employ Single Sign-On (SSO) without enabling the "Restrict to SSO" option, proactively enable 2FA for their non-SSO user accounts.
User Reporting for 2FA: The User Report serves as a vital tool in securing your account with 2FA. It facilitates the identification of user accounts that do not currently use 2FA or lack associated phone numbers, which could potentially disrupt user access if not addressed before 2FA is activated. See also 2FA FAQs & User Reports.
Community Announcement: Sunset of LogicModules in Settings in UIv3
Attention LogicMonitor Users,
We want to inform you about an important change regarding LogicModule management. Starting in release v207, LogicModules will no longer be accessible from the Settings page in UIv3. Instead, the Modules page (Toolbox and Exchange) will be the new home for LogicModule management going forward.
What does this mean for you?
- Changes to LogicModule access: All LogicModule management, including adding, editing, and updating LogicModules, will now be performed exclusively through the Modules page (Toolbox and Exchange).
- Accessing LogicModules: Navigate to the Modules section from the LogicMonitor navigation sidebar to manage your LogicModules.
What's going away? The following functionality under "Settings → LogicModules" in UIv3 will be deprecated:
- Add LogicModules (from LMX, Repo, or File)
- All editors for LogicModule types
- The LogicModule tree with Groups
- Update/Import from Core
NOTE: All the mentioned functionality exists in UIv4, except for "Import from Core" (although imports between customer portals are still possible with appropriate credentials, especially in child/parent/MSP setups). While the Group tree is not visually represented as a tree in UIv4, Groups are still accessible within the Modules UIv4 interface.
Customer impact: We understand that this change may require a slight adjustment to your workflow. Please refer to our detailed Modules Overview Documentation for guidance on using the new Modules page effectively.
Additional helpful documentation: Modules Management | Module Installation
Thank you for your attention to this important update. We are committed to continuously improving our platform and ensuring a seamless experience for our users.
Simplify administrative tasks with Co-Pilot, LogicMonitor's Generative AI chatbot
Introducing LM Co-Pilot: Your AI-Powered IT Assistant
Simplify IT tasks and boost your team's efficiency with LogicMonitor's Co-Pilot. In this demo, Sarah Luna showcases this revolutionary new generative AI tool that:
- Streamlines setup and admin tasks with chat-like interactions
- Reduces errors and saves time
- Frees your IT team for more strategic work
LM Co-Pilot is currently in preview mode. Want to try it? Contact your LogicMonitor rep! Coming soon: Co-Pilot's capabilities will expand to support, troubleshooting, and more.
Logic.Monitor (PowerShell) module
If you're a LogicMonitor user looking to streamline your workflows and automate repetitive tasks, you'll be pleased to know that there is a PowerShell module available to help you do just that. As a longtime Windows administrator, I've relied on PowerShell as my go-to tool for automating and managing my infrastructure. I've found that the ability to automate tasks through PowerShell not only saves time, but also reduces errors and ensures consistency across the environment. Developed by me as a personal side project, this module provides a range of cmdlets that can be used to interact with the LogicMonitor API, making it easier than ever to manage your monitoring setup directly from the command line. Whether you're looking to retrieve information about your monitored devices, update alert thresholds, or perform other administrative tasks, this module has you covered. In this post, we'll take a closer look at the features and capabilities of the module and show you how to get started using it in your own automation scripts. This project is published in the PowerShell Gallery at https://www.powershellgallery.com/packages/Logic.Monitor/.
Installation
From the PowerShell Gallery:
Install-Module -Name "Logic.Monitor"
Upgrading:
#New releases are published often; to ensure you have the latest version you can run:
Update-Module -Name "Logic.Monitor"
General Usage:
Before you can use any module commands you will need to be connected to an LM portal.
To connect to your LM portal, use the Connect-LMAccount command: Connect-LMAccount -AccessId "lm_access_id" -AccessKey "lm_access_key" -AccountName "lm_portal_prefix_name" Once connected, you can then run an appropriate command; a full list of available commands can be found using: Get-Command -Module "Logic.Monitor" To disconnect from an account, simply run the Disconnect-LMAccount command: Disconnect-LMAccount Examples: Most Get commands can pull info by id or name to allow for easier retrieval without needing to know the specific resource id. The name parameters in Get commands can also accept wildcard values. Get a list of devices: #Get all devices Get-LMDevice #Get device via id Get-LMDevice -Id 1 #Get device via hostname Get-LMDevice -Name device.example.com #Get device via displayname/wildcard Get-LMDevice -DisplayName "corp*" Modify a device: #Change device Name,DisplayName,Description,Link and set collector assignment Set-LMDevice -Id 1 -DisplayName "New Device Name" -NewName "device.example.com" -Description "Critical Device" -Link "http://device.example.com" -PreferredCollectorId 1 #Add/Update custom properties on a resource and disable alerting Set-LMDevice -Id 1 -Properties @{propname1="value1";propname2="value2"} -DisableAlerting $true ***Using the Name parameter to target a resource during a Set/Remove command will automatically perform an initial Get request for you to retrieve the required id. When performing a large number of changes, using Id is the preferred method, to avoid excessive lookups and any potential API throttling. Remove a device: #Remove device by hostname Remove-LMDevice -Name "device.example.com" -HardDelete $false Send an LM Log Message: Send-LMLogMessage -Message "Hello World!" 
-resourceMapping @{"system.displayname"="LM-COLL"} -Metadata @{"extra-data"="value";"extra-data2"="value2"} Add a new user to LogicMonitor: New-LMUser -RoleNames @("administrator") -Password "changeme" -FirstName John -LastName Doe -Email jdoe@example.com -Username jdoe@example.com -ForcePasswordChange $true -Phone "5558675309" There are over 150 cmdlets exposed as part of this module, and more are being added each week as I receive feedback internally and from customers. For more details, other examples and code snippets, or to contribute, you can visit the GitHub repo where this is hosted. Source Repository: https://github.com/stevevillardi/Logic.Monitor Additional Code Examples: https://github.com/stevevillardi/Logic.Monitor/blob/main/EXAMPLES.md Note: This is very much a personal project and not an official LogicMonitor integration. If the concept of a native PowerShell module interests you, I would recommend putting in a feedback request so that the demand can be tracked.2.1KViews54likes29CommentsPrioritize logs and troubleshoot faster with Log Analysis
Troubleshoot Faster with LogicMonitor's Log Analysis Hybrid environments causing complex troubleshooting? LogicMonitor's Log Analysis streamlines the process. In this demo, David Femino shows you how to: - Instantly find relevant logs with advanced machine learning (ML) - Understand log severity at a glance with sentiment scores - Visually filter logs for lightning-fast troubleshooting - Save time with AI-powered log analysis Ditch complex queries and solve problems faster. Try LogicMonitor's Log Analysis.A11ey5 months agoFormer Employee62Views11likes0CommentsHow IT administrators can streamline operations using the LogicMonitor API
In this article, we’re going to review how LogicMonitor administrators can maximize efficiency and transform their IT operations using LogicMonitor’s REST API and PowerShell. We will cover the following use cases: Device onboarding/offboarding User management Retrieving data107Views3likes0CommentsNice little trick when needing to match against a number of devices
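Each of these REST API use cases starts with an authenticated request. As a rough sketch (my own illustration, not taken from the article), this is how LogicMonitor's LMv1 request signature is typically constructed in Python; the concatenation order and encoding follow LM's published LMv1 scheme, but verify against the current REST authentication docs before relying on it:

```python
import base64
import hashlib
import hmac
import time

def lmv1_auth_header(access_id, access_key, method, resource_path,
                     body="", epoch_ms=None):
    """Build an 'Authorization: LMv1 ...' header value.
    Scheme: HMAC-SHA256 over METHOD + epoch + body + resourcePath,
    hex digest, then base64 -- verify against LM's REST auth docs."""
    if epoch_ms is None:
        epoch_ms = str(int(time.time() * 1000))
    message = method.upper() + str(epoch_ms) + body + resource_path
    digest = hmac.new(access_key.encode(), message.encode(),
                      hashlib.sha256).hexdigest()
    signature = base64.b64encode(digest.encode()).decode()
    return f"LMv1 {access_id}:{signature}:{epoch_ms}"
```

The returned value goes straight into the Authorization header of the HTTP request; the same helper serves device, user, and data endpoints alike.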
We have many situations where we need to address a number of devices based on certain properties, so we made a small function: take your choice of language, hit the API with an AppliesTo filter, and you get back a list of devices (with device ids) which you can use in further loops. Endpoint: /santaba/rest/functions The matches come back in an object under currentMatches. JSON body: { "type": "testAppliesTo", "originalAppliesTo": "your-applies-to-filter", "currentAppliesTo": "your-applies-to-filter" } An example of us using it: we needed to identify and tag hundreds of devices, and using this combined with an LM Python wrapper made it simple, much nicer than making a CSV and looping. Would be nice to see this as a method in the SDK rather than a monkey patch though ;)Michael_Baker7 months agoNeophyte67Views7likes3CommentsAPI Method to CLEAR an Alert (toggle off=on Alert Enable)
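For anyone wanting to try the /santaba/rest/functions AppliesTo trick described above, here is a minimal Python sketch. The JSON body and the currentMatches field are exactly as the post describes; the POST method and the parsed response shape are assumptions to verify against your portal:

```python
import json
import urllib.request

def build_appliesto_payload(expr):
    """Request body from the post: test an AppliesTo expression."""
    return {"type": "testAppliesTo",
            "originalAppliesTo": expr,
            "currentAppliesTo": expr}

def check_appliesto(portal, auth_header, expr):
    """POST the expression to /santaba/rest/functions and return the parsed
    response; matching devices are reported under 'currentMatches'.
    (POST-with-JSON-body and the response shape are assumptions here;
    verify against your portal.)"""
    req = urllib.request.Request(
        f"https://{portal}.logicmonitor.com/santaba/rest/functions",
        data=json.dumps(build_appliesto_payload(expr)).encode(),
        headers={"Authorization": auth_header,
                 "Content-Type": "application/json"},
        method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The device ids pulled from currentMatches can then feed whatever loop you need, such as bulk tagging.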
Hello, We have a need to CLEAR an alert via the API. I don’t see any API methods to clear an alert; I only see methods to GET or POST a Note/ACK. Can you assist me with what API method we can use to clear (toggle on/off) an alert? Thanks, Darren Dudgeon282Views1like13CommentsFixing misconfigured Auto-Balanced Collector assignments
I’ve seen this issue pop up a lot in support so I figured this post may help some folks out. I just came across a ticket the other day so it’s fresh on my mind! In order for Auto-Balanced Collector Groups (ABCG) to work properly, i.e. balance and failover, you have to make sure that the Collector Group is set to the ABCG and (and this is the important part) the Preferred Collector is set to “Auto Balance”. If it is set to an actual Collector ID, then it won’t get the benefits of the ABCG. You want this, not that: Ok, so that’s cool but now the real question is how do you fix this? There’s not really a good way to surface in the portal all devices where this is misconfigured. It’s not a system property, so a report or AppliesTo query won’t help here… Fortunately, not all hope is lost! You can use the ✨API✨ When you GET a Resource/device, you will get back some JSON, and what you want is for the autoBalancedCollectorGroupId field to equal the preferredCollectorGroupId field. If “Preferred Collector” is not “Auto Balance” and is set to an ID, then autoBalancedCollectorGroupId will be 0 . Breaking it down step by step: First, get a list of all ABCG IDs https://www.logicmonitor.com/swagger-ui-master/api-v3/dist/#/Collector%20Groups/getCollectorGroupList /setting/collector/groups?filter=autoBalance:true Then, with any given ABCG ID, you can filter a device list for all devices where there’s this mismatch https://www.logicmonitor.com/swagger-ui-master/api-v3/dist/#/Devices/getDeviceList /device/devices?filter=autoBalancedCollectorGroupId:0,preferredCollectorGroupId:11 (where 11 is the ID of an ABCG) And now for each device returned, make a PATCH so that autoBalancedCollectorGroupId is now set to preferredCollectorGroupId https://www.logicmonitor.com/swagger-ui-master/api-v3/dist/#/Devices/patchDevice Here’s a link to the full script, written in Python for you to check out. I’ll also add it below in a comment since this is already getting long. 
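The selection and patch logic above boils down to a couple of small pieces. This is only a sketch of the core idea (field names are as given in the post; the full linked script is the authoritative version):

```python
def find_misconfigured(devices, abcg_ids):
    """Devices whose collector group is an ABCG but whose Preferred Collector
    is pinned to a specific collector (autoBalancedCollectorGroupId == 0)."""
    return [d for d in devices
            if d.get("autoBalancedCollectorGroupId") == 0
            and d.get("preferredCollectorGroupId") in abcg_ids]

def abcg_patch_body(device):
    # PATCH body: point the auto-balance field at the device's collector group
    return {"autoBalancedCollectorGroupId": device["preferredCollectorGroupId"]}
```

Feed find_misconfigured the device list from the filtered GET, then PATCH each device with abcg_patch_body via the endpoint linked above.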
Do you have a better, easier, or more efficient way of doing this? I’d love to hear about it! How to use a bearer token for authentication using Python SDK
I am trying to use a bearer token for authentication using the Python SDK. In the documentation I found several examples using the LMv1 access_id and access_key values, like this: # Configure API key authorization: LMv1 configuration = logicmonitor_sdk.Configuration() configuration.company = 'YOUR_COMPANY' configuration.access_id = 'YOUR_ACCESS_ID' configuration.access_key = 'YOUR_ACCESS_KEY' # create an instance of the API class api_instance = logicmonitor_sdk.LMApi(logicmonitor_sdk.ApiClient(configuration)) but couldn’t find any example of using a bearer token for authentication. After some searching I found something about defining “api_token” and “api_token_prefix” but couldn’t get it to work. Anybody have an example I could follow to use this?400Views17likes21CommentsWebinar Cancelled: Python SDK V3 for Advanced Device Management in the LogicMonitor Platform.
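One workaround for the bearer-token question above, while the SDK-native configuration remains unclear: skip the generated SDK for the calls that need bearer auth and send the header yourself. This is a sketch, not an official recommendation; bearer tokens only work against API v3, hence the X-Version header:

```python
import json
import urllib.request

def bearer_headers(token):
    """Bearer tokens are v3-only, so pin the API version too."""
    return {"Authorization": f"Bearer {token}", "X-Version": "3"}

def lm_get(portal, resource_path, token):
    """GET a v3 resource with bearer auth, bypassing the generated SDK."""
    url = f"https://{portal}.logicmonitor.com/santaba/rest{resource_path}"
    req = urllib.request.Request(url, headers=bearer_headers(token))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For example, lm_get("yourportal", "/device/devices?size=50", token) would list devices; the SDK's api_token/api_token_prefix route may also work, but I haven't confirmed it.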
We regret to inform you that the upcoming webinar on 'Python SDK V3 for Advanced Device Management in the LogicMonitor Platform' is canceled for now. However, we are keen on addressing any questions you may have and want to ensure that the future webinar meets your expectations. Please take a moment to share any questions or specific topics you would like us to cover in the rescheduled event. Your input is valuable, and we aim to provide comprehensive information tailored to your needs. Thank you for your understanding, and we look forward to your feedback. Exciting news! We're thrilled to invite you to our upcoming webinar, "Python SDK V3 for Advanced Device Management" with Abhishek Bhambore, Sr. Product Manager at LogicMonitor. This webinar is tailored to empower you with the essential skills to efficiently download, update, and install the Python SDK V3, including a walkthrough with SDK examples. Join us to gain comprehensive insights into our API and SDK documentation pages, and elevate your proficiency in managing devices using Python. Agenda Highlights: How to download and update/install Python SDK V3. API and SDK documentation page overview. 
Sample Python code demo DeviceGroups Get List of DeviceGroups -> with default params and with params like size, offset, filter Add DeviceGroup Get DeviceGroup by Id Update DeviceGroup Delete DeviceGroup Device Get List of Devices -> with default params and with params like size, offset, filter Add Device Get Device by Id Update Device Delete Device Alerts Get Alert -> with default params and with params like size, offset and filter Get Alert by Id Ack Alert by Id Note Alert by Id Device Instance Get Device Datasource List by deviceId Get Device Datasource Instance List by deviceId and hostDatasourceId Get Device Datasource Instance Data by deviceId, hostDatasourceId and instanceId -> without time range and with time range, all datapoints or specific datapoints Don't miss out on this invaluable opportunity to enhance your Python SDK V3 skills and take your device management capabilities to the next level. Join us on January 25th at 9:00 AM Central to be part of this informative session. Mark your calendars, spread the word, and get ready for an insightful journey into mastering Python SDK V3 for advanced device management.abhishek_bhambo9 months agoFormer Employee217Views23likes7CommentsCan we grab Multiple time range data from the RESTAPI?
I’m working on another thread to report some predictive models and metrics to drive engineer response actions around volume growth/expansion. To do so, I’m calling the REST API 10x per volume instance per device (one datapoint value sample every 9 days, back to 90). I’m building out a request for each of the day’s values (today - 9 days, today - 18 days, etc.). Is there a way to request multiple start/end values from the REST API in a single request?58Views7likes6CommentsDevice DataSource Instance datapoint historical data using RestAPI v3
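Regarding the multiple-time-range question above: as far as I know the data endpoints accept one start/end pair per call, so the usual approach is one request per window. A small helper (hypothetical, but matching the every-9-days-back-to-90 scheme described) can at least generate the epoch pairs to loop over:

```python
import datetime

def sample_windows(days_back=90, step_days=9, width_minutes=5):
    """Epoch-second (start, end) pairs, one per step, walking back from now.
    Mirrors the 'today - 9, today - 18, ...' scheme: each window is a short
    slice starting N days ago."""
    now = datetime.datetime.now(datetime.timezone.utc)
    windows = []
    for d in range(step_days, days_back + 1, step_days):
        start = now - datetime.timedelta(days=d)
        end = start + datetime.timedelta(minutes=width_minutes)
        windows.append((int(start.timestamp()), int(end.timestamp())))
    return windows
```

Each (start, end) pair then becomes one /data request; running the ten calls concurrently (threads or asyncio) cuts wall-clock time even though the request count stays the same.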
I am having problems getting the REST API to return any data regardless of the combination of paths, query params, and time filters I try using. What am I doing wrong? Here’s my ultimate URL I’ve built for this effort: $ddsi is a successfully retrieved object (DeviceDataSourceInstance). Start and End are from these: [int]$start = get-date (get-date).addMonths(-3) -uformat %s [int]$end = get-date (get-date).addMonths(-3).AddMinutes(5) -uformat %s /device/devices/$($ddsi.deviceid)/devicedatasources/$($ddsi.devicedatasourceid)/data?size=500&offset=0&start=$start&end=$end&datapoints=Capacity,PercentUsed All of the pieces and parts seem to line up with examples I’ve found here and in the LM Docs… it doesn’t error out, but returns nothing. Goal is to get volume capacity metrics from 3 months ago. Where am I going awry here? Everything works up until I add the /data at the end.192Views15likes25CommentsFinding Cisco IOS XE CVE-2023-20198 With ConfigSources
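One thing worth checking for the empty-data question above (a hedged guess, but consistent with the SDK agenda elsewhere in this digest, which fetches instance data by deviceId, hostDatasourceId and instanceId): the historical-data endpoint lives one level deeper, under /instances/<instanceId>/data, and the URL in the question stops at the datasource level. A small path builder makes the shape explicit:

```python
def instance_data_path(device_id, device_datasource_id, instance_id,
                       start, end, datapoints=None):
    """Resource path for historical instance data. Note the /instances/<id>
    segment between the datasource and /data; start and end are epoch
    seconds. (Path shape inferred from the SDK method list in this digest;
    confirm against the v3 Swagger doc.)"""
    path = (f"/device/devices/{device_id}/devicedatasources/"
            f"{device_datasource_id}/instances/{instance_id}/data")
    query = f"?start={start}&end={end}"
    if datapoints:
        query += "&datapoints=" + ",".join(datapoints)
    return path + query
```

If the instance id isn't handy, listing /devicedatasources/<id>/instances first should surface it.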
On October 16, 2023, Cisco published a vulnerability that affects IOS XE machines running the built-in web server: https://sec.cloudapps.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-iosxe-webui-privesc-j22SaA4z This is tracked as https://nvd.nist.gov/vuln/detail/CVE-2023-20198 By adding a simple Config Check to an existing Cisco IOS ConfigSource, LogicMonitor can help people quickly identify which resources have the web server enabled. Here is an example:

Name: Cisco-CSCwh87343-Check
Check type: "Use Groovy Script"
Groovy script:

/* The built-in string variable 'config' contains the entire contents of the
   configuration file. The following example would trigger an alert when the
   configuration file contains the string "blue":
   if (config.contains("blue")) { return 1; } else { return 0; }
*/
if (config.contains("ip http")) {
    return 1;
} else {
    return 0;
}

Then trigger this type of alert: Warning Description: "Search for presence of Cisco CSCwh87343 vulnerability" Caveats: -This will apply to all devices where the ConfigSource is used, even though all devices may not be affected by the vulnerability -This assumes usage of ConfigSources and specifically the Cisco_iOS ConfigSource Thanks to Todd Ritter for finding this CVE and creating the ConfigSource158Views16likes1CommentWindows Server 2012: Microsoft End of Extended Support (10/10/2023)
In case you were unaware, Windows Server 2012 and 2012 R2 and their derivatives recently reached Microsoft’s End of Extended Support date (October 10, 2023). https://learn.microsoft.com/en-us/lifecycle/products/?terms=server%202012 This means that Microsoft will no longer create non-security updates, and security-related updates are only available via the Extended Security Update (ESU) program, which “is a last resort option for customers who need to run certain legacy Microsoft products past the end of support”. https://learn.microsoft.com/en-us/lifecycle/faq/extended-security-updates https://learn.microsoft.com/en-us/lifecycle/policies/fixedPatrick_Rouse11 months agoProduct Manager601Views13likes0Comments☁️ Monitor Azure Resource Events with LogicMonitor Logs
I have a strong preference for Microsoft Azure due to its exceptional capabilities! I recently wrote a blog post showcasing how to bring your resource events to the LogicMonitor platform. This way, you can set up alerts for critical business operations, such as when a new user is added to your Active Directory (Entra), or when a file is deleted from your blob storage. I hope you find it as helpful as I did! Monitor Azure Resource Events with LogicMonitor Logs Do you use LogicMonitor or any other monitoring platform to address unique use cases? Share your stories with us!rahulrai-lm12 months agoFormer Employee69Views13likes0CommentsEnhanced API v3 Experience with Delta on device\device endpoint
Getting the difference between API responses, often referred to as "diffing" or "delta comparison," involves comparing two versions of data or resources to identify the changes that have occurred. Time-based diffing is particularly useful when you want to understand how data has evolved or when you need to apply updates to a dataset based on changes that occurred between two timestamps. Our customers ask us to help ensure their data is accurate and reliable, stay within the rate limits, and lessen the effort of examining data elements, fields, and attributes. The LM Delta v3 API is a one-stop feature for identifying the latest changes, which can then be used to craft a dashboard, sent across an integration, or refined further by leveraging advanced filtering on the Delta API. Below are the endpoints: GET /santaba/rest/device/devices/delta – Registers a delta request and generates a new delta id. It also returns all devices that match the filter criteria. GET /santaba/rest/device/devices/delta/<DELTAID> – Returns devices that have any delta between the last and the current API call. Try out the feature today and let us know how it enhances your API journey. Don't miss out on the advantages of the feature… Stay tuned!abhishek_bhambo12 months agoFormer Employee120Views19likes6CommentsMonitor DFS Share(windows server) using LM Collector!!
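A minimal two-step client for the two delta endpoints (GET .../delta to register, then GET .../delta/<DELTAID> to poll) might look like the following sketch. The name of the delta-id field in the first response isn't shown in the post, so check the v3 Swagger documentation for the exact response shape:

```python
import json
import urllib.request

def delta_paths(delta_id=None, filter_expr=""):
    """Build the two delta resource paths: register (no id) or poll (with id)."""
    base = "/santaba/rest/device/devices/delta"
    if delta_id is not None:
        return f"{base}/{delta_id}"          # poll: changes since last call
    return base + (f"?filter={filter_expr}" if filter_expr else "")

def _get(portal, path, auth_header):
    # Bare GET against a portal; X-Version pins the call to API v3
    req = urllib.request.Request(
        f"https://{portal}.logicmonitor.com{path}",
        headers={"Authorization": auth_header, "X-Version": "3"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

First call _get(portal, delta_paths(filter_expr=...), auth) to register and receive the full matching device list plus a delta id, then call _get(portal, delta_paths(delta_id=...), auth) on a schedule to receive only changed devices.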
Greetings to all members of the LM community. Hope you all are doing great! In today's community blog, we discuss how to monitor a DFS share in LM, and general recommendations to follow so the LM Collector can monitor the share path. Configuring a DFS share on Windows Server: The DFS share service depends on two parameters to establish communication with the target server, shown below. These two parameters, domain name and IP, are used to configure communication with DFS for the purpose of LM data collection. In my test environment, I've created a stand-alone namespace that has the following permissions on the local path: In addition to defining the local path permissions for a DFS share, you also have the option to edit the permissions for the local path of the shared folder at the time of creating the share path. Prerequisites/Permissions required: As well as permissions, there may be other things the LM Collector needs before it can access remote DFS shares: Network Discovery: Enabling network discovery helps the monitoring tool discover and enumerate devices, including network shares, on the network. This can be useful when setting up data collection for resources in remote domains. Firewall and Network Configuration: Ensure that the necessary ports and protocols are open in the firewall between your monitoring tool and the remote domain. Network discovery and access to DFS shares often require specific ports and protocols to be allowed through firewalls. Namespace Path: When specifying the DFS share path in your monitoring tool, use the DFS namespace path (e.g., \\(domain/IP).com\dfs) rather than the direct server path. This ensures that the tool can access the share through the DFS namespace. Trust Relationships and Permissions: Ensure that trust relationships between domains are correctly established to allow access. 
Additionally, configure permissions on the DFS shares and namespace to grant access to the monitoring tool's credentials. It's important to note that the exact steps and configurations may vary depending on your specific network setup, DFS version, and domain structure. Working with your organization's IT administrators and domain administrators is also essential to ensure proper setup and access to DFS resources in remote domains. Monitoring a DFS share on the LM portal: In the course of testing on a Windows server with the DFS role/feature installed, LM discovers the information for DFSR monitoring when an IP address or domain name (FQDN) is known and defined under the shared path, as shown below. Edit the necessary configurations for each UNC path you are adding as a monitored instance. These configurations are detailed in the following sections. Under Resource → Add Other Monitoring, you can configure the DFS path under the “UNC Paths” section. Updating the DFS share path in LM: This monitors the accessibility of a UNC path from a Collector agent; a directory or file path may need to be defined on the LM portal. Discovery of the DFS path in LM: Once you have completed the above instructions on the target DFS server, you can monitor a UNC share, whether a domain DFS share or otherwise, using the UNC Monitor DataSource. This DataSource will do a directory listing on the given UNC share and report success or failure. The UNC Monitor DataSource will monitor the accessibility of the UNC path from the collector monitoring this device. Once you have added the DFS share to be monitored, LogicMonitor will begin monitoring the share and will generate alerts if there are any problems. 
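Under the hood, the check itself is simple. A rough Python equivalent of what the UNC Monitor DataSource is described as doing (a directory listing that reports success or failure) would be:

```python
import os

def unc_reachable(unc_path):
    """Attempt a directory listing and report success (0) or failure (1),
    mirroring the UNC Monitor DataSource behavior described above.
    (Illustrative only; the actual DataSource runs on the Collector.)"""
    try:
        os.listdir(unc_path)
        return 0
    except OSError:
        return 1
```

Running this by hand from the Collector host (e.g. against \\domain.com\dfs) is a quick way to confirm permissions and firewall rules before wiring up the monitored instance.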
Links for more references: https://www.logicmonitor.com/support/devices/device-datasources-instances/monitoring-web-pages-processes-services-and-unc-paths#:~:text=to%20get%20output.-,UNC%20Paths,-To%20monitor%20a https://learn.microsoft.com/en-us/troubleshoot/windows-server/networking/dfsn-access-failures Keep Learning & Keep Exploring with LM! Interested in learning more about features of your LogicMonitor portal? Check out some of our webinars in our community! https://www.logicmonitor.com/live-training-webinars Sign up for self-guided training by clicking the "Training" link at the top right of your portal. Check out our Academy resources! https://www.logicmonitor.com/academy/393Views15likes0CommentsTech Enablement session for API v3
Hello LM Community! As part of our initiative to help customers understand new features added to LogicMonitor, we regularly conduct Tech Enablement sessions. These sessions give customers an opportunity to learn new features and get answers to their questions. We recently conducted one such enablement session for LogicMonitor REST API v3 that mainly focused on the Delta API and Advanced filters, with in-depth analysis and examples. Below is the session link: https://logicmonitor.wistia.com/medias/eezt58kfvh You can listen to the recorded session to get all the details. However, to efficiently use your time, you can refer to the items we have timestamped for you. 4.09 - Introduction: Quick overview of API v3, endpoint names, and the Delta API 5.11 - API v3 Swagger articles, Advanced filter articles, Delta feature in API v3 6.34 - Examples – Advanced filtering with the device\devices endpoint 7.44 - Advanced filtering with filter and fields parameters on auto properties using the AND operator 10.02 - Advanced filtering with system properties using the OR operator 13.42 - Examples – Filtering devices based on display name with the OR operator 14.15 - Examples – Filtering using offset, size, and the default size limit 15.38 - Advanced filtering with the alert\alerts endpoint 15.47 - Difference between device and alert endpoint filters 17.15 - Filtering the alert\alerts endpoint based on severity 18.00 - Filtering alert\alerts using the offset and size parameters 22.00 - Filtering the alert\alerts API with epoch time 22.49 - Delta API and its examples 32.37 - Q&A Please contact your CSM for any query, or to participate in the Delta API beta program and get access to its support document. Happy learning!abhishek_bhambo2 years agoFormer Employee99Views15likes8CommentsLogicMonitor Two Factor Authentication FAQ's
Two Factor Authentication 2FA FAQ’s 1. Will my access be affected if I use Single Sign-On (SSO)? No, SSO users will not be impacted by 2FA. 2. What happens if I have an incorrect phone number associated with my account? If your phone number is incorrect, you won't receive the code to log in. Please reach out to your Local Admin, and ask them to update your phone number in your user profile. If your Local Admin(s) are unable to log in, please contact LM Support. 3. What occurs when 2FA is activated, and there's no phone number associated with my user account? When 2FA is activated, users without a phone number linked to their account will be prompted to enter one, and sign up for 2FA during their next login. 4. Does enabling "restrict to single sign-on" act as an alternative to 2FA, and will customers lose the ability to uncheck this option? After 2FA is activated, customers will not be able to disable 2FA for local users. "Restrict Single Sign-On" and 2FA will work together. There will be no change or impact on SSO users; they will continue to function as usual. 5. How does the 2FA activation affect shared accounts? Sharing accounts is not a recommended security best practice, and with 2FA, user accounts can no longer be shared. A new account and profile should be created for each user. 6. What if I am unable to log in? If you cannot log in, please contact your Local Admin, and request that they update your phone number and email address in your user profile. If your Local Admin(s) are unable to log in, please reach out to LM Customer support. 7. Will integrations be impacted? No, integrations using API keys will continue to function as they are, provided Basic Authentication is not in use. 8. Will the API be affected? No, the API will not be affected. 9. Does the 2FA activation impact API-only users? There is no impact on API-only users. 
However, we recommend that customers periodically audit API token usage, and recreate any API tokens previously created with administrator permissions. 10. Will Integration IDs, such as ServiceNow (ID/Pass and API ID/Key) and AWS (ARN), be affected by the 2FA Activation? API ID/Keys and ARNs will not be impacted. SSO users are also not affected. Only local accounts without pre-enabled 2FA will be impacted. API keys will not be affected. If you add API keys to a local user, you will need to set up 2FA for that local user. 11. If a user is initially created as a LOCAL user, and later integrated with SSO, will there be any impact? There is no impact in this scenario. 12. What should I do if the user's email address is invalid and the phone number is empty? Reach out to the local admin of your account for assistance in updating the email and phone number. If you are a local admin, and are locked out of your account, click here to contact LM Customer support. 13. Will there be any impact for customers using external SSO with 2FA to authenticate for the LM portal? There is no impact for customers using external SSO with 2FA for LM portal authentication. See Also LogicMonitor Security Best Practices & User Reports.A11ey2 years agoFormer Employee204Views24likes0CommentsLogicMonitor User Reports for 2FA
Identifying User Accounts Requiring Priority Attention for Two-Factor Authentication (2FA) Readiness Using Our User Report Guide The subsequent steps are designed to offer guidance on utilizing the User Report to assist customers in their 2FA preparation process. Step 1: Create a new Report by going to Reports -> Add Report Step 2: From the available report types, select User Report Step 3 (Optional): Under User Report Settings, additional filters can be added by clicking the “More” dropdown to help limit the returned results to user accounts that need to be prioritized for updates prior to the 2FA activation. For this example, we have added filters for Role Assignment and the Enable 2FA flag to help identify administrators’ user accounts which do not have 2FA enabled, and may get impacted once 2FA is activated. We chose to filter to the Administrator role, and for user accounts that are set to “No” for Enable 2FA. Step 4: Ensure appropriate columns are checked for the report outputs. Customers can include as many of the columns as needed, but we recommend including the following columns, highlighted in red, which will help identify user accounts with 2FA not enabled: Users with no phone numbers Potential duplicate phone number entries User accounts with the highest set of privileges which have the aforementioned considerations We also recommend sorting results by Phone Number, which will organize the results with user accounts with no phone numbers set. Step 5: Once the Report configuration is completed, click the Next button and the report will be generated, in which the results can be audited for where action would need to be taken prior to the 2FA activation. An example of the output is included below, where we identified several administrator users that had no phone numbers, and 2FA was not enabled. Our next steps internally were to update contact information for the active accounts, delete suspended users, and enable 2FA for the users. 
See also 2FA FAQ’s and LogicMonitor Security Best PracticesA11ey2 years agoFormer Employee83Views22likes0CommentsSelf-Service LogicModule Management for Unprivileged Users?
Has anyone taken to managing LogicModules via the API in APIv3? My immediate use case is to facilitate self-service (but ‘gated’) functionality for my end users, while ensuring that we don’t need to confer administrative access to said users. By leveraging GitHub, we can stand up workflows that will allow us to audit and approve myriad LogicModules at first, and establish version control so that, if someone makes a bad change, we can quickly fix/revert. That said, it appears APIv3 no longer supports DataSource imports, and it seems like this is due to less buy-in from the community, where folks tend to leverage the Exchange more readily than APIs. So, on that front, a few thoughts: I’m intuiting that I can introspect some of the API calls via the browser to PUT/POST/GET DataSources -- has anyone reverse engineered these and leveraged them? I understand this amounts to a hack, and that there are likely unknown pitfalls to doing this - if anyone has done this and can share some wisdom, that would be great. Are there other opportunities to implement our own version control/workflows for managing these? e.g. is there a way to facilitate this through the Exchange? I have a strong desire to reduce the administrative work of importing an unprivileged user's module to the click of a button (e.g. a Pull Request approval) so as to limit the amount of hands-on work my team needs to do to facilitate datasources developed by other teams. If leveraging the API is a non-starter (due to reliability, hackiness, or myriad other pitfalls), is anyone implementing a strategy to democratize LogicModule management for unprivileged users? 
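One angle I've been considering for the version-control piece: even if import stays unsupported, the read side can be automated. The sketch below is my own assumption, not an official workflow (the /setting/datasources endpoint and the 'items' field should be verified against the Swagger doc); it exports each DataSource to a JSON file so a git repo can track diffs and gate changes through pull requests:

```python
import json
import pathlib
import urllib.request

def safe_filename(name):
    """Turn a DataSource name into a filesystem-safe file name."""
    return "".join(c if c.isalnum() or c in "._-" else "_" for c in name) + ".json"

def export_datasources(portal, auth_header, out_dir="logicmodules"):
    """Dump each DataSource definition to its own JSON file for git tracking.
    The endpoint path and the 'items' response field are assumptions here;
    verify against the v3 Swagger documentation."""
    req = urllib.request.Request(
        f"https://{portal}.logicmonitor.com/santaba/rest/setting/datasources?size=1000",
        headers={"Authorization": auth_header, "X-Version": "3"})
    with urllib.request.urlopen(req) as resp:
        items = json.load(resp).get("items", [])
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for ds in items:
        (out / safe_filename(ds["name"])).write_text(
            json.dumps(ds, indent=2, sort_keys=True))
    return len(items)
```

Run on a schedule and committed automatically, this at least gives audit history and revert points, even while the apply step remains manual.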
I’ve not been able to introspect any reasonable paths to facilitating this (apart from having users build DataSources in Sandbox, and then we import them to Production), and it seems like the only way to do this would be to give our users Admin access to manage their datasources via the GUI -- obviously this is an immensely bad idea, so we won’t be doing this :) Does anyone have any wisdom to share on the subject, philosophical or practical?AustinC2 years agoNeophyte44Views3likes3CommentsLogicMonitor.Api nuget package for .NET developers - full v3 API support
For those developers who have chosen the C# / .NET software development path… Our “LogicMonitor.Api” nuget package is open source (MIT license), free (as in beer) and tried and tested in many, many software projects (680,000+ downloads and counting!) Find it here: Source: https://github.com/panoramicdata/LogicMonitor.Api/ Nuget: https://www.nuget.org/packages/LogicMonitor.Api/ Some advantages to using this library: Never worry about paging ever again. It’s all built in: var devices = await logicMonitorClient .GetAllAsync<Device>(cancellationToken) .ConfigureAwait(false); Ignore back-off responses. It’s all handled for you. All requests and responses are strongly typed. Is it an integer, a byte or a long? We’ve found out and implemented it. We use .NET best naming practices. No more confusion between hostname, displayname, name, host group, device group, resource group and hstgrp etc. Is it numOfX, numberOfX, xNum or…? It’s XCount. As it should be, and consistently throughout. What about CreationDtos? Yep, we’ve done them. LogicMonitor has an undocumented API feature whereby… Yes, we know. It’s in there. Why does the documentation not tell me…? We’ve had to work it out. It’s in there. Can I…? Yes, and if not, let us know and we’ll add it. Pull requests gratefully received!33Views4likes0CommentsMigrate your Linux collectors to non-root by Sept 30!
Hello All, Thank you for supporting LogicMonitor's efforts to ensure Collector security. With your help, we have been able to transition ~7,000 collectors to non-privileged users out of the 10,000 Linux collectors currently live in customer environments. Per our last email on this topic, we had shared a deadline of June 30, 2023 for customers to migrate their collectors from root to non-root users. Due to customer requests needing more time, we have now extended the deadline to September 30, 2023, allowing for more time to test the non-root migration scripts and migrate Linux collectors. We appreciate your support in helping us achieve our goal of running all collectors using non-privileged credentials. ACTION REQUIRED: Migrate any collectors which are running under root users to non-privileged users For more details, please refer to: https://www.logicmonitor.com/support/migrating-collector-from-root-to-non-root-user If your current collector installation process uses the root user to install Linux collectors, please start using a non-privileged user For more details, please refer to: https://www.logicmonitor.com/support/collectors/collector-installation/installing-collectors#Linux-collector. TIMELINE: Migrate your current Linux collector install base to non-privileged users as soon as possible, however no later than September 30, 2023. Your current collectors will not be affected by this change; only new installs will not be installed as root. Thank you for your prompt attention to this matter. If you have any questions, please contact LogicMonitor Support or contact your Customer Success Manager (CSM) Is there a reason why this will not work in your environment? Would you still like to run a Linux collector as root? Let us know in the comments Thank you!akshay_mysore2 years agoProduct Manager259Views33likes8CommentsEnhanced API v3 documentation
Hello LM Community! In our constant endeavor to promote LogicMonitor REST API v3, we have enhanced the API v3 documentation:
- Understanding offset and size in the API
- Positive and negative total value for alerts
- Cloud API in API v3 (coming soon)
To reiterate, the LogicMonitor REST API v3 Swagger is the first document to start your API journey! Don't forget to contact your CSM to participate in the Delta API beta and get access to its support document.

abhishek_bhambo (Former Employee)

Experience the new API v3
As you all know, we have implemented all the latest enhancements in LogicMonitor REST API v3 ONLY. Some of the key features that work only with API v3 are:
- Advanced filtering to get accurate results
- Bearer token for authentication
- Delta feature on the device/devices endpoint (coming soon…)
To start using all these features, we strongly recommend upgrading to LogicMonitor API v3 as the base version in your environment. For more details, refer to the following support articles:
- REST API v3 Swagger doc: https://www.logicmonitor.com/support/v3-swagger-documentation
- REST API v3 advanced filtering: https://www.logicmonitor.com/support/rest-api-advanced-filters
- Bearer token with REST API v3: https://www.logicmonitor.com/support/bearer-token

abhishek_bhambo (Former Employee)

Sample Webcheck via Groovy Scripts to test-out!!!
In the LM tool, you can choose to manually add your request and response Groovy scripts directly into the text boxes found under both of the Script tabs, or you can first complete the fields under the Settings tab (e.g. HTTP Version, Method, Follow redirect, Authentication Required, Headers, HTTP Response Format, etc.) and then click the Generate Script from Settings button to have LogicMonitor auto-generate request and response scripts based on those settings. The latter option produces a basic template for your Groovy scripts. (For more information on completing the fields found under the Settings tab for both the HTTP request and response, see Adding a Web Check.) So with the LM platform you can also auto-generate request and response scripts, as shown in the screenshot above. Here is a sample script: a code snippet in Groovy that uses the Santaba HTTP library to make an HTTP GET request to a specified URL (in this case, "https://www.google.com") and retrieve headers from the response. It also sets basic authentication credentials for the request.
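As an aside, the "set basic authentication credentials" step just attaches an HTTP Basic Authorization header to the request. A minimal Python sketch of the same idea, using the standard library and the same hypothetical credentials (the request is built but never sent; plaintext Basic auth should only ever travel over HTTPS):

```python
import base64
import urllib.request

def basic_auth_request(url, username, password):
    """Build (but do not send) a GET request carrying the HTTP Basic
    Authorization header that a setAuthentication()-style call produces."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req = urllib.request.Request(url, method="GET")
    req.add_header("Authorization", f"Basic {token}")
    return req

req = basic_auth_request("https://www.google.com", "myusername", "mypassword")
print(req.get_header("Authorization"))  # Basic bXl1c2VybmFtZTpteXBhc3N3b3Jk
```

Note that the header is only base64-encoded, not encrypted, which is why plaintext Basic auth over plain HTTP is unsafe.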
The script here is just for reference, to test scripted HTTP responses. This code block is the part that makes the HTTP GET to Google. Here's a breakdown of the code:

    import com.santaba.agent.groovyapi.http.*;

    // Instantiate an HTTP client object for the target system
    httpClient = HTTP.open();

    // Set basic authentication credentials
    httpClient.setAuthentication("myusername", "mypassword");

    // Specify the URL
    url = "https://www.google.com";

    // Perform an HTTP GET request
    def getResponse = httpClient.get(url);

    // Get headers from the response
    headers = httpClient.getHeaders();

    // Close the HTTP client
    httpClient.close();

A few things to note about this code:
- It sets basic authentication credentials (myusername and mypassword), but sending credentials in plaintext is not secure. Use HTTPS and more secure authentication methods where available.
- It makes an HTTP GET request to the specified URL ("https://www.google.com") and stores the response in the getResponse variable.
- After the request, it retrieves headers from the response using httpClient.getHeaders() and stores them in the headers variable.
- Finally, the HTTP client is closed using httpClient.close().

Link for more references: https://www.logicmonitor.com/support/services/adding-managing-services/executing-internal-web-checks-via-groovy-scripts (Example Request Script Commands)

Keep learning and keep exploring with LM! Interested in learning more about features of your LogicMonitor portal?
- Check out some of our webinars in our community! https://www.logicmonitor.com/live-training-webinars
- Sign up for self-guided training by clicking the "Training" link at the top right of your portal.
- Check out our Academy resources! https://www.logicmonitor.com/academy/

Persie (LM Champion)

Website monitoring with Zscaler client in LM
To monitor website access via Zscaler using LogicMonitor, you can set up web checks to verify that websites are accessible and responsive. LogicMonitor's web checks let you periodically test the availability and performance of websites from different geographic locations. Here's how to set up monitoring for website access via Zscaler.

Prerequisite: You will need to have installed the Zscaler client on the LM Collector server placed in the network environment where Zscaler is used for communication when accessing websites on the Internet. Zscaler reference docs for more information:
https://help.zscaler.com/zia/skipping-inspection-traffic-specific-urls-or-cloud-apps
https://help.zscaler.com/zia/whitelisting-urls

Our Collectors gather information from your infrastructure, encrypt it, and deliver it to the LogicMonitor service through an outbound TLS-encrypted connection, where it is stored, processed, and presented back to users through a web interface. Every LogicMonitor customer has a DNS record of [customername].logicmonitor.com. This record resolves to two or more public IP addresses at any given time. Because these IP addresses can and do change over time, it is imperative that your network's firewall(s) permit access to all of our public IP addresses. For more information about these IP addresses, refer to the following article: https://www.logicmonitor.com/support/about-logicmonitor/overview/logicmonitor-public-ip-addresses-dns-names

We recommend whitelisting *.logicmonitor.com for your network. In addition, you will need outbound TCP port 443 and port 80 access. Port 80 is only used if one attempts to access LogicMonitor via a non-secure http address; this initially reaches port 80 and is then redirected to port 443 for encryption. In order to use our Remote Session functionality, you will also need RDP or SSH on port 443.

Create web checks in LogicMonitor: Log in to your LogicMonitor account and navigate to Settings > Alerts > Web Checks.
Click Add Web Check to create a new web check: https://www.logicmonitor.com/support/services/adding-managing-services/adding-a-web-service

Configure LM web-check parameters:
- Enter the URL of the website you want to monitor.
- Choose the check frequency (how often the website is checked).
- Select the geographic locations from which the checks will be performed. Choose locations that are relevant to your customer's usage of Zscaler.
- Set up alert thresholds for response time, HTTP status codes, or other relevant metrics: https://www.logicmonitor.com/support/services/adding-managing-services/adding-a-web-service#:~:text=will%20be%20sent.-,Configuring%20URL%20Request%20and%20Response%20Settings,-For%20every%20Web

Test to perform in a Zscaler client environment: To simulate website access through Zscaler, you might need to set up LogicMonitor to perform web checks from the customer's network. This could involve configuring LogicMonitor collectors within the customer's network environment.

Response time and status code monitoring: Monitor the response time and HTTP status codes returned by the website. This helps you identify slow or unavailable websites.

Alerting and notifications: Set up alert thresholds based on your customer's requirements. For example, you can configure LogicMonitor to trigger alerts if the response time exceeds a certain threshold or if specific HTTP status codes are returned: https://www.logicmonitor.com/support/services/about-services/services-alerts

Custom scripting (optional): If Zscaler adds specific headers or modifies responses, you might need to customize your web checks accordingly. LogicMonitor supports custom scripting for advanced monitoring scenarios.

Remember that setting up web checks from within your customer's network environment might require coordination with their IT team, as well as consideration of security and compliance policies.
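The alert-threshold step above can be sketched as a tiny evaluation routine. A Python sketch; the 2000 ms limit and the status-code allow-list are hypothetical example values, not LogicMonitor defaults:

```python
def evaluate_check(response_time_ms, status_code,
                   max_response_ms=2000, ok_statuses=(200, 301, 302)):
    """Return a list of alert reasons for one web-check sample.
    An empty list means the check is healthy."""
    reasons = []
    if status_code not in ok_statuses:
        reasons.append(f"unexpected HTTP status {status_code}")
    if response_time_ms > max_response_ms:
        reasons.append(f"slow response: {response_time_ms} ms > {max_response_ms} ms")
    return reasons

print(evaluate_check(350, 200))   # healthy sample: no alert reasons
print(evaluate_check(4200, 503))  # slow AND bad status: two alert reasons
```

In the real product these thresholds are configured per web check in the UI; the sketch only illustrates the decision logic.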
Additionally, LogicMonitor's capabilities might evolve over time, so refer to the documentation or support resources for the most up-to-date guidance on web checks and monitoring configurations.

Upgrade your Collectors to MGD 33.006 before October 03, 2023!
Each year LogicMonitor rolls out the minimum required version (the MGD) for all collectors. It is the most mature version, containing the gist of all the enhancements and fixes we've added throughout the year. To achieve uniformity, the MGD becomes the base version for all future releases. As we approach the time for the MGD automatic upgrade, we would like to inform you that GD Collector 33.006 will be designated as MGD Collector 33.006. This means that all collectors must be upgraded to GD Collector 33.006 or higher. Note: If it is absolutely necessary, we may release security patches. In such scenarios, the MGD version 33.006 will be incremented, and we will keep you informed.

Schedule for MGD 33.006:
- MGD 33.006 rollout: August 29, 2023
- Voluntary upgrade period: any time before October 3, 2023
- Automatic upgrade scheduled: October 3, 2023 at 5:30 am PST

Please note that it is critical to upgrade to the new MGD version! On October 3, 2023, any collectors still using a version below the MGD will not be able to communicate with the LogicMonitor platform. This is due to the improvements made to the authentication mechanism of the LogicMonitor platform.

Actions required:
- Look for the MGD rollout notification email from LogicMonitor on August 29, 2023
- Upgrade your collectors to GD 33.006 to avoid loss of communication

Thank you for your prompt attention to this matter. If you have any questions, please contact LogicMonitor Support or your Customer Success Manager (CSM).

SNMP Trap Credentials on Resource Properties Enhancement
Hello LM Community! Just wanted to highlight this enhancement, released recently in EA Collector 34.100, to support the use of SNMP trap credentials on resource properties. When using this collector version or newer, you can add snmptrap.* properties at the resource/group level for the collector to decrypt the trap messages received from monitored devices. The credentials are used in the following order:
1. The collector first checks for the set of snmptrap.* credentials in the host properties.
2. If the snmptrap.* credentials are not defined, it looks for the set of snmp.* in the host properties.
3. If neither snmptrap.* nor snmp.* properties are defined, it looks for the set of eventcollector.snmptrap.* present in the agent.conf settings.

More details can be found in the following articles in our documentation:
- SNMP Trap Monitoring: https://www.logicmonitor.com/support/logicmodules/eventsources/types-of-events/snmp-trap-monitoring
- EA Collector 34.100 Release Notes: https://www.logicmonitor.com/release-notes/ea-collector-34-100

Common ConfigSource Documentation
Hello LogicMonitor community, I wanted to drop this here as it's been something in the works for a long time. As I am sure many of you are aware, our development team has been working on, and has actually released, a new set of ConfigSources called "Common Configs". These have been in our core repository since late 2021, with support for more manufacturers and more features added to these LogicModules over the last two years. The time has come: we have been able to release a support document around these Common Configs covering their requirements, the optional parameters you can add to help them succeed, and the list of all currently released modules in this suite. https://www.logicmonitor.com/support/common-config-monitoring

LM API v3 Python boilerplates
In light of the coming sunset of v1 and v2 of the LM API, I decided to start updating some of my own scripts. I ran successful tests of these Python boilerplates with v3: https://github.com/krshearman/LMAPI_v3 I hope it helps someone get started using LM API v3!

kendall (LM Champion)

Monitoring folders on Windows servers
I was recently asked by a customer if it was possible to monitor the size of a folder, or the file count in a folder, on a Windows server. Well there sure is! <whistles> YO, UNC Monitor-, come on down! UNC Monitor- is part of the Core DSs, the DataSources installed by default when LogicMonitor is first deployed. As seen in the UNC Monitor description section, there is a way to do this. Great. OK, how do I do it? Hmm? Looking at the next section of the DS, Technical Notes: "Add an instance manually." Oooohh-k. But how do I perform this specific voodoo? Maybe if I look further down in the DS? OK, there is the Groovy script; what do I get when I run the Test Script? So I do have 5 folders that contain 41 files using a total of 7,013 KB. Great, that's some info, but still not what I need for a path, and I still don't know how to "add the instance manually." Maybe there is some documentation on how to do this? Oh yeah, it's right here. Step 2 states: "click the down arrow icon button located next to the manage button for that device. From this dropdown menu, select 'Add Other Monitoring'." So now I know the steps I need to take, and I know that there are folders that are shared. How do I know what the shared folders are without having to log onto the server? That's where the debug facility comes in to help. The easiest way to access debug is to open any raw data screen and click on Debug. You will be presented with all the available commands. To assist in finding the available shared folders, we will be using !wmi. If you aren't familiar with any command, just type the command and you will be presented with information. In this case I want to show the shared folders on this server. With a bit of knowledge of Win32 classes we can find this info (I'll cover Win32 classes in another post). Now I have everything I need to get this folder monitored through UNC, RIGHT?!? RIGHT?!?
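As a side note, once you have Win32_Share rows back from the !wmi query, turning them into UNC paths is easy to script. A Python sketch; the server name and the dict-shaped rows are made up for illustration, and the real !wmi output format may differ:

```python
def unc_paths(server, wmi_rows):
    """Turn Win32_Share rows (represented here as dicts) into the UNC paths
    you would paste into the manually added UNC Monitor- instance.
    Hidden/administrative shares (names ending in '$') are skipped."""
    return [rf"\\{server}" + "\\" + row["Name"]
            for row in wmi_rows
            if not row["Name"].endswith("$")]

rows = [{"Name": "C$", "Path": "C:\\"},
        {"Name": "Public", "Path": r"C:\Users\Public"}]
print(unc_paths("FILESERVER01", rows))  # only \\FILESERVER01\Public survives
```

If you do want the admin shares monitored too, just drop the filter.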
I'm looking at the device and there is no DS for UNC showing. How do I add it manually if I don't have the DS? Ugggh. Since I have my handy-dandy info from the documentation, I know what I need to do. Once you click on that, you get a form: add a name as you want it to show up on the instance list, then add a path from the list that was obtained by leveraging the debug facility. Once both of those are filled in, hit save twice and tada, you get your new instance. More importantly, you now have alert tuning that you can perform on this specific UNC drive.

Docker Collector Deployment Improvements
This post will provide additional configurations not covered in the LogicMonitor "Installing the Collector in a Container" support guide. Docker-specific documentation can be found here: https://docs.docker.com/engine. This post will not cover Docker-specific configurations such as creating a network.

If you follow the support guide linked above to deploy a collector in a Docker container, you will be able to monitor resources using the Docker collector. However, if you have deployed a collector in a Docker container and only followed the support guide, you may have noticed the following items:
1. The collector container is deployed to the default bridge network, which is not recommended for production use cases.
2. The collector container by default is not set to perform any restarts.
3. The collector container is not assigned an IP upon startup, which affects how LogicMonitor identifies the collector container resource if it is restarted with a different IP.
4. The collector container is not provisioned to handle ingestion of syslog and NetFlow data.
5. When viewing the collector in the "Collectors" tab, the collector is not linked to a resource.
6. The "Device Name" of the collector is the container ID, and a meaningful container hostname would be preferred.
7. The collector is not listed anywhere in the Resource Tree, including "Devices by Type/Collectors".
8. If you look at the "Collectors" dashboard, the collector container metrics are not present.

Screenshot Showing the Docker Collector Not Linked to a Resource
Screenshot Showing Docker Collector Nowhere to be Found in Resource Tree
Screenshot Showing Missing Docker Collector Metrics in "Collectors" Dashboard

Improvements to the Docker container collector installation: the fixes for the items listed above are simple to implement.
Here's an example of a Docker command to deploy the collector in a Docker container, created using only the support guide linked above:

    ### Example Docker command built using the support guide
    sudo docker run --name 'docker_collector' -d \
      -e account='YOUR_PORTAL_NAME' \
      -e access_id='YOUR_ACCESS_ID' \
      -e access_key='YOUR_ACCESS_KEY' \
      -e collector_size='medium' \
      -e description='Docker Collector' \
      logicmonitor/collector:latest

Items 1, 2, 3, 4, and 6 in the list above are handled with additional Docker flags added to this example. Let's improve on the support guide example to resolve them:
- Item 1 requires defining a network for the Docker container. This post assumes you already have a Docker network defined that you will attach the container to. The code example uses a network name of "logicmonitor".
- Item 2 requires defining a Docker container restart policy. Docker has several options for the restart policy, so adjust the code example to suit your environment's needs.
- Item 3 requires defining an IP for the Docker container, valid for the network defined in your environment. The code example uses an IP of "172.20.0.7".
- Item 4 requires defining port forwarding between the Docker host and the Docker container. The code example uses the default ports for syslog and NetFlow; adjust to match the ports used in your environment.
- Item 6 requires defining a meaningful hostname for the Docker container.

Here are the improvements added to the support guide code example to resolve items 1, 2, 3, 4, and 6.
    ### Improved to define container network (item 1), restart policy (item 2),
    ### IP (item 3), port forwarding for syslog/NetFlow (item 4), and hostname (item 6)
    sudo docker run --name 'docker_collector' -d \
      -e account='YOUR_PORTAL_NAME' \
      -e access_id='YOUR_ACCESS_ID' \
      -e access_key='YOUR_ACCESS_KEY' \
      -e collector_size='medium' \
      -e description='Docker Collector' \
      --network logicmonitor \
      --restart always \
      --ip 172.20.0.7 \
      -p 514:514/udp \
      -p 2055:2055/udp \
      --hostname 'docker_collector' \
      logicmonitor/collector:latest

Here --network resolves item 1, --restart item 2, --ip item 3, the two -p flags item 4 (syslog on 514/udp, NetFlow on 2055/udp), and --hostname item 6. (Note: comments cannot follow the line-continuation backslashes, so the flag-to-item mapping is given here rather than inline.)

After you have deployed the collector with the additional Docker configurations to handle items 1, 2, 3, 4, and 6, items 5, 7, and 8 are resolved by adding the Docker container as a monitored resource in the LogicMonitor portal. Use the IP of the Docker container when adding the collector into monitoring. Adding the Docker container as a monitored resource will:
- Resolve item 5 by linking the collector "Device Name" to the monitored Docker container resource
- Resolve item 7 by adding the Docker container to the Resource Tree and the "Devices by Type/Collectors" group
- Resolve item 8, as the "Collector" DataSources will be applied to the monitored Docker container and the metrics will be displayed in the Collectors dashboard

Screenshot Showing the Docker Collector Linked to a Resource
Screenshot Showing Docker Collector in Resource Tree
Screenshot Showing Docker Collector Metrics in "Collectors" Dashboard

Feature enhancement: Preserve changes to alert thresholds when updating customized LogicModule
LogicMonitor continues to roll out enhancements to the Module Toolbox to make viewing, updating, installing, and managing LogicModules easier than before. In release v186, it is now possible to preserve alert thresholds when updating modules to newer versions. This enhancement adds to the current ability to preserve Active Discovery filters, AppliesTo, Collection Interval, Discovery Interval, Display Name, and Group during module updates. These preservation options give you the benefit of our LogicModule team's enhancements and fixes combined with your environment-specific customizations. To preserve your module's alert thresholds when updating via the Module Exchange, use the toggle on the right pane of the "Final Review" update window. Once you finish the module update, your alert thresholds will remain. Happy updating! Support article reference: https://www.logicmonitor.com/support/modules-management

LogicMonitor, Groovy, Python and APIs Ohhh my
You just started your new job and they have LogicMonitor as their observability tool. That's great! One huge step up from other organizations you've worked with. The last person who worked on the platform did a pretty good job of keeping up with it, but you see some improvements that would really help with the organization of the portal. You want to create multiple folder structures: the first group of nested folders will be based on equipment location, the second on equipment task. This will allow you to give additional people access to the portal while limiting their visibility to only the things in their location or job function. Yup, sounds great; your coworkers like it and, more importantly, your manager approves. Now how are you going to assign 700 pieces of equipment at 30 different sites to the correct folder with the correct permissions? Oh yeah, you've got a secret weapon: APIs!

The goal of this and subsequent posts is to demystify APIs. What are they? How are they used? More importantly, how do they benefit you and, of course, your organization? This will be the first installment of community posts discussing APIs and how they are used to perform tasks within the LogicMonitor portal. If you are like I was, just dipping your toes into APIs, and have started watching videos on YouTube or reading about the power of APIs, you might be a bit overwhelmed. That's perfectly fine; it's totally understandable! APIs are a tool that will help you perform repetitive tasks within LogicMonitor or any other application that provides APIs. Since LogicMonitor is a SaaS application, you don't have a database where you can perform queries or other actions, so queries and other tasks are replaced with API calls. As you might have learned growing up, knowing what something is can make it less scary. So, what does API mean? API is an acronym for Application Programming Interface. See, not that scary. Yeah, right.
I can tell you that the vast majority of people who use APIs on a regular basis have no idea what the acronym stands for. What really is an API? An API, more specifically a REST API, is an instruction that you can use with an application to do something. The word DO is the key here; better said, DO is the verb. The verbs used for API calls are as follows:
- GET allows you to get some information from the application
- POST allows you to create something within the application
- DELETE allows you to delete something within the application
- PUT allows you to update something within the application
- PATCH allows you to update something within the application

What's that I'm hearing through time and network space? Yup, I'm hearing: WHOA WHOA WHOA, PUT and PATCH both update something? Why have two verbs that do the same thing? Well, even though both perform a similar function, each does it in a different manner:
- PUT replaces the entire item that you are updating with new data
- PATCH only replaces a portion of the item you are updating

I will go into more detail on each verb in future installments of this series. If you are still with me... Really? I haven't scared, bored or frustrated you with my ramblings yet? OK, so you are with me. Cool. Why should you use up some of the storage in your brain to learn about APIs? APIs will simplify your life while using not only LogicMonitor but any application that provides you API access. Here are a couple more real-world examples of how APIs can be leveraged to automate tasks within LogicMonitor. A customer has several hundred stores across the United States.
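Before the examples, the PUT-versus-PATCH distinction above is easy to see in miniature. A Python sketch with a toy resource dict, not a real LogicMonitor payload:

```python
def put(resource, body):
    """PUT semantics: the stored object becomes exactly the body you sent."""
    return dict(body)

def patch(resource, body):
    """PATCH semantics: only the fields you sent change; the rest survive."""
    return {**resource, **body}

device = {"name": "nyc-sw-01", "description": "core switch", "disableAlerting": False}
print(put(device, {"name": "nyc-sw-01"}))        # description and disableAlerting are gone
print(patch(device, {"disableAlerting": True}))  # only disableAlerting changed
```

This is why a careless PUT that omits a field can silently wipe it, while PATCH is the safer choice for single-field updates.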
The customer wants the stores placed in nested folders:
- Store folder: contains all devices at the store
- District folder: multiple stores make up a district
- State folder: all districts within a state
- Region folder: multiple states make a region

Trying to create this kind of structure manually would take days and be prone to errors. You can create the nested group structure by leveraging APIs with your favorite scripting language (I prefer Python) and a CSV file containing store information. While creating the group structure you can also add any necessary custom properties.

Another real-world example of leveraging APIs: a customer has a very large user list within LogicMonitor, almost 800 users. These users can change roles and/or have their access suspended. Going through all 800 users to set roles, change user groups and update account status can be daunting and, honestly, a brain-melting task that you probably wouldn't wish even on the guy who stole that prime parking spot from you at the mall during Christmas shopping season. In comes super API, and with a bit of Python coding you can perform this whole task automagically. PLUS, since this task was performed using a script, you will be able to do it repeatedly without breaking a sweat.

In my next installment I will show how to start using APIs. Since it's an unspoken rule that all guides need to provide links to relevant information, here are my hopefully useful links:
- LogicMonitor Swagger: a listing of all actions that can be performed using the LogicMonitor API.
- Learning Python: if you are new to coding, I recommend Python since it is OS agnostic and has a massive user base and wealth of information.
- Learning Groovy: here you thought groovy was just a word from the 60s and 70s. Au contraire, mon frère! Groovy is the primary scripting language used within LogicMonitor. We will go deeper into the Groovy language later on.
- LogicMonitor: how could we do anything without including a link to the best observability platform around?

PenYa (Former Employee)

A DataSource to Troubleshoot ERI Merging
One of the most common behaviours noticed in topology maps is ERI merging. This is caused when two or more devices share the same identifier (ERI). The example I always like to give when I'm teaching topology mapping is the word "football". To a European like myself, this is a game played with your feet; however, in other parts of the world this is an altogether different game. Now let's imagine we have a topology map connecting various sports together: what would show up if the map connects "football" to "basketball"? Would it be the kicking game or the throwing game? Well, in LogicMonitor, it would be effectively indeterministic. The two games would merge into a single object in the map (they merge into one of the resources at "random"). A key indicator of merging is one device showing up as another device in the topology map.

Luckily, there are a few out-of-the-box ways to overcome this merging: the topo.blacklist and topo.namespace properties. If you're interested in finding out more about merging and how these are used, I have created a LearningByte which you can watch for free in LM Academy (you will need a free Academy account first): https://academy.logicmonitor.com/topology-mapping-toponamespace-topoblacklist/1329206

In order to use the blacklist property, you must know which ERIs are being merged. This can be discovered in the UI through a manual comparison of ERIs between resources (you can export to Excel and process there if you'd like); however, this can be a cumbersome process and doesn't reveal how many resources are merged. That's where my new ERIMergeTroubleshooter comes in. Using the LogicMonitor API to run the !erimergelist and !erimergedetail collector debug commands, it creates one instance for each merged ERI and a subsequent instance-level property listing which other resources merge with that ERI. For example, we can see that this "Router" resource has merged with a "Server" resource.
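The underlying merge condition, two or more resources advertising the same ERI, can be sketched as a simple grouping pass. A Python sketch with toy data, not the real !erimergelist output:

```python
from collections import defaultdict

def find_merged_eris(resource_eris):
    """Map each ERI to the resources that advertise it, and return only the
    ERIs claimed by two or more resources -- the candidates for topo.blacklist."""
    by_eri = defaultdict(list)
    for resource, eris in resource_eris.items():
        for eri in eris:
            by_eri[eri].append(resource)
    return {eri: sorted(owners) for eri, owners in by_eri.items() if len(owners) > 1}

inventory = {
    "Router": ["mac:00:11:22:33:44:55", "ip:10.0.0.1"],
    "Server": ["mac:00:11:22:33:44:55", "ip:10.0.0.9"],
    "Switch": ["ip:10.0.0.2"],
}
print(find_merged_eris(inventory))  # the shared MAC merges Router and Server
```

This is essentially what the troubleshooter automates at portal scale, with one instance per merged ERI.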
Applying the troubleshooter DataSource immediately reveals which particular ERI has merged, and which resource it has merged with (this is a trivial example; most situations are more complex). If you'd like to try out this custom LogicModule, it can be imported now from the LM Exchange (locator: F26PEJ); it would be great to hear some feedback from real-world testing!

Caveats:
- By default, this applies to all resources in the portal, so users should modify the AppliesTo if they require testing on specific devices only
- The module has not been tested against or developed for chained ERI merging
- API credentials are to be added as device properties based on the technical notes

Thanks!

Chris_Wallis (LM Champion)

LM Synthetics
What is synthetic monitoring? Synthetic monitoring is an active approach to testing a website or web service by simulating visitor requests to test for availability, performance, and function. It uses emulation or scripted recordings of user interactions or transactions to create automated tests. These tests simulate a critical end-user pathway or interaction on a website or web application. The results give early insight into the quality of the end-user experience by reporting metrics on availability and latency and identifying potential issues.

LM Synthetics monitoring leverages Selenium, a trusted open-source browser automation and scripting tool, to send Synthetics data to LogicMonitor. LogicMonitor uses the following Selenium tools for the Synthetics monitoring solution:
- Selenium Grid: proxy server that allows tests to be executed on a device
- Selenium IDE Chrome extension: browser recording tool that allows you to create and run the synthetics checks
The following diagram illustrates how LogicMonitor leverages Selenium to collect Synthetics data.

Benefits:
- Find out as soon as possible if any site or web service is having issues
- Identify and solve problems quickly to prevent widespread performance issues
- More uptime and proper working of your website (important factors for a website are page loading speed, performance and uptime)
- Track metrics over time, viewing trends for further analysis

Ping checks:
- A simple test to see if a website is up and running
- Alerts can be triggered based on whether one or many locations are unable to ping the site, ensuring alerts are generated accurately
- The checks can be triggered to run as often as every 1 minute

Website checks:
- Standard web check: displayed as "web checks" in the LM interface. Performed by one or more geographically dispersed checkpoint locations, hosted by LM and external to your network. The overall purpose is to verify that your website is functioning appropriately for users outside of your network and to adequately reflect those users' experiences.
- Internal web check: performed by one or more collectors internal to your network. The purpose is to verify that websites and cloud services (internal or external to your network) are functioning appropriately for users within your private network.

Selenium checks: Selenium synthetic checks are automated tests that use the Selenium web testing framework to simulate user interactions with a website or web application. These checks can be used to verify the functionality and performance of a website, such as checking that links work properly, forms submit correctly, and pages load quickly. The tests can be run on a regular basis to ensure that the website is functioning as expected and to catch any issues early on. They can measure how long it takes to complete a specific workflow or task, such as logins, page loading, verifying specific text, or validating that users can input data. Selenium is the capstone item: these checks can exercise all end-user access. This then segues into "we know there is a problem, now where is the problem?", which is where APM and Logs come in. For example, we get an alert that the login failed; now we would use other tools to figure out why and where. With the metrics, you will be able to tell exactly which check failed in the workflow process.

Selenium Synthetics Dashboard / Adding a Synthetics Website Test

Common use cases: MSPs have tons of customers with different portals, and need to keep track of what's going on with all of them and whether they are meeting their SLAs for uptime. Any customer with a website they want to ensure is accessible all the time.
Internal examples would be M365 or a proprietary tool they built themselves. An external example is an e-commerce website: clicking on a promo link, completing the checkout sequence. You can also see trends over time and spot busy parts of the day where a load balancer might be beneficial. DevOps teams might be interested in this and in how it correlates with the traces or push metrics being sent; they can use Synthetics as a piece of the puzzle to better understand the whole picture. App teams may want to verify that the application's login process is functioning correctly.

Resources
https://www.logicmonitor.com/support/services/adding-managing-services/adding-a-web-service
https://www.logicmonitor.com/blog/ping-checks-vs-synthetics-vs-uptime#ping-checks-vs-synthetics
https://www.logicmonitor.com/support/selenium-webchecks
https://www.logicmonitor.com/support/selenium-synthetics-setup
https://www.logicmonitor.com/support/lm-synthetics-overview

Cameron_Compton, 2 years ago, LM Champion

Retrieving data from an external API via a Groovy Scripted module
1) Using Expert mode, define a resource as the hostname of the API in question, e.g. api.someapinamehere.com. For the purposes of this example, I'm going to make a call to worldtimeapi.org for data on the timezone America/Chicago to determine whether daylight saving time is in effect.
2) Next, choose an available Collector and group (optional) and click Save. Note: normally you would also add your API user name (if any) and API token as properties, but in this case it's not necessary.
3) Next, go to Settings -> LogicModules -> DataSources and click Add -> DataSource.
4) Follow the steps below to adjust the DataSource.
5) Insert this script in the text box entitled Groovy Script under Collector Attributes:

import com.santaba.agent.groovyapi.http.*;
import groovy.json.*;

// Define host as the name of the resource added, in this case worldtimeapi.org
def host = hostProps.get("system.hostname");
// Define the path to the endpoint
def endpointUrl = "/api/timezone/America/Chicago"
// Define the port. SSL would require 443.
def port = 80
// Open the connection
def httpClient = HTTP.open(host, port)
// Issue the GET request
def response = httpClient.get(endpointUrl)
// Capture the status code
def statusCode = httpClient.getStatusCode()
// Close the connection
httpClient.close()
// Extract values or handle the error
if (statusCode == 200) {
    response = new JsonSlurper().parseText(httpClient.getResponseBody())
    if (response['dst'] == true) {
        status = 1
        println("dstStatus=${status}")
    } else {
        status = 0
        println("dstStatus=${status}")
    }
} else {
    println("Your HTTP get request was not successful. StatusCode=${statusCode}")
}

6) Add a Normal datapoint named dstStatus.
7) Save the module.
8) You will see data painted in your portal for the device worldtimeapi.org under the name of the DataSource you created.
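If you want to sanity-check the parsing logic outside a Collector, the Groovy script's response handling can be sketched in Python. This is a minimal sketch of the same logic, not LM code; the sample JSON is a trimmed, hypothetical version of a worldtimeapi.org response body, parsed offline rather than fetched live.

```python
import json

# Python sketch of the Groovy script's response handling above: read the
# JSON body, check the "dst" flag, and emit the dstStatus datapoint in the
# key=value form the Collector parses, or report a non-200 status code.
def dst_datapoint(body, status_code):
    if status_code != 200:
        return f"Your HTTP get request was not successful. StatusCode={status_code}"
    dst = json.loads(body)["dst"]
    return f"dstStatus={1 if dst else 0}"

# Trimmed, hypothetical sample of the endpoint's response body.
sample = '{"timezone": "America/Chicago", "dst": true}'
print(dst_datapoint(sample, 200))  # dstStatus=1
print(dst_datapoint(sample, 503))  # Your HTTP get request was not successful. StatusCode=503
```

The `dstStatus=<value>` line is what the Normal datapoint in step 6 interprets.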
You can learn more about this in this recently updated support doc: https://www.logicmonitor.com/support/terminology-syntax/scripting-support/access-a-website-from-groovy