Making the Most of LM Collectors: March Product Power Hour Recap
Overview

March’s Product Power Hour zoomed in on one of the most critical components of the LM platform: the Collector. This session was all about helping you harness the full potential of LM Collectors—from deployment tips to tuning best practices. Our product experts walked through how to scale monitoring, improve data collection, and boost reliability across complex environments. With hands-on guidance, live demos, and direct answers to your questions, this session was a must-watch for any practitioner looking to level up their collector game.

Key Highlights

⭐ Collector Best Practices: Practical guidance on sizing, scaling, and redundancy planning to optimize performance and ensure data continuity.
⭐ Auto-Balancing & Failover: How to configure Collectors for high availability using Auto-Balancing and Collector Groups.
⭐ Troubleshooting Tools: Demo of new collector diagnostics and log features to help quickly isolate and resolve issues.
⭐ Collector Configuration Management: Centralized tools for easier updates and streamlined deployments at scale.

Q&A

Q: How many devices can a Collector handle?
A: It depends on specs and polling intervals—monitor CPU, memory, and poll duration for optimization.

Q: What is the best practice for Collector failover?
A: Use Collector Groups and enable Auto-Balancing for seamless failover.

Q: Are there plans to enhance Collector management?
A: Yes! Improvements are coming to simplify deployment and updates at scale.

Q: Can we get alerts on Collector health?
A: You bet. Alert on collector status, queue depth, and failures to stay ahead of issues.

Customer Call-outs

⭐ “This makes managing distributed environments so much easier.”
⭐ “Collector Group auto-balancing was a game changer for us.”
⭐ “Loving the detailed diagnostics—helps our ops team move fast.”

What’s Next

🚀 1H Launch Webinar | April 3 – Level Up Your IT Universe
Get a front-row seat to LogicMonitor’s biggest innovations of the first half of the year. This session spotlights AI-powered observability, enhanced workflows, and powerful platform upgrades.

⚡ Product Power Hour | April 24 – 1H Launch + Logs Leveled Up!
Dive into the latest updates across Logs, AI, and LM’s observability suite, along with fast-paced demos and real-world use cases, in this log-focused edition of Product Power Hour.

🌍 Elevate Community Conferences
Our flagship in-person event series is back! Connect with peers, attend expert-led sessions, and get hands-on product experience. Elevate 2025 will showcase the latest innovations in AI-powered observability, empowering enterprises to optimize their modern data centers.
Dallas – April 30
Sydney – May 29
London – June 25

Pre-Elevate LM Training | LIVE
Already attending Elevate? Join us in Dallas and London a day early to earn your first Logs Badge through immersive, instructor-led training and hands-on labs. Perfect for new users or anyone looking to sharpen their logging skills. Register today!
Dallas – April 29
London – June 24

🍽️ User Group Dinners
Connect in person with other LM users in your city over dinner and real talk. Share wins, swap stories, and grow your network – RSVP here:
London – April 1
Dallas – April 29
Chicago – May 6
Nashville – May 13

💻 Virtual User Groups
AMER East – June 3
AMER West – June 5
EMEA – June 10
APAC – June 12

Additional Resources

If you missed any part of the session or want to revisit the content, we’ve got you covered:
Review the slide deck here
Want to see the full session? Watch the recording below ⬇️

Best Practices for Practitioners: Collector Management and Troubleshooting
Overview

The LogicMonitor Collector is a critical Software as a Service (SaaS) component designed to collect performance metrics across diverse IT infrastructures. It provides a centralized, intelligent monitoring solution that gathers data from hundreds of devices without requiring individual agent installations. By encrypting and securely transmitting data via SSL, the Collector offers a flexible approach to infrastructure monitoring that adapts to complex and diverse network environments.

Key Principles

- Implement a strategic, unified approach to infrastructure monitoring that provides comprehensive visibility across diverse environments
- Ensure collectors are lightweight, efficient, and have minimal performance impact on monitored resources
- Maintain robust security through encrypted data transmission and carefully managed credential handling
- Design a monitoring infrastructure that can dynamically adjust to changing network and resource landscapes
- Regularly review, tune, and update collector configurations to maintain optimal monitoring performance

Comprehensive Collector Management

Collector Placement Strategies

Strategic Location
- Install collectors within the same network segments as monitored resources
- Choose servers functioning as syslog or DNS servers for optimal placement
- Avoid monitoring across wide-area internet connections, firewalls, or NAT gateways

Sizing Considerations
- Select the appropriate collector size based on the expected monitoring load
- Consider available memory and system resources
- Understand collector type limitations (e.g., Windows collectors can monitor both Windows and Linux devices, while Linux collectors cannot monitor Windows devices)

Network and Security Configuration
- Configure unrestricted monitoring protocols (SNMP, WMI, JDBC)
- Implement NTP synchronization for accurate time reporting
- Use proxy servers if direct internet connectivity is restricted
- Configure firewall rules to allow necessary collector communications

Collector Groups
Organize collectors logically:
- By physical location
- By customer (for MSPs)
- By environment (development, production, etc.)
Utilize Auto-Balanced Collector Groups (ABCG) for dynamic device load sharing.

Version Management
- Schedule regular updates
- Choose appropriate release types (MGD, GD, EA)
- Maintain update history for tracking changes
- Use the downgrade option if experiencing version-specific issues

Logging and Troubleshooting

Log Management
Adjust log levels strategically:
- Trace: most verbose (use sparingly)
- Debug: detailed information for troubleshooting
- Info: default logging level
- Warn/Error: issue-specific logging
Configure log file retention in wrapper.conf, and send logs to LogicMonitor support when collaborating on complex issues.

Troubleshooting Specific Environments

Linux Collectors
- Check Name Service Caching Daemon (NSCD) configuration
- Verify SELinux settings; use getenforce or sestatus to check SELinux status
- Temporarily set SELinux to Permissive mode for debugging

Windows Collectors
- Ensure the service account has "Log on as a service" rights
- Check local security policy settings
- Resolve Error 1069 (logon failure) by updating user rights

Advanced Techniques

Credential Management
Integrate with Credential Vault solutions:
- CyberArk Vault
- Delinea Vault
Use dual account configurations for credential rotation.

Collector Debug Facility
- Utilize the command-line interface for remote debugging
- Run debug commands to troubleshoot data collection issues

Performance and Optimization
- Regularly monitor collector performance metrics
- Tune collector configuration based on monitoring load
- Disable the Standalone Script Engine (SSE) if memory is constrained
- Implement proper log and temporary file management

Maintenance Checklist
✅ Regularly update collectors
✅ Monitor performance metrics
✅ Review collector logs
✅ Validate data collection accuracy
✅ Test failover and redundancy configurations
✅ Manage Scheduled Down Time (SDT) during maintenance windows

Conclusion

Successful LogicMonitor Collector management is a dynamic process that requires strategic planning, continuous optimization, and a deep understanding of your specific infrastructure needs. The key to effective monitoring lies in strategically placing collectors, configuring them appropriately, and regularly reviewing their performance and configuration. By following these best practices, organizations can create a robust, adaptable monitoring strategy that provides comprehensive visibility into their IT ecosystem.

Additional Resources

Management and Maintenance:
- Viewing Collector Events
- Managing Collector Logs
- Adding SDT to Collector
- Adding Collector Group
- Collector Version Management
- Integrating with Credential Vault
- Integrating with CyberArk Vault for Single Account
- Integrating with CyberArk Vault for Dual Account

Troubleshooting:
- Troubleshooting Linux Collectors
- Troubleshooting Windows Collectors
- Collector Debug Facility
- Restarting Collector

Best Practices for Practitioners: Collector Installation and Configuration
Overview

The LogicMonitor Collector is a Software as a Service (SaaS) component that collects the data required for IT infrastructure monitoring in the LM Envision platform. Installed on Linux and/or Windows servers, it gathers performance metrics from selected devices across an organization's IT stack, whether it’s on-prem, off-prem, or in the cloud, using standard monitoring protocols. Unlike traditional monitoring approaches, a single Collector can monitor hundreds of devices without requiring individual agent installations on each resource.

The Collector's core strength lies in its proprietary built-in intelligence, which automatically recognizes device types and applies pre-configured Modules that define precise monitoring parameters specific to that device or platform. By encrypting collected data and transmitting it securely to LogicMonitor's servers via SSL, the Collector provides a flexible and centralized approach to infrastructure monitoring. This design allows organizations to strategically place Collectors within their network, enabling comprehensive performance visibility while minimizing monitoring overhead and complexity, with monitoring capacity that adapts to the complexity of the monitored resources and the specific metrics being collected.

Key Principles

LogicMonitor Collector deployment is guided by principles of efficiency, scalability, and intelligent monitoring:
- Centralized SaaS monitoring through strategic collector placement
- Simplified device discovery and metric collection
- Minimal performance impact on monitored resources
- Secure, encrypted data transmission

Using the LogicMonitor Collector

Recommended for:
- Complex IT infrastructures with multiple network segments
- Organizations requiring comprehensive, centralized monitoring
- Environments with diverse device types and monitoring requirements

Not recommended for:
- Extremely small environments with few devices
- Networks with strict segmentation preventing central data collection
- Environments with severe network connectivity limitations

Recommended Installation Best Practices

Collector Placement and Sizing
- Install collectors close to or within the same network segments as monitored resources
- Choose servers that function as syslog or DNS servers for optimal placement
- Select the appropriate collector size based on the expected monitoring load
- Consider memory and system resources when sizing collectors
- Avoid monitoring resources across wide-area internet connections, firewalls, or NAT gateways
- Keep in mind that Windows collectors can monitor BOTH Windows and Linux devices, while Linux collectors can only monitor Linux devices

Recommended Disk Space
- New installation: ~500 MiB
- Logs: up to 800 MiB
- Temporary files: <1500 MiB
- Report cache: <500 MiB
- NetFlow (if enabled): up to 30 GiB
- Total recommended: <3.5 GiB (without NetFlow)

Network and Security Configuration
- Ensure outgoing HTTPS (port 443) connectivity to LogicMonitor servers
- Configure unrestricted monitoring protocols (e.g., SNMP, WMI, JDBC)
- Use proxy servers if direct internet connectivity is restricted
- Implement NTP synchronization for accurate time reporting
- Configure firewall rules to allow necessary collector communications

Windows Collector Installation
Recommended installation methods:
- Interactive InstallShield Wizard
- PowerShell silent installation
- Downloaded directly or bootstrapped via CDN
Service account considerations:
- For monitoring Windows systems in the same domain: use a domain account with local admin permissions
- For monitoring systems in different domains: use a local administrator account
- Ensure "Log on as a service" permissions are granted

Linux Collector Installation
Prerequisites:
- Bourne shell
- sudo package installed (for non-root installations)
- vim-common package (for the xxd binary in newer versions)
Recommended installation user: the default logicmonitor user
Set executable permissions and install via the binary.

Container Deployment
Supported Kubernetes services:
- Microsoft Azure Kubernetes Service (AKS)
- Amazon Elastic Kubernetes Service (EKS)
- Google Kubernetes Engine (GKE)
Limitations:
- Full package installation only
- Linux-based collectors cannot monitor Windows WMI

Performance and Optimization
- Monitor collector performance metrics regularly
- Tune collector size and configuration based on monitoring load
- Disable the Standalone Script Engine (SSE) if memory is constrained
- Implement proper log and temporary file management
- Use container deployments for Kubernetes environments

Best Practices Checklist
✅ Select strategically located servers for collector installation
✅ Choose the appropriate collector size based on expected monitoring load
✅ Configure reliable network connectivity and firewall rules
✅ Use non-root users for Linux collector installations
✅ Implement NTP time synchronization
✅ Monitor collector performance metrics
✅ Regularly update collectors to the latest stable versions
✅ Set collector “Down” notification chains for proper collector down alerting

Monitoring and Validation
- Verify the collector connection in the LogicMonitor portal after installation
- Monitor collector CPU utilization, disk usage, and performance metrics
- Periodically review collector logs for potential issues
- Validate data collection accuracy and completeness
- Utilize and test collector failover and redundancy configurations

Conclusion

LogicMonitor Collectors provide a powerful, flexible approach to infrastructure monitoring, enabling organizations to gain comprehensive visibility with minimal operational overhead. By following best practices in placement, configuration, and ongoing management, IT teams can create a robust monitoring strategy that adapts to evolving infrastructure needs. Successful collector deployment requires careful planning, ongoing optimization, and a thorough understanding of your specific infrastructure requirements. Regularly reviewing and adjusting your monitoring approach will ensure continued effectiveness and performance.

Additional Resources
- Collector Capacity
- Collector Versions
- Adding Collector
- Installing Collectors in Silent Mode
- Installing the Collector in a Container
- Configuring WinRM for Windows Collector
- agent.conf Collector Settings
- Collector Script Caching

LogicMonitor Collector Ports to be used while monitoring end-user devices
Review a full list of protocols and ports required for monitoring user activity. This post provides information regarding the ports, protocols, use cases, and configuration settings (where applicable) that are generally used with the LM platform. Using the "<port>/<protocol>" format is a common and standardized way to indicate network ports along with their associated protocols, and it provides a clear and concise representation of the ports and protocols discussed below.

Inbound communication:

Port | Protocol | Use Case | Configuration Setting
162 | UDP | SNMP traps received from target devices | eventcollector.snmptrap.address
514 | UDP | Syslog messages received from target devices | eventcollector.syslog.port
2055 | UDP | NetFlow data received from target devices | netflow.ports
6343 | UDP | sFlow data received from target devices | netflow.sflow.ports
7214 | HTTP/Proprietary | Communication from custom JobMonitors to Collector service | httpd.port
2056 | UDP | JFlow data received from target devices |

Outbound communication:

Port | Protocol | Use Case | Configuration Setting
443 | HTTP/TLS | Communication between the Collector and the LogicMonitor data center (port 443 must be permitted to access LogicMonitor’s public IP addresses; if your environment does not allow the Collector to connect directly with the LogicMonitor data centers, you can configure the Collector to communicate through a proxy) | N/A
Other non-privileged | SNMP, WMI, HTTP, SSH, JMX, etc. |
Communication between the Collector and target resources assigned for monitoring | N/A

Internal communication:

Port | Protocol | Use Case | Configuration Setting
7211 | Proprietary | Communication from the Watchdog and Collector services to the OS proxy service (sbwinproxy/sblinuxproxy) | sbproxy.port
7212 | Proprietary | Communication from the Watchdog service to the Collector service | agent.status.port
7213 | Proprietary | Communication from the Collector service to the Watchdog service | watchdog.status.port
15003 | Proprietary | Communication between the Collector service and its service wrapper | N/A
15004 | Proprietary | Communication between the Collector service and its service wrapper | N/A

Destination ports:

Port | Protocol | Use Case
135 | TCP | DCOM's initial communication and RPC (Remote Procedure Call) endpoint mapping; DCOM often uses dynamically allocated ports in the 49152–65535 range
22 | TCP | SSH connections
80 | TCP | HTTP connections to target devices
443 | TCP | HTTPS connections to target devices
25 | TCP | SMTP connections to mail servers
161 | UDP | SNMP polling of target devices
1433 | TCP/UDP | Microsoft SQL Server
1434 | TCP/UDP | Depends on the application using the port; SQL Server uses TCP for communication with clients, while the SQL Server Browser service uses UDP
1521 | TCP/UDP | Listener for database connections from Oracle clients
3306 | TCP/UDP | MySQL
5432 | TCP | PostgreSQL
123 | UDP (NTP) | Connection to an external NTP server
445 | TCP | Server Message Block (SMB) protocol over TCP/IP

The LM Collector supports a number of other monitoring protocols, which can be incorporated into this list as necessary based on your preferences.
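As an illustrative companion to the inbound table, here is a small Python sketch (not an official LogicMonitor tool) that renders "<port>/<protocol>" firewall allow rules for the collector's default inbound ports. The function and dictionary names are my own, and the ports mirror the defaults listed above; adjust them if you have overridden settings such as eventcollector.syslog.port.

```python
# Default collector inbound ports from the table above, keyed by use case.
# Comments note the related agent.conf setting where one exists.
INBOUND_PORTS = {
    "snmp-traps": ("udp", 162),   # eventcollector.snmptrap.address
    "syslog":     ("udp", 514),   # eventcollector.syslog.port
    "netflow":    ("udp", 2055),  # netflow.ports
    "sflow":      ("udp", 6343),  # netflow.sflow.ports
    "jflow":      ("udp", 2056),
    "jobmonitor": ("tcp", 7214),  # httpd.port
}

def firewall_rules(collector_ip, use_cases):
    """Render simple '<port>/<protocol>' allow rules for the chosen use cases."""
    rules = []
    for name in use_cases:
        proto, port = INBOUND_PORTS[name]
        rules.append(f"allow {port}/{proto} to {collector_ip} # {name}")
    return rules

# Example: a collector at 10.0.0.5 receiving traps and syslog.
for rule in firewall_rules("10.0.0.5", ["snmp-traps", "syslog"]):
    print(rule)
```

The hypothetical IP 10.0.0.5 is just an example value; substitute your collector host.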
Hopefully, the details shared above help clarify which ports and protocols are used in the LM platform. Thanks!

Fixing misconfigured Auto-Balanced Collector assignments
I’ve seen this issue pop up a lot in support, so I figured this post may help some folks out. I just came across a ticket the other day, so it’s fresh on my mind!

In order for Auto-Balanced Collector Groups (ABCG) to work properly, i.e., balance and failover, you have to make sure that the Collector Group is set to the ABCG and (and this is the important part) the Preferred Collector is set to “Auto Balance”. If it is set to an actual Collector ID, then it won’t get the benefits of the ABCG. You want this, not that:

Ok, so that’s cool, but now the real question is how do you fix this? There’s not really a good way to surface in the portal all devices where this is misconfigured. It’s not a system property, so a report or AppliesTo query won’t help here… Fortunately, not all hope is lost! You can use the ✨API✨

When you GET a Resource/device, you will get back some JSON, and what you want is for the autoBalancedCollectorGroupId field to equal the preferredCollectorGroupId field. If “Preferred Collector” is not “Auto Balance” and is set to an ID, then autoBalancedCollectorGroupId will be 0.

Breaking it down step by step:

1. First, get a list of all ABCG IDs
https://www.logicmonitor.com/swagger-ui-master/api-v3/dist/#/Collector%20Groups/getCollectorGroupList
/setting/collector/groups?filter=autoBalance:true

2. Then, with any given ABCG ID, you can filter a device list for all devices where there’s this mismatch
https://www.logicmonitor.com/swagger-ui-master/api-v3/dist/#/Devices/getDeviceList
/device/devices?filter=autoBalancedCollectorGroupId:0,preferredCollectorGroupId:11 (where 11 is the ID of an ABCG)

3. And now for each device returned, make a PATCH so that autoBalancedCollectorGroupId is set to preferredCollectorGroupId
https://www.logicmonitor.com/swagger-ui-master/api-v3/dist/#/Devices/patchDevice

Here’s a link to the full script, written in Python, for you to check out. I’ll also add it below in a comment since this is already getting long.
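In case it helps to see the shape of the logic, here is a rough Python sketch of the steps above. This is my illustrative version, not the full script linked in the post: the helper names are mine, the LMv1 signature construction follows LogicMonitor's REST API documentation, and the actual HTTP calls are left out so you can wire in your own client.

```python
import base64
import hashlib
import hmac
import time

def lmv1_auth(access_id, access_key, verb, path, data="", epoch=None):
    """Build an LMv1 Authorization header value for a LogicMonitor REST request."""
    epoch = epoch or str(int(time.time() * 1000))
    msg = verb + epoch + data + path
    sig = base64.b64encode(
        hmac.new(access_key.encode(), msg.encode(), hashlib.sha256)
        .hexdigest().encode()
    ).decode()
    return f"LMv1 {access_id}:{sig}:{epoch}"

def needs_fix(device, abcg_ids):
    """A device is misconfigured when its preferred collector group is an ABCG
    but its Preferred Collector is pinned to a specific collector, leaving
    autoBalancedCollectorGroupId at 0."""
    return (device["preferredCollectorGroupId"] in abcg_ids
            and device["autoBalancedCollectorGroupId"] == 0)

def patch_payload(device):
    """PATCH body that re-enables Auto Balance for the device."""
    return {"autoBalancedCollectorGroupId": device["preferredCollectorGroupId"]}

# Step 1 would populate this from /setting/collector/groups?filter=autoBalance:true
abcg_ids = {11}

# Step 2: sample device JSON; device 1 is pinned, device 2 is balanced correctly.
devices = [
    {"id": 1, "autoBalancedCollectorGroupId": 0,  "preferredCollectorGroupId": 11},
    {"id": 2, "autoBalancedCollectorGroupId": 11, "preferredCollectorGroupId": 11},
]

# Step 3: each of these devices would get a PATCH with patch_payload(device).
to_fix = [d for d in devices if needs_fix(d, abcg_ids)]
print([d["id"] for d in to_fix])  # → [1]
```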
Do you have a better, easier, or more efficient way of doing this? I’d love to hear about it!

Monitor DFS Share (Windows Server) using LM Collector!!
Greetings to all members of the LM community. Hope you all are doing great! Today's community blog discusses how to monitor a DFS share in LM, along with general recommendations to follow so the LM collector can monitor the share path.

Configuring a DFS share on Windows Server

The DFS share service depends on two parameters to establish communication with the target server: the domain name and the IP address. These two parameters are used to configure communication with DFS for the purpose of LM data collection. In my test environment, I've created a stand-alone namespace with permissions defined on the local path. In addition to defining the local path permissions for a DFS share, you also have the option to edit the permissions for the local path of the shared folder at the time of creating the share path.

Prerequisites/Permissions required

Beyond permissions, there may be other things the LM collector needs before it can access remote DFS shares:

Network Discovery: Enabling network discovery helps the monitoring tool discover and enumerate devices, including network shares, on the network. This can be useful when setting up data collection for resources in remote domains.

Firewall and Network Configuration: Ensure that the necessary ports and protocols are open in the firewall between your monitoring tool and the remote domain. Network discovery and access to DFS shares often require specific ports and protocols to be allowed through firewalls.

Namespace Path: When specifying the DFS share path in your monitoring tool, use the DFS namespace path (e.g., \\(domain/IP).com\dfs) rather than the direct server path. This ensures that the tool can access the share through the DFS namespace.

Trust Relationships and Permissions: Ensure that trust relationships between domains are correctly established to allow access. Additionally, configure permissions on the DFS shares and namespace to grant access to the monitoring tool's credentials.

It's important to note that the exact steps and configurations may vary depending on your specific network setup, DFS version, and domain structure. Working with your organization's IT administrators and domain administrators is essential to ensure proper setup and access to DFS resources in remote domains.

Monitoring the DFS share in the LM portal

In testing on a Windows server with the DFS role or feature installed, the information for DFSR monitoring is discovered in LM once an IP address or domain name (FQDN) is defined under the shared path. Edit the necessary configurations for each UNC path you are adding as a monitored instance; these configurations are detailed in the following sections. Under Resources → Add Other Monitoring, you can configure the DFS path under the “UNC Paths” section.

Updating the DFS share path in LM

This monitors the accessibility of a UNC path from a collector agent. A directory or file path needs to be defined in the LM portal.

Discovery of the DFS path in LM

Once you complete the above steps on the target DFS server, you can monitor a UNC share, whether a domain DFS share or otherwise, using the UNC Monitor DataSource. This DataSource performs a directory listing on the given UNC share and reports success or failure, monitoring the accessibility of the UNC path from the collector monitoring this device. Once you have added the DFS share to be monitored, LogicMonitor will begin monitoring the share and will generate alerts if there are any problems.
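Since the UNC Monitor DataSource essentially performs a directory listing and reports success or failure, a rough stand-alone equivalent can be handy for verifying access from the collector host before adding the share in LM. This is an illustrative sketch of mine, not LogicMonitor's actual module; the example namespace path is hypothetical and assumes the account running it has rights to the share.

```python
import os

def check_unc_path(path):
    """Attempt a directory listing, mirroring what the UNC Monitor
    DataSource does, and return (ok, detail)."""
    try:
        entries = os.listdir(path)
        return True, f"listing ok ({len(entries)} entries)"
    except OSError as exc:
        return False, str(exc)

# Example: a DFS namespace path would look like r"\\example.com\dfs" (hypothetical);
# here the check is exercised against the current directory instead.
ok, detail = check_unc_path(".")
print("reachable" if ok else "unreachable", "-", detail)
```

If this fails on the collector host, the UNC Monitor DataSource will fail for the same reason, so it is a quick way to separate share-permission problems from LM configuration problems.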
Links for more reference:
https://www.logicmonitor.com/support/devices/device-datasources-instances/monitoring-web-pages-processes-services-and-unc-paths#:~:text=to%20get%20output.-,UNC%20Paths,-To%20monitor%20a
https://learn.microsoft.com/en-us/troubleshoot/windows-server/networking/dfsn-access-failures

Keep Learning & Keep Exploring with LM!

Interested in learning more about features of your LogicMonitor portal?
Check out some of our webinars in our community! https://www.logicmonitor.com/live-training-webinars
Sign up for self-guided training by clicking the "Training" link at the top right of your portal.
Check out our Academy resources! https://www.logicmonitor.com/academy/

Migrate your Linux collectors to non-root by Sept 30!
Hello All,

Thank you for supporting LogicMonitor's efforts to ensure Collector security. With your help, we have been able to transition ~7,000 of the 10,000 Linux collectors currently live in customer environments to non-privileged users. Per our last email on this topic, we had shared a deadline of June 30, 2023 for customers to migrate their collectors from root to non-root users. Due to customer requests for more time, we have now extended the deadline to September 30, 2023, allowing more time to test the non-root migration scripts and migrate Linux collectors. We appreciate your support in helping us achieve our goal of running all collectors using non-privileged credentials.

ACTION REQUIRED:
- Migrate any collectors which are running under root users to non-privileged users. For more details, please refer to: https://www.logicmonitor.com/support/migrating-collector-from-root-to-non-root-user
- If your current collector installation process uses the root user to install Linux collectors, please start using a non-privileged user. For more details, please refer to: https://www.logicmonitor.com/support/collectors/collector-installation/installing-collectors#Linux-collector

TIMELINE:
Migrate your current Linux collector install base to non-privileged users as soon as possible, and no later than September 30, 2023. Your current collectors will not be affected by this change; only new installs will not be installed as root.

Thank you for your prompt attention to this matter. If you have any questions, please contact LogicMonitor Support or your Customer Success Manager (CSM).

Is there a reason why this will not work in your environment? Would you still like to run a Linux collector as root? Let us know in the comments. Thank you!

Upgrade your Collectors to MGD 33.006 before October 03, 2023!
Each year LogicMonitor rolls out the minimum required version (the MGD) for all collectors. It is the most mature version, containing the gist of all the enhancements and fixes we’ve added throughout the year. To achieve uniformity, the MGD becomes the base version for all future releases.

As we approach the time for the MGD automatic upgrade, we would like to inform you that the GD Collector 33.006 will be designated as the MGD Collector 33.006. This means that all collectors must be upgraded to GD Collector 33.006 or higher.

Note: If it is absolutely necessary, we may release security patches. In such scenarios, the MGD version 33.006 will be incremented, and we will keep you informed.

Schedule for MGD 33.006:
- MGD 33.006 rollout: August 29, 2023
- Voluntary upgrade time period: anytime before October 3, 2023
- Automatic upgrade scheduled: October 3, 2023 at 5:30 am PST

Please note that it is critical to upgrade to the new MGD version! On October 3, 2023, any collectors still using a version below the MGD will not be able to communicate with the LogicMonitor platform. This is due to the improvements made to the authentication mechanism of the LogicMonitor platform.

Actions Required:
- Look for the MGD rollout notification email from LogicMonitor on August 29, 2023
- Upgrade your collectors to GD 33.006 to avoid loss of communication

Thank you for your prompt attention to this matter. If you have any questions, please contact LogicMonitor Support or your Customer Success Manager (CSM).

SNMP Trap Credentials on Resource Properties Enhancement
Hello LM Community! Just wanted to highlight this enhancement, released recently in EA Collector 34.100, to support the use of SNMP trap credentials on resource properties. When using this collector version or newer, you can add snmptrap.* properties at the resource/group level for the collector to decrypt the trap messages received from monitored devices.

The credentials are used in the following order:
1. The Collector first checks for the set of snmptrap.* credentials in the host properties.
2. If the snmptrap.* credentials are not defined, it looks for the set of snmp.* credentials in the host properties.
3. If neither the snmptrap.* nor the snmp.* properties are defined, it looks for the eventcollector.snmptrap.* settings in agent.conf.

More details can be found in the following articles in our documentation:
SNMP Trap Monitoring: https://www.logicmonitor.com/support/logicmodules/eventsources/types-of-events/snmp-trap-monitoring
EA Collector 34.100 Release Notes: https://www.logicmonitor.com/release-notes/ea-collector-34-100

Docker Collector Deployment Improvements
This post will provide additional configurations not covered in the LogicMonitor “Installing the Collector in a Container” support guide. Docker-specific documentation can be found at https://docs.docker.com/engine. This post will not cover Docker-specific configurations such as creating a network.

Support Guide Docker Container Collector Installation

If you follow the support guide linked above to deploy a collector in a Docker container, you will be able to monitor resources using the Docker collector. However, if you have deployed a collector in a Docker container and only followed the support guide linked above, you may have noticed the following items:

1. The collector container is deployed to the default bridge network, which is not recommended for production use cases.
2. The collector container by default is not set to perform any restarts.
3. The collector container is not assigned an IP upon startup, which will impact how LogicMonitor identifies the collector container resource if it is restarted with a different IP.
4. The collector container is not provisioned to handle ingestion of syslog and NetFlow data.
5. When viewing the collector in the “Collectors” tab, the collector is not linked to a resource.
6. The “Device Name” of the collector is the container ID, and a meaningful container hostname would be preferred.
7. The collector is not listed anywhere in the Resource Tree, including “Devices by Type/Collectors”.
8. If you look at the “Collectors” dashboard, the collector container metrics are not present.

Screenshot Showing the Docker Collector Not Linked to a Resource
Screenshot Showing Docker Collector Nowhere to be Found in Resource Tree
Screenshot Showing Missing Docker Collector Metrics in “Collectors” Dashboard

Improvements to the Docker Container Collector Installation

The improvements for the items listed above are simple to implement.
Here’s an example of a Docker command to deploy the collector in a Docker container, created using only the support guide linked above:

### Example Docker Command Built Using Support Guide
sudo docker run --name 'docker_collector' -d \
  -e account='YOUR_PORTAL_NAME' \
  -e access_id='YOUR_ACCESS_ID' \
  -e access_key='YOUR_ACCESS_KEY' \
  -e collector_size='medium' \
  -e description='Docker Collector' \
  logicmonitor/collector:latest

Items 1, 2, 3, 4, and 6 in the list above are handled with additional Docker flags that should be added to this example. Let’s improve on the support guide example to resolve items 1, 2, 3, 4, and 6.

Item 1 requires defining a network for the Docker container. This post assumes you already have a Docker network defined that you will attach the container to. The code example uses a network name of “logicmonitor”.

Item 2 requires defining a Docker container restart policy. Docker has different options for the restart policy, so adjust the code example to suit your environmental needs.

Item 3 requires defining an IP for the Docker container. This post assumes you already have a Docker network defined where you will assign the container an IP valid for the network defined in your environment. The code example uses an IP of “172.20.0.7”.

Item 4 requires defining port forwarding between the Docker host and the Docker container. The code example uses the default ports for syslog and NetFlow. Adjust to match the ports used in your environment.

Item 6 requires defining a meaningful hostname for the Docker container.

Here are the improvements added to the support guide code example to resolve items 1, 2, 3, 4, and 6.
### Improved to Define Container Network, Restart Policy, IP,
### Port Forwarding, and Hostname
sudo docker run --name 'docker_collector' -d \
  -e account='YOUR_PORTAL_NAME' \
  -e access_id='YOUR_ACCESS_ID' \
  -e access_key='YOUR_ACCESS_KEY' \
  -e collector_size='medium' \
  -e description='Docker Collector' \
  --network logicmonitor \
  --restart always \
  --ip 172.20.0.7 \
  -p 514:514/udp \
  -p 2055:2055/udp \
  --hostname 'docker_collector' \
  logicmonitor/collector:latest

### Flag legend (comments are kept out of the command itself, since a comment
### after a backslash continuation would break the command):
### --network logicmonitor         -> Item 1
### --restart always               -> Item 2
### --ip 172.20.0.7                -> Item 3
### -p 514:514/udp                 -> Item 4: syslog
### -p 2055:2055/udp               -> Item 4: netflow
### --hostname 'docker_collector'  -> Item 6

After you have deployed the collector with the additional Docker configurations to handle items 1, 2, 3, 4, and 6, items 5, 7, and 8 are resolved by adding the Docker container as a monitored resource in the LogicMonitor portal. Use the IP of the Docker container when adding the collector into monitoring. Adding the Docker container as a monitored resource will:

- Resolve item 5 by linking the Collector “Device Name” to the monitored Docker container resource
- Resolve item 7 by adding the Docker container to the Resource Tree and the “Devices by Type/Collectors” group
- Resolve item 8, as the “Collector” datasources will be applied to the monitored Docker container and the metrics will be displayed in the Collectors dashboard

Screenshot Showing the Docker Collector Linked to a Resource
Screenshot Showing Docker Collector in Resource Tree
Screenshot Showing Docker Collector Metrics in “Collectors” Dashboard