# LogicMonitor Collector Ports to Be Used While Monitoring End-User Devices
Review the full list of protocols and ports required for monitoring end-user devices. This post covers the ports, protocols, use cases, and configuration settings (where applicable) that are generally used by the LM platform. The `<port>/<protocol>` format is a common, standardized way to indicate a network port together with its associated protocol, and it provides a clear, concise representation of the ports and protocols discussed below.

## Inbound communication

| Port | Protocol | Use Case | Configuration Setting |
|------|----------|----------|-----------------------|
| 162 | UDP | SNMP traps received from target devices | eventcollector.snmptrap.address |
| 514 | UDP | Syslog messages received from target devices | eventcollector.syslog.port |
| 2055 | UDP | NetFlow data received from target devices | netflow.ports |
| 6343 | UDP | sFlow data received from target devices | netflow.sflow.ports |
| 7214 | HTTP/Proprietary | Communication from custom JobMonitors to the Collector service | httpd.port |
| 2056 | UDP | JFlow data received from target devices | N/A |

## Outbound communication

| Port | Protocol | Use Case | Configuration Setting |
|------|----------|----------|-----------------------|
| 443 | HTTP/TLS | Communication between the Collector and the LogicMonitor data center (port 443 must be permitted to access LogicMonitor's public IP addresses; if your environment does not allow the Collector to connect directly to the LogicMonitor data centers, you can configure the Collector to communicate through a proxy) | N/A |
| Other (non-privileged) | SNMP, WMI, HTTP, SSH, JMX, etc. | Communication between the Collector and the target resources assigned for monitoring | N/A |

## Internal communication

| Port | Protocol | Use Case | Configuration Setting |
|------|----------|----------|-----------------------|
| 7211 | Proprietary | Communication from the Watchdog and Collector services to the OS proxy service (sbwinproxy/sblinuxproxy) | sbproxy.port |
| 7212 | Proprietary | Communication from the Watchdog service to the Collector service | agent.status.port |
| 7213 | Proprietary | Communication from the Collector service to the Watchdog service | watchdog.status.port |
| 15003 | Proprietary | Communication between the Collector service and its service wrapper | N/A |
| 15004 | Proprietary | Communication between the Collector service and its service wrapper | N/A |

## Destination ports

| Port | Protocol | Use Case |
|------|----------|----------|
| 135 | TCP | DCOM's initial communication and RPC (Remote Procedure Call) endpoint mapping. DCOM often uses dynamically allocated higher ports in the range 49152 to 65535 |
| 22 | TCP | TCP for SSH connections |
| 80 | TCP | TCP for HTTP connections to web servers |
| 443 | TCP | TCP for HTTPS/TLS connections to web servers |
| 25 | TCP | TCP for SMTP connections to mail servers |
| 161 | UDP | UDP for SNMP polling of target devices |
| 1433 | TCP/UDP | TCP for Microsoft SQL Server |
| 1434 | TCP/UDP | The protocol used on port 1434 depends on the application using it; for example, SQL Server uses TCP for communication with clients, while the SQL Server Browser service uses UDP |
| 1521 | TCP/UDP | TCP/UDP to listen for database connections from Oracle clients |
| 3306 | TCP/UDP | TCP/UDP for MySQL |
| 5432 | TCP | TCP for PostgreSQL |
| 123 | UDP | NTP connections from the Collector to an external NTP server |
| 445 | TCP | Server Message Block (SMB) protocol over TCP/IP |

The LM Collector supports a number of other monitoring protocols, so this list can be extended as necessary based on your preferences. Hopefully the details shared above clarify which ports and protocols are used by the LM platform. Thanks!
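P.S. If you want to sanity-check reachability for some of these ports from a collector host, here is a minimal sketch in Python. It only covers TCP (a UDP "connect" succeeding proves nothing), and the hostnames and ports below are placeholders for your environment, so adjust them to whatever you actually need to verify:

```python
import socket

# Placeholder targets: substitute your own portal hostname and any
# collector-host ports you want to verify. UDP ports (162/514/2055/6343)
# are not checked here because UDP has no connection handshake.
TCP_TARGETS = [
    ("yourportal.logicmonitor.com", 443),  # outbound: Collector -> LM data center
    ("127.0.0.1", 7211),  # internal: Watchdog/Collector -> OS proxy service
    ("127.0.0.1", 7212),  # internal: Watchdog -> Collector service
    ("127.0.0.1", 7213),  # internal: Collector -> Watchdog service
]

for host, port in TCP_TARGETS:
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"{host}:{port}/tcp reachable")
    except OSError as err:
        print(f"{host}:{port}/tcp NOT reachable ({err})")
```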
# Fixing Misconfigured Auto-Balanced Collector Assignments

I've seen this issue pop up a lot in support, so I figured this post may help some folks out. I just came across a ticket the other day, so it's fresh on my mind!

For Auto-Balanced Collector Groups (ABCG) to work properly, i.e., balance and failover, you have to make sure that the Collector Group is set to the ABCG and (and this is the important part) the Preferred Collector is set to "Auto Balance". If it is set to an actual Collector ID, then it won't get the benefits of the ABCG. You want this, not that:

Ok, so that's cool, but now the real question is: how do you fix this? There's not really a good way to surface in the portal all devices where this is misconfigured. It's not a system property, so a report or AppliesTo query won't help here... Fortunately, not all hope is lost! You can use the ✨API✨

When you GET a Resource/device, you will get back some JSON, and what you want is for the autoBalancedCollectorGroupId field to equal the preferredCollectorGroupId field. If "Preferred Collector" is not "Auto Balance" and is set to an actual ID, then autoBalancedCollectorGroupId will be 0.

Breaking it down step by step:

1. First, get a list of all ABCG IDs: https://www.logicmonitor.com/swagger-ui-master/api-v3/dist/#/Collector%20Groups/getCollectorGroupList
   `/setting/collector/groups?filter=autoBalance:true`
2. Then, with any given ABCG ID, filter a device list for all devices where there's this mismatch: https://www.logicmonitor.com/swagger-ui-master/api-v3/dist/#/Devices/getDeviceList
   `/device/devices?filter=autoBalancedCollectorGroupId:0,preferredCollectorGroupId:11` (where 11 is the ID of an ABCG)
3. And now, for each device returned, make a PATCH so that autoBalancedCollectorGroupId is set to preferredCollectorGroupId: https://www.logicmonitor.com/swagger-ui-master/api-v3/dist/#/Devices/patchDevice

Here's a link to the full script, written in Python, for you to check out. I'll also add it below in a comment since this is already getting long. Do you have a better, easier, or more efficient way of doing this? I'd love to hear about it!
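In the meantime, here is a condensed sketch of those three steps. This is not the full linked script: it assumes a v3 API bearer token and skips pagination and error handling, so treat it as a starting point rather than the finished tool.

```python
import requests

PORTAL = "yourportal"        # placeholder: your LogicMonitor portal name
TOKEN = "YOUR_BEARER_TOKEN"  # placeholder: an LM API bearer token
BASE = f"https://{PORTAL}.logicmonitor.com/santaba/rest"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "X-Version": "3"}

# Step 1: get all Auto-Balanced Collector Group IDs
abcgs = requests.get(f"{BASE}/setting/collector/groups", headers=HEADERS,
                     params={"filter": "autoBalance:true"}).json()["items"]

for abcg in abcgs:
    # Step 2: find devices that point at this ABCG but are pinned to a
    # specific collector (autoBalancedCollectorGroupId of 0 = misconfigured)
    flt = f"autoBalancedCollectorGroupId:0,preferredCollectorGroupId:{abcg['id']}"
    devices = requests.get(f"{BASE}/device/devices", headers=HEADERS,
                           params={"filter": flt}).json()["items"]

    # Step 3: PATCH each device so autoBalancedCollectorGroupId matches
    # preferredCollectorGroupId, i.e. Preferred Collector = "Auto Balance"
    for dev in devices:
        resp = requests.patch(f"{BASE}/device/devices/{dev['id']}",
                              headers=HEADERS,
                              json={"autoBalancedCollectorGroupId": abcg["id"]})
        resp.raise_for_status()
        print(f"Fixed device {dev['id']} ({dev.get('displayName')})")
```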
# Monitor DFS Share (Windows Server) Using LM Collector!!

Greetings to all members of the LM community. Hope you all are doing great! Today's community blog discusses how to monitor a DFS share in LM, along with general recommendations for the LM Collector to monitor the share path.

## Configuring a DFS share on Windows Server

The DFS share service depends on two parameters to establish communication with the target server, shown below: the domain name and the IP address. These two parameters are used to configure communication with DFS for the purpose of LM data collection.

In my test environment, I've created a stand-alone namespace that has the following permissions on the local path:

In addition to defining the local path permissions for a DFS share, you also have the option to edit the permissions for the local path of the shared folder at the time of creating the share path.

## Prerequisites/permissions required

Besides permissions, there may be other things the LM Collector needs before it can access remote DFS shares:

- **Network Discovery:** Enabling network discovery helps the monitoring tool discover and enumerate devices, including network shares, on the network. This can be useful when setting up data collection for resources in remote domains.
- **Firewall and Network Configuration:** Ensure that the necessary ports and protocols are open in the firewall between your monitoring tool and the remote domain. Network discovery and access to DFS shares often require specific ports and protocols to be allowed through firewalls.
- **Namespace Path:** When specifying the DFS share path in your monitoring tool, use the DFS namespace path (e.g., `\\<domain or IP>\dfs`) rather than the direct server path. This ensures that the tool can access the share through the DFS namespace.
- **Trust Relationships and Permissions:** Ensure that trust relationships between domains are correctly established to allow access. Additionally, configure permissions on the DFS shares and namespace to grant access to the monitoring tool's credentials.

It's important to note that the exact steps and configurations may vary depending on your specific network setup, DFS version, and domain structure. Additionally, working with your organization's IT administrators and domain administrators is essential to ensure proper setup and access to DFS resources in remote domains.

## Monitoring a DFS share in the LM portal

In testing on a Windows server with the DFS role/feature installed, LM can discover the information needed for DFS monitoring once an IP address or domain name (FQDN) is defined under the shared path, as shown below. Edit the necessary configurations for each UNC path you are adding as a monitored instance. These configurations are detailed in the following sections.

Under Resource → Add Other Monitoring, you can configure the DFS path under the "UNC Paths" section.

**Updating the DFS share path in LM:** this monitors the accessibility of a UNC path from a Collector agent. A directory or file path may need to be defined in the LM portal.

**Discovery of the DFS path in LM:** once you finalize the above instructions on the target DFS server, you can monitor a UNC share, whether a domain DFS share or otherwise, using the UNC Monitor DataSource. This DataSource will do a directory listing on the given UNC share and report success or failure, monitoring the accessibility of the UNC path from the Collector monitoring this device.
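To make that "directory listing" behavior concrete, here is a small illustrative check of my own (not the DataSource's actual code) that you can run on a Windows collector host; the UNC path is a placeholder:

```python
import os

# Placeholder: replace with your DFS namespace path, e.g. \\yourdomain.com\dfs
UNC_PATH = r"\\yourdomain.com\dfs"

try:
    entries = os.listdir(UNC_PATH)  # the same basic operation: list the share
    print(f"Success: {UNC_PATH} is accessible ({len(entries)} entries)")
except OSError as err:
    print(f"Failure: cannot list {UNC_PATH}: {err}")
```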
Once you have added the DFS share to be monitored, LogicMonitor will begin monitoring the share and will generate alerts if there are any problems.

Links for more references:

- https://www.logicmonitor.com/support/devices/device-datasources-instances/monitoring-web-pages-processes-services-and-unc-paths#:~:text=to%20get%20output.-,UNC%20Paths,-To%20monitor%20a
- https://learn.microsoft.com/en-us/troubleshoot/windows-server/networking/dfsn-access-failures

Keep Learning & Keep Exploring with LM!!!!!!

Interested in learning more about features of your LogicMonitor portal? Check out some of our webinars in our community! https://www.logicmonitor.com/live-training-webinars

Sign up for self-guided training by clicking the "Training" link at the top right of your portal. Check out our Academy resources! https://www.logicmonitor.com/academy/
# Migrate Your Linux Collectors to Non-Root by Sept 30!

Hello All,

Thank you for supporting LogicMonitor's efforts to ensure Collector security. With your help, we have been able to transition ~7,000 of the 10,000 Linux collectors currently live in customer environments to non-privileged users. Per our last email on this topic, we had shared a deadline of June 30, 2023 for customers to migrate their collectors from root to non-root users. Because customers requested more time, we have now extended the deadline to September 30, 2023, allowing more time to test the non-root migration scripts and migrate Linux collectors. We appreciate your support in helping us achieve our goal of running all collectors using non-privileged credentials.

ACTION REQUIRED:

- Migrate any collectors that are running under root users to non-privileged users. For more details, please refer to: https://www.logicmonitor.com/support/migrating-collector-from-root-to-non-root-user
- If your current collector installation process uses the root user to install Linux collectors, please start using a non-privileged user. For more details, please refer to: https://www.logicmonitor.com/support/collectors/collector-installation/installing-collectors#Linux-collector

TIMELINE: Migrate your current Linux collector install base to non-privileged users as soon as possible, but no later than September 30, 2023. Your current collectors will not be affected by this change; only new installs will not be installed as root.

Thank you for your prompt attention to this matter. If you have any questions, please contact LogicMonitor Support or your Customer Success Manager (CSM).

Is there a reason why this will not work in your environment? Would you still like to run a Linux collector as root? Let us know in the comments. Thank you!
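P.S. If you're not sure whether a given Linux collector is still running as root, here is a rough check of my own; the process-name patterns are assumptions about a typical collector install, so adjust them for yours:

```python
import subprocess

# Assumption: typical collector process names; adjust for your install
PATTERNS = ("sbproxy", "santaba", "java")

ps = subprocess.run(["ps", "-eo", "user,comm"],
                    capture_output=True, text=True, check=True)
for line in ps.stdout.splitlines()[1:]:  # skip the header row
    parts = line.split(None, 1)
    if len(parts) != 2:
        continue
    user, comm = parts
    if any(p in comm for p in PATTERNS):
        flag = "  <-- still root; migration needed" if user == "root" else ""
        print(f"{comm} runs as {user}{flag}")
```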
# Upgrade Your Collectors to MGD 33.006 Before October 3, 2023!

Each year LogicMonitor rolls out the minimum required version (the MGD) for all collectors. It is the most mature version, containing the gist of all the enhancements and fixes we've added throughout the year. To achieve uniformity, the MGD becomes the base version for all future releases.

As we approach the time for the MGD automatic upgrade, we would like to inform you that GD Collector 33.006 will be designated as MGD Collector 33.006. This means that all collectors must be upgraded to GD Collector 33.006 or higher.

Note: If it is absolutely necessary, we may release security patches. In such scenarios, the MGD version 33.006 will be incremented, and we will keep you informed.

Schedule for MGD 33.006:

- MGD 33.006 rollout: August 29, 2023
- Voluntary upgrade time period: any time before October 3, 2023
- Automatic upgrade scheduled: October 3, 2023 at 5:30 am PST

Please note that it is critical to upgrade to the new MGD version! On October 3, 2023, any collectors still using a version below the MGD will not be able to communicate with the LogicMonitor platform. This is due to the improvements made to the authentication mechanism of the LogicMonitor platform.

Actions required:

- Look for the MGD rollout notification email from LogicMonitor on August 29, 2023
- Upgrade your collectors to GD 33.006 to avoid loss of communication

Thank you for your prompt attention to this matter. If you have any questions, please contact LogicMonitor Support or your Customer Success Manager (CSM).
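P.S. If you want to audit your fleet ahead of the deadline, here is a rough sketch against the REST API. It assumes a v3 bearer token and that the collector `build` field comes back as a numeric version string, so verify both against your portal before relying on it:

```python
import requests

PORTAL = "yourportal"        # placeholder: your LogicMonitor portal name
TOKEN = "YOUR_BEARER_TOKEN"  # placeholder: an LM API bearer token
BASE = f"https://{PORTAL}.logicmonitor.com/santaba/rest"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "X-Version": "3"}

resp = requests.get(f"{BASE}/setting/collector/collectors", headers=HEADERS,
                    params={"size": 1000, "fields": "id,hostname,build"})
resp.raise_for_status()

MGD = 33006  # GD 33.006 normalized to an integer for comparison
for col in resp.json()["items"]:
    # Assumption: builds come back like "33.005" or "33005"; normalize both
    build = int(str(col["build"]).replace(".", ""))
    if build < MGD:
        print(f"Collector {col['id']} ({col['hostname']}) on build "
              f"{col['build']} needs an upgrade before October 3")
```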
# SNMP Trap Credentials on Resource Properties Enhancement

Hello LM Community! Just wanted to highlight an enhancement that has been released recently in EA Collector 34.100 to support the use of SNMP trap credentials on resource properties. When using this Collector version or newer, you can add snmptrap.* properties at the resource/group level for the Collector to decrypt the trap messages received from monitored devices.

The credentials are used in the following order (a small illustration of this lookup order appears at the end of this post):

1. The Collector first checks for the set of snmptrap.* credentials in the host properties.
2. If the snmptrap.* credentials are not defined, it looks for the set of snmp.* credentials in the host properties.
3. If sets for both the snmptrap.* and snmp.* properties are not defined, it looks for the eventcollector.snmptrap.* settings in agent.conf.

More details can be found in the following articles in our documentation:

- SNMP Trap Monitoring: https://www.logicmonitor.com/support/logicmodules/eventsources/types-of-events/snmp-trap-monitoring
- EA Collector 34.100 Release Notes: https://www.logicmonitor.com/release-notes/ea-collector-34-100
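Here is that lookup order rendered as a small pseudocode-style sketch (my illustration, not the Collector's actual source):

```python
def resolve_trap_credentials(host_props: dict, agent_conf: dict) -> dict:
    """Return the credential set the Collector would use, per the order above."""
    # 1. snmptrap.* properties set on the resource/group win
    creds = {k: v for k, v in host_props.items() if k.startswith("snmptrap.")}
    if creds:
        return creds
    # 2. otherwise fall back to the regular polling credentials, snmp.*
    creds = {k: v for k, v in host_props.items() if k.startswith("snmp.")}
    if creds:
        return creds
    # 3. finally, the collector-wide eventcollector.snmptrap.* agent.conf settings
    return {k: v for k, v in agent_conf.items()
            if k.startswith("eventcollector.snmptrap.")}

# Example: a host with only polling credentials falls through to step 2
print(resolve_trap_credentials({"snmp.community": "public"}, {}))
```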
# Docker Collector Deployment Improvements

This post provides additional configurations not covered in the LogicMonitor "Installing the Collector in a Container" support guide. Docker-specific documentation can be found at https://docs.docker.com/engine. This post will not cover Docker-specific configurations such as creating a network.

## Support Guide Docker Container Collector Installation

If you follow the support guide linked above to deploy a collector in a Docker container, you will be able to monitor resources using the Docker collector. However, if you have deployed a collector in a Docker container and only followed the support guide linked above, you may have noticed the following items:

1. The collector container is deployed to the default bridge network, which is not recommended for production use cases.
2. The collector container by default is not set to perform any restarts.
3. The collector container is not assigned an IP upon startup, which will impact how LogicMonitor identifies the collector container resource if it is restarted with a different IP.
4. The collector container is not provisioned to handle ingestion of syslog and NetFlow data.
5. When viewing the collector in the "Collectors" tab, the collector is not linked to a resource.
6. The "Device Name" of the collector is the container ID, and a meaningful container hostname would be preferred.
7. The collector is not listed anywhere in the Resource Tree, including "Devices by Type/Collectors".
8. If you look at the "Collectors" dashboard, the collector container metrics are not present.

Screenshot Showing the Docker Collector Not Linked to a Resource

Screenshot Showing Docker Collector Nowhere to be Found in Resource Tree

Screenshot Showing Missing Docker Collector Metrics in "Collectors" Dashboard

## Improvements to the Docker Container Collector Installation

The improvements for the items listed above are simple to implement. Here's an existing example of a Docker command to deploy the collector in a Docker container that was created using only the support guide linked above:

```
### Example Docker Command Built Using Support Guide
sudo docker run --name 'docker_collector' -d \
  -e account='YOUR_PORTAL_NAME' \
  -e access_id='YOUR_ACCESS_ID' \
  -e access_key='YOUR_ACCESS_KEY' \
  -e collector_size='medium' \
  -e description='Docker Collector' \
  logicmonitor/collector:latest
```

Items 1, 2, 3, 4, and 6 in the list above are handled with additional Docker flags that should be added to the example built using the support guide. Let's improve on that example to resolve items 1, 2, 3, 4, and 6:

- Item 1 requires defining a network for the Docker container. This post assumes you already have a Docker network defined that you will attach the container to; the code example uses a network name of "logicmonitor".
- Item 2 requires defining a Docker container restart policy. Docker has different options for the restart policy, so adjust the code example to suit your environmental needs.
- Item 3 requires defining an IP for the Docker container. This post assumes you already have a Docker network defined in which you will assign the container an IP valid for your environment; the code example uses an IP of "172.20.0.7".
- Item 4 requires defining port forwarding between the Docker host and the Docker container. The code example uses the default ports for syslog and NetFlow; adjust to match the ports used in your environment.
- Item 6 requires defining a meaningful hostname for the Docker container.
Here are the improvements added to the support guide code example to resolve items 1, 2, 3, 4, and 6 (the item annotations live in the leading comments, since a shell comment cannot follow a line-continuation backslash):

```
### Improved to define container network (Item 1), restart policy (Item 2),
### IP (Item 3), port forwarding (Item 4: syslog/netflow), and hostname (Item 6)
sudo docker run --name 'docker_collector' -d \
  -e account='YOUR_PORTAL_NAME' \
  -e access_id='YOUR_ACCESS_ID' \
  -e access_key='YOUR_ACCESS_KEY' \
  -e collector_size='medium' \
  -e description='Docker Collector' \
  --network logicmonitor \
  --restart always \
  --ip 172.20.0.7 \
  -p 514:514/udp \
  -p 2055:2055/udp \
  --hostname 'docker_collector' \
  logicmonitor/collector:latest
```

After you have deployed the collector with the additional Docker configurations to handle items 1, 2, 3, 4, and 6, items 5, 7, and 8 are resolved by adding the Docker container as a monitored resource in the LogicMonitor portal. Use the IP of the Docker container when adding the collector into monitoring. Adding the Docker container as a monitored resource will:

- Resolve item 5 by linking the Collector "Device Name" to the monitored Docker container resource
- Resolve item 7 by adding the Docker container to the Resource Tree and the "Devices by Type/Collectors" group
- Resolve item 8, as the "Collector" DataSources will be applied to the monitored Docker container and the metrics will be displayed in the Collectors dashboard

Screenshot Showing the Docker Collector Linked to a Resource

Screenshot Showing Docker Collector in Resource Tree

Screenshot Showing Docker Collector Metrics in "Collectors" Dashboard
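If you script your deployments rather than running the CLI by hand, the same improved deployment can be expressed with the Docker SDK for Python. This is a sketch of my own, not an official LogicMonitor recipe; note that a static IP (item 3) isn't exposed through `containers.run()`, so that one still needs the CLI or the SDK's low-level `APIClient` networking config.

```python
import docker  # pip install docker

client = docker.from_env()
container = client.containers.run(
    "logicmonitor/collector:latest",
    name="docker_collector",
    detach=True,
    hostname="docker_collector",            # item 6: meaningful hostname
    environment={                           # same placeholders as the CLI example
        "account": "YOUR_PORTAL_NAME",
        "access_id": "YOUR_ACCESS_ID",
        "access_key": "YOUR_ACCESS_KEY",
        "collector_size": "medium",
        "description": "Docker Collector",
    },
    network="logicmonitor",                 # item 1: user-defined network
    restart_policy={"Name": "always"},      # item 2: restart policy
    ports={"514/udp": 514, "2055/udp": 2055},  # item 4: syslog/netflow
)
print(f"Deployed {container.name} ({container.short_id})")
```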