Re: Has anybody noticed the flaw in LogSource logic?

Hi @Stuart Weenig,

Thank you for taking the time to share your feedback with us. We genuinely appreciate your insights into our system! In your post, you've rightly identified a challenge we face when mapping logs to log sources. It is indeed a bit of a chicken-and-egg situation. Allow me to clarify the reasoning behind our approach.

When a log is sent from a device to LogicMonitor and enters our processing pipeline, the first question is: how do we identify the source device? By the message content, the header information, or the log format? Relying solely on message content would be time-consuming and cumbersome, because we would have to account for log formats from a wide array of vendors, which can be quite complex.

Our solution is to identify logs by the device itself. Devices of the same type tend to generate similar logs, so grouping logs based on the device within the "appliesTo" section simplifies the process. Trying to use "appliesTo" logic to match specific log formats, on the other hand, would be unwieldy and would require complex regular expressions. Therefore, the first step is to properly identify the log's source device before we can effectively take any actions. Our goal is to identify each log correctly so we avoid problems such as improper tagging or filtering.

We find that identifying logs by the resource (device) makes the most sense in this context. Attempting to do this by pipeline would place us back in a similar situation, as pipelines are essentially groups of devices.

I hope this explanation clarifies our approach. If you have any further questions or suggestions, please feel free to share them; your input is invaluable as we continually strive to improve our system. Thank you for being a part of our community.

Re: How can Linux servers' queue length and depth be monitored?
Hi @Rakzskull, it doesn't look like we have anything out of the box for this. How would you monitor queue length/depth? Some quick searching turned up a command that may get you the average queue depth and request size on Linux machines:

    iostat -xdc 60 1

That output includes two relevant datapoints: avgrq-sz (average request size) and avgqu-sz (average queue length). Would this be what you are looking for? You could write an SSH DataSource that polls for this data every minute. I took a quick look to see whether this is available via SNMP, and it appears it is not, though I could have missed something. Once we are able to narrow down the data, we can certainly bring it into the platform.

LM Synthetics

What is Synthetic Monitoring?

Synthetic monitoring is an active approach to testing a website or web service by simulating visitor requests to test for availability, performance, and function. It uses emulation or scripted recordings of user interactions or transactions to create automated tests that simulate a critical end-user pathway or interaction on a website or web application. The results of these tests give early insight into the quality of the end-user experience by reporting metrics on availability and latency and by identifying potential issues.

LM Synthetics leverages Selenium, a trusted open-source browser automation and scripting tool, to send Synthetics data to LogicMonitor. LogicMonitor uses the following Selenium tools for the Synthetics monitoring solution:

- Selenium Grid: a proxy server that allows tests to be executed on a device
- Selenium IDE Chrome extension: a browser recording tool that allows you to create and run Synthetics checks

[Diagram: how LogicMonitor leverages Selenium to collect Synthetics data]

Benefits:

- Allows you to find out as soon as possible if any site or web service is having issues
- Allows you to identify and solve problems quickly to prevent widespread performance issues
- Allows for more uptime and proper functioning of your website; the important factors for a website are page loading speed, performance, and uptime
- Gives you the ability to track metrics over time, viewing trends for further analysis

Ping Checks

A ping check is a simple test to see whether a website is up and running. Alerts can be triggered based on whether one or many locations are unable to ping the site, ensuring alerts are generated accurately. Checks can be configured to run as often as every minute.

Website Checks

Standard Web Check
- Displayed as "web checks" in the LM interface
- Performed by one or more geographically dispersed checkpoint locations, hosted by LogicMonitor and external to your network
- The overall purpose is to verify that your website is functioning appropriately for users outside of your network and to adequately reflect those users' experiences

Internal Web Check
- Performed by one or more Collectors internal to your network
- The purpose is to verify that websites and cloud services (internal or external to your network) are functioning appropriately for users within your private network

Selenium Checks

Selenium synthetic checks are automated tests that use the Selenium web testing framework to simulate user interactions with a website or web application. These checks can be used to verify the functionality and performance of a website, such as checking that links work, forms submit correctly, and pages load quickly. The tests can be run on a regular schedule to ensure the website is functioning as expected and to catch issues early. They can measure how long it takes to complete a specific workflow or task, such as logging in, loading a page, verifying specific text, or validating that users can input data.

Selenium checks are the capstone: they can exercise everything an end user would access. This then segues into "we know there is a problem; now where is the problem?", which is where APM and Logs come in. For example, if we get an alert that a login failed, we would use those other tools to figure out why and where.
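At their core, the ping and web checks described above amount to timing a request and validating the response. As a rough, stdlib-only Python sketch of that idea (the `web_check` function and the throwaway local server are illustrative, not a LogicMonitor API):

```python
import http.server
import threading
import time
import urllib.request

def web_check(url, timeout=5):
    """Minimal synthetic-style availability check.

    Returns (ok, latency_seconds): ok is True when the request
    succeeds with a non-error HTTP status within the timeout.
    """
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 400
    except OSError:  # covers URLError/HTTPError and socket timeouts
        ok = False
    return ok, time.monotonic() - start

# Demo against a local HTTP server so the sketch runs offline.
server = http.server.HTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
threading.Thread(target=server.serve_forever, daemon=True).start()
ok, latency = web_check(f"http://127.0.0.1:{server.server_port}/")
server.shutdown()
```

A real web check adds response-content validation and runs from multiple locations; LM's hosted checkpoints (standard web checks) and internal Collectors (internal web checks) handle that for you.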
With the metrics, you will be able to tell exactly which check in the workflow failed.

Selenium Synthetics Dashboard

Adding a Synthetics Website Test

Common Use Cases

- MSPs: many customers with different portals; Synthetics helps keep track of what's going on with all of them and verify that SLAs for uptime are being met.
- Any customer with a website they want to ensure is accessible at all times. An internal example would be M365 or a proprietary tool they built themselves; an external example would be an e-commerce site, where a check clicks a promo link or walks through the checkout sequence.
- Trend analysis: you can see trends over time, such as busy parts of the day where a load balancer might be beneficial.
- DevOps teams may be interested in how Synthetics correlates with the traces or push metrics they are sending, using it as a piece of the puzzle to better understand the whole picture.
- App teams may want to verify that the application's login process is functioning correctly.

Resources

https://www.logicmonitor.com/support/services/adding-managing-services/adding-a-web-service
https://www.logicmonitor.com/blog/ping-checks-vs-synthetics-vs-uptime#ping-checks-vs-synthetics
https://www.logicmonitor.com/support/selenium-webchecks
https://www.logicmonitor.com/support/selenium-synthetics-setup
https://www.logicmonitor.com/support/lm-synthetics-overview

© 2022 LogicMonitor Confidential and proprietary