CloudWatch custom datasource metric path including wildvalue
Hi, I have created a CloudWatch custom datasource to pull custom metrics from CloudWatch by following this document: https://www.logicmonitor.com/support/lm-cloud/getting-started-lm-cloud/5-adding-monitoring-custom-aws-cloudwatch-metrics

We have a scenario where we are pulling a custom metric for an AMQ broker in CloudWatch, so we created a datasource for it. While creating the datapoints (where the metric path is specified), we would like to include ##wildvalue## in that path. The metric paths we are looking to create pull the CPU utilization metric for different brokers, with ##wildvalue## standing in for the broker name:

AWS/AmazonMQ>Broker:##wildvalue##-1>CpuUtilization>Average
AWS/AmazonMQ>Broker:##wildvalue##-2>CpuUtilization>Average

When ##wildvalue## is replaced with the broker name "prod-1", the metric paths should look like this:

AWS/AmazonMQ>Broker:prod-1-1>CpuUtilization>Average
AWS/AmazonMQ>Broker:prod-1-2>CpuUtilization>Average

This way we can reduce the number of datapoints in the datasource, and we can reuse the same datasource for multiple devices. Could someone please provide a suggestion on this? Thanks.
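
To make the intended substitution concrete, here is a minimal Groovy sketch of the token expansion being asked for (purely illustrative; the broker names are made up, and this is not how LogicMonitor implements ##wildvalue## internally):

```groovy
// Build the two datapoint metric paths for a given wildvalue (broker name).
def buildPaths(String wildvalue) {
    [1, 2].collect { n ->
        "AWS/AmazonMQ>Broker:${wildvalue}-${n}>CpuUtilization>Average"
    }
}

// Hypothetical broker names; "prod-1" matches the example above.
["prod-1", "stage-1"].each { broker ->
    buildPaths(broker).each { println it }
}
// prod-1 -> AWS/AmazonMQ>Broker:prod-1-1>CpuUtilization>Average
//           AWS/AmazonMQ>Broker:prod-1-2>CpuUtilization>Average
```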

Alert Thresholds - Changing units for Free Space datapoint on Volume Capacity

Hi All, I wanted to put a question to you all to see if you have any recommendations. We have several drives we are monitoring where it doesn't make sense to track the % used datapoint, due to the very large (or in a couple of cases, very small) total capacity of the drive. As such, we configured alerts on the Free Space datapoint instead. This is working well; however, because the Free Space datapoint tracks the "available storage capacity of the volume in bytes", it is difficult to quickly understand the potential impact of an alert when it triggers an error for "< 268435456000 bytes remaining". Is there any way to change this datapoint so that it reports in a larger unit? GB would be preferred, but even MB would be helpful.
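
One possible approach (a sketch, assuming the datasource lets you add a complex datapoint that references the raw FreeSpace value): divide the byte value by 1024^3 so the datapoint and its thresholds read in GB. The conversion itself, in Groovy:

```groovy
// The threshold from the post, converted from bytes to GB (1 GB = 1024^3 bytes).
long freeSpaceBytes = 268435456000L
def freeSpaceGB = freeSpaceBytes / (1024L * 1024L * 1024L)
println "${freeSpaceBytes} bytes = ${freeSpaceGB} GB"   // 250 GB
```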

Are CloudWatch API calls bound to datasources or datapoints?

Hi All, I have an EC2 instance in my AWS account. The instance is assigned to a local collector in the same subnet to collect metrics, and at the same time the CloudWatch datasources continue to collect metrics as well. I was not able to completely disable the CloudWatch datasources because we need some of those metrics. My question is: if we uncheck some of the datapoints (e.g. CPUUtilization) in the EC2 datasources to stop collecting those metrics, will it reduce the number of API calls to CloudWatch? Could we save some CloudWatch cost on the AWS side this way? Are CloudWatch API calls bound to datasources or datapoints?
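
For scale, a back-of-the-envelope sketch of what is at stake if calls were made per datapoint per instance per poll (that per-datapoint assumption is exactly what the question asks about, so treat it as hypothetical, and the $0.01 per 1,000 requests price is illustrative; check current AWS pricing):

```groovy
// Rough monthly CloudWatch request estimate under the stated assumptions.
int datapoints    = 10              // enabled datapoints in the EC2 datasource
int instances     = 20              // monitored EC2 instances
int pollsPerMonth = 12 * 24 * 30    // one poll every 5 minutes
long requests     = datapoints * instances * pollsPerMonth
def monthlyCost   = requests / 1000 * 0.01   // illustrative price per 1,000 requests
println "${requests} requests, roughly \$${monthlyCost} per month"
```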

Complex Groovy DataPoint access to Script/Batchscript output

I need access to Script/Batchscript output in a Complex Groovy DataPoint. This is a rather fundamental omission. It should be trivial to permit the output["datapoint"] approach supported for other collection types. Make it so!
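
In the meantime, one workaround sketch (assuming a Groovy script/batchscript collector; all instance and datapoint names here are hypothetical) is to derive the value inside the collection script itself and print it as one more key, so no complex datapoint is needed:

```groovy
// Pretend these values came from the script's real collection logic.
def used  = 750
def total = 1000

// Standard batchscript output lines: instance.datapoint=value
println "server1.used=${used}"
println "server1.total=${total}"
// Emit the derived metric directly instead of computing it in a complex datapoint.
println "server1.usedPercent=${100 * used / total}"
```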

Memory count just for active VMs

Hello, I count the overall memory for all of our VMs. Now I am trying to find out how to configure the monitoring so that it counts only the memory of VMs that are in a powered-on state. I saw the datapoints MemoryConsumed, MemoryGranted and MemoryActive. What is the difference between these datapoints, and is one of them the right one for counting RAM only for active VMs (i.e. VMs which are not powered off)? Looking forward to your feedback. Enrico
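
Whichever counter turns out to be the right one, the aggregation being asked for is "sum memory over powered-on VMs only". An illustrative Groovy sketch of that filtering logic (sample data only, not a live vSphere query; the field names are hypothetical):

```groovy
// Sum configured memory only for VMs whose power state is "poweredOn".
def vms = [
    [name: 'vm-01', powerState: 'poweredOn',  memoryMB: 4096],
    [name: 'vm-02', powerState: 'poweredOff', memoryMB: 8192],
    [name: 'vm-03', powerState: 'poweredOn',  memoryMB: 2048],
]
def activeMemoryMB = vms.findAll { it.powerState == 'poweredOn' }
                        .sum { it.memoryMB }
println "Memory of powered-on VMs: ${activeMemoryMB} MB"   // 6144 MB
```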

Live Training - Tuning Datapoints and Alerts - 15th June 2022 - APAC

Hi all, thanks for attending our Live Training - Tuning Datapoints and Alerts - 15th June 2022 - APAC region. Please view the video recording, and please complete the feedback form here: https://docs.google.com/forms/d/e/1FAIpQLScPWW5DzNxe2W5ieh6PjamLYWcP5AhDbUl1E3U7ZKryEgwEoA/viewform

More options for alert trigger interval

Hi, I already raised this with LogicMonitor via email, but I'm re-iterating it here. For some datapoints where we want to generate warning/error/critical alerts, you can use the collection interval and the alert trigger interval together to set how much time should elapse before a datapoint threshold triggers an alert. But it is not currently possible to set a completely custom, duration-based interval. For example, if I want to generate a warning alert after 3 hours and an error alert after 4 hours, I have to use a combination of those two settings to get close enough to the duration I want. It would be great if, regardless of the collection interval, there were more options for the alert trigger interval (currently 1 to 10, and 20, 30, 40, 50, 60). With a collection interval of 5 minutes, I can currently achieve 2.5 hours or 5 hours using an alert trigger interval of 30 or 60 respectively; see the arithmetic below. Couldn't there be a regular number input rather than a drop-down with predefined options for the alert trigger interval, or a separate option that allows a completely flexible duration? Also, can a custom interval already be set via the API, regardless of the UI? If so, I could try that. If there's another way to achieve what I want, I'd be happy to hear it. :-) Thanks, Roland
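
The arithmetic behind those numbers, as a quick Groovy sketch (effective alert delay = collection interval x alert trigger interval):

```groovy
// With 5-minute polls, the largest allowed trigger values give only
// 2.5 h or 5 h; a trigger of 36 would give exactly 3 h, but 36 isn't offered.
int collectionMinutes = 5
[30, 36, 60].each { trigger ->
    def hours = collectionMinutes * trigger / 60
    println "trigger interval ${trigger} -> alert after ${hours} hours"
}
// 30 -> 2.5, 36 -> 3 (not selectable today), 60 -> 5
```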

Normal Datapoints: Allow JSON responses to dynamically populate Name and Post Processor values

While working on optimizing PowerShell scripts for LogicMonitor, we found that Active Discovery was great for some applications. However, when it came to PowerShell invoking commands (running scripts on servers), we found that Active Discovery has the potential to generate too many connections to the servers. The answer we arrived at was doing everything in one script and returning it all in a JSON response. This worked significantly better than dynamic Active Discovery, but had one drawback: the datapoints had to be entered manually.

My suggestion is that LogicMonitor modify datapoints to allow references into the JSON response. Meaning, we would set up one datapoint whose Name field indicates the JSON path to an array containing all of the instances, and whose Post Processor points at the corresponding JSON path for the value of each instance.

Example JSON:

[
  { "Title": "Name of an Instance", "Value": 1 },
  { "Title": "Name of another Instance", "Value": 2000 }
]

The datapoint would look something like this:

Name         Metric Type  Raw Metric  Post Processor  Alert Threshold
json(Title)  gauge        output      json(Value)     != 1

The results would create instances on a graph, just as if you had typed them out normally:

"Name of an Instance": 1
"Name of another Instance": 2000

I believe this would be more efficient while still letting us stay dynamic. Thanks, Jason Wainwright
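
To show what such a mapping would do, here is a Groovy sketch that parses the example JSON and emits one "instance.datapoint=value" line per array element (a sketch of the proposed behaviour, not an existing LogicMonitor feature; the "output" key matches the Raw Metric name in the table above):

```groovy
import groovy.json.JsonSlurper

// The JSON response from the post, as the script would return it.
def response = '''[
  { "Title": "Name of an Instance", "Value": 1 },
  { "Title": "Name of another Instance", "Value": 2000 }
]'''

new JsonSlurper().parseText(response).each { item ->
    // json(Title) -> instance name, json(Value) -> datapoint value
    println "${item.Title}.output=${item.Value}"
}
// Name of an Instance.output=1
// Name of another Instance.output=2000
```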