Fixing misconfigured Auto-Balanced Collector assignments
I’ve seen this issue pop up a lot in support, so I figured this post may help some folks out. I just came across a ticket the other day, so it’s fresh on my mind! For Auto-Balanced Collector Groups (ABCG) to work properly, i.e. balance and failover, you have to make sure that the Collector Group is set to the ABCG and (this is the important part) the Preferred Collector is set to “Auto Balance”. If it is set to an actual Collector ID, then the device won’t get the benefits of the ABCG.

Ok, so that’s cool, but now the real question is: how do you fix this? There’s not really a good way to surface in the portal all devices where this is misconfigured. It’s not a system property, so a report or AppliesTo query won’t help here… Fortunately, not all hope is lost! You can use the ✨API✨. When you GET a Resource/device, you will get back some JSON, and what you want is for the autoBalancedCollectorGroupId field to equal the preferredCollectorGroupId field. If “Preferred Collector” is not “Auto Balance” and is set to an actual ID, then autoBalancedCollectorGroupId will be 0.

Breaking it down step by step:

1. First, get a list of all ABCG IDs: https://www.logicmonitor.com/swagger-ui-master/api-v3/dist/#/Collector%20Groups/getCollectorGroupList
   /setting/collector/groups?filter=autoBalance:true
2. Then, with any given ABCG ID, you can filter a device list for all devices where there’s this mismatch: https://www.logicmonitor.com/swagger-ui-master/api-v3/dist/#/Devices/getDeviceList
   /device/devices?filter=autoBalancedCollectorGroupId:0,preferredCollectorGroupId:11 (where 11 is the ID of an ABCG)
3. And now, for each device returned, make a PATCH so that autoBalancedCollectorGroupId is set to preferredCollectorGroupId: https://www.logicmonitor.com/swagger-ui-master/api-v3/dist/#/Devices/patchDevice

Here’s a link to the full script, written in Python, for you to check out. I’ll also add it below in a comment since this is already getting long. Do you have a better, easier, or more efficient way of doing this? I’d love to hear about it!
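To make those three steps concrete, here is a minimal Python sketch of the same flow. It is an illustration, not the full script: the account name, access ID, and access key are placeholders, and the response handling assumes API v3’s "items" envelope.

import base64
import hashlib
import hmac
import json
import time

import requests

ACCOUNT = "yourcompany"   # placeholder
ACCESS_ID = "..."         # placeholder
ACCESS_KEY = "..."        # placeholder
BASE = "https://" + ACCOUNT + ".logicmonitor.com/santaba/rest"

def lm_request(verb, path, params=None, payload=None):
    # LMv1 signature: base64(hex(HMAC-SHA256(accessKey, verb + epoch + data + path)))
    # Note that query parameters are not part of the signed string.
    epoch = str(int(time.time() * 1000))
    data = json.dumps(payload) if payload is not None else ""
    msg = verb + epoch + data + path
    digest = hmac.new(ACCESS_KEY.encode(), msg.encode(), hashlib.sha256).hexdigest()
    signature = base64.b64encode(digest.encode()).decode()
    headers = {
        "Authorization": "LMv1 %s:%s:%s" % (ACCESS_ID, signature, epoch),
        "Content-Type": "application/json",
        "X-Version": "3",
    }
    return requests.request(verb, BASE + path, params=params,
                            data=data or None, headers=headers)

# 1. Get all Auto-Balanced Collector Groups
abcgs = lm_request("GET", "/setting/collector/groups",
                   params={"filter": "autoBalance:true"}).json()["items"]

for abcg in abcgs:
    # 2. Find devices pointed at this ABCG whose Preferred Collector is a real ID
    flt = "autoBalancedCollectorGroupId:0,preferredCollectorGroupId:%d" % abcg["id"]
    devices = lm_request("GET", "/device/devices",
                         params={"filter": flt}).json()["items"]
    for device in devices:
        # 3. PATCH the device so the two fields match and the ABCG takes over
        lm_request("PATCH", "/device/devices/%d" % device["id"],
                   payload={"autoBalancedCollectorGroupId": device["preferredCollectorGroupId"]})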
Creating a propertySource to populate a NOC widget in a dashboard... need ## in a string.
The NOC widget items have a field that requires me to have the string "##RESOURCEGROUP##" pushed through the JSON into the NOC item. Since I’m using a propertySource to run the script on a schedule (I have a larger VM with a collector with a longer script timeout, just for doing deeper scripted work through the API or full-domain sweeps that take more time), the LM system is going to try to replace that token at run time rather than returning the explicit string. Who knows the correct escape sequence for turning that into a string literal on its way into the REST API PATCH? Scripting questions through support are best effort, and I don’t usually come with easy questions.
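One common workaround (assuming the collector substitutes tokens in the literal script text before running it) is to build the token by concatenation at run time, so the sequence ##RESOURCEGROUP## never appears verbatim in the source. A minimal sketch in Python; the NOC-item field names here are illustrative, not the exact widget schema:

import json

# Build "##RESOURCEGROUP##" at run time so the literal token never appears
# in the script source and cannot be substituted before execution.
token = "##" + "RESOURCEGROUP" + "##"

noc_item = {"name": token}   # illustrative field name, not the exact schema
payload = json.dumps({"items": [noc_item]})
print(payload)               # ... "name": "##RESOURCEGROUP##" ...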
Running a Perl script on an AIX box over SSH?
Hi, we have an old monitoring system that we’re trying to decommission and move everything into LM. The current system connects to an AIX server using an SSH key, and then runs a Perl script that’s located in a particular folder. It then takes the output from that script and determines if there’s an alert condition to tell someone about. I need to move this same functionality into LM.

I’m assuming the SSH access part shouldn’t be a big deal. I can either manually set up a username/password, or I found info on putting a key in LM somewhere and using that. Once LM can connect to the server, can it launch a script file that’s located on the server? I’m not sure if I need to recreate the script inside of LM, or if it can just tell the server to execute the script it already has. If the script runs remotely, can LM then parse the returned data to determine if something is an error or not?

If anyone has any tutorials or anything on how I can start working on this, let me know. I don’t know anything about scripting and LM, and so far I don’t really know where to start. Thanks!
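In LM this would typically be a scripted DataSource (usually Groovy, using the collector’s SSH helpers), but the general pattern being asked about (connect with a key, run the remote script, parse the output) looks like the sketch below, written in Python with paramiko purely for illustration. The host, user, key path, and script path are placeholders:

import paramiko

# Connect to the AIX box with an SSH key, run the existing Perl script,
# and turn its output into something thresholds can alert on.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("aix-host.example.com", username="monitor",
               key_filename="/home/monitor/.ssh/id_rsa")  # placeholders

stdin, stdout, stderr = client.exec_command("perl /opt/checks/check_disk.pl")
output = stdout.read().decode()
exit_code = stdout.channel.recv_exit_status()
client.close()

# The script does not need to be recreated in LM; it keeps running on the server.
# Emitting numeric values parsed from the output lets thresholds do the alerting.
print("exit_code=%d" % exit_code)
for line in output.splitlines():
    print(line)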
Scripted Alert Thresholds
It should be possible to script alert thresholds in Groovy, based on (for example) ILPs and host properties. I need to modify SNMP_Network_Interfaces to vary the InDiscardPercent threshold depending on whether the interface type is radioMAC and whether it belongs to a given customer. Something along the lines of:

def isRadio = instanceProps.get('auto.interface.type') == 'radioMAC';
def customerCode = hostProps.get('customer.code');

if (isRadio && customerCode == 'ACME') {
    // No threshold
    return '';
}

// The default
return '> 10';
Unable to authenticate Rest api with servicenow to get devices
Hi All, I am trying to authenticate to LogicMonitor from ServiceNow using a script, but it is not working. It throws an authentication error, status 401.

var ACCESS_ID = '123';
var ACCESS_KEY = 'abc';
var ACCOUNT_NAME = 'test';
var resourcePath = '/device/devices';
var epoch = (new Date()).getTime();
var id = 2;
var requestVars = 'GET' + epoch + resourcePath;

var HexUtil = {
    convertByteArrayToHex: function(byteArray) {
        var hex = "";
        byteArray.forEach(function(byteValue) {
            hex += HexUtil.convertByteToHex(byteValue);
        });
        return hex;
    },
    convertByteToHex: function(b) {
        var hexChar = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "a", "b", "c", "d", "e", "f"];
        return hexChar[(b >> 4) & 0x0f] + hexChar[b & 0x0f];
    }
};

// Compute the HMAC-SHA256 hash
var key = "key";
key = encodeURIComponent(key);
key = GlideStringUtil.base64Encode(key);
var msg = "message";
msg = encodeURIComponent(msg);
var mac = new GlideCertificateEncryption();
signature = mac.generateMac(requestVars, "HmacSHA256", ACCESS_ID);
var bytes = GlideStringUtil.base64DecodeAsBytes(signature);
var hex = HexUtil.convertByteArrayToHex(bytes);
var hexB64 = GlideStringUtil.base64Encode(hex);
var token = 'LMv1 ' + ACCESS_ID + ':' + signature + ':' + epoch;
gs.info('Devices = ' + token);

var httpRequest = new GlideHTTPRequest('https://' + ACCOUNT_NAME + '.logicmonitor.com/santaba/rest/device/devices');
httpRequest.addHeader('Content-Type', 'application/json');
httpRequest.addHeader('Authorization', token);
//httpRequest.addHeader('x-server-version', '3');
var response = httpRequest.getBody();
gs.log(response);

Could you please help me with this?
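For comparison, LMv1 expects the signature to be the Base64 encoding of the hex HMAC-SHA256 digest of 'GET' + epoch + resourcePath, keyed with the access key. In the script above, generateMac appears to get its arguments mixed up (ServiceNow’s generateMac(key, algorithm, data) takes a Base64-encoded key first), and the token is built from the raw Mac rather than the Base64-of-hex value (hexB64). A few lines of Python that produce a known-good token to compare against:

import base64
import hashlib
import hmac
import time

ACCESS_ID, ACCESS_KEY = "123", "abc"   # same placeholders as above
resource_path = "/device/devices"
epoch = str(int(time.time() * 1000))
request_vars = "GET" + epoch + resource_path

# LMv1: base64(hex(HMAC-SHA256(accessKey, requestVars)))
digest = hmac.new(ACCESS_KEY.encode(), request_vars.encode(), hashlib.sha256).hexdigest()
signature = base64.b64encode(digest.encode()).decode()
print("LMv1 " + ACCESS_ID + ":" + signature + ":" + epoch)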
Ad-hoc script running
Often when an alert pops up, I find myself running some very common troubleshooting/helpful tools to quickly gather more info. It would be nice to get that info quickly and easily without having to go to other tools when an alert occurs. For example, right now when we get a high CPU alert, the first thing I do is run pslist -s \\computername (PSTools are so awesome) and psloggedon \\computername to see who’s logged in at the moment.

I know it’s possible to create a datasource to discover all active processes, and retrieve CPU/memory/disk metrics specific to a given process, but processes on a given server might change pretty frequently, so you’d have to run active discovery frequently. It just doesn’t seem like the best way, and most of the time I don’t care what’s running on the server; I only need to know “in the moment.”

A way to run a script via a button for a given datasource would be a really cool feature. Maybe on the datasource you could add a feature to hold a “gather additional data” or meta-data script, which could then be invoked manually on an alert or datasource instance. I.e., when an alert occurs, you could click a button in the alert called “gather additional data” or something, which would run the script and produce a small box or window with the output. The ability to run it periodically (every 15 seconds or 5 minutes, etc.) would also be useful. This would also give a NOC the ability to troubleshoot a bit more, or provide some additional context around an alert, without everyone having to know a bunch of tools or have administrative access to a server.
Create Dynamic group by Script
Hi Members, I am trying to create Device Dynamic Groups with a Python script, but I am failing. Could you please tell me what I am missing? The intended group condition: any device whose display name starts with “Prod” should join the group.

data = '{"name":"test111","parentId":111,"appliesTo":"startsWith(\\"system.displayname\\",\\"Prod\\")"}'

I have attached the full script too. I am running Python 3.6.5.

#!/bin/env python
import requests
import json
import hashlib
import base64
import time
import hmac

# Account Info
AccessId = 'XXXXXX'
AccessKey = 'XXXXXX'
Company = 'contoso'

# Request Info
httpVerb = 'POST'
resourcePath = '/device/groups'
# The inner double quotes in the appliesTo expression must be escaped (\")
# or the JSON body is invalid, which is the likely cause of the failure.
data = '{"name":"test111","parentId":111,"appliesTo":"startsWith(\\"system.displayname\\",\\"Prod\\")"}'

# Construct URL
url = 'https://' + Company + '.logicmonitor.com/santaba/rest' + resourcePath

# Get current time in milliseconds
epoch = str(int(time.time() * 1000))

# Concatenate request details
requestVars = httpVerb + epoch + data + resourcePath

# Construct signature (Python 3: hmac needs bytes)
digest = hmac.new(AccessKey.encode(), msg=requestVars.encode(), digestmod=hashlib.sha256).hexdigest()
signature = base64.b64encode(digest.encode())

# Construct headers
auth = 'LMv1 ' + AccessId + ':' + signature.decode() + ':' + epoch
headers = {'Content-Type': 'application/json', 'Authorization': auth}

# Make request
response = requests.post(url, data=data, headers=headers)

Please assist. Thank you.
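A simpler way to sidestep the escaping problem entirely is to let json.dumps build the body, since it handles the inner quotes for you:

import json

data = json.dumps({
    "name": "test111",
    "parentId": 111,
    "appliesTo": 'startsWith("system.displayname","Prod")',
})
# -> {"name": "test111", "parentId": 111, "appliesTo": "startsWith(\"system.displayname\",\"Prod\")"}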
API/Script
Hi, I want to create a device group named “Fileserver” under /device/servers. This is the first time I am running the script, so could you please check if my script is OK? Also, please let me know how to run this script. I am sharing the script with you.

#!/bin/env python
import requests
import json
import hashlib
import base64
import time
import hmac

# Account Info
AccessId = 'TBA'
AccessKey = 'TBA'
Company = 'contoso'

# Request Info
httpVerb = 'POST'
resourcePath = '/device/servers'
data = '{"name":"Fileserver"}'

# Construct URL
url = 'https://' + Company + '.logicmonitor.com/santaba/rest' + resourcePath

# Get current time in milliseconds
epoch = str(int(time.time() * 1000))

# Concatenate request details
requestVars = httpVerb + epoch + data + resourcePath

# Construct signature
signature = base64.b64encode(hmac.new(AccessKey, msg=requestVars, digestmod=hashlib.sha256).hexdigest())

# Construct headers
auth = 'LMv1 ' + AccessId + ':' + signature + ':' + epoch
headers = {'Content-Type': 'application/json', 'Authorization': auth}

# Make request
response = requests.post(url, data=data, headers=headers)

# Print status and body of response
print 'Response Status:', response.status_code
print 'Response Body:', response.content
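One thing to check, assuming the REST semantics the script already uses: device groups are created by POSTing to the /device/groups resource, with the parent folder referenced by its numeric group ID rather than by its tree path, so '/device/servers' is unlikely to work as a resourcePath. A sketch of the corrected request info; the parentId value is a placeholder, so look up the real ID of the Servers group first (e.g. via GET /device/groups?filter=name:Servers):

# Request Info (sketch): create the group under the "Servers" folder by ID
httpVerb = 'POST'
resourcePath = '/device/groups'
data = '{"name":"Fileserver","parentId":123}'  # 123 is a placeholder group ID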
Internal Scripted Service Check "test script" button and debug
Although a TEST step button is available in service checks (external and internal), it only shows the request made and the response output; neither type gives you any kind of output from the post-processing of the response, i.e. JSONPath results. For the internal service check with the scripted option (rather than settings), this is even more noticeable, as there is no obvious way to put println or debug statements in your response script to make sure it is doing the right thing, or to work out why it is not.

The addition of a stdout/output window that shows response (or request) script output would be really helpful. Possibly also the addition of a Test Script button for the response script, which would run the script and show output as datasources do, would be great.

I ran these ideas past support and they suggested raising a feature request, so I have. They did also suggest I could test my scripts (request or response) in the console debug window, which is possible, but there is no obvious way to mock the LMRequest/LMResponse objects so that the script runs the same way it normally would as a service check. If there are examples or ways to do this, it would be a good subject for a support page.
Monitoring Tomcat Context Web Response
Monitoring Tomcat Context with HTTP Data Collection

Tomcat is not a common household pet, and not a type of cat (“a male cat”) after all, but an eccentric name like those of many other open-source projects (Guacamole, Apache, RabbitMQ, etc.), names that I believe reflect the freedom and creativity behind that software. I am not here to explain Tomcat per se, both because Google provides abundant information about it and because it is out of the scope of this article. Simply put, Tomcat is a servlet container in which multiple Contexts can exist; each Context is a web application.

Recently, quite a few customers have asked to monitor the status and response time of Tomcat Contexts, which is simple to do since there is already a readily available HTTP DataSource, or an Internal Web Service Check, for that purpose. However, these customers have multiple Contexts, as any normal Tomcat installation does, so Active Discovery (AD) is needed to enumerate the Contexts running on Tomcat before the simple HTTP data collection (just two datapoints, Response Time and Status) is applied.

For the Active Discovery part, credit goes to our renowned David Lee, whom you may know if you have ever opened a ticket with LogicMonitor Support. For confidentiality, all the screenshots come from my own lab rather than a client's environment; the result is the same, although a real production environment has more Contexts.

The AD is a short Groovy script that uses Java Management Extensions (JMX) to connect to the remote Tomcat from the Collector. The Discovery Method chosen in this DataSource should be 'SCRIPT'.

import com.santaba.agent.groovyapi.http.*;
import com.santaba.agent.groovyapi.jmx.*;
import javax.management.remote.JMXServiceURL
import javax.management.remote.JMXConnectorFactory
import org.jboss.remotingjmx.*

def jmx_host = hostProps.get('system.hostname');
def jmx_port = hostProps.get('tomcat.jmxports');
def jmx_url = "service:jmx:rmi:///jndi/rmi://" + jmx_host + ":" + jmx_port + "/jmxrmi";

// Open the JMX connection. This step was evidently lost from the original
// post; JMX.open(jmx_url) is assumed here.
def jmx_conn = JMX.open(jmx_url);

context_array = jmx_conn.getChildren("Catalina:type=Manager,host=localhost,context=*");
context_array.each { context ->
    println context + "##" + context
}
return 0;

(note: tomcat.jmxports is the port JMX uses to connect to the Tomcat servlet container; in this case it is the standard port 9000)

Following is the result of AD (note: one of the Context names is 'context-test'), which can be tested from the collector debug window as follows:

$ !jmx port=9000 h=172.6.5.12 "Catalina:type=Manager,host=localhost,context=*"
Catalina:type=Manager,host=localhost,context=* =>
/examples
/manager
/docs
/context-test
/host-manager
/

As for the data collection, the mechanism used to collect data is 'WEBPAGE'. (Note: the port number depends on the Tomcat settings, and the wildvalue will be the Context name.) Data collection can be tested in the collector with the command !http:

HTTP response received at: 2017-04-20 09:05:30.85.
Time elapsed: 3ms
HTTP/1.1 200
Accept-Ranges: bytes
ETag: W/"214-1492696731000"
Last-Modified: Thu, 20 Apr 2017 13:58:51 GMT
Content-Type: text/html
Content-Length: 214
Date: Thu, 20 Apr 2017 14:05:30 GMT

<html>
<body>
<h3>TEST Tomcat Context web access</h3>
<pre>
<>
[
{ status: "OK", context: "context-test" },
{ company: "LogicMonitor", }
]
</>
</pre>
</body>
</html>

Here is the final result of the monitoring. From a browser, the Tomcat Context can be accessed just like a normal website. The DataSource is only available for download from the LM Exchange (version 1.1); it is not available in the core LogicMonitor DataSource repository.