Create Dynamic group by Script
Hi Members, I am trying to create Device Dynamic Groups with a Python script, but I am failing. Could you please tell me what I am missing? The condition for the group: if a device's display name starts with "Prod", the device should be included in the group.

data = '{"name":"test111","parentId":111,"appliesTo":"startsWith("system.displayname","Prod")"}'

I have attached the full script too. I am running Python 3.6.5.

```python
#!/bin/env python
import requests
import json
import hashlib
import base64
import time
import hmac

# Account Info
AccessId = 'XXXXXX'
AccessKey = 'XXXXXX'
Company = 'contoso'

# Request Info
httpVerb = 'POST'
resourcePath = '/device/groups'
data = '{"name":"test111","parentId":111,"appliesTo":"startsWith("system.displayname","Prod")"}'

# Construct URL
url = 'https://' + Company + '.logicmonitor.com/santaba/rest' + resourcePath

# Get current time in milliseconds
epoch = str(int(time.time() * 1000))

# Concatenate request details
requestVars = httpVerb + epoch + data + resourcePath

# Construct signature
#signature = base64.b64encode(hmac.new(AccessKey,msg=requestVars,digestmod=hashlib.sha256).hexdigest())
hmac = hmac.new(AccessKey.encode(), msg=requestVars.encode(), digestmod=hashlib.sha256).hexdigest()
signature = base64.b64encode(hmac.encode())

# Construct headers
#auth = 'LMv1 ' + AccessId + ':' + signature + ':' + epoch
#headers = {'Content-Type':'application/json','Authorization':auth}
auth = 'LMv1 ' + AccessId + ':' + signature.decode() + ':' + epoch
headers = {'Content-Type': 'application/json', 'Authorization': auth}

# Make request
response = requests.post(url, data=data, headers=headers)
```

Please assist. Thank you.
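A likely culprit is that the `data` string is not valid JSON: the unescaped quotes inside the `appliesTo` value terminate the string early. A minimal sketch of one way to fix it, building the payload as a dict and letting `json.dumps` handle the quoting (the endpoint and field names are taken from the post above; nothing else is assumed):

```python
import json

# Build the payload as a dict; json.dumps escapes the inner quotes in
# appliesTo automatically, producing ...\"system.displayname\"... on the wire.
payload = {
    "name": "test111",
    "parentId": 111,
    "appliesTo": 'startsWith("system.displayname","Prod")',
}
data = json.dumps(payload)
```

Note that the same `data` string must be used both in `requestVars` (for the signature) and in the `requests.post` body, as the script already does. Reusing the name `hmac` for the digest variable also shadows the `hmac` module; it works for a single call but is worth renaming.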
Ad-hoc script running
Often when an alert pops up, I find myself running some very common troubleshooting tools to quickly gather more info. It would be nice to get that info quickly and easily without having to go to other tools when an alert occurs. For example, right now when we get a high CPU alert, the first thing I do is run pslist -s \\computername (PsTools are so awesome) and psloggedon \\computername to see who's logged in at the moment.

I know it's possible to create a datasource to discover all active processes and retrieve CPU/memory/disk metrics specific to a given process, but processes on a given server might change pretty frequently, so you'd have to run active discovery frequently. It just doesn't seem like the best way, and most of the time I don't care what's running on the server and only need to know "in the moment."

A way to run a script via a button for a given datasource would be a really cool feature. Maybe the datasource could hold a "gather additional data" or meta-data script, which could then be invoked manually on an alert or datasource instance, i.e. when an alert occurs, you can click a button in the alert called "gather additional data" which would run the script and produce a small box or window with the output. The ability to run it periodically (every 15 seconds or 5 minutes, etc.) would also be useful. This would also give a NOC the ability to troubleshoot a bit more, or provide some additional context around an alert, without everyone having to know a bunch of tools or have administrative access to a server.
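For a sense of what such a "gather additional data" script might look like, here is a minimal Python sketch that shells out to the two PsTools commands mentioned above. It assumes pslist.exe and psloggedon.exe are on the PATH and the caller has admin rights on the target; pslist's -s continuous-update flag is dropped so the command returns after a single snapshot, and the hostname is hypothetical.

```python
import subprocess

# Hypothetical "gather additional data" script for a high-CPU alert:
# run the two PsTools commands from the post and print their output.
def gather_additional_data(computer):
    target = "\\\\" + computer
    for cmd in (["pslist", target], ["psloggedon", target]):
        print("$ " + " ".join(cmd))
        try:
            result = subprocess.run(cmd, stdout=subprocess.PIPE,
                                    stderr=subprocess.STDOUT,
                                    universal_newlines=True, timeout=30)
            print(result.stdout)
        except subprocess.TimeoutExpired:
            print("(command timed out)")

gather_additional_data("PROD-WEB01")  # hypothetical hostname
```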
Scripted SNMP request to different port
We have some Linux servers for which we need to use SNMP for OS monitoring, but we also need to use SNMP for some application monitoring. The applications bind to a different port, since the OS agent is bound to the standard SNMP port, 161. We would like to be able to use the embedded scripting, so please make it possible to specify a port with the Snmp.get() command:

```groovy
a = Snmp.get(host, ".1.3.6.1.4.1.8072.1.3.2.3.1.2.7.108.109.45.110.102.115.100")
```
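In the meantime, one possible workaround is a script datasource that shells out to the net-snmp CLI, which accepts a host:port agent address. A minimal sketch, assuming the net-snmp tools are installed where the script runs; the host, port, and community string are placeholders, and the OID is the one from the request above:

```python
import subprocess

# Query an SNMP agent on a non-standard port via the net-snmp CLI.
# net-snmp accepts "host:port" as the agent address; -Oqv prints
# just the value. Host, port, and community below are placeholders.
def snmp_get(host, port, oid, community="public"):
    cmd = ["snmpget", "-v2c", "-c", community, "-Oqv",
           "{}:{}".format(host, port), oid]
    out = subprocess.run(cmd, stdout=subprocess.PIPE,
                         universal_newlines=True, timeout=10)
    return out.stdout.strip()

print(snmp_get("10.0.0.5", 1161,
               ".1.3.6.1.4.1.8072.1.3.2.3.1.2.7.108.109.45.110.102.115.100"))
```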
Internal Scripted Service Check "test script" button and debug
Although a TEST STEP button is available in service checks (external and internal), this only shows the request made and the response output windows. Neither type gives you any kind of output from the post-processing of the response, i.e. JSONPath results. For the internal service check that has the scripted option (rather than settings) this is even more noticeable, as there is no obvious way to put any println or debug statements in your response script to make sure it is doing the right thing, or to work out why it is not. The addition of a stdout/output window that shows response (or request) script output would be really helpful. Possibly also the addition of a Test Script button for the response script, which would run the script and show output as per datasources, would be great.

I ran these ideas past support and they suggested raising a feature request, so I have. They did also suggest I could test my scripts (request or response) in the console debug window, which is possible, but there are no obvious ways to mock LMRequest/LMResponse objects so that the script can run the same way it would normally run as a service check. If there are examples or ways to do this, then this may be a good subject for a support page explaining how.
API/Script
Hi, I want to create a device group named "Fileserver" in /device/servers. This is the first time I am running the script, so could you please check if my script is OK? Also, please let me know how to run it. I am sharing the script with you.

```python
#!/bin/env python
import requests
import json
import hashlib
import base64
import time
import hmac

# Account Info
AccessId = 'TBA'
AccessKey = 'TBA'
Company = 'contoso'

# Request Info
httpVerb = 'POST'
resourcePath = '/device/servers'
data = '{"name":"Fileserver"}'

# Construct URL
url = 'https://' + Company + '.logicmonitor.com/santaba/rest' + resourcePath

# Get current time in milliseconds
epoch = str(int(time.time() * 1000))

# Concatenate request details
requestVars = httpVerb + epoch + data + resourcePath

# Construct signature
signature = base64.b64encode(hmac.new(AccessKey, msg=requestVars, digestmod=hashlib.sha256).hexdigest())

# Construct headers
auth = 'LMv1 ' + AccessId + ':' + signature + ':' + epoch
headers = {'Content-Type': 'application/json', 'Authorization': auth}

# Make request
response = requests.post(url, data=data, headers=headers)

# Print status and body of response
print 'Response Status:', response.status_code
print 'Response Body:', response.content
```
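As written, the script is Python 2 (the `print` statements, and the str key passed to `hmac.new`, will not run under Python 3), so save it as, say, create_group.py and run it with `python create_group.py`. One likely issue: LogicMonitor's REST API creates groups via POST /device/groups with a parentId, as in the first thread in this digest, rather than POST /device/servers. A minimal Python 3 sketch under that assumption; parentId=1 (the root group) is a placeholder:

```python
#!/usr/bin/env python3
import requests, json, hashlib, base64, time, hmac

# Account info (placeholders, as in the original post)
AccessId, AccessKey, Company = 'TBA', 'TBA', 'contoso'

# Assumed endpoint: groups are created under /device/groups with a
# parentId; parentId=1 (root) is a placeholder - adjust to your tree.
httpVerb, resourcePath = 'POST', '/device/groups'
data = json.dumps({"name": "Fileserver", "parentId": 1})

url = 'https://' + Company + '.logicmonitor.com/santaba/rest' + resourcePath
epoch = str(int(time.time() * 1000))
requestVars = httpVerb + epoch + data + resourcePath

# LMv1 signature: base64 of the hex HMAC-SHA256 digest
digest = hmac.new(AccessKey.encode(), requestVars.encode(), hashlib.sha256).hexdigest()
signature = base64.b64encode(digest.encode()).decode()

headers = {'Content-Type': 'application/json',
           'Authorization': 'LMv1 ' + AccessId + ':' + signature + ':' + epoch}

response = requests.post(url, data=data, headers=headers)
print('Response Status:', response.status_code)
print('Response Body:', response.content)
```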
Monitoring Tomcat Context Web Response
Monitoring Tomcat Context with HTTP Data Collection

Tomcat is not a common house pet, and not a type of cat ("a male cat") after all, but an eccentric name like those of other open-source projects, i.e. Guacamole, Apache, RabbitMQ, etc., which I believe reflects the free-spirited nature of such software, where creativity is the main ingredient. I am not here to explain Tomcat per se, both because Google provides abundant information about it and because it is out of the scope of this article. Simply put, Tomcat is a servlet container in which multiple Context containers can exist. Each Context represents a web application.

Recently quite a few customers have requested monitoring of the status and response time of Tomcat Contexts, which is simple since there is already a readily available HTTP datasource, or an Internal Web Service Check, for that purpose. However, our customers have, in this case, multiple Contexts, as is normal for a Tomcat installation. Therefore Active Discovery (AD) is needed to get the list of Contexts running on Tomcat before the simple HTTP data collection, which has only two datapoints (Response Time and Status), can be applied.

For the Active Discovery part, I need to give credit to our renowned David Lee, whom you might be familiar with if you have ever opened a ticket with the LogicMonitor Support channel. For the sake of confidentiality, all the screenshots are from my own lab instead of our client's environment; it produces a similar result, although there are more Context containers in the real production environment.

The AD is a short Groovy script utilizing Java Management Extensions (JMX) to connect to the remote Tomcat from the Collector. The Discovery Method chosen for this datasource should be 'SCRIPT'.

```groovy
import com.santaba.agent.groovyapi.http.*;
import com.santaba.agent.groovyapi.jmx.*;
import javax.management.remote.JMXServiceURL
import javax.management.remote.JMXConnectorFactory
import org.jboss.remotingjmx.*

def jmx_host = hostProps.get('system.hostname');
def jmx_port = hostProps.get('tomcat.jmxports');
def jmx_url = "service:jmx:rmi:///jndi/rmi://" + jmx_host + ":" + jmx_port + "/jmxrmi";

// NOTE: the original post omits the line that opens jmx_url and assigns
// jmx_conn; a JMX connection must be established here before the query below.
context_array = jmx_conn.getChildren("Catalina:type=Manager,host=localhost,context=*");
context_array.each { context ->
    println context + "##" + context
}
return 0;
```

(note: tomcat.jmxports is the port used by JMX to connect to the Tomcat servlet container, in this case port 9000)

Following is the result of the AD (note: one of the context names is 'context-test'), which can be tested from the collector debug window as follows:

```
$ !jmx port=9000 h=172.6.5.12 "Catalina:type=Manager,host=localhost,context=*"
Catalina:type=Manager,host=localhost,context=* => /examples /manager /docs /context-test /host-manager /
```

As for the data collection, the mechanism used to collect data is 'WEBPAGE'. (Note: the port number will depend on the setting in Tomcat, and the wildvalue will be the Context name.)

Data collection can be tested in the collector with the !http command:

```
HTTP response received at: 2017-04-20 09:05:30.85.
Time elapsed: 3ms
HTTP/1.1 200
Accept-Ranges: bytes
ETag: W/"214-1492696731000"
Last-Modified: Thu, 20 Apr 2017 13:58:51 GMT
Content-Type: text/html
Content-Length: 214
Date: Thu, 20 Apr 2017 14:05:30 GMT

<html>
<body>
<h3>TEST Tomcat Context web access</h3>
<pre>
<>
[
  { status: "OK", context: "context-test" },
  { company: "LogicMonitor", }
]
</>
</pre>
</body>
</html>
```
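For a quick sanity check outside the collector, the same WEBPAGE-style measurement can be approximated with a few lines of Python. The host is taken from the debug session above; the port (8080) and context path are assumptions, since, as noted, both depend on the Tomcat configuration:

```python
import time
import requests

# Fetch the context URL and time the response - roughly the two
# datapoints (Status, Response Time) the WEBPAGE collector records.
# Host is from the debug output above; port and context are assumed.
start = time.time()
r = requests.get("http://172.6.5.12:8080/context-test/", timeout=10)
elapsed_ms = (time.time() - start) * 1000
print(r.status_code, "%.0f ms" % elapsed_ms)
```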
Here is the final result of the monitoring:

From the browser, the Tomcat context can be accessed just like a normal website.

The Datasource is only available for download from LM Exchange (version 1.1); it is not available in the core repository of LogicMonitor datasources.

Note: This is what Tomcat Manager looks like: