Silent Log Source Detection
We are working on migrating some items to LM Logs, and one thing that came up was the lack of silent log source detection. Log Usage is a PushModule and only updates when data is pushed to it, so unlike a traditional DataSource, you cannot alert on No Data or 0 events for X period of time. With that in mind, we wrote our own, which I am sharing here instead of in the LM Exchange because it will require a bit of tailoring on your end. This could be 100% portable, but I purposefully didn't do that to save on API calls.

What this DataSource does is make two API calls per device back to the LogicMonitor portal. The first retrieves the instances of LogUsage, filtered by the DataSource ID set in the apifilter variable near the top of the script; that is where you would need to change the ID to match what is in your portal. After it retrieves the instance of LogUsage, it queries data for said instance. That returns a JSON structure like this:

{
    "dataPoints": [
        "size_in_bytes",
        "event_received"
    ],
    "dataSourceName": "LogUsage",
    "nextPageParams": "start=1739704983&end=1741035719",
    "time": [
        1741110300000
    ],
    "values": [
        [
            25942,
            16
        ]
    ]
}

There will generally be more items in time and in values. We grab the first element, [0], of the time array, as that is the latest timestamp. Then we calculate the difference between now and that timestamp in hours, and return that difference in hours, or a NaN.

/*******************************************************************************
 * © 2007-2024 - LogicMonitor, Inc. All rights reserved.
 ******************************************************************************/
import groovy.json.JsonSlurper
import com.santaba.agent.util.Settings
import com.santaba.agent.live.LiveHostSet
import org.apache.commons.codec.binary.Hex
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

String apiId = hostProps.get("lmaccess.id") ?: hostProps.get("logicmonitor.access.id")
String apiKey = hostProps.get("lmaccess.key") ?: hostProps.get("logicmonitor.access.key")
def portalName = hostProps.get("lmaccount") ?: Settings.getSetting(Settings.AGENT_COMPANY)
String deviceid = hostProps.get("system.deviceId")
Map proxyInfo = getProxyInfo()

def fields = 'id,dataSourceId,deviceDataSourceId,name,lastCollectedTime,lastUpdatedTime'
def apipath = "/device/devices/" + deviceid + "/instances"
def apifilter = 'dataSourceId:43806042'  // Change this ID to match the LogUsage DataSource in your portal.

def deviceinstances = apiGetMany(portalName, apiId, apiKey, apipath, proxyInfo, ['size': 1000, 'fields': fields, 'filter': apifilter])
instanceid = deviceinstances[0]['id']
devicedatasourceid = deviceinstances[0]['deviceDataSourceId']

def instancepath = "/device/devices/" + deviceid + "/devicedatasources/" + devicedatasourceid + "/instances/" + instanceid + '/data'
def instancedata = apiGet(portalName, apiId, apiKey, instancepath, proxyInfo, ['period': 720])

def now = System.currentTimeMillis()
if (instancedata['time'][0] != null && instancedata['time'][0] > 0) {
    // Hours elapsed since the most recent LogUsage timestamp.
    def diffHours = ((now - instancedata['time'][0]) / (1000 * 60 * 60)).toDouble().round(2)
    println "hours_since_log=${diffHours}"
} else {
    def diffHours = "NaN"
    println "hours_since_log=${diffHours}"
}

// If the script gets to this point, the collector should consider this device alive.
keepAlive(hostProps)

return 0

/* Paginated GET method. Returns a list of objects. */
List apiGetMany(portalName, apiId, apiKey, endPoint, proxyInfo, Map args = [:]) {
    def pageSize = args.get('size', 1000)  // Default the page size to 1000 if not specified.
    List items = []
    args['size'] = pageSize
    def pageCount = 0
    while (true) {
        pageCount += 1
        // Update the args for the next page.
        args['size'] = pageSize
        args['offset'] = items.size()
        def response = apiGet(portalName, apiId, apiKey, endPoint, proxyInfo, args)
        if (response.get("errmsg", "OK") != "OK") {
            throw new Exception("Santaba returned errmsg: ${response?.errmsg}")
        }
        items.addAll(response.items)
        // If we received less than we asked for, it means we are done.
        if (response.items.size() < pageSize) break
    }
    return items
}

/* Simple GET, returns a parsed JSON payload. No processing. */
def apiGet(portalName, apiId, apiKey, endPoint, proxyInfo, Map args = [:]) {
    def request = rawGet(portalName, apiId, apiKey, endPoint, proxyInfo, args)
    if (request.getResponseCode() == 200) {
        def payload = new JsonSlurper().parseText(request.content.text)
        return payload
    } else {
        throw new Exception("Server returned HTTP code ${request.getResponseCode()}")
    }
}

/* Raw GET method. */
def rawGet(portalName, apiId, apiKey, endPoint, proxyInfo, Map args = [:]) {
    def auth = generateAuth(apiId, apiKey, endPoint)
    def headers = ["Authorization": auth, "Content-Type": "application/json", "X-Version": "3", "External-User": "true"]
    def url = "https://${portalName}.logicmonitor.com/santaba/rest${endPoint}"
    if (args) {
        def encodedArgs = []
        args.each { k, v ->
            encodedArgs << "${k}=${java.net.URLEncoder.encode(v.toString(), "UTF-8")}"
        }
        url += "?${encodedArgs.join('&')}"
    }
    def request
    if (proxyInfo.enabled) {
        request = url.toURL().openConnection(proxyInfo.proxy)
    } else {
        request = url.toURL().openConnection()
    }
    request.setRequestMethod("GET")
    request.setDoOutput(true)
    headers.each { k, v -> request.addRequestProperty(k, v) }
    return request
}

/* Generate auth for API calls. */
static String generateAuth(id, key, path) {
    Long epoch_time = System.currentTimeMillis()
    Mac hmac = Mac.getInstance("HmacSHA256")
    hmac.init(new SecretKeySpec(key.getBytes(), "HmacSHA256"))
    def signature = Hex.encodeHexString(hmac.doFinal("GET${epoch_time}${path}".getBytes())).bytes.encodeBase64()
    return "LMv1 ${id}:${signature}:${epoch_time}"
}

/* Helper method to remind the collector this device is not dead. */
def keepAlive(hostProps) {
    // Update the liveHost set to tell the collector we are happy.
    hostId = hostProps.get("system.deviceId").toInteger()
    def liveHostSet = LiveHostSet.getInstance()
    liveHostSet.flag(hostId)
}

/**
 * Get collector proxy settings.
 * @return Map with proxy settings, empty map if proxy not set.
 */
Map getProxyInfo() {
    // Each property must be evaluated for null to determine whether to use the collected value or the fallback value.
    // The Elvis operator does not play nice with booleans,
    // so default to true in the absence of the property, making collectorProxy the determinant.
    Boolean deviceProxy = hostProps.get("proxy.enable")?.toBoolean()
    deviceProxy = (deviceProxy != null) ? deviceProxy : true

    // If settings are not present, the value should be false.
    Boolean collectorProxy = Settings.getSetting("proxy.enable")?.toBoolean()
    collectorProxy = (collectorProxy != null) ? collectorProxy : false

    Map proxyInfo = [:]
    if (deviceProxy && collectorProxy) {
        proxyInfo = [
            enabled: true,
            host   : hostProps.get("proxy.host") ?: Settings.getSetting("proxy.host"),
            port   : hostProps.get("proxy.port") ?: Settings.getSetting("proxy.port") ?: 3128,
            user   : Settings.getSetting("proxy.user"),
            pass   : Settings.getSetting("proxy.pass")
        ]
        proxyInfo["proxy"] = new Proxy(Proxy.Type.HTTP, new InetSocketAddress(proxyInfo.host, proxyInfo.port.toInteger()))
    }
    return proxyInfo
}
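If you do want to make this fully portable, at the cost of one extra API call, a sketch like the following could resolve the LogUsage DataSource ID at runtime instead of hard-coding it in apifilter. It reuses the apiGetMany helper above; the /setting/datasources endpoint and name filter are assumptions to verify against your portal's API version:

// Sketch (untested): look up the LogUsage DataSource ID instead of hard-coding it.
// Assumes the module is named "LogUsage" in your portal.
def dsList = apiGetMany(portalName, apiId, apiKey, "/setting/datasources", proxyInfo,
                        ['fields': 'id,name', 'filter': 'name:"LogUsage"'])
def apifilter = "dataSourceId:${dsList[0]['id']}"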
Webhook Event Collection & Cisco Meraki
As a Cisco Meraki Strategic Technology Partner, we are always looking for ways to make our integration the best that it can be, so you can get the most out of your investments with Cisco and LogicMonitor. So, today we kicked off R&D planning for [safe harbor statement] the ability to collect webhook events from Cisco Meraki, with the following objectives:

- Mitigate Cisco Meraki Dashboard API rate limiting.
- Enable [near] real-time alerts for things like camera motion, IoT sensor measurement threshold breaches (or automation button presses), power supply failures...
- Facilitate sending webhook events from Cisco Meraki to LogicMonitor.

I have the following assumptions:

- Customers want to be alerted on most, but not all, webhook events.
- Customers want to have multiple inbound webhook configurations, i.e. for different tenants/customers or different Cisco Meraki organizations.
- Cisco Meraki is the first, but not the only, platform that customers will want to use to send webhook events to LogicMonitor.

If you had a magic wand and could make such an integration do exactly what you wanted, what would be your number one ask? Thank you!
How to redirect the output of the groovy script to the collector log file using groovy script?
In my Groovy script, I want to redirect the script's output into the collector's log file. What Groovy code would redirect the output to the collector's log file? Can anyone help me here?
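One approach that may work, shared as a sketch rather than a confirmed answer: collection scripts run inside the collector's JVM, so you may be able to write through a logging framework the collector already has on its classpath. Whether log4j is present and where its appenders write depends on your collector version, so treat both the import and the logger name below as assumptions to verify:

// Sketch (unverified): log through the collector's bundled log4j, if available.
import org.apache.log4j.Logger

def logger = Logger.getLogger("script.debug")  // hypothetical logger name
logger.info("my script message")               // lands wherever the collector's appenders are configured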
sending Windows syslogs to Logicmonitor
I know this is going to be a duh moment, but back in our proof of concept we set up 2-3 Windows boxes to send logs to LogicMonitor so they can be parsed in the Logs section of the GUI. I cannot for the life of me find in the documentation, or remember, how we set it up. The only thing I can see is that we have System.pushmodules = logusage. It won't let you add that property manually, so I'm guessing it's just hidden somewhere else in the GUI.
Can I monitor a JSON file? Example included.
Hi, we have a script that runs and creates an output like the file attached. We need to be able to parse this file, look at the "replication" and "counts_match" fields, and alert if we don't find certain criteria. Can LM do that? I think that LM can only access files directly if they are on a collector, so we'd make sure this file ends up there. Thanks. I guess I can't attach a file, so here's what it looks like:

{
    "replication": [
        { "db_name": "db1 ", "replication": "running ", "local_count": "12054251", "remote_count": "8951389", "counts_match": "false" },
        { "db_name": "db2 ", "replication": "running ", "local_count": "0", "remote_count": "0", "counts_match": "true" },
        { "db_name": "db3 ", "replication": "running ", "local_count": "0", "remote_count": "0", "counts_match": "true" },
        { "db_name": "db4 ", "replication": "running ", "local_count": "97", "remote_count": "97", "counts_match": "true" },
        { "db_name": "db5 ", "replication": "running ", "local_count": "0", "remote_count": "0", "counts_match": "true" }
    ]
}
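Yes, a script DataSource on the collector can handle this. As a rough sketch, assuming a BATCHSCRIPT-style collection where each database is an instance (the file path and datapoint names below are hypothetical, adjust to where your script drops the file):

// Sketch: parse the replication JSON on the collector and emit one instance per database.
import groovy.json.JsonSlurper

def file = new File("C:\\LogicMonitor\\replication_status.json")  // hypothetical path
def payload = new JsonSlurper().parse(file)

payload.replication.each { db ->
    def name = db.db_name.trim()  // values in the sample carry trailing spaces
    def running = db.replication.trim() == "running" ? 1 : 0
    def match = db.counts_match.trim() == "true" ? 1 : 0
    println "${name}.replication_running=${running}"
    println "${name}.counts_match=${match}"
}
return 0

You could then set static datapoint thresholds to alert whenever counts_match or replication_running reports 0.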
LM Logs parser conditional formatting operator
Submitted to LM Feedback under the title "LM Logs parser colorization based on criteria".

As an engineer who is trying to see how certain logs relate to other logs, it would be helpful if I could highlight specific logs in context with other logs by using an advanced search operator to colorize certain logs that meet a certain criterion. For example, I run this query often:

"PagerDuty Ticket Creation" | parse /(.*) (SUMMARY|ERROR|INFO|DEBUG): (.*)/ as Script, Severity, Msg

One of the fields I parse is the Severity, which as you can see can have values of SUMMARY, ERROR, INFO, or DEBUG. It would be nice if I could add an operator to the query that would let me colorize rows based on the value of the parsed Severity column (Severity just in this case; for the general case, any expression on any column). For example, I'd like to run the query:

"PagerDuty Ticket Creation" | parse /(.*) (SUMMARY|ERROR|INFO|DEBUG): (.*)/ as Script, Severity, Msg | colorize Severity == "ERROR" as orange | colorize Severity ~ /SUMMARY|INFO/ as green

The result would be that rows in the table that have a value of "ERROR" would have a background color of orange (a muted orange), and rows that have a value of "SUMMARY" or "INFO" would be colored green. Since the DEBUG logs don't match any colorization operator, they would keep the default color of white. It might be handy if one or two colors could be passed, allowing me to change the color of the text and the background, or just the background. It would be OK if I could only choose from a set list of colors, but it would be great if I could specify an RGBA color.
Getting started with Log analysis - useful queries
We at LogicMonitor want to make taking control of your log data easy for analysis, troubleshooting, and identifying trends. In this post, we will share a few helpful queries to get started with LM Logs - what devices are generating log data and easy ways to track overall usage. In future posts, we'll share queries to dive deeper into specific log data types. What type of queries do you want to see? Reply to this post with areas of log analysis or best practices you want.

Not up to date with LM Logs? Check out this blog post highlighting recent improvements and customer stories: A lookback at LM Logs

NOTE: Some assumptions for these queries:

- Each query's results are bound to the time picker value; adjust according to your needs.
- * is a wildcard value meaning ALL, which can be replaced by a Resource, Resource Group, Sub-Group, Device by Type, or Meta Data value.
- You may need to modify specific queries to match your LM portal.

Devices Sending Logs - use this query to easily see which LM-monitored devices are currently ingesting log data into your portal:

* | count by _resource.name | sort by _count desc

Total Number of Devices Sending Logs - the previous query showed which devices are generating logs, while this query identifies the overall number of devices:

* | count by _resource.name | count

Total Volume by Resource Name - this query shows the total volume of log ingestion (as GB) by resource name, with the average, min, and max size per message. The results are sorted by GB descending, but you can modify the operators to identify your own trends:

* | count(_size), sum(_size), max(_size), min(_size) by _resource.name | num(_sum/1000000000) as GB | num(_sum/_count) as avg_size | sort by GB desc

Total Log Usage - this is a helpful query to run to see your overall log usage for the entire portal:

* | sum(_size) | num(_sum/1000000000) as GB | sort by GB desc

And finally, Daily Usage in Buckets - run this query to see an aggregated view of your daily log usage:

* | beta:bucket(span=24h) | sum(_size) as size_in_bytes by _bucket | num(size_in_bytes/1000000000) as GB | sort by _bucket asc

We hope these help you get started!
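As a small variation on the queries above (a sketch - the field-match syntax is an assumption to verify in your portal, and the resource name is hypothetical), the leading * can be swapped for a filter so usage is scoped to a single resource:

_resource.name = "prod-web-01" | sum(_size) | num(_sum/1000000000) as GB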
Any way to display LM Logs query results in Dashboard?
Not sure if I am overlooking something... is it possible to put an LM Logs query table showing the results of said query into a dashboard? The purpose of that would be to have the query updated with the latest results every x minutes. I've tried the Text widget and the HTTP widget and embedded the URL for the log query, but that's not working.
Event Source for log file monitoring
We're looking to have log file monitoring for the *.rpt file extension and for SQL log files. LM does not appear to support anything (out of the box) other than .log and .txt. Has anyone done this via script with other file types in Windows? If so, can you share your solution?
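Not a turnkey answer, but one way to approach this is a scripted EventSource that reads the file on the collector and emits events itself, sidestepping the extension restriction entirely. The sketch below is Groovy with a hypothetical path and match pattern; the output shape (an events array with happenedOn, severity, message, and source fields) is my recollection of LM's scripted EventSource format, so verify it against the current docs before relying on it:

// Sketch (unverified): scripted EventSource that scans a .rpt file for error lines.
import groovy.json.JsonOutput

def logFile = new File("C:\\reports\\backup.rpt")  // hypothetical file path
def events = []
logFile.eachLine { line ->
    if (line =~ /(?i)error|failed/) {             // hypothetical match pattern
        events << [
            happenedOn: new Date().toString(),
            severity  : "warn",
            message   : line.take(500),           // cap the message length
            source    : logFile.name
        ]
    }
}
println JsonOutput.toJson([events: events])
return 0

In practice you would also want to track a read offset (e.g., in a marker file) so each poll only reports new lines instead of re-alerting on the whole file.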