Recent Discussions
API to put Multiple resource in SDT getting Authentication failed
Experts, I am trying to write a script to put multiple resources in SDT via API. However, the script fails with the error: "Authentication failed", "status":1401. This indicates that the API is rejecting my request. The API credentials are correct, as I am successfully using them for other API calls. Has anyone encountered a similar issue? Your insights would be highly appreciated.
smahamulkar, 2 days ago
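A frequent cause of "status":1401 on SDT creation specifically is the LMv1 signature: for a POST or PUT, the HMAC must be computed over the method, epoch, request body and resource path (query strings excluded), so a signing routine that works for GET calls will be rejected as soon as a body is involved. Below is a minimal Groovy sketch under that assumption; the portal name, token, device ID and one-hour SDT window are all placeholders, and the DeviceSDT payload fields should be checked against the SDT type you actually need.

import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Placeholders: portal, LMv1 token and device ID are examples only.
def portal = "companyname"
def apiId  = "ACCESS_ID"
def apiKey = "ACCESS_KEY"

def path   = "/sdt/sdts"                // resource path only; query strings are never signed
Long start = System.currentTimeMillis()
Long end   = start + (60 * 60 * 1000)   // one-hour SDT window
// One-time (sdtType 1) SDT for a single device; adjust the type and IDs for other SDT kinds.
def body = """{"sdtType":1,"type":"DeviceSDT","deviceId":123,"startDateTime":${start},"endDateTime":${end}}"""

// LMv1 signature for a write call: HMAC-SHA256 over METHOD + epoch + body + path.
Long epoch = System.currentTimeMillis()
Mac hmac = Mac.getInstance("HmacSHA256")
hmac.init(new SecretKeySpec(apiKey.bytes, "HmacSHA256"))
def hex  = hmac.doFinal("POST${epoch}${body}${path}".bytes).collect { String.format("%02x", it) }.join()
def auth = "LMv1 ${apiId}:${hex.bytes.encodeBase64()}:${epoch}"

def conn = "https://${portal}.logicmonitor.com/santaba/rest${path}".toURL().openConnection()
conn.setRequestMethod("POST")
conn.setDoOutput(true)
conn.setRequestProperty("Authorization", auth)
conn.setRequestProperty("Content-Type", "application/json")
conn.setRequestProperty("X-Version", "3")
conn.outputStream.withWriter("UTF-8") { it << body }

println "HTTP ${conn.responseCode}"
println(conn.responseCode < 400 ? conn.inputStream.text : conn.errorStream.text)

If the signature string and the transmitted body match byte for byte, the same token that works for GETs should be accepted here too; putting multiple resources into SDT then just means rebuilding the body and signature per request.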
Silent Log Source Detection
We are working on migrating some items to LM Logs and one thing that came up was the lack of Silent Log Source detection. Log Usage is a PushModule and only updates when data is pushed to it. So unlike a traditional DataSource, you cannot alert on No Data or 0 events for X period of time. With that in mind, we wrote our own, which I am sharing here instead of in the LM Exchange as it will require a bit of changing on your own. This could be 100% portable, but I purposefully didn't do that to save on API calls.

What this DataSource does is make two API calls per device, back to the LogicMonitor portal. The first retrieves the instances of LogUsage, matched by the dataSourceId filter defined near the top of the script (def apifilter = 'dataSourceId:43806042'); that is where you would need to change the ID to match what is in your portal. After it retrieves the instance of LogUsage, it queries data for that instance, which returns a JSON structure like this:

{
    "dataPoints": [
        "size_in_bytes",
        "event_received"
    ],
    "dataSourceName": "LogUsage",
    "nextPageParams": "start=1739704983&end=1741035719",
    "time": [
        1741110300000
    ],
    "values": [
        [
            25942,
            16
        ]
    ]
}

There will generally be more items in time and in values. We grab the first element ([0]) of the time array, as that is the latest timestamp, then calculate the difference between now and that timestamp in hours and return it, or NaN if there is no data.

/*******************************************************************************
 * © 2007-2024 - LogicMonitor, Inc. All rights reserved.
 ******************************************************************************/

import groovy.json.JsonSlurper
import com.santaba.agent.util.Settings
import com.santaba.agent.live.LiveHostSet
import org.apache.commons.codec.binary.Hex
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

String apiId = hostProps.get("lmaccess.id") ?: hostProps.get("logicmonitor.access.id")
String apiKey = hostProps.get("lmaccess.key") ?: hostProps.get("logicmonitor.access.key")
def portalName = hostProps.get("lmaccount") ?: Settings.getSetting(Settings.AGENT_COMPANY)
String deviceid = hostProps.get("system.deviceId")
Map proxyInfo = getProxyInfo()

def fields = 'id,dataSourceId,deviceDataSourceId,name,lastCollectedTime,lastUpdatedTime,deviceDataSourceId'
def apipath = "/device/devices/" + deviceid + "/instances"
def apifilter = 'dataSourceId:43806042'

def deviceinstances = apiGetMany(portalName, apiId, apiKey, apipath, proxyInfo, ['size':1000, 'fields': fields, 'filter': apifilter])

instanceid = deviceinstances[0]['id']
devicedatasourceid = deviceinstances[0]['deviceDataSourceId']

def instancepath = "/device/devices/" + deviceid + "/devicedatasources/" + devicedatasourceid + "/instances/" + instanceid + '/data'
def instancedata = apiGet(portalName, apiId, apiKey, instancepath, proxyInfo, ['period':720])

def now = System.currentTimeMillis()

if (instancedata['time'][0] != null && instancedata['time'][0] > 0) {
    // Hours elapsed since the latest timestamp (both values are epoch milliseconds).
    def diffHours = ((now - instancedata['time'][0]) / (1000 * 60 * 60)).toDouble().round(2)
    println "hours_since_log=${diffHours}"
}
else {
    def diffHours = "NaN"
    println "hours_since_log=${diffHours}"
}

// If script gets to this point, collector should consider this device alive
keepAlive(hostProps)

return 0

/*
    Paginated GET method. Returns a list of objects.
 */
List apiGetMany(portalName, apiId, apiKey, endPoint, proxyInfo, Map args=[:]) {
    def pageSize = args.get('size', 1000)  // Default the page size to 1000 if not specified.
    List items = []
    args['size'] = pageSize

    def pageCount = 0
    while (true) {
        pageCount += 1

        // Update the args
        args['size'] = pageSize
        args['offset'] = items.size()

        def response = apiGet(portalName, apiId, apiKey, endPoint, proxyInfo, args)
        if (response.get("errmsg", "OK") != "OK") {
            throw new Exception("Santaba returned errormsg: ${response?.errmsg}")
        }

        items.addAll(response.items)

        // If we received less than we asked for it means we are done
        if (response.items.size() < pageSize) break
    }

    return items
}

/*
    Simple GET, returns a parsed json payload. No processing.
 */
def apiGet(portalName, apiId, apiKey, endPoint, proxyInfo, Map args=[:]) {
    def request = rawGet(portalName, apiId, apiKey, endPoint, proxyInfo, args)
    if (request.getResponseCode() == 200) {
        def payload = new JsonSlurper().parseText(request.content.text)
        return payload
    }
    else {
        throw new Exception("Server return HTTP code ${request.getResponseCode()}")
    }
}

/*
    Raw GET method.
 */
def rawGet(portalName, apiId, apiKey, endPoint, proxyInfo, Map args=[:]) {
    def auth = generateAuth(apiId, apiKey, endPoint)
    def headers = ["Authorization": auth, "Content-Type": "application/json", "X-Version": "3", "External-User": "true"]
    def url = "https://${portalName}.logicmonitor.com/santaba/rest${endPoint}"

    if (args) {
        def encodedArgs = []
        args.each { k, v ->
            encodedArgs << "${k}=${java.net.URLEncoder.encode(v.toString(), "UTF-8")}"
        }
        url += "?${encodedArgs.join('&')}"
    }

    def request
    if (proxyInfo.enabled) {
        request = url.toURL().openConnection(proxyInfo.proxy)
    }
    else {
        request = url.toURL().openConnection()
    }
    request.setRequestMethod("GET")
    request.setDoOutput(true)
    headers.each { k, v ->
        request.addRequestProperty(k, v)
    }

    return request
}

/*
    Generate auth for API calls.
 */
static String generateAuth(id, key, path) {
    Long epoch_time = System.currentTimeMillis()
    Mac hmac = Mac.getInstance("HmacSHA256")
    hmac.init(new SecretKeySpec(key.getBytes(), "HmacSHA256"))
    def signature = Hex.encodeHexString(hmac.doFinal("GET${epoch_time}${path}".getBytes())).bytes.encodeBase64()

    return "LMv1 ${id}:${signature}:${epoch_time}"
}

/*
    Helper method to remind the collector this device is not dead
 */
def keepAlive(hostProps) {
    // Update the liveHost set to tell the collector we are happy.
    hostId = hostProps.get("system.deviceId").toInteger()
    def liveHostSet = LiveHostSet.getInstance()
    liveHostSet.flag(hostId)
}

/**
 * Get collector proxy settings
 * @return Map with proxy settings, empty map if proxy not set.
 */
Map getProxyInfo() {
    // Each property must be evaluated for null to determine whether to use collected value or fallback value
    // Elvis operator does not play nice with booleans
    // default to true in absence of property to use collectorProxy as determinant
    Boolean deviceProxy = hostProps.get("proxy.enable")?.toBoolean()
    deviceProxy = (deviceProxy != null) ? deviceProxy : true

    // if settings are not present, value should be false
    Boolean collectorProxy = Settings.getSetting("proxy.enable")?.toBoolean()
    collectorProxy = (collectorProxy != null) ? collectorProxy : false

    Map proxyInfo = [:]

    if (deviceProxy && collectorProxy) {
        proxyInfo = [
            enabled: true,
            host: hostProps.get("proxy.host") ?: Settings.getSetting("proxy.host"),
            port: hostProps.get("proxy.port") ?: Settings.getSetting("proxy.port") ?: 3128,
            user: Settings.getSetting("proxy.user"),
            pass: Settings.getSetting("proxy.pass")
        ]

        proxyInfo["proxy"] = new Proxy(Proxy.Type.HTTP, new InetSocketAddress(proxyInfo.host, proxyInfo.port.toInteger()))
    }

    return proxyInfo
}

Joe_Williams, 6 days ago
Monitoring for Mirth
Hello Community, We have a Mirth Appliance that we use for exchanging data with our partners. It has recently been added to our LogicMonitor instance, but I'm only seeing what I would consider "basic" SNMP information. I can see data about CPU, disk and memory usage. However, I don't see any "channel" information. This is something that's within Mirth, and knowing if channels are clogging up, etc., would be handy. I don't know if this is something that can be reported on, but according to this from PRTG https://kb.paessler.com/en/topic/80868-how-to-monitor-nextgen-mirth-connect-with-prtg it looks like it might be something that's reported through SNMP. I'm not an expert in SNMP/LogicMonitor, so I'm coming to the community in hopes that one of you may already have something built that monitors Mirth Appliance channels. TIA!
Kirby_Timm, 16 days ago
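If the appliance's SNMP agent doesn't expose per-channel statistics, Mirth Connect also ships a REST API that can report channel counters, which could back a custom DataSource. Everything below is an assumption to verify against your appliance and Mirth version: the 8443 port, the /api/channels/statistics path, whether plain Basic authentication is accepted (older releases require a session login first), and the certificate trust for HTTPS. A rough Groovy sketch:

import groovy.json.JsonSlurper

// Assumed values: host, port, credentials and endpoint path all need checking
// against your Mirth appliance; the appliance certificate must also be trusted
// by the JVM running this script.
def host = "mirth.example.com"
def user = "lm_readonly"
def pass = "changeme"

def conn = "https://${host}:8443/api/channels/statistics".toURL().openConnection()
conn.setRequestProperty("Authorization", "Basic " + "${user}:${pass}".bytes.encodeBase64().toString())
conn.setRequestProperty("Accept", "application/json")
conn.setRequestProperty("X-Requested-With", "OpenAPI")  // some Mirth versions require this header

// The response shape varies by version; dump it first, then map the per-channel
// received/sent/error counters into instances and datapoints from there.
println new JsonSlurper().parse(conn.inputStream)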
Building Dynamic Groups using Powershell Batchscript DataSources
I'm looking for a way to use the "Description" field I'm collecting when building instances from a batchscripted DataSource. The current output I'm using in the active discovery writes WildAlias, WildValue, Description:
$($proc.processname)##$($svcmatch.displayname)##$toolname
I want $toolname to drive instance grouping. I see mechanisms for using the other two, but altering those doesn't fit the use case I need for these. The support docs for instance grouping and for active discovery don't provide quite enough info to figure out what they're instructing without a bunch of experimentation (which is probably how I'll end up sorting this out if someone hasn't done this already). For instance (pun!), this refers to dynamic groups ( dynamicGroup="^(.d.)" )... but does it only evaluate the GREP based on wildalias? Instance Groups | LogicMonitor
Solved. Cole_McDonald, 18 days ago
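One thing that may help while experimenting: the batchscript discovery output contract is the same whether the script is PowerShell or Groovy, and the line format also accepts instance-level properties after a four-hash separator, so the tool name can be carried as auto.toolname in addition to (or instead of) the description field. Whether the dynamicGroup regex evaluates against the wild alias or can key off such a property is worth confirming against your portal; the sketch below (Groovy, with made-up process names) only illustrates the output format itself.

// Hypothetical instances; in the PowerShell version these would come from
// Get-Process / Get-Service lookups.
def procs = [
    [name: "svc_alpha", display: "Alpha Service", tool: "AlphaTool"],
    [name: "svc_beta",  display: "Beta Service",  tool: "BetaTool"]
]

procs.each { p ->
    // wildvalue##wildalias##description####auto.* instance-level properties
    println "${p.name}##${p.display}##${p.tool}####auto.toolname=${p.tool}"
}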
Alert/Alerts API endpoint
Hello Everyone, I am trying to get the list of all alerts through the API. The endpoint I am using is https://companyname.logicmonitor.com/santaba/rest/alert/alerts?v=3 Through this API I am only able to get the uncleared alerts. The issue is that I want all alerts, including the ones that have been cleared, for reporting purposes. I would really appreciate it if anyone could help me with this. Thank you, Mnaish
Solved
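For reference, the alert/alerts resource returns only active alerts unless the request carries a cleared filter; filter=cleared:"*" asks for both active and cleared alerts. A minimal Groovy sketch with the portal name and LMv1 token as placeholders (note that the LMv1 signature is built from the resource path only, never the query string):

import groovy.json.JsonSlurper
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Placeholders: portal name and LMv1 API token.
def portal = "companyname"
def apiId  = "ACCESS_ID"
def apiKey = "ACCESS_KEY"

def path  = "/alert/alerts"
// cleared:"*" returns active and cleared alerts; page with size/offset for a full export.
def query = 'size=1000&offset=0&filter=' + URLEncoder.encode('cleared:"*"', "UTF-8")

// Sign the resource path only; the query string is not part of the LMv1 signature.
Long epoch = System.currentTimeMillis()
Mac hmac = Mac.getInstance("HmacSHA256")
hmac.init(new SecretKeySpec(apiKey.bytes, "HmacSHA256"))
def hex = hmac.doFinal("GET${epoch}${path}".bytes).collect { String.format("%02x", it) }.join()

def conn = "https://${portal}.logicmonitor.com/santaba/rest${path}?${query}".toURL().openConnection()
conn.setRequestProperty("Authorization", "LMv1 ${apiId}:${hex.bytes.encodeBase64()}:${epoch}")
conn.setRequestProperty("X-Version", "3")

def alerts = new JsonSlurper().parse(conn.inputStream)
println "Returned ${alerts.items.size()} of ${alerts.total} alerts"

For large alert histories, paginate with size and offset the same way the DataSource script above does.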
SQL Monitoring Troubles
Hey, I've just added 4 SQL Servers to our environment. 2 of them worked perfectly and are retrieving all the SQL data. However, 2 of them aren't learning that SQL is installed: one of them reports a WMI access error, but wbemtest from the LM collector with the LM collector account details succeeds on both. The other just doesn't seem to know SQL is installed; no errors are shown. The one with the WMI error obviously isn't even retrieving Windows data like CPU and memory. The one with no WMI error is showing this data okay. I'm not sure where to check for problems in LM - any logs I can look at? Thanks
ldoodle, 2 months ago
Modules for Zerto monitoring
Hi, here are some modules to monitor Zerto via their API. Appliances (ZVM/ZCM) and the Zerto Analytics portal are supported. I have made the .xml export of each module available on GitHub; they can be downloaded from here: https://github.com/chrisred/logicmonitor-zerto The modules are:
ZertoAnalytics_Alerts.xml
ZertoAnalytics_Datastores.xml
ZertoAnalytics_Sites.xml
ZertoAnalytics_Token.xml
ZertoAnalytics_VPGs.xml
ZertoAppliance_Alerts.xml
ZertoAppliance_Datastores.xml
ZertoAppliance_PeerSites.xml
ZertoAppliance_Token.xml
ZertoAppliance_VPGs.xml
ZertoAppliance_VRAs.xml
I'll try to keep an eye on this post for any questions.
chrisred, 2 months ago
Modules for Citrix Cloud/DaaS/VAD monitoring
Hi, here are some modules to monitor Citrix DaaS/VAD via the Citrix Monitor API. These might be helpful with a mixture of DaaS and on-prem VAD environments, as the same modules can be used for both. Setup details are in the module notes; see the CitrixDaaS_Token notes for the Citrix Cloud API setup. I have made the .xml export of each module available on GitHub; they can be downloaded from here: https://github.com/chrisred/logicmonitor-citrixdaas The modules are:
CitrixDaaS_ApplicationUsage.xml
CitrixDaaS_ConnectionFailures.xml
CitrixDaaS_DeliveryGroups.xml
CitrixDaaS_LogonPerformace.xml
CitrixDaaS_Machines.xml
CitrixDaaS_Token.xml
I'll try to keep an eye on this post for any questions.
chrisred, 2 months ago
Support for Veeam 11 PowerShell Module
Veeam 11 was released with a PowerShell module rather than a PS snap-in. Is anyone working to update the Veeam LogicModules? https://www.veeam.com/veeam_backup_11_0_whats_new_wn.pdf Quote:
• PowerShell module — By popular demand, we switched from the PowerShell snap-in to the PowerShell module, which can be used on any machine with the backup console installed. We also no longer require PowerShell 2.0 installed on the backup server, which is something many customers had problems with.
• New PowerShell cmdlet — V11 adds 184 new cmdlets for both newly added functionality and expanded coverage of the existing features with a particular focus on restore functionality