Recent Discussions
- Posted by Spike0, 2 days ago (Neophyte) · 63 views · 0 likes · 4 comments
Others Having Challenges with Least Privilege (POLP)?
Hi all. Just wanted to reach out to the community to see if others are running into the same challenges deploying the LM least-privilege service accounts as we are. This is what we've identified so far:

- LM can't retrieve metrics for disks where NTFS permissions don't include read access for the service account. I've scripted a PowerShell permissions check for disks in our environment, but I feel like this isn't a scalable solution.
- LM can't retrieve metrics for Hyper-V clusters. The workaround would be similar to the above.
- There doesn't appear to be a scalable way to confirm monitoring works across all instances/datasources after migration. I've written a script that retrieves all monitoring data for all resources from the LM API and puts it into a SQLite database for later before/after comparison.
- The onboarding/migration script only sets SDDL permissions on currently installed services. If a service is newly installed or updated, LM can no longer monitor it. I was considering scheduling the script to run on a regular basis, but read in this forum that it can exceed the max security descriptor length because it writes duplicate permissions.

I've reached out to support on all of these issues and been told everything is 'working as expected', and that their devs 'can't anticipate every scenario'. Which is true! But none of what I described is due to an exotic configuration or niche software. Given that switching to a least-privilege model was portrayed as a 'mandate' a few months ago, I feel like remarkably little thought has gone into how this would impact customer environments, but I digress.

Has anyone encountered similar issues? What's the consensus on whether the LM least-privilege model actually makes sense in the real world?

Posted by MWW, 2 days ago (Neophyte) · 120 views · 1 like · 0 comments
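For the before/after comparison step, a snapshot approach like the one described can be sketched in Python with SQLite. Everything here is illustrative: the table layout is my own, and `fetch_instances` is a hypothetical callback standing in for a call such as GET /device/devices/{id}/instances, not the poster's actual script.

```python
import sqlite3
import time

def snapshot_instances(conn, devices, fetch_instances, taken_at=None):
    """Record one row per (device, instance) under a snapshot timestamp.

    `fetch_instances(device_id)` is a stand-in for an LM API call and
    should return dicts with 'id' and 'name' keys."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS snapshots ("
        "taken_at INTEGER, device_id INTEGER, "
        "instance_id INTEGER, instance_name TEXT)"
    )
    taken_at = taken_at if taken_at is not None else int(time.time())
    for dev in devices:
        for inst in fetch_instances(dev["id"]):
            conn.execute(
                "INSERT INTO snapshots VALUES (?, ?, ?, ?)",
                (taken_at, dev["id"], inst["id"], inst["name"]),
            )
    conn.commit()
    return taken_at

def missing_after(conn, before, after):
    """Instances present in the 'before' snapshot but absent from 'after'."""
    return conn.execute(
        "SELECT device_id, instance_id, instance_name FROM snapshots "
        "WHERE taken_at = ? "
        "EXCEPT "
        "SELECT device_id, instance_id, instance_name FROM snapshots "
        "WHERE taken_at = ?",
        (before, after),
    ).fetchall()
```

Running `missing_after(conn, before_ts, after_ts)` then lists exactly the instances that disappeared between the two snapshots, which is the "did anything break after migration" question in one query.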
How could I add my other monitoring checks like OS, SQL and third party software to ec2 instance?

I recently added our cloud resources to LogicMonitor. However, we need to add more monitoring checks that were registered under our local collector server. How could I add my other monitoring checks, like OS, SQL, and third-party software, to an EC2 instance?

Posted by tuco, 3 days ago (Neophyte) · 158 views · 1 like · 4 comments
API to put Multiple resource in SDT getting Authentication failed

Experts, I am trying to write a script to put multiple resources in SDT via the API. However, the script fails with the error "Authentication failed", "status":1401, which indicates that the API is rejecting my request. The API credentials are correct, as I am successfully using them for other API calls. Has anyone encountered a similar issue? Your insights would be highly appreciated.

Posted by smahamulkar, 5 days ago (Neophyte) · 137 views · 1 like · 11 comments
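One common cause of a 1401 on POST requests with LMv1 authentication is the signature: the JSON body must be included in the signed string (verb + epoch + body + resourcePath), and the resource path excludes /santaba/rest and any query string. A minimal Python sketch of signing an SDT request; the SDT payload fields shown are illustrative, not an exact schema:

```python
import base64
import hashlib
import hmac
import json
import time

def lmv1_auth(access_id, access_key, verb, resource_path, body=""):
    """Build an LMv1 Authorization header.

    For POST/PUT the request body is part of the signed string; signing
    only verb + epoch + path and then sending a body is a common cause
    of status 1401."""
    epoch = str(int(time.time() * 1000))
    msg = verb + epoch + body + resource_path
    hex_digest = hmac.new(
        access_key.encode(), msg.encode(), hashlib.sha256
    ).hexdigest()
    signature = base64.b64encode(hex_digest.encode()).decode()
    return f"LMv1 {access_id}:{signature}:{epoch}"

# Illustrative SDT payload -- check the API docs for the exact field names.
body = json.dumps({"sdtType": 1, "type": "ResourceSDT", "deviceId": 123,
                   "startDateTime": 1741110300000,
                   "endDateTime": 1741117500000})
headers = {
    "Authorization": lmv1_auth("API_ID", "API_KEY", "POST", "/sdt/sdts", body),
    "Content-Type": "application/json",
    "X-Version": "3",
}
```

To SDT multiple resources you would loop over device IDs, rebuilding both the body and the Authorization header per request, since the signature covers the body.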
Silent Log Source Detection

We are working on migrating some items to LM Logs, and one thing that came up was the lack of silent log source detection. Log Usage is a PushModule and only updates when data is pushed to it, so unlike a traditional DataSource, you cannot alert on No Data or 0 events for X period of time. With that in mind, we wrote our own, which I am sharing here instead of in the LM Exchange as it will require a bit of changing on your own. This could be 100% portable, but I purposefully didn't make it so to save on API calls.

What this DataSource does is make two API calls per device back to the LogicMonitor portal. The first retrieves the instances of LogUsage, filtered by the dataSourceId set in the `apifilter` variable; that is where you would need to change the ID to match what is in your portal. After it retrieves the instance of LogUsage, it queries data for that instance, which returns a JSON structure like this:

```json
{
  "dataPoints": [
    "size_in_bytes",
    "event_received"
  ],
  "dataSourceName": "LogUsage",
  "nextPageParams": "start=1739704983&end=1741035719",
  "time": [
    1741110300000
  ],
  "values": [
    [
      25942,
      16
    ]
  ]
}
```

There will generally be more items in time and values. We grab the first element ([0]) of the time array, as that is the latest timestamp, calculate the difference between now and that timestamp in hours, and return that difference, or NaN if no timestamp is available.

```groovy
/*******************************************************************************
 * © 2007-2024 - LogicMonitor, Inc. All rights reserved.
 ******************************************************************************/
import groovy.json.JsonSlurper
import com.santaba.agent.util.Settings
import com.santaba.agent.live.LiveHostSet
import org.apache.commons.codec.binary.Hex
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

String apiId = hostProps.get("lmaccess.id") ?: hostProps.get("logicmonitor.access.id")
String apiKey = hostProps.get("lmaccess.key") ?: hostProps.get("logicmonitor.access.key")
def portalName = hostProps.get("lmaccount") ?: Settings.getSetting(Settings.AGENT_COMPANY)
String deviceid = hostProps.get("system.deviceId")
Map proxyInfo = getProxyInfo()

def fields = 'id,dataSourceId,deviceDataSourceId,name,lastCollectedTime,lastUpdatedTime'
def apipath = "/device/devices/" + deviceid + "/instances"
// Change this ID to match the LogUsage dataSourceId in your portal.
def apifilter = 'dataSourceId:43806042'
def deviceinstances = apiGetMany(portalName, apiId, apiKey, apipath, proxyInfo, ['size': 1000, 'fields': fields, 'filter': apifilter])

instanceid = deviceinstances[0]['id']
devicedatasourceid = deviceinstances[0]['deviceDataSourceId']

def instancepath = "/device/devices/" + deviceid + "/devicedatasources/" + devicedatasourceid + "/instances/" + instanceid + '/data'
def instancedata = apiGet(portalName, apiId, apiKey, instancepath, proxyInfo, ['period': 720])

def now = System.currentTimeMillis()
if (instancedata['time'][0] != null && instancedata['time'][0] > 0) {
    def diffHours = ((now - instancedata['time'][0]) / (1000 * 60 * 60)).toDouble().round(2)
    println "hours_since_log=${diffHours}"
} else {
    println "hours_since_log=NaN"
}

// If the script gets to this point, the collector should consider this device alive.
keepAlive(hostProps)
return 0

/* Paginated GET method. Returns a list of objects. */
List apiGetMany(portalName, apiId, apiKey, endPoint, proxyInfo, Map args = [:]) {
    def pageSize = args.get('size', 1000)  // Default the page size to 1000 if not specified.
    List items = []
    args['size'] = pageSize
    while (true) {
        // Update the offset to the number of items fetched so far.
        args['offset'] = items.size()
        def response = apiGet(portalName, apiId, apiKey, endPoint, proxyInfo, args)
        if (response.get("errmsg", "OK") != "OK") {
            throw new Exception("Santaba returned errmsg: ${response?.errmsg}")
        }
        items.addAll(response.items)
        // If we received fewer than we asked for, it means we are done.
        if (response.items.size() < pageSize) break
    }
    return items
}

/* Simple GET, returns a parsed JSON payload. No processing. */
def apiGet(portalName, apiId, apiKey, endPoint, proxyInfo, Map args = [:]) {
    def request = rawGet(portalName, apiId, apiKey, endPoint, proxyInfo, args)
    if (request.getResponseCode() == 200) {
        return new JsonSlurper().parseText(request.content.text)
    } else {
        throw new Exception("Server returned HTTP code ${request.getResponseCode()}")
    }
}

/* Raw GET method. */
def rawGet(portalName, apiId, apiKey, endPoint, proxyInfo, Map args = [:]) {
    def auth = generateAuth(apiId, apiKey, endPoint)
    def headers = ["Authorization": auth, "Content-Type": "application/json", "X-Version": "3", "External-User": "true"]
    def url = "https://${portalName}.logicmonitor.com/santaba/rest${endPoint}"
    if (args) {
        def encodedArgs = []
        args.each { k, v -> encodedArgs << "${k}=${java.net.URLEncoder.encode(v.toString(), "UTF-8")}" }
        url += "?${encodedArgs.join('&')}"
    }
    def request
    if (proxyInfo.enabled) {
        request = url.toURL().openConnection(proxyInfo.proxy)
    } else {
        request = url.toURL().openConnection()
    }
    request.setRequestMethod("GET")
    request.setDoOutput(true)
    headers.each { k, v -> request.addRequestProperty(k, v) }
    return request
}

/* Generate auth for API calls. */
static String generateAuth(id, key, path) {
    Long epoch_time = System.currentTimeMillis()
    Mac hmac = Mac.getInstance("HmacSHA256")
    hmac.init(new SecretKeySpec(key.getBytes(), "HmacSHA256"))
    def signature = Hex.encodeHexString(hmac.doFinal("GET${epoch_time}${path}".getBytes())).bytes.encodeBase64()
    return "LMv1 ${id}:${signature}:${epoch_time}"
}

/* Helper method to remind the collector this device is not dead. */
def keepAlive(hostProps) {
    // Update the LiveHostSet to tell the collector we are happy.
    hostId = hostProps.get("system.deviceId").toInteger()
    def liveHostSet = LiveHostSet.getInstance()
    liveHostSet.flag(hostId)
}

/**
 * Get collector proxy settings
 * @return Map with proxy settings, empty map if proxy not set.
 */
Map getProxyInfo() {
    // Each property must be evaluated for null to determine whether to use the collected value or the fallback.
    // The Elvis operator does not play nice with booleans.
    // Default to true in the absence of the property, so collectorProxy is the determinant.
    Boolean deviceProxy = hostProps.get("proxy.enable")?.toBoolean()
    deviceProxy = (deviceProxy != null) ? deviceProxy : true

    // If settings are not present, the value should be false.
    Boolean collectorProxy = Settings.getSetting("proxy.enable")?.toBoolean()
    collectorProxy = (collectorProxy != null) ? collectorProxy : false

    Map proxyInfo = [:]
    if (deviceProxy && collectorProxy) {
        proxyInfo = [
            enabled: true,
            host: hostProps.get("proxy.host") ?: Settings.getSetting("proxy.host"),
            port: hostProps.get("proxy.port") ?: Settings.getSetting("proxy.port") ?: 3128,
            user: Settings.getSetting("proxy.user"),
            pass: Settings.getSetting("proxy.pass")
        ]
        proxyInfo["proxy"] = new Proxy(Proxy.Type.HTTP, new InetSocketAddress(proxyInfo.host, proxyInfo.port.toInteger()))
    }
    return proxyInfo
}
```

Posted by Joe_Williams, 9 days ago (Professor) · 34 views · 3 likes · 2 comments
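For anyone wanting to sanity-check the datapoint math outside the collector, the same hours-since calculation can be mirrored in Python. This is a sketch only; the payload shape is taken from the JSON sample above, and this is not part of the DataSource itself:

```python
import math
import time

def hours_since_last_log(payload, now_ms=None):
    """payload mirrors the /data response: payload['time'][0] is the newest
    timestamp in epoch milliseconds. Returns hours elapsed, or NaN when no
    timestamp is available."""
    now_ms = now_ms if now_ms is not None else int(time.time() * 1000)
    times = payload.get("time") or []
    if times and times[0]:
        return round((now_ms - times[0]) / (1000 * 60 * 60), 2)
    return float("nan")
```

Note the subtraction order: now minus the last timestamp, so a log source that stopped pushing produces a growing positive number you can threshold on.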
Monitoring for Mirth

Hello Community, We have a Mirth appliance that we use for exchanging data with our partners. It has recently been added to our LogicMonitor instance, but I'm only seeing what I would consider "basic" SNMP information: I can see data about CPU, disk, and memory usage. However, I don't see any "channel" information. Channels are something within Mirth, and knowing if channels are clogging up, etc., would be handy to know. I don't know if this is something that can be reported on, but according to this PRTG article, https://kb.paessler.com/en/topic/80868-how-to-monitor-nextgen-mirth-connect-with-prtg, it looks like it might be something that's reported through SNMP. I'm not an expert in SNMP/LogicMonitor, so I'm coming to the community in hopes that one of you may already have something built that monitors Mirth appliance channels. TIA!

Posted by Kirby_Timm, 19 days ago (Neophyte) · 44 views · 0 likes · 2 comments
Building Dynamic Groups using Powershell Batchscript DataSources

I'm looking for a way to use the "Description" field I'm collecting when building instances from a batchscripted DataSource. The current output I'm using in active discovery writes WildValue, WildAlias, Description:

$($proc.processname)##$($svcmatch.displayname)##$toolname

I want $toolname to drive instance grouping. I see mechanisms for using the other two, but altering those doesn't fit the use case I need for these. The support docs for instance grouping and for active discovery don't provide quite enough info to figure out what they're instructing without a bunch of experimentation (which is probably how I'll end up sorting this out if someone hasn't done this already). For instance (pun!), this refers to dynamic groups ( dynamicGroup="^(.d.)" )... but does it only evaluate the regex against the WildAlias? Instance Groups | LogicMonitor

Solved · Posted by Cole_McDonald, 22 days ago (Professor) · 55 views · 0 likes · 1 comment
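As a point of comparison, batchscript discovery output can also carry instance-level properties after a #### separator; my understanding of the format is wildvalue##wildalias##description####auto.key=value, but verify that against the Active Discovery docs for your collector version. A hypothetical Python sketch of emitting $toolname both as the description and as an auto property:

```python
def discovery_line(proc_name, display_name, toolname):
    """Build one batchscript active-discovery line:
    wildvalue##wildalias##description, optionally followed by
    ####auto.key=value instance-level properties."""
    line = f"{proc_name}##{display_name}##{toolname}"
    line += f"####auto.toolname={toolname}"
    return line

# Hypothetical process records, standing in for the poster's $proc/$svcmatch objects.
for proc in [("sqlservr", "SQL Server (MSSQLSERVER)", "SQLServer")]:
    print(discovery_line(*proc))
```

Carrying the tool name as an `auto.` property has the advantage that it becomes queryable on the instance regardless of how the dynamicGroup regex is evaluated.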
Alert/Alerts API endpoint

Hello Everyone, I am trying to get the list of all alerts through the API. The endpoint I am using is https://companyname.logicmonitor.com/santaba/rest/alert/alerts?v=3. Through this API I am only able to get uncleared alerts. The issue is that I want all alerts, including the ones that have been cleared, for reporting purposes. I would really appreciate it if anyone could help me with this. Thank you, Mnaish

Solved · 114 views · 0 likes · 4 comments
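The alerts endpoint returns only active alerts by default; to my knowledge, adding a filter on the `cleared` field (cleared:"*") is what brings cleared alerts into the result set as well. A small Python sketch of building such a request URL (auth headers and pagination are left out for brevity):

```python
from urllib.parse import urlencode

def alerts_url(portal, include_cleared=True, size=1000, offset=0):
    """Build a v3 alert/alerts URL. Without a filter on the `cleared`
    field the endpoint returns only active (uncleared) alerts; the
    cleared:"*" filter is my understanding of how to include both."""
    params = {"size": size, "offset": offset}
    if include_cleared:
        params["filter"] = 'cleared:"*"'
    return ("https://" + portal + ".logicmonitor.com/santaba/rest/alert/alerts?"
            + urlencode(params))
```

Remember that if you sign requests with LMv1, the query string is not part of the signed resource path, so the filter does not change the signature.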
SQL Monitoring Troubles

Hey, I've just added 4 SQL Servers to our environment. Two of them worked perfectly and are retrieving all the SQL data. However, two of them aren't learning that SQL is installed: one of them reports a WMI access error, but wbemtest from the LM collector with the LM collector account details succeeds on both. The other just doesn't seem to know SQL is installed; no errors are shown. The one with the WMI error obviously isn't even retrieving Windows data like CPU and memory. The one with no WMI error is showing this data okay. I'm not sure where to check for problems in LM. Are there any logs I can look at? Thanks

Posted by ldoodle, 2 months ago (Advisor) · 59 views · 0 likes · 4 comments
Modules for Zerto monitoring

Hi, here are some modules to monitor Zerto via their API. Appliances (ZVM/ZCM) and the Zerto Analytics portal are supported. I have made the .xml export of each module available on GitHub; they can be downloaded from here: https://github.com/chrisred/logicmonitor-zerto

The modules are:

ZertoAnalytics_Alerts.xml
ZertoAnalytics_Datastores.xml
ZertoAnalytics_Sites.xml
ZertoAnalytics_Token.xml
ZertoAnalytics_VPGs.xml
ZertoAppliance_Alerts.xml
ZertoAppliance_Datastores.xml
ZertoAppliance_PeerSites.xml
ZertoAppliance_Token.xml
ZertoAppliance_VPGs.xml
ZertoAppliance_VRAs.xml

I'll try to keep an eye on this post for any questions.

Posted by chrisred, 2 months ago (Neophyte) · 970 views · 23 likes · 7 comments