Recent Discussions
Heartbeat Calculation Methods
Hey guys, I am fairly new to LM, so forgive me if this is naive, but I am unable to figure out how LM calculates heartbeat, especially in cases where ping is not working. From the only documentation page I could find, I understood that Ping, SNMP, WMI, and so on are built-in collection methods. What are the other methods? For example: I have a Linux server which I am monitoring via SSH, and ping is not working. SSH doesn't count, as it is a scripting method, so I looked at the Port datasource, which has "webpage" as its collection method, and it turns out heartbeat is updating based on this alone. Now, I would really like to know which other methods participate in this calculation. Thanks, AK

Reports UIv4 are difficult to use
I will just say my most scathing summary up top, then I will drill into an example: I do not believe Reports UIv4 should even be in production. It is that problematic.

I wanted to make an ad hoc Interface Bandwidth report for the last 30 days.

- After laying out my params, I clicked SAVE. LM proceeded to run the report, even though I ONLY HIT SAVE.
- It was a lot of data, so I had to wait 5-10 minutes even though I DID NOT RUN IT.
- LM didn't send any CSV to me anywhere; nothing happened. I was still on the edit form with SAVE and SAVE AND RUN greyed out.
- I tried to leave the page; it wanted me to click continue to quit. I did so, and my report wasn't saved!
- I repeated ALL of the above but did SAVE AND RUN. I had to wait 10 minutes again; nothing happened. No explanation.
- I closed the browser fully, came back, and my report HAD been saved.
- I clicked on MANAGE and LM tried to RUN the whole report before letting me manage it. I closed my browser because I didn't want to wait 10 minutes.
- I went back to the reports page, clicked PLAY, and it told me it was running already and asked for an email. About 30 minutes later it was never sent.
- In the meantime, I repeated ALL of the above under a new report LM didn't think was still running, because I didn't want to wait 10 minutes for it to run just to manage it. I experienced ALL of the above again.
- At some point, I clicked MANAGE to change it to a scheduled report; it tried to PLAY it to show results before presenting manage. I waited 10 minutes and FINALLY got to the manage page.
- I switched it to scheduled and clicked SAVE. I forgot that my changes sometimes aren't saved unless I do the other option. Somehow I got back to that state where I couldn't leave the page without losing my changes, EVEN THOUGH I had clicked SAVE, because it thinks I have pending changes, but SAVE and SAVE AND RUN are greyed out.

Here is the final state. I'll just stop there. How is the Reports section even usable in UIv4?
I never under any circumstances want a report to play when I'm just looking at it, managing it, or even when browsing to it in the report list. It's terrible when debugging a report that is being painful for LM-limitation reasons. I get similar complaints from users on this and other problems. The only person in our org who likes UIv4 doesn't ever use the product; he just looks at it occasionally. The whole reports UIv4 needs to go back to formula.

Lewis_Beard, 4 days ago

Feedback: Please Remove In-Product Marketing from LogicMonitor UI
Hi all, I've recently noticed that LogicMonitor has added a new menu item called "Cost Optimization". Behind this link there's a page that explains the feature and how to subscribe, at an additional cost, by contacting my CSM. As the main administrator of our LogicMonitor portal, I want to share my concern: I never requested this feature and do not plan to use it. The placement feels like a marketing push inside a tool we're already paying premium dollars for. It is creating unnecessary confusion among my users, who keep asking me about the function, whether we are adopting it, etc., which in turn costs me time. After raising a support ticket, I was told that I could disable the menu item by adjusting role settings. To me, this feels like the wrong approach. I don't think it should be on customers to hide marketing-driven features inside a paid product. My feedback to LogicMonitor is simple: please reconsider this type of in-product marketing. Features that require an additional subscription should not appear as permanent menu items unless explicitly enabled by the customer. I'd like to hear what others think: has anyone else run into the same frustration? Thanks, Jeroen

JeroenB, 4 days ago

Automatically Delete Instances - SQL System Jobs
I'm having trouble with the Auto Delete option in mixed environments (AGs):

Automatically Delete Instances enabled: LogicMonitor removes job instances when it thinks they no longer exist, often triggered by temporary disconnections or failovers. This causes thresholds and alerts to vanish unexpectedly (previously set thresholds at the individual job level disappear).

Automatically Delete Instances disabled: Job instances persist even after failover, which can result in false warnings on old replicas that are no longer active.

I need certain jobs to persist so their alert thresholds remain intact, but they're being removed automatically when auto delete is on. On top of that, I'm dealing with AG replicas that fail over along with the jobs, which leaves behind failed job warnings on the previous replica when auto delete is off. I've thought of many things, but there doesn't seem to be a perfect solution to this yet. Any thoughts? Ideas?

lucjad, 9 days ago

amend aws regions via api call
We monitor a number of AWS accounts using the AWS Account Cloud monitoring. (These are within a single organisation, but were configured prior to the addition of organisations in LogicMonitor.) I am trying to use an API call to gather the AWS accounts and then to remove monitoring from unused regions. I have some code to build a script to do this (so I could examine the output before making the changes, and test on one or two AWS accounts), but this is returning every device. Can anyone help in refactoring the API calls to only change the regions where needed?

SCRIPT
##########################################################################

import requests
import json

# === CONFIGURATION ===
bearer_token = [bearer token]
company = [company]  # e.g., 'yourcompany'
base_url = f'https://{company}.logicmonitor.com/santaba/rest'

# === HEADERS ===
def get_headers():
    return {
        'Authorization': f'Bearer {bearer_token}',
        'Content-Type': 'application/json',
        'Accept': 'application/json'
    }

# === STEP 1: GET AWS ACCOUNTS FROM LOGICMONITOR ===
def get_aws_accounts():
    resource_path = '/device/devices'
    params = {
        'filter': 'system.cloud.type:"AWS"',
        'size': 1000
    }
    url = base_url + resource_path
    response = requests.get(url, headers=get_headers(), params=params)
    try:
        response.raise_for_status()
        data = response.json()
        print("Raw response:")
        print(json.dumps(data, indent=4))  # <-- Add this line to inspect the structure
        if 'data' in data and 'items' in data['data']:
            return data['data']['items']
        else:
            print("Unexpected response format.")
            return []
    except requests.exceptions.HTTPError as http_err:
        print(f"HTTP error occurred: {http_err}")
        print(f"Response content: {response.text}")
        return []
    except Exception as err:
        print(f"Other error occurred: {err}")
        return []

# === STEP 2: PRINT UPDATE COMMANDS ===
def print_update_command(device_id, regions_to_keep):
    resource_path = f'/device/devices/{device_id}'
    url = base_url + resource_path
    payload = {
        "properties": [
            {
                "name": "aws.regions",
                "value": ",".join(regions_to_keep)
            }
        ]
    }
    print(f"\nPATCH {url}")
    print("Headers:")
    print(json.dumps(get_headers(), indent=4))
    print("Payload:")
    print(json.dumps(payload, indent=4))

def write_patch_request_to_file(device_id, regions_to_keep, device_name):
    url = f"{base_url}/device/devices/{device_id}"
    headers = get_headers()
    payload = {
        "properties": [
            {
                "name": "aws.regions",
                "value": ",".join(regions_to_keep)
            }
        ]
    }
    curl_cmd = (
        f"curl -X PATCH '{url}' "
        f"-H 'Authorization: {headers['Authorization']}' "
        f"-H 'Content-Type: application/json' "
        f"-H 'Accept: application/json' "
        f"-d '{json.dumps(payload)}'"
    )
    with open("patch_requests.txt", "a") as f:
        f.write(f"# PATCH request for {device_name} (ID: {device_id})\n")
        f.write(curl_cmd + "\n\n")

# === MAIN EXECUTION ===
if __name__ == '__main__':
    unused_regions = ['us-east-2', 'us-west-1', 'us-west-2', 'eu-central-1',
                      'eu-central-2', 'eu-north-1', 'eu-north-2', 'eu-south-1',
                      'eu-south-2', 'eu-west-1', 'eu-west-3', 'ap-east-1',
                      'ap-northeast-1', 'ap-northeast-2', 'ap-northeast-3',
                      'ap-south-1', 'ap-south-2', 'ap-southeast-1',
                      'ap-southeast-2', 'ap-southeast-3', 'ap-southeast-4',
                      'af-south-1', 'ca-central-1', 'il-central-1',
                      'me-central-1', 'me-south-1', 'sa-east-1']
    all_regions = ['us-east-1', 'us-east-2', 'us-west-1', 'us-west-2',
                   'eu-central-1', 'eu-central-2', 'eu-north-1', 'eu-north-2',
                   'eu-south-1', 'eu-south-2', 'eu-west-1', 'eu-west-2',
                   'eu-west-3', 'ap-east-1', 'ap-northeast-1', 'ap-northeast-2',
                   'ap-northeast-3', 'ap-south-1', 'ap-south-2',
                   'ap-southeast-1', 'ap-southeast-2', 'ap-southeast-3',
                   'ap-southeast-4', 'af-south-1', 'ca-central-1',
                   'il-central-1', 'me-central-1', 'me-south-1', 'sa-east-1']
    regions_to_keep = [r for r in all_regions if r not in unused_regions]
    aws_accounts = get_aws_accounts()
    for account in aws_accounts:
        device_id = account['id']
        name = account['displayName']
        write_patch_request_to_file(device_id, regions_to_keep, name)
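On the "only change the regions where needed" question, one approach is to narrow the device query to the account-level cloud resources and skip any account whose aws.regions property already matches the target list. A hedged sketch: the deviceType value of 2 for AWS cloud accounts and the fields parameter are assumptions to verify against your own portal before use.

```python
def get_aws_account_devices(base_url, headers):
    """Fetch only AWS account-level devices rather than every monitored
    resource. ASSUMPTION: AWS cloud accounts come back with deviceType 2
    in the LM REST API -- confirm against one known account first."""
    import requests  # third-party; pip install requests
    params = {
        'filter': 'deviceType:2',
        'size': 1000,
        'fields': 'id,displayName,customProperties',
    }
    resp = requests.get(f'{base_url}/device/devices',
                        headers=headers, params=params)
    resp.raise_for_status()
    return resp.json().get('data', {}).get('items', [])

def needs_region_update(device, regions_to_keep):
    """Return True only when the device's aws.regions property differs
    from the desired region set, so accounts that already match are
    skipped instead of being PATCHed again."""
    target = set(regions_to_keep)
    for prop in device.get('customProperties', []):
        if prop.get('name') == 'aws.regions':
            current = {r.strip() for r in prop.get('value', '').split(',')
                       if r.strip()}
            return current != target
    return True  # property absent: treat as needing an update
```

With that in place, the main loop would call write_patch_request_to_file only for accounts where needs_region_update(...) is true, so the generated PATCH list covers just the accounts that actually need changing.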
##########################################################################

Tim_OShea, 11 days ago

Better "Adding a Service" documentation?
I wanted to start looking into Services; the last time I messed around with one, they were more limited. So I went to this page https://www.logicmonitor.com/support/adding-a-service and there is a list of all the options to add a service, but honestly there is no explanation at all. For example:

1) Is the IP/DNS name for the service needed? Why? What does it even do?
2) Why do I need to specify a collector? Is this like a Netscan where I can just pick any collector to take up the load? How much load does this add?
3) If I am picking an auto-balanced group, why must I pick a preferred collector?

I was about to type a long continuation, but basically: is there a page that actually explains these individual options in an in-depth fashion? Whether or not there is some allbound training or a badge for it or whatever, I'd think there would be some documentation discussing in detail what each item is, other than a simple listing of what it is. Basically, this page seems written as a casual reference for people who already use this regularly. I'm wondering if there is any proper documentation somewhere or if it's gatekept behind some training. I'm about to blunder through with trial and error, but I prefer not to. :)

(Solved)

Lewis_Beard, 13 days ago

PowerShell Module 7.4.3 Missing cmdlets
I'm trying to pull a list of alerts for a device. In the LM documentation there is a cmdlet called Get-LMDeviceAlerts (https://logicmonitor.github.io/lm-powershell-module-docs/documentation/devices/get-lmdevicealerts/), but when I try running the command, I get an error:

Get-LMDeviceAlerts : The term 'Get-LMDeviceAlerts' is not recognized as the name of a cmdlet, function, script file, or operable program.

When I do Get-Command -Module Logic.Monitor | WHERE Name -eq "Get-LMDeviceAlerts", it returns nothing. I've tried removing and re-importing the module, and still no cmdlet.

CMDB Integration
Hi all, I'm looking into setting up a sync from LogicMonitor into our ServiceNow. The mechanics of doing so seem pretty straightforward with the LM CMDB sync application. I want to make sure I'm putting devices into the right CI Classes in ServiceNow. I plan to do so by setting up some dynamic groups, each pulling in the devices for a class, then assigning a property to each dynamic group to define the class that the device belongs to. This is all as described in LM's documentation on CMDB sync with ServiceNow.

For the CI Classes themselves, starting with network devices, ServiceNow has a generic "netgear" class (as in network equipment, not the manufacturer Netgear!) which is extended by the following classes, each specific to a type of network device:

cmdb_ci_ip_switch
cmdb_ci_ip_firewall
cmdb_ci_lb
cmdb_ci_ip_router

There is a property on all devices called predef.externalResourceType which would appear to be a perfect fit for this use case, as it has values like Switch, Firewall, Router, LoadBalancer. But it's really designed for topology mapping, and I'm not sure how the platform decides what value to assign. In some cases, I've found it has an incorrect value, such as "Firewall" when a device is actually a "Switch", or in some cases "Unknown". I'm thinking I might create my own AppliesTo function for each CI Class that I want to use, base it on the value of predef.externalResourceType, but then have my own additional inclusions and exclusions. Then I can tweak things as needed to fix any issues I find with incorrect values in predef.externalResourceType. I can use my AppliesTo functions in the Dynamic Groups, and finally have a Dynamic Group for everything that isn't included in my AppliesTo functions, which can sync to a generic CI type. Being an MSP, the things that are sync'd to the CMDB will be changeable, so I will need this to be flexible and hopefully not requiring a load of upkeep!
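For what it's worth, the dynamic-group side of this approach can be scripted against the LM REST API, so the class-to-AppliesTo mapping lives in one place and is easy to tweak as misclassifications turn up. A sketch, not a definitive implementation: the "cmdb_ci" property name, the parent group ID, and the example AppliesTo expressions are all placeholders; substitute whatever your CMDB sync application actually reads.

```python
import json
import urllib.request  # stdlib; swap in 'requests' if preferred

# Placeholder mapping: one AppliesTo expression per ServiceNow CI class,
# starting from predef.externalResourceType but with room for manual
# corrections (the firewall entry shows a hypothetical exclusion for a
# device the platform misclassified).
CI_CLASS_APPLIES_TO = {
    'cmdb_ci_ip_switch': 'predef.externalResourceType == "Switch"',
    'cmdb_ci_ip_firewall': ('predef.externalResourceType == "Firewall" '
                            '&& !(system.displayname =~ "core-sw")'),
    'cmdb_ci_ip_router': 'predef.externalResourceType == "Router"',
    'cmdb_ci_lb': 'predef.externalResourceType == "LoadBalancer"',
}

def build_group_payload(ci_class, applies_to, parent_id=1):
    """Build the POST body for a dynamic group tagged with the CI class
    the CMDB sync should pick up ('cmdb_ci' is a hypothetical name)."""
    return {
        'name': f'CMDB - {ci_class}',
        'parentId': parent_id,
        'appliesTo': applies_to,
        'customProperties': [{'name': 'cmdb_ci', 'value': ci_class}],
    }

def create_group(portal, token, payload):
    """POST the dynamic group to the LM REST /device/groups endpoint."""
    req = urllib.request.Request(
        f'https://{portal}.logicmonitor.com/santaba/rest/device/groups',
        data=json.dumps(payload).encode(),
        method='POST',
        headers={'Authorization': f'Bearer {token}',
                 'Content-Type': 'application/json'})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The catch-all "everything else" group could then use an AppliesTo that negates the OR of all the expressions above, so anything unmatched syncs to the generic CI type.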
If anyone has done this before, I'd be interested to understand how you defined which LM device goes into which CI class in your CMDB.

Dave_Lee, 19 days ago

Reporting back-end ignoring Time Range?
I've had a few people report that their scheduled reports, which have been working and untouched for ages, are now delivering reports that completely ignore the time range selector. It happens for scheduled ones and ad hoc. In the preview you'll see your filters and the fact that maybe you have 49 alerts, but as soon as you run it, the time window has been completely ignored by the back-end that serves up the report. So we are getting alerts limited by all our other filters on the reports, but the times are pulling all the way back to 2023 (the full 2-year retention period). Anyone else experiencing this? I'm going to open a support ticket, but I'm just wondering if anyone else has seen reports ignoring the time limits in the deliverable. (The preview looks correct until it's run.) Thanks!

Lewis_Beard, 20 days ago

Monitor a running .exe
We have a need to monitor whether a particular .exe is running on a Windows system, as if you were to go into Task Manager and just look for whether it is running or not (e.g. task.exe exists). Seems pretty simple, but I'm not sure how to get alerted if the process no longer exists. Thanks in advance

(Solved)
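One common pattern for this kind of check is a scripted datapoint that returns 1 when the image name appears in tasklist output and 0 otherwise, with an alert threshold on the value being 0. Sketched here in Python for illustration (LM scripted DataSources would typically use Groovy or PowerShell, and task.exe is just the example name from the post):

```python
import subprocess

def parse_tasklist(output, image_name):
    """Return 1 if the image name appears in `tasklist` output, else 0.
    When no process matches the filter, tasklist prints an INFO message
    that does not contain the image name, so this yields 0."""
    return 1 if image_name.lower() in output.lower() else 0

def process_running(image_name):
    """Run tasklist filtered to the image name (Windows only) and
    return a 0/1 datapoint value; alert when it is 0."""
    out = subprocess.run(
        ['tasklist', '/FI', f'IMAGENAME eq {image_name}',
         '/FO', 'CSV', '/NH'],
        capture_output=True, text=True).stdout
    return parse_tasklist(out, image_name)
```

Either way, the key design choice is alerting on the existence datapoint itself (value == 0) rather than on any metric collected from the process, since the latter just stops reporting when the process dies.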