Recent Discussions
Log Ingestion API - Anyone having issues?
Hello, I've been facing issues with the log ingestion API recently. To keep it simple, I'm just trying to ingest a basic test message (no device correlation, etc.) and it always returns status 401. I've tried via Postman, Bash, Python, and PowerShell, from several sources (including my own laptop, which has zero firewall restrictions). I know the user account needs Manage permissions on the Logs piece, and it does; I've even tried with the portal admin. This is bugging me because I don't understand why it isn't working, and I'd like to know whether others are seeing the same thing. I've already raised an internal support case, but no luck so far.
— Vitor_Santos, 2 days ago
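A 401 from the logs ingestion endpoint is usually the authentication being rejected rather than a permissions problem, so it can help to rule out the signature itself. Below is a minimal sketch of an LMv1-signed POST to the logs ingest endpoint in Python; the portal name and token pair are placeholders, and the payload shape (a JSON array, with resource mapping omitted for a bare test) should be verified against the current LogicMonitor docs:

import base64, hashlib, hmac, json, time
import requests

# Placeholders - substitute your own portal and LMv1 API token pair.
COMPANY = "yourcompany"
ACCESS_ID = "..."
ACCESS_KEY = "..."

resource_path = "/log/ingest"
url = f"https://{COMPANY}.logicmonitor.com/rest{resource_path}"

# Payload is a JSON array of log events; _lm.resourceId (omitted here)
# is what maps a log line to a monitored resource.
payload = json.dumps([{"message": "hello from the ingest API"}])

# LMv1 signature: HMAC-SHA256 over verb + epoch-ms + body + resource path,
# hex-encoded, then base64-encoded.
epoch = str(int(time.time() * 1000))
msg = "POST" + epoch + payload + resource_path
digest = hmac.new(ACCESS_KEY.encode(), msg.encode(), hashlib.sha256).hexdigest()
signature = base64.b64encode(digest.encode()).decode()

headers = {
    "Authorization": f"LMv1 {ACCESS_ID}:{signature}:{epoch}",
    "Content-Type": "application/json",
}
resp = requests.post(url, data=payload, headers=headers)
print(resp.status_code, resp.text)

If a request signed this way still returns 401, the token or its portal-level API permissions are the next thing to check.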
Heartbeat Calculation Methods
Hey guys, I am fairly new to LM, so forgive me if this is naive, but I am unable to figure out how LM calculates heartbeat, especially in cases where ping is not working. From the only documentation page on the subject, I understood that Ping, SNMP, WMI and so on are built-in collection methods. What are the other methods? For example: I have a Linux server that I am monitoring via SSH, and ping is not working. SSH doesn't count because it is a scripting method, so I looked at the Port datasource, which has "webpage" as its collection method, and it turns out heartbeat is updating based on this alone. I would really like to know which other methods participate in this calculation. Thanks
— AK
Reports UIv4 are difficult to use
I will just say my most scathing summary up top, then drill into an example: I do not believe Reports UIv4 should even be in production. It is that problematic. I wanted to make an ad hoc Interface Bandwidth report for the last 30 days:
1. After laying out my params, I clicked SAVE. LM proceeded to run the report, even though I ONLY hit SAVE.
2. It was a lot of data, so I had to wait 5-10 minutes even though I did not run it.
3. LM didn't send a CSV to me anywhere; nothing happened. I was still on the edit form with SAVE and SAVE AND RUN greyed out.
4. I tried to leave the page; it wanted me to click continue to quit. I did so, and my report wasn't saved!
5. I repeated all of the above but used SAVE AND RUN. I had to wait 10 minutes again, and nothing happened. No explanation.
6. I closed the browser fully, came back, and my report HAD been saved.
7. I clicked MANAGE, and LM tried to run the whole report before letting me manage it. I closed my browser because I didn't want to wait 10 minutes.
8. I went back to the reports page, clicked PLAY, and it told me the report was already running and asked for an email address. About 30 minutes later, nothing was ever sent.
9. In the meantime, I repeated all of the above under a new report that LM didn't think was still running, because I didn't want to wait 10 minutes just to manage it. I experienced all of the above again.
10. At some point, I clicked MANAGE to change it to a scheduled report; it tried to PLAY it to show results before presenting the manage page. I waited 10 minutes and FINALLY got to the manage page.
11. I switched it to scheduled and clicked SAVE. I forgot that my changes sometimes aren't saved unless I use the other option. Somehow I got back to that state where I couldn't leave the page without losing my changes, EVEN THOUGH I had clicked SAVE, because it thinks I have pending changes while SAVE and SAVE AND RUN are greyed out. That is the final state. I'll just stop there.
How is the Reports section even usable in UIv4? I never, under any circumstances, want a report to run when I'm just looking at it, managing it, or browsing to it in the report list. It's terrible when debugging a report that is being painful for LM-limitation reasons. I get similar complaints from users on this and other problems. The only person in our org who likes UIv4 doesn't ever use the product; he just looks at it occasionally. The whole Reports UIv4 needs to go back to formula.
— Lewis_Beard, 7 days ago
Automatically Delete Instances - SQL System Jobs
I'm having trouble with the Auto Delete option.
With Automatically Delete Instances enabled: LogicMonitor removes job instances when it thinks they no longer exist, often triggered by temporary disconnections or failovers. This causes thresholds and alerts to vanish unexpectedly (thresholds previously set at the individual job level disappear).
With Automatically Delete Instances disabled, in mixed environments with Availability Groups: job instances persist even after failover, which can result in false warnings on old replicas that are no longer active.
I need certain jobs to persist so their alert thresholds remain intact, but they're being removed automatically when auto delete is on. On top of that, I'm dealing with AG replicas that fail over along with the jobs, which leaves behind failed-job warnings on the previous replica when auto delete is off. I've considered many approaches, but there doesn't seem to be a perfect solution to this yet. Any thoughts? Ideas?
— lucjad, 12 days ago
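One pattern that may soften the tradeoff: leave auto-delete on (so stale replicas clean up), but snapshot instance-level alert settings via the REST API so they can be reapplied when a job instance is rediscovered. A minimal read-only sketch follows; the device and deviceDataSource IDs are placeholders, get_headers() stands in for whatever auth you already use, and the alertsettings sub-resource path should be verified against your portal's API version:

import json
import requests

BASE_URL = "https://yourcompany.logicmonitor.com/santaba/rest"
DEVICE_ID = 123  # placeholder: the SQL Server resource
HDS_ID = 456     # placeholder: the deviceDataSource id for the jobs DataSource

def get_headers():
    # placeholder: return your LMv1- or bearer-authenticated headers
    ...

def snapshot_instance_thresholds():
    """Save per-instance datapoint alert settings so they can be reapplied later."""
    url = f"{BASE_URL}/device/devices/{DEVICE_ID}/devicedatasources/{HDS_ID}/instances"
    # v1-style response envelope shown; with the X-Version: 3 header the
    # items list sits at the top level of the response instead.
    instances = requests.get(url, headers=get_headers()).json()["data"]["items"]

    snapshot = {}
    for inst in instances:
        settings_url = f"{url}/{inst['id']}/alertsettings"
        snapshot[inst["name"]] = requests.get(settings_url, headers=get_headers()).json()

    with open("job_thresholds.json", "w") as f:
        json.dump(snapshot, f, indent=2)

A scheduled run of something like this at least preserves the thresholds, so a PATCH back onto rediscovered instances becomes possible instead of re-entering them by hand.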
amend aws regions via api call
We monitor a number of AWS accounts using AWS Account cloud monitoring. (These are within a single organisation, but were configured prior to the addition of organisations in LogicMonitor.) I am trying to use an API call to gather the AWS accounts and then remove monitoring from unused regions. I have some code that builds a script of the changes (so I could examine the output before making them, and test on one or two AWS accounts), but this is returning every device. Can anyone help in refactoring the API calls to only change the regions where needed?

SCRIPT
##########################################################################
import requests
import json

# === CONFIGURATION ===
bearer_token = [bearer token]
company = [company]  # e.g., 'yourcompany'
base_url = f'https://{company}.logicmonitor.com/santaba/rest'


# === HEADERS ===
def get_headers():
    return {
        'Authorization': f'Bearer {bearer_token}',
        'Content-Type': 'application/json',
        'Accept': 'application/json'
    }


# === STEP 1: GET AWS ACCOUNTS FROM LOGICMONITOR ===
def get_aws_accounts():
    resource_path = '/device/devices'
    params = {
        'filter': 'system.cloud.type:"AWS"',
        'size': 1000
    }
    url = base_url + resource_path
    response = requests.get(url, headers=get_headers(), params=params)
    try:
        response.raise_for_status()
        data = response.json()
        print("Raw response:")
        print(json.dumps(data, indent=4))  # <-- inspect the structure
        if 'data' in data and 'items' in data['data']:
            return data['data']['items']
        else:
            print("Unexpected response format.")
            return []
    except requests.exceptions.HTTPError as http_err:
        print(f"HTTP error occurred: {http_err}")
        print(f"Response content: {response.text}")
        return []
    except Exception as err:
        print(f"Other error occurred: {err}")
        return []


# === STEP 2: PRINT UPDATE COMMANDS ===
def print_update_command(device_id, regions_to_keep):
    resource_path = f'/device/devices/{device_id}'
    url = base_url + resource_path
    payload = {
        "properties": [
            {"name": "aws.regions", "value": ",".join(regions_to_keep)}
        ]
    }
    print(f"\nPATCH {url}")
    print("Headers:")
    print(json.dumps(get_headers(), indent=4))
    print("Payload:")
    print(json.dumps(payload, indent=4))


def write_patch_request_to_file(device_id, regions_to_keep, device_name):
    url = f"{base_url}/device/devices/{device_id}"
    headers = get_headers()
    payload = {
        "properties": [
            {"name": "aws.regions", "value": ",".join(regions_to_keep)}
        ]
    }
    curl_cmd = (
        f"curl -X PATCH '{url}' "
        f"-H 'Authorization: {headers['Authorization']}' "
        f"-H 'Content-Type: application/json' "
        f"-H 'Accept: application/json' "
        f"-d '{json.dumps(payload)}'"
    )
    with open("patch_requests.txt", "a") as f:
        f.write(f"# PATCH request for {device_name} (ID: {device_id})\n")
        f.write(curl_cmd + "\n\n")


# === MAIN EXECUTION ===
if __name__ == '__main__':
    unused_regions = ['us-east-2', 'us-west-1', 'us-west-2', 'eu-central-1',
                      'eu-central-2', 'eu-north-1', 'eu-north-2', 'eu-south-1',
                      'eu-south-2', 'eu-west-1', 'eu-west-3', 'ap-east-1',
                      'ap-northeast-1', 'ap-northeast-2', 'ap-northeast-3',
                      'ap-south-1', 'ap-south-2', 'ap-southeast-1',
                      'ap-southeast-2', 'ap-southeast-3', 'ap-southeast-4',
                      'af-south-1', 'ca-central-1', 'il-central-1',
                      'me-central-1', 'me-south-1', 'sa-east-1']
    all_regions = ['us-east-1', 'us-east-2', 'us-west-1', 'us-west-2',
                   'eu-central-1', 'eu-central-2', 'eu-north-1', 'eu-north-2',
                   'eu-south-1', 'eu-south-2', 'eu-west-1', 'eu-west-2',
                   'eu-west-3', 'ap-east-1', 'ap-northeast-1', 'ap-northeast-2',
                   'ap-northeast-3', 'ap-south-1', 'ap-south-2',
                   'ap-southeast-1', 'ap-southeast-2', 'ap-southeast-3',
                   'ap-southeast-4', 'af-south-1', 'ca-central-1',
                   'il-central-1', 'me-central-1', 'me-south-1', 'sa-east-1']
    regions_to_keep = [r for r in all_regions if r not in unused_regions]
    aws_accounts = get_aws_accounts()
    for account in aws_accounts:
        device_id = account['id']
        name = account['displayName']
        write_patch_request_to_file(device_id, regions_to_keep, name)
############################################################################
— Tim_OShea, 14 days ago
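On the "returning every device" question: a likely culprit is that the /device/devices filter matches on top-level device fields (id, name, deviceType, and so on), so a filter on a system property like system.cloud.type may simply be ignored and everything comes back. One hedged workaround is to request systemProperties and filter client side, reusing base_url and get_headers() from the script above; the exact property name and value checked here are assumptions to confirm against one raw device in your portal's response:

def get_aws_accounts_filtered():
    """Fetch devices and keep only AWS account roots, filtering client side."""
    url = base_url + '/device/devices'
    # Note: size caps at 1000; page with the offset param if you have more devices.
    params = {
        'fields': 'id,displayName,systemProperties',
        'size': 1000
    }
    response = requests.get(url, headers=get_headers(), params=params)
    response.raise_for_status()
    items = response.json().get('data', {}).get('items', [])

    aws_accounts = []
    for device in items:
        props = {p['name']: p['value'] for p in device.get('systemProperties', [])}
        # Assumed property/value pair - dump one device's systemProperties and
        # confirm what your portal actually sets on AWS account resources.
        if props.get('system.cloud.category') == 'AWS/Account':
            aws_accounts.append(device)
    return aws_accounts

Swapping this in for get_aws_accounts() should shrink the patch file to just the AWS account resources, which you can then eyeball before sending any PATCH.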
PowerShell Module 7.4.3 Missing cmdlets
I'm trying to pull a list of alerts for a device. In the LM documentation there is a cmdlet called Get-LMDeviceAlerts (https://logicmonitor.github.io/lm-powershell-module-docs/documentation/devices/get-lmdevicealerts/), but when I try running the command, I get an error:
Get-LMDeviceAlerts : The term 'Get-LMDeviceAlerts' is not recognized as the name of a cmdlet, function, script file, or operable program.
When I do Get-Command -Module Logic.Monitor | WHERE Name -eq "Get-LMDeviceAlerts", it returns nothing. I've tried removing and re-importing the module, and still no cmdlet.
CMDB Integration
Hi all, I'm looking into setting up a sync from LogicMonitor into our ServiceNow. The mechanics of doing so seem pretty straightforward with the LM CMDB sync application. I want to make sure I'm putting devices into the right CI classes in ServiceNow. I plan to do so by setting up some dynamic groups, each pulling in the devices for a class, then assigning a property to each dynamic group to define the class its devices belong to. This is all as described in LM's documentation on CMDB sync with ServiceNow.
For the CI classes themselves, starting with network devices: ServiceNow has a generic "netgear" class (as in network equipment, not the manufacturer Netgear!) which is extended by the following classes, each specific to a type of network device:
cmdb_ci_ip_switch
cmdb_ci_ip_firewall
cmdb_ci_lb
cmdb_ci_ip_router
There is a property on all devices called predef.externalResourceType which would appear to be a perfect fit for this use case, as it has values like Switch, Firewall, Router, LoadBalancer. But it's really designed for topology mapping, and I'm not sure how the platform decides what value to assign. In some cases I've found it has an incorrect value, such as "Firewall" when a device is actually a "Switch", or in some cases "Unknown".
I'm thinking I might create my own AppliesTo function for each CI class that I want to use, base it on the value of predef.externalResourceType, but then have my own additional inclusions and exclusions. Then I can tweak things as needed to fix any issues I find with incorrect values in predef.externalResourceType. I can then use my AppliesTo functions in the dynamic groups, and finally have a dynamic group for everything that isn't included in my AppliesTo functions, which can sync to a generic CI type. Being an MSP, the things that are sync'd to the CMDB will change, so I need this to be flexible and hopefully not require a load of upkeep! If anyone has done this before, I'd be interested to understand how you defined which LM device goes into which CI class in your CMDB.
— Dave_Lee, 22 days ago
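As a sketch of the pattern described above: one hypothetical AppliesTo function per CI class, keyed off predef.externalResourceType, with a hypothetical override property (cmdb.class.override here) as the escape hatch for devices the platform misclassifies. getPropValue() is standard AppliesTo scripting, but test the expression in the AppliesTo tester before relying on it:

isCmdbSwitch():
predef.externalResourceType == "Switch" || getPropValue("cmdb.class.override") == "cmdb_ci_ip_switch"

Each dynamic group then uses its function (isCmdbSwitch(), isCmdbFirewall(), and so on), a catch-all group uses the negation of all of them to sync to the generic class, and correcting a misclassified device becomes a one-property change rather than an AppliesTo edit.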
Reporting back-end ignoring Time Range?
I've had a few people report that their scheduled reports, which have been working and untouched for ages, are now delivering results that completely ignore the time range selector. It happens for scheduled and ad-hoc runs alike. In the preview you'll see your filters and the fact that maybe you have 49 alerts, but as soon as you run it, the time window is completely ignored by the back end that serves up the report. So we are getting alerts limited by all our other filters on the reports, but the times pull all the way back to 2023 (our full 2-year retention period). Anyone else experiencing this? I'm going to open a support ticket, but I'm wondering if anyone else has seen reports ignoring the time limits in the deliverable. (The preview looks correct until it's run.) Thanks!
— Lewis_Beard, 23 days ago
Any way to get the Memory Graphs to properly show data in Gigabytes?
Hi, when I go to the Memory graphs, the numbers don't match the way I'd like them to. For example: I have a server in vCenter that has 128 gigs of RAM. When I look at the Memory graph, the legend says 125.26G RAM, but the graph line sits at around 135G. If I look at the Raw Data tab, MemTotal is listed as 134497869824. This matches where the line on the graph is, but doesn't match what's in the pop-up window or at the bottom of the graph. If I take 134497869824 and divide it by 1024 a few times, I do end up with the 125.26 number, but it's confusing when the graph number doesn't match the actual number. If I go into the graph definition and turn off the "Scale by units of 1024" option, the numbers underneath the graph change and match what's on the graph, but then the numbers are inflated.
We're going through a project right now to find VMs that have more memory than they need and determine what to reduce them to. I'd like to just look at the graph and see how much they're using, so when I have a VM that I've assigned 128 gigs of memory to, I'd like the graph to show the MemTotal line at 128. Is that possible? Does everyone else use the graphs with the slider on or off? Has anyone else been confused by this, or is it just me? ;)
— Kelemvor, 28 days ago
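The mismatch described above is the usual binary-versus-decimal gigabyte split: the legend divides by 1024 three times (GiB) while the unscaled axis divides by 1000 (GB). A quick check with the MemTotal value from the post:

mem_total_bytes = 134_497_869_824  # MemTotal from the Raw Data tab

gib = mem_total_bytes / 1024**3   # binary gigabytes (GiB)
gb = mem_total_bytes / 1000**3    # decimal gigabytes (GB)

print(f"{gib:.2f} GiB")  # 125.26 -> the legend value (scale by 1024 on)
print(f"{gb:.2f} GB")    # 134.50 -> roughly where the graph line sits

As for why neither number is exactly 128: a guest OS typically reports slightly less than the configured total, since some of the 128 GiB assigned in vCenter is reserved before MemTotal is measured, so the line will sit a little below the allocation either way.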
PowerShell module auto-load
Hi all, I'm struggling with something, so thought I'd see if anyone else has experienced this. We have a customer running Azure Local, which is essentially a Windows Server cluster running Hyper-V and Storage Spaces Direct. We've configured a least-privilege user for monitoring; this is working fine for WMI queries, but none of the PowerShell-based modules are working. I've done a load of troubleshooting and found that WinRM will allow connections, but we can't even run basic cmdlets like Write-Host because it doesn't find the commands. It works fine, though, if we explicitly load the required modules (e.g. Import-Module Microsoft.PowerShell.Utility). This proves that the modules we need are there and that nothing is preventing us from using them (there is no "Just Enough Administration" setup to block it, for example).
I suppose I could work through all the modules in the platform, identify all the module dependencies, and write in code to check and load them, but that would be quite an undertaking, and I really can't justify running custom versions of all the LM modules to work around an issue in a single customer environment. Has anyone run into this before and found a solution? A few things I've ruled out:
- The modules exist on the system (we're just trying to use built-in/standard modules at the moment).
- The environment variables have the right module path set (and we can import modules manually, so that is working).
- Execution policy (if we manually import a module, it works fine).
- A constrained session... at least, I believe so, because $ExecutionContext.SessionState reports LanguageMode=FullLanguage.
- Just Enough Administration being in place... again, I believe so, because $PSSenderInfo reports ConfigurationName=Microsoft.PowerShell (I believe this would be different if we were operating under a JEA-enforced profile configuration). Also, there's nothing stopping us from manually importing and using modules.
It looks to me like it's just module auto-loading that is disabled but, as I understand it, this has to be explicitly disabled and the customer hasn't done so. I understand from the customer that it works fine with an admin account; perhaps there is some hardening that Microsoft applies automatically, as it's a customised Azure Local specific version of Windows Server. I did try to explicitly enable auto-load by creating a profile file for my non-admin user and setting $PSModuleAutoloadingPreference='All', but that seemed to have no effect. I'm not convinced it's even looking for a profile file, to be honest; when I use WinRM to run "$PROFILE | Select-Object *", nothing is returned.
The customer has opened a ticket with Microsoft about this, although they're getting fairly vague suggestions around JEA (which I don't believe is in place) and that Azure Local may have some hardening. So I thought I'd put it to the community :) I'll also raise it with LM support.
Dave
— Dave_Lee, 28 days ago