Recent Discussions
Log Ingestion API - Anyone having issues?
Hello, I've been facing issues with the log ingestion API recently. To keep it simple, I'm just trying to ingest a basic test message (no device correlation, etc.), and it always returns status 401. I've tried via Postman, Bash, Python, and PowerShell, from several sources (including my own laptop, which has no firewall restrictions). I know the user account needs Manage permission on the Logs section, and it does; I've even tried with the portal admin account. This is bugging me because I can't see why it isn't working, and I'd like to know whether others are hitting the same thing. I've already raised an internal support case, but no luck so far.
Vitor_Santos (Expert) · 4 hours ago
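A common cause of 401s on this endpoint is a malformed LMv1 signature: the body must be signed exactly as sent, the signed resource path excludes the /rest prefix, and clock skew invalidates the epoch. Below is a minimal Python sketch of a signed log-ingest request based on LogicMonitor's published LMv1 scheme; the account name, token values, hostname, and the 'msg' payload shape are placeholders to verify against the docs.

import base64
import hashlib
import hmac
import json
import time

import requests

ACCOUNT = 'yourcompany'   # portal name (placeholder)
ACCESS_ID = '...'         # API token access id (placeholder)
ACCESS_KEY = '...'        # API token access key (placeholder)

resource_path = '/log/ingest'  # the signed path omits the '/rest' prefix
url = f'https://{ACCOUNT}.logicmonitor.com/rest{resource_path}'

# Serialize the body once and reuse the exact same string for signing and sending.
body = json.dumps([{'msg': 'test message',
                    '_lm.resourceId': {'system.hostname': 'my-host'}}])

epoch = str(int(time.time() * 1000))  # milliseconds; large clock skew causes 401
request_vars = 'POST' + epoch + body + resource_path
digest = hmac.new(ACCESS_KEY.encode(), request_vars.encode(), hashlib.sha256).hexdigest()
signature = base64.b64encode(digest.encode()).decode()

headers = {'Authorization': f'LMv1 {ACCESS_ID}:{signature}:{epoch}',
           'Content-Type': 'application/json'}
response = requests.post(url, data=body, headers=headers)
print(response.status_code, response.text)

If the token is a Bearer token rather than an LMv1 id/key pair, the Authorization header is simply 'Bearer <token>', and a 401 in that case usually points at the token type or its permissions rather than a signature problem.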
Feedback: Please Remove In-Product Marketing from LogicMonitor UI
Hi all, I've recently noticed that LogicMonitor has added a new menu item called "Cost Optimization". Behind this link there's a page that explains the feature and how to subscribe by contacting my CSM, at an additional cost.

As the main administrator of our LogicMonitor portal, I want to share my concern: I never requested this feature and do not plan to use it. The placement feels like a marketing push inside a tool we're already paying premium dollars for. It is creating unnecessary confusion among my users, who keep asking me about the function, whether we are adopting it, and so on, which in turn costs me time.

After raising a support ticket, I was told that I could disable the menu item by adjusting role settings. To me, this feels like the wrong approach: it should not be on customers to hide marketing-driven features inside a paid product.

My feedback to LogicMonitor is simple: please reconsider this type of in-product marketing. Features that require an additional subscription should not appear as permanent menu items unless explicitly enabled by the customer. I'd like to hear what others think: has anyone else run into the same frustration? Thanks, Jeroen
JeroenB (Neophyte) · 5 hours ago · Solved
Can LogicMonitor Pull Routing Tables?
We are still new to getting all the features working in LogicMonitor. Our networking team asked whether LogicMonitor can assist with pulling routing tables from our network devices. Is that possible? How do we go about collecting that? Is there a LogicModule that can assist?
Henry_Steinhaue (Neophyte) · 2 days ago
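For SNMP-capable devices, the routing table is exposed through the IP-FORWARD-MIB (ipCidrRouteTable, OID 1.3.6.1.2.1.4.24.4, with inetCidrRouteTable as its newer replacement), so a custom module can walk it. LogicModules typically do this in Groovy on the collector, but here is a standalone Python sketch of the underlying walk, assuming the classic pysnmp 4.x HLAPI and placeholder host/community values:

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, nextCmd)

# Walk ipCidrRouteTable (IP-FORWARD-MIB); swap in your device IP and community.
for error_indication, error_status, error_index, var_binds in nextCmd(
        SnmpEngine(),
        CommunityData('public'),                  # SNMP v2c community (placeholder)
        UdpTransportTarget(('192.0.2.1', 161)),   # device address (placeholder)
        ContextData(),
        ObjectType(ObjectIdentity('1.3.6.1.2.1.4.24.4')),
        lexicographicMode=False):                 # stop at the end of the table
    if error_indication or error_status:
        print(error_indication or error_status.prettyPrint())
        break
    for var_bind in var_binds:
        print(var_bind.prettyPrint())

Note that full routing tables on core routers can be very large, so walking them on a short polling interval may be heavy; a property-gated module or an on-demand script is often the safer design.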
Reports UIv4 are difficult to use
I will just say my most scathing summary up top, then I will drill into an example: I do not believe Reports UIv4 should even be in production. It is that problematic. I wanted to make an ad hoc Interface Bandwidth report for the last 30 days.
- After laying out my params, I clicked SAVE. LM proceeded to run the report, even though I ONLY HIT SAVE. It was a lot of data, so I had to wait 5-10 minutes even though I DID NOT RUN IT.
- LM didn't send any CSV to me anywhere; nothing happened. I was still on the edit form with SAVE and SAVE AND RUN greyed out.
- I tried to leave the page; it wanted me to click continue to quit. I did so, and my report wasn't saved!!!!!!
- I repeated ALL of the above but did SAVE AND RUN. I had to wait 10 minutes again, and nothing happened. No explanations. I closed the browser fully, came back, and my report HAD been saved.
- I clicked on MANAGE, and LM tried to RUN the whole report before letting me manage it. I closed my browser because I didn't want to wait 10 minutes.
- I went back to the reports page, clicked PLAY, and it told me the report was running already and asked for an email. About 30 minutes later, it was never sent.
- In the meantime, I repeated ALL of the above under a new report LM didn't think was still running, because I didn't want to wait 10 minutes just to manage it. I experienced ALL of the above again.
- At some point, I clicked MANAGE to change it to a scheduled report; it tried to PLAY it to show results before presenting the manage view. I waited 10 minutes and FINALLY got to the manage page.
- I switched it to scheduled and clicked SAVE. I forgot that my changes sometimes aren't saved unless I pick the other option. Somehow I got back to that state where I couldn't leave the page without losing my changes EVEN THOUGH I had clicked SAVE, because it thinks I have pending changes, but SAVE and SAVE AND RUN are greyed out. That is the final state.
I'll just stop there. How is the Reports section even usable in UIv4? I never, under any circumstances, want a report to run when I'm just looking at it, managing it, or even browsing to it in the report list. It's terrible when debugging a report that is being painful for LM-limitation reasons. I get similar complaints from users on this and other problems. The only person in our org who likes UIv4 doesn't ever use the product; he just looks at it occasionally. The whole Reports UIv4 needs to go back to formula.
Lewis_Beard (Professor) · 5 days ago
Cisco Meraki Environmental Sensor Monitoring
Today, new modules are available in LM Exchange to monitor Cisco Meraki MT-Series Environmental Sensors. These fully support Resource Explorer and include a new (IoT) Sensor Topology graphic. If you are subscribed, these devices count toward the Wireless Access Points SKU.
Patrick_Rouse (Product Manager) · 6 days ago
Heartbeat Calculation Methods
Hey guys, I am fairly new to LM, so forgive me if this is naive, but I am unable to figure out how LM calculates the heartbeat, especially in cases where ping is not working. From the only documentation page, I understood that Ping, SNMP, WMI, and so on are built-in collection methods. What are the other methods? For example: I have a Linux server that I am monitoring via SSH, and ping is not working. SSH doesn't count toward the heartbeat because it is a scripting method, so I looked at the Port datasource, which has "webpage" as its collection method, and it turns out the heartbeat is updating based on this alone. I would really like to know which other methods participate in this calculation. Thanks, AK
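One way to investigate this empirically is to list the datasources applied to the device along with each module's collection method, since only the built-in (non-script) methods feed the heartbeat. A hedged Python sketch against the REST API; the endpoint and field names ('devicedatasources', 'dataSourceId', 'collectMethod') are assumptions to verify against your portal's API docs:

import requests

ACCOUNT = 'yourcompany'  # placeholder
TOKEN = '...'            # API bearer token (placeholder)
DEVICE_ID = 123          # placeholder
BASE = f'https://{ACCOUNT}.logicmonitor.com/santaba/rest'
HEADERS = {'Authorization': f'Bearer {TOKEN}', 'Accept': 'application/json'}

def unwrap(payload):
    # v1 responses wrap results in 'data'; v3 returns the object directly.
    return payload.get('data', payload)

resp = requests.get(f'{BASE}/device/devices/{DEVICE_ID}/devicedatasources',
                    headers=HEADERS, params={'size': 1000})
resp.raise_for_status()
for item in unwrap(resp.json()).get('items', []):
    ds = requests.get(f"{BASE}/setting/datasources/{item['dataSourceId']}",
                      headers=HEADERS)
    ds.raise_for_status()
    # collectMethod is e.g. 'snmp', 'wmi', 'webpage', or 'script'
    print(item.get('dataSourceName'), '->', unwrap(ds.json()).get('collectMethod'))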
Better "Adding a Service" documentation?
I wanted to start looking into Services; the last time I messed around with one, they were more limited. So I went to this page https://www.logicmonitor.com/support/adding-a-service, and there is a list of all the options to add a service, but honestly there is no explanation at all. For example:
1) Is the IP/DNS name for the service needed? Why? What does it even do?
2) Why do I need to specify a collector? Is this like a Netscan where I can just pick any collector to take up the load? How much load does this add?
3) If I am picking an auto-balanced group, why must I pick a preferred collector?
I was about to type a long continuation, but basically: is there a page that actually explains these individual options in depth? Whether or not there is some allbound training or a badge for it, I'd think there would be documentation discussing in detail what each item is, beyond a simple listing. Basically, this page seems written as a casual reference for people who already use this regularly. I'm wondering if there is any proper documentation somewhere or if it's gatekept behind some training. I'm about to blunder through with trial and error, but I'd prefer not to. :)
Lewis_Beard (Professor) · 7 days ago · Solved
Why is the support AI chatbot always broken?
It takes DAYS to be able to open a ticket because the AI chatbot is never working. When I'm finally able to get connected with support, I mention this and I'm told "yeah, we were busy". That's fine, but this is the ONLY way to open a ticket, and it's not very helpful.
Automatically Delete Instances - SQL System Jobs
I'm having trouble with the Auto Delete option in mixed environments (availability groups):
- Automatically Delete Instances enabled: LogicMonitor removes job instances when it thinks they no longer exist, often triggered by temporary disconnections or failovers. This causes thresholds and alerts to vanish unexpectedly (thresholds previously set at the individual job level disappear).
- Automatically Delete Instances disabled: Job instances persist even after failover, which can result in false warnings on old replicas that are no longer active.
I need certain jobs to persist so their alert thresholds remain intact, but they're being removed automatically when auto delete is on. On top of that, I'm dealing with AG replicas that fail over along with the jobs, which leaves behind failed-job warnings on the previous replica when auto delete is off. I've thought of many things, but there doesn't seem to be a perfect solution to this yet. Any thoughts? Ideas?
lucjad (Neophyte) · 8 days ago
Amend AWS regions via API call
We monitor a number of AWS accounts using AWS Account cloud monitoring. (These are within a single organisation, but were configured prior to the addition of organisations in LogicMonitor.) I am trying to use an API call to gather the AWS accounts and then to remove monitoring from unused regions. I have some code to build a script to do this (so I could examine the output before making the changes, and test on one or two AWS accounts), but this is returning every device. Can anyone help in refactoring the API calls to only change the regions where needed? (See the sketch after the script below.)

SCRIPT
##########################################################################
import requests
import json

# === CONFIGURATION ===
bearer_token = [bearer token]
company = [company]  # e.g., 'yourcompany'
base_url = f'https://{company}.logicmonitor.com/santaba/rest'

# === HEADERS ===
def get_headers():
    return {
        'Authorization': f'Bearer {bearer_token}',
        'Content-Type': 'application/json',
        'Accept': 'application/json'
    }

# === STEP 1: GET AWS ACCOUNTS FROM LOGICMONITOR ===
def get_aws_accounts():
    resource_path = '/device/devices'
    params = {
        'filter': 'system.cloud.type:"AWS"',
        'size': 1000
    }
    url = base_url + resource_path
    response = requests.get(url, headers=get_headers(), params=params)
    try:
        response.raise_for_status()
        data = response.json()
        print("Raw response:")
        print(json.dumps(data, indent=4))  # <-- Add this line to inspect the structure
        if 'data' in data and 'items' in data['data']:
            return data['data']['items']
        else:
            print("Unexpected response format.")
            return []
    except requests.exceptions.HTTPError as http_err:
        print(f"HTTP error occurred: {http_err}")
        print(f"Response content: {response.text}")
        return []
    except Exception as err:
        print(f"Other error occurred: {err}")
        return []

# === STEP 2: PRINT UPDATE COMMANDS ===
def print_update_command(device_id, regions_to_keep):
    resource_path = f'/device/devices/{device_id}'
    url = base_url + resource_path
    payload = {
        "properties": [
            {
                "name": "aws.regions",
                "value": ",".join(regions_to_keep)
            }
        ]
    }
    print(f"\nPATCH {url}")
    print("Headers:")
    print(json.dumps(get_headers(), indent=4))
    print("Payload:")
    print(json.dumps(payload, indent=4))

def write_patch_request_to_file(device_id, regions_to_keep, device_name):
    url = f"{base_url}/device/devices/{device_id}"
    headers = get_headers()
    payload = {
        "properties": [
            {
                "name": "aws.regions",
                "value": ",".join(regions_to_keep)
            }
        ]
    }
    curl_cmd = (
        f"curl -X PATCH '{url}' "
        f"-H 'Authorization: {headers['Authorization']}' "
        f"-H 'Content-Type: application/json' "
        f"-H 'Accept: application/json' "
        f"-d '{json.dumps(payload)}'"
    )
    with open("patch_requests.txt", "a") as f:
        f.write(f"# PATCH request for {device_name} (ID: {device_id})\n")
        f.write(curl_cmd + "\n\n")

# === MAIN EXECUTION ===
if __name__ == '__main__':
    unused_regions = ['us-east-2', 'us-west-1', 'us-west-2', 'eu-central-1',
                      'eu-central-2', 'eu-north-1', 'eu-north-2', 'eu-south-1',
                      'eu-south-2', 'eu-west-1', 'eu-west-3', 'ap-east-1',
                      'ap-northeast-1', 'ap-northeast-2', 'ap-northeast-3',
                      'ap-south-1', 'ap-south-2', 'ap-southeast-1',
                      'ap-southeast-2', 'ap-southeast-3', 'ap-southeast-4',
                      'af-south-1', 'ca-central-1', 'il-central-1',
                      'me-central-1', 'me-south-1', 'sa-east-1']
    all_regions = ['us-east-1', 'us-east-2', 'us-west-1', 'us-west-2',
                   'eu-central-1', 'eu-central-2', 'eu-north-1', 'eu-north-2',
                   'eu-south-1', 'eu-south-2', 'eu-west-1', 'eu-west-2',
                   'eu-west-3', 'ap-east-1', 'ap-northeast-1', 'ap-northeast-2',
                   'ap-northeast-3', 'ap-south-1', 'ap-south-2',
                   'ap-southeast-1', 'ap-southeast-2', 'ap-southeast-3',
                   'ap-southeast-4', 'af-south-1', 'ca-central-1',
                   'il-central-1', 'me-central-1', 'me-south-1', 'sa-east-1']
    regions_to_keep = [r for r in all_regions if r not in unused_regions]
    aws_accounts = get_aws_accounts()
    for account in aws_accounts:
        device_id = account['id']
        name = account['displayName']
        write_patch_request_to_file(device_id, regions_to_keep, name)
############################################################################
Tim_OShea (Neophyte) · 14 days ago
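If the filter on system.cloud.type isn't being applied server-side (which would explain every device coming back), a client-side guard is a simple fallback: inspect each returned device and only emit a PATCH for resources that already carry an aws.regions property. A hedged sketch reusing get_aws_accounts() and write_patch_request_to_file() from the script above; the 'customProperties' and 'systemProperties' field names are assumptions about the device payload, so print one device first to confirm:

# Keep only devices that already have an 'aws.regions' property, i.e. the
# AWS cloud-account resources, before writing any PATCH commands.
def is_aws_account(device):
    props = device.get('customProperties', []) + device.get('systemProperties', [])
    return any(p.get('name') == 'aws.regions' for p in props)

aws_accounts = [d for d in get_aws_accounts() if is_aws_account(d)]
print(f"{len(aws_accounts)} AWS account resources matched")
for account in aws_accounts:
    write_patch_request_to_file(account['id'], regions_to_keep, account['displayName'])

It is also worth checking the API docs on property-update semantics before running the generated curl commands: PATCHing the properties list may support an opType query parameter, and you want updating aws.regions to not clobber the device's other custom properties.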