PowerShell script template for API Get website/websites

I'm having some trouble getting a website listing to complete via API requests in PowerShell and would welcome any feedback. This version has been made explicit for '/website/websites'. I have another version of the code below where $resourcePath is set to '/device/devices' and X-Version is 1 that is successful, but X-Version 1 fails for website requests in both tools. In Postman I can get a successful response with X-Version 3, but I'm not finding the differentiating factor between that and this script.

```powershell
<# Use TLS 1.2 #>
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

<# Account info #>
$accessId =
$accessKey =
$company =

<# Request details #>
$httpVerb = 'GET'
$resourcePath = '/website/websites'

<# Construct URL #>
$url = 'https://' + $company + '.logicmonitor.com/santaba/rest' + $resourcePath

<# Get current time in milliseconds #>
$epoch = [Math]::Round((New-TimeSpan -Start (Get-Date -Date "1/1/1970") -End (Get-Date).ToUniversalTime()).TotalMilliseconds)

<# Concatenate request details #>
$requestVars = $httpVerb + $epoch + $resourcePath

<# Construct signature: HMAC-SHA256 as lowercase hex, then Base64 #>
$hmac = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key = [Text.Encoding]::UTF8.GetBytes($accessKey)
$signatureBytes = $hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($requestVars))
$signatureHex = [System.BitConverter]::ToString($signatureBytes) -replace '-'
$signature = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($signatureHex.ToLower()))

<# Construct headers #>
$auth = 'LMv1 ' + $accessId + ':' + $signature + ':' + $epoch
$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$headers.Add("Authorization", $auth)
$headers.Add("X-Version", '3')
$headers.Add("Content-Type", 'application/json')

<# Make request #>
$response = Invoke-RestMethod -Uri $url -Method $httpVerb -Headers $headers

<# Print status and body of response #>
$status = $response.status
$body = $response.data | ConvertTo-Json -Depth 5
Write-Host "Status:$status"
Write-Host "Response:$body"
```

Re: Postman API variable use

I moved to using PowerShell. I believe the problem is with the body being explicitly RAW and JSON. I think it has to be RAW or maybe TEXT format to allow variable values to be referenced correctly and then converted to JSON via a pre-request script, but I have not had a chance to test again.

5 hours ago, JonR said:
"I have what seems like a similar issue with a GET operation fetching a device by ID. When I set the ID manually in the request, it returns a single device as expected. When I set the ID with an environment variable, authentication fails with a 1401 code returned. The headers and request path look identical in both cases. Was there a resolution found for the original post?

Working request with ID 5 set statically in the request:
{{url}}/device/devices/5
sends: GET https://(company).logicmonitor.com/santaba/rest/device/devices/5

Broken request with id variable set to 5:
{{url}}/device/devices/{{id}}
sends: GET https://(company).logicmonitor.com/santaba/rest/device/devices/5

I tried modifying the pre-request script as well and had the exact same results. Script modification:

```javascript
var id = pm.environment.get('id');
var request_vars = (http_verb == 'GET' || http_verb == 'DELETE') ?
    http_verb + epoch + resource_path + id :
    http_verb + epoch + request.data + resource_path;
```
"

The following works for me (specific to JonR's request). Pre-request script at the collection level:

```javascript
// Get API credentials from environment variables
var api_id = pm.environment.get('api_id');
var api_key = pm.environment.get('api_key');
var dvId = pm.environment.get('dvId');
var deviceId = pm.environment.get('deviceId');
var dsId = pm.environment.get('dsId');
var datasourceName = pm.environment.get('datasourceName');
var instId = pm.environment.get('instId');

// Get the HTTP method from the request
var http_verb = request.method;

// Extract the resource path from the request URL and resolve variables
var resource_path = request.url.replace(/(^{{url}})([^\?]+)(\?.*)?/, '$2');
resource_path = resource_path.replace("{{dvId}}", dvId);
resource_path = resource_path.replace("{{dsId}}", dsId);
resource_path = resource_path.replace("{{datasourceName}}", datasourceName);
resource_path = resource_path.replace("{{instId}}", instId);
resource_path = resource_path.replace("{{deviceId}}", deviceId);
console.log('resource_path: ' + resource_path);

// Get the current time in epoch format
var epoch = (new Date()).getTime();

// If the request includes a payload, include it in the request variables
var request_vars = (http_verb == 'GET' || http_verb == 'DELETE') ?
    http_verb + epoch + resource_path :
    http_verb + epoch + request.data + resource_path;

// Generate the signature and build the Auth header
var signature = btoa(CryptoJS.HmacSHA256(request_vars, api_key).toString());
var auth = "LMv1 " + api_id + ":" + signature + ":" + epoch;

// Write the Auth header to the environment variable
pm.environment.set('auth', auth);
```

Environment variables (pre-existing): api_id, api_key, auth, url, dvId, deviceId, dsId, datasourceName, instId (the current value of dvId is set to a real device ID).

GET request: {{url}}/device/devices/{{dvId}}

Authorization tab: type is Inherit. Headers tab: key Authorization, value {{auth}} (it's also a good idea to set X-Version to 1 or 3, depending on the request type, for future reference; this works without it). The request-level pre-request script should have no content (if you change the device ID there, the request will fail, since it runs after the collection-level pre-request script but before the request is sent).

Re: Azure Site Recovery Monitoring

Upvoted. We are working with support because we found that the cloud-based Azure Backup Job and Replication Job datasources generate alerts on errors but cannot be remediated at the instance level, because instances (backup job IDs) cannot be rerun. The datapoints do not seem to represent the overall result of all attempts to back up during a cycle: secondary and tertiary backup attempts, which execute automatically if the first one fails, also have unique backup job IDs and create new instances. It would be great if agent health could be isolated from overall job success for replication jobs, but I understand there may be a limitation on the data available via the API.

Re: Postman API variable use

This is the raw JSON format body in the request:

{ "name": "{{groupName}}" }

The Postman console, when I send, shows this as the request body:

{ "name": "Huzzah" }

But the response body returns:

{"errorMessage":"Authentication failed","errorCode":1401,"errorDetail":null}

If I change the raw JSON format body back to a string

{ "name": "Huzzah" }

then it sends successfully and makes the group. No other changes are required in Postman. So if I trust Postman's console, the outbound request looks correct and identical for both requests.

Postman API variable use

Has anyone successfully used variables with POST calls in Postman? I'm successful with basic tests creating groups and devices (no variables), but once variables are introduced in place of values in the JSON body, the Postman console shows the data values are correctly being substituted, yet I'm getting 1401 errors.
Re: Disable active discovery

The goal: can we show only active instances to stakeholders (LM users) who have access to LM resources? User/role permissions don't currently offer a setting that turns this off. The alternative would be to leverage the existing framework, which does allow instances to be deleted but by default rediscovers them within at most 24 hours. We already have a need to disable discovered instances, which means we are already in the position of having customized select datasources. The challenge is to identify the impact of taking the extra step of disabling active discovery as well, then deleting the disabled instances. Certainly this would apply only to select datasources, but that could still affect thousands of instances. I'm trying to keep an open mind about this so that I can provide all relevant information on what this will take or why it will not work. Keeping that in mind, this is what I've considered to see if there is a path forward (just theory):

Cleanup
1. Disable active discovery on the target datasource(s).
2. Via the API or an LM report, export device info and the disabled instances from the target datasource(s) that will be deleted (this will be used later for cross reference).
3. Delete all disabled instances from the targeted datasource(s) (a heavy maintenance burden if done manually; the API would be preferred if this is possible, using the data from step 2).

Ideally at this point we have the desired result: only actionable instances, which would remain as-is until active discovery is run independently. We need active discovery to not run from the other potential trigger conditions outside the datasource, except when triggered manually. I have a case open with LM support to validate/verify the following and am also testing in a sandbox. I believe that once a datasource has active discovery disabled, it will not run automatically for new devices, and it will not run automatically when datasource-based changes occur (e.g. adding a new custom datapoint that would require polling). I know this contradicts the Active Discovery Execution section of the documentation here: https://www.logicmonitor.com/support/logicmodules/datasources/active-discovery/what-is-active-discovery, but this appears to be the exception, and again I've asked for this to be vetted.

Assuming we are still in a good position, we will come to a point where active discovery will need to be executed to health check and pick up new instances that may need to be monitored:

Manual discovery
1. Run active discovery on individual devices (manually, or ideally via the API if possible).
2. Via the API or an LM report, export device information and disabled instances from the target datasource(s) (new list).
3. Diff check to identify and mark new instances compared to the previous list (use the data from Cleanup step 2 as the previous list).
4. Identify whether any instances should be monitored going forward and enable those.
5. Update the disabled-instance data from step 2 to provide fresh data for use in the next step.
6. Delete all disabled instances (a heavy maintenance burden if done manually; the API would be preferred if this is possible, using the data from step 5).

This may not be the right approach; I am still catching up on API functionality, which may present gaps, and ultimately if we can't stop active discovery from executing independently, there is no value in pursuing this further. I'm exploring the possibilities and pushing boundaries so I can report what can and can't be done. I appreciate the discussion and welcome any new ideas, options, and feedback!

Disable active discovery

I've been asked to look into disabling active discovery with the goal that we control discovery manually (initiated manually from the Resources page, or by API if possible). The goal is to prevent rediscovery of instances so that only active, alert-tuned instances are visible and new instances are discovered in a controlled check monthly or quarterly. Are there any major cons to this approach that required rollback or revisiting in your experience? Any best practices around successful deployment would be appreciated, or redirection to better alternatives. Thanks in advance!

Consistent data polls and no data

We'd like better visibility into data quality and polling. For example, if 10 polls are expected per hour for a particular datapoint with an effective threshold and only 8 are received, that is of interest. That is more of a "did we get a poll" cross check. Checking for quantity and no data would also be valuable, independent of using the no-data alerting functionality. Has anyone else looked into this and come up with a solution?

Re: Universal 'No Data' monitoring

On 6/28/2021 at 4:56 PM, Michael Rodrigues said:
"We're investigating how to properly support this use case going forward. We appreciate the patience and understanding in the meantime."

Just to summarize: as of right now there is no SSE-based option that provides this information, correct?

LM Integration with Autotask SOAP 1.5 retiring

I haven't been able to find any indication that LogicMonitor is working on updating its Autotask integration to the REST API. For anyone who has moved away from using this integration, what did you find useful (email parsing, custom API, something else)? We are open to new and better options!
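The "did we get a poll" cross check described in the "Consistent data polls and no data" post above can be sketched outside of LM: given raw poll timestamps for a datapoint (however they are exported) and the expected polling interval, count the polls received in a window and flag any shortfall. This is a hypothetical illustration, not an LM API; all names and numbers are made up:

```javascript
// Compare expected vs. received polls for one datapoint over a time window.
function missedPolls(timestampsMs, windowStartMs, windowEndMs, intervalMs) {
  // Expected poll count for the window at the configured interval.
  const expected = Math.floor((windowEndMs - windowStartMs) / intervalMs);
  // Polls that actually landed inside the window.
  const received = timestampsMs.filter(
    (t) => t >= windowStartMs && t < windowEndMs
  ).length;
  return { expected, received, missed: Math.max(0, expected - received) };
}

// One hour at a 6-minute interval -> 10 expected polls; simulate only 8 received.
const start = 0, end = 3600000, interval = 360000;
const polls = [0, 1, 2, 3, 4, 5, 6, 7].map((i) => i * interval); // gap at the end
const result = missedPolls(polls, start, end, interval);
console.log(result.expected, result.received, result.missed); // 10 8 2
```

A nonzero `missed` value is exactly the "10 expected, 8 received" condition from the post, and could feed a report or a secondary alert independent of the built-in no-data alerting.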