Re: Integration - Custom HTTP Delivery issues with AWS NLB
I might not have been super clear earlier: the only way I am able to test this right now is with the Test Alert Delivery function. I know there are ways to return data in the HTTP response, but I am nowhere near being able to do that right now. Last year, when I was setting up the integration from LM to Jira, Test Alert Delivery was all I needed, so I assumed I could get by using mainly (or only) it this time. The Integration logs show "failed to parse" under external ticket ID, and the log details show the entire HTTP response as: "Timeout received waiting for HTTP Response". And to reiterate from earlier, I do not have the "Include an ID provided..." checkbox checked.

Re: Integration - Custom HTTP Delivery issues with AWS NLB
Nope, I did not check that box. The rulebook I'm using either prints HELLO if the message contains that as its value, or prints the entire payload if message != HELLO. So as far as I know, it's not waiting on anything. When I test using curl, there are no timeouts.

Re: Integration - Custom HTTP Delivery issues with AWS NLB
Ok, so I might have jumped the gun above. It turns out removing the trailing slash now allows LM to reach the endpoint, but I get a 404 returned:

<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>

So without the slash, LM reports a 404 and the endpoint logs a 301. With the slash, the endpoint logs a 200, but LM reports a timeout.

Re: Integration - Custom HTTP Delivery issues with AWS NLB
Ok, well, never mind I guess. It turns out the trailing / in the URL is what was breaking this. Ugh.

Integration - Custom HTTP Delivery issues with AWS NLB
Ok, so I'm trying to set up a custom HTTP delivery to an Event-Driven Ansible webhook. The EDA target is in AWS behind a network load balancer, with SSL terminating on the EC2 instance. For the sake of argument, let's assume everything on the AWS side has been done: security group with LM's public IPs, no bad NACLs, the NLB is internet-facing, public DNS is good, etc. I can test the webhook using curl from both an internal and an external host, and it succeeds. But testing from LM ends in nothing but a request timeout. I don't know if it's TLS/SSL related. Is it DNS, i.e. can LM correctly resolve my FQDN? Is it some LB attribute like preserving client IPs? Here is the URL in question, sanitized:

https://ansible.domain.com/eda-event-streams/api/eda/v1/external_event_stream/630729f0-702d-4b66-8e77-66f12b923fb0/post/

I set the method to POST, included the user and password, and used a very simple Raw payload of {"message": "HELLO"}.
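A hedged sketch of what that delivery amounts to, for comparison against the working curl test: the script below sends the same POST with the same raw payload, but does not follow redirects, so the 301 the endpoint logs for the no-slash URL shows up directly. The hostname and basic-auth credentials are placeholders, and this is only an approximation of what LM's Test Alert Delivery actually sends.

```python
# Minimal approximation of the custom HTTP delivery request (not LM's code).
# Hostname and credentials are placeholders; adjust to the real event stream.
import requests

URL = ("https://ansible.domain.com/eda-event-streams/api/eda/v1/"
       "external_event_stream/630729f0-702d-4b66-8e77-66f12b923fb0/post/")

resp = requests.post(
    URL,
    json={"message": "HELLO"},          # the Raw payload configured in LM
    auth=("eda_user", "eda_password"),  # placeholder basic-auth credentials
    timeout=10,
    allow_redirects=False,              # expose the 301 seen without the slash
)
print(resp.status_code, resp.headers.get("Location"), resp.text[:200])
```

Running it with and without the trailing slash should reproduce the 301-versus-200 split the endpoint is logging, which would point at redirect/response handling rather than DNS or the NLB itself.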
Re: SDTing via API, one source work the other does not
Taking this suggestion, I replaced the :" with %3A%22 and the trailing " with another %22. This time the call looks like it might have been sent. I need to add a couple more audit log actions, but it's a solid maybe!

Re: SDTing via API, one source work the other does not
Oh wait, I lied! I do have a rule that sends a web request to LM to ACK an alert, using a var in the URL, but it's not a filter query: https://xxxx.logicmonitor.com/santaba/rest/alert/alerts/{{LMalertID}}

Re: SDTing via API, one source work the other does not
In a Jira automation rule there isn't 'code' in the traditional sense of the word; it's all visual, like the application Tines. This might help: https://imgur.com/a/oK61qK8
What happens is: there's a manual trigger; if servicename == logicmonitor on the Incident in question, the rule creates a var lmalertid from the field issue.labels and prints it to the audit log. It then creates a second var, deviceHostname, from issue.customfield_20574, which is supplied by the ##host## token when the custom HTTP delivery Active rule is triggered from LM. Next it prints that deviceHostname to the audit log, then sends the first web request. The URL is https://xxxx.logicmonitor.com/santaba/rest/device/devices?filter=name:"{{deviceHostname}}*". The HTTP method is GET, with no body, and the following headers:

Content-Type: application/json
Authorization: Bearer <token>
X-Version: 3
Cache-Control: no-cache
User-Agent: Jira/JSM
Accept: */*
Accept-Encoding: gzip, deflate, br

So provided LM sends the ##host## token, which populates customfield_20574, and the rule can use that field to define {{deviceHostname}}, and the URL is correctly formatted, it will drop the value of deviceHostname into the filter and send the request. I have other rules that successfully send these Jira variables in the body; this is the first one where I've tried using them in the URL.

Re: SDTing via API, one source work the other does not
Hmm, so that 'translates' to {{deviceHostname}}* at the destination; however, it would not work in a Jira 'Send web request' URL with a variable in the URL string. The var doesn't get resolved, it just gets passed as-is. I noticed Postman will send that encoded value as readily as the prior form (name:"name-here*"). Any chance a filter can be part of the body?

Re: SDTing via API, one source work the other does not
As a test I removed the double quotes and got 'Invalid filter', the same thing Postman says if I do the same.
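Outside of Jira, the filter-encoded device lookup these last few posts are wrestling with can be reproduced with a short script, which makes it easier to see what encoding the portal will accept. This is only a sketch: the portal name, bearer token, and hostname are placeholders, and urllib's quoting is used to produce the same %3A%22 ... %22 form mentioned above.

```python
# Sketch of the device lookup the Jira rule sends, built outside of Jira so
# the filter encoding can be tested in isolation. Portal, token, and hostname
# are placeholders.
import requests
from urllib.parse import quote

PORTAL = "https://xxxx.logicmonitor.com/santaba/rest"
device_hostname = "some-host"  # the value the ##host## token would supply

# name:"some-host*"  ->  name%3A%22some-host*%22 after encoding :" and "
raw_filter = f'name:"{device_hostname}*"'
encoded_filter = quote(raw_filter, safe="*")

resp = requests.get(
    f"{PORTAL}/device/devices?filter={encoded_filter}",
    headers={
        "Authorization": "Bearer <token>",  # placeholder token
        "X-Version": "3",
        "Content-Type": "application/json",
    },
    timeout=10,
)
print(resp.status_code, resp.text[:500])
```

If this succeeds with the encoded filter while the Jira 'Send web request' version does not, the remaining difference is how Jira substitutes (or fails to substitute) {{deviceHostname}} inside the URL, not the filter syntax itself.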
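For the ACK rule mentioned a few posts up (the one that already works with {{LMalertID}} in the URL), a rough standalone equivalent might look like the following. It assumes the LogicMonitor REST acknowledgement path is POST /alert/alerts/{id}/ack with an ackComment body, which may differ from how the Jira rule is actually configured; portal name, token, alert ID, and comment are all placeholders.

```python
# Hedged sketch of the alert-ACK web request referenced above. The /ack path
# and ackComment body are assumptions; portal, token, and alert ID are
# placeholders.
import requests

PORTAL = "https://xxxx.logicmonitor.com/santaba/rest"
lm_alert_id = "DS12345"  # the value the {{LMalertID}} var would carry

resp = requests.post(
    f"{PORTAL}/alert/alerts/{lm_alert_id}/ack",
    json={"ackComment": "Acknowledged from Jira automation"},
    headers={
        "Authorization": "Bearer <token>",  # placeholder token
        "X-Version": "3",
        "Content-Type": "application/json",
    },
    timeout=10,
)
print(resp.status_code, resp.text)
```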