Forum Discussion

jwainwright · 8 years ago

Normal Datapoints: Allow JSON responses to dynamically populate Name and Post Processor values.

While optimizing PowerShell scripts for LogicMonitor, we found that Active Discovery was great for some applications. However, when it came to PowerShell invoking commands (running scripts on servers), we found that Active Discovery has the potential to generate too many connections to servers. The answer we arrived at was doing everything in one script and returning it all in a JSON response. This worked significantly better than dynamic Active Discovery, but had one drawback: the datapoints had to be entered manually.

My suggestion is that LogicMonitor modify datapoints to allow references into the JSON response. That is, we would define a single datapoint whose Name field indicates the JSON path to an array containing all of the instances, and whose Post Processor points to the corresponding JSON path for each instance's value.

JSON Example:

[
    {
        "Title":"Name of an Instance",
        "Value":1    
    },
    {
        "Title":"Name of another Instance",
        "Value": 2000    
    }
]
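
For context, a collection script that emits this shape might look something like the sketch below (Get-Service is only a placeholder for whatever is actually being measured, and the Status-to-integer mapping is illustrative):

    # Gather every instance in one pass and emit the array-of-objects JSON
    # shown above; Get-Service stands in for the real data source.
    $instances = foreach ($svc in Get-Service) {
        [pscustomobject]@{
            Title = $svc.DisplayName
            Value = [int]$svc.Status
        }
    }
    ConvertTo-Json -InputObject $instances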

The datapoint would look something like this:

Name           Metric Type   Raw Metric   Post Processor   Alert Threshold
json(Title)    gauge         output       json(Value)      != 1

The results would create instances on a graph just as they would if you typed them out manually:

"Name of an Instance": 1

"Name of another Instance": 2000

I believe this would be more efficient and allow us to remain dynamic.

Thanks,

Jason Wainwright

    • Correct, except we do not use ##WILDVALUE##; we build a JSON response and return that. Then, once we have created the datapoints, we reference the JSON object. Currently, our JSON looks something like this (a sketch of building it follows the exported XML below):

      {
          "Name of some queue we want to know age of oldest message":44,   
          "Name of some other queue we want to know age of oldest message":0
      }

       

      Example from the exported XML:
            <datapoint>
              <name>Name of some queue we want to know age of oldest message</name>
              <dataType>7</dataType>
              <type>2</type>
              <postprocessormethod>json</postprocessormethod>
              <postprocessorparam>Name of some queue we want to know age of oldest message</postprocessorparam>
              <usevalue>output</usevalue>
              <alertexpr>!= 0 0 0</alertexpr>
              <alertmissing>1</alertmissing>
              <alertsubject></alertsubject>
              <alertbody></alertbody>
              <description></description>
              <maxvalue></maxvalue>
              <minvalue></minvalue>
              <userparam1></userparam1>
              <userparam2></userparam2>
              <userparam3></userparam3>
              <iscomposite>false</iscomposite>
              <rpn></rpn>
              <alertTransitionIval>0</alertTransitionIval>
              <alertClearTransitionIval>0</alertClearTransitionIval>
            </datapoint>
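
      For illustration, this flat name-to-value JSON could be assembled in PowerShell along these lines ($queues and its QueueName/OldestMessageAgeSeconds properties are hypothetical placeholders):

          # Build one object whose keys are queue names and whose values are
          # oldest-message ages, then return it as a single JSON response.
          $result = @{}
          foreach ($q in $queues) {
              $result[$q.QueueName] = $q.OldestMessageAgeSeconds
          }
          ConvertTo-Json -InputObject $result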

    • Ah, I understand now, thanks for the explanation. So in your collection script you are doing the same: grabbing all the data in one connection, and then looping through the data to generate the output using ##WILDVALUE##.

    • Hi Mosh,

      We were using Active Discovery with PowerShell to discover MSMQ queues on each server. Discovery itself was not the issue; the checks were. Each queue found during discovery needed its own connection to the server for every check we ran (e.g., age of oldest message, or subscription verifications). With only a few queues this may not be much of an issue, but with 125+ queues on a server, and each request requiring an open connection from the LogicMonitor collector, we found this could overwhelm a server's CPU resources fairly quickly (given the right scenario).

      The solution was pretty simple: we removed the Active Discovery script and combined it into the collector script ("Embedded PowerShell Script"), returning the data as JSON (sketched below). This allowed all 125+ queues to return their information over one connection in under two seconds. Before, each of the 125+ queues fired a script that connected to the server; each ran in less than a second, but spawned 125+ connections over the duration of the five-minute monitoring interval.
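
      A minimal sketch of that combined approach, assuming the MSMQ PowerShell module is available on the target and using the ##SYSTEM.HOSTNAME## token for the remote host; the age calculation is an assumption about how "age of oldest message" might be derived:

          # One remote connection enumerates every private MSMQ queue and
          # returns all oldest-message ages at once as a single JSON object.
          $json = Invoke-Command -ComputerName '##SYSTEM.HOSTNAME##' -ScriptBlock {
              $ages = @{}
              foreach ($queue in Get-MsmqQueue -QueueType Private) {
                  # Peek non-destructively at the head of the queue, if any.
                  $head = $queue | Receive-MsmqQueue -Peek -Timeout (New-TimeSpan -Seconds 1) -ErrorAction SilentlyContinue
                  $ages[$queue.QueueName] = if ($head) {
                      [int]((Get-Date) - $head.ArrivedTime).TotalSeconds
                  } else { 0 }
              }
              ConvertTo-Json -InputObject $ages
          }
          $json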

    • Hi Jason,

      Re "we found that Active Discovery has the potential to generate too many connections to servers", may I ask what is it that you're discovering (the instances)?  Are they things that need to be discovered in one go, or could they be discovered one by one or in batches?