Forum Discussion

Kelemvor
Expert
9 months ago
Solved

Can I monitor a JSON file? Example included.

Hi,

We have a script that runs and creates an output like the file attached.  We need to be able to parse this file and look at the “replication” and “counts_match” fields and alert if we don’t find certain criteria.  Can LM do that?

I think that LM can only access files directly if they are on a collector, so we’d make sure this file ends up there.

Thanks.

I guess I can’t attach a file so here’s what it looks like:

{
  "replication": [
    {
      "db_name": "db1 ",
      "replication": "running ",
      "local_count": "12054251",
      "remote_count": "8951389",
      "counts_match": "false"
    },
    {
      "db_name": "db2 ",
      "replication": "running ",
      "local_count": "0",
      "remote_count": "0",
      "counts_match": "true"
    },
    {
      "db_name": "db3 ",
      "replication": "running ",
      "local_count": "0",
      "remote_count": "0",
      "counts_match": "true"
    },
    {
      "db_name": "db4 ",
      "replication": "running ",
      "local_count": "97",
      "remote_count": "97",
      "counts_match": "true"
    },
    {
      "db_name": "db5 ",
      "replication": "running ",
      "local_count": "0",
      "remote_count": "0",
      "counts_match": "true"
    }
  ]
}

5 Replies

  • That’s pretty easy. You can have the file stored anywhere you’re comfortable with the collector having access, even on the collector itself. Then just have the script SSH into the collector, read and parse the JSON, loop through the entries, and output the data. I do something similar to monitor the disks on my collectors regardless of OS:

    try {
        def sout = new StringBuilder(), serr = new StringBuilder()
        if (hostProps.get("system.collectorplatform") == "windows"){
            def proc = 'powershell -command "Get-CimInstance Win32_LogicalDisk | Where{$_.DriveType -eq 3} | Format-Table -Property DeviceID, Size, FreeSpace -HideTableHeaders"'.execute()
            proc.consumeProcessOutput(sout, serr)
            proc.waitForOrKill(3000)
            // println("stdout: ${sout}")
            sout.eachLine{
                splits = it.tokenize(" ")
                if (splits[0]){
                    println("${splits[0].replaceAll(':','')}##${splits[0].replaceAll(':','')}")
                }
            }
            return 0
        }
        // linux logic is here, but you get the point
    } catch (Exception e){
        println(e)
        return 1
    }

    You could do a simple cat command, then parse the JSON using a JsonSlurper object. From there it’s easy to loop through the data and output the lines you need for both discovery and collection.

    Something like this should work for you:

    import groovy.json.JsonSlurper
    x = new JsonSlurper()
    try {
        def sout = new StringBuilder(), serr = new StringBuilder()
        // the relative path for this file is the LM install path: /usr/local/logicmonitor/agent/bin
        def proc = 'cat sample_data.json'.execute()
        proc.consumeProcessOutput(sout, serr)
        proc.waitForOrKill(3000)
        // println(serr)
        if (serr.size() == 0){
            data = x.parseText(sout.toString())
            data.replication.each{
                it.db_name = it.db_name.trim()
                println("${it.db_name}##${it.db_name}")
                println("${it.db_name}.local_count: ${it.local_count}")
                println("${it.db_name}.remote_count: ${it.remote_count}")
                println("${it.db_name}.counts_match: ${(it.counts_match == 'true') ? 1 : 0}")
                // trim() here because the sample data has a trailing space in "running "
                println("${it.db_name}.replication: ${(it.replication.trim() == 'running') ? 1 : 0}")
            }
            return 0
        } else {
            println(serr)
            return 2
        }
    } catch (Exception e){
        println(e)
        return 1
    }

    This one script can serve as both the Active Discovery (AD) script and the collection script, because the output contains both discovery-formatted and collection-formatted lines (see the sample output below). Discovery ignores the collection lines and collection ignores the discovery lines.

    You’d make a BatchScript datasource with multi-instance and Active Discovery enabled, and set the wildvalue to the unique id (this should be the default on most datasources).
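
    For reference, here is roughly what that script prints for the sample JSON above. The name##name lines are the discovery-formatted lines, and the name.datapoint lines are the collection-formatted lines (1/0 values follow the true/running checks in the script):

    db1##db1
    db1.local_count: 12054251
    db1.remote_count: 8951389
    db1.counts_match: 0
    db1.replication: 1
    db2##db2
    db2.local_count: 0
    db2.remote_count: 0
    db2.counts_match: 1
    db2.replication: 1
    ...and so on for db3 through db5.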

  • You can have that script run as a task in a datasource, then extend your script to loop through the replication object and output the right lines. This is doable without having to store any files on the collector.

  • Actually, you wouldn’t even have to modify your script. LM has a JSON/BSON post processor, meaning you can pull the values out of the JSON as part of the datapoint definition; a rough sketch is below.
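
    This is only a sketch, and the exact field names and path syntax in the datapoint definition are an assumption on my part, but the idea is one datapoint per value you care about, each pointing at a path in the returned JSON, e.g.:

    Datapoint: db1_counts_match
    Interpret output with: JSON/BSON Object
    JSON/BSON path: replication[0].counts_match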

  • Apparently there are some concerns about giving LM direct access to these servers. I think we’re going to have the server run the script on its own and output the file somewhere, and then we’ll need LM to read in the file and do the parsing.