That’s pretty easy. You can have the file stored anywhere you’re comfortable with the collector having access to, even on the collector itself. Then just have the script SSH into the collector, read and parse the JSON, loop through the entries, and output the data. I do something similar to monitor the disks on my collectors regardless of OS:
try {
    def sout = new StringBuilder(), serr = new StringBuilder()
    if (hostProps.get("system.collectorplatform") == "windows") {
        // list local fixed disks (DriveType 3) as "DeviceID Size FreeSpace" rows
        def proc = 'powershell -command "Get-CimInstance Win32_LogicalDisk | Where{$_.DriveType -eq 3} | Format-Table -Property DeviceID, Size, FreeSpace -HideTableHeaders"'.execute()
        proc.consumeProcessOutput(sout, serr)
        proc.waitForOrKill(3000)
        // println("stdout: ${sout}")
        sout.eachLine {
            def splits = it.tokenize(" ")
            if (splits[0]) {
                // drive letter (minus the colon) doubles as wildvalue and display name
                println("${splits[0].replaceAll(':','')}##${splits[0].replaceAll(':','')}")
            }
        }
        return 0
    }
    // linux logic is here, but you get the point
} catch (Exception e) {
    println(e)
    return 1
}
You could do a simple cat command, then parse the JSON using a JsonSlurper object. From there it’s easy to loop through the data and output the lines you need for both discovery and collection.
Something like this should work for you:
import groovy.json.JsonSlurper

def slurper = new JsonSlurper()
try {
    def sout = new StringBuilder(), serr = new StringBuilder()
    // this runs relative to the LM agent install path: /usr/local/logicmonitor/agent/bin
    def proc = 'cat sample_data.json'.execute()
    proc.consumeProcessOutput(sout, serr)
    proc.waitForOrKill(3000)
    // println(serr)
    if (serr.size() == 0) {
        def data = slurper.parseText(sout.toString())
        data.replication.each {
            it.db_name = it.db_name.trim()
            // discovery line: wildvalue##displayname
            println("${it.db_name}##${it.db_name}")
            // collection lines: wildvalue.datapoint: value
            println("${it.db_name}.local_count: ${it.local_count}")
            println("${it.db_name}.remote_count: ${it.remote_count}")
            println("${it.db_name}.counts_match: ${(it.counts_match == 'true') ? 1 : 0}")
            println("${it.db_name}.replication: ${(it.replication == 'running') ? 1 : 0}")
        }
        return 0
    } else {
        println(serr)
        return 2
    }
} catch (Exception e) {
    println(e)
    return 1
}
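For reference, here’s a hypothetical sample_data.json shaped the way that script expects (the names and numbers are made up; note counts_match and replication come through as strings, which is why the script compares against 'true' and 'running'):
{
  "replication": [
    {
      "db_name": "orders ",
      "local_count": 1042,
      "remote_count": 1042,
      "counts_match": "true",
      "replication": "running"
    },
    {
      "db_name": "inventory",
      "local_count": 987,
      "remote_count": 980,
      "counts_match": "false",
      "replication": "stopped"
    }
  ]
}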
This one script can serve as both the Active Discovery (AD) script and the collection script, because the output contains both discovery-formatted and collection-formatted lines. Discovery will ignore the collection lines and collection will ignore the discovery lines.
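With the hypothetical sample data above, the combined output would look like this (the ##-delimited lines are discovery, the colon-delimited lines are collection):
orders##orders
orders.local_count: 1042
orders.remote_count: 1042
orders.counts_match: 1
orders.replication: 1
inventory##inventory
inventory.local_count: 987
inventory.remote_count: 980
inventory.counts_match: 0
inventory.replication: 0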
You’d make a batchscript DS with multi-instance and discovery enabled. Set the wildvalue to be the unique id (this should be the default on most datasources).