Forum Discussion
Ok, so you need it to run on the collector. If you're not running it in the collector debug now, how are you running it? Groovy installed on your laptop? Are there custom libraries you've installed on your laptop that you're importing into the script? If so, you'd need to install those jar files on the collector. Mike_Aracic could probably point you to docs on how to do that.
Yeah, that's the problem, we added the script using the LM Collector GUI, associated it to that server, the script shows up as valid but not associated to run on any host. We are new to LM and still learning how it works. If we could execute it, the rest I think we can figure out.
I did try to pre-test the script with Groovy on the Collector server, but while LM affirms that it can run Groovy scripts, that executable is not available system-wide or in the user environment. I'm hesitant to add another install of Groovy because I don't want to risk a library clash with LM itself. The sqlnet client and LDAP binaries have all been tested on the server and connect fine to any valid database name.
If we can just get the script to run and raise some errors, we'd have something to move forward with.
- Anonymous · 9 months ago
If your new datasource isn't applying to any devices, you might consider checking the AppliesTo expression. But have you considered how your instances will get created? Discovery? Batchscript vs. script?
- casteeb · 9 months ago · Neophyte
In Nagios this was very simple: when we built new Oracle RAC DBs for the clients, we just added the DB name to the server config in Nagios and called our check-oracle.sh script with it. The script returns 0 or 1 plus the output from the connect attempt. This showed up in Nagios as up/down and, if down, the message Oracle Sqlnet returned (commonly "No listener found", if the DBA started the DB and forgot to start the listener). Our Oracle LDAP manages the connection string, and the port is open to the Nagios servers. I'd just like to replicate that functionality. Very easy.
- Anonymous · 9 months ago
That's pretty much what you can get to in LM, but your setup is a little more complicated. More powerful tools require more to set up properly. Once you get it set up properly, it would be as easy as adding the db name to the instance list property.
Your next step is to think about whether you would use a batchscript or script type datasource. This depends on how your collection script does or will work. Does the script run once and return the data for one db, or can it run once and return the status for all the DBs? Probably the former, in which case you'd land on a script (not batchscript) type datasource.
The next is to determine the input parameters needed for your collection script. Probably the db_name and maybe some credentials, yes? Your collection script would pull in the db name from the wildvalue or wildalias (which we'll set when we talk about discovery) using `hostProps.get("wildvalue")` or `hostProps.get("wildalias")`. You can pull in the credentials using the `hostProps.get("propname")` method, `hostProps.get("jdbc.oracle.user")` for example. You might also have properties for the port number and any other parts of the connection string. The target server could be fetched using `hostProps.get("system.hostname")`. You need to make sure your collection script can pull the inputs dynamically at runtime.
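The runtime input handling described above can be sketched in Groovy like this. In the collector, LM injects a real hostProps object; the small stub below only stands in for it so the sketch can be run outside the collector, and the property names (jdbc.oracle.user, oracle_rac_port) are illustrative, not required names:

```groovy
// Stub for local testing only -- inside the collector, LM provides hostProps.
class HostPropsStub {
    Map props
    String get(String key) { props[key] }
}

def hostProps = new HostPropsStub(props: [
    "wildvalue"       : "sales",              // set by the discovery script
    "system.hostname" : "dbhost.example.com", // the device the datasource applies to
    "jdbc.oracle.user": "monitor",            // credential stored as a device property
    "oracle_rac_port" : "1521",               // illustrative custom property
])

// Pull every input dynamically at runtime, exactly as a real collection script would.
def dbName = hostProps.get("wildvalue")            // principal input: which DB to check
def target = hostProps.get("system.hostname")
def user   = hostProps.get("jdbc.oracle.user")
def port   = hostProps.get("oracle_rac_port") ?: "1521"  // fall back to the default listener port

println "checking ${dbName} on ${target}:${port} as ${user}"
```

The `?: "1521"` fallback means the port property only needs to be set on devices that use a non-default listener port.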
For discovery, you should at least give a passing thought to whether the db names could be discovered programmatically. Does the list of DB names exist already? Could LM pull that list from there? If so, write a script to do it.
If not, you'll store the list as a property in LM on the device. Then your discovery script would pull from that property to create the instances in LM. This is the property you would update every time you have a new db name you want to add to the list. Each instance would, under the best circumstances, have both an identifier and a display name. They can be the same, but often the display name looks better if it's human readable.
Let's say you had 3 databases: sales, marketing, and customers. You would create a property that you might call "oracle_rac_dbs" and give it value of "sales,marketing,customers". You'd create this property on the server where the DBs exist and where you can query to get the status of each. You could then use a simple script to interpret the property list and create each instance in LM under the device like this:
hostProps.get("oracle_rac_dbs").tokenize(",").each{
    println("${it}##${it}")
}
return 0
The first line does three things:
- Fetch the value from the oracle_rac_dbs property (which is a string).
- Split the string everywhere there is a comma.
- Start a loop through the resulting individual strings.
The println statement tells LM to create an instance where the wildvalue is the individual string and the wildalias is the individual string.
This would result in the following instances, each with matching wildvalue and display name:
- sales##sales
- marketing##marketing
- customers##customers
However, it would probably be better to give better display names (wildaliases) to your instances. To do this, we'd simply embed the display name in the list stored in the property: "sales|Sales,marketing|Direct Marketing,customers|CRM".
We'd adjust our discovery script to look like this:
hostProps.get("oracle_rac_dbs").tokenize(",").each{
    (wildvalue, wildalias) = it.tokenize("|")
    println("${wildvalue}##${wildalias}")
}
return 0
The first line does the same thing as before. The second line splits each instance string into two along the pipe (|) character and stores the results in wildvalue and wildalias, respectively. The next line outputs the data to LM in the format required to build the instances. This would result in instances that look like this:
- sales, displayed as Sales
- marketing, displayed as Direct Marketing
- customers, displayed as CRM
The nice thing about using a property for discovery is that it dictates your AppliesTo. If the property exists, monitor it; if it doesn't, don't. In this case, I'd set the AppliesTo to simply be "oracle_rac_dbs". That tells LM to apply the datasource only to servers that have the property (the servers you added the property to).
Once you have your instance creation sorted, you plug in your collection script, which pulls the wildvalue as its principal input. The collection script then connects to the db and returns a 0 or a 1 (using the return value of the script would be easiest). Then you set up a datapoint that reads the return value of the script.
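A minimal collection script along those lines might look like the following sketch. It hedges on the actual Oracle logon: rather than a full sqlnet/JDBC connection, it only tests whether the listener port accepts a TCP connection, which already catches the "listener not started" case; a production script would attempt a real connection. The helper name checkListener and the port value are illustrative:

```groovy
// Return 0 if a TCP connection to the listener succeeds, 1 otherwise,
// printing the failure reason much like the Nagios check did.
int checkListener(String host, int port, int timeoutMs = 3000) {
    try {
        Socket s = new Socket()   // java.net is imported by default in Groovy
        try {
            s.connect(new InetSocketAddress(host, port), timeoutMs)
        } finally {
            s.close()
        }
        return 0
    } catch (IOException e) {
        // This message becomes the script output LM can surface on failure.
        println "connect failed: ${e.message}"
        return 1
    }
}

// Inside the collector, the script would end with something like:
// return checkListener(hostProps.get("system.hostname"), 1521)
```

The return value (0 or 1) is what the datapoint reads, so an alert threshold of "= 1" reproduces the Nagios up/down behavior.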
Once you get it all set up, your workflow would be like this:
When you build new Oracle RAC DBs for the clients, you just add the DB name to the oracle_rac_dbs property on the device in LM; discovery creates the instance and LM calls your collection script with it. The script returns 0 or 1.