Making a Backup for Cisco ISE in LogicMonitor
I am trying to set up a ConfigSource in LogicMonitor for Cisco ISE backups. We want the backup to pull only when a change is made in ISE. The out-of-the-box Cisco_IOS and Cisco_NXOS ConfigSources don't seem to work for ISE, so I tried to make a ConfigSource from the LM page https://www.logicmonitor.com/support/logicmodules/articles/creating-a-configsource but I haven't been able to get it to work properly. I made a copy of Cisco_IOS so I could have the Config Checks and removed the scripts. I have the following:

AppliesTo:

( ( startsWith( system.sysinfo, "Cisco Identity Services Engine" ) ) ) && ( (ssh.user && ssh.pass ) || (config.user && config.pass) )

Parameters:

import com.santaba.agent.groovyapi.expect.Expect
host = hostProps.get("system.hostname");
user = hostProps.get("config.user");
pass = hostProps.get("config.pass");
cli = Expect.open(host, user, pass);
cli.expect("#");
cli.send("terminal length 0\n");
cli.expect("#");
cli.send("show running-config all\n");
cli.expect(/Current configuration.*\n/);
cli.send("exit\n");
cli.expect("#exit");
config = cli.before();
config = config.replaceAll(/ntp clock-period \d+/, "ntp clock-period ");
cli.expectClose();
println config;

The test comes back with the error:

The script failed, elapsed time: 60 seconds - End of stream reached, no match found
java.io.IOException: End of stream reached, no match found

Does anyone have any ideas?
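One possible explanation for the 60-second timeout: ISE's `show running-config` output may never contain a line matching `Current configuration.*`, so that expect never fires, the stream closes, and the script dies with "End of stream reached". A minimal sketch of an adjusted script follows — it is a guess, not a confirmed fix; the choice to wait for the `#` prompt instead of banner text, and the exact prompt pattern, are assumptions to verify against your appliance:

```groovy
// Sketch only: assumes the ISE CLI prompt ends in "#" and that waiting for
// the prompt to return (rather than "Current configuration...") is enough.
import com.santaba.agent.groovyapi.expect.Expect

host = hostProps.get("system.hostname")
user = hostProps.get("config.user")
pass = hostProps.get("config.pass")

cli = Expect.open(host, user, pass)
cli.expect("#")
cli.send("terminal length 0\n")
cli.expect("#")
cli.send("show running-config\n")
// Wait for the prompt to come back after the config dump, instead of
// matching banner text that ISE may never print.
cli.expect("#")
config = cli.before()
cli.send("exit\n")
cli.expectClose()
println config
```

If this still times out, capturing a manual SSH session to the ISE node and comparing its exact prompts and output against each expect pattern is usually the quickest way to find the mismatch.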
DPM (Data Protection Manager)

Monitoring DPM was a blur to us at first. Monitoring every job as a multi-instance DataSource wasn't viable: there were so many jobs that monitoring them all could potentially consume the backup server's entire compute power. We also tried the batch script collector, but we never got that to work for some reason. Then we took a step back, sat down, and thought it through. We don't actually need to see every job; it's quite enough just to see which server has problems. So I wrote this DataSource (it's a multi-instance DataSource) that collects all computer objects and then, for each object, checks DPM Alerts and sends them out in a custom message. Please enjoy, Z9NNJH
Collector dynamic groups

Collector groups were added recently, and are detailed here: https://communities.logicmonitor.com/topic/637-collectors

Now let's expand upon that functionality... What if collectors could be grouped dynamically? Identically to how device dynamic groups work, could I assign properties to collectors, then build dynamic groups based on those properties? Ways that I envision sorting collectors: production/test, primary/backup, collector number (1-99, 100-199, 200-299, etc.), zip code, time zone, alphabetical. In each of these cases, this would give full flexibility in how collector upgrades are scheduled. Currently, if we have a mass collector upgrade to a new General Release, it can be a little painful to manage so many upgrades running simultaneously (or at least in very short order). I am most interested in being able to split them up into primary, backup, and single-collector groups. That way, I know it's pretty safe to upgrade the collectors that have backups after hours, since there is another collector to fail over to. And I surely want to be available staff-wise if I am doing upgrades for those collectors that have no backup collector. Close behind sorting into primary/backup/single is the need to sort them by customer (which currently works fine). The issue is that you can't put a collector into more than one group, which precludes even setting up these two items manually.
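To make the request concrete, here is what collector dynamic groups might look like if they reused the AppliesTo-style property expressions that device dynamic groups use today. This is purely hypothetical — `collector.role`, `collector.number`, and `collector.customer` are invented property names, and no such expression syntax exists for collectors:

```
// Hypothetical only: invented collector.* properties, sketching the request.
collector.role == "backup"                                   // primary/backup/single split
collector.number >= 100 && collector.number <= 199           // by collector number range
collector.customer == "acme" && collector.role == "primary"  // per-customer primaries
```

Because the groups would be expression-driven, a collector could match several groups at once, which is exactly what static groups rule out today.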
object versioning

There are currently far too many opportunities to commit errors in LM from which it is difficult to recover, since there is no version tracking. Ideally, it would be possible to revert to a previous version of any object, but especially very sensitive objects like LogicModules, alert policies, etc. I have created my own method of dealing with this, which leverages the API to regularly store JSON streams of all critical elements, with changes committed via git (certain adjustments to the original results are needed to avoid a constant update stream). Recovery would be very manual, but at least possible. This would be far more useful within the system itself. Thanks, Mark
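As an illustration of the workaround described above (not the author's actual tooling), a minimal export might fetch LogicModule definitions over the REST API and let git track the diffs. This is a sketch under assumptions: it uses a Bearer API token, the `/santaba/rest/setting/datasources` endpoint, a portal name placeholder, and an already-initialized repo in `./lm-backup` — adapt authentication, endpoints, and the pruning of volatile response fields to your portal:

```groovy
// Sketch: dump DataSource definitions to a JSON file for git tracking.
// "yourcompany" is a placeholder portal; LM_BEARER_TOKEN holds an API token.
def portal = "yourcompany.logicmonitor.com"
def token  = System.getenv("LM_BEARER_TOKEN")

def conn = new URL("https://${portal}/santaba/rest/setting/datasources?size=1000")
        .openConnection()
conn.setRequestProperty("Authorization", "Bearer ${token}")
new File("lm-backup/datasources.json").text = conn.inputStream.text

// Commit the snapshot; stripping timestamps and audit fields from the JSON
// first is what keeps this from producing a constant update stream.
["git", "-C", "lm-backup", "add", "datasources.json"].execute().waitFor()
["git", "-C", "lm-backup", "commit", "-m", "LM export"].execute().waitFor()
```

Running something like this on a schedule gives a crude but recoverable history until versioning exists in the product itself.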
SDT for minor alerts

Hi,

Every morning I have to clear a couple of hundred alerts from my inbox that come in while our customers' servers are running backups. We often get 'disk latency' and 'network latency' type alerts while the backups are running. As they run outside of hours, we do not really need these. Please could you add a way of creating SDT based on alert severity, or better yet, build a mechanism to schedule backup window times to filter noise alerts like disk latency? I'm sure I'm not the only one to encounter this issue. Kris