Forum Discussion

Dominique (Advisor)
5 years ago

Group vs Resource

Hello,

I have a "Resource Group" where we disabled "Windows Events > Windows System Event Log".

For some resources, I need the Windows System Event Log enabled for some specific events.

On those resources, I can see it listed in the "Disabled Datasource Group".

What should I do to re-enable some events for some resources?

Thanks,

Dom

6 Replies

  • I don't think you can re-enable something at a lower level. What I do is modify the AppliesTo on the EventSource so it only applies to devices with a particular category. You will see some built-in DataSources work this way too. For Windows System Event Log, for example, you can change "isWindows()" to something like 'isWindows() && hasCategory("SysEventLogs")', then assign that category manually to the groups/resources you do want it applied to. If there are particular classes of resources that you always want this applied to, you can also write a custom PropertySource to assign that category for you.
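    As a concrete sketch of that change (the category name "SysEventLogs" is just the example from this reply; any name works as long as the AppliesTo and the resource property match), the EventSource's AppliesTo would go from plain "isWindows()" to:

    ```
    isWindows() && hasCategory("SysEventLogs")
    ```

    hasCategory() matches against the resource's system.categories property, so adding SysEventLogs to that property on a group or resource opts it back in to this EventSource.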

  • Anonymous

    Keep in mind that the suggested best practice is to clone the PropertySource and make the changes on your clone. In this way, when updates are pulled down from the repository, your changes aren't overwritten. This has the disadvantage that you have to disable the original PropertySource (which would get re-enabled when/if that PS were updated from repo). 

  • Would you say that suggestion applies to EventSources (the subject of this thread), DataSources, and any other LogicModules?

  • Anonymous

    Definitely PropertySources and DataSources. EventSources are difficult to ship out of the box, since the thing you want to alarm on varies greatly from environment to environment (which is why there isn't much diversity in the built-in ESs).

    That said, any LogicModule can be updated from the repo by any portal administrator, which overwrites any changes made to that module and resets the entire thing to the newly updated values. When you pull down from the repo, you can see a diff between the two XML files (kill me now) to see what's changed and what you'll have to reset.

    For complex DataSources, I've started building a git pull into the script, so that it pulls down a script from a GitHub repo and the Groovy script then executes the downloaded script. That way, I can update the collection script all I want and it doesn't change the DS. Ansible Tower does this for its scheduled jobs really cleanly. My version is potentially overly complex since it requires git to be available on the collector. I do this by having Docker installed on the collector and pulling down the repo using the alpine/git container. This also lets me pull down a script in any language and run it in any interpreter in a Docker container (one that's also pulled from Docker Hub at runtime). LevelUp would have had me presenting this method in some form, but it'll probably be much later. Sorry, I'm rambling now.
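    For reference, the alpine/git step boils down to a single command line like the one below (the host path and repo URL are made up for illustration, and this obviously assumes Docker is installed on the collector and it can reach Docker Hub and GitHub). The alpine/git image uses git as its entrypoint and /git as its working directory, so the clone lands in the mounted host directory:

    ```
    # hypothetical example: refresh the script repo into /opt/lm-scripts on the collector
    docker run --rm -v /opt/lm-scripts:/git alpine/git \
        clone https://github.com/example/collection-scripts.git
    ```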

  • I wasn't aware (or wasn't at the time) of that suggested best practice back when we started using LM 4+ years ago, but I'm also not sure how well it works over the long term. If you change enough stuff, I worry you end up forking any non-trivially changed DataSources from the repo, so you can miss bug fixes or improvements. You can also end up with stuff like multiple copies of the same DataSource with slight changes, or per-customer copies (for MSPs), although you can plan ahead a bit to mitigate that. The category assignment is an easy, minimal change that is simple to verify and mark as audited, imho. But then again, we spend a lot of time auditing changes too; the new LM Exchange should improve that. The new LM Exchange seems to be geared towards people modifying the original DataSource anyway, since it seems to separate items like the AppliesTo and thresholds.

    The whole git-pull thing is very interesting, although I expect it wouldn't scale well.

  • Anonymous

    Yeah, it's a double-edged sword. If you don't fork, your updates can get overwritten. If you do fork, you don't get the updates. Yes, the new Exchange should have a feature that lets you do something like a safe-ish merge.

    The git thing works pretty well in Ansible Tower. I've seen 200-250 different projects sync'd with GitHub within a single Tower instance. The thing is, there's only load when the repo is initially cloned. After that, it's just checking the current commit locally and comparing it to the remote commit. If the remote commit is newer, only the deltas are pulled and merged locally. It's just a thought experiment right now, but it's one way that DataSource code (which is rarely changed by customers) could be compartmentalized away from DataSource settings (which are often changed by customers). It's moot anyway until the Exchange comes out with this feature (since that is much farther down the development path than my solution).
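    The clone-once-then-deltas behavior described above can be sketched in plain git. This is only a demo under invented names (the "remote" repo, paths, and collect.groovy are stand-ins); a real setup would point the remote at GitHub and run the sync step from the collector:

    ```shell
    #!/bin/sh
    # Demo of the sync pattern: full clone the first time, then on each run
    # compare the local commit to the remote one and fast-forward only if
    # the remote has moved. Everything lives in a temp dir for the demo.
    set -e
    WORK=$(mktemp -d)
    REMOTE="$WORK/remote"   # stand-in for the GitHub repo
    LOCAL="$WORK/local"     # the collector's cached copy

    # Build a fake "remote" containing one collection script.
    git -c init.defaultBranch=main init -q "$REMOTE"
    cd "$REMOTE"
    echo 'println "collect v1"' > collect.groovy
    git add collect.groovy
    git -c user.email=demo@example.com -c user.name=demo commit -qm "v1"

    # First run: full clone (the only expensive step).
    git clone -q "$REMOTE" "$LOCAL"

    # Meanwhile, the remote moves on to v2...
    echo 'println "collect v2"' > collect.groovy
    git -c user.email=demo@example.com -c user.name=demo commit -qam "v2"

    # Later runs: fetch, compare commits, fast-forward only if needed.
    cd "$LOCAL"
    git fetch -q origin
    if [ "$(git rev-parse HEAD)" != "$(git rev-parse origin/HEAD)" ]; then
        git merge -q --ff-only origin/HEAD
    fi
    cat collect.groovy   # the collector would now execute the v2 script
    ```

    The commit comparison is what keeps steady-state runs cheap: when HEAD already matches origin/HEAD, nothing is merged at all.
    
    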