Forum Discussion

mnagel
7 years ago

add key/value store (redis or similar)

It is becoming very clear that we cannot rely on parameters in LM to drive scripts, either because some tokens are mysteriously unavailable for use as parameters (discovered only after assuming they would be available), or because tokens have length limits that preclude using them for data passed to logicmodules.  Please consider integrating a distributed key/value store like redis into LM, with data replicated among collectors.  This would help with access to configuration data as well as cross-run results within or across datasources.  Ideally this would work natively with Groovy and PowerShell.

8 Replies

  • Yes, a central key/value repository would be useful, with universal access to keys via scripting in particular.

  • Mnagel - Love this idea, but curious where you had issues with tokens being unavailable for use as parameters.  Looking to determine if I'm also "assuming they should be" and need to be corrected. 

  • One obvious missing token I have raised to support several times is WILDVALUE -- it is not possible to reference this in alerts, which means an alert cannot say which input value triggered it.  And of course the answer is that this is not a bug but a feature request, which I hear frequently.  You can also pass only a few specific tokens into PowerShell scripts, and the limitations are not well documented.

    This specific issue relates more to recent security monitoring we have been asked to implement.  It is not necessarily the correct tool, but then the whole point of using LM is to avoid a plethora of tools.  When we tried to encode the expected security settings for a Windows folder into a field, we found there are size limits, so we had to use fields merely to index hardcoded values in the script.  A key/value store would help.  It would also help with monitoring for changes in values, like group membership, etc.  I could roll my own and figure out how to get it synced across all the collectors, or it could be a service provided by the collectors themselves (preferable).

  • Any movement on this? I have another use case where caching within LM would be very helpful -- API result caching.  Cisco has an API to look up vulnerabilities based on platform and version, and it has daily call limits.  There is no reason to call it for the same platform/version more than once per day, but the "LM way" would be to set up something akin to the MS Patch DS @Mike Suding wrote, so this needs to be bound to AD per host.  I can hack around the issue by writing files, but it already bugs me that module scripts can write files at all; actually doing it reminds me how dangerous that is, and it makes me sad.  An integrated cache like this would also let all collectors in the cache group leverage the data rather than each caching separately (and increasing the API use rate).

    Another place API result caching would be useful is weather lookups for a location -- this only needs to be done once per location every so often (30 minutes perhaps).  Not every device in a location needs to actually look up the weather, but you want it to look like each one did so the status is bound to each device.  And those services have limits (and costs for increasing limits).  Maybe redis is not the specific solution, but something to achieve this would be very helpful.
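
    To make the file-writing workaround concrete, here is a rough Groovy sketch; the cache path, the platform/version key, and the fetch step are placeholders, not a finished module:

        import groovy.json.JsonOutput
        import groovy.json.JsonSlurper

        // Reuse an API answer for up to 24 hours so the rate-limited API is
        // called at most once per day per platform/version pair.
        def cacheFor = { String platform, String version, Closure fetch ->
            def cacheFile = new File("/tmp/psirt_${platform}_${version}.json")   // placeholder path
            long maxAgeMs = 24L * 60 * 60 * 1000
            if (cacheFile.exists() && (System.currentTimeMillis() - cacheFile.lastModified()) < maxAgeMs) {
                return new JsonSlurper().parse(cacheFile)    // still fresh: skip the API call
            }
            def result = fetch()                             // only here do we hit the API
            cacheFile.text = JsonOutput.toJson(result)
            return result
        }

        // Usage: the closure would wrap the real vulnerability lookup.
        def vulns = cacheFor("ASR1001-X", "16.9.4") { [advisories: []] }
        println vulns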

  • Another use case -- currently eventsources are of limited use due to the lack of correlation/counting.  For script-based eventsources, you could at least use a k/v store to detect the same event, extend its lifetime, and update a counter in the k/v store.  Not perfect, but this is impossible now, so it would be an improvement.
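
    As a rough sketch of the idea (the store here is just an in-memory map standing in for whatever LM might provide; a real k/v store would also let the entry's TTL be refreshed on each hit):

        import java.util.concurrent.ConcurrentHashMap

        def kv = new ConcurrentHashMap<String, Integer>()            // stand-in for the shared store
        def emitEvent = { String sig -> println "EVENT: ${sig}" }    // placeholder for the real emit

        def handle = { String hostname, String eventSignature ->
            def dedupKey = "evt:${hostname}:${eventSignature}"
            def seen = kv.get(dedupKey)
            if (seen != null) {
                kv.put(dedupKey, seen + 1)     // same event again: just bump the counter
            } else {
                kv.put(dedupKey, 1)
                emitEvent(eventSignature)      // only the first occurrence raises an event
            }
        }

        handle("web01", "disk full")   // raises the event
        handle("web01", "disk full")   // suppressed; counter is now 2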

  • Anonymous

    Suggestion for a way this could be done:

    Store the k/v pairs as properties on the root or on the parent folder. If there were a built-in Groovy method that let you do something as simple as

    props.set("object","key","value")

    that would work, right? I'm asking because there are other benefits to having this ability right in Groovy, without the need for authentication, authorization, or a huge chunk of code to make an LM API call.
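
    For contrast, roughly what that API-call boilerplate looks like today -- a sketch from memory, so the property endpoint, portal name, and credentials are placeholders that would need checking against the current REST docs:

        import javax.crypto.Mac
        import javax.crypto.spec.SecretKeySpec

        def account = "yourportal"          // placeholder portal name
        def accessId = "API_ID"             // placeholder credentials
        def accessKey = "API_KEY"
        def resourcePath = "/device/devices/123/properties/kv.example"   // endpoint from memory, unverified
        def verb = "PUT"
        def body = '{"value":"some cached value"}'
        def epoch = String.valueOf(System.currentTimeMillis())

        // LMv1 signature: base64(hex(HMAC-SHA256(verb + epoch + body + path)))
        def mac = Mac.getInstance("HmacSHA256")
        mac.init(new SecretKeySpec(accessKey.getBytes("UTF-8"), "HmacSHA256"))
        def hex = mac.doFinal((verb + epoch + body + resourcePath).getBytes("UTF-8")).encodeHex().toString()
        def signature = hex.getBytes("UTF-8").encodeBase64().toString()

        def conn = new URL("https://${account}.logicmonitor.com/santaba/rest${resourcePath}").openConnection()
        conn.requestMethod = verb
        conn.doOutput = true
        conn.setRequestProperty("Authorization", "LMv1 ${accessId}:${signature}:${epoch}")
        conn.setRequestProperty("Content-Type", "application/json")
        conn.outputStream.withWriter { it << body }
        println conn.responseCode   // all of the above is what props.set(...) would hide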

  • I never really received any useful information, but there are two different things that are not documented:

        * MEMCACHE method in datasources
        * memcache JAR in the LM agent/lib directory

    For the latter, I did figure out which library it is by examining the JAR's table of contents: com.danga.MemCached

    I have had more difficulty tracking down official documentation on that than I would expect (though I am sure it is out there).  I have found some useful examples, so I will see if I can actually leverage the library.

    https://sites.google.com/site/networkprogrammingforjava/home/miscellaneous/others/memcached
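
    Assuming that JAR really is the Whalin client its package name suggests, basic usage from Groovy would look roughly like this (the memcached address is just a placeholder):

        import com.danga.MemCached.MemCachedClient
        import com.danga.MemCached.SockIOPool

        // One-time pool setup per JVM; point it at whatever memcached the collectors share
        def pool = SockIOPool.getInstance()
        pool.servers = ["127.0.0.1:11211"] as String[]   // placeholder address
        pool.initialize()

        def cache = new MemCachedClient()

        // Store a value with a one-hour expiry, then read it back
        cache.set("weather:denver", '{"tempF":71}', new Date(System.currentTimeMillis() + 3600 * 1000L))
        println cache.get("weather:denver")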