Tuning a collector to work on a t2.micro EC2 instance?
I know it's not exactly recommended/reliable to use a 1GB/1-CPU-core machine for monitoring, but installing a "nano" sized collector on a t2.micro AWS instance and having it monitor just itself brings the instance to a screeching halt. When the collector is running, top shows CPU pegged at 100% almost nonstop. Memory is not hit quite as badly, though it does climb to 500 MB+. But the load average exceeds 5, which makes the system unusable. Sometimes this causes the instance to throw status alerts and even crash. Question: has anybody been able to tweak wrapper.conf and related files to make the collector less CPU-demanding?
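Not a full answer, but a possible starting point: the collector runs under the Tanuki Java Service Wrapper, so its JVM heap can be capped in wrapper.conf. The keys below are standard wrapper settings; the values are illustrative guesses for a 1 GB instance, not tested recommendations. Thread-pool sizes in agent.conf can reportedly be reduced as well, but those key names vary by collector version, so they are omitted here.

```properties
# wrapper.conf -- standard Tanuki Java Service Wrapper settings.
# Illustrative values only for a 1 GB t2.micro; back up the file
# and restart the collector service after editing.
wrapper.java.initmemory=128   # initial JVM heap (MB)
wrapper.java.maxmemory=384    # max JVM heap (MB) -- leaves headroom for the OS
```

Capping the heap mainly relieves memory pressure and GC thrash; if the CPU load is from collection tasks themselves, reducing monitored instances or polling frequency may matter more.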
Failover Collector confusion

LM Team, when you are on your primary collector, choose a failover collector, and consider checking the box "Automatically failback when THIS collector becomes available again": as worded, THIS collector could mean the collector you are currently configuring. Then again, the checkbox sits underneath (and indented from) the failover collector you just selected, so it's easy to convince yourself it refers to the failover collector. It would be worth re-wording it, or at least adding a hover-hint (?) box, to clarify that the failback occurs on the collector you are currently configuring. Looking back, I had checked this box differently depending on how I felt that day and which reading made the most sense at the time. Some clarification would be good. Thanks!
Read only agent / collector

I know I've brought this up before, but I'd like to bring it up again. LM's requirement that collectors run as local admins (or SYSTEM) is a GAPING security hole in your product. No amount of certificate signing or other such security measures is a replacement for running a collector or an agent as a read-only account. The fact is, with every security measure you take, if the collector is running as an admin or system account, it's going to be exploitable in one way or another. Having signed scripts and whatnot would be great, but it really shouldn't be the primary focus, IMO.

Security is much better when it's locked down by default and opened up as needed, compared to what you're doing, which is a completely open system that you're trying to layer security enhancements on top of. It's almost akin to having no firewall and then adding a few rules here and there to block certain types of traffic, while the rest of the network is completely exposed.

A more preferred architecture (security-wise) would be an agent/collector that can run as a read-only account and be supported. WMI, perfmon, and many other functions all work fine with a regular user when executed locally; that is why an agent or a special collector is needed. The most ideal communication path would be an "agent" that talks to a "collector", which then talks to the portal. This would also allow us to keep our internet access locked down. I suspect it would have the further advantage of taking a lot of load off the collectors and putting most of the work on the agent, which is ultimately better given that the workload would be distributed.

For now, though, even having a "supported" configuration for a collector not running as local admin/SYSTEM would be a great step in the right direction. The reason this is less of a concern for solutions like SolarWinds and SCOM is that they're on-premises solutions, meaning there is a much lower external risk factor.
You guys are cloud, and therefore need to design the solution from an untrusted point of view.
Collector dynamic groups

Collector groups were added recently, and are detailed here: https://communities.logicmonitor.com/topic/637-collectors Now let's expand upon that functionality: what if collectors could be grouped dynamically? Identically to how device dynamic groups work, could I assign properties to collectors, then build dynamic groups based on those properties? Ways that I envision sorting collectors: production/test, primary/backup, collector number (1-99, 100-199, 200-299, etc.), zip code, time zone, alphabetical. In each of these cases, this would give full flexibility over how collector upgrades are scheduled. Currently, a mass collector upgrade to a new General Release can be a little painful to manage, with so many upgrades running simultaneously (or at least in very short order). I am most interested in being able to split them up into primary, backup, and single collector groups. That way I know it's pretty safe to upgrade the collectors that have backups after hours, since there is another collector to fail over to, and I surely want to be available staff-wise when doing upgrades for those collectors that have no backup collector. Close behind sorting into primary/backup/single is the need to sort them by customer (which currently works fine). The issue is that you can't put a collector into more than one group, which precludes us from even setting up these two items manually.
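To make the idea concrete, here is a minimal sketch in Python. The collector records and the `role`/`env` properties are hypothetical (collectors don't carry custom properties today; that's the request); the point is that group membership falls out of a property lookup, just as it does for device dynamic groups.

```python
from collections import defaultdict

# Hypothetical collector records. The "properties" dict mimics the
# device-style custom properties this request would add to collectors.
collectors = [
    {"id": 1,   "hostname": "col-nyc-01", "properties": {"role": "primary", "env": "production"}},
    {"id": 2,   "hostname": "col-nyc-02", "properties": {"role": "backup",  "env": "production"}},
    {"id": 105, "hostname": "col-lab-01", "properties": {"role": "single",  "env": "test"}},
]

def dynamic_groups(collectors, key):
    """Group collectors by the value of one custom property,
    the way device dynamic groups match on device properties."""
    groups = defaultdict(list)
    for c in collectors:
        groups[c["properties"].get(key, "unset")].append(c["hostname"])
    return dict(groups)

# dynamic_groups(collectors, "role")
# -> {"primary": ["col-nyc-01"], "backup": ["col-nyc-02"], "single": ["col-lab-01"]}
```

Because membership is computed rather than assigned, the same collector could appear in both a by-role group and a by-customer group, which sidesteps the one-group-per-collector limitation described above.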
Collector "data collecting task" equivalent for netflow

Working through best practices for creating collector dashboards. The various data collecting tasks provide a wealth of info that can be customized as desired into widgets for such a dashboard. But there doesn't seem to be anything that provides visibility into the underlying collector mechanisms (tasks, processes, threads, CPU, memory, etc.) that support NetFlow operation. It would be nice to see such info and to be able to put it onto a collector dashboard, particularly since it's best to pipe NetFlow to a dedicated collector.