Re: Selenium on Linux Collector to execute synthetic webchecks

Glad to hear you found this helpful @Teijo Forsell! I have just confirmed that the LM Exchange DataSources posted above should now be public. Thanks for sharing your DataSource template as well!

Selenium on Linux Collector to execute synthetic webchecks

You can use this bash script to install and configure all the necessary components to execute synthetic webchecks on a Linux (Debian/Ubuntu) Collector. The script installs Chrome, Chromedriver, and Selenium on the Collector's host (be sure the Collector is installed prior to executing the bash script). Once this is done, you can execute Selenium commands from within LogicMonitor DataSources (sample DataSources can be found in the LM Exchange: "Synthetics Check Template" and "LogicMonitor Login Selenium").

The easiest approach I've found for getting a synthetics recording into LM is to use the Selenium IDE Chrome extension to record, then export the recording as a Java file. From there, simply copy-paste the driver commands from the exported file into the DataSource. For simplicity and clarity, I've always applied the synthetics DataSource to the Collector itself; a benefit of this approach is that you can apply multiple synthetics DataSources to a single Collector. The final result can look something like this:

Though it is not depicted in this example, you can insert timestamps into the DataSource Groovy script to measure the duration of each step in your webcheck. For more info on uploading other custom jars to the Collector, check out this Support Page.
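The step-timing idea above is straightforward: record a timestamp around each webcheck step and report the elapsed time as a datapoint. The post does this inside the DataSource's Groovy script; the sketch below just illustrates the same pattern in Python, with hypothetical step names and dummy actions standing in for real driver calls.

```python
# Sketch of per-step timing for a synthetic webcheck. Step names and the
# sleep() placeholders are hypothetical; in a real check each action would
# be a Selenium call such as driver.get(...) or driver.find_element(...).
import time

def timed_step(name, action, durations):
    """Run one synthetic-check step and record how long it took, in ms."""
    start = time.time()
    action()
    durations[name] = int((time.time() - start) * 1000)

durations = {}
timed_step("load_login_page", lambda: time.sleep(0.01), durations)
timed_step("submit_credentials", lambda: time.sleep(0.01), durations)

# Emit one key=value line per step so the collector script can parse
# each duration as its own datapoint.
for name, ms in durations.items():
    print(name + "=" + str(ms))
```

Each duration then maps naturally to a separate datapoint in the DataSource, so you can alert on a single slow step rather than only on the total check time.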
Re: Monitoring Linux systems using SSH

Hey @Mosh - You can find it with locator code: CGWHNZ

Re: Monitoring Linux systems using SSH

There is now a collection of SSH-based DataSources for monitoring Linux: Linux_CPU_Cores, Linux_Block_DevicePerformance, Linux_Memory_Usage, Linux_Host_Uptime, Linux_Filesystem_Usage, Linux_CPU_Load, Linux_Network_TCPUDP, Linux_Network_Interfaces. They work in tandem with the Linux_Monitoring_SSH PropertySource. These can all be added via the LogicMonitor DataSource Repository.

Re: DataSource - Acknowledged Alerts Leaderboard

Upgraded version with a corrected Avg. Time To Ack, as well as new datapoints: Total Cleared Alerts (per user) and Avg. Time To Clear (per user). Again, for the past 1000 alerts. Locator Code: FLFAJN

FireEye NX DataSources

6FXJKT - Disks
9PGJCL - Fans
FWHEYZ - Power Supply
M29KRM - RAID Status
PJZE2H - System Status
J4RNAD - Temperature
GGDD29 - Malware Protection Alerts
HL9ZYA - Malware Protection Statistics
M69E6T - Malware Protection Threats

GEIST Watchdog DataSources

HM7MKA - GEIST_Watchdog_Internal_Dew_Point
CMFNFD - GEIST_Watchdog_Internal_Humidity
EY2LN6 - GEIST_Watchdog_Internal_Temp

Re: How to change "Send Collector Down Notification" to more than 60 collectors using script

Hi Shawn, This script would not be run as a DataSource within LogicMonitor. It would be run as a .py file from your server or workstation. You can see our full documentation on using the REST API here: https://www.logicmonitor.com/support/rest-api-developers-guide/overview/using-logicmonitors-rest-api/ Best, Jake

Re: How to change "Send Collector Down Notification" to more than 60 collectors using script

Hi Shawn, Yes, you can use our REST API to programmatically change "Send Collector Notifications" to One Time. We have documentation on how to make the PUT request here, but you can use a PATCH request so that you're only modifying the resendIval field.
Here's the code I used to execute this (Python 2; it assumes Company, AccessId, and AccessKey are defined earlier in the script):

    import base64
    import hashlib
    import hmac
    import time
    import requests

    # Request Info
    httpVerb = 'PATCH'
    resourcePath = '/setting/collectors/8'
    queryParams = '?patchFields=resendIval'
    data = '{"resendIval":0}'

    # Construct URL
    url = 'https://' + Company + '.logicmonitor.com/santaba/rest' + resourcePath + queryParams

    # Get current time in milliseconds
    epoch = str(int(time.time() * 1000))

    # Concatenate request details
    requestVars = httpVerb + epoch + data + resourcePath

    # Construct signature
    signature = base64.b64encode(hmac.new(AccessKey, msg=requestVars, digestmod=hashlib.sha256).hexdigest())

    # Construct headers
    auth = 'LMv1 ' + AccessId + ':' + signature + ':' + epoch
    headers = {'Content-Type': 'application/json', 'Authorization': auth}

    # Make request
    response = requests.patch(url, data=data, headers=headers)

    # Print status and body of response
    print 'Response Status:', response.status_code
    print 'Response Body:', response.content

With this, you'll loop through your collector IDs (I recommend using the GET all collectors request, followed by the PATCH requests). Hope this helps! Jake

Re: Aerohive Rogue Access Points

DBrandt, Have you looked into our AeroHive DataSources? These might provide the information you're looking for.
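The LMv1 signature construction shown earlier can be adapted to Python 3, where the hmac and base64 calls need explicit byte encoding. This is a sketch, not an official client: the access ID, key, and epoch below are placeholders, and only the header construction is shown (the actual PATCH request would be sent with these headers as in the Python 2 snippet).

```python
# Python 3 adaptation of the LMv1 Authorization header construction.
# Credentials here are placeholders, not real values.
import base64
import hashlib
import hmac

def lmv1_auth_header(access_id, access_key, http_verb, resource_path, data, epoch_ms):
    """Build the LMv1 Authorization header value for a LogicMonitor REST call.

    The signature is HMAC-SHA256 over verb + epoch + body + path, taken as a
    hex digest and then base64-encoded -- the same recipe as the Python 2
    snippet. Note the query string is not part of the signed string.
    """
    request_vars = http_verb + epoch_ms + data + resource_path
    hex_digest = hmac.new(
        access_key.encode("utf-8"),
        msg=request_vars.encode("utf-8"),
        digestmod=hashlib.sha256,
    ).hexdigest()
    signature = base64.b64encode(hex_digest.encode("utf-8")).decode("utf-8")
    return "LMv1 " + access_id + ":" + signature + ":" + epoch_ms

# Hypothetical usage with placeholder credentials and a fixed epoch:
header = lmv1_auth_header(
    access_id="myAccessId",
    access_key="myAccessKey",
    http_verb="PATCH",
    resource_path="/setting/collectors/8",
    data='{"resendIval":0}',
    epoch_ms="1500000000000",
)
```

Wrapping the signing in a function like this makes the collector loop simple: for each ID returned by the GET-all-collectors request, rebuild resource_path and a fresh epoch, regenerate the header, and send the PATCH.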