Modules for Citrix Cloud/DaaS/VAD monitoring
Hi, here are some modules to monitor Citrix DaaS/VAD via the Citrix Monitor API. These might be helpful in environments with a mixture of DaaS and on-prem VAD, as the same modules can be used for both. Setup details are in the module notes; see the CitrixDaaS_Token notes for the Citrix Cloud API setup. I have made the .xml export of each module available on GitHub, they can be downloaded from here: https://github.com/chrisred/logicmonitor-citrixdaas

The modules are:

CitrixDaaS_ApplicationUsage.xml
CitrixDaaS_ConnectionFailures.xml
CitrixDaaS_DeliveryGroups.xml
CitrixDaaS_LogonPerformace.xml
CitrixDaaS_Machines.xml
CitrixDaaS_Token.xml

I'll try to keep an eye on this post for any questions.

Linux services and autodiscovery
Hey guys, I just wanted to let you know that I took LogicMonitor's default datasource, "Linux_SSH_ServiceStatus", and added auto discovery to it. The only thing that is needed at the resource or group level is that the following three properties are set:

ssh.user
ssh.pass
linux.ssh.services.regex (default: "sshd\.service")

I published the datasource under 93Y4PC (currently under security review as of this post). The discovery script gets the output of:

systemctl list-units --all --type=service --plain --state=loaded | egrep service

It then loops through each line of the output and checks whether it matches the regex from the property "linux.ssh.services.regex". A person could even set the regex to ".*", which would grab all of the services, then turn around and create a filter to exclude certain things. For example, if I wanted everything except services with the name ssh in them, I could create a filter that says ##WILDVALUE## does not contain ssh.

Rubrik: Cloud Data Management & Enterprise Backup
FYI: Official LogicMonitor supported Rubrik modules are now in CORE.

Here are some Rubrik LogicModules. NOTE: Alert thresholds have been lightly/conservatively set. Requires rubrik.user & rubrik.pass to be set.

PropertySources

Rubrik_Product_Info CDA9LK
addCategory_Rubrik NGLPWA

DataSources

Rubrik_Archive_UnmanagedObjects 4376C6
Rubrik_Cluster_Hosts X6CXZN
Rubrik_Cluster_Nodes 6D43GA
Rubrik_Global_IOStatistics PGJYTY
Rubrik_Global_PhysicalHostIngest 6RGWMJ
Rubrik_Global_Storage EZHJFX
Rubrik_Global_Streams EHX4RZ
Rubrik_JobMonitoring_ActivePast24Hours 2NP6RY
Rubrik_MSSQL_Databases Z622YK
Rubrik_Node_Drives RHZT4F
Rubrik_Reports_ObjectBackupTaskSummaryBySLADomain MH2Y6J
Rubrik_Reports_ObjectProtectionSummaryBySLADomain TDXHH9
Rubrik_Reports_SLAComplianceSummaryBySLADomain 6Y9NNM
Rubrik_SAML_SSOStatus TFHRFN
Rubrik_SLA_Domains 44THK9
Rubrik_Storage_CompressionStatistics AC7WEE
Rubrik_Storage_ManagedVolumes RW6ZCA
Rubrik_VMware_VMs XLT9LL

Powershell in five easy steps
A few years ago, I wrote a powershell tutorial. Thought some may find it useful: On to the Future with Powershell – PowerShell.org

It's less of a programming tutorial than most. I wrote it for the "you can tear this mouse from my cold hands" type of admin who is being forced to learn powershell to do their job. It's peppered with some nuggets of geeky humor to keep it rolling forward and is just enough to get someone started using powershell (or any other programming language) by focusing on the larger concepts of Storage, Input, Output, Decisions, Loops... the building blocks of every language from assembly to applescript.

Print Spooler
Has anyone been able to get a non-cluster print spooler to work? I'm trying to list out the printers connected to the print spooler so I don't have to manually maintain an individual printer list, and I have been unable to get custom scripting to work by adjusting the cluster code. Let me know if anyone has a better way to do this.

NOC Rollup Status Dashboards for MSPs
LM doesn't come with it out of the box, so I built the NOC Dashboard I've wanted. It provides high-level, at-a-glance health indicators for each of the client environments we manage. This makes a great "big board" for a NOC room or a second-screen status board for work-from-home NOC/Support folks. I have three examples in this code for ways to filter for specific teams/purposes. This all collapses correctly for ease of reference in Powershell ISE on Windows. Line 282 references a dataSource I wrote that counts the frequency of specific eventlog events to illustrate potential brute force attempts (CTM are my initials; we tag our scripts to make finding the best source of answers faster in the future - an old habit from pen & paper change logs from a previous job). As any screenshots would contain client names, I'm unable to post any screenshots of the results of this, but my current settings for my Main dashboard are (this is the first Dashboard I've made that looks better in UIv4 than 3):

...

#!!! These two need to be changed. First is a string, second an integer
#!!! See the comment block below for instructions

# The first chunk of your company's logicmonitor URL
$company = "yourCompanyNameHere"

# ID of the group to be used as a source for the NOC widget items
$parentGroupID = <parentGroupID>

<#
Netgain Technology, llc ( https://netgaincloud.com )
2/26/2024 - Developed by Cole McDonald

Disclaimer: Neither Netgain nor Cole McDonald is responsible for any unexpected
results this script may cause in your environment.

To deploy this:
- COLLECTOR: you will need a collector for scripting, this will be the single
  applies-to target. You may need to increase the script timeout depending on
  the size of your device deployment.
- DASHBOARD: you will need a Dashboard with a NOC widget on it. The name can be
  whatever you'd like, there will be a name change in the "name" property for
  the initial array.
  In the case of the first example here, "NOC - Master".
- PARENT GROUP: you will need to identify the ID# of the group you wish to use
  as the source for the subgroup list and set the $parentGroupID to the
  appropriate ID#

Purpose: Create an auto-updating high level NOC dashboard that can show
- Rollup state for a list of client subgroups from our \Clients group
- Group Indicators for a specific dataSource
- Group indicators for a subset of devices within each group

After the API region, there are three separate dashboards referenced to
illustrate the 3 methods for using this dataSource.

NOTE: my code uses backticks for line continuation. Where possible in my code,
each line indicates a single piece of information about the script's algorithm
and the first character in each line from a block indicates the line's
relationship to the one above it.
#>

#region Rest API Initialization and Functions
# Init variables used in the RESTApi functions
$URLBase   = "https://$company.logicmonitor.com/santaba/rest"
$accessID  = "##ApiAccessID.key##"
$accessKey = "##ApiAccessKey.key##"

#-------- The Functions ----------
function Send-Request {
    param (
        $cred ,
        $URL ,
        $accessid = $null,
        $accesskey = $null,
        $data = $null,
        $version = '3' ,
        $httpVerb = "GET"
    )
    if ( $accessId -eq $null) { exit 1 }
    <# Use TLS 1.2 #>
    [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
    <# Get current time in milliseconds #>
    $epoch = [Math]::Round( ( New-TimeSpan `
        -start (Get-Date -Date "1/1/1970") `
        -end (Get-Date).ToUniversalTime()).TotalMilliseconds )
    <# Concatenate Request Details #>
    $requestVars = $httpVerb + $epoch + $data + $resourcePath
    <# Construct Signature #>
    $hmac = New-Object System.Security.Cryptography.HMACSHA256
    $hmac.Key = [Text.Encoding]::UTF8.GetBytes( $accessKey )
    $signatureBytes = $hmac.ComputeHash( [Text.Encoding]::UTF8.GetBytes( $requestVars ) )
    $signatureHex = [System.BitConverter]::ToString( $signatureBytes ) -replace '-'
    $signature = [System.Convert]::ToBase64String( [System.Text.Encoding]::UTF8.GetBytes( $signatureHex.ToLower() ) )
    <# Construct Headers #>
    $auth = 'LMv1 ' + $accessId + ':' + $signature + ':' + $epoch
    $headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
    $headers.Add( "Authorization", $auth )
    $headers.Add( "Content-Type" , 'application/json' )
    # sets the API version header
    $headers.Add( "X-version" , $version )
    <# Make Request #>
    $response = Invoke-RestMethod `
        -Uri $URL `
        -Method $httpVerb `
        -Body $data `
        -Header $headers `
        -erroraction SilentlyContinue `
        -warningaction SilentlyContinue
    Return $response
}

function Get-LMRestAPIObjectListing {
    param (
        $URLBase ,
        $resourcePathRoot , # "/device/devices"
        $size = 1000 ,
        $accessKey ,
        $accessId ,
        $version = '2'
    )
    $output = @()
    $looping = $true
    $counter = 0
    while ($looping) {
        # re-calc offset based on iteration
        $offset = $counter * $size
        $resourcePath = $resourcePathRoot
        $queryParam = "?size=$size&offset=$offset"
        $url = $URLBase + $resourcePath + $queryParam
        # Make Request
        $response = Send-Request `
            -accesskey $accessKey `
            -accessid $accessId `
            -URL $url `
            -version $version
        if ( $response.items.count -eq $size ) {
            # Return set is full, more items to retrieve
            $output += $response.items
            $counter++
        } elseif ( $response.items.count -gt 0 ) {
            # Return set is not full, store data, end loop
            $output += $response.items
            $looping = $false
        } else {
            # Return set is empty, no data to store, end loop
            $looping = $false
        }
    }
    write-output $output
}

# Get Dashboards
$resourcePath = "/dashboard/dashboards"
$dashboards = Get-LMRestAPIObjectListing `
    -resourcePathRoot $resourcePath `
    -accessKey $accessKey `
    -accessId $accessID `
    -URLBase $URLBase

# Get Widgets
$resourcePath = "/dashboard/widgets"
$widgets = Get-LMRestAPIObjectListing `
    -resourcePathRoot $resourcePath `
    -accessKey $accessKey `
    -accessId $accessID `
    -URLBase $URLBase

# Get Groups
$resourcePath = "/device/groups"
$Groups = Get-LMRestAPIObjectListing `
    -resourcePathRoot $resourcePath `
    -accessKey $accessKey `
    -accessId $accessID `
    -URLBase $URLBase
#endregion

function generateJSON {
    param(
        $dashInfo,
        $clientnames,
        $deviceDisplayName = "*",
        $DSDisplayName = "*"
    )
    $itemArray = @()
    foreach ($name in $clientnames) {
        $itemArray += @{
            "type"                  = "device"
            "deviceGroupFullPath"   = "Clients/$name"
            "deviceDisplayName"     = $deviceDisplayName
            "dataSourceDisplayName" = $DSDisplayName
            "instanceName"          = "*"
            "dataPointName"         = "*"
            "groupBy"               = "deviceGroup"
            "name"                  = "`#`#RESOURCEGROUP`#`#"
        }
    }
    # Write JSON back to the API for that widget
    $outputJSON = "`n`t{`n`t`t`"items`" : [`n"
    foreach ($item in $itemArray) {
        $elementJSON = @"
        {
            `"type`" : `"$($item.type)`",
            `"dataPointName`" : `"$($item.dataPointName)`",
            `"instanceName`" : `"$($item.instanceName)`",
            `"name`" : `"$($item.name)`",
            `"dataSourceDisplayName`" : `"$($item.dataSourceDisplayName)`",
            `"groupBy`" : `"$($item.groupBy)`",
            `"deviceGroupFullPath`" : `"$($item.deviceGroupFullPath)`",
            `"deviceDisplayName`" : `"$($item.deviceDisplayName)`"
        }
"@
        if ($item -ne $itemArray[-1]) {
            $outputJSON += "$elementJSON,`n"
        } else {
            # Last Item
            $outputJSON += "$elementJSON`n`t`t]`n`t}"
        }
    }
    write-output $outputJSON
}

# Get Client Names from groups
$clientnames = ( $groups `
    | where parentid -eq $parentGroupID `
    | where name -notmatch "^\." ).name | sort

#ID Master Dashboard
# declare dashboard name and set default id and widgetid to use in the loop later
$masterDash = @{ id=0; widgetid=0; name="NOC - Master" }
$master = $dashboards | ? name -eq $masterDash.name
if (($master.name).count -eq 1) {
    $masterDash.id = $master.id
    $masterDash.widgetid = $master.widgetsConfig[0].psobject.Properties.name
    $outputJSON = generateJSON `
        -dashInfo $masterDash `
        -clientnames $clientnames
    $resourcePath = "/dashboard/widgets/$($masterDash.widgetid)"
    $url = $URLBase + $resourcePath
    $widget = Send-Request `
        -accessKey $accessKey `
        -accessId $accessID `
        -data $outputJSON `
        -URL $URL `
        -httpVerb "PATCH"
}

#ID Network Dashboard
# declare dashboard name and set default id and widgetid to use in the loop later
$networkDash = @{ id=0; widgetid=0; name="NOC - Network" }
# preset filters for specific dashboard targeting by device
$networkDeviceDisplayNameString = "*(meraki|kemp)*"
$network = $dashboards | ? name -eq $networkDash.name
if (($network.name).count -eq 1) {
    $networkDash.id = $network.id
    $networkDash.widgetid = $network.widgetsConfig[0].psobject.Properties.name
    $outputJSON = generateJSON `
        -dashInfo $networkDash `
        -clientnames $clientnames `
        -deviceDisplayName $networkDeviceDisplayNameString
    $resourcePath = "/dashboard/widgets/$($networkDash.widgetid)"
    $url = $URLBase + $resourcePath
    $widget = Send-Request `
        -accessKey $accessKey `
        -accessId $accessID `
        -data $outputJSON `
        -URL $URL `
        -httpVerb "PATCH"
}

#ID Security Dashboard
# declare dashboard name and set default id and widgetid to use in the loop later
$securityDash = @{ id=0; widgetid=0; name="NOC - Security" }
# preset filters for specific dashboard targeting by datasource
$securityDataSourceDisplayNameString = "Event Frequency Sec:4625 CTM"
$security = $dashboards | ? name -eq $securityDash.name
if (($security.name).count -eq 1) {
    $securityDash.id = $security.id
    $securityDash.widgetid = $security.widgetsConfig[0].psobject.Properties.name
    $outputJSON = generateJSON `
        -dashInfo $securityDash `
        -clientnames $clientnames `
        -DSDisplayName $securityDataSourceDisplayNameString
    $resourcePath = "/dashboard/widgets/$($securityDash.widgetid)"
    $url = $URLBase + $resourcePath
    $widget = Send-Request `
        -accessKey $accessKey `
        -accessId $accessID `
        -data $outputJSON `
        -URL $URL `
        -httpVerb "PATCH"
}

Programmatic Ping Alert
We currently lack the ability to whitelist domain names on our firewall, so I have to do everything via IP. Recently I've come across an issue where a company won't give me their external IPs because they can change, or so they say. For several weeks I've pinged the IPs and it has always been 1 of 4 IPs. Has anyone created some kind of ping alert that does something like "ping easypost.com and api.easypost.com; if the IPs returned are not in 169.62.110.130-169.62.110.133, alert me"? I'm not much of a programmer myself so I'd need something pretty "plug and play". TIA!
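Not plug and play, but here is a rough sketch of the logic: a scripted datasource on a Windows collector could resolve the hostnames and return 1 when any address falls outside the expected set. The hostnames and range below are just the ones from the question, and this is an untested sketch rather than a finished datasource.

```powershell
# Hypothetical check: resolve each hostname and flag any IP outside the allowed set
$hosts   = @("easypost.com", "api.easypost.com")
$allowed = 130..133 | ForEach-Object { "169.62.110.$_" }

$bad = foreach ($h in $hosts) {
    # .NET DNS lookup works on any Windows collector without extra modules
    [System.Net.Dns]::GetHostAddresses($h) |
        Where-Object { $_.AddressFamily -eq 'InterNetwork' } |
        Where-Object { $allowed -notcontains $_.IPAddressToString }
}

# datapoint alerts on 1
if ($bad) { Write-Output 1 } else { Write-Output 0 }
```

A datapoint with a threshold of "= 1" would then fire whenever an unexpected address shows up.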
How to redirect the output of the groovy script to the collector log file using groovy script?

In my groovy script, I want to redirect the output from the groovy script into the collector's log file. What should the groovy code be to redirect the output to the collector's log file? Can anyone help me here?

Citrix Cloud Monitoring
Installation

1. Install the package from LM Exchange "Citrix Cloud"
2. Install the Cloud Connector property source: Locator JYW9D7

Configuration

This datasource requires several properties to be set:

CITRIX.CLOUD.CUSTOMER - This is found in the Citrix Cloud Portal: Identity and Access Management > API Access > Secure Clients. Copy the bolded customer ID on the page.
CITRIX.CLOUD.ID - Create a secure client, you can name it "LogicMonitor". The ID here will be used for this property.
CITRIX.CLOUD.PASS - This is the secret from when you created the secure client.
CITRIXCLOUD.OAUTH.KEY - This will be autogenerated and populated by LogicMonitor using the above credentials. There is a Citrix Cloud OAuth datasource that will generate a bearer token and save it as a property on the device.
LM.API.ID - Create an API token in LogicMonitor with administrator privileges, copy the Access ID.
LM.API.KEY - This is the API token access key that was created above.
LM.API.ACCOUNT - This is your LogicMonitor account name, you can probably copy the subdomain of your LM portal: https://yourco.logicmonitor.com

1. Set the properties above (except CITRIXCLOUD.OAUTH.KEY) wherever you'd like depending on your folder structure. I like to set the LM API properties at the root and the Citrix Cloud properties per client (folder).
2. Find your cloud connector device in LM and add the category "PrimaryCC". Make sure you have the Cloud Connector property source installed as well!
3. The OAuth datasource should run, generating a token that the other datasources will use to query Citrix Cloud's API. You can also do a manual "poll now" to speed up the process. You should now see the CITRIXCLOUD.OAUTH.KEY property on the device.

If you have any issues, feel free to private message me!

Does anyone have any experience with monitoring Windows Processes?
I've checked the community for datasources and I don't see anything close to what I'm specifically looking for. Our organization currently utilizes the Microsoft_Windows_Services datasource (modified a little bit for our specific needs) to monitor services. I'm looking for something similar to monitor Windows processes. Similar to the Microsoft_Windows_Services datasource, what I am hoping to accomplish is to provide a list of keywords that will either match or be contained in the process name that I want to monitor, provide a list of machines that I want to monitor those processes on, and then get alerted if those processes stop running. Some issues I am running into so far are:

Win32_Process always returns a value of NULL for status and state, so I cannot monitor those two class-level properties.
Powershell's Get-Process does not return status or state, rather it just looks for processes that are actively running, so I would need to get creative in having LogicMonitor create the instance and decide what value to monitor in the instance.
Some of the processes I want to monitor create multiple processes with the same name, and LogicMonitor then groups them all together into one instance, which makes monitoring difficult.
Some of the processes I want to monitor only run if an application is manually launched, which means that again I will need to get creative in how I set up monitoring, because I don't want to get alerts when a process that I know shouldn't be running is not running.

Because the processes I am trying to monitor are not going to be common for everyone everywhere, something that other people could do to try to replicate my scenario would be: Open Chrome. When Chrome is launched, you will get a process called "Chrome". Now, open several other tabs of Chrome; you will just get more processes named "Chrome".
Now, keeping in mind the points I made earlier, set up monitoring to let you know when the 3rd tab in Chrome has been closed, even though the rest of the Chrome tabs are still open. How would you break that down? My first thought would be to monitor the PIDs; however, when you reboot your machine, your PIDs will likely change. Also, I don't want to have the datasource wildvalue search by PID, because that would get confusing really fast once you have 2 or 3 different PIDs that you want to monitor. All suggestions are welcome, and any help is greatly appreciated. Bonus points if you can get this to work with the discovery method as Script and you use an embedded Groovy or Powershell script.
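One way around the grouping problem, as a sketch rather than a finished datasource: discover one instance per process name and collect the instance count as the datapoint, then alert when the count drops below the expected value instead of tracking individual PIDs (the watched names below are placeholders; a real DS would read them from a device property).

```powershell
# Sketch: count running processes per watched name.
# $watched is a placeholder list; a real datasource would build it from a property.
$watched = @("chrome", "notepad")

foreach ($name in $watched) {
    # -ErrorAction SilentlyContinue so a missing process yields a count of 0
    $count = @(Get-Process -Name $name -ErrorAction SilentlyContinue).Count
    # key=value output for a script datasource datapoint
    Write-Output "$name.count=$count"
}
```

This sidesteps PID churn across reboots, though it cannot distinguish which specific instance (the "3rd tab") went away, only that the total changed.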
Cisco Umbrella Virtual Appliance Datasource and Propertysource

Update: I jumped the gun… they aren't out of security review yet… will update once they are.

I have shared a datasource and propertysource I've created for monitoring the health of Cisco Umbrella Virtual Appliances in my environment. Thought they could help out others that might be using them as well.

SaaS platform monitoring using API or default integration if possible?
I would like to integrate and monitor the SaaS platforms below. If anybody has an idea of the best way to do it, please let me know or help to share any documentation.

Genesys Voice Cloud
Airwatch
Tanium Cloud
Absolute
JAMF
Tetherfi
Teradici
Chrome Admin Console
Imaging servers and EUC connectors

Modules for Zerto monitoring
Hi, here are some modules to monitor Zerto via their API. Appliances (ZVM/ZCM) and the Zerto Analytics portal are supported. I have made the .xml export of each module available on GitHub, they can be downloaded from here: https://github.com/chrisred/logicmonitor-zerto

The modules are:

ZertoAnalytics_Alerts.xml
ZertoAnalytics_Datastores.xml
ZertoAnalytics_Sites.xml
ZertoAnalytics_Token.xml
ZertoAnalytics_VPGs.xml
ZertoAppliance_Alerts.xml
ZertoAppliance_Datastores.xml
ZertoAppliance_PeerSites.xml
ZertoAppliance_Token.xml
ZertoAppliance_VPGs.xml
ZertoAppliance_VRAs.xml

I'll try to keep an eye on this post for any questions.

VM creation date info from Vsphere
Hi, I am trying to add an attribute for VM creation date on the datasource VMware_vSphere_VirtualMachinePerformance. I tried to add the below line in the Active Discovery script:

'auto.config.create_Date' : vmConfig?.createDate,

But I am getting an error. Has anyone else already tried getting this property of the VM, or knows a solution?

Multi Step website test with variable
We have a website test on a site that uses session-based tokens, which are passed by the authentication process. I can curl the website and get a token, and then paste the token as a bearer token in another curl, but I can't figure out how to do this with a website test, or how to build a new datasource to do this. The token expires after 15 minutes, so setting it up as a persistent value in the headers for the webtest doesn't work. Can anyone help?
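One common pattern for this is a scripted datasource that performs both steps on every poll, so the token is always fresh and the 15-minute expiry never matters. A rough PowerShell sketch of the two-step flow; the URLs, credential properties, and the "access_token" field name are made up, so substitute whatever the real site returns:

```powershell
# Hypothetical two-step check: fetch a short-lived token, then call the
# protected endpoint with it in a Bearer header.
$authUrl = "https://example.com/api/auth"
$dataUrl = "https://example.com/api/health"

# Credentials pulled from device properties (placeholder property names)
$body  = @{ username = "##auth.user##"; password = "##auth.pass##" } | ConvertTo-Json
$token = (Invoke-RestMethod -Uri $authUrl -Method Post -Body $body `
            -ContentType "application/json").access_token

# Use the fresh token immediately, so expiry is never an issue
$response = Invoke-RestMethod -Uri $dataUrl -Headers @{ Authorization = "Bearer $token" }
Write-Output "status=$($response.status)"
```

The same idea can often be expressed in a multi-step API web check, where step 1 captures the token into a variable that step 2 references in its Authorization header.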
DataSource recording zeroes for non-zero raw responses

I wrote a DataSource that is exhibiting a weird problem. Most of the data points are recording zeroes, but when I click "Poll Now", I see non-zero values in the Raw Response. The DataSource uses multi-line key=value pairs. I'm not sure how to troubleshoot this. Has anyone seen this before? Here is a screenshot to illustrate my issue:

Monitoring Ec2 instances
Hi All, I have been going through the documentation and it suggests that when we are setting up monitoring for EC2 with autoscaling we should select a netscan frequency of 10 minutes. This is the minimum time we can configure, and a new device may take up to 15 minutes to be monitored. My question is: if a new instance is launched, is there a way we can bring it into monitoring in less than 5 or 10 minutes?

SolidFire DataSources
Re-worked SolidFire DataSources. They require you to set:

api.user
api.pass
api.version

DataSource Locator Codes:

7HKYDG: NetApp_SolidFire_Volumes
WFFDKZ: NetApp_SolidFire_Cluster
7TPCAM: NetApp_SolidFire_Drives
MYFFMA: NetApp_SolidFire_Nodes

Tested and working against v12. Enjoy, David

Eventsource
Hi, I am using the following code for an event script. The output I am getting is "There would be no script events for the selected device." If I run the script in a datasource it works, and the output is in JSON format.

import groovy.sql.Sql
import groovy.json.JsonBuilder
import org.json.JSONObject

def hostname = hostProps.get("system.hostname")
def user = hostProps.get("sybase.user")
def pass = hostProps.get("sybase.pass")
def port = 21000
def url = "jdbc:sybase:Tds:" + hostname + ":21000"
def driver = "com.sybase.jdbc4.jdbc.SybDriver"
def sql = Sql.newInstance(url, user, pass, driver)

if (sql.connection) {
    // println("Connected to Sybase server on ${hostname}")
    def filePath = "/tmp/BTS01DEV_WNL_DS.log" // Replace with the actual file path on the Linux machine
    def file = new File(filePath)
    if (file.exists() && file.isFile()) {
        def lines = file.readLines()
        // Define the keywords to search for
        def keywords = [
            "warning",
            "error",
            "fail",
            // Add more keywords as needed
        ]
        // Find the lines that contain the keywords along with their indices
        def matchedLines = lines
            .findAll { line -> keywords.any { keyword -> line.toLowerCase().contains(keyword) } }
            .collect { line -> [line: lines.indexOf(line) + 1, message: line] }
        // Construct the final JSON object
        def json = new JsonBuilder()
        json {
            events matchedLines
        }
        // Convert the JSON object to a string
        def jsonString = json.toString()
        // Remove leading characters until the first '{' character
        def startIndex = jsonString.indexOf('{')
        def trimmedJsonString = jsonString.substring(startIndex)
        // Create the JSONObject from the trimmed JSON string
        def jsonObject = new JSONObject(trimmedJsonString)
        println(jsonObject.toString())
    } else {
        println("File not found or is not a regular file")
    }
} else {
    println("Failed to connect to Sybase server on ${hostname}")
}

Output in the datasource:

{"events":[{"line":41,"message":"00:0000:00000:00000:2021/04/21 14:54:02.58 kernel Warning: Operating System stack size is greater than 2MB. If it is too large, ASE may run out of memory during thread creation. You can reconfigure it using 'limit' (csh) or 'ulimit' (bash)"},{"line":177,"message":"00:0006:00000:00002:2021/04/21 14:54:03.46 kernel Warning: Cannot set console to nonblocking mode, switching to blocking mode."},{"line":313,"message":"00:0000:00000:00000:2021/04/21 16:42:08.18 kernel Warning: Operating System stack size is greater than 2MB. If it is too large, ASE may run out of memory during thread creation. You can reconfigure it using 'limit' (csh) or 'ulimit' (bash)"},{"line":636,"message":"00:0002:00000:00002:2021/04/21 16:41:26.28 kernel Warning: Cannot set console to nonblocking mode, switching to blocking mode."}

When an anomaly isn't an anomaly what could i do?
What can I do when anomaly detection won't work (something that is seen on a regular basis), and dynamic thresholds also won't help where the value is within range? For example, a drive on a server gets filled with data (the drive is normally cleared down on a daily basis), but when someone decides to upload a larger than expected amount, the drive hasn't been cleared, or with other uploads throughout the day there isn't enough space. You are happy if the drive is above 80% during the night, because if it hasn't cleared it can be dealt with in the morning (no need to get anyone out of bed), but if there is a rapid spike (more than 2.5% growth in used space in a 30-minute period) then they need an alert to get out of bed and fix / make enough room for the data. A possible solution is a datasource that will alert if the drive is over 80%, but only with that rapid growth. The DataSource calls the API for the last 30 minutes worth of data and calculates the growth rate. The code below is for a C drive, but the drive letter can be changed easily, same with the 2.5% and the 80% values; they could also be parameterised for different ranges on different devices.
<# Use TLS 1.2 #>
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

<# account info #>
$accessId = '##apiaccessid.key##'
$accessKey = '##apiaccesskey.key##'
$company = '##company##'
$deviceId = "##system.deviceId##"

<# request details #>
$httpVerb = 'GET'
$resourcePath = "/device/devices/$deviceId/devicedatasources"
$queryParams = '?filter=dataSourceName:"WinVolumeUsage-"'

<# Construct URL #>
$url = 'https://' + $company + '.logicmonitor.com/santaba/rest' + $resourcePath + $queryParams

<# Get current time in milliseconds #>
$epoch = [Math]::Round((New-TimeSpan -start (Get-Date -Date "1/1/1970") -end (Get-Date).ToUniversalTime()).TotalMilliseconds)

<# Concatenate Request Details #>
$requestVars = $httpVerb + $epoch + $data + $resourcePath

<# Construct Signature #>
$hmac = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key = [Text.Encoding]::UTF8.GetBytes($accessKey)
$signatureBytes = $hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($requestVars))
$signatureHex = [System.BitConverter]::ToString($signatureBytes) -replace '-'
$signature = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($signatureHex.ToLower()))

<# Construct Headers #>
$auth = 'LMv1 ' + $accessId + ':' + $signature + ':' + $epoch
$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$headers.Add("Authorization",$auth)
$headers.Add("Content-Type",'application/json')
$headers.Add("X-Version","3")

<# Make Request #>
$response = Invoke-RestMethod -Uri $url -Method $httpVerb -Header $headers

<# Get Device DataSource ID #>
$deviceDataSourceId = $response.items.id

<# request details #>
$httpVerb = 'GET'
$resourcePath = "/device/devices/$deviceId/devicedatasources/$deviceDataSourceId/data"
$queryParams = ''

<# Construct URL #>
$url = 'https://' + $company + '.logicmonitor.com/santaba/rest' + $resourcePath + $queryParams

<# Get current time in milliseconds #>
$epoch = [Math]::Round((New-TimeSpan -start (Get-Date -Date "1/1/1970") -end (Get-Date).ToUniversalTime()).TotalMilliseconds)

<# Concatenate Request Details #>
$requestVars = $httpVerb + $epoch + $data + $resourcePath

<# Construct Signature #>
$hmac = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key = [Text.Encoding]::UTF8.GetBytes($accessKey)
$signatureBytes = $hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($requestVars))
$signatureHex = [System.BitConverter]::ToString($signatureBytes) -replace '-'
$signature = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($signatureHex.ToLower()))

<# Construct Headers #>
$auth = 'LMv1 ' + $accessId + ':' + $signature + ':' + $epoch
$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$headers.Add("Authorization",$auth)
$headers.Add("Content-Type",'application/json')

<# Make Request #>
$response = Invoke-RestMethod -Uri $url -Method $httpVerb -Header $headers

<# Print status and body of response #>
$status = $response.status
$body = $response.data | ConvertTo-Json -Depth 5

function Select-Nth {
    param([int]$N)
    $Input | Select-Object -First $N | Select-Object -Last 1
}

$array1 = @($response.data.instances.'WinVolumeUsage-C:\'.values)
$first = $array1[0] | Select-Nth 3
$last = $array1[19] | Select-Nth 3
$growth = $first - $last

if (($growth -gt 2.5) -and ($first -ge 80)) {
    return 1
} else {
    return 2
}

Hope this gives you some ideas to develop alerting further 😁

Windows Scheduled Task(s) monitoring
Hello folks, We had a request from a client who had the need to monitor some important scheduled task(s). While checking the documentation, we wanted to avoid the procedure on the actual 'Job Monitor' for scheduled tasks (since the client is kind of harsh when it comes to making changes in their boxes, even if those represent no harm). Since we were only interested in the actual task return code (after its run), we've done a quick DS (powershell). Just sharing it here in case it's useful for anyone.

GitHub repo -> Here

For now, we're alarming on the actual task return code != 0, since MS advises that anything differing from that represents some sort of issue. However, if you've any ideas/improvements just let me know :)

Thank you!
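The repo has the full DS, but the core of a return-code check looks roughly like this (a sketch using the built-in ScheduledTasks module; the task names are placeholders, and a real DS would read them from a device property):

```powershell
# Sketch: report the last run result of selected scheduled tasks.
$taskNames = @("MyNightlyExport", "MyCleanupJob")

foreach ($name in $taskNames) {
    # -ErrorAction SilentlyContinue so a missing task is simply skipped
    $info = Get-ScheduledTask -TaskName $name -ErrorAction SilentlyContinue |
        Get-ScheduledTaskInfo
    if ($info) {
        # LastTaskResult is 0 on success; anything else signals a problem
        Write-Output "$name.lastTaskResult=$($info.LastTaskResult)"
    }
}
```

A datapoint threshold of "!= 0" on lastTaskResult then matches the alerting described above.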
Palo Alto Improvements

Here are some datasources we added to get better information on Palo Alto firewalls:

Certificate Status: KFWLJ9
High Availability Detail: EMXWRR (this one includes a bunch of HA info, including HA link status, compat status and so forth. Many auto properties for reference on the local and peer units. All datapoints currently use the default alert templates, but I am hoping to extend that and leverage the auto properties for those messages)
Support Status: 3YJJCZ
License Status: DXEAP4

All use the XML API, so they will require security review (no idea how long that takes).

Monitoring HAProxy?
I've got a trio of load balancers running HAProxy that I need to monitor, and I found the HAProxy module and installed it. I verified using the 'Test Applies To' and it found all 3 servers, so I assume that means it's been associated, right? It's been a few days and the resources are not displaying any HAProxy-related info, nor do I have a dropdown (IDK the correct term) under the resource itself like there is for CPU, Disks, etc.

Second question: reading the description here: https://www.logicmonitor.com/integrations/ha-proxy am I correct in assuming that the only stats this module will report is sessions? If so, that's missing a ton of important stats... Thanks!!

windows certificate store scan
I have written a DS that uses PowerShell to discover any SSL certificate within the Windows certificate stores and generates alerts for those expiring soon and for those that have already expired. The alert messages are still generic, as I am fighting a weird timeout issue with the data collection code against remote devices. The AD code works fine, and the data collection code is virtually identical - simpler in fact, as we have the serial number on hand. If I run it from the collector itself in a PS console, it also works fine. It just seems to go to lunch when run from within LM itself. If anyone wants to take a look and see if they can find the problem, that would be much appreciated -- my intent is to polish it up and release it publicly. It is in code review; not clear how long that will take with the new LM Exchange feature. 2YPMLN
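For anyone curious about the shape of such a discovery, enumerating the local machine store looks roughly like this (a sketch, not the module in review; the 30-day window is an arbitrary example value):

```powershell
# Sketch: list certificates in the LocalMachine\My store with days until expiry.
$warnDays = 30

Get-ChildItem Cert:\LocalMachine\My | ForEach-Object {
    $daysLeft = ($_.NotAfter - (Get-Date)).Days
    # Serial number makes a stable instance id; subject is the friendly alias
    Write-Output "$($_.SerialNumber)##$($_.Subject)##daysLeft=$daysLeft"
}
```

A datapoint on daysLeft with warn/error thresholds (e.g. below $warnDays, below 0) then covers both "expiring soon" and "already expired".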
TLDR: don't load session profiles when using PowerShell Remote. Use an explicit PSSession with the -NoMachineProfile flag present. Several LM-provided 'Sources use it. Many of the 'Sources I've written in the past use it as well. Here's the find/replace for the LM-provided ones (find & replace are separated into comment regions):

#region FIND
try {
    #-----Determine the type of query to make-----
    # check to see if this is monitoring the localhost collector, as we will not need to authenticate.
    if ($hostname -like $collectorName) {
        $response = Invoke-Command -ScriptBlock $scriptBlock
    }
    # are wmi user/pass set -- e.g. are these device props either not substituted or blank
    elseif (([string]::IsNullOrWhiteSpace($wmi_user) -and [string]::IsNullOrWhiteSpace($wmi_pass)) -or (($wmi_user -like '*WMI.USER*') -and ($wmi_pass -like '*WMI.PASS*'))) {
        # no
        $response = Invoke-Command -ComputerName $hostname -ScriptBlock $scriptBlock
    }
    else {
        # yes. convert user/password into a credential string
        $remote_pass = ConvertTo-SecureString -String $wmi_pass -AsPlainText -Force;
        $remote_credential = New-Object -typename System.Management.Automation.PSCredential -argumentlist $wmi_user, $remote_pass;
        $response = Invoke-Command -ComputerName $hostname -Credential $remote_credential -ScriptBlock $scriptBlock
    }
    exit 0
}
catch {
    # exit code of non 0 will mean the script failed and not overwrite the instances that have already been found
    throw $Error[0].Exception
    exit 1
}
#endregion

#region REPLACE
try {
    $option = New-PSSessionOption -NoMachineProfile

    #-----Determine the type of query to make-----
    if ($hostname -like $collectorName) {
        # check to see if this is monitoring the localhost collector,
        # as we will not need to authenticate.
        $session = New-PSSession `
            -SessionOption $option
    }
    elseif (
        ([string]::IsNullOrWhiteSpace($wmi_user) -and
         [string]::IsNullOrWhiteSpace($wmi_pass)) -or
        (($wmi_user -like '*WMI.USER*') -and
         ($wmi_pass -like '*WMI.PASS*'))
    ) {
        # are wmi user/pass set
        # -- e.g. are these device props either not substituted or blank
        # no
        $session = New-PSSession `
            -ComputerName $hostname `
            -SessionOption $option
    }
    else {
        # yes. convert user/password into a credential string
        $remote_pass = ConvertTo-SecureString `
            -String $wmi_pass `
            -AsPlainText `
            -Force;
        $remote_credential = New-Object `
            -TypeName System.Management.Automation.PSCredential `
            -ArgumentList $wmi_user, $remote_pass;
        $session = New-PSSession `
            -ComputerName $hostname `
            -Credential $remote_credential `
            -SessionOption $option
    }

    $response = Invoke-Command `
        -Session $session `
        -ScriptBlock $scriptBlock

    exit 0
}
catch {
    # exit code of non 0 will mean the script failed and not overwrite the instances
    # that have already been found
    throw $Error[0].Exception
    exit 1
}
#endregion

p.s. The replacement code has the formatting I prefer. Feel free to change it to suit your whitespace/line-length needs. Mine is a blend of every language I've ever used, as I traditionally have been the only one looking at my code. I call my formatting method the "I have to fix this 2 years from now and have half an hour to figure it out" format: generally one specific function per line, with sections that collapse into a single line to make it easier to work through the code. The first character of each line should inform how it relates to the line above it, and spaces are added to make neat functional columns of similar parts of the line (parameter name, variable). Most programmers hate the formatting I use, but it works for me. There are line continuation characters (" `") to make it all fall into place.

Fortigate Managed Switches
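Regarding the -NoMachineProfile post above: if you want to measure the speedup on your own environment, a rough comparison could look like the sketch below. The target host name is a placeholder, and the profile-load cost will vary per machine, so treat the numbers as relative only.

```powershell
# Compare session setup + a trivial command with and without loading the
# remote machine profile. 'target01' is a hypothetical host name.
$plain = Measure-Command {
    $s = New-PSSession -ComputerName 'target01'
    Invoke-Command -Session $s -ScriptBlock { hostname }
    Remove-PSSession $s
}

$noProfile = Measure-Command {
    $opt = New-PSSessionOption -NoMachineProfile
    $s = New-PSSession -ComputerName 'target01' -SessionOption $opt
    Invoke-Command -Session $s -ScriptBlock { hostname }
    Remove-PSSession $s
}

"With profile:     {0:N0} ms" -f $plain.TotalMilliseconds
"NoMachineProfile: {0:N0} ms" -f $noProfile.TotalMilliseconds
```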
We were having trouble monitoring FortiGate switches once they had been brought under FortiManager control, as they can no longer be queried directly with SNMP. The switches get onboarded to a 169.x.x.x management network, and while it might be possible to make firewall rules etc. and use the Fortinet_FortiSwitch datasources, it wouldn't be fun or practical in all collector deployments. So we made an addition for the FortiGate: Fortinet_FortiGate_ManagedSwitch, published with identifier MK3TRR. Collects:
1. Switch status (up/down), with an alert
2. Status of ports 1-52, collated into a total switch port utilisation complex datapoint with graph
Hope this is helpful.

Added Module, Cannot Find Datasource to Disable?
I was looking at the current DataSources we have and wanted to try out some others from the Public Repository. I installed one, but it is not listed under my DataSources anywhere. It does show as installed, and it added a ton of datapoints under all of my Windows servers. So it is there, but for the life of me I cannot find where this module installed so I can adjust what it is being pushed out to.

Palo Alto data source to grab HA Interfaces stats improved.
Hello, we've noticed that the config source 'PaloAlto_FW_HA_Interface' that's published on the LM repo (version 1.4) doesn't use the best logic to try both API variations. On newer versions of PA (>= 10.2.x) the API call changes slightly (since they've placed the <counters> section in a different path). With that being said, the OOTB module was failing on PAs with >= that version. We've added some logic there and we now take the version into account (making use of the ##auto.entphysical.softwarerev## property) to define how the API call will look. Tested on several versions, and it works smoothly. We've published our version on the Exchange (code: DL74FN). If it's unavailable there for some reason, you can grab it here. Thank you!

Palo Alto config source to capture XML configuration improved.
Hello, we've noticed that the config source 'PaloAlto_FW_RunningConfigXML' that's published on the LM repo (version 1.4) doesn't use the best logic to try both API variations. They're using an IF statement to capture errors after the 1st API attempt; however, the script raises an exception if it fails, which will cause the script to abort and never actually try the 2nd variation. Instead of an IF statement, this should be done with the try{}catch{} statement; this way it will attempt the 2nd variation if the 1st one fails. We've published our version on the Exchange (code: 9A779T). If it's unavailable there for some reason, you can grab it here. Thank you!

API ingest from UltraDNS into Logic Monitor
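The try/catch fix described above can be sketched in a few lines. The two script blocks below are hypothetical stand-ins for the module's two API variations; the point is that an IF test on the result never runs if the first call throws, while catch{} does.

```powershell
# Simulate the two API variations: the newer call throws on an old PAN-OS,
# and the fallback succeeds. Only a catch{} lets the fallback actually run.
$newerCall = { throw 'simulating a pre-10.2 PAN-OS rejecting the newer path' }
$olderCall = { '<config>...</config>' }

try {
    $config = & $newerCall
}
catch {
    # the first variation raised an exception; try the other variation
    $config = & $olderCall
}

$config   # -> '<config>...</config>'
```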
I am attempting to ingest data from UltraDNS to create an alarm in LM, and I have spoken with their support, but I seem to have run into a dead end. I am trying to create an API call into LM that would populate a usage report. I am able to gather that data from UltraDNS but have no idea how to push that data into LogicMonitor. UDNS uses a bearer token for authentication, but when I spoke with LM support, they said LM only accepts basic token authentication. Has anyone created an API call like this, or can anyone offer any assistance? If I don't have to reinvent the wheel, I would really rather not have to do so. Thank you in advance.
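One common workaround for the UltraDNS question above is to flip the direction: instead of pushing into LM, write a scripted DataSource that runs on the collector, pulls from UltraDNS with the bearer token, and prints datapoints to standard output for LM to collect. The endpoint path, property name, and response field below are assumptions for illustration, not documented UltraDNS or module specifics.

```powershell
# Hypothetical sketch of a script DataSource collection step.
# The bearer token is stored as a device property (name assumed here),
# and LM parses key=value lines from the script's stdout.
$token   = '##ultradns.token##'
$headers = @{ Authorization = "Bearer $token" }

# Hypothetical report endpoint and response shape -- substitute the real
# UltraDNS report call you already have working.
$report = Invoke-RestMethod -Uri 'https://api.ultradns.com/reports/example_usage' `
    -Headers $headers

# emit datapoints in the key=value form a script DataSource expects
"queryVolume=$($report.queryVolume)"
```

This sidesteps the authentication mismatch entirely, since the collector speaks to UltraDNS with its bearer token and LM never needs to.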