Reports UIv4 now available!
Reports UIv4 can now be toggled on at the top of the Reports screen. Check out some of the new features, including:

- a revamped reports tree navigation
- a new reports listing overview with search
- easy access to report actions

This is a per-user setting, and you can easily toggle between UIv3 and UIv4 to see what's changed. The Reports support doc has a more in-depth view of what's now available. We would love to hear your thoughts on the new UI and learn more about how we can improve!
Need help on PaloAlto_FW_RunningConfigXML API configsource

Currently, the sole option is to collect/view the configuration XML when a change occurs. So, is there a way in LM to generate a report using the PaloAlto_FW_RunningConfigXML API configsource? Or is it possible to collect the configuration backup at a specific time interval? Thanks in advance :)
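If LM can't schedule it natively, an external poller could grab the collected config on whatever interval you like via the REST API. Below is a minimal sketch, reusing the LMv1 request-signing pattern from the Power BI article further down this page. The /config sub-endpoint path and the three numeric IDs are placeholders/assumptions — verify the exact path in the REST API reference and look the IDs up in your own portal.

#!/usr/bin/env python
# Sketch: pull a ConfigSource's collected config on a fixed interval.
# ASSUMPTIONS: the /config sub-endpoint path and the numeric device/
# datasource/instance IDs are placeholders -- verify both for your portal.
import base64
import hashlib
import hmac
import time
import requests

AccessId = 'REPLACE_WITH_YOUR_LM_ACCESS_ID'
AccessKey = 'REPLACE_WITH_YOUR_LM_ACCESS_KEY'
Company = 'REPLACE_WITH_YOUR_LM_PORTAL_NAME'
resourcePath = '/device/devices/123/devicedatasources/456/instances/789/config'

while True:
    # LMv1 signature: verb + epoch (ms) + request body (empty) + path...
    epoch = str(int(time.time() * 1000))
    requestVars = 'GET' + epoch + resourcePath
    digest = hmac.new(AccessKey.encode(), msg=requestVars.encode(),
                      digestmod=hashlib.sha256).hexdigest()
    signature = base64.b64encode(digest.encode()).decode()
    headers = {'Content-Type': 'application/json',
               'Authorization': 'LMv1 ' + AccessId + ':' + signature + ':' + epoch}
    url = 'https://' + Company + '.logicmonitor.com/santaba/rest' + resourcePath + '?v=3'
    response = requests.get(url, headers=headers)
    # Keep each poll as a timestamped, point-in-time backup...
    with open('pa_running_config_' + epoch + '.json', 'w') as f:
        f.write(response.text)
    time.sleep(3600)  # once an hour; set whatever interval you need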
Capacity/performance reporting via a 3rd party data warehouse

Hi Community, I'm hoping someone can provide some guidance on the best method or REST endpoint to pull granular performance/consumption data from. I'm asking because we're looking to bring metrics like CPU, memory usage, uptime, and storage consumption into our data warehouse. The best LM v3 REST endpoint I've found so far is "/device/devices/{deviceId}/devicedatasources/{hdsId}/instances/{id}/data". Thanks
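For reference, here is a minimal sketch of what a pull from that endpoint might look like, reusing the LMv1 signing pattern from the Power BI article below. The numeric IDs are placeholders, and the start/end epoch-seconds query parameters are my reading of the API docs — verify both against the data resource reference for your portal.

#!/usr/bin/env python
# Sketch: pull raw datapoint values from the instance data endpoint
# for loading into a warehouse. IDs and start/end params are assumptions.
import base64
import hashlib
import hmac
import time
import requests

AccessId = 'REPLACE_WITH_YOUR_LM_ACCESS_ID'
AccessKey = 'REPLACE_WITH_YOUR_LM_ACCESS_KEY'
Company = 'REPLACE_WITH_YOUR_LM_PORTAL_NAME'
resourcePath = '/device/devices/123/devicedatasources/456/instances/789/data'

now = int(time.time())
queryParams = '?v=3&start=' + str(now - 3600) + '&end=' + str(now)  # last hour

# LMv1 signature: verb + epoch (ms) + request body (empty) + path
# (query parameters are not part of the signed string)...
epoch = str(int(time.time() * 1000))
requestVars = 'GET' + epoch + resourcePath
digest = hmac.new(AccessKey.encode(), msg=requestVars.encode(),
                  digestmod=hashlib.sha256).hexdigest()
signature = base64.b64encode(digest.encode()).decode()
headers = {'Content-Type': 'application/json',
           'Authorization': 'LMv1 ' + AccessId + ':' + signature + ':' + epoch}

url = 'https://' + Company + '.logicmonitor.com/santaba/rest' + resourcePath + queryParams
data = requests.get(url, headers=headers).json()
print(data)

From there, the response's time/values arrays can be reshaped (with Pandas, for instance) into rows for the warehouse.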
Using LogicMonitor's REST API as a Power BI Source - Part 2

Overview

Back in 2020 I shared an article on how to use any of LogicMonitor's REST API methods as a datasource in Power BI Desktop for reporting. That opened up a good deal of potential but also had some limitations; in particular, it relied on basic authentication, which will eventually be deprecated, and it could only make a single API call, so it could only fetch up to 1,000 records. I've documented how to use a LogicMonitor bearer token instead of basic authentication, but bearer tokens aren't currently available in every portal (just our APM customers for now), and the single-call limitation remains.

In lieu of a formal Power BI data connector for LogicMonitor being available yet, there is another option that is more secure and a good deal more flexible: Microsoft Power BI Desktop's native support for Python! Folks familiar with LogicMonitor's APIs know there is a wealth of example Python scripts for many of our REST methods (example). These scripts not only allow us to leverage accepted methods of authentication but also allow combining calls and tweaking results in various ways that can be very useful for reporting. Best of all, using these scripts inside Power BI isn't difficult, and they can even be used for templated reports (I'll include some working examples at the end of this article). While these instructions focus on Power BI Desktop, reports leveraging Python can also be published to the Power BI service (Microsoft article for reference).

Prerequisites

- Power BI Desktop. You can install this via Microsoft's Store app or from the following link: https://www.microsoft.com/en-us/download/details.aspx?id=58494
- The latest version of Python installed on the same system as Power BI Desktop. This can also be installed via Microsoft's Store app or from: https://www.python.org/downloads/windows/
- Some basic familiarity with LogicMonitor's REST APIs, though I'll provide working examples here to get you started. The full API reference can be found at: https://www.logicmonitor.com/support/rest-api-developers-guide/overview/using-logicmonitors-rest-api

First, install Power BI Desktop and Python, then configure each according to this Microsoft article: https://learn.microsoft.com/en-us/power-bi/connect-data/desktop-python-scripts. When adding the Python modules, you'll also want to add the 'requests' module by running:

pip install requests

Modifying Python Scripts for Use in Power BI

As mentioned in the Microsoft article above, Power BI expects Python scripts to use the Pandas module for output. If you're not familiar with Pandas, it's used extensively by the data science community for analyzing bulk data. Adding Pandas to one of the example scripts can be as easy as adding 'import pandas as pd' to the list of import statements at the top of the script, then converting the JSON returned by LogicMonitor's API to a Pandas dataframe. For example, if the script captured the API results (as JSON) in a variable called "allDevices", we can convert that to a Pandas dataframe with something as simple as:

pandasDevices = pd.json_normalize(allDevices)

In that example, "pd" is the name we gave to the Pandas module back in the "import" statement we added, and "json_normalize(allDevices)" tells Pandas to take our JSON and convert it to a Pandas dataframe. We can then simply print that variable as our output for Power BI Desktop to use as reporting data.
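To make that concrete, here's a tiny standalone illustration of what json_normalize does with nested JSON. The sample record is invented, just loosely shaped like an API item:

import pandas as pd

# An invented record to show how json_normalize flattens nested objects
# into dotted column names...
items = [{'id': 1, 'displayName': 'web01', 'site': {'city': 'Austin'}}]
df = pd.json_normalize(items)
print(df.columns.tolist())   # -> ['id', 'displayName', 'site.city']

Those flattened columns are exactly what show up as fields once Power BI loads the dataframe.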
Below is a full Python script that fetches all the devices from your portal and prints them as a Pandas dataframe. This is just a minor variation of an example given in our support documentation. You'd just need to enter your own LogicMonitor API ID, key, and portal name in the variables near the top.

#!/usr/bin/env python
import requests
import json
import hashlib
import base64
import time
import hmac
# Pandas is required for Power BI integration...
import pandas as pd

# Account Info...
# Your LogicMonitor portal's API access ID...
AccessId = 'REPLACE_WITH_YOUR_LM_ACCESS_ID'
# Your LogicMonitor portal's API access key...
AccessKey = 'REPLACE_WITH_YOUR_LM_ACCESS_KEY'
# Your LogicMonitor portal. Example: if you access your portal at
# https://xyz.logicmonitor.com, then your portal name is "xyz"...
Company = 'REPLACE_WITH_YOUR_LM_PORTAL_NAME'

# Create a list to keep devices...
allDevices = []

# Loop until we've fetched all devices, 1,000 at a time...
count = 0
done = 0
while done == 0:
    # Request info...
    httpVerb = 'GET'
    resourcePath = '/device/devices'
    data = ''
    # The following query filters for just standard on-prem resources
    # (deviceType=0), so adjust to suit your needs...
    queryParams = '?v=3&offset=' + str(count) + '&size=1000&fields=alertStatus,displayName,description,deviceType,id,link,hostStatus,name&filter=deviceType:0'

    # Construct the URL...
    url = 'https://' + Company + '.logicmonitor.com/santaba/rest' + resourcePath + queryParams

    # Get current time in milliseconds...
    epoch = str(int(time.time() * 1000))

    # Concatenate request details...
    requestVars = httpVerb + epoch + data + resourcePath

    # Construct the LMv1 signature...
    hmac1 = hmac.new(AccessKey.encode(), msg=requestVars.encode(), digestmod=hashlib.sha256).hexdigest()
    signature = base64.b64encode(hmac1.encode())

    # Construct headers...
    auth = 'LMv1 ' + AccessId + ':' + signature.decode() + ':' + epoch
    headers = {'Content-Type': 'application/json', 'Authorization': auth}

    # Make the request...
    response = requests.get(url, data=data, headers=headers)

    # Parse the response & total devices returned...
    parsed = json.loads(response.content)
    total = parsed['total']
    devices = parsed['items']
    # Add this page of results to the full list...
    allDevices.extend(devices)
    count += len(devices)
    if count >= total:
        print("done")
        done = 1
    else:
        print("iterating again")

# (for debugging) Print all devices...
# print(json.dumps(allDevices, indent=5, sort_keys=True))

# Convert the JSON to a Pandas dataframe that Power BI can consume...
resources = pd.json_normalize(allDevices)

# Print the dataframe...
print(resources)

If you run that Python script directly, you'll see it prints in a columnar format instead of the raw JSON returned by the API. It's a good example of how Pandas converts the raw data into a more formal data structure for manipulation and analysis. Power BI Desktop leverages that data directly for ingestion into its own reporting engine, which is a pretty powerful combination. Now let's show how to put it to use!

How to create a Power BI report that pulls data directly from LogicMonitor via Python

1. In Power BI Desktop, click the Get Data button. This can be to start a new report or to add to an existing report.
2. Choose to get data from an "Other" source, choose "Python script", then click the Connect button.
3. Paste in your complete and working Python script, then click OK (some examples are attached to the bottom of this article).
4. Power BI will run the script. Depending on the amount of data being retrieved, this can take anywhere from a few seconds to a few minutes. When it's complete, you'll see the Navigator pane with the name of the Python Pandas dataframe from the script output.
5. Check the box next to the item to see the data preview. If the sample looks good, click the Load button.
6. After Power BI has loaded the data, you'll be presented with the report designer, ready for you to create your report. To see the full preview of the data from your portal, click the Data icon to the left of the report workspace.
7. When you need to pull the latest data from LogicMonitor, just click the Refresh button in the toolbar.

To Convert a Report to a Parameterized Template

If you've created a Python-based report and want to save it as a re-usable, parameterized template, we first need to add the necessary parameters and enable Power BI to pass those values to the script.

1. With the report we want to turn into a template active, click the Model icon to the left of the workspace. From there, click the three dots in the upper-right corner of the table generated by the Python script and choose Edit Query. That will open the Power Query Editor.
2. From here, click Manage Parameters on the toolbar. For our example we'll add three new parameters, which we'll call "AccessID", "AccessKey" & "PortalName" (feel free to label them however you choose). For each, enter the respective credential used for accessing your LogicMonitor API in the Current Value field. Below is an example of what it would look like when completed. When done, click the OK button to close the dialog.
3. Next, click the table name in the Queries list ("alerts" in our example screenshot) and click the Advanced Editor option in the toolbar. You'll see Power BI's M language query for our datasource, including the Python script embedded in it. We're going to edit the script to replace the hard-coded API credentials with the Power BI parameters we defined instead. Replace the values of the script's AccessId, AccessKey, and Company variables with the following, respectively (including the quotes):

   " & AccessID & "
   " & AccessKey & "
   " & PortalName & "

   Note that those will sit inside the single quotes for each of the variables, so the first line ends up reading AccessId = '" & AccessID & "', and so on. Refer to the screenshot below for an example of how it would look (the changes have been highlighted). When ready, click the Done button.
4. Click Close & Apply in the Power Query Editor to commit our changes.
5. If all looks good, now let's save this as a Power BI template. Click the File menu, then choose Save As. Change the Save As Type to "Power BI template files (*.pbit)", provide a filename, and click Save. Power BI will prompt you to provide a description for your template.

Your report is now ready for sharing! When you open the template file, you'll be prompted to enter values for the parameters we configured in steps 2 & 3. Enter those, hit the Load button, and you'll be presented with the report designer, ready to go.

Example Files

Here are some example Python scripts modified to use Pandas to help get you started:

- get_alerts.powerbi.py
- get_devices_powerbi.py

Here are some basic, ready-to-use Power BI report templates based on the above scripts:

- Power BI template for reporting on Alerts
- Power BI template for reporting on Resources/Devices
Can't tell if past alerts were during an SDT or not?!

I consider this a bug, but maybe it's working as designed and needs to be reevaluated. When an alert occurs during an SDT, the "IN SDT" field for that alert gets set to True. This is good, because the alert occurred during an SDT. The problem is that once the alert clears, even if it's still in an SDT, the IN SDT field gets changed to False. This is not good. Because of this, there is no way to run a report on alerts for the week/month/whatever and tell which ones occurred during an SDT and which ones didn't.

Example: We do patching every month, and this causes some of our websites and servers to go down, and CPUs to spike, while things are installing and rebooting. We set an SDT for these because we know it's going to happen. When we go to run our monthly alert reports, we see lots of errors for uptime, ping, CPU, etc. from the checks that ran during the reboots. We don't need to investigate these because they occurred during an SDT, which means they were expected. When I set the IN SDT field to False in the report, expecting it to then show only the alerts that occurred outside an SDT, that's not what I get. I get EVERY alert, because that field gets set to False for every alert when it clears.

I don't understand the rationale for this behavior, as it removes very important functionality for anyone who runs reports after the fact. Simply leaving the IN SDT field alone when the alert clears would solve this problem: if the alert cleared while in an SDT, leave the field as True; if the alert cleared while not in an SDT, leave the field as False. That way I could tell, and run reports on, which alerts were expected and which were not. Thanks
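To make the proposal concrete, here's a toy model of the two behaviors. This is illustrative Python only — not LogicMonitor code, and the field names are invented:

# Toy contrast of today's behavior vs. the proposed fix (not LM code).

def clear_today(alert):
    # Current behavior: clearing rewrites IN SDT to False unconditionally,
    # erasing the fact that the alert fired inside an SDT window...
    alert['in_sdt'] = False
    alert['cleared'] = True

def clear_proposed(alert):
    # Proposed behavior: just mark the alert cleared and leave IN SDT
    # exactly as it was set when the alert began...
    alert['cleared'] = True

a = {'id': 1, 'in_sdt': True, 'cleared': False}  # fired during patching SDT
b = {'id': 2, 'in_sdt': True, 'cleared': False}
clear_today(a)
clear_proposed(b)
print(a['in_sdt'], b['in_sdt'])  # False True -- only the proposal stays reportable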
Ability to group interface instances?

Hello, as an MSP we have a need to group multiple interface instances (from different devices, and different clients as well) in order to set common thresholds, reports, etc. From my research, that isn't possible within LM. Has anyone been able to do that? It would be very useful from a monitoring/management perspective. Regards,