Forum Discussion

John_Lockie
3 years ago

IssueWarningQuota / Exchange Online Mailbox Quota Alerts?

Good morning.

We are currently using the O365 SaaS connector.  While this is a nice connector, it lacks detailed monitoring such as individual mailbox activity/stats.  There is one metric I am looking to manage via LM: the "IssueWarningQuota" mailbox setting.  I would like to know when a mailbox has consumed more space than its "IssueWarningQuota".  This would let our technology team know when a user's mailbox is getting to the point where we need to ask them to archive, delete, etc.  It's also a very critical metric for managing service account mailboxes, which could cause production issues if they hit the "ProhibitSendReceiveQuota".

This is not available in the SaaS product.  I am wondering if anyone else has successfully done this in LogicMonitor.  I know we can run PowerShell with custom datapoints, but I am not quite sure yet how to correctly grab mailbox sizes and check them against IssueWarningQuota.  Any ideas?

6 Replies

  • I think LM had per-user checks in the legacy collector-based DataSources, but they would frequently fail. For many environments it took too long to collect the data because of the number of users, and it would time out.

    https://web.archive.org/web/20200919180848/https://www.logicmonitor.com/support/monitoring/applications-databases/microsoft-office-365-monitoring

    "Exchange Online: Initially created to monitor Exchange Online mailbox statistics, this DataSource was removed from the Office 365 monitoring package in April 2020 due to timeout issues stemming from slow Microsoft PowerShell module performance. We are currently researching alternative approaches to retrieving the metrics."

    P.S. I thought the point of moving to a SaaS version was to help fix issues like this, but there's no mention of that.

  • Yeah - haha... the new SaaS version is not as good as the old one.  There are also fewer metrics around SharePoint sites.  Some of this is Microsoft, some of it is LM.

    But I agree with LM that these particular PowerShell commands are latent, so it wasn't sustainable.  I am wondering if we can't use a custom datasource and run it from a machine that has PowerShell (with the EOL module).  The script itself will take some thought, because there is no flag on accounts that have reached the "IssueWarningQuota".  So we'd have to write a script that stores that field for each mailbox, matches it against the current mailbox size, and alerts when current mailbox size > IssueWarningQuota (a rough sketch of that comparison is at the end of this post)... if we end up developing this I may post it here.  There are specific service mailboxes we want to do this for, so possibly we only do it against that handful... seems feasible.  We could even do a custom datasource for each service account, simply grab the mailbox size, and create a custom alert when mailbox size is greater than what we are comfortable with.  That is less dynamic, because if an Exchange admin changes IssueWarningQuota our datasource becomes out of sync, since we'd need to update the alert as well.

    We'll likely engineer something for this, and post back here.
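
    To make the idea concrete, the core of the comparison would presumably be something along these lines (rough, untested sketch; the mailbox names are made up, and parsing the byte count is just one way to read the quota strings):

    # Rough sketch: flag service mailboxes whose current size exceeds IssueWarningQuota.
    # Quota/size values come back as strings like "49 GB (52,613,349,376 bytes)", so pull
    # out the byte count instead of trusting the "GB" prefix.
    function Get-Bytes([string]$q) {
        if ($q -match '\(([\d,]+) bytes\)') { [int64]($Matches[1] -replace ',') } else { $null }
    }

    foreach ($mbx in 'svc-billing@ourdomain.com', 'svc-reports@ourdomain.com') {   # made-up names
        $size = Get-Bytes ((Get-MailboxStatistics -Identity $mbx).TotalItemSize.ToString())
        $warn = Get-Bytes ((Get-Mailbox -Identity $mbx).IssueWarningQuota.ToString())
        if ($warn -and $size -gt $warn) { Write-Host "$mbx is over its IssueWarningQuota" }
    }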

  • I would ask support if they can get you a copy of one of the old DataSources as they do most of the work. I switched jobs so I no longer have a copy myself.

    Some generic suggestions (a rough sketch tying a few of these together follows the list):

    • Don't code thresholds into the DataSource directly; instead provide values that LM can set thresholds on. For example, don't emit a QuotaWarning datapoint as 0=ok, 1=almost at warning, 2=over quota. Instead emit something like RemainingBeforeQuotaWarning (quota minus size) or RemainingBeforeQuotaReadOnly, which you can then threshold in LM at whatever warning level you want. That way you don't need to worry about the quota or mailbox size changing.
    • Connecting to MS will sometimes time out, so you might want to retry the login a few times in the script.
    • If the script takes too long (>2 min by default), LM will kill it, which will leave the session to MS open. MS will only allow 3 at once, so you may want to keep the session open only as long as needed and perhaps kill any existing sessions before starting. Also keep that in mind while testing.
    • If you use PSRemoting and the script doesn't close the connection (or LM kills the script), it will leave some temp files behind which can very slowly fill up the collector's drive. Not sure if this applies to M365 connections.
    • If you turn on Multi Instance but don't provide an autodiscovery script, you can add the needed instances via Add Monitoring Instance on the Resources page, rather than, say, hardcoding service accounts in an autodiscovery script.
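
    A very rough sketch of how a few of those pieces might fit together (not tested; $cred and $mailboxname are assumed to be set up elsewhere, and the GB parsing copies the style used later in this thread):

    # Sketch: retry the Exchange Online login, emit a "remaining before warning" value that
    # LM can threshold on, and clean up the session even if the collection fails.
    Import-Module ExchangeOnlineManagement

    $connected = $false
    for ($i = 1; $i -le 3 -and -not $connected; $i++) {
        try { Connect-ExchangeOnline -Credential $cred -ShowBanner:$false; $connected = $true }
        catch { Start-Sleep -Seconds 10 }
    }
    if (-not $connected) { Write-Host "ConnectFailed=1"; exit 1 }

    try {
        $m     = Get-Mailbox -Identity $mailboxname
        $stats = Get-MailboxStatistics -Identity $mailboxname
        # crude parsing that assumes GB-formatted, non-Unlimited values
        $sizeGB = [double]($stats.TotalItemSize.ToString() -replace '\ GB.*$')
        $warnGB = [double]($m.IssueWarningQuota.ToString() -replace '\ GB.*$')
        # headroom left before the warning quota; set the warning/critical levels in LM
        Write-Host ("RemainingBeforeQuotaWarning={0}" -f ($warnGB - $sizeGB))
    }
    finally {
        Disconnect-ExchangeOnline -Confirm:$false   # don't leave one of the 3 allowed sessions open
    }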

     

  • We solved this with a little engineering....

    We have a Windows collector and installed the EOL module on it.  We then created a custom DataSource with an embedded PowerShell script.  YMMV on authentication, but we built a read-only account for this, stored its password in our key vault, call that via API, and convert it to a secure string (a rough sketch of that connection step is at the end of this post).  The actual command we are using to collect the data we need is below (I assume folks reading this understand we use variables for things like mailbox identity, and those are snipped for security reasons):

    $info = Get-Mailbox -Identity $mailboxname | Select-Object DisplayName, ProhibitSendQuota, IssueWarningQuota, @{Name="TotalItemSize"; Expression={(Get-MailboxStatistics -Identity $mailboxname | Select-Object TotalItemSize).TotalItemSize}}

    $ProhibitSendQuota = $info.ProhibitSendQuota -replace '\ GB.*$'
    Write-Host "ProhibitSendQuota=$ProhibitSendQuota"

    $IssueWarningQuota = $info.IssueWarningQuota -replace '\ GB.*$'
    Write-Host "IssueWarningQuota=$IssueWarningQuota"

    $TotalItemSize = $info.TotalItemSize.Value -replace '\ GB.*$'
    Write-Host "TotalItemSize=$TotalItemSize"

     

    This gets us a clean table of data

     

    We also have 3 datapoints we can build alerts off of
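
    For completeness, the authentication piece we gloss over above looks roughly like this (sketch only; the vault call and account name are placeholders, since we actually pull the secret from our key vault's API):

    # Sketch: build the credential for the read-only account, then connect before the
    # Get-Mailbox block above runs. $vaultSecretUri / $authHeaders stand in for our vault API call.
    Import-Module ExchangeOnlineManagement

    $secret = (Invoke-RestMethod -Uri $vaultSecretUri -Headers $authHeaders).value
    $user   = 'svc-lm-readonly@ourdomain.com'   # placeholder account name
    $secure = ConvertTo-SecureString $secret -AsPlainText -Force
    $cred   = New-Object System.Management.Automation.PSCredential ($user, $secure)

    Connect-ExchangeOnline -Credential $cred -ShowBanner:$false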

     

  • On 12/20/2021 at 11:00 AM, Mike Moniz said:

    I would ask support if they can get you a copy of one of the old DataSources as they do most of the work. I switched jobs so I no longer have a copy myself.

    Some generic suggestions:

    • Don't code thresholds into the DataSource directly; instead provide values that LM can set thresholds on. For example, don't emit a QuotaWarning datapoint as 0=ok, 1=almost at warning, 2=over quota. Instead emit something like RemainingBeforeQuotaWarning (quota minus size) or RemainingBeforeQuotaReadOnly, which you can then threshold in LM at whatever warning level you want. That way you don't need to worry about the quota or mailbox size changing.
    • Connecting to MS will sometimes time out, so you might want to retry the login a few times in the script.
    • If the script takes too long (>2 min by default), LM will kill it, which will leave the session to MS open. MS will only allow 3 at once, so you may want to keep the session open only as long as needed and perhaps kill any existing sessions before starting. Also keep that in mind while testing.
    • If you use PSRemoting and the script doesn't close the connection (or LM kills the script), it will leave some temp files behind which can very slowly fill up the collector's drive. Not sure if this applies to M365 connections.
    • If you turn on Multi Instance but don't provide an autodiscovery script, you can add the needed instances via Add Monitoring Instance on the Resources page, rather than, say, hardcoding service accounts in an autodiscovery script.

    Good feedback.  What do you mean about multi instance and no discovery script?  We did not hardcode service accounts in an autodiscover script (although technically the API call to get the creds is hardcoded here).  You piqued my curiosity with this comment...

    Regarding coding thresholds into the datasource, couldn't agree more.  We ended up being able to grab all the datapoints that give us flexibility with LM logic for alerting.  For now we kept it simple by building an alert threshold of > 90 90 90, so that if the total size goes above 90 we trip a critical alert.  Down the road we may be able to do something nicer such as "totalsize > issuewarning" and trigger on that, so that as we manage the issuewarning setting on a given mailbox, the LM logic stays current and no alert threshold needs to be modified (a rough sketch of one way to do that is at the end of this post).

    This is the first time we have done PowerShell.  We have a few Groovy scripts on custom datasources that do things like check cloud storage for the last file write date to accomplish a sort of "dead man's switch" on ETL stuff, and it has worked nicely.  I am stoked to see we are able to do so much with PowerShell as well!
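
    What we have in mind for the "stays current with the quota" version is roughly this: emit the size as a percentage of IssueWarningQuota, so the LM threshold can stay at a static 90 even if an Exchange admin changes the quota later.  Rough, untested sketch reusing the $info object from our script above:

    # Sketch: percent of IssueWarningQuota consumed (assumes GB-formatted, non-Unlimited values)
    $warnGB = [double]($info.IssueWarningQuota -replace '\ GB.*$')
    $sizeGB = [double]($info.TotalItemSize.Value -replace '\ GB.*$')
    if ($warnGB -gt 0) {
        Write-Host ("PercentOfWarningQuota={0:N1}" -f (100 * $sizeGB / $warnGB))
    }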

  • 23 hours ago, John Lockie said:

    Good feedback.  What do you mean about multi instance and no discovery script?  We did not hardcode service accounts in an autodiscover script (although technically the API call to get the creds is hardcoded here).  You piqued my curiosity with this comment...

    I was referring to how you get the $mailboxname value, and whether you want to check multiple mailboxes (but not all of them) with the same DataSource. Instead of creating a separate DataSource per mailbox, like "AdminMailboxQuotaCheck" or "ImportantMailboxQuotaCheck", you can create just one DataSource that uses ##wildvalue## for $mailboxname and create multiple instances. If the DataSource has multi instance enabled but autodiscovery disabled, you can manually create instances on the device itself, like how PingMulti works.
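
    In other words, inside the embedded script the instance name just gets substituted in before the script runs, along the lines of this sketch (the token is the usual ##WILDVALUE## form):

    # LM replaces ##WILDVALUE## with the instance name (one instance per mailbox),
    # so every manually added instance reuses the same embedded script.
    $mailboxname = '##WILDVALUE##'
    $info = Get-Mailbox -Identity $mailboxname | Select-Object DisplayName, ProhibitSendQuota, IssueWarningQuota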