Forum Discussion

David_Perske
3 years ago

FortiGate Managed Switches

We were having trouble monitoring FortiGate switches once they had been brought under FortiManager control, as they can no longer be queried directly with SNMP. The switches get onboarded to a 169.x.x.x management network, and while it might be possible to create firewall rules etc. and use the Fortinet_FortiSwitch datasources, that wouldn't be fun or practical in all collector deployments.

So we made an additional FortiGate datasource:

Fortinet_FortiGate_ManagedSwitch

Published with identifier MK3TRR

It collects:

1. Switch status (up/down), with an alert.

2. Status of ports 1-52, collated into a total switch port utilisation complex datapoint with a graph (see the example below).
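
For example (assuming the up/down status is coded so that "up" counts as 1 and "down" as 0, which is an assumption about the datasource rather than something confirmed here), a 52-port switch with 13 ports up would report 13/52 = 25% total port utilisation.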

Hope this is helpful

  • Sounds very interesting!  Please let us know when it is available -- right now it fails with "LogicModule belongs to another private repository."  We have struggled with this as well and have concluded we must get a management VLAN / IP assigned to each switch separate from the FortiLink address, but we cannot get cooperation on that (so far) from our clients.

  • Ok, turned off the private flag - sorry, first publish to the repo ;)

    Interested to know how it goes.

  • Anonymous

    This could probably be optimized a bit, and you'll learn a lot about how SNMP discovery works in LM along the way. If it's working for you and you don't want to change it, I totally understand; you can ignore the rest of this.

    It looks like the goal of this DS is to gather two kinds of data: 1) the status of the switch, which is found at 1.3.6.1.4.1.12356.101.24.1.1.1.7.1.x, where x is the ID of the switch, and 2) the status of each port, which is found at 1.3.6.1.4.1.12356.101.24.2.1.1.6.1.x.y, where x is the ID of the switch and y is the ID of the port.

    Because of this, I think you should consider two datasources: one to collect information about the switch and one to collect information about the ports. The switch datasource would be basically what you have now, without the port datapoints (although you might consider removing the .1 from the end of the discovery OID, as that level might be variable and should be part of the wildvalue instead of hardcoded).
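
    For illustration (hypothetical values, but following the OID layout above): if discovery walks 1.3.6.1.4.1.12356.101.24.1.1.1.7, a switch whose row sits at .1.2 would get wildvalue 1.2, and the status datapoint would then poll 1.3.6.1.4.1.12356.101.24.1.1.1.7.##WILDVALUE##, which resolves back to that row's full OID.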

    For the port datasource, you'd change your discovery OID to 1.3.6.1.4.1.12356.101.24.2.1.1.5. When you pass this OID into LM for discovery, it will do an SNMP walk, and the returned data will look something like this (totally made up, since I don't have the right FortiGates to test against):

    1.16.1 => port1
    1.16.2 => port2
    1.16.3 => port3
    1.16.4 => port4
    1.16.5 => port5
    1.16.6 => port6
    1.16.7 => port7
    1.16.8 => port8
    1.16.9 => port9
    1.16.10 => port10
    1.16.11 => port11
    1.16.12 => port12

    Discovery has three options for turning this list into instances. Remember, the goal of discovery is to return a list of instances, and each instance needs a minimum of two parameters: a wildvalue (ID) and a wildalias (display name). The difference between the three options is what gets used as the wildalias, or display name, for each instance.

    value - choosing this option will use the left side of the returned data as the wildvalue and the right side (the "value") as the wildalias. This is what you do in 98% of cases: any time there is an OID that contains the display name you want to use.

    wildcard - choosing this option will use the left side of the returned data as both the wildvalue and the wildalias. This is what you do in 1.9% of cases: any time there is no OID containing a good display name for your instance. In this case, your instances would display as "16.1", "16.2", etc. This happens with CPUs; the standard hrProcessor MIB doesn't provide any OID that can be used for a meaningful name, so we just refer to each CPU using a display name that's the same as the ID.

    lookup - this option is for 0.1% of the cases out there and is so rare, it's not worth explaining in the context of this problem.
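
    To make that concrete using the made-up walk rows above: with "value" discovery, the first row becomes an instance with wildvalue 1.16.1 and wildalias port1; with "wildcard" discovery, the same 1.16.1 index would be reused as the display name.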

    Since we do have an OID that contains a good display name (1.3.6.1.4.1.12356.101.24.2.1.1.5), we should use "value" discovery. This will create a list of the ports on the switch. Each port will have a good display name, and the wildvalue will be made up of the switch ID and the port ID, which we'll need when we make our datapoint.

    As for the datapoint, we won't need more than one. Since the status is contained in 1.3.6.1.4.1.12356.101.24.2.1.1.6, all we need to do is make a single datapoint whose OID is 1.3.6.1.4.1.12356.101.24.2.1.1.6.##WILDVALUE##.
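
    To illustrate with the made-up rows above: the instance discovered with wildvalue 1.16.3 would be polled at 1.3.6.1.4.1.12356.101.24.2.1.1.6.1.16.3, i.e. the base OID with the wildvalue appended.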

    This obviously changes how you'd do that complex datapoint, the one that adds up the status of all the ports (assuming 52 ports). I think this would be better done using aggregation on an overview graph rather than through a complex datapoint.

    Reply here with any questions; I'm always eager to help out with this kind of stuff.

  • I've been looking at this, and the first datasource, to get the switch status, is pretty straightforward.  The second, not so much.

    The issue is that there are two different items that need to be indexed -- the switch and the port on the switch.  For example, an snmpwalk on 1.3.6.1.4.1.12356.101.24.2.1.1.5 returns:

    .1.3.6.1.4.1.12356.101.24.2.1.1.5.1.14.18001334.1 = STRING: "port1"
    .1.3.6.1.4.1.12356.101.24.2.1.1.5.1.14.18001334.2 = STRING: "port2"
    .1.3.6.1.4.1.12356.101.24.2.1.1.5.1.14.18001334.3 = STRING: "port3"
    .1.3.6.1.4.1.12356.101.24.2.1.1.5.1.14.18001334.4 = STRING: "port4"

    ...

    .1.3.6.1.4.1.12356.101.24.2.1.1.5.1.14.18001431.1 = STRING: "port1"
    .1.3.6.1.4.1.12356.101.24.2.1.1.5.1.14.18001431.2 = STRING: "port2"
    .1.3.6.1.4.1.12356.101.24.2.1.1.5.1.14.18001431.3 = STRING: "port3"

     

    So it looks like 18001334 and 18001431 are the indexes for different switches while the trailing .1, .2 etc. are the port indexes.

    If we do the same for 1.3.6.1.4.1.12356.101.24.2.1.1.4, it shows the switch serial number for each port, which confirms that this is the case:

    .1.3.6.1.4.1.12356.101.24.2.1.1.4.1.14.18001334.1 = STRING: "S248DNTF18001334"
    .1.3.6.1.4.1.12356.101.24.2.1.1.4.1.14.18001334.2 = STRING: "S248DNTF18001334"
    .1.3.6.1.4.1.12356.101.24.2.1.1.4.1.14.18001334.3 = STRING: "S248DNTF18001334"
    .1.3.6.1.4.1.12356.101.24.2.1.1.4.1.14.18001334.4 = STRING: "S248DNTF18001334"

    ...

    .1.3.6.1.4.1.12356.101.24.2.1.1.4.1.14.18001431.1 = STRING: "S248DNTF18001431"
    .1.3.6.1.4.1.12356.101.24.2.1.1.4.1.14.18001431.2 = STRING: "S248DNTF18001431"
    .1.3.6.1.4.1.12356.101.24.2.1.1.4.1.14.18001431.3 = STRING: "S248DNTF18001431"

     

    So for the instance names you'd probably want to combine the two of those into the display name or you'll end up with multiple instances with the same name.  Is that something that can be done in an SNMP datasource or would we need to use Groovy for that?

  • Anonymous
    26 minutes ago, David Good said:

    So for the instance names you'd probably want to combine the two of those into the display name or you'll end up with multiple instances with the same name.  Is that something that can be done in an SNMP datasource or would we need to use Groovy for that?

    Exactly right. You would have to use a scripted datasource to do this, since you'd need to combine the two values in order to get a unique wildvalue and a unique wildalias. A sketch of what that could look like is below.
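
    In case it helps, here's a minimal sketch of what that scripted active discovery could look like in Groovy. It assumes the collector's Snmp.walk helper returns one "suffix = value" line per row (check the exact output on your collector), and the serial:portName naming and the "unknown-switch" fallback are just illustrative choices, not part of any published module:

    import com.santaba.agent.groovyapi.snmp.Snmp

    // hostProps is provided by the collector at runtime
    def host = hostProps.get("system.hostname")

    // Column OIDs from the walks above: .4 = switch serial (one row per port), .5 = port name
    def serialOid = "1.3.6.1.4.1.12356.101.24.2.1.1.4"
    def nameOid   = "1.3.6.1.4.1.12356.101.24.2.1.1.5"

    // Map each row's index suffix (e.g. "1.14.18001334.1") to its switch serial
    def serials = [:]
    Snmp.walk(host, serialOid).eachLine { line ->
        def parts = line.split(/\s*=\s*/, 2)
        if (parts.size() == 2) {
            serials[parts[0].trim()] = parts[1].trim().replaceAll(/^"|"$/, '')
        }
    }

    // Walk the port-name column and print one instance per port.
    // Active discovery output is one "wildvalue##wildalias" line per instance.
    Snmp.walk(host, nameOid).eachLine { line ->
        def parts = line.split(/\s*=\s*/, 2)
        if (parts.size() != 2) return
        def suffix = parts[0].trim()  // doubles as ##WILDVALUE## for the status datapoint
        def portName = parts[1].trim().replaceAll(/^"|"$/, '')
        def serial = serials[suffix] ?: "unknown-switch"  // hypothetical fallback label
        println "${suffix}##${serial}:${portName}"  // e.g. 1.14.18001334.1##S248DNTF18001334:port1
    }
    return 0

    The port-status datapoint then stays a plain SNMP datapoint with OID 1.3.6.1.4.1.12356.101.24.2.1.1.6.##WILDVALUE##, exactly as described earlier, since the script's wildvalue is the same index suffix.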