VMware_Status Datasource update
We were running into an issue where disks were not consolidating properly after our backups completed; we are using Veeam with VMware. Since this isn't included out of the box, I have updated the default datasource to include a consolidationNeeded graph and datapoint.

consolidationNeeded - Whether any disk of the virtual machine requires consolidation. This can happen, for example, when a snapshot is deleted but its associated disk is not committed back to the base disk (https://code.vmware.com/apis/358#/doc/vim.vm.RuntimeInfo.html). This can cause a performance problem because it requires more IO, which can be damaging for database servers and other high-IO workloads.

Since I am not able to attach XML files, I have copied the exported code below. If you want to do this manually, you will need to modify the active discovery script and the collector attributes script, then add the datapoint and graph.

In the active discovery script, make the following changes:

In the props_hash def, include a line for 'auto.runtime.consolidationNeeded' : vm?.runtime?.consolidationNeeded; make sure you include a comma to separate the values.

```groovy
def props_hash = ['auto.config.alternate_guest_os_Name': vm?.config?.alternateGuestName,
                  'auto.config.annotation'              : vm?.config?.annotation,
                  'auto.config.firmware'                : vm?.config?.firmware,
                  'auto.config.guest_os_full_Name'      : vm?.config?.guestFullName,
                  'auto.config.guest_os_id'             : vm?.config?.guestId,
                  'auto.config.managed_by'              : vm?.config?.managedBy?.type ?: "false",
                  'auto.config.modified'                : vm?.config?.modified?.getTime(),
                  'auto.config.template'                : vm?.config?.template,
                  'auto.guest.guest_os_family'          : vm?.guest?.guestFamily,
                  'auto.guest.guest_os_full_name'       : vm?.guest?.guestFullName,
                  'auto.guest.guest_os_id'              : vm?.guest?.guestId,
                  'auto.guest.hostname'                 : vm?.guest?.hostName,
                  'auto.guest.tools_version'            : vm?.guest?.toolsVersion,
                  'auto.guest.tools_version_status'     : vm?.guest?.toolsVersionStatus2,
                  'auto.hardware.memory_mb'             : vm?.config?.hardware?.memoryMB ?: 0,
                  'auto.hardware.num_cpu'               : vm?.config?.hardware?.numCPU ?: 0,
                  'auto.hardware.num_cores_per_socket'  : vm?.config?.hardware?.numCoresPerSocket ?: 0,
                  'auto.hardware.num_virtual_disks'     : vm?.summary?.config?.numVirtualDisks ?: 0,
                  'auto.hardware.num_ethernet_cards'    : vm?.summary?.config?.numEthernetCards ?: 0,
                  'auto.resource_pool'                  : vm?.resourcePool?.name,
                  'auto.resource_pool_full_path'        : resource_pool_array.reverse().join(' -> '),
                  'auto.snapshot_count'                 : vm?.layoutEx?.snapshot?.size() ?: 0,
                  'auto.cluster'                        : esxhost?.parent?.name,
                  'auto.cluster_full_path'              : cluster_path_array.reverse().join(' -> '),
                  'auto.runtime.host'                   : esxhost?.name,
                  'auto.runtime.connection_state'       : vm?.runtime?.connectionState,
                  'auto.runtime.power_state'            : vm?.runtime?.powerState,
                  'auto.runtime.consolidationNeeded'    : vm?.runtime?.consolidationNeeded]
```

Test the script against a vCenter server to make sure you are getting a true or false response from a server.

In the collector attributes script, make the following changes:

Create a consolidation_states def; this will allow us to translate the boolean value to a 0 or 1.

```groovy
def consolidation_states = [
    'false' : 0,
    'true'  : 1
]
```
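As a quick sanity check of that mapping (a standalone sketch, not part of the datasource itself), note that anything other than the strings 'false' or 'true' falls through to null, which is why a null result later on means something is misconfigured:

```groovy
// Standalone sanity check of the boolean-to-integer mapping (not part of the collector script)
def consolidation_states = [
    'false' : 0,
    'true'  : 1
]

assert consolidation_states['false'] == 0    // VM does not need consolidation
assert consolidation_states['true']  == 1    // VM needs consolidation
assert consolidation_states[null]    == null // property never came back from vCenter -> null downstream
```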
In the Iteration of VMs section, add a line for the consolidation_Needed def; this is where we actually translate the value to 0 or 1 with a toString() compare.

```groovy
// iterate over vms
vms.each { vm ->
    // Get AD info
    def wildvalue = vm.MOR.val
    def power_state = power_states[vm?.runtime?.powerState?.val]
    def consolidation_Needed = consolidation_states[vm?.runtime?.consolidationNeeded?.toString()]
```

Finally, at the end, add a println so the value goes to standard out:

```groovy
    // Print that data
    println wildvalue + ".PowerState=" + power_state
    println wildvalue + ".ConsolidationNeeded=" + consolidation_Needed
```

Test the script against a vCenter server to make sure you are getting a 0 or 1 response from a server; if you are getting a null, something is misconfigured. Once you get that working, it is just a simple matter of adding the datapoint and graph.
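For reference, here is a minimal standalone Groovy sketch of just the translation and print step described above, with the wildvalue and the runtime property hard-coded as hypothetical values (it does not call the vSphere API); the real script pulls them from the VM objects as shown in the snippets:

```groovy
// Minimal standalone sketch of the ConsolidationNeeded output line (illustrative values only)
def consolidation_states = [
    'false' : 0,
    'true'  : 1
]

def wildvalue = 'vm-101'        // hypothetical MOR value; the real script uses vm.MOR.val
def consolidationNeeded = false // hypothetical stand-in for vm?.runtime?.consolidationNeeded
def consolidation_Needed = consolidation_states[consolidationNeeded?.toString()]

println wildvalue + ".ConsolidationNeeded=" + consolidation_Needed
// Prints: vm-101.ConsolidationNeeded=0
```

Assuming the new datapoint is defined the same way as the existing PowerState datapoint, its key would follow the same instance-prefixed pattern (e.g. ##WILDVALUE##.ConsolidationNeeded), but check the existing datapoints in your copy of the datasource to confirm the exact convention.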
FYI: LM can trigger ESXi 6.5 hostd to crash
Hi, I just got done working with VMware support on an issue where our ESXi 6.5 hostd process would crash during the boot phase. We eventually traced it back to a bug in some vSAN code that LM monitoring polls; it doesn't matter whether you're running vSAN in your environment or not. Our workaround has been to disable host-level monitoring in LM for our ESXi hosts for now, and it has been stable ever since. The expected fix is scheduled for release from VMware in Q3 2018.