Updating ESXi Vendor Images with Latest Patches

This post will walk through updating a vendor-specific ESXi image with updated VIBs. In this instance we are applying patch ESXi650-201803001, which bundles the esx-base, esx-tboot, vsan, and vsanhealth VIBs (ESXi650-201803401-BG) with the updated CPU microcode (ESXi650-201803402-BG), providing part of the hypervisor-assisted guest mitigation against the Branch Target Injection issue (CVE-2017-5715), commonly known as Spectre. The equivalent patches for ESXi 6.0 and 5.5 can be found in VMware Security Announcement VMSA-2018-0004.3 here.

For more information on Meltdown and Spectre see this blog post. VMware's responses can be found here, on the VMware Security & Compliance Blog here, and in VMware Security Announcement VMSA-2018-0004 here. Ensure your vCenter Server is also patched accordingly by following the guidance in this post.

There are a number of ways to push out ESXi patches to hosts, such as the CLI, Update Manager, and Auto Deploy. The latest images can be downloaded from the patch repository here. Because vendor-specific images are typically slower to be updated than the main VMware image, at the time of writing there is no vendor image available that mitigates Spectre. Therefore the steps below cover replacing VIBs in the HPE ESXi 6.5 image with the updated VIBs released by VMware. The same process can be used for other vendor images and ESXi versions by downloading the appropriate images; however, the custom image we create may not be supported, and therefore may not be appropriate for production environments.

meltdown-spectre-vmware

The steps below assume Auto Deploy and Image Builder are already set up. You don't need to use Auto Deploy to use Image Builder, but the services do need to be started; if they're not, see the Auto Deploy Guide. Download the latest vendor image (in my case I am using HPE) and the latest ESXi build from the patch repository here.

Log into the vSphere web client and click the Auto Deploy icon from the home page.

Auto_Deploy

Click the Software Depots tab. Software depots contain images or software packages. If you don't already have a custom software depot, click the Add Software Depot icon to add a new custom depot where images will be stored. Use the Import Software Depot icon to upload a zip file; in this case we need to add the vendor image (in my case VMware-ESXi-6.5.0-Update1-7388607-HPE-650.U1.10.2.0.23-Feb2018-depot.zip) and the updated VMware image (ESXi650-201803001.zip).

Select the software depot containing the vendor image, in my case VMware-ESXi-6.5.0-Update1-7388607-HPE-650.U1.10.2.0.23-Feb2018-depot. Under Image Profiles select the vendor image and click Clone.

Auto_Deploy_2

We are cloning the vendor image so that the outdated VIBs can be replaced with the updated versions. Enter a name and vendor for the image, and select the custom software depot where the clone will be stored.

Image_Builder_2

On the next page the software packages are listed; those already included in the build are ticked. Ensure the Software depot drop-down is set to All depots.

Review the updated VIBs in the appropriate ESXi patch release.

ESXi650-201803401-BG:

  • VMware_bootbank_esx-base_6.5.0-1.41.7967591
  • VMware_bootbank_esx-tboot_6.5.0-1.41.7967591
  • VMware_bootbank_vsan_6.5.0-1.41.7547709
  • VMware_bootbank_vsanhealth_6.5.0-1.41.7547710

ESXi650-201803402-BG:

  • VMware_bootbank_cpu-microcode_6.5.0-1.41.7967591

Use the search function to find each of the updated VIBs. Un-select the existing version and select the new version to add it to the build.

Image_Builder_3

For the Spectre patches, remember to also include the updated CPU microcode VIB.

Image_Builder_4

Once complete, click Next and Finish. Select the custom software depot where the image has been created. The image is now ready to use with an Auto Deploy rule, or it can be exported in ISO or ZIP format by right-clicking and selecting Export Image Profile.

Image_Builder_5
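
If you prefer the command line, the same clone, swap, and export workflow can be scripted with the Image Builder PowerCLI cmdlets against the offline depot zip files. The sketch below is a rough equivalent of the GUI steps above; the cloned profile name, vendor profile filter, and output paths are examples, so adjust them for your environment.

# Add the vendor depot and the VMware patch depot downloaded earlier
Add-EsxSoftwareDepot .\VMware-ESXi-6.5.0-Update1-7388607-HPE-650.U1.10.2.0.23-Feb2018-depot.zip
Add-EsxSoftwareDepot .\ESXi650-201803001.zip

# Clone the vendor image profile (profile and vendor names are examples)
$vendor = Get-EsxImageProfile -Name "HPE-ESXi-6.5.0-Update1*" | Select-Object -First 1
New-EsxImageProfile -CloneProfile $vendor -Name "HPE-ESXi-6.5.0-U1-Spectre" -Vendor "HPE"

# Add the newest version of each updated VIB; adding a newer version replaces the existing one
$vibs = "esx-base","esx-tboot","vsan","vsanhealth","cpu-microcode" |
    ForEach-Object { Get-EsxSoftwarePackage -Name $_ -Newest }
Add-EsxSoftwarePackage -ImageProfile "HPE-ESXi-6.5.0-U1-Spectre" -SoftwarePackage $vibs

# Export the custom image as an offline bundle and/or ISO
Export-EsxImageProfile -ImageProfile "HPE-ESXi-6.5.0-U1-Spectre" -ExportToBundle -FilePath .\HPE-ESXi-6.5.0-U1-Spectre.zip
Export-EsxImageProfile -ImageProfile "HPE-ESXi-6.5.0-U1-Spectre" -ExportToIso -FilePath .\HPE-ESXi-6.5.0-U1-Spectre.iso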

For the Spectre updates, after the new image has been installed on or applied to an ESXi host we can perform some verification of the hypervisor-assisted guest mitigation. This blog post from virtuallyGhetto provides PowerCLI functions and instructions for validating that the correct microcode and patches are present. In the example below host 1 has been updated but host 2 has not:

Verify_1
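
Independently of those functions, a quick PowerCLI check of the host build number and installed VIBs will confirm the patch and microcode are in place. A minimal sketch, assuming you are already connected to vCenter; the host name esxi01 is an example:

# The build number should match the build listed in the ESXi650-201803001 release notes
Get-VMHost esxi01 | Select-Object Name, Version, Build

# Confirm the updated VIBs, including cpu-microcode, are installed on the host
$esxcli = Get-EsxCli -VMHost esxi01 -V2
$esxcli.software.vib.list.Invoke() |
    Where-Object { $_.Name -in "esx-base","esx-tboot","vsan","vsanhealth","cpu-microcode" } |
    Select-Object Name, Version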

The virtual machines can also be validated to confirm they are seeing the new CPU features; a power cycle is required for each VM. Before power cycling:

Verify_2

After power cycling:

Verify_3

ESXi 6.5 FCoE Adapters Missing

After installing or upgrading to ESXi 6.5, FCoE adapters and datastores may be missing. In this case the hardware in use is an HP ProLiant BL460c Gen9 server with HP FlexFabric 10Gb 2-port 536FLB adapters, although this seems to have been a problem for other vendors (see here) and versions too.

This issue should be resolved with a vendor-provided driver that has the FCoE auto-discovery-on-boot parameter enabled. Cross-reference your hardware against the VMware Hardware Compatibility Guide here, and confirm you are using the correct version of the bnx2fc driver and firmware. If no updated driver is available from the vendor, review the workarounds outlined below.

Stateful Installs

Credit to this article: SSH onto the host and run the following commands.

esxcli fcoe adapter list lists the discovered FCoE adapters; at this stage there will be no results.

esxcli fcoe nic list lists the adapters available as potential FCoE candidates. Locate the name of the adapter.

esxcli fcoe nic enable -n vmnicX enables the adapter, replace vmnicX with the adapter name, for example vmnic2.

esxcli fcoe nic discover -n vmnicX enables discovery on the adapter, replace vmnicX with the adapter name.

esxcli fcoe adapter list lists the discovered FCoE adapters again; you should now see the FCoE adapters listed.
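
Putting the commands together, the full sequence for a single adapter looks like the sketch below; vmnic2 is an example, so substitute the adapter name returned by the nic list command.

esxcli fcoe adapter list            # no results at this stage
esxcli fcoe nic list                # identify the candidate adapter, e.g. vmnic2
esxcli fcoe nic enable -n vmnic2    # enable the adapter for FCoE
esxcli fcoe nic discover -n vmnic2  # enable discovery on the adapter
esxcli fcoe adapter list            # the FCoE adapter should now be listed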

The storage adapters should now be showing in the vSphere web client; however, if you are using stateless installs with Auto Deploy, this workaround is not persistent and is lost at reboot.

storageadapters2

Stateless Installs

Credit to this article: using the steps below we were able to create a custom script bundle that enables discovery on the FCoE adapters as part of the deploy rule. Custom script bundles open up a lot of possibilities with Auto Deploy, but at this stage they are CLI only. I also noticed that although a deploy rule created with a script bundle from the CLI shows in the GUI, editing that rule in the GUI (for something unrelated, e.g. an updated host profile) removes the script bundle without warning. This is something you would need to weigh up against your environment; if you are already using the CLI to configure deploy rules it shouldn't be a problem.

PowerCLI can now be installed directly through PowerShell; if you don't already have PowerCLI installed, see here.
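
For example, from a PowerShell prompt it can be pulled straight from the PowerShell Gallery:

# Install the PowerCLI module from the PowerShell Gallery for the current user
Install-Module -Name VMware.PowerCLI -Scope CurrentUser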

  • First up we'll need to create the script on a Linux / Unix system. I just used a test ESXi host we had kicking about, over SSH. Type vi scriptname.sh, replacing scriptname with an appropriate name for your script.
  • The file will open, type i to begin editing.
  • On the first line enter #!/bin/ash followed by the relevant enable and discover commands from the section above. You can see in the example below the commands for enabling vmnic2 and vmnic3 as FCoE adapters.

ssh1
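
For reference, the script body would look something like the sketch below; vmnic2 and vmnic3 are the adapters in my environment, so substitute your own.

#!/bin/ash
# Enable FCoE and discovery on each adapter at boot
esxcli fcoe nic enable -n vmnic2
esxcli fcoe nic discover -n vmnic2
esxcli fcoe nic enable -n vmnic3
esxcli fcoe nic discover -n vmnic3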

  • Press escape to leave the text editor and type :wq to save changes to the file and close.
  • Next we need to create the script bundle that will be imported into Auto Deploy. Type tar -cvzf bundlename.tgz scriptname.sh

ssh2

  • Copy the script bundle with the .tgz extension to your local machine, or the computer from where you will be using PowerCLI to create the deploy rule. In my case I copied the file over with WinSCP.
  • You should also have an ESXi image in zip format; make a note of the location. Add the script bundle and the ESXi software depot by running the following commands: Add-ScriptBundle location\bundlename.tgz and Add-EsxSoftwareDepot location\file.zip. If you need further assistance with building custom images or using PowerCLI to manage Auto Deploy, see the VMware Auto Deploy 6.x Guide and How to Create Custom ESXi Images posts.

ps1

  • Build the deploy rule using your own variables; if you're already using Auto Deploy I'm assuming you know this bit, we're just adding an additional item for the script bundle. See the guide referenced above if you need assistance creating deploy rules. I have used:
    • New-DeployRule -Name "Test Rule" -Item "autodeploy-script", "HPE-ESXi-6.5.0-Build-5146846", LAB_Cluster -Pattern "ipv4=192.168.0.101" | Add-DeployRule

ps2

  • The deploy rule is created and activated; I can now see it in the Auto Deploy GUI in the vSphere web client, with the associated script bundle. When the host boots from the deploy rule the script is extracted and executed, and the FCoE adapters are automatically enabled and discovered on boot.

autodeployGUI

  • If you don't pipe the command to Add-DeployRule then the deploy rule will be created but shown as inactive. You can activate it using the GUI, but do not edit the rule using the GUI or the script bundle will break.
  • If you are updating an existing image, don't forget to remove cached rules by remediating host associations under the Deployed Hosts tab; this can also be done from PowerCLI, as sketched below.
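
The remediation can be scripted with the deploy rule compliance cmdlets. A minimal sketch, where the host name esxi01 is an example:

# Check whether the host's cached rule set matches the active rule set
$result = Test-DeployRuleSetCompliance -VMHost (Get-VMHost esxi01)

# Remediate the host association so the updated rule and image are used at next boot
Repair-DeployRuleSetCompliance -TestResult $result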

vCenter Server Appliance Integrated TFTP Server

This post covers the steps required to use the vCenter Server Appliance for Auto Deploy, with the built-in TFTP server in vSphere 6.5. For more information on Auto Deploy, and to see the process for creating ESXi images and deploy rules to boot hosts, see the VMware Auto Deploy 6.x Guide. This post assumes that you have a working vCenter Server Appliance, and may be of use if you have recently migrated from Windows vCenter to VCSA.

Enable Auto Deploy

Open the vSphere web client and click System Configuration, Nodes. Select the vCenter Server and open the Related Objects tab. The Auto Deploy, ImageBuilder Service, and VMware vSphere ESXi Dump Collector services should all be set to Automatic and Running.

To start a service right click and select Start, then select Edit Startup Type and choose Automatic.

services

Log out of the web client and log back in. You should now see the Auto Deploy icon on the home page.

autodeploy1

Enable TFTP

Now that Auto Deploy is enabled we can configure the TFTP server. Enable SSH on the VCSA by browsing to the Appliance Management page: https://VCSA:5480 where VCSA is the IP or FQDN of your appliance.

Log in as the root account. From the Access page enable SSH Login and Bash Shell.

SSH

SSH onto the vCenter Appliance using a client such as PuTTY, and log in with the root account. First type shell and hit enter to launch Bash.

To start the TFTP service enter service atftpd start. Check the service is started using service atftpd status.

startservice

To allow TFTP traffic through the firewall on port 69, run iptables -A port_filter -p udp -m udp --dport 69 -j ACCEPT. Validate traffic is being accepted over port 69 using iptables -nL | grep 69.

firewall

The TFTP server will now work; however, we need to make a couple of additional changes so the configuration persists after the VCSA is rebooted. There isn't an official VMware way of doing this, and as it's done in Linux there may be more than one way of achieving what we want. Basically I am going to back up iptables and create a script that restores iptables and starts the TFTP service when the appliance boots. The steps outlined below worked for me; however, as a reminder, this is not supported by VMware, and if you are a Linux expert you'll probably find a better way round it.

The following commands are all run in Bash on the vCenter Server Appliance; you can stay in the existing session we were using above.

First make a copy of the existing iptables config by running iptables-save > /etc/iptables.rules.

Next change the directory by running cd /etc/init.d, and create a new script: vi scriptname.sh, for example: vi starttftp.sh.

Press i to begin typing. I used the following, which I copied from the Image Builder Service startup script and modified for TFTP.

#! /bin/sh
#
# TFTP Start/Stop the TFTP service and allow port 69
#
# chkconfig: 345 80 05
# description: atftpd

### BEGIN INIT INFO
# Provides: atftpd
# Required-Start: $local_fs $remote_fs $network
# Required-Stop:
# Default-Start: 3 5
# Default-Stop: 0 1 2 6
# Description: TFTP
### END INIT INFO

service atftpd start
iptables-restore -c < /etc/iptables.rules

The file must be in the above format to be compatible with chkconfig, which runs the script at startup. I left the defaults in from the Image Builder Service as it made sense they started at the same time and had the same dependencies. If you wish to modify further, see the following sources: Bash commands, Script Options, Startup, and vmwarebits.com for the iptables commands.

Press escape to leave the editor and :wq to save the file and quit.

Next set execute permissions on the script by running chmod +x scriptname.sh, for example: chmod +x starttftp.sh.

To set the script to run at startup use chkconfig --add scriptname.sh, for example: chkconfig --add starttftp.sh.

Reboot the vCenter appliance to test that the script runs. If successful, the atftpd service will be started and port 69 allowed; you can check these with service atftpd status and iptables -nL | grep 69.

Close the session and disable SSH if required.

Configure DHCP

In this example I will be using PXE boot to boot the ESXi hosts using a DHCP reservation. On the DHCP scope that will be serving the hosts I have configured reservations and enabled options 066 and 067. The value for option 066 (Boot Server Host Name) is the IP address or FQDN of the vCenter Server where TFTP is running. For option 067 (Bootfile Name) I have entered the BIOS DHCP File Name (undionly.kpxe.vmw-hardwired).

DHCP
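
If, as the option names suggest, you are using the Windows DHCP role, the same values can be set with the DhcpServer PowerShell module. A sketch, where the scope ID and VCSA address are examples:

# Option 066 (Boot Server Host Name): the VCSA running the TFTP server
Set-DhcpServerv4OptionValue -ScopeId 192.168.0.0 -OptionId 66 -Value "vcsa.lab.local"
# Option 067 (Bootfile Name): the BIOS DHCP file name from the Auto Deploy configuration
Set-DhcpServerv4OptionValue -ScopeId 192.168.0.0 -OptionId 67 -Value "undionly.kpxe.vmw-hardwired"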

Now that Auto Deploy is up and running using the built-in components of VCSA 6.5, you can begin creating ESXi images and deploy rules to boot hosts using the Auto Deploy GUI. See the VMware Auto Deploy 6.x Guide.