
Updating ESXi Vendor Images with Latest Patches

This post will walk through updating a vendor specific ESXi image with updated VIBs. In this instance we are applying patch ESXi650-201803001, which bundles the esx-base, esx-tboot, vsan, and vsanhealth VIBs (ESXi650-201803401-BG) with the updated CPU microcode (ESXi650-201803402-BG), to provide part of the hypervisor-assisted guest mitigation for the Branch Target Injection issue (CVE-2017-5715), commonly known as Spectre. The appropriate patches for ESXi 6.0 and 5.5 can be found in VMware Security Announcement VMSA-2018-0004.3 here.

For more information on Meltdown and Spectre see this blog post. VMware’s responses can be found here, on the VMware Security & Compliance Blog here, and in VMware Security Announcement VMSA-2018-0004 here. Ensure your vCenter Server is also patched accordingly by following the guidance in this post.

There are a number of ways to push out ESXi patches to hosts, such as the CLI, Update Manager, or Auto Deploy. The latest images can be downloaded from the patch repository here. Vendor specific images are typically slower to be updated than the main VMware image, and at the time of writing there is no vendor image available that mitigates Spectre. Therefore the steps below will cover replacing VIBs in the HPE ESXi 6.5 image with the updated VIBs released by VMware. The same process can be used for other vendor images and ESXi versions by downloading the appropriate images; however the custom image we create may not be supported, and therefore may not be appropriate for production environments.

meltdown-spectre-vmware

The steps below assume Auto Deploy and Image Builder are already set up. You don’t need to use Auto Deploy to be able to use the Image Builder, but the services do need to be started; if they’re not, see the Auto Deploy Guide. Download the latest vendor image, in my case I am using HPE, and the latest ESXi build from the patch repository here.

Log into the vSphere web client and click the Auto Deploy icon from the home page.

Auto_Deploy

Click the Software Depots tab. Software depots contain images or software packages. If you don’t already have a custom software depot, click the Add Software Depot icon to add a new custom depot where images will be stored. Use the Import Software Depot option to upload a zip file; in this case we need to add the vendor image (in my case VMware-ESXi-6.5.0-Update1-7388607-HPE-650.U1.10.2.0.23-Feb2018-depot.zip) and the updated VMware image (ESXi650-201803001.zip).

Select the software depot containing the vendor image, in my case VMware-ESXi-6.5.0-Update1-7388607-HPE-650.U1.10.2.0.23-Feb2018-depot. Under Image Profiles select the vendor image and click Clone.

Auto_Deploy_2

We are cloning the vendor image so that the outdated VIBs can be swapped for the updated versions. Enter a name and vendor for the image, and select the custom software depot created earlier.

Image_Builder_2

On the next page the software packages are listed; those already included in the build are ticked. Ensure the Software depot is set to All depots in the drop-down.

Review the updated VIBs in the appropriate ESXi patch release.

ESXi650-201803401-BG:

  • VMware_bootbank_esx-base_6.5.0-1.41.7967591
  • VMware_bootbank_esx-tboot_6.5.0-1.41.7967591
  • VMware_bootbank_vsan_6.5.0-1.41.7547709
  • VMware_bootbank_vsanhealth_6.5.0-1.41.7547710

ESXi650-201803402-BG:

  • VMware_bootbank_cpu-microcode_6.5.0-1.41.7967591

Use the search function to find each of the updated VIBs. Deselect the existing version and select the new version to add it to the build.

Image_Builder_3

For the Spectre patches remember to include the CPU microcode.

Image_Builder_4

Once complete click Next and Finish. Select the custom software depot where the image has been created. The image is now ready to use with an Auto Deploy rule, or can be exported in ISO or ZIP format by right clicking and selecting Export Image Profile.

Image_Builder_5

For the Spectre updates, after the new image has been installed on or applied to an ESXi host we can perform some verification of the hypervisor-assisted guest mitigation. This blog post from virtuallyGhetto provides PowerCLI functions and instructions for validating that the correct microcode and patches are present. In the example below I have updated host 1 but not host 2:

Verify_1

The virtual machines can also be validated to confirm they are seeing the new CPU features; a power cycle is required for each VM. Before power cycling:

Verify_2

After power cycling:

Verify_3

ESXi 6.5 FCoE Adapters Missing

After installing or upgrading to ESXi 6.5, FCoE adapters and datastores may be missing. In this case the hardware in use is an HP ProLiant BL460c Gen9 server with HP FlexFabric 10Gb 2-port 536FLB adapters, although this seems to have been a problem for other vendors (see here) and versions too.

This issue should be resolved with a driver provided by the vendor which has the FCoE auto discovery on boot parameter enabled. Cross-reference your hardware against the VMware Hardware Compatibility Guide here, and confirm you are using the correct version of the bnx2fc driver and firmware. If no updated driver is available from the vendor then review the workarounds outlined below.

Stateful Installs

Credit to this article: SSH onto the host and run the following commands.

esxcli fcoe adapter list lists the discovered FCoE adapters; at this stage there will be no results.

esxcli fcoe nic list lists the adapters available as potential FCoE candidates. Locate the name of the adapter.

esxcli fcoe nic enable -n vmnicX enables the adapter; replace vmnicX with the adapter name, for example vmnic2.

esxcli fcoe nic discover -n vmnicX enables discovery on the adapter; replace vmnicX with the adapter name.

esxcli fcoe adapter list lists the discovered FCoE adapters; you should now see the FCoE adapters listed.
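
Putting the workaround together, a typical session looks like this; vmnic2 is an example, substitute the adapter name identified with esxcli fcoe nic list:

esxcli fcoe nic list
esxcli fcoe nic enable -n vmnic2
esxcli fcoe nic discover -n vmnic2
esxcli fcoe adapter list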

The storage adapters should now be showing in the vSphere web client. However, if you are using stateless installs with Auto Deploy, this workaround is not persistent and is lost at reboot.

storageadapters2

Stateless Installs

Credit to this article: we were able to create a custom script bundle to enable discovery on the FCoE adapters as part of the deploy rule, using the steps below. Custom script bundles open up a lot of possibilities with Auto Deploy, but at this stage they are CLI only. I also noticed that if you create a deploy rule with a script bundle from the CLI, the bundle shows in the GUI, but if you then edit that rule in the GUI (for something unrelated, e.g. an updated host profile) the script bundle is removed without warning. This is something you need to weigh up against your environment; if you are already using the CLI to configure deploy rules it shouldn’t be a problem.

PowerCLI can now be installed directly through PowerShell, if you don’t already have PowerCLI installed see here.

  • First up we’ll need to create the script on a Linux / Unix system. I just used a test ESXi host we had kicking about, over SSH. Type vi scriptname.sh, replacing scriptname.sh with an appropriate name for your script.
  • The file will open, type i to begin editing.
  • On the first line enter #!/bin/ash followed by the relevant enable and discover commands from the section above. You can see in the example below the commands for enabling vmnic2 and vmnic3 as FCoE adapters; a text version is reproduced after the screenshot.

ssh1
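
For reference, the script shown above looks something like this (vmnic2 and vmnic3 are examples; use the adapter names for your host):

#!/bin/ash
# enable and discover FCoE on the relevant adapters at boot
esxcli fcoe nic enable -n vmnic2
esxcli fcoe nic discover -n vmnic2
esxcli fcoe nic enable -n vmnic3
esxcli fcoe nic discover -n vmnic3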

  • Press escape to leave the text editor and type :wq to save changes to the file and close.
  • Next we need to create the script bundle that will be imported into Auto Deploy. Type tar -cvzf bundlename.tgz scriptname.sh

ssh2

  • Copy the script bundle with the .tgz extension to your local machine, or the computer from where you will be using PowerCLI to create the deploy rule. In my case I copied the file over with WinSCP.
  • You should also have an ESXi image in zip format, make a note of the location. Add the script bundle and the ESXi software depot by running the following commands: Add-ScriptBundle location\bundlename.tgz and Add-EsxSoftwareDepot location\file.zip. If you need further assistance with building custom images or using PowerCLI to manage Auto Deploy see the VMware Auto Deploy 6.x Guide and How to Create Custom ESXi Images posts.

ps1

  • Build the deploy rule using your own variables, again if you’re already using Auto Deploy I’m assuming you know this bit, we’re just adding an additional item in for the script bundle. See the guide referenced above if you need assistance creating deploy rules. I have used:
    • New-DeployRule -Name "Test Rule" -Item "autodeploy-script","HPE-ESXi-6.5.0-Build-5146846",LAB_Cluster -Pattern "ipv4=192.168.0.101" | Add-DeployRule

ps2

  • The deploy rule is created and activated; I can now see it in the Auto Deploy GUI in the vSphere web client, with the associated script bundle. When the host boots from the deploy rule the script is extracted and executed, and the FCoE adapters are automatically enabled and discovered on boot.

autodeployGUI

  • If you don’t use the | Add-DeployRule parameter then the deploy rule will be created but shown as inactive. You can activate it using the GUI, but do not edit the rule using the GUI or the script bundle will break.
  • If you are updating an existing image then don’t forget to remove cached rules by remediating host associations, under the Deployed Hosts tab.

vCenter Server Appliance Integrated TFTP Server

This post covers the steps required to use the vCenter Server Appliance for Auto Deploy, with the built in TFTP server in vSphere 6.5. For more information on Auto Deploy, and to see the process for creating ESXi images and deploy rules to boot hosts, see the VMware Auto Deploy 6.x Guide. This post assumes that you have a working vCenter Server Appliance, and may be of use if you have recently migrated from Windows vCenter to VCSA.

Enable Auto Deploy

Open the vSphere web client and click System Configuration, Nodes. Select the vCenter Server and open the Related Objects tab. The Auto Deploy, ImageBuilder Service, and VMware vSphere ESXi Dump Collector services should all be set to Automatic and Running.

To start a service right click and select Start, then select Edit Startup Type and choose Automatic.

services

Log out of the web client and log back in. You should now see the Auto Deploy icon on the home page.

autodeploy1

Enable TFTP

Now that Auto Deploy is enabled we can configure the TFTP server. Enable SSH on the VCSA by browsing to the Appliance Management page: https://VCSA:5480 where VCSA is the IP or FQDN of your appliance.

Log in as the root account. From the Access page enable SSH Login and Bash Shell.

SSH

SSH onto the vCenter Appliance, using a client such as Putty, and log in with the root account. First type shell and hit enter to launch Bash.

To start the TFTP service enter service atftpd start. Check the service is started using service atftpd status.

startservice

To allow TFTP traffic through the firewall on port 69, we must run iptables -A port_filter -p udp -m udp --dport 69 -j ACCEPT. Validate traffic is being accepted over port 69 using iptables -nL | grep 69.

firewall

The TFTP server will now work; however we need to make a couple of additional changes to make the configuration persistent after the VCSA is rebooted. There isn’t an official VMware way of doing this, and as it’s done in Linux there may be more than one way of achieving what we want. Basically I am going to back up iptables and create a script to restore iptables and start the TFTP service when the appliance boots. The steps are outlined below and this worked for me; however, as a reminder, this is not supported by VMware, and if you are a Linux expert you’ll probably find a better way round it.

The following commands are all run in Bash on the vCenter Server Appliance, you can stay in the existing session we were using above.

First make a copy of the existing iptables config by running iptables-save > /etc/iptables.rules.

Next change the directory by running cd /etc/init.d, and create a new script: vi scriptname.sh, for example: vi starttftp.sh.

Press i to begin typing. I used the following, which was copied from the Image Builder Service startup script, and modified for TFTP.

#! /bin/sh
#
# TFTP Start/Stop the TFTP service and allow port 69
#
# chkconfig: 345 80 05
# description: atftpd

### BEGIN INIT INFO
# Provides: atftpd
# Required-Start: $local_fs $remote_fs $network
# Required-Stop:
# Default-Start: 3 5
# Default-Stop: 0 1 2 6
# Description: TFTP
### END INIT INFO

service atftpd start
iptables-restore -c < /etc/iptables.rules

The file must be in the above format to be compatible with chkconfig which runs the script at startup. I left the defaults in from the Image Builder Service as it made sense they started at the same time and had the same dependencies. If you wish to modify further see the following sources: Bash commands, Script Options, Startup, and vmwarebits.com for the iptables commands.

Press escape to leave the editor and :wq to save the file and quit.

Next set execute permissions on the script by running chmod +x scriptname.sh, for example: chmod +x starttftp.sh.

To set the script to run at startup use chkconfig --add scriptname.sh, for example: chkconfig --add starttftp.sh.
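
To recap, the persistence steps in one place, using the example name starttftp.sh:

iptables-save > /etc/iptables.rules
cd /etc/init.d
vi starttftp.sh    # paste in the script shown above
chmod +x starttftp.sh
chkconfig --add starttftp.sh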

Reboot the vCenter appliance to test the script is running. If successful the atftpd service will be started and port 69 allowed, you can check these with service atftpd status and iptables -nL | grep 69.

Close the session and disable SSH if required.

Configure DHCP

In this example I will be using PXE boot to boot the ESXi hosts using a DHCP reservation. On the DHCP scope that will be serving the hosts I have configured reservations and enabled options 066 and 067. In the value for option 066 (Boot Server Host Name) goes the IP address or FQDN of the vCenter Server where TFTP is running. In the value for option 067 (Bootfile Name) I have entered the BIOS DHCP File Name (undionly.kpxe.vmw-hardwired).

DHCP

Now that Auto Deploy is up and running using the built-in components of VCSA 6.5, you can begin creating ESXi images and deploy rules to boot hosts using the Auto Deploy GUI. See the VMware Auto Deploy 6.x Guide.

ESXi Command Line Upgrades

Upgrading and patching of ESXi hosts can be done using the esxcli software commands, with either the online depot, or an offline bundle. For managing multiple hosts Update Manager is generally the best way to go. Update Manager is now built into VCSA 6.5 (vCenter Server Appliance 6.5 Install Guide) or can be installed on a Windows server (VMware Update Manager 6.0 Install Guide / VMware Update Manager 6.5 Install Guide).

In both the methods outlined below we will be connecting to the ESXi host via SSH. For assistance with enabling SSH review this KB article; remember to disable SSH when you’re done. Before beginning you should ensure any powered on virtual machines are shut down or migrated off the host. The host should be placed into maintenance mode, and requires a reboot after patches are applied. You may find the following commands of use:

Lists the installed ESXi build version: vmware -v

Lists installed vibs: esxcli software vib list

List VMs present on the host: vim-cmd vmsvc/getallvms

Gracefully shut down a VM, replacing number with the VMID obtained from the above command: vim-cmd vmsvc/power.shutdown number

Power off a VM, replacing number with the VMID obtained from the above command: vim-cmd vmsvc/power.off number

Power on a VM, replacing number with the VMID obtained from the above command: vim-cmd vmsvc/power.on number

Enter maintenance mode: vim-cmd hostsvc/maintenance_mode_enter

Exit maintenance mode: vim-cmd hostsvc/maintenance_mode_exit

When installing individual vibs replace -d with -v, for example: esxcli software vib install -v viburl
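
Putting a few of these together, a typical pre-patch sequence might look like the following, where 5 is an example VMID returned by vim-cmd vmsvc/getallvms:

vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.shutdown 5
vim-cmd hostsvc/maintenance_mode_enter
# apply patches and reboot, then:
vim-cmd hostsvc/maintenance_mode_exit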

The esxcli software commands below all use update rather than install; update ensures that only newer contents of a patch are applied, so if a system contains newer revisions of the selected patches these will not be overwritten. The install method can potentially overwrite existing drivers, and therefore the update method is recommended for upgrading ESXi and installing patches, to prevent an unbootable state.

Online Depot

Useful for patching or upgrading individual hosts which have an internet connection and sufficient boot drive capacity. Open an SSH connection to the ESXi host using a client such as Putty, and log in with the root account. First enter the following command to open the firewall for outgoing http requests:

esxcli network firewall ruleset set -e true -r httpClient

Find the image profile to upgrade to by reviewing the ESXi patch tracker here. To upgrade the ESXi host run the following command, replacing Imageprofile with the desired image profile name.

esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p Imageprofile

For example:

esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.0.0-20161104001-standard

When the upgrade is complete use reboot to restart the host. Finally close the outgoing http port:

esxcli network firewall ruleset set -e false -r httpClient

Offline Bundle

First download the relevant offline bundle from VMware, for upgrades ESXi ISO images can be found here, patches in zip format can be found here.

Next we need to upload the downloaded file to a datastore the ESXi host or hosts have access to. Log into the vSphere web client or the ESXi host UI. Navigate to the Storage view, right click the datastore and select Browse Files. Click the upload file icon and select the zip file downloaded earlier. With the patches now accessible from the host we can start the update process.

Open an SSH connection to the ESXi host using a client such as Putty. Install the downloaded updates using the following command, replacing datastore with the name or UUID of the datastore, and zip with the file name of the downloaded patches:

esxcli software vib update -d /vmfs/volumes/datastore/zip

For example:

esxcli software vib update -d /vmfs/volumes/Datastore01/ESXi600-201611001.zip

Check the installation result; a reboot is required. The content listed below this is a breakdown of the VIBs installed, removed, and skipped. Restart the host using the reboot command.

result
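
After the host has restarted, one way to confirm the new build and that the expected VIBs are present:

vmware -v
esxcli software vib list | grep esx-base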

Following on from upgrading or patching an ESXi host you should also ensure VMware Tools is updated on any guest virtual machines.

For more information on ESXi command line tools see the Troubleshooting with ESXi Shell and vSphere Management Assistant Guide posts.

vSphere Management Assistant Guide

The vSphere Management Assistant (vMA) can be used to remotely manage and troubleshoot multiple hosts from the command line. It is a SUSE Linux Enterprise based virtual appliance deployed within your vSphere infrastructure, allowing centralised management and troubleshooting of multiple ESXi hosts with automatic login, and providing scripting tools for developers. The vMA appliance includes the vSphere Command Line Interface (vCLI), vSphere SDK for Perl, and components for logging and authentication. The vCLI can also be installed separately on a machine of your choice running Windows or Linux. The standalone vCLI installation allows administrators to run all the commands available within the vMA; if you’re interested in installing vCLI standalone, v6.5 can be downloaded here as a simple executable install. Review the release notes here for system requirements.

This post will cover the installation and configuration of vSphere Management Assistant 6.5; compatible with vSphere 5.0 and above. For managing individual hosts, locally or remotely, the ESXi Shell can be used, see the Troubleshooting with ESXi Shell post.

Installing vMA

vSphere Management Assistant v6.5 can be downloaded here, review the release notes here. Unzip the contents of the download and make a note of the file location.

In order to deploy the virtual appliance we need an available Network Protocol Profile. In the vSphere web client browse to the datacentre level where the appliance will reside, select the Manage tab and click Network Protocol Profiles. Click the green plus symbol to create a new profile, follow the wizard and assign the relevant network and settings to the profile.

networkprofile

The vSphere Management Assistant is a simple OVF deployment.

  • In the vSphere web client right click the host or cluster where the virtual appliance will reside. Click Deploy OVF Template.
  • Browse to the downloaded OVF file which was extracted from the .zip download and click Next.
  • Review the details of the appliance and click Next.
  • Accept the license terms and click Next.
  • Enter a name and location for the virtual appliance, click Next.
  • Select the storage to be used and click Next.
  • Select the network to use for the virtual machine and choose the IP allocation (DHCP or static). If static is selected enter the DNS servers, gateway and subnet mask. An additional page prompts for the IP address. Click Next.
  • On the summary page tick Power on after deployment and click Finish.

ovf1

If no Network Protocol Profile is present and associated with the network in use then the virtual appliance is unable to power on, and you will receive the error Cannot initialize property ‘vami.netmask0.vSphere_Management_Assistant_(vMA)’. Network ‘VM Network’ has no associated protocol profile. In this case you should ensure the profile has been created and correctly configured.

Once the appliance is powered on open the console. Enter 0 to check the configuration, use the relevant numbers to configure the default gateway, hostname, DNS, and IP address allocation. Once complete enter 1 to exit the setup program.

vma

You will be prompted to change the default password for the vi-admin account; enter the old password vmware and a new password. Once loaded you can connect to the vSphere Management Assistant using an SSH client such as Putty. You can manage the virtual appliance by browsing to https://vMA:5480 where vMA is the IP address or FQDN of the appliance.

Configuring vMA

Open an SSH connection to the IP address or FQDN of the vSphere Management Assistant. Log in as the vi-admin user with the password you changed during setup.

The vMA allows administrators to store credentials for automatic authentication when managing ESXi hosts. Using a component called vi-fastpass, two accounts are created and the passwords stored in an unreadable format: vi-admin (administrator account) and vi-user (read only). These accounts prevent the user from having to log in to each host, and facilitate unattended scheduled script operations.

Alternatively vMA can be configured to use Active Directory for authentication, providing more security controls. To use AD authentication the domain must be accessible from the vMA and DNS must be in place. The following commands are useful for AD tasks in vMA:

  • Join vMA to the domain: sudo domainjoin-cli join domain user where domain is the domain to join and user is the domain user with appropriate privileges.
  • Check the domain status: sudo domainjoin-cli query.
  • Remove vMA from the domain: sudo domainjoin-cli leave.
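
For example, joining a hypothetical lab.local domain with a domain account named svc-vma (both names are placeholders):

sudo domainjoin-cli join lab.local svc-vma
sudo domainjoin-cli query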

We can add ESXi hosts or vCenter Servers to vMA using the following commands:

  • To add a system to vMA using the default fastpass authentication: vifp addserver server --authpolicy fpauth --username user --password password where server is the ESXi host or vCenter Server to add, and user and password are the credentials to authenticate with.
  • To add a system to vMA using AD authentication: vifp addserver server --authpolicy adauth --username domain\\user where server is the FQDN of the server and domain\\user is the domain and user to authenticate with.
  • To list the systems added to vMA: vifp listservers.

With the systems authenticated and added to vMA we can now set a target system for executing vCLI commands or vSphere SDK for Perl scripts.

  • Use vifptarget -s server where server is the IP address or FQDN of the vCenter Server or ESXi host. The target system is shown in the command prompt.
  • You can add multiple targets and execute commands across multiple ESXi hosts using the bulkAddServers and mcli scripts, explained in this post by William Lam.
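
Putting these steps together, a first session might look like this (the server name and credentials are examples):

vifp addserver esxi01.lab.local --authpolicy fpauth --username root --password MyPassword
vifp listservers
vifptarget -s esxi01.lab.local
esxcli system version get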

Using vMA

The same commands available in the ESXi Shell, such as esxcli, esxcfg, and esxtop (resxtop since we are connecting remotely), can be used with vCLI. Furthermore the vCLI includes a subset of vmware-cmd and vicfg commands. You can use the more and less commands to assist with truncating information, for example esxcli --help | more and esxcli --help | less. More allows for scrolling down only; use enter to scroll one line at a time and space to scroll a page at a time. Less allows for scrolling both backwards (ctrl + b) and forwards (ctrl + f); use q to return back to the command line. The following VMware documentation will get you started with the command line interface.

Let’s take a look at some of the most popular commands. The vmware-cmd command can be used for virtual machine operations; vicfg is primarily used for host operations and is intended to replace esxcfg long term. The main set of commands you will see for managing the vSphere environment is esxcli. The command set is broken down into namespaces; to view the available namespaces just enter esxcli.

namespaces

This propagates down the chain, for example use esxcli storage to view the options within the storage namespace. You can use --help at any level of esxcli for assistance.

storagenamespaces

You can view a full list of esxcli commands by entering esxcli esxcli command list. The screenshot below has been cropped and isn’t a full list; it may be beneficial to drill down through the relevant individual sections using the method outlined above.

list

As you can see the range of esxcli commands is vast, let’s take a look at a few examples.

  • esxcli hardware allows us to view and change the physical server hardware information and configuration. Use esxcli hardware cpu global set to enable or disable hyperthreading.

hardware

  • esxcli system allows us to view and change the ESXi system configuration. To enable or disable maintenance mode use esxcli system maintenanceMode set.

maintenance-mode

  • esxcli storage can be used for storage related tasks, use esxcli storage core path list to view attached LUNs, or esxcli storage vmfs upgrade to upgrade VMFS.

vmfs.PNG

  • esxcli network allows us to perform network related tasks, use esxcli network vswitch standard to create a new standard virtual switch.

switch
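
As concrete invocations of some of the examples above (the vSwitch name is illustrative; check --help for the full parameter lists):

esxcli system maintenanceMode set --enable true
esxcli storage core path list
esxcli network vswitch standard add --vswitch-name=vSwitch1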

For details on patching or upgrading ESXi from the command line see the ESXi Command Line Upgrades post. I also found this great blog post by Chanaka Ekanayake who has put together some of the most useful commands and examples for use with vMA and vCLI.

Troubleshooting with ESXi Shell

The ESXi Shell gives us a subset of commands for troubleshooting and managing individual ESXi hosts. ESXi Shell can be useful to quickly investigate and resolve issues with single hosts, for example if management agents are unresponsive. This section will cover how to enable, access, and use the ESXi Shell. It is important to remember that when the ESXi Shell and SSH services are enabled you are potentially opening up vulnerabilities which attackers may be able to exploit. For this reason you will see a warning in the web client on any hosts where the ESXi Shell and / or SSH service is enabled. You can suppress the ESXi Shell warning by following this KB. Remember to disable the ESXi Shell when you have finished. It is also possible to configure time-outs when enabling the ESXi Shell; the availability time-out determines how long the ESXi Shell is enabled for, and the idle time-out determines how long idle sessions are kept connected.

You can remotely manage multiple hosts using the vSphere Management Assistant, for more information see the vSphere Management Assistant Guide.

Enabling ESXi Shell

By default the ESXi Shell is disabled, it can be enabled using the DCUI or web client (local or vSphere).

  • DCUI (Direct Console User Interface)
    • Access the console of the ESXi host by plugging in a monitor and keyboard, or establishing a remote console session using remote server tools such as ILO, IMM, etc.
    • Press F2 and enter the root password. Browse to Troubleshooting Options.
    • Select ESXi Shell and press Enter to toggle between enabled and disabled. If you are going to access the Shell locally this is sufficient, for remote connections you must also enable SSH.
    • Press Esc twice to exit out of the menus.

esxishell

  • ESXi host web client (standalone hosts v6.5 and above)
    • Browse to the IP address or FQDN of the host and log in with the root password.
    • From the Navigation menu select Manage, and open the Services tab.
    • Locate and Start TSM for the ESXi Shell, and TSM-SSH for SSH if required.

esxiweb

  • vSphere web client (hosts connected to vCenter Server)
    • Browse to the IP address or FQDN of the vCenter Server and log in with an administrator account.
    • Locate the host in the inventory and select the Configure tab.
    • Scroll down to the Security Profile menu under System.
    • Click Edit and start the Direct Console UI, ESXi Shell, and SSH services.

vsphereweb

Access ESXi Shell

Once enabled, the ESXi Shell can be accessed locally using the DCUI or remotely over SSH.

  • For DCUI access to the ESXi Shell press ALT + F1 from the ESXi console screen. Log in with the root password.

dcui

  • For remote access open a connection over port 22 using an SSH client such as Putty, and log in with the root password.

putty

Using ESXi Shell

The ESXi Shell contains the full range of esxcli and esxtop commands, as well as esxcfg for legacy purposes (although be aware that esxcfg is deprecated and may be phased out in future releases). The ESXi Shell is useful for performing maintenance and troubleshooting on individual hosts; it cannot be used for scheduling scripted jobs. For managing multiple hosts and scripting use the vSphere CLI (vCLI), either as a local installation or with the vSphere Management Assistant (vMA).

Have a look in /usr/sbin to view the available commands for the ESXi Shell; enter cd /usr/sbin and then ls. Note that commands are case sensitive.

commands

esxtop is a powerful utility for examining ESXi host performance metrics and investigating performance issues. In the ESXi Shell enter esxtop, then use the single-key views such as c for CPU, m for memory, n for network, and d for disk; read more in the Troubleshooting with ESXTOP post.
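
esxtop can also be run non-interactively in batch mode to capture metrics to a file, for example (the delay and iteration values are illustrative):

esxtop -b -d 5 -n 12 > esxtopstats.csv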

esxcli is a comprehensive set of commands for managing the vSphere environment. The command set is broken down into namespaces, to view the available namespaces use the esxcli command.

namespaces

This propagates down the chain, for example use esxcli storage to view the options within the storage namespace. You can use --help at any level of esxcli for assistance.

storagenamespaces

You can view a full list of esxcli commands by entering esxcli esxcli command list. The screenshot below has been cropped and isn’t a full list; it may be beneficial to drill down through the relevant individual sections using the method outlined above.

list

As you can see the range of esxcli commands is vast, let’s take a look at a few examples.

  • esxcli hardware allows us to view and change the physical server hardware information and configuration. Use esxcli hardware cpu global set to enable or disable hyperthreading.

hardware

  • esxcli system allows us to view and change the ESXi system configuration. To enable or disable maintenance mode use esxcli system maintenanceMode set.

maintenance-mode

  • esxcli storage can be used for storage related tasks, use esxcli storage core path list to view attached LUNs, or esxcli storage vmfs upgrade to upgrade VMFS.

vmfs

  • esxcli network allows us to perform network related tasks, use esxcli network vswitch standard to create a new standard virtual switch.

switch

To exit the ESXi Shell use the exit command. Hopefully this post provides enough to get you started; if you are using the ESXi Shell on a regular basis and want to view previously executed commands see this post by William Lam. For details on patching or upgrading ESXi from the command line see the ESXi Command Line Upgrades post.

VMware Snapshot Overview

This post will talk about how VMware snapshots work, what they should and should not be used for, and provide a demonstration. A snapshot preserves the state and data of a virtual machine from a specific point in time. You can create multiple snapshots to save the virtual machine in different stages of a work process. Snapshots are managed using Snapshot Manager in the vSphere web client, or with PowerCLI. You should not manually alter any of the snapshot files as this may compromise the disk chain, with potential for data loss.

What happens when I take a snapshot?

When you take a snapshot of a virtual machine a number of files are created; a new delta disk (or child disk), in vmdk format, is created for each attached disk. The delta disks follow a naming convention and sequence of vmname-000001.vmdk, vmname-000002.vmdk, and so on. These files are stored with the base vmdk by default. Any changes to the virtual machine are written to the delta file(s), preserving the base vmdk file. Think of a delta file as a change log, representing the difference between the current state and the state at the time the snapshot was taken. A .vmsd file is created to store the virtual machine snapshot information, defining the relationships between child disks. A .vmsn file and corresponding .vmem file are created if the active state of the virtual machine memory is included in the snapshot. These configuration files are all stored in the virtual machine directory.
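
As an illustration, after taking a single memory snapshot of a virtual machine named vmname, the VM directory might contain files along these lines (exact names vary):

vmname.vmdk              # base disk
vmname-000001.vmdk       # delta (child) disk created by the snapshot
vmname.vmsd              # snapshot metadata and child disk relationships
vmname-Snapshot1.vmsn    # snapshot state file
vmname-Snapshot1.vmem    # memory state, as memory was captured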

snap3

When should I use a snapshot?

Use a snapshot as a short term restore point when performing changes such as updating software versions, or for testing software or configuration with unknown effects. You can create multiple snapshots of a virtual machine; VMware recommend no more than 32 snapshots in a chain, however best practice for performance is to keep the number low, i.e. 2-3 snapshots.

Do not use a snapshot as a backup. Although it provides a restore point, a snapshot relies on the base disk(s); without these the snapshot files are worthless. If you need a restore point for more than a few days then consider other options such as a traditional backup, or cloning the virtual machine. According to vSphere best practices a single snapshot should not be used for more than 24-72 hours. There are a number of factors that determine how long a snapshot can be kept, such as the amount of changed data, and how the application will react to rolling back to a previous point in time. Some disk types and configurations are not supported by snapshots; you can see a full list of limitations here.

What are the risks of using a snapshot?

The more changes that are made within the virtual machine, the more data is written to the delta file. This means the delta file grows quickly and, in theory, can grow as large as the virtual disk itself if the guest operating system writes to every block of the virtual disk. This is why snapshots are strictly a short term solution. Ensure there is sufficient space in the datastore to accommodate snapshots; if the datastore fills up, any virtual machines residing in that datastore will be suspended.

How do I take a snapshot?

From the vSphere web client right click the virtual machine to snapshot, select Snapshots, and Take Snapshot. Note that vCenter Server is not a requirement, snapshots are also supported through the local ESXi host web UI.

snap1

Enter a name and description for the snapshot. The contents of the virtual machine’s memory are included in the snapshot by default, retaining the live state of the virtual machine. If you do not capture the memory state then the virtual machine files require quiescing; otherwise, should the virtual machine be reverted to a previous state, the disks are crash consistent. The exception to this is taking a snapshot of a powered off virtual machine, as it is not possible to capture the memory state or quiesce the file system.

snap2

To view active snapshots locate the virtual machine in the vSphere web client and select the Snapshot tab. Snapshots are listed in order with ‘you are here’ representing the current state, at the end of the snapshot chain.

snap4

It is possible to exclude disks by changing the disk mode to independent, covered here. However please use this option with care as it may have other implications. For example if your backup software uses snapshots as part of the backup process then setting independent disks may inadvertently exclude these disks from backups.

How do I revert back to a snapshot?

Select the snapshot you want to revert back to, and click the revert icon in the top left of the snapshot menu. The icon dialog reads ‘revert the VM to the state it was in when the snapshot was taken’.

snap4

Review the confirmation message. The virtual machine state and data will be reverted back to the point in time when the selected snapshot was taken. The current state of the virtual machine (changes made since the snapshot was taken) will be lost unless you have taken a further snapshot. Click Yes to continue.

snap5

If you have multiple snapshots you will see the ‘you are here’ marker move to the point in the chain you have reverted to. Snapshots taken after this point are still valid and can be reverted to if required. After you have reverted to a snapshot you are happy with, you need to save, or commit, the state of the virtual machine. More on this below.

snap6

How do I keep the state of the virtual machine?

When you keep the current state of the virtual machine the delta disks are merged with the base disks, committing the changes and the current state of the virtual machine. This is done by using the delete snapshot options in Snapshot Manager.

  • Delete All – deletes all snapshots from the virtual machine. This merges the delta disk(s) with the base disk(s) to save, or commit, the virtual machine data and configuration at the current point in time. If you have reverted to a snapshot you still need to delete all snapshots to start writing to the base disk again.
  • Delete – deletes individual snapshots from a chain; writing disk changes since the previous snapshot to the parent snapshot delta disk. If only a single snapshot exists then deleting this snapshot is the same as a Delete All for multiple snapshots; the VM state is committed and data is written to the base disk as normal.

Right click the virtual machine in the vSphere web client and select Snapshots, Manage Snapshots. From the All Actions menu select Delete Snapshot to delete the selected snapshot, or Delete All Snapshots. In this example we are deleting all snapshots, so click Yes to confirm.

snap7

All snapshots are now removed and the current state of the virtual machine is committed to the base disk. Any changes made from here on in are written to the base disk as normal, unless another snapshot is taken.

snap8

What is snapshot consolidation?

Snapshot consolidation is useful if a Delete or Delete All operation fails; for example if a large number of snapshots exist on a virtual machine with high I/O, or if a third party tool such as backup software utilising snapshots is unable to delete redundant delta disks. Using the consolidate option removes any redundant delta disks to improve virtual machine performance and save storage space. This is done by combining the delta disks with the base disk(s) without violating a data dependency; the active state of the virtual machine does not change.

To determine if a virtual machine requires consolidation, browse to the vCenter Server, cluster, or host level in the vSphere web client and click the VMs tab. Right click anywhere in the column headers and select Show/Hide Columns. Tick Needs Consolidation and click OK.

snap9

If a virtual machine requires consolidation, right click it and select Snapshots, Consolidate. There is also a default alarm, for virtual machine consolidation needed, defined at the vCenter level.

snap10

From vSphere 6 onwards the snapshot consolidation process was improved. You can read more about the specifics, and testing, in this blog post by Luca Dell’Oca.

The snapshot functions described in this post can also be managed using PowerCLI; this blog post by Anne Jan Elsinga covers the commands you’ll need.