By leveraging shared storage resources such as SAN, NAS, and DAS, vCenter Server can provide high availability and resource management across ESXi hosts in a High Availability (HA) enabled cluster. If the hosts only have access to local disks, vSAN can be used to pool those disks into a shared datastore. When you enable HA, one ESXi host in the cluster is elected as the master host. The master is responsible for monitoring the other hosts, using both network and datastore heartbeats to distinguish between a host failure and network isolation. In the event of a host failure, HA automatically restarts the affected virtual machines on an operational host within the cluster. Once HA is enabled, all virtual machines within the cluster are automatically protected. Although HA is quick and simple to enable, there is a huge amount of documentation on how it works should you wish to learn more; see the vSphere 6.5 Availability Documentation Centre or the vSphere HA Deep Dive by Duncan Epping. DRS (Distributed Resource Scheduler) balances resource usage across the ESXi hosts in a cluster by live migrating virtual machines away from hosts that are running low on physical compute. DRS depends on vMotion, which allows administrators to live migrate a virtual machine from one host to another without downtime or service disruption. Storage vMotion is the same concept, but migrates the back end storage files from one datastore or device to another.
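The master's failure-versus-isolation decision described above can be sketched as a simple rule. This is a conceptual model only, not the FDM agent's actual implementation; the function name and boolean inputs are hypothetical.

```python
def classify_host_state(network_heartbeat: bool, datastore_heartbeat: bool) -> str:
    """Simplified model of how an HA master classifies an unresponsive host.

    A host that stops sending network heartbeats but still updates its
    heartbeat datastore is isolated or partitioned, not dead; only when
    both heartbeat channels go silent does the master declare a host
    failure and restart the protected VMs on surviving hosts.
    """
    if network_heartbeat:
        return "healthy"
    if datastore_heartbeat:
        return "isolated"   # host alive but unreachable over the network
    return "failed"         # restart protected VMs elsewhere in the cluster

print(classify_host_state(False, True))   # isolated
print(classify_host_state(False, False))  # failed
```

This is why the bullet below recommends at least two heartbeat datastores: without a working datastore heartbeat, an isolated host is indistinguishable from a failed one.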
- The vCenter Server, configuration files, and VMs within the cluster should all reside on shared storage.
- ESXi hosts require access to the same shared datastores, and the same VM networks, with redundancy.
- Provide at least 2 datastores so that HA datastore heartbeating also has redundancy.
- Configuration maximums for a vSphere 6.x cluster are 64 hosts, and 8000 VMs.
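The requirements above lend themselves to a quick pre-flight check. The sketch below is illustrative, using the vSphere 6.x maximums quoted in the list (64 hosts, 8,000 VMs per cluster) and the two-datastore heartbeat recommendation; the function and constant names are made up, not a VMware API.

```python
# Cluster limits taken from the requirements list above (assumed accurate
# for vSphere 6.x; check the Configuration Maximums for your exact version).
MAX_HOSTS, MAX_VMS, MIN_HEARTBEAT_DATASTORES = 64, 8000, 2

def validate_cluster_design(hosts: int, vms: int, shared_datastores: int) -> list[str]:
    """Return a list of design problems; an empty list means the design fits."""
    issues = []
    if hosts > MAX_HOSTS:
        issues.append(f"{hosts} hosts exceeds the {MAX_HOSTS}-host cluster maximum")
    if vms > MAX_VMS:
        issues.append(f"{vms} VMs exceeds the {MAX_VMS}-VM cluster maximum")
    if shared_datastores < MIN_HEARTBEAT_DATASTORES:
        issues.append("provide at least 2 shared datastores for HA heartbeat redundancy")
    return issues

print(validate_cluster_design(4, 120, 2))  # []
```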
Connect to the vSphere web client and log in using an account with administrative privileges. Click Hosts and Clusters, then right click the vCenter Server and select New Datacentre. Datacentre objects are intended to represent datacentres or sites and provide logical separation in the inventory. Right click the new datacentre and select New Cluster.
Give the cluster an appropriate name and leave DRS, vSphere HA, and EVC disabled; we will enable these features shortly. EVC stands for Enhanced vMotion Compatibility and allows vMotion across different CPU generations; you can read more about EVC in this kb.
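The idea behind EVC can be sketched in a few lines: the cluster's baseline is the oldest CPU generation present, and newer hosts mask their extra features down to it so VMs can vMotion anywhere. The generation list below is an illustrative subset, not the full EVC mode table.

```python
# Illustrative, partial list of Intel generations in release order; a real
# EVC mode table covers more modes and AMD families as well.
INTEL_GENERATIONS = ["Merom", "Penryn", "Nehalem", "Westmere",
                     "Sandy Bridge", "Ivy Bridge", "Haswell", "Broadwell"]

def evc_baseline(host_generations: list[str]) -> str:
    """Return the oldest generation in the cluster; this becomes the EVC
    baseline that every host's CPU features are masked down to."""
    return min(host_generations, key=INTEL_GENERATIONS.index)

print(evc_baseline(["Haswell", "Sandy Bridge", "Broadwell"]))  # Sandy Bridge
```

This also illustrates the trade-off: adding one older host lowers the feature set exposed to every VM in the cluster.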
Now that we have a cluster we can begin to add the ESXi hosts. Right click the cluster and select Add Host, enter the FQDN or IP address of the host to add and click Next.
Enter the credentials of a local user with administrative access on the ESXi host, normally the root account.
Review the host summary and click Next. Add a new license key or assign an existing license, then click Next. If you do not assign a license the ESXi host will use an evaluation license for up to 60 days.
Select the option for lockdown mode (disabled by default) and click Next.
Decide what to do with any existing virtual machines residing on the ESXi host; if there are no virtual machines provisioned, accept the default option and click Next.
Review the final screen and click Finish. The host will now be added to the inventory and the cluster; repeat this process for each host. You will see the job progress in the task pane.
Before your hosts can provide high availability they need to have access to the same storage resources. Browse to Hosts and Clusters in the vSphere web client, select the host, and click the Manage tab.
First, click Storage Adapters. Here you will find a list of physical storage adapters, such as Fibre Channel HBAs, local storage controllers, etc.
Next click Storage Devices. These are the storage devices, such as local disks and SAN LUNs, that the ESXi host can see; datastores are created on top of these devices, using either local disks or shared storage such as a SAN or NAS array.
It is important to remember that HA and DRS can only work when multiple hosts can see the datastore where the virtual machine files are stored; this means mapping the LUN to multiple targets, or creating a vSAN datastore from local disks. If you intend to use vSAN then the datastore is configured for you as part of the vSAN setup; finish configuring your cluster using this post and then move on to the VMware vSAN 6.5 Install Guide.
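The shared-visibility requirement amounts to a set intersection: a VM is only HA/DRS-protected if its files live on a datastore every host can see. A minimal sketch, with made-up host and datastore names:

```python
def common_datastores(host_datastores: dict[str, set[str]]) -> set[str]:
    """Intersect each host's visible datastores; HA- and DRS-protected VMs
    must be stored on a datastore in the resulting set."""
    views = iter(host_datastores.values())
    common = set(next(views))
    for datastores in views:
        common &= datastores
    return common

# Hypothetical inventory: each host sees its own local datastore plus
# some shared SAN LUNs. Only LUNs mapped to every host qualify.
hosts = {
    "esxi01": {"local-ds1", "san-lun01", "san-lun02"},
    "esxi02": {"local-ds2", "san-lun01", "san-lun02"},
    "esxi03": {"local-ds3", "san-lun01"},
}
print(common_datastores(hosts))  # {'san-lun01'}
```

A VM stored on `san-lun02` above could never be restarted on `esxi03`, which is exactly the situation mapping the LUN to all targets avoids.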
If you have mapped a LUN to your ESXi hosts then you need to create a datastore on it before your virtual machines can use the storage. To create a datastore, click Home in the navigator pane and then click Storage. Ensure the datacentre is selected and click Add a Datastore, then follow the wizard, making sure VMFS (Virtual Machine File System) is selected and choosing the LUN or volume you want to use. Once complete click Finish; the datastore can now be used to provision virtual machines.
Finally we need to configure vMotion and VM networks. In vSphere there are two types of virtual switch: standard and distributed. When using a standard switch the management plane is local to the host, and networks are therefore configured individually on each host. This is fine for small environments; for larger environments we can use a distributed switch, which moves the management plane to vCenter Server and allows us to manage the networks of multiple hosts from a single interface. The configuration below uses standard switches. To explore distributed switches see the vSphere Distributed Switches post.
Go back to the Home page and select Hosts and Clusters, select the host to edit and click Manage, Networking. The Physical Adapters tab lists the physical NICs the ESXi host can see.
VMkernel adapters handle infrastructure related traffic such as management, vMotion, vSAN, Fault Tolerance, iSCSI and NFS.
Virtual switches are used to serve virtual machine traffic.
To create a new virtual switch or VMkernel port, click the Add Networking icon and select the type of connection. In this example we will create a VMkernel Network Adapter for vMotion. You may also need a switch for virtual machine traffic, in which case you would create another new connection and select Virtual Machine Port Group.
Since we are working on a new ESXi host we will add a new standard switch. You would use an existing switch if you wanted to add a new VLAN to an existing network of trunked VLANs in a large environment.
Configure an IP address for the new VMkernel port and select the relevant services, then click Finish.
The environment you are configuring dictates how you set up the networking of the host. For example, in a home lab you could edit the existing management VMkernel adapter to enable the vMotion service, so that management and vMotion share the same physical interface and IP address. In a production environment you would split these out, and you may also want to assign more than one physical adapter per virtual switch to build in redundancy. Virtual machine network traffic should be configured on a separate virtual switch, which could contain multiple VLANs and, again, multiple NICs connected to multiple physical switches. For more information on ESXi networking see the vSphere Networking Guide.
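The lab-versus-production layouts described above can be modelled as simple data, with a check for the single-uplink weakness the production design avoids. All switch, service, and vmnic names here are hypothetical examples, and the classes are an illustration rather than the vSphere API.

```python
from dataclasses import dataclass, field

@dataclass
class StandardSwitch:
    name: str
    uplinks: list          # physical adapters (vmnics) attached to the switch
    services: list = field(default_factory=list)     # VMkernel services
    port_groups: list = field(default_factory=list)  # VM port groups / VLANs

def redundancy_warnings(switches: list) -> list:
    """Flag any switch whose traffic depends on a single physical NIC."""
    return [f"{sw.name} has a single uplink ({sw.uplinks[0]})"
            for sw in switches if len(sw.uplinks) < 2]

# Home lab: management and vMotion share one NIC on one switch.
lab = [StandardSwitch("vSwitch0", ["vmnic0"], services=["Management", "vMotion"])]

# Production: services split out, two uplinks per switch, separate VM switch.
prod = [
    StandardSwitch("vSwitch0", ["vmnic0", "vmnic1"], services=["Management"]),
    StandardSwitch("vSwitch1", ["vmnic2", "vmnic3"], services=["vMotion"]),
    StandardSwitch("vSwitch2", ["vmnic4", "vmnic5"], port_groups=["VLAN10", "VLAN20"]),
]
print(redundancy_warnings(lab))   # ['vSwitch0 has a single uplink (vmnic0)']
print(redundancy_warnings(prod))  # []
```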
Now that we have a cluster with hosts, storage, networking, and vMotion available we can enable HA, DRS, and EVC. Click the cluster in the vSphere web client and select the Manage tab. Under the Services heading first select vSphere DRS and click Edit. Tick Turn ON vSphere DRS and expand the DRS Automation options to see the descriptions of each level. Select the desired level of automation and click vSphere HA.
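To make the automation levels concrete, here is a toy model of the kind of recommendation DRS produces: if CPU load across hosts is imbalanced beyond a threshold, suggest vMotioning the busiest host's largest VM to the least-loaded host. This is purely illustrative logic, far simpler than the real DRS algorithm; whether the recommendation is applied automatically or merely displayed is what the automation level controls.

```python
def drs_recommendation(host_loads: dict, threshold: int = 20):
    """host_loads maps host -> {vm_name: cpu_usage_as_percent_of_host}.

    Returns (vm, source_host, destination_host), or None when the cluster
    is already balanced within the threshold.
    """
    totals = {host: sum(vms.values()) for host, vms in host_loads.items()}
    busiest = max(totals, key=totals.get)
    idlest = min(totals, key=totals.get)
    if totals[busiest] - totals[idlest] <= threshold:
        return None  # balanced; no migration recommended
    vm = max(host_loads[busiest], key=host_loads[busiest].get)
    return (vm, busiest, idlest)

# Hypothetical cluster: esxi01 is heavily loaded, esxi02 is nearly idle.
loads = {"esxi01": {"web01": 40, "db01": 45}, "esxi02": {"app01": 15}}
print(drs_recommendation(loads))  # ('db01', 'esxi01', 'esxi02')
```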
Tick the box to Turn on vSphere HA and click Ok. This post details VM Component Protection to protect against loss of paths to a storage device.
In the Configuration section of the Manage tab select VMware EVC and click Edit. Enable EVC for your processor family and click Ok. Note the compatibility section, which validates whether you have selected a compatible configuration.
The cluster is now configured and we can begin provisioning virtual machines that are automatically protected by HA and DRS. To learn how to deploy virtual machines and templates see the VM Templates and Customisation guide. To learn how to convert a physical server to a virtual machine see the Physical to Virtual Machine Conversion Guide.
For further assistance provisioning virtual machines and navigating the vSphere web client see the following VMware documentation: vSphere Virtual Machine Administration Guide and Administration with the vSphere Client.