How to Install vSphere 7.0 – vSAN 7.0

Introduction

This second post in a new lab series provides a walkthrough for installing vSAN 7. At the time of writing the latest version is vSAN 7.0 Update 1. To read about what’s new, see vSphere 7 and vSAN 7 Headline New Features.

VMware vSAN is a software-defined storage solution baked directly into the vSphere hypervisor. vSAN aggregates local or directly-attached devices across the hosts in a vSphere cluster to provide a single shared storage pool. Functionality is abstracted from the underlying hardware and managed at a software level, within vCenter, to provide granular policy-based availability and controls. Non-disruptive scale-out can be achieved by adding more ESXi hosts, either to the same cluster or a new cluster, and scale-up by adding more disks to the existing hardware. Multiple vSAN clusters can be created and managed within a single vCenter Server. Since vSAN is built directly into ESXi, activating the functionality simply requires planning and enabling the configuration, along with the appropriate VMware vSAN licenses.

In this example vSAN will be configured in a lab environment using a 2-host cluster (Intel NUC Bean Canyon) running vSphere 7.0 U1C, with a third node acting as the vSAN witness. As of vSAN 7.0 U1 a single witness appliance can support up to 64 2-node clusters. If you’re looking for more information on running a vSphere lab on the Intel NUC range check out the VMware Homelab section of virten.net, which has some great guides and resources.

vSAN 7.0 Install Guide

vSAN can be configured in an all-flash or hybrid setup. In a hybrid setup, flash is used for the cache with spinning disks providing the capacity tier. Although all local capacity devices are pooled together and shared across hosts in the cluster, an optimal vSAN configuration will contain hosts with the same or similar physical storage configurations, balancing storage devices consistently across the cluster. That said, hosts without any contributing storage can also join the cluster and run virtual machines. In this type of setup, planning the deployment to cover fault tolerance and protection against the loss of specific contributing nodes is of particular importance.

All hosts contributing storage devices to the cluster must include at least one flash device for local cache, alongside at least one capacity device. For hybrid configurations, the flash device must be a minimum of 10% of the anticipated consumed storage of the capacity tier, and this should account for future growth to prevent reduced performance over time as the consumed storage grows. The cache for each host in any setup does not count towards the overall size of the shared datastore. Cache and capacity devices in a host form one or more disk groups, outlined in the high level image below. For more information on capacity and sizing considerations when designing a vSAN deployment, review the VMware vSAN Design Guide and the Designing and Sizing a vSAN Cluster documentation.

VMware vSAN high level overview from the vSAN 7.0 Planning and Deployment documentation
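As a rough illustration of the cache sizing rule above, the snippet below works through the 10% calculation in PowerShell. The capacity figure is an assumption for the example, not a value from this lab.

```powershell
# Illustrative only: estimate the minimum cache tier per host for a hybrid vSAN design.
# Guidance above: flash cache should be at least 10% of the *anticipated consumed* capacity.
$anticipatedConsumedGB = 4000     # assumed consumed capacity handled by this host's disk groups
$cacheMinimumGB = [math]::Ceiling($anticipatedConsumedGB * 0.10)

# Allow headroom for growth so performance does not degrade as consumed storage increases.
"Minimum cache tier for this host: {0} GB (plan additional headroom for growth)" -f $cacheMinimumGB
```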

VMware vSAN is an enterprise solution and supports all VMware features that rely on shared storage, like High Availability, Distributed Resource Scheduler, and Storage vMotion. vSAN also includes features like stretched clustering and fault domain implementations. Hosts in a vSAN cluster can also mount other VMFS and NFS datastores, although vSAN itself does not require or rely on any kind of external storage or Storage Area Network (SAN). You can find more information in the vSAN Planning and Deployment – VMware vSphere 7.0 documentation, which should be studied before configuring vSAN, along with the relevant release notes – in this example I am using vSAN 7.0 Update 1.

System Requirements

  • VMware vSAN can be built on the following hardware:
    • vSAN ReadyNode – preconfigured solutions using hardware tested and certified for vSAN by the server OEM and VMware
    • Turnkey deployments – fully packaged Hyper-Converged Infrastructure (HCI) solutions like Dell EMC VxRail
    • Custom solution – hardware components compiled by the user; all hardware used with vSphere 7 and vSAN 7 must be listed in the VMware Compatibility Guide
  • To check version compatibility with other VMware products, see also the VMware Product Interoperability Matrices.
  • A standard vSAN cluster needs at least 3 hosts, with a maximum of 64. At least 4 hosts are recommended for maximum availability, due to limitations around maintenance and protection after a failure with 3-host clusters. The 2-host vSAN cluster with a witness is a separate configuration and an exception to this minimum.
  • Each physical host contributing capacity to the vSAN cluster requires:
    • 1 x SAS or SATA HBA, or RAID controller in passthrough mode
    • 1 x SAS or SATA SSD, or PCIe flash device, for the cache
    • At least 1 x further SAS or SATA SSD, or PCIe flash device, for capacity in an all-flash disk group, or at least 1 x SAS or NL-SAS magnetic disk for capacity in a hybrid disk group – in both cases with no existing partition configuration
    • A minimum of 8 GB RAM, but in most cases it is preferable to have at least 32 GB RAM
    • Dedicated 1 Gbps bandwidth for hybrid configurations (10 Gbps recommended), or dedicated or shared 10 Gbps for all-flash configurations (25 Gbps, 40 Gbps, and 100 Gbps are also supported) – for best results new environments should consider 25 Gbps connectivity using vSphere Distributed Switches with Network I/O Control (vSphere Standard Switches are also supported but do not offer QoS)
    • A configured VMkernel network adapter for vSAN traffic
    • A maximum network latency of 1 ms RTT for standard vSAN clusters (200 ms to a witness node, 5 ms for stretched clusters)
    • Layer 2 or Layer 3 network connectivity between hosts in the cluster (jumbo frames are supported but not required, if jumbo frames are already in use then the setting should be configured end-to-end across the environment)
    • A valid vSAN license, normally managed per CPU although per OSI licensing is available for branch office configurations
  • When sizing a vSAN cluster keep in mind that the total capacity of all disks pooled together is only the raw capacity. Usable capacity can be estimated using the primary level of failures to tolerate, in conjunction with the failure tolerance method (RAID-1 mirroring or RAID-5/6 erasure coding) – see the sketch after this list. For more information review the Designing and Sizing a vSAN Cluster documentation.
  • Prior to vSAN 7.0 U1, the general recommendation was to keep the vSAN datastore below 70% usage. The latest release makes substantial improvements to how free capacity is used, and the recommended threshold can now be calculated per cluster based on variables outlined in the Designing for Capacity section of the VMware vSAN Design Guide.
  • It is good practice to synchronise ESXi and vCenter versions, and run the latest release. Hosts should also be in the same L2 subnet for best networking performance.
  • If your environment has firewalls review the list of Required ports for vSAN.
  • For larger enterprise environments see also the vSAN Configuration Limits.
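The sketch below, referenced in the sizing bullet above, shows how raw capacity translates into an estimated usable figure for two common policies. The 8 TB raw figure and the overhead factors (2x for RAID-1 FTT=1, roughly 1.33x for RAID-5 FTT=1) are illustrative assumptions; real designs should use the sizing documentation linked above.

```powershell
# Illustrative only: estimate usable vSAN capacity from raw pooled capacity for common policies.
$rawTB = 8                                  # assumed total raw capacity of all claimed capacity devices

$raid1Ftt1UsableTB = $rawTB / 2             # RAID-1 mirroring with FTT=1 writes two full copies
$raid5Ftt1UsableTB = [math]::Round($rawTB / 1.33, 2)   # RAID-5 erasure coding (FTT=1), ~1.33x overhead
                                                       # (RAID-5 needs an all-flash cluster with at least 4 hosts)

"RAID-1 FTT=1 usable: ~{0} TB" -f $raid1Ftt1UsableTB
"RAID-5 FTT=1 usable: ~{0} TB" -f $raid5Ftt1UsableTB
```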

vSAN Activation

In this example we’ll use the vSphere Cluster Quickstart page to configure vSAN. Quickstart consolidates the storage and networking workflows required to activate vSAN. A new cluster has been created containing 2 ESXi hosts running 7.0 U1C. The hosts are in maintenance mode and have no existing datastores or partition information beyond the standard boot disk. Both hosts are using PCIe flash devices in passthrough mode.

A third host will act as the witness node. The witness for a 2-host vSAN cluster needs available disks for writing metadata: at least 10 GB for cache and 15 GB for capacity. All 3 hosts need a VMkernel port configured for vSAN traffic. Since this is a lab environment, with limited physical connections and bandwidth, I have configured the management vmk port to also be used for vSAN traffic. A VMkernel (vmk) port is a virtual adapter used to carry ESXi service traffic such as management, vMotion, and vSAN. If you need guidance on setting up the VMkernel adapter for vSAN, see the How to Configure vSAN VMkernel Networking Knowledge Base page; a scripted alternative is sketched below the screenshot.

Shared vmk0 for management and vSAN traffic (lab only)
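For reference, tagging an existing VMkernel adapter for vSAN traffic can also be scripted with PowerCLI. This is a lab-only sketch: the vCenter address, host names, and the vmk0 device are placeholders rather than values from this environment.

```powershell
# Lab-only sketch: enable vSAN traffic on an existing VMkernel adapter (vmk0) for each host.
Connect-VIServer -Server vcenter.lab.local

$esxiHosts = "esxi01.lab.local", "esxi02.lab.local", "witness.lab.local"   # placeholder names
foreach ($esx in Get-VMHost -Name $esxiHosts) {
    Get-VMHostNetworkAdapter -VMHost $esx -Name vmk0 |
        Set-VMHostNetworkAdapter -VsanTrafficEnabled:$true -Confirm:$false
}
```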

Now that the VMkernel ports are set up for vSAN traffic, and there is IP reachability between the vSAN cluster hosts and the witness node, we can start the vSAN configuration. Select the cluster in the vSphere Client and click Configure > Quickstart. For stage 1 click Edit and select the vSAN service. After a couple of seconds the pre-requisite health checks in stage 2 are complete. Provided no issues arise, move on to stage 3 and click Configure.

vSAN Configuration Quickstart

Configure the network settings for the vSAN cluster. The Quickstart setup uses vSphere Distributed Switches, which are recommended, although vSphere Standard Switches are also supported. In my lab, since I already enabled vSAN traffic on the management port, I can skip the Distributed Switch setup, and click Next.

vSAN cluster deployment network configuration

Configure the vSAN cluster settings, like encryption, compression, and deduplication, as required. In this example I am using the Two node vSAN cluster deployment type. Click Next.

vSAN cluster deployment type

Select the disks to claim for each tier of the vSAN cluster. Remember that vSAN can only use local or direct-attached storage, not remote storage. In this example 2 x 500 GB flash devices have been allocated to the capacity tier, and 2 x 50 GB flash devices have been allocated to the cache tier. The total of the claimed disks is 1.07 TB. This does not provide any component failure protection and is only for lab purposes. I accept the recommended configuration and click Next.

vSAN cluster deployment storage types
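Outside of Quickstart, disk groups can also be created manually per host with PowerCLI. The sketch below is hedged: the canonical device names are placeholders, and in practice you would identify them first with Get-VMHostDisk or in the vSphere Client.

```powershell
# Hedged sketch: create a vSAN disk group manually, claiming one cache SSD and one capacity device.
# Device canonical names (naa./t10. identifiers) are placeholders for illustration.
$esx = Get-VMHost -Name "esxi01.lab.local"

New-VsanDiskGroup -VMHost $esx `
    -SsdCanonicalName "t10.NVMe____CACHE_DEVICE_0001" `
    -DataDiskCanonicalName "t10.NVMe____CAPACITY_DEVICE_0001"
```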

Since my vSAN cluster only has 2 nodes, I need to add a witness host. The witness host, with available disks for metadata and a vSAN-enabled VMkernel adapter for communication, is selected and passes the compatibility checks. Click Next to continue.

vSAN cluster deployment with witness host

Claim the disks for the witness host to use; in this case I have allocated a 10 GB disk for the cache tier metadata, and a 15 GB disk for the capacity tier metadata. Click Next to continue.

vSAN cluster deployments with witness host disks

Review the settings configured and click Finish to deploy the vSAN configuration. Although the Quickstart interface returns a message pretty quickly saying the cluster is configured, keep an eye on activity in the Recent Tasks pane as there is likely still configuration taking place.

vSAN cluster deployment review and finalise

The easiest way to check the vSAN status is to select the cluster, click Monitor, and scroll down to vSAN. Skyline Health shows the vSAN health checks associated with the cluster; you can also see physical and virtual object states, capacity, and performance.

vSAN capacity monitoring
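The same health and capacity information can be pulled with PowerCLI if you prefer the command line; a short hedged example is below, with the cluster name as a placeholder.

```powershell
# Hedged sketch: check vSAN health and space usage for a cluster from PowerCLI.
$cluster = Get-Cluster -Name "vSAN-Cluster"

# Run the cluster health checks (similar information to Skyline Health in the UI).
Test-VsanClusterHealth -Cluster $cluster

# Report free and used capacity on the vSAN datastore.
Get-VsanSpaceUsage -Cluster $cluster
```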

To view or manually edit the cluster settings select the cluster, click Configure, and scroll down to vSAN. Services shows the available vSAN services and their configuration; in my lab environment most of these are disabled. Disk Management shows the configured disk groups and their health state. In this lab scenario I only have 2 fault domains configured.

vSAN disk group configurations

Fault domains allow physical hosts to be grouped together to protect against common failures, such as a shared chassis or rack. It is best practice to configure consistent fault domains with the same number of hosts across the environment. Consider the impact on data placement and the overall number of host failures to tolerate when configuring fault domains. Clearly, for a lab environment or a 2-node cluster in a small branch office, fault domains and data availability cannot be applied in the same way as in larger deployments; the vSAN Design Guide and sizing documentation referenced above will help with designing such environments.

Finally, if you want to create a new storage policy to apply to the vSAN datastore, or create multiple granular policies that can be applied at VM or VMDK level, this can be done from the Menu dropdown, Policies and Profiles, VM Storage Policies. If you need more information on the policy options available review the VM Storage Policy Design Considerations documentation.
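As a hedged example of a granular policy, the PowerCLI snippet below creates a simple rule set tolerating one host failure with RAID-1 mirroring. The policy name is a placeholder, and additional vSAN capabilities (stripe width, space reservation, and so on) can be added in the same way.

```powershell
# Hedged sketch: create a vSAN storage policy with Failures To Tolerate = 1 (RAID-1 mirroring),
# which can then be applied at VM or VMDK level as described above. Policy name is a placeholder.
$fttRule = New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.hostFailuresToTolerate") -Value 1
$ruleSet = New-SpbmRuleSet -AllOfRules $fttRule
New-SpbmStoragePolicy -Name "vSAN-FTT1-Mirror" -Description "Lab policy: tolerate one host failure" -AnyOfRuleSets $ruleSet
```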

Featured image by Jonas Svidras on Unsplash

How to Install vSphere 7.0 – vCenter Server Appliance

Introduction

This opening post in a new lab series provides a walkthrough for installing the latest iteration of vSphere 7.0; bringing cloud-native workloads to the data centre with embedded Kubernetes and Tanzu. vSphere 7.0 was initially released in April 2020, and followed up with vSphere 7.0 Update 1 in October 2020. The current version at the time of writing is vSphere 7.0 Update 1c. You can track the latest releases and build numbers in this KB article.

ESXi is the market-leading hypervisor, able to abstract and pool compute resources across commodity hardware and implement granular controls and automation. ESXi needs to be installed first on a physical machine to provide at least one host for the vCenter virtual appliance to be deployed to. vCenter Server then provides the single management pane for physical hosts and virtual machines, along with enterprise functionality like vMotion for live workload portability, High Availability for workload failover, and Distributed Resource Scheduler for automatically balancing resources. To read about what’s new in vSphere 7 see vSphere 7 and vSAN 7 Headline New Features.

In this example vCenter Server will be deployed in a lab environment to an Intel NUC Bean Canyon running ESXi 7.0 U1C. If you’re looking for more information on running a vSphere lab on the Intel NUC range check out the VMware Homelab section of virten.net, which has some great guides and resources.

vCenter 7.0 Install Guide

Several design decisions have been eliminated in vSphere 7, as component topology and lifecycle management have been drastically simplified. The external Platform Services Controller (PSC) deployment model available in versions 6.0 to 6.7 has been removed; only the embedded option is offered in vSphere 7.

Furthermore, running vCenter Server on Windows is no longer supported, and all deployments must now use the vCenter Server Appliance (VCSA). A migration path from Windows vCenter Server 6.5 and 6.7 to VCSA 7.0 is available. The VCSA is an optimised virtual appliance running Photon OS 3.0, and contains all the required vCenter services, such as SSO, Certificate Authority, PostgreSQL, Lifecycle Manager, etc. You can find the full list of services in detail in the vCenter Server Installation and Setup documentation.

The installer file for vSphere can be downloaded here; a 60 day evaluation period is automatically applied. The vCenter Server installation bundle comes as an ISO file mountable on a Windows, Linux, or Mac device. The installer must be run from a machine with network connectivity to the ESXi host or vCenter Server where the new appliance will be deployed. The target host or vCenter must be running vSphere version 6.5 or later. For multiple or repeated installations in large environments the vCenter Server Appliance and configuration can also be deployed silently using the CLI installer and a JSON template – see the sketch below the download screenshot. Make sure you review the release notes with your download before starting; in this example I am using vCenter Server 7.0 Update 1c.

The vSphere download packages available from my.vmware.com
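For the unattended CLI option mentioned above, the ISO ships sample JSON templates under vcsa-cli-installer\templates; the sketch below shows a typical invocation from a Windows jump host. The drive letter, JSON path, and template name are assumptions for illustration, and the flag names should be confirmed against vcsa-deploy --help for your build.

```powershell
# Hedged sketch: silent VCSA deployment using the CLI installer from the mounted ISO (drive E: assumed).
# The JSON template would first be copied from vcsa-cli-installer\templates\install and edited.
Set-Location E:\vcsa-cli-installer\win32

.\vcsa-deploy.exe install `
    --accept-eula `
    --acknowledge-ceip `
    --no-ssl-certificate-verification `
    C:\deploy\embedded_vCSA_on_ESXi.json
```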

System Requirements

  • Before beginning the installation, Fully Qualified Domain Name (FQDN) resolution should be in place for the vCenter Server hostname, with forward (A) and reverse (PTR) DNS records added, and replicated if applicable – a quick check is sketched after this list.
  • vCenter Server 7.0 can only be deployed to, and manage, ESXi hosts v6.5 or later. There is no direct upgrade path for hosts running ESXi v5.5 or 6.0 to v7.0.
  • If you are deploying to an ESXi host, the host must not be in maintenance mode or lockdown mode. The ESXi host and all vSphere components should be configured to use Network Time Protocol (NTP); the installation can fail, or the vpxd service may not start, if the clocks are not synchronised.
  • Check the compatibility of any third party products and plugins that might be used for backups, anti-virus, monitoring, etc. as these may need upgrading for vSphere 7.0 compatibility.
  • To check version compatibility with other VMware products, see the VMware Product Interoperability Matrices.
  • In addition to software, you should also check the hardware in use is compatible with vSphere 7 using the VMware Compatibility Guide. VMware supports enterprise hardware, and therefore the Intel NUC devices are not listed. This isn’t an issue in a lab environment, but unlisted hardware should not be used in production.
  • The vCenter Server Appliance requires the following compute specifications, this includes vSphere Lifecycle Manager running as a service on the appliance:
    • Tiny (up to 10 hosts, 100 VMs) – 2 CPUs, 12 GB RAM
    • Small (up to 100 hosts, 1000 VMs) – 4 CPUs, 19 GB RAM
    • Medium (up to 400 hosts, 4000 VMs) – 8 CPUs, 28 GB RAM
    • Large (up to 1000 hosts, 10,000 VMs) – 16 CPUs, 37 GB RAM
    • X-Large (up to 2000 hosts, 35,000 VMs) – 24 CPUs, 56 GB RAM
  • Storage resources for the vCenter Server Appliance also vary based on the database requirements above:
    • Tiny – Default: 415 GB, Large: 1490 GB, X-Large: 3245 GB
    • Small – Default: 480 GB, Large: 1535 GB, X-Large: 3295 GB
    • Medium – Default: 700 GB, Large: 1700 GB, X-Large: 3460 GB
    • Large – Default: 1065 GB, Large: 1765 GB, X-Large: 3525 GB
    • X-Large – Default: 1805 GB, Large: 1905 GB, X-Large: 3665 GB
  • If your environment has firewalls review the list of Required ports for vCenter Server.
  • For large and enterprise environments review the vSphere 7.0 Configuration Limits.
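As referenced in the DNS requirement above, a quick pre-flight check from a Windows workstation might look like the following; the FQDN and IP address are placeholders for your planned vCenter Server.

```powershell
# Hedged sketch: confirm forward (A) and reverse (PTR) resolution for the planned VCSA before installing.
Resolve-DnsName -Name vcenter.lab.local -Type A     # should return the planned IP address
Resolve-DnsName -Name 192.168.1.50 -Type PTR        # should return the planned FQDN
```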

Installation Stage 1

The vCenter Server 7 installation is practically identical to that of its predecessors, versions 6.5 and 6.7. Download and mount the ISO on your computer, then browse to the corresponding directory for your operating system and open the installer file. In my case this is \vcsa-ui-installer\mac\installer.app. As we are installing a new instance, click Install.

vCenter 7.0 Installer: all options are consolidated into a single ISO

The installation is split into 2 stages; we begin by deploying the appliance in OVF format to an ESXi or vCenter target. The second stage configures the appliance. Note that the external PSC deployment option is no longer available. Click Next.

Deploy vCenter Server: introduction page

Accept the license agreement and click Next.

Deploy vCenter Server: End User License Agreement (EULA)

Enter the FQDN or IP address of the VCSA deployment target; this can be a vCenter Server or ESXi host that meets the system requirements outlined above. Enter the credentials of an administrative or root user and click Next; the installer will validate access.

When prompted with an untrusted SSL certificate warning, confirm the SHA1 thumbprint displayed is that of the target ESXi host or vCenter Server, and click Yes to accept. Also note that if you are connecting to an ESXi host you will only see networks on the local host’s standard switch when it comes to configuring network settings in an upcoming step. If you require a network on an existing vSphere Distributed Switch (VDS) then you will need to connect to the VDS source vCenter as your deployment target. Alternatively, you can make this change post-deployment.

Deploy vCenter Server: deployment target settings

Enter the VM name for the VCSA; the appliance name must not be more than 80 characters in length and cannot contain the percent (%), forward slash (/), or backslash (\) characters. Set the root password, which needs to be at least 8 characters, with a number, uppercase and lowercase letters, and a special character. Click Next to continue.

Deploy vCenter Server: VCSA credentials

Select the deployment size in line with the number of hosts and virtual machines that will be managed, click Next.

Deploy vCenter Server: VCSA compute and storage assignment

Select the datastore where the appliance will be deployed, choose thin provisioning if required, and click Next again.

Deploy vCenter Server: datastore and disk mode configuration

Enter the network settings to be applied to the appliance, including IPv4, DNS, and network adapter settings, then click Next.

Deploy vCenter Server: VCSA network settings

On the summary page, click Finish. The appliance will now be deployed.

Deploy vCenter Server: stage 1 installation

Installation Stage 2

Once complete, the VCSA is deployed but the services aren’t running; click Continue to move on to stage 2. If at this point you find that the DNS entry was added without leaving sufficient time for the client you’re working from to update, you can still initiate the setup from https://vCenter-FQDN-or-IP:5480 once the vCenter Server hostname is resolving correctly.

Deploy vCenter Server: stage 1 complete

Click Next to begin the VCSA setup.

Configure vCenter Server: stage 2

Configure the Network Time Protocol (NTP) servers to enable time synchronisation, and choose the Secure Shell (SSH) state for the appliance; this can be changed later. Then click Next.

Configure vCenter Server: NTP and SSH settings

Enter a unique Single Sign-On (SSO) domain name, the default is vsphere.local. vSphere uses SSO to communicate across its different software components through a secure token exchange mechanism. SSO users can be members of the local domain, or an external trusted source like Active Directory (AD). Most organisations use Microsoft AD and therefore the SSO domain name should not be the same as your Active Directory domain. Configure a password for the SSO administrator, and click Next.

If you already have existing vCenter Servers in an SSO domain that you want to join, using Enhanced Linked Mode functionality (up to 15 vCenter Servers per SSO domain), enter the administrator credentials for the existing SSO domain.

Configure vCenter Server: SSO settings

Select or deselect the Customer Experience Improvement Program (CEIP) box and click Next.

Configure vCenter Server: Customer Experience Improvement Program (CEIP)

Review the details on the summary page and click Finish.

Configure vCenter Server: finalise details

Click Ok to acknowledge that the VCSA setup cannot be paused or stopped once started. When the installer is complete click Close to close the wizard.

Configure vCenter Server: stage 2 complete

Post-Installation Steps

Connect to the vCenter Server after the 2-stage installation is complete using the IP address or FQDN configured, from a web browser: https://vCenter-FQDN-or-IP/ui. Accessing vSphere through the Flash (FLEX) web client has been deprecated, and so the User Interface (UI) defaults to HTML5.

vCenter Server HTML5 client

Once you’re logged into vCenter you can start creating your data centre environments and adding ESXi hosts; a PowerCLI equivalent is sketched below. Both vCenter and ESXi come with automatic 60 day evaluation periods.

vCenter Server HTML5 client
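As mentioned above, the same first-run inventory tasks can be scripted with PowerCLI; the names and credentials below are placeholders.

```powershell
# Hedged sketch: connect to the new vCenter, create a datacenter and cluster, then add an ESXi host.
Connect-VIServer -Server vcenter.lab.local -User administrator@vsphere.local

$dc      = New-Datacenter -Location (Get-Folder -NoRecursion) -Name "Lab"
$cluster = New-Cluster -Location $dc -Name "Compute" -HAEnabled -DrsEnabled

# -Force accepts the host's self-signed certificate; credentials here are placeholders.
Add-VMHost -Name esxi01.lab.local -Location $cluster -User root -Password 'VMware1!' -Force
```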

The following steps may also be useful post-installation of vCenter Server 7.0:

  • You must apply a new vCenter Server license key before the end of the 60 day evaluation. Since this is a home lab environment I am able to use personal keys supplied by the VMware vExpert program:
    • Log into the vSphere Client using the SSO administrator credentials. An orange banner is displayed that links directly to the licenses page; alternatively you can select Administration from the Menu drop-down, and click Licenses.
  • Next, if you have an Active Directory domain you may want to add it to vCenter as an identity source. This can be configured in the Administration page under Single Sign On and Configuration.
  • The newly deployed vCenter Server can be backed up using file-based backups to a remote file share, or image-based backups of the virtual machine:
    • For file-based backups supported protocols include FTP, FTPS, HTTP, HTTPS, SFTP, NFS, or SMB. One of the available secure protocols should be used in production environments.
    • File-based backups can be configured in the appliance management interface accessible from a web browser at https://vCenter-FQDN-or-IP:5480, using the root credentials set during deployment.
    • If needed, a file-based backup can be restored to a new vCenter Server on deployment using the Restore option in the opening vCenter Server Installer page. Review the File-Based Backup and Restore of vCenter Server documentation for a full list of included configuration.
  • Windows users may want to enable the VMware Enhanced Authentication Plug-in for integrated Windows authentication.
  • For information on applying an SSL certificate to the vCenter Server Appliance see How-to Secure vCenter Server 7 (VCSA) with a Let’s Encrypt SSL Certificate.
  • If you are having problems with starting vCenter Server double check the system requirements are all in place, then check the installation log outputs identified in the Troubleshooting vCenter Server Installation or Deployment documentation. You may also be able to generate a log bundle for VMware support if you have an appropriate support contract in place.

vCenter Server with ESXi hosts

Featured image by Jonas Svidras on Unsplash

vSphere 7 and vSAN 7 Headline New Features

vSphere 7 Cloud Infrastructure for Modern Applications Part 1

vSphere 7 Cloud Infrastructure for Modern Applications Part 2

vSphere 7 with Kubernetes

Now and over the next 5 years, we will see a shift in how applications are built and run. In 2019 Line of Business (LOB) IT, or shadow IT, spend exceeded Infrastructure and Operations IT spend for the first time*. Modern applications are distributed systems built across serverless functions or managed services, containers, and Virtual Machines (VMs), replacing typical monolithic VM application and database deployments. The VMware portfolio is expanding to meet the needs of customers building modern applications, with services from Pivotal, Wavefront, CloudHealth, Bitnami, Heptio, Bitfusion, and more. In the container space, VMware is strongly positioned to address modern application challenges for developers, business leaders, and infrastructure administrators.

Launched on March 10 2020, with expected April 2020 availability, vSphere 7 with Kubernetes powers VMware Cloud Foundation 4. vSphere 7 with Kubernetes integration, the first product including capabilities announced as part of Project Pacific, provides real-time access to infrastructure in the Software-Defined Data Centre (SDDC) through familiar Kubernetes APIs, delivering security and performance benefits even over bare-metal hardware. The Kubernetes integration enables the full SDDC stack to utilise the Hybrid Infrastructure Services from ESXi, vSAN, and NSX-T, which provide the Storage Service, Registry Service, Network Service, and Container Service. Developers do not need to translate applications to infrastructure, instead leveraging existing APIs to provision micro-services, while infrastructure administrators use existing vCenter Server tooling to support Kubernetes workloads alongside Virtual Machines.

At this point, on-premises Kubernetes orchestration is available through VMware Cloud Foundation 4. You can read more about Kubernetes with vSphere 7 in vSphere 7 with Kubernetes and Tanzu on VMware Cloud Foundation. Continue reading this post to review the additional functionality introduced with vSphere 7 and vSAN 7 around lifecycle management, scalability, security, and compliance; you can also check the full vSphere 7 introduction here.

vSphere 7 Headline New Features

vCenter Server Profiles

vCenter Server Profiles are introduced in vSphere 7, enabling consistent configuration across the vCenter Server estate. vCenter Server Profiles export management, network, authentication, and user configurations into JSON format. The configurations can be edited, validated, and imported or pushed to up to 100 vCenter Servers, providing version control and a consistent last-known-good state. vCenter Server Profiles are accessible via 4 new REST APIs: list, export, validate, and import. This also means they can be consumed with DCLI, PowerCLI, or other automation tools such as Ansible, Puppet, and Chef. Behind the scenes, vCenter Server Profiles are known as Infrastructure Profiles and can be found under infra-profiles in the vCenter Developer Center API Explorer. Note that vCenter Server Profiles do not replace file-based backups for vCenter; the profile exports do not contain GUIDs etc. that would be required for a full and supported vCenter Server restore.

vCenter Server Profiles
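As a hedged illustration of consuming those REST APIs, the PowerShell 7 snippet below authenticates and lists the available profile configurations. The /api/appliance/infraprofile/configs path is an assumption based on the infra-profiles naming in the Developer Center mentioned above, so verify it (and the export, validate, and import actions) in the API Explorer before relying on it.

```powershell
# Hedged sketch (PowerShell 7): authenticate to the vCenter REST API and list infrastructure profiles.
# The endpoint path is an assumption to be confirmed in the Developer Center API Explorer.
$vc    = "https://vcenter.lab.local"
$basic = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes("administrator@vsphere.local:VMware1!"))

# Create an API session and capture the session token for subsequent calls.
$token = Invoke-RestMethod -Method Post -Uri "$vc/api/session" -Headers @{ Authorization = "Basic $basic" } -SkipCertificateCheck
$auth  = @{ "vmware-api-session-id" = $token }

# List the profile configurations available for export.
Invoke-RestMethod -Method Get -Uri "$vc/api/appliance/infraprofile/configs" -Headers $auth -SkipCertificateCheck
```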

vCenter Server Update Planner

The new vCenter Server Update Planner provides native tooling to help with discovering, planning, and upgrading vCenter Server and connected products successfully. VMware administrators can receive notifications in the vSphere Client when an upgrade or update is available. VMware product interoperability is built in and automatically detects installed products to provide monitoring and checks against the current vCenter Server version; showing compatible upgrades and removing guesswork and complicated interoperability questions for complex environments. To further validate upgrade paths, ‘what-if’ workflows and pre-update checks can be run against the selected target vCenter. The vCenter Server Update Planner also links to applicable release notes and Knowledge Base (KB) articles. An extra benefit of the vCenter Server 7 upgrade process is the automation of the external Platform Services Controller (PSC) consolidation, which is now built into the upgrade; more on this further down the post.

vCenter Server Cluster Image Management

Cluster Image is the new model for managing the ESXi lifecycle, providing consistency of ESXi hosts across a cluster. The cluster image comprises specific firmware, drivers, or vendor software add-ons, to create a desired state model with multi-host remediation capabilities. The Cluster Image feature is exposed through the vSphere Client and REST API, and also integrates with third-party vendor management tools such as Dell OpenManage and HPE OneView. This means host firmware can now be managed and upgraded from within vSphere, removing the risk of unsupported drivers and firmware. To use this feature, all hosts in a cluster must be the same hardware type and must all be running ESXi 7.0.

New vSphere DRS Improvements

Distributed Resource Scheduler (DRS) is evolving to meet Virtual Machine needs and has undergone several improvements. DRS now makes workload-centric placement decisions based on VM data gathered every minute, as opposed to cluster-centric decisions based on 5 minutes of data. Placement decisions are now based on the individual VM DRS scores and granted memory. This shifts the focus onto workload resource fulfilment, rather than the balance of the whole cluster. The VM DRS score is calculated using CPU %RDY (Ready) time, memory swap, CPU cache behaviour, headroom for the workload to burst, and migration cost. VM DRS scores are grouped into buckets of 20% increments.

Improved DRS vSphere 7

Improved DRS in vSphere 7 now includes Scalable Shares, providing relative resource entitlements across Resource Pools. Setting a share level to ‘high’ now ensures prioritisation over lower-share VM entitlements, whereas previously the higher share level did not guarantee a higher resource entitlement. Scalable Shares need to be enabled and can be configured at cluster level and/or Resource Pool level. Share allocations are dynamically changed depending on the number of VMs in a Resource Pool. The only exception to this rule is vSphere with Kubernetes, where a Resource Pool is used as a Namespace; in this instance Scalable Shares are used by default.

DRS placement now includes assignable hardware – support for hardware accelerators, for both DRS initial placement and vSphere High Availability (HA). When adding a new device, Dynamic DirectPath I/O and NVIDIA GRID vGPU devices are supported. DRS works with the assignable hardware framework to find a host with an available PCIe device configured, or hardware profile, when making initial placement decisions. The functionality requires the new VM hardware version 17.

New vSphere vMotion Improvements

Increased workload resource consumption, as applications change over time, has started presenting performance challenges during vMotion and stun times for large or monster VMs. To address these challenges, vMotion has been refactored as part of vSphere 7, bringing back vMotion capabilities for workloads like SAP HANA or Oracle.

During vMotion a page tracer is installed so vSphere can keep track of the memory pages that are overwritten by the guest OS while the VM is in a vMotion state. To install the page tracer, the vCPU is stopped (for micro-seconds), allowing the monitoring of memory page overwrites. These overwrites are referred to as page fires, which are replicated to the destination ESXi host. The page tracer was previously installed on all vCPUs in a VM; in vSphere 7 only one vCPU is claimed and dedicated to all the page tracing work during a vMotion operation. This improves the efficiency of page tracing and dramatically reduces the performance impact on a workload. When all memory pages have been migrated the last memory bitmap is transferred. In previous versions the entire bitmap was transferred; in vSphere 7 the bitmap is compacted and only the last pages are sent, cutting down the switch-over and stun time.

vMotion Improvements

Enhanced vMotion Capability (EVC) has been updated with new baselines for CPU packages: Intel Cascade Lake generation and AMD Zen2 generation (EPYC Rome).

New vSphere Security & Compliance Features

vSphere 7 now supports Intel Software Guard Extensions (SGX), which allows applications to work with the underlying hardware to create a secure enclave that cannot be viewed by the guest OS or hypervisor. The application can store secrets or data in the enclave, which is an important feature for risk management, although currently there is minimal hardware support. Intel Ice Lake CPUs will have dual socket implementations of SGX. If implementing SGX, remember that you will lose certain features such as vMotion, snapshots, etc. if the hypervisor cannot see everything in the VM; this becomes very much an application design decision.

vSphere 7 introduces vSphere Trust Authority (vTA), providing trusted hosts and encryption key management. Previous trust models in vSphere had the potential for running secure workloads on untrusted hosts, with no repercussions for failing secure baselines. Attestation and key management were done by vCenter Server, which itself could not be encrypted. The dependencies on the vCenter Server itself made it difficult to implement the principle of least privilege. With vTA a hardware root of trust is created using a separate ESXi host cluster; this can also be your management cluster. The key manager only talks directly to trusted hosts, rather than the vCenter Server. Workloads running on the trusted cluster, now including vCenter Server, can be encrypted. A smaller number of administrators can be given access to the trusted hosts, with regular admins maintaining access to the workload hosts. Currently, vTA is still foundational, so expect more functionality to be available in future releases. It is important to note that to use the trusted host model, the physical server must have a TPM 2.0 chip, which is cryptographically bonded to the host.

vSphere 7 Trust Authority

Identity Federation is introduced in vSphere 7 to modernise vSphere authentication, utilising standards-based federated authentication with enterprise Identity Providers. Using Identity Federation organisations can benefit from reduced audit scope and administrative workload, as well as security enhancements such as Multi-Factor Authentication (MFA). Initial integration will be with Active Directory Federation Services (ADFS) / Azure Active Directory, which alongside MFA is great for compliance and security. Identity Federation will also work with the Supervisor Cluster for Kubernetes, which inherits a lot of the security and functional controls from vCenter to help bridge the gap between developing modern applications and existing processes and infrastructure.

vSphere 7 Identity Federation

There are hundreds of improvements in vSphere 7 to drive consistency and trust in the environment. For example, the default settings for the vSwitch are now secure by default, the Certificate Management UI has been consolidated and simplified, and so on. You can review the vSphere 7 release notes for full information.

Additional Noteworthy vSphere 7 Features

  • vCenter Server Content Library: a new interface provides vast improvements in template management. Virtual Machine templates are now checked out to edit and checked in to save, facilitating version control, quick historical view of edits, and ability to restore to previous versions. You can switch between the new view and classic view in the vSphere client. Additional features such as versioning are only available when the VM template is stored in a Content Library. Advanced configuration now allows an update of auto-sync frequency and performance optimisation.
  • vCenter Server Multi-Homing: vCenter Server 7 now supports multiple network adaptors, the maximum supported vNIC limit is 4 per vCenter Server, with NIC1 reserved for vCenter HA.
  • vCenter Server SSO Domain Consolidation: vSphere SSO domain or external PSC consolidation has been simplified with new tooling commands for domain re-pointing or un-registering: cmsso-util unregister and domain-repoint.
  • vCenter Server External PSC Consolidation: the Upgrade and Migration setup no longer allows the deployment of an external PSC. Furthermore, the external PSC consolidation process is now automatically built into the upgrade, reducing administrative time and effort during the upgrade process. This means the vCenter Server Converge Tool has been removed from the ISO. The external PSC consolidation during an upgrade is also a supported configuration in JSON format when upgrading using the CLI.
  • VM Hardware v17: the new VM hardware version features a virtual Watchdog Timer providing guest OS and application monitoring, especially important for clustered applications like databases and filesystems. Precision Time Protocol (PTP) now provides sub-millisecond timekeeping, helpful for financial and scientific applications. PTP requires both an in-guest device and the ESXi service to be enabled.
  • vCenter Server Configuration Maximums: further enhancements to vCenter Server scalability:
    • vCenter Server (standalone) number of hosts per vCenter Server: 2500, powered-on VMs per vCenter Server: 30,000
    • Linked Mode vCenter Servers (15 per SSO domain) hosts: 15,000, powered-on VMs: 150,000
    • vCenter Server latency requirements for vCenter Server to vCenter Server: 150ms, vCenter Server to ESXi Hosts: 150ms, vSphere Client to vCenter Server: 100ms

vCenter Server Config Maximums

You can read the full vSphere 7 release information at Introducing vSphere 7: Essential Services for the Modern Hybrid Cloud as well as the vSphere 7 Data Sheet and vSphere 7 Product Page.

vSAN 7 Headline New Features

Several new features have been added to vSAN 7 alongside the vSphere 7 announcement; here are the key product enhancements:

Simplified Lifecycle Management

vSphere Lifecycle Manager (vLCM) is a new approach to unified software and firmware management, increasing reliability and decreasing the number of update tools. vLCM is built around the desired state model and monitors and remediates compliance drift. Desired state and desired images are applied at cluster level and manage the server stack as a whole, across hypervisor, drivers, and firmware. Furthermore, the modular framework supports vendor firmware plugins such as Dell and HPE.

Unified Block and File Storage

Fully Integrated File Services provides a native file service built into the hypervisor through vSAN. Cluster capacity for vSAN can be provisioned into file shares, with support for NFS v4.1 and v3 and file share quotas, unifying management of block and file storage. vSAN file shares are aimed at ease of use for both cloud-native and traditional workloads running in the cluster; they are not necessarily a replacement for large-scale filers.

Expanded Data Services

Continued Integration of Cloud Native Storage provides the control plane and storage service for vSphere with Kubernetes integration, and offers file-based persistent volumes easily accessible and managed within vCenter. This now includes support for vVols, persistent volume encryption and snapshots, volume resizing, and a mixture of tooling such as application monitoring with Wavefront, next-generation monitoring solutions like Prometheus, and infrastructure analytic solutions like vRealize Operations, providing an advanced level of visibility for vSphere administrators.

Improved Efficiency and Operations

  • Enhancements for Stretched Cluster and 2-Node Topologies: support for overriding the default gateway used by vSAN hosts, simplifying deployments and routed topologies, and an immediate repair operation after a witness host appliance is replaced.
  • Intelligent capacity management for stretched cluster topologies; when the cluster is in a capacity-constrained state, for example, due to host failure, objects in a critical condition are marked by vSAN as absent, allowing I/O to be processed at another site. The degraded state of the object in terms of resilience still stands, but the VM uptime is improved by allowing the continuation of read/write operations. The object is updated when the capacity strain condition is removed.
  • Stretched cluster awareness for DRS placement decisions; enables prioritisation of I/O read locality over VM site affinity rules, completion of vSAN resync before DRS migrations, and a reduction in I/O across ISL in recovery conditions.
  • Improved accuracy in VM capacity reporting across vCenter UI and APIs when working with thin-provisioned VMs, swap objects, and namespace objects; reducing confusion and inconsistency over provided and used space for a given VM.
  • A new vSAN memory metric has been added in the vSAN performance service to display memory consumption of vSAN operations such as hardware and software configuration changes. The additional vSAN memory metric shows time-based memory consumption per host and is available in the vCenter UI and API.
  • New vSphere Replication object identity types to easily identify objects created by or using vSphere Replication, replacing the previous unknown object type.
  • Additional support for larger storage devices; up to 32 TB physical drives, and up to 1 PB in logical capacity. This gives the potential for improved deduplication ratios when using larger devices for the capacity tier and deduplication domain.
  • Native support for NVMe hot-plug through vSAN and vSphere for selected OEM platforms. This feature reduces host restarts and administrative complexity when carrying out planned or unplanned maintenance.
  • Removal of Eager Zero Thick (EZT) requirement for vSAN shared disks, improving application consumption and flexibility.
  • The full vSAN announcement can be found here.

vSphere 7 with Kubernetes and vSAN 7 are built into VMware Cloud Foundation 4, read more on the March 10 2020 announcement in vSphere 7 with Kubernetes and Tanzu on VMware Cloud Foundation.

*LOB spend 51% to infrastructure operations spend 49% – source IDC WW Semiannual IT Spending Guide: Line of Business, 09 April 2018 (HW, SW and services; excludes Telecom)