VVOLs (Virtual Volumes) were introduced in vSphere 6 as an integration and management framework for external storage (NAS and SAN). The framework allows custom storage-based policies to be applied at the virtual machine level, as opposed to the datastore level. VVOLs also offer simplified management, improved resource utilisation, and the ability to offload storage operations, such as snapshots and cloning, to the storage array.
To avoid overloading a single post, I am splitting the write-up on VMware VVOLs into multiple parts: this overview of what they are and why you need them, and separate guides on how to configure them, Configuring VVOLs with Nimble Storage and Configuring VVOLs with EMC Unity.
In the physical world, servers typically had access to their own LUNs, and QoS could be applied to those LUNs accordingly. Since the arrival of virtual servers, multiple virtual machines with a combination of different data types generally reside in a single LUN or datastore. This means any storage-based policy applied to the LUN applies to every virtual machine it contains: servers are provisioned to the storage tier that best fits their overall needs, and any one virtual machine having difficulties can impact others sharing that datastore. Dedicating a datastore to each virtual machine would restore per-machine control, but it scales poorly due to the 256 LUN limit in ESXi, and the administrative overhead of provisioning a datastore per virtual machine makes it an unfeasible solution.
VMware Virtual Volumes hook into the back-end storage using vStorage APIs for Storage Awareness (VASA), enabling vSphere to integrate fully with an external storage device. Each virtual machine component, whether a VMDK file, swap file, and so on, resides on its own VVOL; together these VVOLs make up the virtual machine object. Virtual machines are provisioned using the VMware Storage Policy Based Management (SPBM) framework, which relies on the VASA client; both features are key to VVOLs and were introduced with vSphere 6. Storage-based policies can be applied at a granularity as fine as an individual virtual hard drive. This is useful for a server with separate drives for OS, database, and logs, where different block sizes, snapshot schedules, and retention periods would be beneficial.
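The per-disk granularity described above can be illustrated with a small sketch. This is plain Python with no VMware APIs involved; the class names, policy names, and attribute values are all hypothetical, chosen only to mirror the OS/database/logs example:

```python
from dataclasses import dataclass, field

# Hypothetical model of per-disk policy assignment under SPBM.
# Pre-VVOLs, one policy attaches to the datastore and every disk in
# it inherits that policy; with VVOLs each disk (virtual volume) can
# carry its own policy.

@dataclass
class StoragePolicy:
    name: str
    block_size_kb: int
    snapshot_interval_hours: int
    retention_days: int

@dataclass
class VirtualDisk:
    label: str
    policy: StoragePolicy  # policy travels with the disk, not the LUN

@dataclass
class VirtualMachine:
    name: str
    disks: list = field(default_factory=list)

# Different (illustrative) policies for the OS, database, and log
# disks of the same virtual machine.
os_policy = StoragePolicy("gold-os", block_size_kb=4,
                          snapshot_interval_hours=24, retention_days=7)
db_policy = StoragePolicy("gold-db", block_size_kb=8,
                          snapshot_interval_hours=1, retention_days=30)
log_policy = StoragePolicy("silver-log", block_size_kb=64,
                           snapshot_interval_hours=4, retention_days=14)

vm = VirtualMachine("sql01", [
    VirtualDisk("os", os_policy),
    VirtualDisk("database", db_policy),
    VirtualDisk("logs", log_policy),
])

for disk in vm.disks:
    print(f"{vm.name}/{disk.label}: {disk.policy.name}")
```

The point of the sketch is simply that each disk carries its own policy object, rather than inheriting one from the datastore it happens to live in.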
When you provision a virtual machine, its virtual volumes are automatically created from a storage container: a pool of raw storage configured on the array. The hypervisor accesses VVOLs through a Protocol Endpoint (PE), the access point and SCSI communication layer between the ESXi host and the storage array. The Protocol Endpoint also manages paths and policies, and each virtual volume is identified by a GUID, replacing the concept of LUNs and mount points.
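The provisioning flow above can be sketched in a few lines. Again this is an illustration in plain Python, not the VMware or array API; the `StorageContainer` class and the VM component names are invented for the example:

```python
import uuid

# Illustrative sketch: a storage container is a pool of raw capacity
# on the array; provisioning a VM carves virtual volumes out of it,
# each identified by a GUID rather than a LUN ID and mount point.

class StorageContainer:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.volumes = {}          # GUID -> (name, size_gb)

    def create_vvol(self, name, size_gb):
        if size_gb > self.free_gb():
            raise ValueError("container out of space")
        guid = str(uuid.uuid4())   # each VVOL gets a unique identifier
        self.volumes[guid] = (name, size_gb)
        return guid

    def free_gb(self):
        return self.capacity_gb - sum(s for _, s in self.volumes.values())

container = StorageContainer(capacity_gb=1000)

# Provisioning one VM creates a VVOL per component: here an
# (invented) config volume, swap volume, and data disk.
guids = [container.create_vvol(name, size) for name, size in
         [("vm01-config", 1), ("vm01-swap", 4), ("vm01-data", 40)]]

print(container.free_gb())  # 955
```

Note that capacity is consumed only by the volumes actually created, which is the basis of the thin, exact-size provisioning mentioned in the next paragraph.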
The full feature set will largely depend on your storage provider, and how much time and effort they have put into the solution. In theory, storage operations such as snapshots, clones, and replication should all be offloaded to the array, whilst still being initiated from vSphere. In all cases over-provisioning is eliminated, since each virtual volume consumes only the exact amount of storage it needs.
- Before you can implement VVOLs you need to be running vSphere 6.
- If you have already licensed vSphere Standard or above, there is no additional cost.
- The VASA 2.0 client is built into vSphere 6, and the VASA 3.0 client is built into vSphere 6.5. Both require a VASA provider to work. Check with your storage vendor that they support VASA, how they supply the VASA provider, and what other considerations this brings, e.g. high availability.
- If you have any specific requirements in your environment that you are unsure about, check the VVOLs FAQ.
- Cross check your hardware with VVOLs in the VMware Compatibility Guide.