A new version of Virtual SAN is now available: VMware vSAN 6.5 Install Guide.
VMware VSAN is an enterprise-class, high-performance, shared storage solution for Hyper-Converged Infrastructure. VSAN utilises server-attached flash devices and local hard disk drives to create a highly resilient shared datastore across hosts in a vSphere cluster. To achieve high availability, VMware administrators previously needed to connect to a SAN, NAS, or DAS device. VSAN removes the need for dedicated external shared storage by adding a software layer that can leverage local server hardware to provide the same resiliency and feature set.
Virtual SAN is uniquely embedded within the hypervisor kernel, directly in the I/O path, allowing it to make rapid data placement decisions. This means that VSAN can deliver the highest levels of performance without consuming extra compute resources, unlike other storage virtual appliances that run separately on top of the hypervisor. VSAN provides flexibility from hybrid configurations all the way up to all-flash architectures, delivering over 6M IOPS.
- Data protection and availability with built-in failure tolerance, asynchronous long-distance replication, and stretched clusters between geographically separate sites
- Leverages distributed RAID and cache mirroring to protect data against the loss of a disk, host, network, or rack
- Minimises storage latency by accelerating read/write disk I/O traffic with built-in caching on server-attached flash devices
- Software-based deduplication and compression with minimal CPU and memory overhead
- The ability to grow storage capacity and performance by adding new nodes or drives without disruption
- VM-centric storage policies to automate balancing and provisioning of storage resources and QoS
- Fully integrates with the VMware stack including vMotion, High Availability, Fault Tolerance, Site Recovery Manager, vRealize Automation, and vRealize Operations
- No additional install, appliances, or management interfaces
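The failure-tolerance features above map directly to host and capacity requirements. The sketch below assumes the standard RAID-1 mirroring rules (a cluster needs 2n+1 hosts to tolerate n failures, and each tolerated failure adds a full data replica); the function name and output format are illustrative, not part of any VMware tool, so verify against official sizing guidance for your version.

```python
# Sketch of how RAID-1 (mirroring) failure tolerance maps to replicas and
# minimum host count in a VSAN cluster. Assumes the standard mirroring
# rules (2n+1 hosts for FTT=n); these are not values read from any API.

def mirroring_requirements(ftt: int) -> dict:
    """Return replica count, minimum hosts, and raw-capacity multiplier
    for a given number of failures to tolerate (FTT) with RAID-1."""
    if ftt < 0:
        raise ValueError("FTT cannot be negative")
    replicas = ftt + 1           # one extra full copy per tolerated failure
    min_hosts = 2 * ftt + 1      # replicas plus witnesses spread across hosts
    return {
        "replicas": replicas,
        "min_hosts": min_hosts,
        "capacity_multiplier": replicas,  # each replica consumes a full copy
    }

# FTT=1 (the common default policy): 2 replicas, 3 hosts, 2x raw capacity
print(mirroring_requirements(1))
```

This is why the minimum supported cluster size is three hosts: the default policy of tolerating one failure needs two data replicas plus a witness on a third host.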
- VMware vCenter Server 6.0 U1 or later
- vSphere 6.0 U2 or above, vSphere with Operations Management 6.1 or above, or vCloud Suite 6.0 or above
- A minimum of 3 hosts in a cluster (max 64); alternatively, a 2-node configuration can use two onsite capacity-contributing hosts plus one offsite witness host that does not contribute capacity
- Each capacity contributing host in the cluster must contain at least one flash drive for cache and one flash or HDD for persistent storage
- SATA/SAS HBA or RAID controller
- Minimum of 1 GbE NICs, but 10 GbE is recommended (10 GbE is required for all-flash)
- Layer 2 multicast must be enabled on the physical switch that handles VSAN traffic, IPv4 only
- If you are deploying VSAN to your existing hardware or not using the VMware hyper-converged software stack then check the Hardware Compatibility Guide.
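The requirements above can be condensed into a simple pre-flight check. The sketch below is a hypothetical validator written for this article, not a VMware tool; the host dictionaries and field names are assumptions made for illustration, and in practice you would gather this data from your inventory tooling.

```python
# Hypothetical pre-flight check for the VSAN requirements listed above.
# Host records are made-up example data, not output from any VMware API.

MIN_HOSTS, MAX_HOSTS = 3, 64

def check_cluster(hosts, all_flash=False):
    """Return a list of human-readable problems (empty list = looks OK)."""
    problems = []
    if not MIN_HOSTS <= len(hosts) <= MAX_HOSTS:
        problems.append(f"cluster has {len(hosts)} hosts; need {MIN_HOSTS}-{MAX_HOSTS}")
    for h in hosts:
        if h["cache_flash_devices"] < 1:
            problems.append(f"{h['name']}: no flash device for the cache tier")
        if h["capacity_devices"] < 1:
            problems.append(f"{h['name']}: no device for the capacity tier")
        if all_flash and h["nic_gbe"] < 10:
            problems.append(f"{h['name']}: all-flash requires 10 GbE networking")
    return problems

hosts = [
    {"name": "esx01", "cache_flash_devices": 1, "capacity_devices": 2, "nic_gbe": 10},
    {"name": "esx02", "cache_flash_devices": 1, "capacity_devices": 2, "nic_gbe": 10},
    {"name": "esx03", "cache_flash_devices": 0, "capacity_devices": 2, "nic_gbe": 10},
]
print(check_cluster(hosts, all_flash=True))  # flags esx03's missing cache flash
```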
VSAN is licensed per CPU or per VDI desktop and comes in three tiers: Standard, Advanced, and Enterprise. QoS and stretched clusters require Enterprise licensing; all-flash support and inline deduplication and compression require Advanced licensing. Standard covers all other features.
To provide further flexibility and reduced costs, VMware have also introduced Virtual SAN for Desktop and Virtual SAN for ROBO. Virtual SAN for Desktop is licensed per concurrent user and sold in packs of 10 and 100, which limits the use of VSAN to VDI users only. Virtual SAN for ROBO (Remote Office Branch Office) is sold in packs of 25 VMs and is limited to 25 VMs per site.
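The edition rules above reduce to a simple lookup: find the highest tier demanded by any feature you plan to use. The sketch below encodes exactly the mapping described in this article; the feature labels are informal names chosen here, not official SKU terms.

```python
# Edition picker for the licensing rules described above: Enterprise for
# QoS and stretched clusters, Advanced for all-flash and inline
# deduplication/compression, Standard for everything else.
# Feature keys are informal labels invented for this sketch.

TIER_RANK = {"standard": 0, "advanced": 1, "enterprise": 2}
FEATURE_TIER = {
    "qos": "enterprise",
    "stretched_cluster": "enterprise",
    "all_flash": "advanced",
    "dedupe_compression": "advanced",
}

def minimum_edition(features):
    """Return the lowest VSAN edition that covers every requested feature."""
    needed = "standard"
    for f in features:
        tier = FEATURE_TIER.get(f, "standard")  # unlisted features fall to Standard
        if TIER_RANK[tier] > TIER_RANK[needed]:
            needed = tier
    return needed

print(minimum_edition(["all_flash", "dedupe_compression"]))  # advanced
print(minimum_edition(["stretched_cluster"]))                # enterprise
```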
We must first ensure each host in the cluster is configured with a VMkernel port for use with VSAN traffic.
In the vSphere web client browse to each of the hosts in the designated cluster for which you intend to use VSAN, open the Manage tab and click Networking. For production environments consider using multiple physical NICs in a standard switch or uplinks in a distributed switch.
For the purposes of this lab environment click VMkernel Adapters and select the Management network, then click the Edit Settings icon. Add Virtual SAN traffic to the list of available services and click OK. The VSAN traffic will now share the management network; this is not recommended for production workloads.
If you would prefer to set up separate networking for VSAN traffic, click the Add Host Networking icon. Select the connection type as VMkernel Network Adapter and click Next.
Configure the switch settings, network adapters, and IP settings then click Finish.
There is no additional software installation required for VSAN; since the components are already embedded in the hypervisor, we can simply enable the required features from the vSphere web client.
To enable VSAN browse to the appropriate cluster in the vSphere web client and click the Manage tab. Expand Virtual SAN and select General. You will see a message that Virtual SAN is not enabled, so click Configure.
Review the options on the VSAN capabilities page and select any features you wish to enable. Hover over the grey information circle for further information about the available options, and when you’re ready click Next.
The network validation page will confirm that each host in the cluster has a valid VSAN kernel port, click Next.
On the claim disks page select the disks to add to the cache and capacity pools then click Next and Finish.
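Behind the claim-disks step, the devices you select are organised into disk groups; in this generation of VSAN each disk group pairs one cache flash device with up to seven capacity devices. The sketch below models that grouping with a simple round-robin assignment; the device names and the assignment strategy are illustrative assumptions, not how the wizard actually distributes disks.

```python
# Illustrative model of disk-group layout: one cache flash device per
# group, up to seven capacity devices each. Device names are made up,
# and round-robin assignment is an assumption for this sketch only.

MAX_CAPACITY_PER_GROUP = 7

def build_disk_groups(cache_devices, capacity_devices):
    """Assign capacity devices to cache devices round-robin, one group per cache device."""
    if not cache_devices:
        raise ValueError("at least one cache device is required")
    groups = {c: [] for c in cache_devices}
    for i, dev in enumerate(capacity_devices):
        cache = cache_devices[i % len(cache_devices)]
        if len(groups[cache]) >= MAX_CAPACITY_PER_GROUP:
            raise ValueError("too many capacity devices for the available cache devices")
        groups[cache].append(dev)
    return groups

print(build_disk_groups(["ssd0"], ["hdd0", "hdd1", "hdd2"]))
```

The per-group cap is why hosts with many capacity drives need more than one cache device: a single flash device can front at most seven capacity devices.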
Virtual SAN will now pool the selected resources into the VSAN datastore, and you can start provisioning machines right away. VSAN creates and presents a single datastore, containing all disks, for each vSphere cluster.
If you are interested in learning more about VSAN, Duncan Epping has compiled a list of all the VSAN resources you’ll need, and then some. VMware also have a hosted evaluation of VSAN you can try out with Hands On Labs.