This post lists some of the available options for connecting VMware Cloud on Amazon Web Services (AWS) with native AWS services for hybrid cloud deployments. Read more about multi-account and VPC management at Building AWS Environments for VMware Cloud Customers.
The most common way of consuming AWS services with VMware Cloud on AWS is to use the built-in Elastic Network Interface (ENI) functionality. Each VMware Cloud Software-Defined Data Center (SDDC) can be connected to an AWS Virtual Private Cloud (VPC) during the deployment phase. A VPC is a logically isolated virtual network within the AWS cloud. At scale, you may choose to have many VPCs and many accounts for different applications and environments. Multiple VPCs can be connected together using an AWS Transit Gateway (TGW). A further option we will look at is VPC Endpoints, which enable you to privately connect to supported AWS services and endpoints.
1. Connected VPC
The AWS bare metal hosts deployed for VMware Cloud on AWS use a redundant 25 Gbps physical interface, or Elastic Network Adapter (ENA). The physical interface uses a trunk port to carry multiple VLANs for services like management, vMotion, NSX, and connectivity to the AWS backbone network.
The cross-linked VPC architecture is provided by a series of ENIs. Each host in the vSphere cluster uses the network adapter outlined above to provide an individual cross-VPC ENI per physical host, supporting high-bandwidth, low-latency connectivity to native AWS services.
VMware Cloud on AWS uses the AWS VPC as an underlay for NSX-T. The NSX Edge (Tier 0) router is a virtual router acting as the uplink to the connected VPC. The active ENI in use is the one on the physical ESXi host where the virtual router is running. The connected VPC is owned and managed by the customer; any native services deployed are billed separately by AWS. When deploying the SDDC, the connected account and VPC are required, along with a private subnet in each applicable Availability Zone (AZ). A static route is created for the defined subnets, adding the connected VPC router as the next hop.
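As a rough illustration of the static-route behaviour described above, the sketch below models which path a packet from an SDDC VM would take. The subnet prefixes are hypothetical placeholders, not values from any real deployment:

```python
import ipaddress

# Hypothetical connected-VPC subnets (one per Availability Zone) chosen
# at SDDC deployment time; traffic to these prefixes is routed across
# the cross-VPC ENI rather than out of the default route.
CONNECTED_VPC_SUBNETS = [
    ipaddress.ip_network("172.31.0.0/20"),
    ipaddress.ip_network("172.31.16.0/20"),
]

def next_hop(destination: str) -> str:
    """Return which path a packet from an SDDC VM would take."""
    ip = ipaddress.ip_address(destination)
    if any(ip in subnet for subnet in CONNECTED_VPC_SUBNETS):
        return "cross-VPC ENI"
    return "default route"

print(next_hop("172.31.5.10"))   # lands in the connected VPC
print(next_hop("8.8.8.8"))       # follows the default route
```

This is only a conceptual model; in the SDDC itself the routing is handled by the NSX Tier 0 router and is not customer-programmable in this way.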
Traffic that traverses the ENI is not chargeable; however, cross-AZ charges do need to be taken into consideration if a Stretched Cluster is in use. During provisioning of the SDDC, and connection of the customer-managed AWS account, a CloudFormation template is deployed creating the necessary AWS Identity and Access Management (IAM) roles and ENI configuration.
Following the SDDC deployment, you can view the connected account, VPC, ENI, and subnet details in the Connected VPC menu under the Networking & Security tab of the SDDC, from the VMware Cloud Services Portal.
2. VPC Endpoints
VPC Endpoints allow private connectivity between your VPC and supported AWS services or custom applications. Network traffic traversing a VPC Endpoint does not leave the AWS backbone network and therefore does not require an Internet Gateway, Direct Connect, or VPN connection.
The general process for creating an endpoint is the same across these VPC Endpoint types. In the example below, we are connecting to a VPC Endpoint Service for Splunk, fronted by a Network Load Balancer (NLB) in another VPC. The administrator of the VPC Endpoint Service needs to grant IAM service consumer permissions and accept the incoming connection, as detailed in the AWS documentation here.
In the AWS console, I log into the connected account and select the VPC service. I choose Endpoints and Create Endpoint. To create a Gateway VPC Endpoint (e.g. for S3 or DynamoDB) or an Interface VPC Endpoint (e.g. for most other supported AWS services), I would select the appropriate service from the AWS services category. In this instance, I use Find service by name and enter the endpoint private service name. Either way, the key point is that I select the connected VPC from the VPC drop-down, and the subnets that match those used for the ENI when deploying the VMware Cloud on AWS SDDC.
By using the cross-VPC linked subnets, the virtual machines in the SDDC will utilise the static route across the ENI outlined in the Connected VPC section above. AWS Security Groups can be used to limit this to certain source IP addresses from within the SDDC, or the wider VPC, if required. In this instance, we can successfully test the connection over port 443 following the creation of the VPC Endpoint.
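The console steps above map to a single API call. Below is a minimal sketch of the request parameters as they would be passed to boto3's `ec2.create_vpc_endpoint`; all the identifiers and the service name are hypothetical placeholders:

```python
# Sketch of the parameters for an Interface VPC Endpoint request, in the
# shape accepted by boto3's ec2.create_vpc_endpoint. No AWS call is made
# here; all IDs and the service name are hypothetical placeholders.
def endpoint_request(vpc_id, service_name, subnet_ids, security_group_ids):
    return {
        "VpcEndpointType": "Interface",
        "VpcId": vpc_id,                         # the connected VPC
        "ServiceName": service_name,             # private service name from the provider
        "SubnetIds": subnet_ids,                 # the ENI-linked subnets used at SDDC deployment
        "SecurityGroupIds": security_group_ids,  # restrict sources, e.g. SDDC segments only
    }

params = endpoint_request(
    "vpc-0123456789abcdef0",
    "com.amazonaws.vpce.eu-west-2.vpce-svc-0123456789abcdef0",
    ["subnet-0aaa1111", "subnet-0bbb2222"],
    ["sg-0ccc3333"],
)
print(params["VpcEndpointType"])  # Interface
```

Using the ENI-linked subnets in `SubnetIds` is what ensures the endpoint is reachable from the SDDC over the static route described earlier.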
3. Additional VPC Connectivity
Traditionally, VPC Peering has been used to provide one-to-one private network connectivity between VPCs, including VPCs in different accounts. VPC Peering cannot be configured in the SDDC as we do not have access to the underlying AWS account. VPN connections between additional VPCs and the SDDC router (Tier 0) can be configured from the VMware Cloud Services Portal, enabling VMware Cloud on AWS connectivity with other VPC environments. As the number of VPCs and accounts begins to scale, the VPN approach becomes harder to manage.
This predicament is resolved with a relatively new addition to AWS: the Transit Gateway (TGW). The native AWS TGW is available now and acts as a transit network hub, allowing you to connect multiple VPCs and on-premise networks (using Direct Connect or VPN attachments). A Managed Transit Gateway is being developed by VMware to assist with multi-SDDC and multi-VPC connectivity. You can review how the native AWS Transit Gateway fits into the VMware Cloud on AWS architecture on the VMware Network Virtualization blog: VMware Cloud on AWS with Transit Gateway Demo:
Kicking off 2020 with the theme of the year: security.
To keep this content accurate, the bulk of it has been quoted from the Cloud Security Alliance (CSA) VMware Cloud on Amazon Web Services (AWS) self-assessment. The CSA defines best practices for secure cloud computing environments. The full assessment can be found here under VMware Cloud on AWS. I have listed the key points from the CSA applicable to our customer use case below; any direct quotes are in blue text, alongside further information from the following must-read resources:
VMware Cloud on AWS (the Service Offering or VMware Cloud) brings VMware’s enterprise class Software-Defined Data Center software to the Amazon Web Services cloud, enabling customers to run any application across vSphere-based private, public, and hybrid cloud environments.
The Service Offering has the following components:
Software-Defined Data Center (SDDC) consisting of:
VMware vSphere running on elastic bare metal hosts deployed in AWS
VMware vCenter Server appliance
VMware NSX Data Center to power networking for the Service Offering
VMware vSAN aggregating host-based storage into a shared datastore
VMware HCX enabling app mobility and infrastructure hybridity
Self-service provisioning of SDDCs, on demand, from vmc.vmware.com
Maintenance, patching, and upgrades of the SDDC, performed by VMware
The SDDC service offering uses dedicated AWS physical hardware per tenant; each ESXi host you purchase is a dedicated physical AWS bare metal server. Each customer environment is logically and physically separated; there is no multi-tenancy or nested virtualisation. The customer is in charge of their own workloads, as well as ingress/egress and user access.
This post focuses on the security of the VMware Cloud on AWS platform and collates information provided by VMware. The network design is a topic that requires addressing in its own right: connectivity, default route, Internet access, firewall and load balancing, etc. To ensure you secure the network along with user access and workloads, as outlined in the next section, review in full the above documentation, the VMware Cloud on AWS Documentation, and Reference Architectures, and engage your VMware customer success or account team.
Although VMware Cloud on AWS utilises Infrastructure as a Service (IaaS) from AWS, the customer consumes the platform as a whole from VMware, and therefore maintains a relationship and support contract with VMware. Support for any native AWS services deployed in the customer's connected AWS Virtual Private Cloud (VPC) remains with AWS as normal. The security model, therefore, is shared between the customer and VMware. VMware separates the roles as follows:
We (VMware) will use commercially reasonable efforts to provide:
Information Security: We will protect the information systems used to deliver the Service Offering over which we (as between VMware and you) have sole administrative level control.
Security Monitoring: We will monitor for security events involving the underlying infrastructure servers, storage, networks, and information systems used in the delivery of the Service Offering over which we (as between VMware and you) have sole administrative level control. This responsibility stops at any point where you have some control, permission, or access to modify an aspect of the Service Offering.
Patching and Vulnerability Management: We will maintain the systems we use to deliver the Service Offering, including the application of patches we deem critical for the target systems. We will perform routine vulnerability scans to surface critical risk areas for the systems we use to deliver the Service Offering. Critical vulnerabilities will be addressed in a timely manner.
You (the customer) are responsible for addressing the following:
Information Security: You are responsible for ensuring adequate protection of the Content that you deploy and/or access with the Service Offering. This includes, but is not limited to, any level of virtual machine patching, security fixes, data encryption, access controls, roles and permissions granted to your internal, external, or third party users, etc.
Network Security: You are responsible for the security of the networks over which you have administrative level control. This includes, but is not limited to, maintaining effective firewall rules in all SDDCs that you deploy in the Service Offering.
Security Monitoring: You are responsible for the detection, classification, and remediation of all security events that are isolated with your deployed SDDCs, associated with virtual machines, operating systems, applications, data, or content surfaced through vulnerability scanning tools, or required for a compliance or certification program in which you are required to participate, and which are not serviced under another VMware security program.
In a nutshell, as well as user access and connectivity, the customer is ultimately responsible for what is inside the Virtual Machine. This means things like Anti-Virus, operating system and application patches, monitoring, backups, access control / privileged access, etc. The VMware Cloud Service Offerings Terms of Service backs this up, stating:
2.2 You (the customer) are responsible for taking and maintaining appropriate steps to protect the confidentiality, integrity, and security of Your Content. Those steps include (a) controlling access you provide to your Users, (b) configuring the Service Offering appropriately, (c) ensuring the security of Your Content while it is in transit to and from the Service Offering, (d) using encryption technology to protect Your Content, and (e) backing up Your Content.
It is the customer’s responsibility to secure data appropriately through accessibility and authorisation. This means securing connectivity with a Virtual Private Network (VPN) or Direct Connect and maintaining associated on-premise firewalls accordingly, as well as implementing security policies and firewall rules for the VMware Cloud Internet Gateway, VMware Cloud Compute and Management Gateways (NSX Edge Firewalls), and the NSX Distributed Firewall (Micro-Segmentation).
While the NSX Edge Firewalls protect north-south traffic (essentially anything coming in or out of the SDDC), the Distributed Firewall protects east-west traffic between workloads inside the SDDC. Micro-Segmentation can be used to protect applications by ring-fencing virtual machines in a zero-trust architecture. The VMware Cloud on AWS NSX Networking and Security eBook goes into great detail on the NSX Edge and Distributed Firewalls with screenshots and configuration examples in chapter 6 (page 83). Both firewall types are included for all virtual machines in the default VMware Cloud on AWS pricing model.
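The zero-trust idea behind Micro-Segmentation can be sketched as ordered rule evaluation with an implicit default deny. This is a toy model, not NSX syntax; the tier names and ports are made up:

```python
# Toy model of ordered firewall rule evaluation with default deny, as
# used in a zero-trust micro-segmentation policy. Rule fields and group
# names below are hypothetical, not NSX Distributed Firewall syntax.
RULES = [
    {"src": "web-tier", "dst": "app-tier", "port": 8443, "action": "allow"},
    {"src": "app-tier", "dst": "db-tier",  "port": 3306, "action": "allow"},
]

def evaluate(src, dst, port):
    """Return the action of the first matching rule, else default deny."""
    for rule in RULES:
        if (rule["src"], rule["dst"], rule["port"]) == (src, dst, port):
            return rule["action"]
    return "deny"  # anything not explicitly allowed is blocked

print(evaluate("web-tier", "app-tier", 8443))  # allow
print(evaluate("web-tier", "db-tier", 3306))   # deny: web cannot reach the database directly
```

The key property is the last line of `evaluate`: traffic between ring-fenced workloads is blocked unless a rule explicitly permits it.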
All native AWS services deployed in the connected AWS VPC fall under the customer's responsibility to secure as normal with Security Groups, Access Control Lists (ACLs), Identity and Access Management (IAM) groups/roles/policies, etc. This includes the 25 Gbps cross-VPC Elastic Network Interfaces (ENIs) deployed to connect the SDDC with the customer's VPC.
VMware Cloud on AWS customers retain control and ownership of their Customer Content and it is the customer’s responsibility to manage data retention to their own requirements. VMware Cloud on AWS backs up Account Information including system configuration settings but does not provide backup services for Customer Content.
The CSA assessment and VMware Cloud Services Security Overview go into more detail on code security, change control, quality assurance, and configuration management; however, it is worth calling out patching specifically. VMware are responsible for patching the underlying infrastructure of the platform; this includes all network, utility, and security equipment. Critical security patches are 'installed in a timely manner', while non-critical patches are included in predefined patch schedules.
Customers have visibility into how VMware SDDC products are updated from the Cloud Services Portal. In most cases, updates and patches can be applied before General Availability (GA); some products run VMware Cloud specific versions and do not need to wait for the next GA release of vSphere, for example. VMware also has subscriptions to vendor security and bug-tracking notification services, meaning remediation efforts are accelerated and critical or high-risk issues prioritised, with fixes often applied before a vulnerability has been made public.
VMware Cloud on AWS uses Amazon Web Services (AWS) geographically resilient data center hosting facilities. Data centers are built in clusters in various global regions. VMware provides customers the flexibility to place VMware Cloud on AWS instances and store data within multiple geographic regions as well as across multiple Availability Zones within each region to minimize risk.
Physical access is strictly controlled both at the perimeter and at building ingress/egress points and includes, but is not limited to, fencing, walls, video surveillance, intrusion detection systems, and other electronic biometric access controls and alarm monitoring systems managed by 24x7x365 professional security staff.
AWS equipment is protected from outages in alignment with ISO 27001 standard. AWS has been validated and certified by an independent auditor to confirm alignment with ISO 27001 certification standard. AWS Availability Zones are all redundantly connected to multiple tier-1 transit providers.
Customers explicitly choose which VMware Cloud on AWS data center best suits their needs, and customer data will not traverse locations without the explicit actions of the tenant administrator.
Automated processes are in place that handle media sanitization before repurposing of any hardware. Upon the explicit deletion of a production environment by a tenant, a cryptographic wipe of the hard drive is performed via destruction of keys used by the self-encrypting drives.
When a physical storage device has reached the end of its useful life, a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals is followed using techniques detailed in NIST 800-88 (“Guidelines for Media Sanitization”) as part of the decommissioning process – the same applies when exiting the VMware Cloud service.
The exact locations of AWS data centres are generally kept secret, and they do not run data centre tours (this digital tour is about as good as it gets). AWS provides regions which contain multiple Availability Zones, consisting of one or more data centres all physically separated from one another. Electrical power systems, water, telecommunications, and internet connectivity are all designed to be fault-tolerant. Availability Zones are connected using private fibre-optic networking allowing customers to architect highly available solutions.
Physical access to data centres is restricted to those with valid and approved business justification. Site and server room access is limited to authorised individuals and requires point-in-time access with multi-factor authentication. AWS implement additional perimeter security features, outlined above, and monitoring for events like open doors and removal of assets.
Media storage devices used to store customer data are classified by AWS as critical and treated as high-impact throughout their life-cycle. Media is decommissioned using techniques detailed in NIST 800-88 and is not removed from AWS control until it has been securely decommissioned. AWS and employees are audited by multiple compliance programs, you can download AWS Compliance Reports from the AWS Artifact service in the AWS console.
Security Operations Monitoring
VMware monitors internal platforms and systems for privacy breaches and has a breach notification process to notify customers in the event of a privacy breach. If VMware becomes aware of a security incident on VMware Cloud on AWS that leads to the unlawful disclosure of or access to personal information provided to VMware as a processor, we will notify customers without undue delay, and will provide information relating to a data breach as reasonably requested by our customers. VMware will use reasonable endeavors to assist customers in mitigating, where possible, the adverse effects of any personal data breach.
VMware Cloud on AWS has the capability to detect attacks that target the virtual infrastructure, with several intrusion detection mechanisms in place. VMware log aggregation systems continuously ingest AWS firewall and security service logs, along with CloudTrail logs, infrastructure logs, and VPC Flow Logs. VMware continuously collects and monitors service operation logs using SIEM technologies. The 24x7x365 VMware Security Operations Center uses the SIEM to correlate information with public and private threat feeds to identify suspicious and unusual activities.
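The threat-feed correlation described above can be sketched in miniature: flag any ingested log event whose source address matches a known-bad indicator. The event shapes and IPs below are made up for illustration:

```python
# Toy sketch of SIEM-style correlation: match ingested log events
# against a threat feed of known-bad indicators. All IPs and event
# fields are hypothetical (203.0.113.0/24 is a documentation range).
THREAT_FEED = {"203.0.113.50", "198.51.100.7"}

def correlate(events):
    """Return events whose source address matches a known-bad indicator."""
    return [e for e in events if e["src_ip"] in THREAT_FEED]

events = [
    {"src_ip": "10.0.1.5",      "action": "login"},
    {"src_ip": "203.0.113.50",  "action": "login_failed"},
]
suspicious = correlate(events)
print(len(suspicious))  # 1
```

A production SIEM does far more (normalisation, time-window correlation, enrichment), but the basic join of log stream against threat intelligence is the same shape.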
The real-time status of the VMware Cloud on AWS services along with past incidents is publicly available here. Availability reports are available to customers upon request within 45 days after a validated SLA event.
The VMware Security Operations Center (SOC) uses log capture and SIEM tools, security monitoring technologies, and intrusion detection tools in real time to identify unauthorized access attempts or any behaviour that would indicate abnormal activity.
All changes to the virtual machine configuration are logged and available to the customer, which enables detection of tampering and integrity checking.
VMware monitors AWS infrastructure and receives notifications directly from AWS in the event of a provider failure. VMware has developed processes with AWS to ensure that we have defined disaster recovery mechanisms in place should an upstream event occur. VMware Cloud on AWS has conducted successful DR testing and continues to test annually.
See the Data Access section for more information on access logging. For ingesting logs from VMware Cloud on AWS, as well as native AWS, and other sources, customers can use Log Insight Cloud, which has both free and chargeable versions.
To address any further concerns, customers can also use their own Security Information and Event Management (SIEM) tools, such as Splunk, to continuously monitor the VMware Cloud on AWS environment for any unauthorised activity. Furthermore, tools currently used to scan or secure VMware environments on-premise can mostly be carried across to the VMware Cloud on AWS environment with IPFIX and Port Mirroring. This gives the customer unprecedented visibility under the hood of a cloud environment. The VMware Cloud on AWS NSX Networking and Security eBook goes into more detail on these operational tools in chapter 7 (page 101).
As well as Log Insight Cloud and/or the customers own SIEM tools, AWS can be used to monitor the connected VPC and services. AWS CloudTrail is a service that logs all API calls associated with your account. At the same time, AWS Config provides visibility of assets through an inventory of AWS resources and a history of configuration changes to these resources. You can use AWS Config to define rules that evaluate these configurations for compliance. VMware Cloud resources deployed to your connected VPC, such as IAM configurations for SDDC formation, and the attached ENIs, can be found in AWS Config. Your AWS logs can also be added as a content pack or log source for Log Insight Cloud.
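To make the CloudTrail piece concrete, the sketch below filters trail records for the `CreateNetworkInterface` events generated during SDDC deployment. The record is a trimmed, hypothetical example of the real CloudTrail log shape:

```python
import json

# Minimal sketch of filtering CloudTrail records for ENI activity, e.g.
# the CreateNetworkInterface events generated during SDDC deployment.
# This is a trimmed, hypothetical record, not a full CloudTrail export.
sample_trail = json.loads("""{
  "Records": [
    {"eventName": "CreateNetworkInterface", "eventSource": "ec2.amazonaws.com"},
    {"eventName": "RunInstances",           "eventSource": "ec2.amazonaws.com"}
  ]
}""")

def eni_events(trail):
    """Return only the network-interface creation events."""
    return [r for r in trail["Records"] if r["eventName"] == "CreateNetworkInterface"]

print(len(eni_events(sample_trail)))  # 1
```

In practice you would point the same filter at the CloudTrail log files delivered to an S3 bucket, or query them through Athena or a log source in Log Insight Cloud.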
In the example screenshots below, you can see part of the AWS CloudTrail logs for the initial SDDC deployment. Highlighted in green is my user account linking the AWS account and running the CloudFormation template to create the appropriate IAM configuration; a few minutes later, in yellow, are the events for the ENIs being added and configured. You can view this in more detail in your own environment. The second screenshot shows AWS Config verifying that no changes have been made since initial deployment. I have had to remove most of the detail, but you get the idea.
It is important to note at this point that AWS security tools can only be used in the accounts you have access to. When VMware Cloud on AWS is deployed (i.e. the Elastic Compute Cloud (EC2) bare metal instances with associated VPC, subnets, routing table, etc.), the customer does not have access to the underlying account and VPC. This is where the VMware logs outlined above are used to monitor the environment.
Penetration Testing & Audit
VMware has a comprehensive vulnerability management program. As a part of the vulnerability management program, penetration tests are performed at least annually.
Penetration test results are not provided externally. VMware Cloud on AWS is subject to regular internal and external reviews and security assessments. As a part of the VMware Cloud on AWS vulnerability management program, results are reviewed by the VMware security team(s) and remediation is performed based on the security team’s guidance.
With prior approval, Tenants are permitted to perform vulnerability assessments against their allocated service objects. Tenants are not permitted to perform vulnerability assessments against shared VMware assets.
VMware engages independent third-party auditors to perform reviews against industry standards. VMware will furnish audit reports under NDA.
Internal audits are performed at least annually under the VMware information security management system (ISMS) program. VMware utilizes internal/external audits as a way to measure the effectiveness of the controls applied to reduce risks associated with safeguarding information and to identify areas of improvement. Audits are essential to the VMware continuous improvement programs.
External audit reports will be provided to customers under NDA. Internal audit reports are classified as VMware confidential information and are not provided to tenants. Internal audit reports are reviewed by independent third-party auditors as a part of the VMware compliance program.
Risk assessments are performed at least annually, and results are disseminated to management. Adjustments are made to policies, procedures, standards and controls where necessary to address risks and corrective action plans.
VMware Cloud on AWS is built on the VMware Photon OS and VMware ESXi. The VMware Cloud on AWS Operations team disables unnecessary ports, protocols, and services to harden the production environment. VMware applies security templates via Group Policy Object and we further harden servers through scripts. External communication in the production environment is restricted to ports 80 and 443 and all traffic passes through a firewall before reaching proxy servers in our DMZ. Managed interfaces are configured to deny-all communications traffic by default and allow network communications traffic by exception. – VMware Cloud on AWS deployment uses AWS CloudFormation; configuration and failure remediation are also scripted and automated to ensure a consistent and secure approach.
Customers maintain control of who has access to their VMware Cloud on AWS SDDC environment. VMware Cloud on AWS supports Identity Federation between vSphere and the customer’s identity provider using SAML standards for authentication. – Role-Based Access Control (RBAC) is used to assign permissions to both the vCenter Server and the Cloud Services Portal.
VMware Cloud on AWS natively supports Common Access Card Authentication and RSA SecurID Authentication to the vSphere client. Other multi-factor authentication systems can be supported via federation between vSphere and the customer’s Identity Provider.
Access control, separation of duties, and other policies define which individuals are allowed to have access to VMware Cloud on AWS management systems. Access to customer environments where customer data is stored is limited to authorized VMware support engineers, who must authenticate via two-factor authentication to an access control system in order to generate user-specific, time-limited credentials. Generation of these temporary credentials must be tied to an existing specific support incident ticket in the system. All activity performed by the support engineers is logged while accessing customer SDDCs. – In general, automated runbooks will address previously encountered issues. Execution of automated runbooks is logged and can be traced back to specific support personnel. However, in the event an issue requires the VMware Cloud on AWS Site Reliability Engineering (SRE) team to access the SDDC, time-limited credentials are generated providing access to a specific SDDC for only 8 hours, and must be linked to a system-generated or customer-generated support ticket. All activities carried out are visible straight away to the customer via the vCenter logs.
Privileged access is logged and captured in a centralized log server, and is monitored by the 24x7x365 VMware Security Operations Center using the same SIEM technologies and threat feeds described earlier.
Restricted, authorized personnel have access to the definitive central log servers for the VMware Cloud on AWS servers. Log aggregation sources and storage are protected and integrity of log data is preserved. Security logs are stored for at least 1 year.
The Customer’s access logs are replicated to other systems where they can be viewed by customers and other individuals with appropriate approvals.
VMware has also deployed mechanisms to ensure that the log data has been properly copied, transported and securely stored to preserve the information as required to maintain full data integrity. Metadata about the environment including security logs are stored for at least 1 year.
VMware Cloud on AWS platform access controls are implemented via directory services group management. All individuals who have access to the IT infrastructure and their level of access can be identified by enumerating the members of these dedicated groups.
VMware conducts criminal background checks, as permitted by applicable law, as part of pre-employment screening practices for employees commensurate with the employee’s position and level of access to the service.
In alignment with the ISO 27001 standard, all VMware personnel are required to complete annual security awareness training. Personnel supporting VMware managed services receive additional role-based security training to perform their job functions in a secure manner.
All VMware personnel are required to sign confidentiality agreements as a part of onboarding. Additionally, upon hire, personnel are required to read and accept the Acceptable Use Policy and the VMware Business Conduct Guidelines.
Key management policies and procedures are in place to guide personnel on proper encryption key management. Access to cryptographic keys is restricted to limited operational personnel and all access is logged and monitored. Cryptographic keys used by self-encrypting drives are managed by AWS.
All keys used in VMware Cloud on AWS are unique per tenant. Tenant specific keys are programmatically generated by an independent and well-established certificate authority at the time of provisioning and are tied to the unique URLs created for each tenant.
All Customer Content imported to VMware Cloud on AWS is stored on dedicated physical NVMe storage hardware that is self-encrypting using XTS-AES-256. Encryption keys of the self-encrypting drives are generated in the physical SED controller and never leave the storage device. The vSphere Key Encryption Keys (KEKs) are encrypted with the AWS Key Management Service (KMS) Customer Master Key (CMK), which is managed by AWS KMS using FIPS 140-2 validated hardware security modules (HSMs). Each Data Encryption Key (DEK) is encrypted using the local host KEK and is then used for encrypting and decrypting virtual machine files. The vSphere managed encryption keys can be managed and rotated by the customer at any time using the vSAN API or through the vSphere UI.
By default, all customer data at rest is also encrypted by vSAN data-at-rest encryption using the XTS-AES-256 cipher, with two levels of keys: the KEK (as the master key) and the DEK (the per-disk data key).
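The benefit of the two-level KEK/DEK hierarchy is that rotating the master key only re-wraps the small data key, so the bulk data never needs re-encrypting. The sketch below illustrates just that structure; the XOR "wrap" is purely illustrative and is not a real cipher or the vSAN implementation:

```python
import secrets

# Purely illustrative sketch of the two-level key hierarchy (NOT a real
# cipher): the DEK encrypts data, the KEK wraps the DEK, and rotating
# the KEK only re-wraps the DEK without touching the encrypted data.
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

dek = secrets.token_bytes(32)   # per-disk Data Encryption Key
kek = secrets.token_bytes(32)   # Key Encryption Key (the master key)
wrapped_dek = xor(dek, kek)     # what gets stored alongside the data

# KEK rotation: unwrap with the old KEK, re-wrap with the new one.
new_kek = secrets.token_bytes(32)
wrapped_dek = xor(xor(wrapped_dek, kek), new_kek)

# The recovered DEK is unchanged, so the data never needs re-encrypting.
assert xor(wrapped_dek, new_kek) == dek
print("DEK survives KEK rotation")
```

Real systems use an authenticated key-wrap algorithm (e.g. AES key wrap) for this step, but the rotation property shown here is the same.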
VMware personnel manage and secure the encryption certificates used to communicate with the VMware Cloud on AWS console and VMware has key management controls in place. VMware Cloud on AWS operations have complete visibility into certificate information such as installed, expiring and revoked certificates through a certificate management dashboard.
Customers can provide their own keys for VPN connectivity and VMware Cloud on AWS fully supports the use of in-guest encryption of Customer Content which further enables customers to use additional encryption technologies of their choice as well as the key management products and processes to meet their security requirements. For customers who choose to implement in-guest encryption of their Customer Content, VMware does not manage the keys.
VMware utilizes an industry leading commercial solution to secure, store, and tightly control access to tokens, passwords, certificates, API keys, and other secrets.
Data in-transit (authentication, administrative access, customer information, etc.) is encrypted with standard encryption mechanisms (i.e. SSH, TLS). Encrypted vMotion is available at VMware Cloud on AWS between hosts inside the Cloud SDDC.
VMware provides customers with the ability to create IPSEC and SSL VPN tunnels from their environments which support the most common encryption methods including AES-256.
Whenever a host machine is removed from a cluster, the data encryption keys used by the self-encrypting drives are destroyed. This cryptographic erasure ensures that there is no customer content on the drives before returning the servers to the pool of available hardware. The use of self-encrypting drives protects customers from an individual with physical access to the data centre, being able to physically remove drives and access the contents of the drives.
As a further layer of security, the VMware vSAN implementation for VMware Cloud on AWS has encryption enabled by default for all clusters, along with de-duplication and compression. These features are defined when a cluster is provisioned and cannot be disabled. Also, vSAN provides customisable storage protection policies to ensure data is tolerant of the failure of one or more physical drives in a cluster.
The vSAN storage array encryption allows customers to rotate encryption keys on demand to meet industry regulations; this can be done via the vSphere UI or API. All vSphere features such as vMotion, Distributed Resource Scheduler (DRS), and High Availability (HA) are supported with vSAN Encryption without impacting I/O performance.
This post talks about the importance of using Storage Policies with VMware Cloud on AWS to ensure the most efficient consumption of the available vSAN capacity. As a VMware Cloud on AWS customer, this is something we initially overlooked, allowing the default policy to remain in place for some time. The end result was burning through the storage capacity faster than expected.
It is important to note here that VMware recommends keeping 30% free/slack space to keep vSAN operational. While compute features such as Elastic DRS can be disabled, clusters will automatically scale out in the event a storage threshold is hit, in order to maintain the integrity of vSAN and the associated Service Level Agreements (SLAs). Customers should monitor their datastore and capacity usage to avoid unexpected charges from additional hosts being added.
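To illustrate the slack-space guidance above, here is a minimal sketch in Python. The 30% figure comes from the post; the function names and the idea of a hard threshold check are illustrative, not part of any VMware tooling.

```python
# Illustrative sketch of the 30% free/slack space recommendation.
SLACK_FRACTION = 0.30  # VMware's recommended free space for vSAN operations

def usable_capacity_tib(raw_tib: float) -> float:
    """Capacity that can be safely consumed while keeping 30% slack free."""
    return raw_tib * (1 - SLACK_FRACTION)

def scale_out_triggered(used_tib: float, raw_tib: float) -> bool:
    """True once usage eats into the slack space -- the point at which
    Elastic DRS would add a host to protect the vSAN SLA."""
    return used_tib > usable_capacity_tib(raw_tib)

# e.g. a 3-host i3.metal cluster with roughly 30TiB of raw workload capacity
print(usable_capacity_tib(30))        # ~21TiB safely usable
print(scale_out_triggered(22, 30))    # usage past the threshold
```

In other words, on a 3-host i3.metal cluster you should plan around roughly 21TiB of consumable capacity, not the full 30TiB of raw storage.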
When deploying the SDDC, the customer has the option of deploying a Stretched Cluster. Although a single cluster provides High Availability (HA) between hosts with a 99.9% availability guarantee, it is restricted to a single Availability Zone (AZ) within a region. A Stretched Cluster is spread across 2 Availability Zones within a region and backed by a 99.99% availability guarantee, with a third AZ acting as the witness. At the time of writing, it is not possible to mix cluster types within an SDDC.
There are currently 3 methods of consuming storage with VMware Cloud on AWS:
Direct Attached NVMe
i3.metal instances provide fixed capacity in the form of NVMe SSDs with high IOPS. This storage type is suitable for most use cases, including workloads with high transaction rates such as databases, high-speed analytics, and virtual desktops. The i3.metal instances offer 36 CPU, 512GiB RAM, and 10TiB of direct-attached NVMe high-IOPS vSAN storage.
Elastic vSAN (Amazon EBS)
r5.metal instances provide dynamic capacity using Amazon Elastic Block Store (EBS), suitable for high or changing capacity needs and lower transaction rates; data warehousing, batch processing, disaster recovery, etc. The r5.metal instances offer 48 CPU, 768GiB RAM, and 15-35TiB of EBS-backed storage providing cloud-native Elastic vSAN, made up of General Purpose SSD (gp2) volumes.
External Storage
Finally, storage can be scaled with external storage from a Managed Service Provider (MSP) like Faction, closely followed by others such as Rackspace and NetApp. Currently, the VMware Cloud public roadmap also lists ‘External datastore and guest OS storage access for both DX and ENI connected 3rd party storage’ as developing, although this may change.
There are additional instance types in development with further storage options.
Management servers are stored on the first cluster provisioned (Cluster0) with datastores as administrative delegation points. The management datastore is managed by VMware, and the workload datastore is for the customer to consume.
Data-at-rest encryption is provided by vSAN using the AWS Key Management Service (KMS). VMware Operations own the KMS relationship and Customer Master Key (CMK). The service is FIPS 140-2 compliant with full auditing. Although this is sufficient for most use cases, there is currently no supported option for customers who must own the KMS relationship themselves.
With VMware Cloud on AWS, the customer sets the desired end state and VMware manages the configuration. This means the technical details are not that in-depth; the customer tells VMware how many hosts they want in the cluster, and the policies to apply. Storage Policies can be applied to one or many Virtual Machines (VMs), Virtual Machine Disks (VMDKs), or VMDKs for container persistent volumes. In other words, they are applied at the object level, rather than to an entire datastore.
Storage Policies can be used to define things like disk stripes, IOPS limits, space or cache reservation, and availability. In this particular use case, we are interested in weighing up the availability options with space efficiency:
No data redundancy: requires 1 host; writing 100GB of data consumes 100GB on the back end.
Tolerate 1 host failure: RAID-1, requires 3 hosts; writing 100GB of data consumes 200GB on the back end. In this scenario, vSAN adds a second copy of the data, plus a witness copy to prevent a split-brain situation. We can lose any 1 of the 3 hosts and the object stays available. Although storage consumption has doubled, reads are load balanced across the copies to accelerate performance; writes still need to be synchronously committed to both.
Tolerate 2 host failures: RAID-1, requires 5 hosts; writing 100GB of data consumes 300GB on the back end. You get the picture; storage consumption can get quite high.
Erasure Coding: with an Erasure Coding (RAID-5/6) policy, instead of storing a complete copy of the data, it gets broken up into multiple data segments plus parity. We can lose a segment without suffering data loss; however, there is additional I/O associated with managing the parity data. Furthermore, in the event of a failure the data has to be rebuilt, meaning a potential compute and I/O overhead during this time. Despite this, the policy proves useful for space efficiency where workloads are not hugely performance intensive.
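The back-end consumption figures above can be summarised in a quick calculator. This is an illustrative sketch, not a VMware sizing tool: the RAID-1 multipliers come from the examples above, and the RAID-5 (3+1) and RAID-6 (4+2) ratios are the standard vSAN erasure-coding values.

```python
# Raw vSAN capacity consumed per storage policy for a given amount of VM data.
# Multipliers: RAID-1 figures from the examples above; RAID-5/6 are the
# standard vSAN erasure-coding ratios (3 data + 1 parity, 4 data + 2 parity).
POLICIES = {
    "no-redundancy": {"hosts": 1, "multiplier": 1.0},   # FTT=0
    "raid1-ftt1":    {"hosts": 3, "multiplier": 2.0},   # mirror + witness
    "raid1-ftt2":    {"hosts": 5, "multiplier": 3.0},   # two extra mirrors
    "raid5-ec":      {"hosts": 4, "multiplier": 4 / 3}, # 3 data + 1 parity
    "raid6-ec":      {"hosts": 6, "multiplier": 1.5},   # 4 data + 2 parity
}

def backend_usage_gb(policy: str, data_gb: float) -> float:
    """Return the raw capacity written to the datastore for data_gb of VM data."""
    return data_gb * POLICIES[policy]["multiplier"]

for name, spec in POLICIES.items():
    print(f"{name}: {spec['hosts']} hosts min, "
          f"100GB consumes {backend_usage_gb(name, 100):.0f}GB")
```

Comparing the multipliers makes the trade-off clear: RAID-5 erasure coding protects against 1 host failure for 1.33x capacity rather than the 2x of RAID-1, at the cost of the parity I/O described above.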
If you are using a Stretched Cluster, further attributes can be applied concerning the Availability Zone (or site) location of the data. You can choose from the following options when configuring Storage Policies:
None (standard cluster)
Dual-site monitoring (stretched cluster)
None, keep data on primary (stretched cluster)
None, keep data on secondary (stretched cluster)
In a Stretched Cluster, there is a feature called read locality, which keeps reads within the same AZ. Remember, though, that writes must be committed synchronously across both AZs.
Data transfer fees are $0.02/GB for cross-AZ traffic. Tools like Live Optics can be used to predict application reads and writes.
Not all of your workloads will need vSphere HA protection across AZs; for example, developer workloads, or workloads where failover is provided in the application stack such as SQL Server Always On.