This post pulls together the notes I made while planning a VMware Cloud (VMC) on AWS (Amazon Web Services) deployment and the migration of virtual machines from traditional on-premises vSphere infrastructure. It is intended as a list of considerations, not a comprehensive guide. For more information on VMware Cloud on AWS, review the resources linked throughout this post.
- At the time of writing, up to 10 SDDCs can be deployed per organisation, with each SDDC supporting up to 10 vSphere clusters and each cluster up to 16 physical nodes.
- The standard i3 bare-metal instance currently offers 2 sockets, 36 cores, 512 GiB RAM, and 10.7 TB of vSAN storage; a 16-node cluster therefore provides 32 sockets, 576 cores, 8192 GiB RAM, and 171.2 TB.
- New R5 bare-metal instances are deployed with 2.5 GHz Intel Xeon Platinum 8000-series (Skylake-SP) processors; 2 sockets, 48 cores, 768 GiB RAM, and Amazon Elastic Block Store (EBS) backed capacity scaling up to 105 TB for 3-node resources and 560 TB for 16-node resources.
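The cluster figures above are straightforward multiples of the per-host specification. A minimal sketch, using the i3 figures quoted above, shows how to scale per-host capacity to raw cluster totals (before vSAN replication or management overheads are accounted for):

```python
# Per-host specs for the i3 bare-metal instance as quoted above.
# Figures are raw capacity; usable vSAN storage will be lower once
# replication (FTT) overheads are applied.
I3_HOST = {"sockets": 2, "cores": 36, "ram_gib": 512, "vsan_tb": 10.7}

def cluster_totals(host: dict, nodes: int) -> dict:
    """Scale per-host capacity to raw cluster totals."""
    return {spec: round(value * nodes, 1) for spec, value in host.items()}

print(cluster_totals(I3_HOST, 16))
# {'sockets': 32, 'cores': 576, 'ram_gib': 8192, 'vsan_tb': 171.2}
```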
- When deciding the number of hosts in the SDDC, consider the pay-as-you-go pricing model and the ability to scale out later on demand, either manually or using Elastic DRS, which can be optimised for performance or cost.
- A really useful tool for VMC planning is the VMware Cloud on AWS Sizer and TCO calculator.
- The What-If analysis in both vRealize Business and vRealize Operations can also help with capacity planning and cost comparisons for migrations to VMware Cloud on AWS. Use vRealize Network Insight to understand network egress costs and application topology in your current environment; see Calculate AWS Egress Fees Proactively for VMware Cloud on AWS for more information.
Highly Available Deployments
- An SDDC can be deployed to a single Availability Zone (AZ) or across multiple AZs, otherwise known as a stretched cluster. In either configuration, if a problem is identified with a host in the cluster, High Availability (HA) evacuation takes place as normal; an additional host is then automatically provisioned and added as a replacement.
- The recommendation for workload availability is to use a stretched cluster, which distributes workloads across two Availability Zones with a third hosting a witness node. In this setup data is written to both Availability Zones (synchronous write replication) in an active-active configuration; in the event of an outage of an entire Availability Zone, vSphere HA brings virtual machines back online in the alternative AZ.
- Stretched clusters provide a Recovery Point Objective (RPO) of zero by using synchronous data replication. Note that there may be additional cross-AZ charges for stretched clusters.
- The decision on whether to use single or multiple Availability Zones needs to be taken at the time of deployment. An existing SDDC cannot be upgraded to multi-AZ or downgraded to a single AZ.
- VMware Cloud on AWS links with your existing AWS account to provide access to native services. During provisioning, a CloudFormation template grants AWS permissions using the Identity and Access Management (IAM) service. This allows your VMC account to create and manage Elastic Network Interfaces (ENIs) as well as auto-populate Virtual Private Cloud (VPC) route tables when NSX subnets are created. It is good practice to enable Multi-Factor Authentication (MFA) for your accounts in both VMC and AWS.
- CloudFormation can also be used to deploy your SDDC if desired; review VMware Cloud on AWS Integrations with CloudFormation and the VMware Cloud on AWS Dev Center for more information.
- An Elastic Network Interface (ENI) dedicated to each physical host connects the VMware Cloud to the corresponding Availability Zone in the native AWS VPC. There is no charge for data crossing the 25 Gbps ENI between the VMware Cloud VPC and the native AWS VPC.
- Data that crosses Availability Zones, however, is charged at $0.01 per GB (at the time of writing), so it is good practice to deploy the SDDC to the same region and AZ as your current or planned native AWS services.
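As a rough illustration of that cross-AZ charge, the sketch below estimates a monthly bill from an average daily transfer volume. The $0.01/GB rate is the figure quoted above, and the 500 GB/day volume is an arbitrary example:

```python
# Cross-AZ data transfer rate quoted at the time of writing; check
# current AWS pricing before relying on this figure.
CROSS_AZ_RATE_PER_GB = 0.01

def monthly_cross_az_cost(gb_per_day: float, rate: float = CROSS_AZ_RATE_PER_GB) -> float:
    """Estimate monthly cross-AZ transfer cost, assuming a 30-day month."""
    return round(gb_per_day * 30 * rate, 2)

print(monthly_cross_az_cost(500))  # 500 GB/day -> 150.0 (USD per month)
```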
- Microsoft SQL Server Workloads and VMware Cloud on AWS: Design, Migration, and Configuration is aimed at migrating SQL Server into VMC, but it also contains some useful architectural and operational guidelines, so it is worth a read.
- Compute policies can be used to control the placement of virtual machines; see VMware Cloud on AWS – Compute Policies – The Start of Something Great! for more information.
- An example architecture of a stretched cluster SDDC is shown below.
- It is important to plan your IP addressing scheme properly; if the IP range used overlaps with anything on-premises or in AWS, routes will not be distributed correctly, and the SDDC will need to be destroyed and redeployed with an updated subnet to resolve the issue.
- You will need to allocate a CIDR block for SDDC management, as well as network segments for your SDDC compute workloads to use. Review Selecting IP Subnets for your SDDC for assistance with choosing subnets for your VMC environment.
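Overlap checks like this are easy to automate before deployment. A minimal sketch using Python's standard ipaddress module; the subnets are illustrative, so substitute your own on-premises and AWS ranges:

```python
import ipaddress

def overlapping(candidate: str, existing: list[str]) -> list[str]:
    """Return any in-use subnets that overlap the candidate CIDR block."""
    cand = ipaddress.ip_network(candidate)
    return [net for net in existing if ipaddress.ip_network(net).overlaps(cand)]

# Example in-use ranges (on-premises and native AWS VPCs).
in_use = ["10.0.0.0/8", "172.16.0.0/12"]

print(overlapping("10.2.0.0/16", in_use))      # ['10.0.0.0/8'] -> clashes, pick again
print(overlapping("192.168.32.0/20", in_use))  # [] -> safe to allocate
```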
- Connectivity to the SDDC can be achieved using either AWS Direct Connect or VPN, see Connectivity Options for VMware Cloud on AWS Software Defined Data Centers.
- Traffic between VMC and your native AWS VPC is handled by the 25 Gbps Elastic Network Interfaces (ENIs) mentioned in the section above. VPC Peering and Transit Gateway options are not supported at the time of writing; to connect to additional VPCs or accounts you would need to set up an IPsec VPN.
- Access to EC2 instances requires correct configuration of the VMC Gateway Firewall (see Connecting VMware Cloud on AWS to EC2 Instances) as well as of the security group(s) that both the ENIs and EC2 instances are attached to; this is explained clearly in Understanding How AWS Security Groups Work with VMware Cloud on AWS.
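A sketch of what such a security group rule looks like in code: the helper below builds the IpPermissions payload that would be passed to boto3's ec2.authorize_security_group_ingress. The CIDR, port, and group ID are illustrative assumptions, not values from this post:

```python
def vmc_ingress_rule(sddc_cidr: str, port: int, protocol: str = "tcp") -> dict:
    """Build an ingress permission allowing an SDDC compute segment
    to reach EC2 instances on a given port (illustrative values)."""
    return {
        "IpProtocol": protocol,
        "FromPort": port,
        "ToPort": port,
        "IpRanges": [{"CidrIp": sddc_cidr, "Description": "VMC compute segment"}],
    }

rule = vmc_ingress_rule("192.168.1.0/24", 443)
print(rule)

# Applied with boto3 (requires AWS credentials; the group ID is hypothetical):
# import boto3
# boto3.client("ec2").authorize_security_group_ingress(
#     GroupId="sg-0123456789abcdef0", IpPermissions=[rule])
```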
- To migrate virtual machines from your on-premises data centre, review Hybrid Linked Mode Prerequisites and vMotion across hybrid cloud: performance and best practices. In addition you will need to know the Required Firewall Rules for vMotion and for Cold Migration.
- For virtual machines to keep the same IP addressing, layer 2 networks can be stretched with HCX; review VMware HCX Overview. HCX is included with VMC licensing but is a separate product in its own right, so it should be planned accordingly. Ideally HCX will overlay a Direct Connect or VPN connection, but it can also work without either.
- VMware Cloud on AWS: Internet Access and Design Deep Dive is a useful resource for considering virtual machines that may require internet access.
- If possible, your migration team should include: infrastructure administrators for compute, storage, network, and data protection; networking and security teams for security and compliance; application owners for applications, development, and lifecycle management; and support and operations for automation, lifecycle, and change management.
- Group services together based on downtime tolerance, as this could determine how the workload is moved: prolonged downtime, minimal downtime, and zero downtime.
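One way to make that grouping concrete is to tag each service with its downtime tolerance and bucket the services into migration waves. The service names below are made-up examples:

```python
from collections import defaultdict

# Hypothetical inventory mapping each service to its downtime tolerance.
services = {
    "intranet": "prolonged",
    "reporting": "minimal",
    "erp": "minimal",
    "payments": "zero",
}

# Bucket services into migration waves by tolerance; the wave then informs
# the method (e.g. cold migration vs live vMotion / HCX for zero downtime).
waves = defaultdict(list)
for name, tolerance in services.items():
    waves[tolerance].append(name)

print(dict(waves))
# {'prolonged': ['intranet'], 'minimal': ['reporting', 'erp'], 'zero': ['payments']}
```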
- Consider migration paths for any physical workloads, whether that be P2V, AWS Bare Metal instances, or co-locating equipment.
- Consider any load balancing and edge security requirements. The AWS Elastic Load Balancer (ELB) can be used, or alternative third-party options can be deployed as virtual appliances. NSX load balancing as a service in VMC is planned for future releases.
- You will likely still need Active Directory, DNS, DHCP, and time synchronisation, so use native cloud services where possible, or migrate these services as VMs to VMC on AWS.
- Remember Disaster Recovery (DR) still needs to be factored in. DR as a Service (DRaaS) is offered through Site Recovery Manager (SRM) between regions in the cloud or on-premise.
- Make sure any existing monitoring tools are compatible with the new environment and think about integrating cloud monitoring and management with new or existing external tools.
- Move backup tooling to the cloud and perform full backups initially to create a new baseline. Consider native cloud backup products that back up straight to S3, or traditional backup methods that connect into vCenter. The reference architecture below has been updated to include Elastic Block Store (EBS) backed Elastic Compute Cloud (EC2) instances running Veeam:
For up-to-date configuration maximums and the latest features and information, visit the VMware Cloud on AWS FAQs page. Up-to-date pricing for AWS services can be found at AWS Pricing. Most of the major compliance certifications have been achieved at VMC on AWS data centres; see the VMware Cloud on AWS Meets Industry-Standard Security and Compliance Standards blog post for more information.
In addition, if you are working towards the VMware Cloud on AWS Management exam then review 5V0-31.19: VMware Cloud on AWS Management Exam 2019 – Study tips.