Load Balancing VMware Cloud on AWS with Amazon ELB

This post demonstrates the connectivity between VMware Cloud (VMC) on AWS and native AWS services. In the example below we will use Amazon Elastic Load Balancing (ELB) to provide highly available, scalable, and secure load balancing backed by virtual machines hosted in the VMware Cloud Software-Defined Data Centre (SDDC). A basic understanding of both platforms is assumed. Further Reading: How to Deploy and Configure VMware Cloud on AWS (Part 1), How to Migrate VMware Virtual Machines to VMware Cloud on AWS (Part 2).

When integrating with Amazon ELB there are two options: the Application Load Balancer (ALB), which operates at the request level (layer 7), or the Network Load Balancer (NLB), which operates at the connection level (layer 4). The Amazon Classic Load Balancer is for Amazon EC2 instances only. For assistance with choosing the correct type of load balancer review Details for Elastic Load Balancing Products and Product Comparisons. Amazon load balancers and their targets can be monitored using Amazon CloudWatch.

Connectivity Overview

  • Update Feb 2020 – full details can be found at AWS Native Services Integration With VMware Cloud on AWS
  • VMware Cloud on AWS links with your existing AWS account to provide access to native services. During provisioning a CloudFormation template will grant AWS permissions using the Identity and Access Management (IAM) service. This allows your VMC account to create and manage Elastic Network Interfaces (ENIs) as well as auto-populate Virtual Private Cloud (VPC) route tables (a short sketch after this list shows one way to inspect those routes).
  • An Elastic Network Interface (ENI) dedicated to each physical host connects the VMware Cloud to the corresponding Availability Zone in the native AWS VPC. There is no charge for data crossing the 25 Gbps ENI between the VMC VPC and the native AWS VPC; however, it is worth remembering that data crossing Availability Zones is charged at $0.01 per GB (at the time of writing).
  • An example architecture below shows a stretched cluster in VMware on AWS with web services running on virtual machines across multiple Availability Zones. The load balancer sits in the customer’s native AWS VPC and connects to the web servers using the ENI connectivity. Amazon’s DNS service, Route 53, routes users accessing a custom domain to the web service.
  • Remember to consider the placement of your target servers when deploying the Amazon load balancer. For more information see Planning Your VMware Cloud on AWS Deployment. See also Elastic Load Balancing Pricing.
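
If you want to verify that VMC has populated the connected VPC route table, a minimal boto3 sketch like the one below can list the routes. The region and VPC ID are placeholders, not values from this environment; use the VPC ID shown under Networking & Security > Connected VPC.

```python
# Hypothetical sketch: list the routes VMC auto-populates in the connected
# VPC route table. Region and VPC ID are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")  # assumed region

response = ec2.describe_route_tables(
    Filters=[{"Name": "vpc-id", "Values": ["vpc-0123456789abcdef0"]}]  # placeholder VPC ID
)

for table in response["RouteTables"]:
    print(table["RouteTableId"])
    for route in table["Routes"]:
        # Routes pointing at a network interface (eni-...) are typically the
        # entries created for the VMC management and compute segments.
        print(
            route.get("DestinationCidrBlock"),
            "->",
            route.get("NetworkInterfaceId", route.get("GatewayId", "local")),
        )
```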

VMC_LoadBalancing

VMC Gateway Firewall

Before configuring the ELB we need to make sure it can access the target servers. Log into the VMware on AWS Console, from the SDDCs tab locate the appropriate SDDC and click View Details. Select the Networking & Security tab, under Security click Gateway Firewall and Compute Gateway.

VMC_ELB_FW

In this example I have added a rule for inbound access to my web servers. The source is AWS Connected VPC Prefixes (this can be tied down to only allow access from the load balancer if required). The destination is a user defined group which contains the private IPv4 addresses for the web servers in VMC, and the allowed service is set to HTTP (TCP 80).

If you are using the Application Load Balancer then you also need to consider the security group attached to the ALB. If the default group is not used, or the security group attached to the Elastic Network Interfaces has been changed, then you may need to make additional security group changes to allow traffic between the ALB and the ENIs. Review the Security Group Configuration section of Connecting VMware Cloud on AWS to EC2 Instances for more information. The Network Load Balancer does not use security groups. The gateway firewall rule outlined above will be needed regardless of the load balancer type.
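
If you do need to adjust security groups for the ALB path, the change might look something like the following boto3 sketch, which allows HTTP from the ALB's security group into the security group attached to the ENIs. Both group IDs are placeholders, and the exact rules should follow the referenced article.

```python
# Hypothetical sketch: allow HTTP from the ALB's security group to the
# security group attached to the VMC ENIs. Both group IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0eni0placeholder0",            # group attached to the VMC ENIs
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            "UserIdGroupPairs": [
                {"GroupId": "sg-0alb0placeholder0", "Description": "HTTP from ALB"}
            ],
        }
    ],
)
```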

ELB Deployment

Log into the VMware on AWS Console, from the SDDCs tab locate the appropriate SDDC and click View Details. Select the Networking & Security tab. Under System click Connected VPC. Make a note of the AWS Account ID and the VPC ID. You will need to deploy the load balancer into this account and VPC.

Log into the AWS Console and navigate to the EC2 service. Locate the Load Balancing header in the left-hand navigation pane and click Load Balancers. Click Create Load Balancer. Select the load balancer type and click Create.

VMC_ELB

Typically for HTTP/HTTPS the Application Load Balancer will be used. In this example, since I want to deploy the load balancer to a single Availability Zone for testing, I am using a Network Load Balancer, which can also be assigned a dedicated Elastic (persistent public) IP.

Enter the load balancer configuration. I am configuring an internet-facing load balancer with listeners on port 80 for HTTP traffic. Scroll down and specify the VPC and Availability Zones to use. Ensure you use the VPC connected to your VMware on AWS VPC. In this example I have selected a subnet in the same Availability Zone as my VMware Cloud SDDC.
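
For those scripting the deployment, a minimal boto3 sketch of an equivalent NLB creation is shown below. The name, subnet ID, and region are assumptions rather than values from this environment.

```python
# Hypothetical sketch: create an internet-facing Network Load Balancer in the
# connected VPC. Name and subnet ID are placeholders.
import boto3

elbv2 = boto3.client("elbv2", region_name="eu-west-2")  # assumed region

nlb = elbv2.create_load_balancer(
    Name="vmc-web-nlb",                       # assumed name
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0123456789abcdef0"],     # subnet in the same AZ as the SDDC
    # To attach an Elastic IP instead, use SubnetMappings with an AllocationId.
)
nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]
print(nlb_arn)
```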

VMC_NLB_1

In the routing section configure the target group which will contain the servers behind the load balancer. The target type needs to be IP.

VMC_NLB_2

In this instance, since I am creating a new target group, I need to specify the IP addresses of the web servers, which are VMs sitting in my VMC SDDC. The Network column needs to be set to Other private IP address.
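
The same target group and listener configuration could be scripted roughly as follows. The target group name, VPC ID, VM IP addresses, and load balancer ARN are all placeholders; note that the target type must be IP for VMC-hosted targets.

```python
# Hypothetical sketch: create an IP-based target group, register the private
# IPs of the web server VMs running in the VMC SDDC, then attach a listener
# on port 80. All names, ARNs, and addresses are placeholders.
import boto3

elbv2 = boto3.client("elbv2")

tg = elbv2.create_target_group(
    Name="vmc-web-servers",                   # assumed name
    Protocol="TCP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",            # the connected VPC
    TargetType="ip",                          # required for targets in VMC
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Register the VM private IPs ("Other private IP address" in the console).
# AvailabilityZone "all" is generally used for IPs outside the VPC CIDR.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[
        {"Id": "192.168.10.11", "Port": 80, "AvailabilityZone": "all"},
        {"Id": "192.168.10.12", "Port": 80, "AvailabilityZone": "all"},
    ],
)

# Forward incoming connections on the NLB to the target group.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...",  # NLB ARN from the previous step
    Protocol="TCP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```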

VMC_NLB_3

Once the load balancer and target group are configured review the settings and deploy. You can review the basic configuration, listeners, and monitoring by selecting the newly deployed load balancer.

VMC_NLB_4

Click the Description tab to obtain the DNS name of the load balancer. You can add a CNAME to reference the load balancer using Amazon Route 53 or another DNS service.
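
As an illustration, a CNAME record could be created programmatically with a boto3 call along these lines. The hosted zone ID, record name, and load balancer DNS name are placeholders (an alias A record is another common option for ELB endpoints).

```python
# Hypothetical sketch: point a custom domain at the load balancer's DNS name
# with a CNAME record in Route 53. All values are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",        # placeholder hosted zone
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "web.example.com",
                    "Type": "CNAME",
                    "TTL": 300,
                    "ResourceRecords": [
                        {"Value": "vmc-web-nlb-0123456789.elb.eu-west-2.amazonaws.com"}
                    ],
                },
            }
        ]
    },
)
```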

VMC_NLB_5 VMC_NLB_6

Finally, navigate to Target Groups. Here you can view the health status of your registered targets, and configure health checks, monitoring, and tags.
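
Target health can also be queried from the SDK; a minimal boto3 sketch is shown below, with a placeholder target group ARN.

```python
# Hypothetical sketch: check the health of the registered VMC targets.
# The target group ARN is a placeholder.
import boto3

elbv2 = boto3.client("elbv2")

health = elbv2.describe_target_health(
    TargetGroupArn="arn:aws:elasticloadbalancing:...:targetgroup/vmc-web-servers/abc123"
)

for desc in health["TargetHealthDescriptions"]:
    print(desc["Target"]["Id"], desc["TargetHealth"]["State"])
```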

Connecting VMware Cloud on AWS to Amazon EC2

This post demonstrates the connectivity between VMware Cloud (VMC) on AWS and native AWS services. In the example below we will use Amazon Elastic Compute Cloud (EC2) to provision a virtual instance backed by Amazon Elastic Block Store (EBS) storage. To complete the use case we will install Veeam and use the EC2 instance to back up virtual machines hosted in the VMware Cloud Software-Defined Data Centre (SDDC). Further Reading: How to Deploy and Configure VMware Cloud on AWS (Part 1), How to Migrate VMware Virtual Machines to VMware Cloud on AWS (Part 2).

Connectivity Overview

  • Update Feb 2020 – full details can be found at AWS Native Services Integration With VMware Cloud on AWS
  • VMware Cloud on AWS links with your existing AWS account to provide access to native services. During provisioning a CloudFormation template will grant AWS permissions using the Identity and Access Management (IAM) service. This allows your VMC account to create and manage Elastic Network Interfaces (ENIs) as well as auto-populate Virtual Private Cloud (VPC) route tables.
  • An Elastic Network Interface (ENI) dedicated to each physical host connects the VMware Cloud to the corresponding Availability Zone in the native AWS VPC. There is no charge for data crossing the 25 Gbps ENI between the VMC VPC and the native AWS VPC; however, it is worth remembering that data crossing Availability Zones is charged at $0.01 per GB (at the time of writing).
  • The example architecture we will be using is shown below.

VMC_Connectivity

Security Group Configuration

AWS Security Groups will be attached to your EC2 instances and ENIs; it is therefore vital that you fully understand the concepts and configuration you are implementing. Please review Understanding AWS Security Groups with VMware Cloud on AWS by Brian Graf.

In the AWS console Security Groups can be accessed from the EC2 service. In this example I have created a security group allowing all protocols (any port) inbound from the source CIDR blocks used in VMC for both my compute and management subnets. In other words, this allows connectivity into the EC2 instance from VMs in my VMC SDDC. You may want to lock this down to specific IP addresses or ports to provide a more secure operating model. Outbound access from the EC2 instance is defined as any IPv4 destination (0.0.0.0/0) on any port.
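
A rough boto3 equivalent of that security group is sketched below. The VPC ID and the VMC management and compute CIDR blocks are placeholders for this environment's values.

```python
# Hypothetical sketch: create the security group described above. The VPC ID
# and VMC CIDR blocks are placeholders.
import boto3

ec2 = boto3.client("ec2")

sg = ec2.create_security_group(
    GroupName="veeam-ec2-sg",                 # assumed name
    Description="Allow inbound from VMC management and compute subnets",
    VpcId="vpc-0123456789abcdef0",
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "-1", "IpRanges": [{"CidrIp": "10.10.0.0/16", "Description": "VMC management"}]},
        {"IpProtocol": "-1", "IpRanges": [{"CidrIp": "192.168.0.0/16", "Description": "VMC compute"}]},
    ],
)
# Outbound traffic to 0.0.0.0/0 on any port is allowed by the default egress rule.
```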

Veeam_SG

I have also changed the default security group associated with the ENIs used by VMC to a custom security group. The security group allows inbound access on the ENI (which is inbound access to VMC, as explained in the article below) on all ports from the source CIDR block of my native AWS VPC. Outbound access, which is from VMC into AWS, is defined as any IPv4 destination (0.0.0.0/0) on any port.

ENI_SG

EC2 Deployment

Log into the VMware on AWS Console, from the SDDCs tab locate the appropriate SDDC and click View Details. Select the Networking & Security tab. Under System click Connected VPC. Make a note of the AWS Account ID and the VPC ID. You will need to deploy an EC2 instance into this account and VPC.

Log into the AWS Console and navigate to the EC2 service. Launch an EC2 instance that meets the System Requirements for Veeam. In this example I have used the t2.medium instance type and the Microsoft Windows Server 2019 Base AMI. When configuring networking, the EC2 instance must be placed in the VPC connected to VMC. I have added an additional EBS volume for the backup repository using volume type General Purpose SSD (gp2). Ensure the security group selected or created allows the relevant access.
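
A hedged boto3 sketch of an equivalent instance launch is shown below. The AMI, key pair, subnet, security group, and volume size are placeholders; pick the Windows Server 2019 Base AMI for your region.

```python
# Hypothetical sketch: launch the Veeam server as a t2.medium instance with
# an additional gp2 EBS volume for the backup repository. All IDs are
# placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0abcdef1234567890",          # Windows Server 2019 Base AMI for your region
    InstanceType="t2.medium",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",
    SubnetId="subnet-0123456789abcdef0",      # subnet in the connected VPC
    SecurityGroupIds=["sg-0123456789abcdef0"],
    BlockDeviceMappings=[
        {
            "DeviceName": "xvdf",             # secondary volume for the backup repository
            "Ebs": {"VolumeSize": 500, "VolumeType": "gp2", "DeleteOnTermination": False},
        }
    ],
)
```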

Gateway Firewall

In addition to the security group settings, inbound access also needs to be allowed on the VMC Gateway Firewall. In this instance, as we are connecting the EC2 instance to the vCenter, we define the rule on the Management Gateway. If we were connecting to a workload in one of the compute subnets the rule would be defined on the Compute Gateway. You may have noticed that although I allowed any port in the AWS Security Groups, the actual ports allowed can also be defined on the Gateway Firewall.

In this example I have added a new user defined group which contains the private IPv4 address for the EC2 instance and added it as a source in the vCenter Inbound Rule. The allowed port is set to HTTPS (TCP 443) – I have also allowed ICMP. I have added the same source group to the ESXi Inbound Rule which allows Provisioning (TCP 902). Both these rules are needed to allow Veeam to backup virtual machines in VMC.

VMC_GW_FW

Veeam Setup

Now that connectivity between the EC2 instance and the VMC vCenter has been configured I can hop onto the EC2 instance and begin the setup of Veeam. I will, of course, need an inbound rule for RDP (TCP 3389) added to the security group of the EC2 instance, specifying the source I am connecting from.

Follow the installation steps outlined in the Veeam Backup & Replication 9.5 Update 4 User Guide for VMware vSphere.

Veeam_1

In the VMC console navigate to the Settings tab of the SDDC and make a note of the password for the cloudadmin@vmc.local account. Open the Veeam Backup & Replication console and add the vCenter using its private IP address and the cloud admin credentials.

Veeam_2

Add the backup repository using the EBS volume and create a backup job as normal. Refer to the Veeam Backup Guide if you need assistance with Veeam.

Veeam_3

To make use of AWS S3 object storage you will need an IAM role granting S3 access, and an S3 VPC endpoint. In the case of VMC, as an alternative design, you can host the Veeam B&R server inside your VMC SDDC to make use of the built-in S3 endpoint. In testing we found backup speeds to be faster, but you will likely still need an EBS-backed EC2 instance for your backup repository. It goes without saying you should make sure backup data is not held solely on the same physical site as the servers you are backing up. See Veeam KB2414: VMware Cloud on AWS Support for further details.
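
Creating the S3 gateway endpoint can be done from the console or scripted; a minimal boto3 sketch is shown below, with placeholder VPC and route table IDs and an assumed region.

```python
# Hypothetical sketch: create a gateway VPC endpoint for S3 in the connected
# VPC so backup traffic to object storage stays on the AWS network.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")  # assumed region

ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",            # placeholder connected VPC
    ServiceName="com.amazonaws.eu-west-2.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],  # placeholder route table
)
```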

Add a new Scale-Out Backup Repository and follow the steps to add account and bucket details. 

Set an appropriate policy for moving backups to object-based storage; once this threshold is met you will start to see Veeam files populating the S3 bucket.

S3_repo

How to Deploy and Configure VMware Cloud on AWS

This post pulls together notes made during a real-life customer planning and deployment of VMware Cloud (VMC) on AWS (Amazon Web Services). Later on we move into the migration of VMware Virtual Machines (VMs) from traditional on-premise vSphere infrastructure to VMware Cloud on AWS. The content is a work in progress and intended as a generic list of considerations and useful links for both VMware and AWS; it is not a comprehensive guide. Cloud, more so than traditional infrastructure, is constantly changing. Features are implemented regularly and transparently, so always validate against official documentation. This post was last updated on August 6th 2019.

Part 1: SDDC Deployment

Part 2: Migration Planning & Lessons Learned

See Also: VMware Cloud on AWS Security One Stop Shop

1. Capacity Planning

You can still use existing tools or methods for basic capacity planning; you should also consult the VMware Cloud on AWS Sizer and TCO Calculator provided by VMware to help compare on-premise pricing with VMware Cloud on AWS pricing. There is a What-If Analysis built into both vRealize Business and vRealize Operations, which is similar to the sizer tool and can also help with cost comparisons. Additional key considerations are:

  • Egress costs are now a thing! Use vRealize Network Insight to understand network egress costs and application topology in your current environment. Calculate AWS Egress Fees Proactively for VMware Cloud on AWS is a really useful resource and needs to be considered in your overall VMware on AWS pricing estimate.
  • You do not need to factor in N+1 when planning capacity. If there is a host failure VMware will automatically add a new host to the cluster, allowing you to utilise more of the available resource.
  • Export a list of Virtual Machines (VMs) from vCenter and review each VM. Contact service owners, application owners, or super users to understand if there is still a requirement for the machine and what it is used for. This ties in to the migration planning piece but crucially allows you to better understand capacity requirements. Most environments have VM sprawl and identifying services that are either obsolete, moved to managed services, or were simply test machines no longer required will clearly reduce capacity requirements.
  • Remember you are now on a ‘metered’ charging model, so don’t set the meter running; in other words, don’t deploy the SDDC until you are ready to start using the platform. Common sense, but internal service reviews or service acceptance and approvals can take longer than expected.
  • You can make savings using reserved instances, by committing to 1 or 3 years. Pay as you go pricing may be sufficient for evaluation or test workloads, but for production workloads it is much more cost effective to use reserved instances.
  • At the time of writing up to two SDDCs can be deployed per organisation (soft limit), with each SDDC supporting up to 20 vSphere clusters and each cluster up to 16 physical nodes.
  • The standard i3 bare metal instance currently offers 2 sockets, 36 cores, 512 GiB RAM, and 10.7 TB vSAN storage; a 16-node cluster therefore provides 32 sockets, 576 cores, 8192 GiB RAM, and 171.2 TB (the quick calculation after this list shows how these totals are derived).
  • New R5 bare metal instances are deployed with 2.5 GHz Intel Platinum 8000 series (Skylake-SP) processors; 2 sockets, 48 cores, 768 GiB RAM and AWS Elastic Block Store (EBS) backed capacity scaling up to 105 TB for 3-node resources and 560 TB for 16-node resources. For up to date configuration maximums see Configuration Maximums for VMware Cloud on AWS.
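
The i3 cluster totals quoted above are simple multiplication; a quick sanity check:

```python
# Quick check of the i3 cluster totals quoted in the list above
# (per-host figures are taken from that bullet; totals are per-host x hosts).
hosts = 16
sockets_per_host, cores_per_host = 2, 36
ram_gib_per_host, vsan_tb_per_host = 512, 10.7

print(hosts * sockets_per_host)     # 32 sockets
print(hosts * cores_per_host)       # 576 cores
print(hosts * ram_gib_per_host)     # 8192 GiB RAM
print(hosts * vsan_tb_per_host)     # 171.2 TB raw vSAN storage
```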

2. Placement and Availability

Ultimately placement of your SDDC is going to be driven by specific use cases, and any regulations for the data type you are hosting. How VMware is Accelerating NHS Cloud Adoption uses the UK National Health Service (NHS) and Information Governance as an example. Additional placement and availability considerations are:

  • An SDDC can be deployed to a single Availability Zone (AZ) or across multiple AZs, otherwise known as a stretched cluster. For either configuration, if a problem is identified with a host in the cluster, High Availability (HA) evacuation takes place as normal; an additional host is then automatically provisioned and added as a replacement.
  • The recommendation for workload availability is to use a stretched cluster, which distributes workloads across two Availability Zones with a third hosting a witness node. In this setup data is written to both Availability Zones in an active-active configuration. In the event of an outage to an entire Availability Zone vSphere HA brings virtual machines back online in the alternative AZ: VMware Cloud on AWS Stretched Cluster Failover Demo.
  • Stretched clusters have an SLA Availability Commitment of 99.99% (99.9% for single AZ), and provide a Recovery Point Objective (RPO) of zero by using synchronous data replication. Note that there are additional cross-AZ charges for stretched clusters. The Recovery Time Objective (RTO) is a vSphere HA failover, usually under 5 minutes.
  • The decision on whether to use single or multiple Availability Zones needs to be taken at the time of deployment. An existing SDDC cannot be upgraded to multi-AZ or downgraded to a single AZ.
  • An Elastic Network Interface (ENI) dedicated to each physical host connects the VMware Cloud to the corresponding Availability Zone in the native AWS Virtual Private Cloud (VPC). There is no charge for data crossing the 25 Gbps ENI between the VMware Cloud VPC and the native AWS VPC.
  • Data that crosses Availability Zones is chargeable; it is therefore good practice to deploy the SDDC to the same region and AZ as your current or planned native AWS services.

3. Networks and Connectivity

See also: AWS Native Services Integration With VMware Cloud on AWS

  • VMware Cloud on AWS links with your existing AWS account to provide access to native services. During provisioning a CloudFormation template will grant AWS permissions using the Identity and Access Management (IAM) service. This allows your VMC account to create and manage Elastic Network Interfaces (ENIs) as well as auto-populate Virtual Private Cloud (VPC) route tables when NSX subnets are created.
  • It is good practice to enable Multi-Factor Authentication (MFA) for your accounts in both VMC and AWS. VMware Cloud can also use Federated Identity Management, for example with Azure AD. This currently needs to be facilitated by your VMware Customer Success team, but once set up it means you can control accounts using Active Directory and enforce MFA or follow your existing user account policies.
  • It is important to plan your IP addressing scheme properly; if the IP range used overlaps with anything on-premise or in AWS then routes will not be properly distributed, and the SDDC will need to be destroyed and redeployed with an updated subnet to resolve the issue (a simple overlap check is sketched after this list).
  • You will need to allocate a CIDR block for SDDC management, as well as network segments for your SDDC compute workloads to use. Review Selecting IP Subnets for your SDDC for assistance with selecting IP subnets for your VMC environment.
  • Connectivity to the SDDC can be achieved using either AWS Direct Connect (DX) or VPN, see Connectivity Options for VMware Cloud on AWS Software Defined Data Centers. From SDDC v1.7 onwards it is possible to use DX with a backup VPN for resilience.
  • Traffic between VMC and your native AWS VPC is handled by the 25 Gbps Elastic Network Interfaces (ENIs) referenced in the section above. To connect to additional VPCs or accounts you can set up an IPsec VPN. The Amazon Transit Gateway feature is available for some regions and configurations; if you are using DX then the minimum requirement is 1 Gbps.
  • Access to native AWS services needs to be setup on the VMC Gateway Firewall, for example: Connecting VMware Cloud on AWS to EC2 Instances, as well as Amazon security groups; this is explained in How AWS Security Groups Work With VMware Cloud on AWS.
  • To migrate virtual machines from your on-premise data centre review Hybrid Linked Mode Prerequisites and vMotion across hybrid cloud: performance and best practices. In addition you will need to know the Required Firewall Rules for vMotion and for Cold Migration.
  • For virtual machines to keep the same IP addressing, layer 2 networks can be stretched with HCX; review VMware HCX Migration Documentation. HCX is included with VMC licensing but is a separate product in its own right, so it should be planned accordingly and is not covered in this post. Review VMware Cloud on AWS Live Migration Demo to see a VMware HCX Migration in action.
  • VMware Cloud on AWS: Internet Access and Design Deep Dive is a useful resource for considering virtual machines that may require internet access.
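
As referenced in the IP addressing bullet above, a simple overlap check using Python's standard ipaddress module can help validate a proposed SDDC CIDR before deployment. All CIDR values below are examples only.

```python
# Minimal sketch: check a proposed SDDC management CIDR against existing
# on-premise and AWS ranges before deploying. All CIDR values are examples.
import ipaddress

proposed_sddc_cidr = ipaddress.ip_network("10.2.0.0/16")      # candidate management CIDR
existing_ranges = [
    ipaddress.ip_network("10.0.0.0/16"),                      # on-premise
    ipaddress.ip_network("10.1.0.0/16"),                      # native AWS VPC
    ipaddress.ip_network("192.168.0.0/22"),                   # existing VPN range
]

clashes = [net for net in existing_ranges if proposed_sddc_cidr.overlaps(net)]
if clashes:
    print("Overlap detected with:", ", ".join(str(net) for net in clashes))
else:
    print("No overlap - safe to use", proposed_sddc_cidr)
```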

4. Operational Readiness

See also: VMware Cloud on AWS Security One Stop Shop

The SDDC is deployed, but before you can start migrating virtual machines to VMware on AWS you need to make sure the platform is fully operational. There are some key aspects below, but in general make sure you cover everything you currently do on-premise:

  • You will likely still have a need for Active Directory, DNS, DHCP, and time synchronisation. Either use native cloud services, or build new Domain Controllers (in VMC, for example).
  • If you have a stretched-cluster and build Domain Controllers, or other management servers, consider building these components in each Availability Zone, then using compute policies to control the virtual machine placement. This is similar to anti-affinity rules on-premise, see VMware Cloud on AWS Compute Policies for more information.
  • Remember Disaster Recovery (DR) still needs to be factored in. DR as a Service (DRaaS) is offered through Site Recovery Manager (SRM) between regions in the cloud or on-premise. A stretched-cluster may be sufficient but again, this is dependent on the organisation or service requirements.
  • Anti-Virus, monitoring, and patching (OS / application) solutions need to be implemented. Depending on your licensing model you should be able to continue using the same products and tool-set, and carry the license over, but check with the appropriate vendor. Also start thinking about integrating cloud monitoring and management where applicable.
  • VMware Cloud Log Intelligence is a SaaS offering for log analytics, it can forward to an existing syslog solution or integrate with AWS CloudTrail.
  • Backups are still a crucial part of VMware Cloud on AWS and it is entirely the customer’s responsibility to ensure backups are in place. Unless you have a specific use case to back up machines from VMware Cloud to on-premise, it probably makes sense to move or implement backup tooling in the cloud, for example using Veeam in native AWS.
  • Perform full backups initially to create a new baseline. Try native cloud backup products that will back up straight to S3, or continue with traditional backup methods that connect into vCenter. The reference architecture below uses Elastic Block Store (EBS) backed Elastic Compute Cloud (EC2) instances running Veeam as a backup solution, then archiving out to Simple Storage Service (S3). Druva are able to back up straight to S3 from VMC. Veeam are also constantly updating functionality so, as mentioned at the start of the post, this setup may not stay up to date for long:
vmc_aws.png
  • Customers must be aware of the shared security model that exists between VMware, delivering the service; Amazon Web Services (the IaaS provider), delivering the underlying infrastructure; and customers, consuming the service.
  • VMware Cloud on AWS meets a number of security standards such as NIST, ISO, and CIS. You can review VMware’s security commitments in the VMware Cloud Services on AWS Security Overview.
  • When using native AWS services you must always follow Secure by Design principles to make sure you are not leaving the environment open or vulnerable to attack.

Part 2: VMware Cloud on AWS Migration Planning & Lessons Learned

Additional resources: VMware Cloud On AWS On-Boarding Handbook | VMware Cloud on AWS Operating Principles | Resources | Documentation | Factbook

VMware Cloud on AWS Videos | YouTube Playlists | VMworld 2018 Recorded Sessions