Category Archives: VMware Cloud on AWS

VMware Cloud on AWS Migration Planning & Lessons Learned

This post pulls together the workload migration and lessons learned notes I have made during the evacuation of an on-premise data centre to VMware Cloud (VMC) on AWS (Amazon Web Services). The content is a work in progress and intended as a generic list of considerations and useful links; it is not a comprehensive guide. Cloud, more so than traditional infrastructure, is constantly changing. Features are implemented regularly and transparently, so always validate against official documentation. This post was last updated on September 16th 2019.

Part 1: SDDC Deployment

Part 2: Migration Planning & Lessons Learned

1. Virtual Machine Migrations

The following points should help with the planning of Virtual Machine (VM) workload migrations. An assumption is made that the Software Defined Data Centre (SDDC) is stood up and operational, with monitoring, backups, Anti-Virus, etc. in place. Review Part 1: SDDC Deployment for more information. I found that deploying the SDDC and getting the environment available was the easy part; internal processes and the complexity of the existing environment are going to determine how quickly you can migrate workloads to the SDDC.

We started by exporting a list of Virtual Machines from each vCenter, and for each VM we identified the service it was running and the service owner or business owner. The biggest surprise here was the number of servers deployed by, or for, people who had left the organisation. These servers were still being hosted, maintained, and patched, but were no longer needed. We were able to decommission more workloads than expected due to years of VM sprawl. Whilst VMware Cloud on AWS isn't directly responsible for this, the project forced us to evaluate each server we hosted. For the remaining workloads we put together a migration flow which identified the following criteria:

  • CPU, RAM, storage requirements: identified a baseline to automatically accept, with anything above the baseline requiring a manual check (a scripted inventory export helps here; see the sketch below).
  • Network dependencies: is there a large amount of data in transit, is IP retention required, is the VLAN stretched using Hybrid Cloud Extension (HCX), and are there load balancer requirements?
  • Data flows: used vRealize Network Insight to identify potential egress costs and additional service dependencies.
  • Additional application or organisation specific considerations: e.g. data classification, tagging / charge-back model, backups, security, monitoring, DNS, authentication, licensing or support.
  • Service Management considerations: is the service platinum/gold/silver/bronze or unclassified, do the platform Service Level Agreements (SLAs) fulfil the existing SLAs in place for each service, and does the proposed migration type (i.e. the amount of downtime) take this into consideration? Involving Service Management right from the start was useful, as they were able to advise on internal processes for Service Acceptance and Business Continuity.
  • Service Owner considerations: if the technical criteria above are met then the next step was to meet with service owners and get their buy-in for the migration. We migrated internal services we owned first, and then used that as a success story to onboard other services. This process involved meeting with various departments, presenting the solution and its benefits over their existing hosting (in our case DR and performance improvements), and migrating dev or test workloads first to build confidence.
  • Migration passport: one of our Senior Engineers came up with this concept as a one-pager for each service that was migrated. It consisted of migration details (change ID, date, status), migration scope (server names, locations, and notes), firewall rules, vRNI outputs, and other information such as associated documentation.

Each environment is different, so these are provided as example considerations only. Use resources such as those outlined below to develop your own migration strategy.
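As a starting point for the CPU, RAM, and storage baseline above, an inventory export can be scripted against each vCenter. The following pyVmomi sketch is a minimal example rather than our exact tooling; the vCenter hostname, credentials, and the 8 vCPU / 32 GB baseline thresholds are assumptions to adjust for your environment.

```python
# Minimal sketch: export a VM inventory baseline from vCenter using pyVmomi.
# The vCenter hostname, credentials, and the 8 vCPU / 32 GB baseline are
# assumptions; adjust them for your own environment.
import csv
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="readonly@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

with open("vm_inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Name", "vCPU", "MemoryGB", "StorageGB", "ManualCheck"])
    for vm in view.view:
        if vm.config is None:          # skip inaccessible or orphaned VMs
            continue
        hw = vm.config.hardware
        committed = vm.summary.storage.committed if vm.summary.storage else 0
        manual_check = hw.numCPU > 8 or hw.memoryMB > 32 * 1024
        writer.writerow([vm.config.name, hw.numCPU, hw.memoryMB // 1024,
                         round(committed / 1024 ** 3, 1), manual_check])

view.Destroy()
Disconnect(si)
```

Anything flagged in the ManualCheck column can then be taken through the network, data flow, and service owner checks described above.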

Workload_Mobility

2. Network Design

  • Research the differences and limitations around the different connection types, especially under 1Gbps – Configuring AWS Direct Connect with VMware Cloud on AWS
    • Make sure you understand the terminology around a Virtual Interface (VIF) and the difference between a Standard VIF, Hosted VIF, and Hosted Connection: What’s the difference between a hosted virtual interface (VIF) and a hosted connection? It is important to consider that VMware Cloud on AWS requires a dedicated Virtual Interface (VIF) – or a pair of VIFs for resilience. If you have a standard 1Gbps or 10Gbps connection direct from Amazon then you can create and allocate VIFs for this purpose. If you are using a hosted connection from an Amazon Partner Network (APN) for sub-1G connectivity then you may need to procure additional VIFs, or a dedicated Direct Connect with the ability to have multiple VIFs on a single circuit. This is a discussion you should have with your APN partner.

  • The Virtual Private Cloud (VPC) provided by the shadow AWS account cannot be used as a transit VPC. In other words, if you want to connect to the private IP addressing of native AWS services you cannot hop via VMware Cloud. In this instance a Transit Gateway can be used.
  • At the time of writing a VPN attachment must be created to connect the SDDC to a Transit Gateway; if Direct Connect is in use then the minimum requirement is 1Gbps (a scripted example follows this list).
  • If there is a requirement to connect multiple existing AWS VPCs, or multiple SDDCs, with on-premise networks then definitely check out VMware Cloud on AWS with Transit Gateway Demo.
  • If a backup VPN is in use then you may be able to reduce failover time using Bidirectional Forwarding Detection (BFD), which is automatically enabled by AWS; in our case it was not supported by our third party provider.
  • Use vRealize Network Insight to get an idea of dependencies and data flows that you can use to plan firewall rules and estimate egress or cross-AZ charges. In general my experience with these charges is that they have been minimal, but this depends entirely on your own environment.
  • If you want to update your default route see How to Set the Default Route in VMware Cloud on AWS: Part 1 & Part 2.
  • VMware Cloud on AWS: NSX Networking and Security eBook
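For the Transit Gateway VPN attachment mentioned in the list above, the AWS side can be scripted. The boto3 sketch below is illustrative only; the Transit Gateway ID, SDDC VPN endpoint IP, and BGP ASN are placeholders, and the SDDC end of the tunnel still needs to be configured from the VMC console.

```python
# Illustrative boto3 sketch: create a VPN attachment from an SDDC to an
# existing Transit Gateway. All IDs, IPs, and ASNs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

# Customer gateway representing the SDDC's public VPN endpoint.
cgw = ec2.create_customer_gateway(
    BgpAsn=65000,                 # SDDC BGP ASN (placeholder)
    PublicIp="203.0.113.10",      # SDDC VPN public IP (placeholder)
    Type="ipsec.1",
)["CustomerGateway"]

# VPN connection attached directly to the Transit Gateway.
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGatewayId"],
    TransitGatewayId="tgw-0abc123456789def0",   # existing TGW (placeholder)
    Type="ipsec.1",
)["VpnConnection"]
print(vpn["VpnConnectionId"])
```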

3. Load Balancing & Security

  • With the acquisition of Avi Networks we can expect Avi Networks services as a paid add-on for VMware Cloud: VMware Cloud on AWS: NSX and Avi Networks Load Balancing and Security.
  • Third party load balancers such as virtual F5 can be deployed in virtual appliance format. If you are planning on using AWS Elastic Load Balancer (ELB) on a private IP address accessible on-premise ensure you have a connectivity method as outlined above.
  • The NSX Distributed Firewall (DFW) feature is included in the price of VMware Cloud; the paid-for message is removed from SDDC v1.8 onwards, as announced at VMworld 2019.
  • Another VMworld 2019 announcement was the inclusion of syslog forwarding with the free version of VMware Cloud Log Intelligence (a SaaS offering for log analytics), although for troubleshooting NSX DFW logs you still need the paid-for version.
  • HCX uses its own IPsec tunnel, and therefore we could not get it working with the private IP address over a backup VPN. We assumed that HCX would also not work with the private IP address via Transit Gateway, due to the SDDC VPN requirement, and would need to be reconfigured to use the public IP address.
  • Another HCX consideration is that when you are stretching a network all traffic goes via the HCX Interconnects. This means you are encapsulating everything in UDP port 4500, essentially bypassing your on-premise firewall rules while the network is stretched. It is important to double check all rules are correct before eventually moving the gateway to VMC.
  • Again, if you are using HCX to migrate workloads, remember to remove stretched networks once complete. This involves shutting down the gateway on-premise, removing the L2 stretch, and changing the network in the SDDC to routed; for us the downtime was around 30 seconds. The deployment of HCX in our environment, although covered by vSphere High Availability (HA), didn't have resilience built in, so we decided to minimise the amount of time the appliances were in use by planning a migration strategy around each subnet.
  • If you use NSX Service Deployments for Anti-Virus, i.e. Guest Introspection for agentless AV, then you will need to deploy an agent on each VM, as this feature is currently unavailable.

4. General

  • The Cloud Services Portal (CSP) can be integrated with enterprise federation, allowing you to control access using your organisational policies, for example enforcing Multi-Factor Authentication (MFA) and removing access as part of a leavers' process. Federation will only work with a tenant; it will not work with a master organisation.
  • It is not possible at the time of writing to easily transfer an SDDC deployed in the root/master organisation into a tenant; the current process is a redeploy and migrate.
  • Druva offer a product that will back up Virtual Machines from VMware Cloud on AWS direct into an S3 bucket they manage. For a greenfield deployment where you are not transferring any existing licenses this could be a good option, as you only pay for the capacity you use. Having a backup environment set up in AWS has many benefits, but also adds a management overhead and the consideration of replicating between Availability Zones.
  • In general internal support was good once teams were educated on the platform and the slightly different operating model we were implementing. In terms of external support we have not encountered any compatibility issues yet. One application vendor published a KB article stating they supported running the application on VMware Cloud on AWS, then backtracked and said they wouldn't support it, as the vSphere version was not yet GA (6.8 at the time of writing).

 

VMware Cloud on AWS Deployment Planning

esxsi.com

This post pulls together the notes I have made during the planning of a VMware Cloud (VMC) on AWS (Amazon Web Services) deployment, and migrations of virtual machines from traditional on-premise vSphere infrastructure. It is intended as a generic list of considerations and useful links, and is not a comprehensive guide. Cloud, more so than traditional infrastructure, is constantly changing. Features are implemented regularly and transparently, so always validate against official documentation. This post was last updated on August 6th 2019.

Part 1: SDDC Deployment

1. Capacity Planning

You can still use existing tools or methods for basic capacity planning, but you should also consult the VMware Cloud on AWS Sizer and TCO Calculator provided by VMware. There is a What-If Analysis built into both vRealize Business and vRealize Operations, which is similar to the sizer tool and can also help with cost comparisons. Additional key considerations are:

  • Egress costs are now a thing! Use vRealize Network Insight to understand…


How VMware is Accelerating NHS Cloud Adoption

This post provides an overview of how the UK National Health Service (NHS) can benefit from VMware Cloud (VMC) on Amazon Web Services (AWS).

In November 2014 the National Information Board and Department of Health and Social Care published the Personalised Health and Care 2020 paper, outlining a framework to support the NHS with making better use of data and technology to improve health and care services. The paper endorsed the use of cloud services, backing up the UK Government cloud first strategy introduced in 2013.

In January 2018 NHS Digital released guidance for NHS and social care data: off-shoring and the use of public cloud services, along with tools for identifying and assessing data risk classification, and a cloud security one page overview. The paper states that ‘NHS and social care organisations can safely put health and care data, including non-personal data and confidential patient information, into the public cloud’. NHS and social care providers may use cloud computing services for NHS data, providing it is hosted in the UK or European Economic Area (EEA), or in the US where covered by Privacy Shield. Steps for understanding the data type, assessing migration risks, and implementing and monitoring data protection controls are also included in the documentation.

The Information Governance (IG) report for Amazon Web Services was updated in 2018; the score approves Amazon Web Services to host and process NHS patient data. VMware Cloud on AWS leverages Amazon’s infrastructure to provide an integrated cloud offering, delivering a highly scalable and secure solution for NHS organisations to migrate workloads and extend their on-premise infrastructure.

The NHS can implement Secure by Design services with VMware Cloud on AWS

  • NHS organisations must be aware of the shared security model that exists between VMware, delivering the service; Amazon Web Services (the IaaS provider), delivering the underlying infrastructure; and customers, consuming the service.
  • The NHS organisation is in complete control of the location of its data. VMware do not back up or archive customer data, and therefore it is up to the NHS organisation to implement this functionality.
  • Micro-segmentation can be used to protect applications by ring-fencing virtual machines in a zero trust architecture. The risks of legacy operating systems can be mitigated by isolating them from the rest of the network.
  • NHS organisations can use Role Based Access Control (RBAC) and Multi-Factor Authentication (MFA) to control access to cloud resources. NHS organisations are in control of inbound and outbound firewall rules and can opt to route all traffic internally on private addressing.
  • VMware Cloud on AWS meets a number of security standards such as NIST, ISO, and CIS. Standard Amazon policies for physical security and secure disposal apply. Amazon use self-encrypting disks and manage the keys using Amazon Key Management Service (KMS).
  • VMware implement a number of stringent security controls, for example MFA-generated time-based credentials for support staff (all logged and monitored by a Security Operations Centre (SOC)), vSAN based encryption, and industry-leading commercial solutions to secure, store, and control access to tokens, secrets, passwords, etc. Full details can be found in the VMware Cloud Services on AWS Security Overview.

Additional benefits of VMware Cloud on AWS to the wider NHS, are as follows:

  • The NHS can save time and money by reducing physical or data centre footprint

    • NHS Digital reached an agreement in May 2019 to offer other NHS organisations discounted access to cloud services to help accelerate their journey to the cloud. In addition, a favourable pricing structure is in place for reserved instances should organisations commit for 1 or 3 years.
    • Commissioning new space in a data centre, or even just new hardware, can be a lengthy process. With VMware Cloud an entire virtual data centre can be deployed in around 90 minutes. Extending capacity on demand takes as little as 15 minutes.
  • The NHS can protect existing investments and move to the cloud

    • Existing VMware workloads can be migrated to VMware Cloud on AWS, and back if needed, in minutes without the need to refactor applications.
    • NHS technical staff continue to use the same tools and management capabilities that they currently use day to day.
    • In most cases where products such as Monitoring, Backups, and Anti-Virus, are licensed per host or per number of Virtual Machines (VMs) organisations can adopt a Bring Your Own Licensing (BYOL) approach.
  • The NHS can improve service performance and availability

    • vSAN replication and stretched networks can enhance Disaster Recovery (DR) capabilities. The Stretched Cluster deployment provides vSphere High Availability (HA) across 2 Amazon Availability Zones within a region, with a 99.99% availability commitment. Additional DR services such as Site Recovery Manager (SRM) add-ons are also available.
    • In many cases replacing ageing servers and storage infrastructure with the latest hardware and flash-based vSAN can yield significant application performance benefits.
    • Physical host capacity can be scaled out dynamically and then back in when it is no longer required. NHS organisations can take advantage of easily spinning up environments to test or develop without having to manually install and configure additional hardware.
  • The NHS has private access to native AWS services

    • VMware Cloud on AWS has a private link into Amazon’s backbone network of services, ranging from storage, database, and network services, to Internet of Things (IoT), Artificial Intelligence (AI) and Machine Learning. Developers can take advantage of various managed container services, or serverless platforms.
    • Since VMware Cloud resides in Amazon’s data centres, hybrid configurations can be securely implemented, for example using Amazon’s Elastic Load Balancer with the back-end servers in VMC, or Amazon’s Relational Database Service with the application servers in VMC.
  • NHS technical staff will have more time to proactively make improvements to systems and processes

    • Hardware maintenance tasks such as firmware updates, failure remediation, and upgrades are all handled by VMware, as are software updates to the hypervisor and infrastructure management layer.
    • NHS technical staff are responsible for securing applications inside the virtual machine, e.g. operating system updates and firewall configuration, ensuring that Amazon Secure by Design best practices are followed.

In summary VMware Cloud on AWS enables NHS organisations to seamlessly extend or migrate data centre workloads to the cloud, whilst enhancing security and availability options. In the example shown below an existing VMware vSphere environment has been extended to VMware Cloud on AWS, giving organisations the flexibility to run their workloads on the most suited platform. This approach is secure and easy for operational teams who may not yet have an established cloud governance process in place.

Additional notes on this design: the Internet Gateway for VMC is not in use; all routes are advertised internally and controlled using on-premise firewalls, in other words all ingress and egress traffic is via the on-premise data centres. Access to native AWS services uses the 25Gbps Elastic Network Interfaces (ENI) and is secured using the gateway firewall and Amazon Security Groups.

NHS_SDDC

Further Reading: VMware Cloud on AWS Deployment Planning | VMware Cloud on AWS Evaluation Guide | VMware Cloud On AWS On-Boarding Handbook | VMware Cloud on AWS Operating Principles

VMware Cloud on AWS FAQs | Resources | Documentation | Factbook

VMware Cloud on AWS Stretched Cluster Failover Demo

This post demonstrates a simulated failure of an Availability Zone (AZ) in a VMware Cloud on AWS stretched cluster. The environment consists of a 6 host stretched cluster in the eu-west-2 (London) region, across Availability Zones eu-west-2a and eu-west-2b.

The simulation was carried out by the VMware Cloud on AWS back-end support team, to help with gathering evidence of AZ resilience. Failover works using vSphere High Availability (HA); in the event of a host failure, HA traditionally brings virtual machines online on available hosts in the same cluster. In this scenario, when the 3 hosts in AZ eu-west-2a are lost, vSphere HA automatically brings virtual machines online on the remaining 3 hosts in AZ eu-west-2b. High Availability across Availability Zones is facilitated using stretched networks (NSX-T) and storage replication (vSAN).

AWS Terminology: Each Region is a separate geographic area. Each Region has multiple, isolated locations known as Availability Zones. Each Region is completely independent. Each Availability Zone is isolated, but the Availability Zones in a Region are connected through low-latency links. An Availability Zone can be a single data centre or data centre campus.

VMC_Environment

You may also want to review VMware Cloud on AWS Deployment Planning, and VMware Cloud on AWS Live Migration Demo. For more information on Stretched Clusters for VMware Cloud on AWS see Overview and Documentation, as well as the following external links:

VMware FAQ | AWS FAQ | Roadmap | Product Documentation | Technical Overview | VMware Product Page | AWS Product Page | Try first @ VMware Cloud on AWS – Getting Started Hands-on Lab

Availability Zone (AZ) Outage

Before beginning it is worth reiterating that the following screenshots do not represent a process; the customer / consumer of the service does not need to intervene unless a specific DR strategy has been put in place. In the event of a real world outage everything highlighted below happens automatically, and is managed and monitored by VMware. You will of course want to be aware of what is happening on the platform hosting your virtual machines, and that is why this post will give you a feel for what to expect. It may seem a little underwhelming, as it does just look like a normal vSphere HA failover.

When we start out in this particular environment the vCenter Server and NSX Manager appliances are located in AZ eu-west-2a.

vcenter-2a

nsx-2a

The AZ failure simulation was initiated by the VMware back-end team. At this point all virtual machines in Availability Zone eu-west-2a went offline, including the example virtual machines in the screenshots above. As expected, within 5 minutes vSphere HA automatically brought the machines online in Availability Zone eu-west-2b. All virtual machines were accessible and working without any further action.

The stretched cluster now shows the hosts in AZ eu-west-2a as unresponsive. The hosts in AZ eu-west-2b are still online and able to run virtual machines.

Host-List

The warning on the hosts located in AZ eu-west-2b is a vSAN warning raised because there are cluster nodes down; this is expected behaviour in the event of host outages.

eu-west-2b

The vCenter Server and NSX Manager appliances are now located in AZ eu-west-2b.

vcenter-2b

nsx-2b

Availability Zone (AZ) Return to Normal

Once the Availability Zone outage has been resolved, and the ESXi hosts are booted, they return as connected in the cluster. As normal with a vSphere cluster, Distributed Resource Scheduler (DRS) will then proceed to balance resources accordingly.

Host-List-Normal

The vSAN object resync takes place and the health checks all change to green. Again, this is something that happens automatically, and is managed and monitored by VMware.

vSAN-1

vSAN-2

Using a third party monitoring tool we can see the brief outage during virtual machine failover, and a server down / return to normal email alert generated for the support team.

Monitoring

This ties in with the vSphere HA events recorded for the ESXi hosts and virtual machines, which we can of course view as normal in vCenter.

VM-Logs

 

VMware Cloud on AWS Live Migration Demo

This post will demonstrate a live migration of a virtual machine from on-premise VMware infrastructure to VMware Cloud on AWS. The steps below demonstrate how quick and easy it is to move virtual machines between VMware Cloud on AWS and on-premise VMware environments using HCX, without the need to re-IP or re-architect services.

You may also want to review VMware Cloud on AWS Deployment Planning, and VMware Cloud on AWS Stretched Cluster Failover Demo. For more information on Stretched Clusters for VMware Cloud on AWS see Overview and Documentation, as well as the following external links:

VMware FAQ | AWS FAQ | Roadmap | Product Documentation | Technical Overview | VMware Product Page | AWS Product Page | Try first @ VMware Cloud on AWS – Getting Started Hands-on Lab

The VMware Cloud environment in this demo is setup as follows:

  • SDDC deployed consisting of a 6 host stretched cluster in eu-west-2
  • On-premise connectivity provided by Direct Connect
  • On-premise vCenter and SDDC vCenter in hybrid linked mode
  • HCX Cloud add-on enabled, and appliances deployed on-premise
  • On-premise networks are VLAN backed port groups in a distributed switch
  • VLAN_98 has been stretched for the purposes of this demo

VMC_Environment

The virtual machine I am going to migrate is the web server from a 3-tier application: VMC-DEMO-WEB-01, with a private IPv4 address of 192.168.98.15.

In the on-premise vCenter Server I have selected the HCX plugin from the Menu drop-down. The dashboard shows my site pairing and cloud overview. Under Network Extension I can see that VLAN_98 has been stretched to VMware Cloud on AWS.

HCX_Networks

From the Migration screen I can see previous migration history, and I select Migrate Virtual Machines.

HCX_Migration_1

The migration interface loads and I search for the virtual machine.

HCX_Migration_2

Having selected the virtual machine to migrate I can now go ahead and select the folder, resource pool, and datastore to use. In this example the machine is already thin provisioned and I am using the vMotion migration type. The network has automatically been populated with the stretched network VLAN_98.

HCX_Migration_3

HCX will perform some validation checks and then I click Finish to start the migration.

HCX_Migration_4

The virtual machine migration is now underway.

HCX_Migration_5

After 4 minutes, the migration is complete.

HCX_Migration_6

The virtual machine did not drop any pings during the migration; the web site is still accessible and able to pull data from the database.

VMC_Demo
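If you want to capture this behaviour yourself, a quick-and-dirty ping logger such as the sketch below will record any drops during the migration window. It assumes a Linux client (where ping supports -c and -W) and uses the demo web server's IP.

```python
# Log any dropped pings to the migrating VM, with a timestamp.
import subprocess
import time
from datetime import datetime

TARGET = "192.168.98.15"   # demo web server

while True:
    result = subprocess.run(["ping", "-c", "1", "-W", "1", TARGET],
                            stdout=subprocess.DEVNULL)
    if result.returncode != 0:
        print(f"{datetime.now().isoformat()} lost ping to {TARGET}")
    time.sleep(1)
```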

A HTTP monitor setup in Solarwinds shows that there was no loss of service during the migration.

HCX_Migration_8

The virtual machine is now running in VMware Cloud on AWS and is visible in the SDDC vCenter.

HCX_Migration_7

Should the machine need moving back on-premise, the same process can be followed with the Reverse Migration tick-box selected.

HCX_Reverse

Once virtual machines are running in VMware Cloud on AWS they have access to native AWS services using the 25 Gbps Elastic Network Interface (ENI): Connecting VMware Cloud on AWS to Amazon EC2 | Load Balancing VMware Cloud on AWS with Amazon ELB.

Configuring AWS Direct Connect with VMware Cloud on AWS

This post talks about the setup of AWS Direct Connect with VMware Cloud (VMC) on AWS. Direct Connect provides a high-speed, low-latency connection between Amazon services and your on-premises environment. Direct Connect is useful for those who want dedicated private connectivity, with a consistent network experience in comparison with internet-based VPN connections.

“Direct Connect traffic travels over one or more virtual interfaces that you create in your customer AWS account. For SDDCs in which networking is supplied by NSX-T, all Direct Connect traffic, including vMotion, management traffic, and compute gateway traffic, uses a private virtual interface. This establishes a private connection between your on-premises data center and a single Amazon VPC.

You can create multiple interfaces to allow for redundancy and greater availability.”

Using AWS Direct Connect with VMware Cloud on AWS

Make sure you understand the terminology around a Virtual Interface (VIF) and the difference between a Standard VIF, Hosted VIF, and Hosted Connection: What’s the difference between a hosted virtual interface (VIF) and a hosted connection? It is important to consider that VMware Cloud on AWS requires a dedicated Virtual Interface (VIF) – or a pair of VIFs for resilience. If you have a standard 1Gbps or 10Gbps connection direct from Amazon then you can create and allocate VIFs for this purpose. If you are using a hosted connection from an Amazon Partner Network (APN) for sub-1G connectivity then you may need to procure additional VIFs, or a dedicated Direct Connect with the ability to have multiple VIFs on a single circuit. This is a discussion you should have with your APN partner.

First, review the pre-requisites and steps to request an AWS Direct Connect connection at Getting Started with AWS Direct Connect. The steps below walk through configuring Direct Connect for use with VMware Cloud on AWS once the initial connection with Amazon or an Amazon partner has been set up. Also review Direct Connect Pricing.

Direct Connect VMC Setup

Log into the VMware on AWS Console, from the SDDCs tab locate the appropriate SDDC and click View Details. Select the Networking & Security tab. Under System click Direct Connect. Make a note of the AWS Account ID; this is the shadow AWS account set up for VMC, and you will need this account ID to associate with the Direct Connect.

VMC_DX_1

Log into the AWS console and navigate to the Direct Connect service. If you have not already accepted the connection from your third party provider then review the Amazon documentation referenced above.

AWS_DX_1

Select Virtual Interfaces and click Create Virtual Interface. In this instance we are creating a private VIF. Select the physical connection to use and give the virtual interface a name. Change the virtual interface owner to Another AWS Account and enter the VMC shadow AWS account ID. Fill in the VLAN and BGP ASN information provided by your connection provider. Repeat the process if you are assigning more than one VIF.

AWS_DX_2
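The same VIF allocation can be scripted with boto3. The sketch below is a hedged example of the console steps above; the connection ID, shadow account ID, VLAN, and ASN are placeholders to be replaced with the values from your provider and the VMC console.

```python
# Allocate a private VIF on an existing Direct Connect connection and hand
# ownership to the VMC shadow AWS account. All values are placeholders.
import boto3

dx = boto3.client("directconnect", region_name="eu-west-2")

vif = dx.allocate_private_virtual_interface(
    connectionId="dxcon-xxxxxxxx",    # physical Direct Connect connection
    ownerAccount="123456789012",      # VMC shadow AWS account ID
    newPrivateVirtualInterfaceAllocation={
        "virtualInterfaceName": "vmc-private-vif",
        "vlan": 101,                  # VLAN supplied by your provider
        "asn": 65001,                 # your on-premises BGP ASN
    },
)
print(vif["virtualInterfaceId"], vif["virtualInterfaceState"])
```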

Once the VIF or VIFs are created you will see a message that they need to be accepted by the account we have set as owner.

AWS_DX_3

Go back to the VMC portal and the Direct Connect page, click Refresh if necessary. Any interfaces associated with the shadow AWS account will now be listed as available.

VMC_DX_2

Attach the virtual interfaces and confirm acknowledgement that you will be responsible for any data transfer charges that are incurred.

VMC_DX_3

At this point it will take up to 10 minutes for the state of each interface to change from Attaching to Attached, and the BGP status to change from Down to Up. You should now see Advertised BGP Routes listing the network segments you have configured, and Learned BGP Routes listing the subnets peering from your on-premises network.
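Rather than refreshing the console, the interface state and BGP status can also be polled with boto3. This is an optional sketch; the connection ID is a placeholder, and depending on how the VIF was provisioned you may need to run it from the account that owns the connection or the VIF.

```python
# Poll the virtual interfaces on a connection until they are available.
import time
import boto3

dx = boto3.client("directconnect", region_name="eu-west-2")

while True:
    vifs = dx.describe_virtual_interfaces(
        connectionId="dxcon-xxxxxxxx")["virtualInterfaces"]
    for vif in vifs:
        peers = [peer.get("bgpStatus") for peer in vif.get("bgpPeers", [])]
        print(vif["virtualInterfaceId"], vif["virtualInterfaceState"], peers)
    if vifs and all(v["virtualInterfaceState"] == "available" for v in vifs):
        break
    time.sleep(60)
```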

Click Overview. The Direct Connect shows green, and the corresponding VIFs in the AWS Direct Connect page show green and available.

Direct_Connect_Up_VMC

For Direct Connect deep dives review the following blog posts by Nico Vibert: AWS Direct Connect – Deep Dive and Integration with VMware Cloud on AWS, and Direct Connect with VMware Cloud on AWS with VPN as a back-up.

Load Balancing VMware Cloud on AWS with Amazon ELB

This post demonstrates the connectivity between VMware Cloud (VMC) on AWS and native AWS services. In the example below we will be using Amazon Elastic Load Balancing (ELB) to provide highly available, scalable, and secure load balancing, backed by virtual machines hosted in the VMware Cloud Software-Defined Data Centre (SDDC). There is an assumption that you have a basic understanding of both platforms.

When integrating with Amazon ELB there are 2 options: the Application Load Balancer (ALB), which operates at the request layer (layer 7), or the Network Load Balancer (NLB), which operates at the connection layer (layer 4). The Amazon Classic Load Balancer is for Amazon EC2 instances only. For assistance with choosing the correct type of load balancer review Details for Elastic Load Balancing Products and Product Comparisons. Amazon load balancers and their targets can be monitored using Amazon CloudWatch.

Connectivity Overview

  • VMware Cloud on AWS links with your existing AWS account to provide access to native services. During provisioning a CloudFormation template will grant AWS permissions using the Identity and Access Management (IAM) service. This allows your VMC account to create and manage Elastic Network Interfaces (ENI), as well as auto-populate Virtual Private Cloud (VPC) route tables.
  • An Elastic Network Interface (ENI) dedicated to each physical host connects the VMware Cloud to the corresponding Availability Zone in the native AWS VPC. There is no charge for data crossing the 25 Gbps ENI between the VMC VPC and the native AWS VPC, however it is worth remembering that data crossing Availability Zones is charged at $0.01 per GB at the time of writing (a quick cost sketch follows the architecture diagram below).
  • An example architecture below shows a stretched cluster in VMware Cloud on AWS with web services running on virtual machines across multiple Availability Zones. The load balancer sits in the customer’s native AWS VPC and connects to the web servers using the ENI connectivity. Amazon’s DNS service, Route 53, routes users accessing a custom domain to the web service.
  • Remember to consider the placement of your target servers when deploying the Amazon load balancer. For more information see VMware Cloud on AWS Deployment Planning. See also Elastic Load Balancing Pricing.

VMC_LoadBalancing
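To put the cross-AZ rate quoted above in context, a trivial back-of-envelope calculation (the 2 TB monthly figure is purely an assumption):

```python
# Back-of-envelope cross-AZ transfer cost at the $0.01/GB rate quoted above.
monthly_cross_az_gb = 2048                   # assumption: ~2 TB crosses AZs per month
print(f"${monthly_cross_az_gb * 0.01:.2f}")  # $20.48 per month
```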

VMC Gateway Firewall

Before configuring the ELB we need to make sure it can access the target servers. Log into the VMware on AWS Console, from the SDDCs tab locate the appropriate SDDC and click View Details. Select the Networking & Security tab, under Security click Gateway Firewall and Compute Gateway.

VMC_ELB_FW

In this example I have added a rule for inbound access to my web servers. The source is AWS Connected VPC Prefixes (this can be tied down to only allow access from the load balancer if required). The destination is a user defined group which contains the private IPv4 addresses for the web servers in VMC, and the allowed service is set to HTTP (TCP 80).

If you are using the Application Load Balancer then you also need to consider the security group attached to the ALB. If the default group is not used, or the security group attached to the Elastic Network Interfaces has been changed, then you may need to make additional security group changes to allow traffic between the ALB and the ENIs. Review the Security Group Configuration section of Connecting VMware Cloud on AWS to EC2 Instances for more information. The Network Load Balancer does not use security groups. The gateway firewall rule outlined above will be needed regardless of the load balancer type.
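If the security group change described above needs scripting, a hypothetical boto3 example might look like the following; both group IDs are placeholders.

```python
# Allow HTTP from the ALB's security group to the security group attached
# to the VMC Elastic Network Interfaces. Both group IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0eni0000000000000",   # security group on the ENIs
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        # Reference the ALB's security group rather than a CIDR range.
        "UserIdGroupPairs": [{"GroupId": "sg-0alb0000000000000"}],
    }],
)
```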

ELB Deployment

Log into the VMware on AWS Console, from the SDDCs tab locate the appropriate SDDC and click View Details. Select the Networking & Security tab. Under System click Connected VPC. Make a note of the AWS Account ID and the VPC ID. You will need to deploy the load balancer into this account and VPC.

Log into the AWS Console and navigate to the EC2 service. Locate the Load Balancing header in the left hand navigation pane and click Load Balancers. Click Create Load Balancer. Select the load balancer type and click Create.

VMC_ELB

Typically for HTTP/HTTPS the Application Load Balancer will be used. In this example, since I want to deploy the load balancer to a single Availability Zone for testing, I am using a Network Load Balancer, which can also have a dedicated Elastic (persistent public) IP.

Enter the load balancer configuration. I am configuring an internet-facing load balancer with listeners on port 80 for HTTP traffic. Scroll down and specify the VPC and Availability Zones to use; ensure you use the VPC connected to your VMware Cloud on AWS SDDC. In this example I have selected a subnet in the same Availability Zone as my VMware Cloud SDDC.

VMC_NLB_1

In the routing section configure the target group which will contain the servers behind the load balancer. The target type needs to be IP.

VMC_NLB_2

In this instance, since I am creating a new target group, I need to specify the IP addresses of the web servers, which are VMs sitting in my VMC SDDC. The Network column needs to be set to Other private IP address.

VMC_NLB_3

Once the load balancer and target group are configured review the settings and deploy. You can review the basic configuration, listeners, and monitoring by selecting the newly deployed load balancer.

VMC_NLB_4
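The console walkthrough above can also be reproduced with boto3. The sketch below is a minimal equivalent rather than a copy of the demo; the subnet and VPC IDs are placeholders, and note that for IP targets outside the load balancer's VPC the AvailabilityZone field is set to all (check the current ELB documentation for supported target networks).

```python
# Create a Network Load Balancer with an IP target group pointing at a VM
# in the VMC SDDC. Subnet and VPC IDs are placeholders.
import boto3

elbv2 = boto3.client("elbv2", region_name="eu-west-2")

lb = elbv2.create_load_balancer(
    Name="vmc-web-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0abc1234"],      # subnet in the connected VPC
)["LoadBalancers"][0]

tg = elbv2.create_target_group(
    Name="vmc-web-servers",
    Protocol="TCP",
    Port=80,
    VpcId="vpc-0def5678",             # the connected VPC ID
    TargetType="ip",                  # register targets by IP address
)["TargetGroups"][0]

# Register the VMC virtual machine by private IP address.
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "192.168.98.15", "AvailabilityZone": "all"}],
)

# Listener forwarding TCP 80 to the target group.
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancerArn"],
    Protocol="TCP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
print(lb["DNSName"])                  # CNAME this in Route 53 or your DNS
```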

Click the Description tab to obtain the DNS name of the load balancer. You can add a CNAME to reference the load balancer using Amazon Route 53 or another DNS service.

VMC_NLB_5

VMC_NLB_6

Finally, navigate to Target Groups. Here you can view the health status of your registered targets, and configure health checks, monitoring, and tags.