Tag Archives: VMware

Kubernetes on vSphere with Project Pacific

There will be more apps deployed in the next 5 years than in the last 40 years (source: Introducing Project Pacific: Transforming vSphere into the App Platform of the Future). In a shift towards an application focus, and to address application support complexities between development and operations teams, VMware have announced Project Pacific. One of the drivers behind Project Pacific is to run Kubernetes components natively in vSphere. This post gives vSphere administrators an introduction to the technology and how it is expected to work.

Kubernetes Introduction

Kubernetes is an open-source orchestration and management tool that provides a simple Application Programming Interface (API). Kubernetes enables containers to run and operate in a production-ready environment at enterprise scale by managing and automating resource utilisation, failure handling, availability, configuration, scale, and desired state. Micro-services can be rapidly published, maintained, and updated through self-service automation. Kubernetes manages containers; containers package applications and their dependencies into a portable image that can run almost anywhere, and applications are often made up of multiple micro-services. Kubernetes makes it easier to run applications across multiple cloud platforms, accelerates application development and deployment, and increases agility, flexibility, and the ability to adapt to change.

Kubernetes uses a cluster of nodes to distribute container instances. The master node forms the control plane, containing the API server and scheduling capabilities; cluster data is stored in a key-value store called etcd. Worker nodes act as compute resources for running workloads (known as pods). A pod consists of one or more running container instances. Kubelet is an agent that runs on each cluster node, ensuring containers are running in a pod. Kubernetes uses controllers to constantly monitor the state of containers and pods; in the event of an issue Kubernetes attempts to redeploy the pod. If the underlying node in a Kubernetes cluster has issues, Kubernetes redeploys pods to another node. Availability is addressed by specifying multiple pod instances in a ReplicaSet. Kubernetes ReplicaSets run all the pods active-active; in the event of a replica failing or a node issue, Kubernetes self-heals by re-deploying the pod elsewhere. Kubernetes nodes can run multiple pods, and each pod gets a routable IP address to communicate with other pods.
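As a quick, hedged illustration of this self-healing behaviour using kubectl (the Kubernetes command-line tool covered later in this post), the commands below create a deployment, scale it to three replicas, and list the pods along with the nodes they landed on; deleting a pod (the pod name is a placeholder) prompts the ReplicaSet to recreate it. The deployment name and container image are arbitrary examples:

kubectl create deployment demo --image=nginx
kubectl scale deployment demo --replicas=3
kubectl get pods -o wide
kubectl delete pod <pod-name>
kubectl get pods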

Kubernetes namespaces are commonly used to provide multi-tenancy across applications or users, and to manage resource quotas. Namespaces segment resources for large teams working on a single Kubernetes cluster. Resources can have the same name as long as they belong to different namespaces; think of namespaces as sub-domains and the Kubernetes cluster as the root domain that the namespace gets attached to. Pods are created on a network internal to the Kubernetes nodes. By default pods cannot talk to each other across the cluster of nodes unless a Service is created. A Service uses either the cluster network, the node network, or a load balancer to map an IP address and port of the cluster or node network to the pod's IP address and port, allowing pods distributed across nodes to talk to each other if needed.

Kubernetes can be accessed through a GUI known as the Kubernetes dashboard, or through a command-line tool called kubectl. Both query the Kubernetes API server to get or manage the state of various resources like pods, deployments, services, ReplicaSets, and so on. Labels assigned to pods can be used to look up pods belonging to the same application or tier, which helps with inventory management and with defining Services. A Service in Kubernetes allows a group of pods to be exposed by a common IP address, helping you define network routing and load balancing policies without having to understand the IP addressing of individual pods. Persistent storage can be defined using Persistent Volume Claims in a YAML file; the storage provider then mounts a volume on the pod.
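Tying these concepts together, a minimal sketch using kubectl might look like the following; the namespace, deployment name, and image are arbitrary. kubectl create deployment labels the pods app=web, and kubectl expose creates a Service selecting that label:

kubectl create namespace team-a
kubectl create deployment web --image=nginx --namespace=team-a
kubectl expose deployment web --port=80 --namespace=team-a
kubectl get pods --namespace=team-a -l app=web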

Native vSphere Integration

Project_Pacific

VMware have re-architected vSphere to include a Kubernetes control plane for managing workloads on ESXi hosts. The control plane is made up of a supervisor cluster using ESXi hosts as the worker nodes, allowing workloads or pods to be deployed and run natively in the hypervisor. This functionality is provided by a new container runtime built into ESXi called CRX. CRX optimises the Linux kernel and hypervisor, and strips out some of the traditional heavy configuration of a Virtual Machine, enabling the binary image and executable code to be quickly loaded and booted. In combination with ESXi's powerful scheduler, the container runtime is able to produce some of the performance results VMware have been publishing, including improvements even over bare metal. In addition, the role of the Kubelet agent is handled by a new 'Spherelet' within each ESXi host. Kubernetes uses the Container Storage Interface (CSI) to integrate with vSAN, and the Container Network Interface (CNI) with NSX-T to handle network routing, firewalling, load balancing, and so on. Kubernetes namespaces are built into vSphere and backed using vSphere Resource Pools and Storage Policies.

Developers use Kubernetes APIs to access the Software Defined Data Centre (SDDC) and ultimately consume Kubernetes clusters as a service using the same application deployment tools they use currently. This service is delivered by Infrastructure Operations teams using existing vSphere tools, with the flexibility of running Kubernetes workloads and traditional Virtual Machine workloads side by side.

By applying application-focused management, Project Pacific allows application-level control over policies, quotas, and role-based access for Developers. Service features provided by vSphere such as High Availability (HA), Distributed Resource Scheduler (DRS), and vMotion can be applied at the application level across Virtual Machines and containers. Unified visibility in vCenter for Kubernetes clusters, containers, and existing Virtual Machines is provided for a consistent view between Developers and Infrastructure Operations alike.


At the time of writing Project Pacific is in tech preview; this post will be updated when more information is released.


vCenter Server External PSC Converge Tool

This post gives an overview of the vCenter Server converge process using the HTML5 vSphere client. The converge functionality was added to the GUI with vSphere 6.7 U2, and enables consolidation of external Platform Services Controllers (PSCs) into the embedded deployment model. This was previously achieved, from vSphere 6.5 onwards, using a CLI tool.

Following an upgrade of 4 existing vCenter Servers with external PSC nodes, I log into the vSphere client. From the drop-down menu I click Administration, then in the left-hand task pane under Deployment I select System Configuration. The starting topology is as follows:

PSC_4

You can view a VMware-produced tutorial below, or the documentation here.

As the vCenter Server appliances do not have internet access, I need to mount the ISO I used for the vCenter upgrade; see here for more information. This step is not required if internet connectivity exists.

For each vCenter Server with external PSC I select Converge to Embedded.

PSC_1

Next I confirm the Single Sign-On (SSO) details and click Converge.

PSC_3

If I am logged into the vCenter Server being converged I will be kicked out while services are restarted.

PSC_5

Alternatively if I am logged into another vCenter Server in linked mode I can monitor progress.

PSC_6

Once all 4 vCenter Servers have been converged, I check that each vCenter Server is using the embedded PSC. SSH to the vCenter appliance and in the shell run:

/usr/lib/vmware-vmafd/bin/vmafd-cli get-ls-location --server-name localhost
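For example, on a converged node the output resembles the following (hostname illustrative):

https://vcenter01.lab.local/lookupservice/sdk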

The command should return the vCenter Server for the lookup service, and not the external PSC node. Once you are happy there are no outstanding connections to the external PSC nodes, remove them by selecting them individually and clicking Decommission PSC.

PSC_7 PSC_8

With the converge process now complete and the PSC nodes decommissioned, the topology is as desired with all vCenter Servers running embedded PSC.

PSC_9

At this point I needed to re-register any external appliances (such as NSX Manager) or third party services that are pointing at the lookup service URL, or referencing the old external PSC node. I also cleaned up DNS as part of the decommission process.

Connecting VMware Cloud on AWS to Amazon EC2

This post demonstrates the connectivity between VMware Cloud (VMC) on AWS and native AWS services. In the example below we will be using Amazon Elastic Compute Cloud (EC2) to provision a virtual instance backed by Amazon Elastic Block Store (EBS) storage. To complete the use case we will install Veeam and use the EC2 instance to back up virtual machines hosted in the VMware Cloud Software-Defined Data Centre (SDDC).

Connectivity Overview

  • VMware Cloud on AWS links with your existing AWS account to provide access to native services. During provisioning a CloudFormation template will grant AWS permissions using the Identity and Access Management (IAM) service. This allows your VMC account to create and manage Elastic Network Interfaces (ENIs) as well as auto-populate Virtual Private Cloud (VPC) route tables.
  • An Elastic Network Interface (ENI) dedicated to each physical host connects the VMware Cloud to the corresponding Availability Zone in the native AWS VPC. There is no charge for data crossing the 25 Gbps ENI between the VMC VPC and the native AWS VPC; however, it is worth remembering that data crossing Availability Zones is charged at $0.01 per GB (at the time of writing).
  • The example architecture we will be using is shown below. For more information see VMware Cloud on AWS Deployment Planning.

VMC_Connectivity

Security Group Configuration

AWS Security Groups will be attached to your EC2 instances and ENIs; it is therefore vital that you fully understand the concepts and configuration you are implementing. Please review Understanding AWS Security Groups with VMware Cloud on AWS by Brian Graf.

In the AWS console Security Groups can be accessed from the EC2 service. In this example I have created a security group allowing all protocols (any port) inbound from the source CIDR blocks used in VMC for both my compute and management subnets. In other words, this allows connectivity into the EC2 instance from VMs in my VMC SDDC. You may want to lock this down to specific IP addresses or ports to provide a more secure operating model. Outbound access from the EC2 instance is defined as any IPv4 destination (0.0.0.0/0) on any port.
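For reference, a similar inbound rule could be added from the AWS CLI. The command below is a sketch in which the security group ID and CIDR block are placeholders, and it opens all TCP ports (whereas the console rule described above allows all protocols):

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 0-65535 --cidr 192.168.0.0/16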

Veeam_SG

I have also changed the default security group associated with the ENIs used by VMC to a custom security group. This security group allows inbound access on the ENI (which is inbound access to VMC, as explained in the article below) on all ports from the source CIDR block of my native AWS VPC. Outbound access, which is from VMC into AWS, is defined as any IPv4 destination (0.0.0.0/0) on any port.
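The same change can be made from the AWS CLI if preferred; in this sketch the ENI and security group IDs are placeholders:

aws ec2 modify-network-interface-attribute --network-interface-id eni-0123456789abcdef0 --groups sg-0123456789abcdef0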

ENI_SG

EC2 Deployment

Log into the VMware on AWS Console, from the SDDCs tab locate the appropriate SDDC and click View Details. Select the Networking & Security tab. Under System click Connected VPC. Make a note of the AWS Account ID and the VPC ID. You will need to deploy an EC2 instance into this account and VPC.

Log into the AWS Console and navigate to the EC2 service. Launch an EC2 instance that meets the System Requirements for Veeam. In this example I have used the t2.medium instance type and the Microsoft Windows Server 2019 Base AMI. When configuring networking, the EC2 instance must be placed in the VPC connected to VMC. I have added an additional EBS volume for the backup repository using volume type General Purpose SSD (gp2). Ensure the security group selected or created allows the relevant access.
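For those who prefer the CLI, a roughly equivalent launch might look like the sketch below; the AMI, key pair, subnet, and security group IDs are placeholders, and the additional EBS volume size is only an example:

aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t2.medium --key-name my-keypair --subnet-id subnet-0123456789abcdef0 --security-group-ids sg-0123456789abcdef0 --block-device-mappings "DeviceName=/dev/sdf,Ebs={VolumeSize=500,VolumeType=gp2}"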

Gateway Firewall

In addition to the security group settings, inbound access also needs to be allowed on the VMC Gateway Firewall. In this instance, as we are connecting the EC2 instance to the vCenter, we define the rule on the Management Gateway. If we were connecting to a workload in one of the compute subnets the rule would be defined on the Compute Gateway. You may have noticed that although I allowed any port in the AWS Security Groups, the actual ports allowed can also be defined on the Gateway Firewall.

In this example I have added a new user-defined group which contains the private IPv4 address of the EC2 instance and added it as a source in the vCenter Inbound Rule. The allowed port is set to HTTPS (TCP 443) – I have also allowed ICMP. I have added the same source group to the ESXi Inbound Rule which allows Provisioning (TCP 902). Both these rules are needed to allow Veeam to back up virtual machines in VMC.

VMC_GW_FW

Veeam Setup

Now that connectivity between the EC2 instance and the VMC vCenter has been configured, I can hop onto the EC2 instance and begin the setup of Veeam. I will, of course, need an inbound rule for RDP (TCP 3389) added to the security group of the EC2 instance, specifying the source I am connecting from.

Follow the installation steps outlined in the Veeam Backup & Replication 9.5 Update 4 User Guide for VMware vSphere.

Veeam_1

In the VMC console navigate to the Settings tab of the SDDC and make a note of the password for the cloudadmin@vmc.local account. Open the Veeam Backup & Replication console, add the vCenter private IP address, and use the vCenter cloud admin credentials.

Veeam_2

Add the backup repository using the EBS volume and create a backup job as normal. Refer to the Veeam Backup Guide if you need assistance with Veeam.

Veeam_3

To make use of AWS S3 object storage you will need an IAM Role granting S3 access, and an S3 VPC Endpoint. In the case of VMC, as an alternative design, you can host the Veeam B&R server inside your VMC SDDC to make use of the built-in S3 endpoint. In testing we found backup speeds to be faster, but you will likely still need an EBS-backed EC2 instance for your backup repository. It goes without saying you should make sure backup data is not held solely on the same physical site as the servers you are backing up. See Veeam KB2414: VMware Cloud on AWS Support for further details.
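If the S3 gateway endpoint does not already exist in the connected VPC, it can be created from the AWS CLI; in this sketch the VPC ID, route table ID, and region are placeholders:

aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 --service-name com.amazonaws.eu-west-2.s3 --route-table-ids rtb-0123456789abcdef0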

Add a new Scale-Out Backup Repository and follow the steps to add account and bucket details.

Set an appropriate policy for moving backups to object-based storage; once this threshold is met you will start to see Veeam files populating the S3 bucket.

S3_repo

VMware Cloud on AWS Deployment Planning

This post pulls together the notes I have made during the planning of a VMware Cloud (VMC) on AWS (Amazon Web Services) deployment, and migrations of virtual machines from traditional on-premise vSphere infrastructure. It is intended as a generic list of considerations and useful links, and is not a comprehensive guide. Cloud, more so than traditional infrastructure, is constantly changing; features are implemented regularly and transparently, so always validate against official documentation. This post was last updated on August 6th 2019.

Part 1: SDDC Deployment

Part 2: Migration Planning & Lessons Learned

1. Capacity Planning

You can still use existing tools or methods for basic capacity planning; you should also consult the VMware Cloud on AWS Sizer and TCO Calculator provided by VMware. There is a What-If Analysis built into both vRealize Business and vRealize Operations, which is similar to the sizer tool and can also help with cost comparisons. Additional key considerations are:

  • Egress costs are now a thing! Use vRealize Network Insight to understand network egress costs and application topology in your current environment. Calculate AWS Egress Fees Proactively for VMware Cloud on AWS is a really useful resource.
  • You do not need to factor in N+1 when planning capacity. If there is a host failure VMware will automatically add a new host to the cluster, allowing you to utilise more of the available resource.
  • Export a list of Virtual Machines (VMs) from vCenter and review each VM. Contact service owners, application owners, or super users to understand if there is still a requirement for the machine and what it is used for. This ties in to the migration planning piece but crucially allows you to better understand capacity requirements. Most environments have VM sprawl and identifying services that are either obsolete, moved to managed services, or were simply test machines no longer required will clearly reduce capacity requirements.
  • Consider that you are now on a ‘metered’ charging model, so don’t set the meter going; in other words, don’t deploy the SDDC until you are ready to start using the platform. Common sense, but internal service reviews or service acceptance and approvals can take longer than expected.
  • You can make savings using reserved instances, by committing to 1 or 3 years. Pay as you go pricing may be sufficient for evaluation or test workloads, but for production workloads it is much more cost effective to use reserved instances.
  • At the time of writing up to 2 SDDCs can be deployed per organisation (soft limit), each SDDC supporting up to 20 vSphere clusters and each cluster up to 16 physical nodes.
  • The standard i3 bare metal instance currently offers 2 sockets, 36 cores, 512 GiB RAM, and 10.7 TB vSAN storage; a 16-node cluster therefore provides 32 sockets, 576 cores, 8192 GiB RAM, and 171.2 TB.
  • New R5 bare metal instances are deployed with 2.5 GHz Intel Platinum 8000 series (Skylake-SP) processors; 2 sockets, 48 cores, 768 GiB RAM and AWS Elastic Block Storage (EBS) backed capacity scaling up to 105 TB for 3-node resources and 560 TB for 16-node resources. For up to date configuration maximums see Configuration Maximums for VMware Cloud on AWS.

2. Placement and Availability

Ultimately placement of your SDDC is going to be driven by specific use cases, and any regulations for the data type you are hosting. How VMware is Accelerating NHS Cloud Adoption uses the UK National Health Service (NHS) and Information Governance as an example. Additional placement and availability considerations are:

  • An SDDC can be deployed to a single Availability Zone (AZ) or across multiple AZs, otherwise known as a stretched cluster. For either configuration, if a problem is identified with a host in the cluster, High Availability (HA) evacuation takes place as normal; an additional host is then automatically provisioned and added as a replacement.
  • The recommendation for workload availability is to use a stretched cluster, which distributes workloads across 2 Availability Zones with a third hosting a witness node. In this configuration data is written to both Availability Zones active-active. In the event of an outage to an entire Availability Zone, vSphere HA brings virtual machines back online in the alternative AZ: VMware Cloud on AWS Stretched Cluster Failover Demo.
  • Stretched clusters have an SLA Availability Commitment of 99.99% (99.9% for single AZ), and provide a Recovery Point Objective (RPO) of zero by using synchronous data replication. Note that there are additional cross-AZ charges for stretched clusters. The Recovery Time Objective (RTO) is a vSphere HA failover, usually sub 5 minutes.
  • The decision on whether to use single or multiple Availability Zones needs to be taken at the time of deployment. An existing SDDC cannot be upgraded to multi-AZ or downgraded to a single AZ.
  • An Elastic Network Interface (ENI) dedicated to each physical host connects the VMware Cloud to the corresponding Availability Zone in the native AWS Virtual Private Cloud (VPC). There is no charge for data crossing the 25 Gbps ENI between the VMware Cloud VPC and the native AWS VPC.
  • Data that crosses Availability Zones is chargeable; it is therefore good practice to deploy the SDDC to the same region and AZ as your current or planned native AWS services.

3. Networks and Connectivity

  • VMware Cloud on AWS links with your existing AWS account to provide access to native services. During provisioning a CloudFormation template will grant AWS permissions using the Identity and Access Management (IAM) service. This allows your VMC account to create and manage Elastic Network Interfaces (ENIs) as well as auto-populate Virtual Private Cloud (VPC) route tables when NSX subnets are created.
  • It is good practice to enable Multi-Factor Authentication (MFA) for your accounts in both VMC and AWS. VMware Cloud can also use Federated Identity Management, for example with Azure AD. This currently needs to be facilitated by your VMware Customer Success team, but once set up it means you can control accounts using Active Directory and enforce MFA or follow your existing user account policies.
  • It is important to ensure proper planning of your IP addressing scheme; if the IP range used overlaps with anything on-premise or in AWS then routes will not be properly distributed, and the SDDC will need to be destroyed and redeployed with an updated subnet to resolve the issue.
  • You will need to allocate a CIDR block for SDDC management, as well as network segments for your SDDC compute workloads to use. Review Selecting IP Subnets for your SDDC for assistance with selecting IP subnets for your VMC environment.
  • Connectivity to the SDDC can be achieved using either AWS Direct Connect (DX) or VPN, see Connectivity Options for VMware Cloud on AWS Software Defined Data Centers. From SDDC v1.7 onwards it is possible to use DX with a backup VPN for resilience.
  • Traffic between VMC and your native AWS VPC is handled by the 25 Gbps Elastic Network Interfaces (ENIs) referenced in the section above. To connect to additional VPCs or accounts you can set up an IPsec VPN. The Amazon Transit Gateway feature is available for some regions and configurations; if you are using DX then the minimum requirement is 1 Gbps.
  • Access to native AWS services needs to be setup on the VMC Gateway Firewall, for example: Connecting VMware Cloud on AWS to EC2 Instances, as well as Amazon security groups; this is explained in How AWS Security Groups Work With VMware Cloud on AWS.
  • To migrate virtual machines from your on-premise data centre review Hybrid Linked Mode Prerequisites and vMotion across hybrid cloud: performance and best practices. In addition you will need to know the Required Firewall Rules for vMotion and for Cold Migration.
  • For virtual machines to keep the same IP addressing layer 2 networks can be stretched with HCX, review VMware HCX Documentation. HCX is included with VMC licensing but is a separate product in its own right so should be planned accordingly and is not covered in this post. Review VMware Cloud on AWS Live Migration Demo to see HCX in action.
  • VMware Cloud on AWS: Internet Access and Design Deep Dive is a useful resource for considering virtual machines that may require internet access.

4. Operational Readiness

The SDDC is deployed but before you can start migrating virtual machines you need to make sure the platform is fully operational. There are some key aspects but in general make sure you cover everything you do currently on premise:

  • You will likely still have a need for Active Directory, DNS, DHCP, and time synchronisation. Either use native cloud services, or build new Domain Controllers for example in VMC.
  • If you have a stretched-cluster and build Domain Controllers, or other management servers, consider building these components in each Availability Zone, then using compute policies to control the virtual machine placement. This is similar to anti-affinity rules on-premise, see VMware Cloud on AWS Compute Policies for more information.
  • Remember Disaster Recovery (DR) still needs to be factored in. DR as a Service (DRaaS) is offered through Site Recovery Manager (SRM) between regions in the cloud or on-premise. A stretched-cluster may be sufficient but again, this is dependent on the organisation or service requirements.
  • Anti-Virus, monitoring, and patching (OS / application) solutions need to be implemented. Depending on your licensing model you should be able to continue using the same products and tool-set, and carry the license over, but check with the appropriate vendor. Also start thinking about integrating cloud monitoring and management where applicable.
  • VMware Cloud Log Intelligence is a SaaS offering for log analytics; it can forward to an existing syslog solution or integrate with AWS CloudTrail.
  • Backups are still a crucial part of VMware Cloud on AWS and it is entirely the customer's responsibility to ensure backups are in place. Unless you have a specific use case to back up machines from VMware Cloud to on-premise, it probably makes sense to move or implement backup tooling in the cloud, for example using Veeam in native AWS.
  • Perform full backups initially to create a new baseline. Try native cloud backup products that will back up straight to S3, or continue with traditional backup methods that connect into vCenter. The reference architecture below uses Elastic Block Storage (EBS) backed Elastic Compute Cloud (EC2) instances running Veeam as a backup solution, then archiving out to Simple Storage Service (S3). Druva are able to back up straight to S3 from VMC. Veeam are also constantly updating functionality, so as mentioned at the start of the post this setup may not stay up to date for long:

vmc_aws.png

  • Customers must be aware of the shared security model that exists between VMware (delivering the service), Amazon Web Services (the IaaS provider, delivering the underlying infrastructure), and customers (consuming the service).
  • VMware Cloud on AWS meets a number of security standards such as NIST, ISO, and CIS. You can review VMware’s security commitments in the VMware Cloud Services on AWS Security Overview.
  • When using native AWS services you must always follow Secure by Design principles to make sure you are not leaving the environment open or vulnerable to attack.

Part 2: VMware Cloud on AWS Migration Planning & Lessons Learned

Additional resources: VMware Cloud On AWS On-Boarding Handbook | VMware Cloud on AWS Operating Principles | Resources | Documentation | Factbook

VMware Cloud on AWS Videos | YouTube Playlists | VMworld 2018 Recorded Sessions

VMware Site Recovery Manager 8.x Upgrade Guide

This post will walk through an in-place upgrade of VMware Site Recovery Manager (SRM) to version 8.1, which introduces support for the vSphere HTML5 client and recovery / migration to VMware Cloud on AWS. Read more about what’s new in this blog post. The upgrade is relatively simple, but we need to cross-check compatibility and perform validation tests after running the upgrade installer.

SRM81

Planning

  • The Site Recovery Manager upgrade retains configuration and information such as recovery plans and history but does not preserve any advanced settings
  • Protection groups and recovery plans also need to be in a valid state to be retained; any invalid configurations are not migrated
  • Check the upgrade path here, for Site Recovery Manager 8.1 we can upgrade from 6.1.2 and later
  • If vSphere Replication is in use then upgrade vSphere Replication first, following the steps outlined here
  • Site Recovery Manager 8.1 is compatible with vSphere 6.0 U3 onwards, and VMware Tools 10.1 and onwards, see the compatibility matrices page here for full details
  • Ensure the vCenter and Platform Services Controller are running and available
  • In Site Recovery Manager 8.1 the version number is decoupled from vSphere; however, check that you do not need to perform an upgrade for compatibility
  • For other VMware products check the product interoperability site here
  • If you are unsure of the upgrade order for VMware components see the Order of Upgrading vSphere and Site Recovery Manager Components page here
  • Make a note of any advanced settings you may have configured under Sites > Site > Manage > Advanced Settings
  • Confirm you have Platform Services Controller details, the administrator@vsphere.local password, and the database details and password

Download the VMware Site Recovery Manager 8.1.0.4 self-extracting installer here to the server, and, if applicable, the updated Storage Replication Adapter (SRA) for storage replication. Review the release notes here, and the SRM upgrade documentation centre here.

Database Backup

Before starting the upgrade make sure you take a backup of the embedded vPostgres database, or the external database. Full instructions can be found here, in summary:

  • Log into the SRM Windows server and stop the VMware Site Recovery Manager service
  • From command prompt run the following commands, replacing the db_username and srm_backup_name parameters, and the install path and port if they were changed from the default settings
cd C:\Program Files\VMware\VMware vCenter Site Recovery Manager Embedded Database\bin
pg_dump -Fc --host 127.0.0.1 --port 5678 --username=db_username srm_db > srm_backup_name
  • If you need to restore the vPostgres database follow the instructions here (a hedged example restore command is shown after this list)
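As a hedged example, using the same default port and placeholder names as the backup command above and assuming the SRM service is stopped, a restore of the custom-format dump would look something like this (always follow the linked VMware instructions for the full procedure):

pg_restore --host 127.0.0.1 --port 5678 --username=db_username --dbname=srm_db srm_backup_name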

In addition to backing up the database, check the health of the SRM servers and confirm there are no pending reboots. Log into the vSphere web client and navigate to the Site Recovery section; verify there are no pending cleanup operations or configuration issues, and that all recovery plans and protection groups are in a Ready state.

Process

As identified above, vSphere Replication should be upgraded before Site Recovery Manager. In this instance we are using Nimble storage replication, so the Storage Replication Adapter (SRA) should be upgraded first. Download and run the installer for the SRA upgrade; in most cases it is a simple next, install, finish.

We can now commence the Site Recovery Manager upgrade; it is advisable to take a snapshot of the server and ensure backups are in place. On the SRM server run the executable downloaded earlier.

  • Select the installer language and click Ok, then Next
  • Click Next on the patent screen, accept the EULA and click Next again
  • Double-check you have performed all pre-requisite tasks and click Next
  • Enter the FQDN of the Platform Services Controller and the SSO admin password, click Next
  • The vCenter Server address is auto-populated, click Next
  • The administrator email address and local host ports should again be auto-populated, click Next
  • Click Yes when prompted to overwrite registration
  • Select the appropriate certificate option, in this case keeping the existing certificate, click Next
  • Check the database details and enter the password for the database account, click Next
  • Configure the service account to run the SRM service, again this will retain the existing settings by default, click Next
  • Click Install and Finish once complete

Post-Upgrade

After Site Recovery Manager is upgraded log into the vSphere client. If the Site Recovery option does not appear immediately you may need to clear your browser cache, or restart the vSphere client service.

SRM_81

On the summary page confirm both sites are connected; you may need to reconfigure the site pair if you encounter connection problems.

SRM_81_1

Validate the recovery plan and run a test to confirm there are no configuration errors.

SRM_81_2

The test should complete successfully.

SRM_81_5

I can also check the replication status and Storage Replication Adapter status.

SRM_81_4

Configuring vCenter 6.7 High Availability

The vCenter Server Appliance has provided vCenter High Availability (HA) from vSphere 6.5 onwards. In the fully functioning HTML5 release of vCenter 6.7 Update 1 onwards the setup of vCenter HA was hugely simplified. Read more about the improvements made in vSphere 6.7 U1 in this blog post. By implementing vCenter HA you can protect your vCenter Server from host and hardware failures, and significantly reduce downtime during patching due to the active / standby nature of the vCenter cluster.

The vCenter HA architecture is made up of the components in the vSphere image below. The vCenter Server Appliance is cloned out to create passive and witness nodes. Updated data is replicated between the active and passive nodes. In the event of an outage to the active vCenter, the passive vCenter automatically assumes the active role and identity. Management connections still route to the same IP address and FQDN; however, they have now failed over to the replica node. When the outage is resolved and the vCenter that failed comes back online, it then takes on the role of the passive node, and receives replication data from the active vCenter Server.

vCenter_HA

Requirements

  • vCenter HA was introduced with the vCenter Server Appliance 6.5
  • The vCenter deployment size should be at least small, and therefore 4 vCPU 16 GB RAM
  • A minimum of three hosts
  • The hosts should be running at least ESXi 5.5
  • The management network should be configured with a static IP address and reachable FQDN
  • SSH should be enabled on the VCSA
  • A port group for the HA network is required on each ESXi host
  • The HA network must be on a different subnet to the management network
  • Network latency between the nodes must be less than 10ms
  • vCenter HA is compatible with both embedded deployment model and external PSC
  • For further information on vCenter HA performance and best practices see this post

If you are configuring vCenter HA on a version of vCenter prior to 6.7 Update 1 then see this post. If you are configuring vCenter HA in a cluster with fewer than the required number of physical hosts, such as in a home lab, you can add a parameter to override the anti-affinity setting; see this post by William Lam.

Configuring vCenter HA

Log into the vSphere client and select the top level vCenter Server in the inventory. Click the Configure tab and vCenter HA. The vCenter HA summary page is displayed with a list of prerequisites, ensure these are met along with the requirements above. Click Setup vCenter HA.

vCenter_HA_1

Select the vCenter HA network by clicking Browse. Scroll down to the vCenter HA resource settings and review the network and resource settings of the active node of the vCenter Server. Scroll down to the passive node and click Edit. Follow the on-screen prompts to select a folder location, compute and storage resources. Select the management and HA networks for the passive node, review the settings once complete and click Finish. Follow the same steps for the witness node.

vCenter_HA_2

On the IP settings page enter the HA network settings for the active, passive, and witness nodes. Click Finish.

vCenter_HA_3

The vCenter Server will now be cloned and the HA network settings applied; this can be monitored from the tasks pane. Once complete the vCenter HA state will show Healthy, and all nodes in the cluster will show Up.

vCenter_HA_4

You can edit the status of vCenter HA at any time by going back into the vCenter HA menu and clicking Edit. You also have the option of removing the vCenter HA configuration or manually initiating a failover.

vCenter_HA_Edit

For more information on vCenter 6.7 High Availability see the vCenter Documentation Centre here.

vRA Deployments with Terraform

This post covers notes made when using Terraform to deploy basic resources from VMware vRealize Automation (vRA). Read through the vRA provider plugin page here and the Terraform documentation here. There are a couple of other examples of Terraform configurations using the vRA provider here and here. If you're looking for an introduction on why Terraform and vRA then this blog post gives a good overview. If you have worked with the vRA Terraform provider before feel free to add any additional pointers or best practices in the comments section, as this is very much a work in progress.

Terraform Setup

Before starting you will need to download and install Go and Git to the machine you are running Terraform from. Visual Studio Code with the Terraform extension is also a handy tool for editing config files but not a requirement. The steps below were validated with Windows 10 and vRA 7.3.

After installing Go, the default GOROOT is set to C:\Go and GOPATH to %UserProfile%\Go. Go is the programming language we will use to rebuild the vRA provider plugin. GOPATH is the location of the directory containing source files for Go projects.

In this instance I have set GOPATH to D:\Terraform and will keep all files in this location. To change GOPATH manually open Control Panel, System, Advanced system settings, Advanced, Environment Variables. Alternatively GOROOT and GOPATH can be set from CLI:

set GOROOT=C:\Go
set GOPATH=D:\Terraform

Download Terraform for Windows and put the executable in the working directory for Terraform (D:\Terraform or whatever GOPATH was set to).

In AppData\Roaming create a new file terraform.rc (%UserProfile%\AppData\Roaming\terraform.rc) with the following contents, replace D:\Terraform with your own Terraform working directory.

providers {
     vra7 = "D:\\Terraform\\bin\\terraform-provider-vra7.exe"
}

Open command prompt and navigate to the Terraform working directory. Run the following command to download the source repository:

go get github.com/vmware/terraform-provider-vra7

Open the Terraform working directory and confirm the repository source files have been downloaded.

The final step is to rebuild the Terraform provider using Go. Download the latest version of dep. Rename the executable to dep.exe and place it in your Terraform working directory under \src\github.com\vmware\terraform-provider-vra7.

Back in command prompt navigate to D:\Terraform\src\github.com\vmware\terraform-provider-vra7 and run:

dep ensure
go build -o D:\Terraform\bin\terraform-provider-vra7.exe

Running dep ensure can take a while; use the -v switch if you need to troubleshoot. The vRA Terraform provider is now ready to use.

Using Terraform

In the Terraform working directory a main.tf file is needed to describe the infrastructure and set variables. There are a number of example Terraform configuration files located in the source repository files under \src\github.com\vmware\terraform-provider-vra7\example.

A very basic example of a configuration file would first contain the vRA variables:

provider "vra7" {
     username = "username"
     password = "password"
     tenant = "vRAtenant"
     host = "https://vRAhostname"
}

Followed by the resource details:

resource "vra7_resource" "machine" {
   catalog_name = "BlueprintName"
}
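As an illustrative sketch, multiple deployments of the same blueprint could be requested using Terraform's standard count meta-argument; the resource label and blueprint name below are arbitrary:

resource "vra7_resource" "machines" {
   count = 2
   catalog_name = "BlueprintName"
}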

Further syntax can be added to pass additional variables; for a full list see the resource section here. The configuration file I am using for the purposes of this example is as follows:

main_tf

Example config and variable files from source repo:

multi_machine_example

variables_example

Once your Terraform configuration file or files are ready go back to command prompt and navigate to the Terraform working directory. Type terraform and hit enter to see the available options; for a full list of commands see the Terraform CLI documentation here.

Start off with initialising the vRA provider plugin:

terraform init

terraform_init

Validate the Terraform configuration files:

terraform validate

If you’re ready then start the deployment:

terraform apply

terraform_apply_1

Monitor the progress from the CLI or from the task that is created in the Requests tab of the vRA portal.

terraform_apply_2

terraform_apply_3

Check the state of the active deployments using the available subcommands and switches of:

terraform state

terraform_state
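For example, the terraform state list subcommand lists the resources currently tracked in the state file:

terraform state list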

To destroy the resource use:

terraform destroy

terraform_destroy