Author Archives: ESXsi

Oracle Cloud Infrastructure Demo

This opening post will give an overview and demo of Oracle Cloud Infrastructure (OCI). Oracle Cloud offers fast and scalable compute and storage resources, combined with enterprise-grade private virtual cloud networks. It also offers a range of flexible operating models, including traditional Virtual Machine (VM) instances, container infrastructure, databases on demand, and dedicated hardware through bare metal servers and Bring Your Own Hypervisor (BYOH).

You can sign up for a free trial account with $300 credit here. When you sign up for an Oracle account you are creating a tenancy. Resources inside a tenancy can be organised and isolated using compartments; separating projects, billing boundaries, and access policies are some example use cases.

Oracle Cloud Infrastructure is deployed in regions. Regions are localised geographical areas, each containing at least 3 Availability Domains. An Availability Domain is a fault-independent data centre with power, thermal, and network isolation. A Virtual Cloud Network (VCN) is deployed per region across multiple Availability Domains, allowing us to build high availability and fault tolerance into our cloud design. Virtual Cloud Networks are software-defined versions of traditional on-premise networks running in the cloud, containing subnets, route tables, and internet gateways. VCNs can be connected together using VCN Peering, and connected to a private network using FastConnect or VPN with the use of a Dynamic Routing Gateway (DRG).

Product Page | Getting Started | Documentation | Sizing and Pricing | Architecture

Product Demo

The demo below creates a VCN and VM instances in the second generation of Oracle Cloud for lab purposes. Before deploying your own environment you should review all the above linked documentation and plan your cloud strategy including IP addressing, DNS, authentication, access control, billing, governance, network connectivity and security.

Log into the Oracle Cloud portal here; the home dashboard is displayed.

Oracle_Dashboard

You’ll need a subscription to get into the second generation Oracle Cloud Infrastructure portal. Under Compute select Open Service Console.

Oracle_Cloud_Dashboard

The region can be selected from the drop-down location pin icon in the top right corner; in this example the region is set to eu-frankfurt-1. Select Manage Regions to subscribe to new regions if required. Use the top left Menu button to display the various options. The first step in any deployment is to build the VCN, so select Networking and Virtual Cloud Networks.

Oracle_Cloud_Dashboard

Make sure you are in the correct compartment in the left hand column and click Create Virtual Cloud Network. Select the compartment and enter a name. In this example I am going to create the Virtual Cloud Network only, which allows me to manually define resources such as the CIDR block, internet gateway, subnets, and routes. The DNS label is auto-populated.

Oracle_VCN_1
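
For reference, a roughly equivalent call using the OCI command line interface, assuming the CLI has already been installed and configured with oci setup config (the compartment OCID and names below are placeholders):

# Create a VCN with a /16 CIDR block in the chosen compartment
oci network vcn create --compartment-id <compartment-ocid> --cidr-block 10.0.0.0/16 --display-name demo-vcn --dns-label demovcn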

The newly created VCN is displayed; all objects are orange during provisioning and green when available.

Oracle_VCN_3

Once the VCN is available click the VCN name to display more options.

Oracle_VCN_4

Use the options in the Resources menu to view and create resources assigned to the VCN. In this example we'll first create the Internet Gateway.

Oracle_Cloud_IG_1
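
If you are scripting the build, a sketch of the same step with the OCI CLI (the VCN and compartment OCIDs are placeholders) might look like:

# Create and enable an internet gateway attached to the VCN
oci network internet-gateway create --compartment-id <compartment-ocid> --vcn-id <vcn-ocid> --is-enabled true --display-name demo-igw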

Next we can create a subnet. In this example I have created a public subnet that I will later attach a VM instance to.

Oracle_Cloud_Subnet_1

Oracle_Cloud_Subnet_2
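
The CLI equivalent is along these lines (in this release subnets are tied to an Availability Domain; the OCIDs and AD name are placeholders):

# Create a public subnet within the VCN CIDR range
oci network subnet create --compartment-id <compartment-ocid> --vcn-id <vcn-ocid> --availability-domain <AD-name> --cidr-block 10.0.1.0/24 --display-name public-subnet --dns-label public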

We also need to add a route table, or add new routes to the default route table.

Oracle_Cloud_Route
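
A hedged CLI sketch of adding a default route that points at the internet gateway (the rule is passed as JSON; the field names reflect my reading of the API at the time of writing, so verify them against the CLI help for your version):

# Create a route table with a default route via the internet gateway
oci network route-table create --compartment-id <compartment-ocid> --vcn-id <vcn-ocid> --display-name public-rt --route-rules '[{"cidrBlock":"0.0.0.0/0","networkEntityId":"<internet-gateway-ocid>"}]'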

The final step to allow connectivity in and out of our new subnet(s) is to define ingress and egress rules using security lists. Again, you can either add rules to the default security list or split out environments into additional security lists.

Oracle_Cloud_Security_1

Define the source and destination types and port ranges to allow access. In this example we are allowing inbound TCP port 22 to test SSH connectivity to a VM instance.

Oracle_Cloud_Security_2
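
As a sketch, the same ingress rule expressed through the CLI (the rule JSON keys are my reading of the API and worth double checking; placeholders in angle brackets):

# Allow inbound SSH (TCP 22) from anywhere and all outbound traffic
oci network security-list create --compartment-id <compartment-ocid> --vcn-id <vcn-ocid> --display-name public-sl \
  --ingress-security-rules '[{"protocol":"6","source":"0.0.0.0/0","tcpOptions":{"destinationPortRange":{"min":22,"max":22}}}]' \
  --egress-security-rules '[{"protocol":"all","destination":"0.0.0.0/0"}]'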

Now that we have a fully functioning software defined network we can deploy a VM instance. From the left hand Menu drop-down select Compute, Instances. Use the Create Instance wizard to deploy a virtual machine or bare metal machine.

Oracle_Cloud_Instance

In this example I have deployed a virtual machine using the Oracle Linux 7.5 image and the VM.Standard2.1 shape (1 OCPU, 15 GB RAM). The machine is deployed to Availability Domain 1 in the Frankfurt region and has been assigned the public subnet in the VCN we created earlier. I used PuTTYgen to generate the public and private key pair for SSH access.

Oracle_Cloud_Instance_2
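
For completeness, launching a similar instance from the CLI would look something like this (image, subnet, and AD identifiers are placeholders; the public key file is the one generated with PuTTYgen, exported in OpenSSH format):

# Launch an Oracle Linux VM.Standard2.1 instance into the public subnet
oci compute instance launch \
  --availability-domain <AD-name> \
  --compartment-id <compartment-ocid> \
  --shape VM.Standard2.1 \
  --image-id <oracle-linux-7.5-image-ocid> \
  --subnet-id <public-subnet-ocid> \
  --assign-public-ip true \
  --ssh-authorized-keys-file ./id_rsa.pub \
  --display-name demo-vm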

Once deployed the instance turns green.

Oracle_Cloud_Instance_3

Click the instance name to view further details or to terminate the instance; when removing it you have the option to keep or delete the attached boot volume.

Oracle_Cloud_Instance_4

Additional block volumes can be added to instances. Block volumes can be created under Block Storage, Block Volumes.

Oracle_Cloud_Block_2
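
A CLI sketch for creating a block volume and attaching it to the instance (sizes and OCIDs are placeholders; the volume must be in the same Availability Domain as the instance):

# Create a 50 GB block volume
oci bv volume create --availability-domain <AD-name> --compartment-id <compartment-ocid> --display-name demo-volume --size-in-gbs 50

# Attach it to the instance as an iSCSI volume
oci compute volume-attachment attach --instance-id <instance-ocid> --volume-id <volume-ocid> --type iscsi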

For object based storage we can create buckets under Object Storage, Object Storage.

Oracle_Cloud_Bucket_1

Buckets can be used to store objects with public or private visibility; pre-authenticated requests can also be added for short-term access.
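
Buckets and objects can also be managed from the CLI, for example (bucket and file names are placeholders):

# Create a private bucket and upload an object to it
oci os bucket create --compartment-id <compartment-ocid> --name demo-bucket
oci os object put --bucket-name demo-bucket --file ./example.txt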

Oracle_Cloud_Bucket_2

Oracle_Cloud_Bucket_3

vRA Deployments with Terraform

This post covers notes made when using Terraform to deploy basic resources from VMware vRealize Automation (vRA). Read through the vRA provider plugin page here and the Terraform documentation here. There are a couple of other examples of Terraform configurations using the vRA provider here and here. If you're looking for an introduction on why Terraform and vRA then this blog post gives a good overview. If you have worked with the vRA Terraform provider before feel free to add any additional pointers or best practices in the comments section, as this is very much a work in progress.

Terraform Setup

Before starting you will need to download and install Go and Git to the machine you are running Terraform from. Visual Studio Code with the Terraform extension is also a handy tool for editing config files but not a requirement. The steps below were validated with Windows 10 and vRA 7.3.

After installing Go the default GOROOT is set to C:\Go and GOPATH to %UserProfile%\Go. Go is the programming language we will use to rebuild the vRA provider plugin. GOPATH will be the location of the directory containing source files for Go projects.

In this instance I have set GOPATH to D:\Terraform and will keep all files in this location. To change GOPATH manually open Control Panel, System, Advanced system settings, Advanced, Environment Variables. Alternatively GOROOT and GOPATH can be set from CLI:

set GOROOT=C:\Go
set GOPATH=D:\Terraform

Download Terraform for Windows and put the executable in the working directory for Terraform (D:\Terraform, or whatever GOPATH was set to).

In AppData\Roaming create a new file terraform.rc (%UserProfile%\AppData\Roaming\terraform.rc) with the following contents; replace D:\Terraform with your own Terraform working directory.

providers {
     vra7 = "D:\\Terraform\\bin\\terraform-provider-vra7.exe"
}

Open command prompt and navigate to the Terraform working directory. Run the following command to download the source repository:

go get github.com/vmware/terraform-provider-vra7

Open the Terraform working directory and confirm the repository source files have been downloaded.

The final step is to rebuild the Terraform provider using Go. Download the latest version of dep. Rename the executable to dep.exe and place in your Terraform working directory under \src\github.com\vmware\terraform-provider-vra7.

Back in command prompt navigate to D:\Terraform\src\github.com\vmware\terraform-provider-vra7 and run:

dep ensure
go build -o D:\Terraform\bin\terraform-provider-vra7.exe

Running dep ensure can take a while; use the -v switch if you need to troubleshoot. The vRA Terraform provider is now ready to use.

Using Terraform

In the Terraform working directory a main.tf file is needed to describe the infrastructure and set variables. There are a number of example Terraform configuration files located in the source repository files under \src\github.com\vmware\terraform-provider-vra7\example.

A very basic example of a configuration file would first contain the vRA variables:

provider "vra7" {
     username = "username"
     password = "password"
     tenant = "vRAtenant"
     host = "https://vRA
}

Followed by the resource details:

resource "vra7_resource" "machine" {
   catalog_name = "BlueprintName"
}

Further syntax can be added to pass additional variables; for a full list see the resource section here. The configuration file I am using for the purposes of this example is as follows:

main_tf

Example config and variable files from source repo:

multi_machine_example

variables_example

Once your Terraform configuration file or files are ready, go back to command prompt and navigate to the Terraform working directory. Type terraform and hit enter to see the available options; for a full list of commands see the Terraform CLI documentation here.

Start off with initialising the vRA provider plugin:

terraform init

terraform_init

Validate the Terraform configuration files:

terraform validate

If you’re ready then start the deployment:

terraform apply

terraform_apply_1

Monitor the progress from the CLI or from the task that is created in the Requests tab of the vRA portal.

terraform_apply_2

terraform_apply_3

Check the state of the active deployments using the available subcommands of:

terraform state

terraform_state
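
For example, assuming the resource block shown earlier is named vra7_resource.machine, the state subcommands below list the tracked resources and show the attributes of one of them:

terraform state list
terraform state show vra7_resource.machine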

To destroy the resource use:

terraform destroy

terraform_destroy

NSX 6.4.1 Upgrade Guide

This post will walk through upgrading to NSX 6.4.1. If upgrading from 6.4.0 then the new Upgrade Coordinator feature can be used, allowing simultaneous upgrade planning of multiple NSX objects, see the NSX 6.4.x Upgrade Coordinator post for more information. If upgrading from an earlier version than 6.4.0 then the steps outlined below are applicable. When performing an upgrade the NSX components must be upgraded in the following order: NSX Manager, NSX Controllers, Host Clusters, NSX Edge, Service Virtual Machines (such as Guest Introspection).

Review the operational impacts of NSX upgrades for each component here when planning your upgrade; it is best practice to limit all operations in the environment until the upgrade is complete. Make sure NSX Manager is backed up before starting an upgrade, and be aware that after a successful upgrade NSX cannot be downgraded. You should also review the VMware NSX for vSphere 6.4.1 Release Notes here and the NSX for vSphere Documentation Center here.

Requirements

Requirements specific to NSX 6.4.1 are listed below. As we are doing an upgrade the assumption is that the vSphere and NSX environment is already set up and working; you can validate the existing NSX configuration here. You should also ensure that an underlying network with IP connectivity and an MTU of 1600 or above; FQDN resolution, connectivity, and time synchronisation between NSX and vSphere components; and syslog, monitoring, and backups are all in place. In addition review the basic system requirements for NSX here and the full list of network port requirements here.

  • NSX 6.4.1 is compatible with vSphere 6.0 U2 and above. Note that if you are using 6.0 then U3 is recommended, the minimum supported 6.5 version is 6.5a, and support for vSphere 5.5 has been removed
  • Supported upgrade paths to NSX 6.4.1 are from 6.2.4 onwards, there is a workaround for upgrading from 6.2.0, 6.2.1, or 6.2.2 which can be found here
  • Review the VMware Upgrade Path page here and also fully review the NSX 6.4.1 Release Notes here, as there are a number of things to be aware of when upgrading from 6.2.x or 6.3.x
  • Check compatibility with VMware products using the VMware Interoperability page here
  • Check compatibility with other third party products such as partner services for Guest Introspection using the VMware Compatibility Guide here
  • Before starting the upgrade make sure existing appliances meet the recommended hardware requirements:
    • NSX Manager 16 GB RAM (24 GB for large deployments), 4 vCPU (8 vCPU for large deployments), and 60 GB disk, a large deployment is typically 256+ hosts or 2000+ VMs
    • NSX Controllers 4 GB RAM, 4 vCPU, and 28 GB disk
    • NSX Edge Compact: 512 MB RAM, 1 vCPU, 584 MB + 512 MB disks. Large: 1 GB RAM, 2 vCPU, 584 MB + 512 MB disks. Quad Large: 2 GB RAM, 4 vCPU, 584 MB + 512 MB disks. X-Large: 8 GB RAM, 6 vCPU, 584 MB + 2 GB + 256 MB disks.
  • Verify the existing NSX Manager has sufficient space by connecting to the CLI (the SSH service may need starting from the summary page of the NSX Manager appliance page) and running show filesystems, as shown in the example after this list
  • Maximum latency between NSX components and NSX and vSphere components should be 150 ms RTT or below
  • NSX Data Security is no longer supported, it should be removed if installed prior to the upgrade
  • If you are using Cross-vCenter NSX then each component should be upgraded in the order listed here
  • Enabling DRS on the vSphere cluster allows running VMs to be automatically migrated when each host is placed into maintenance mode for the NSX VIB upgrades. This process can of course be undertaken manually if DRS is not in use
  • A completed upgrade can be validated following the steps listed here
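
A quick example of the disk space check mentioned above, run from a workstation with SSH access (the NSX Manager hostname is a placeholder):

ssh admin@nsxmanager.lab.local
# once logged in to the NSX Manager CLI, check free space on its partitions
show filesystems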

Backups

Before we start take a backup of the vCenter Server and NSX Manager. NSX configuration can be backed up using FTP/SFTP, see this post for more information. From version 6.4.1 a configuration backup is automatically taken at the start of the upgrade process; this is intended as a fall back and you should still take your own backup before beginning. You can also take a snapshot of the NSX Manager in case we need to roll back the NSX Manager upgrade. For extra peace of mind export the vSphere Distributed Switch configuration by following the instructions here.

In the event you do need to restore from an NSX backup a new appliance should be deployed and the configuration restored, click here for further details.

Upgrade Process

As noted above make sure you have read all the linked documentation, specifically the release notes and operational impacts for each component upgrade. The steps below will not list the operational impact for each step of the upgrade.

Download the NSX for vSphere 6.4.1 Upgrade Bundle from the download page here to a location accessible from the NSX Manager. Browse to the NSX Manager and log in as admin. From the home page click Upgrade.

Click Upload Bundle and browse to the upgrade bundle downloaded earlier, then click Continue. Once the bundle is uploaded you can optionally select to enable SSH and/or join the Customer Experience Improvement Program. Click Upgrade to start the upgrade.

NSX64_1

The installer will now upgrade NSX Manager; once complete you will be returned to the login page.

NSX64_2

Log back into NSX Manager and click Upgrade. Verify the upgrade state is complete and the version number is correct. Click Summary and verify the health of the NSX Manager.

NSX64_3
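
If you prefer to confirm the new version over the NSX REST API rather than the UI, a call along the lines below should return the appliance version details (the endpoint is the NSX-v appliance management API as I recall it, so treat it as an assumption and check the API guide for your release; hostname and credentials are placeholders):

# Query NSX Manager appliance info, which includes the version, over the REST API
curl -k -u admin:'<password>' https://nsxmanager.lab.local/api/1.0/appliance-management/global/info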

Log into the vSphere Client; if you were already logged in then log out and back in, or you may need to clear your browser cache. From the Menu drop-down select Networking and Security.

Before upgrading any other components we need to upgrade the NSX Controller Cluster. On the Dashboard tab confirm there are 3 controller nodes, all connected; the upgrade cannot commence if any nodes are in a disconnected state.

NSX64_5

Click Installation and Upgrade and select the Management and NSX Managers tab. Check the NSX Manager version is correct, then in the Controller Cluster Status column click Upgrade Available.

NSX641_1

Each controller is upgraded and rebooted one at a time. From NSX 6.3.3 onwards the underlying operating system of the controller nodes changed to Photon-OS. If you are upgrading from 6.3.3 onwards an in-place upgrade is applied. If you are upgrading from 6.3.2 or earlier then the controller nodes are redeployed, and any DRS anti-affinity rules are lost and will need to be reapplied.

Click Yes to begin the Controller Cluster upgrade.

NSX641_2

Monitor the status in the NSX Controller Nodes tab. After all the controller nodes have been upgraded, validate that the Status, Peers, and Upgrade Status are all green. Confirm the Software Version is correct.

NSX641_3

Next we can upgrade the host NSX VIBs; click the Host Preparation tab. Clusters running NSX are displayed, and upgrades are initiated on a per cluster basis. Select the cluster and click Upgrade to begin the upgrade.

Hosts running NSX 6.2.x require a reboot for the installation of new VIBs; hosts running NSX 6.3.0 and above do not need a reboot but must be placed into maintenance mode. You can either manually place hosts into maintenance mode and vMotion / power off VMs yourself, or allow DRS to live migrate VMs and remediate hosts one at a time.

NSX641_4

Click Yes to commence the cluster upgrade.

NSX641_5

At this stage if hosts are not in maintenance mode the NSX Installation will show Not Ready. If you have DRS enabled on the cluster click Actions and Resolve All; this will automatically vMotion running machines off a host, place it into maintenance mode, update the VIBs, and exit maintenance mode, one host at a time. Alternatively you can select individual hosts and click Resolve if you want to control the order of the upgrades.

NSX641_6

Monitor the status of the NSX Installations in the Hosts table. You can also monitor Recent Tasks to make sure a host is not taking too long to enter maintenance mode; if a host cannot be evacuated due to DRS rules, or a VM cannot be migrated, then manual intervention may be required (in this case see here).

If you are using stateless images with Auto Deploy you should also update your ESXi image with the latest NSX VIBs or they will be lost at next reboot, for guidance see this post.

NSX641_7

The next step is to upgrade NSX Edges. Before commencing, validate that the status of all NSX prepared hosts is green and that they show as successfully upgraded to the correct version. During Edge upgrades a replacement appliance is deployed, which means 2 appliances (or 4 if running in HA mode) are powered on at the same time, so ensure your cluster has sufficient compute resource.

NSX641_8

At the time of writing (v6.4.1) NSX Edges still need to be upgraded using the vSphere web client. Log into the vSphere web client and click Networking & Security, NSX Edges; deployed Edges are displayed. If you have multiple NSX Managers ensure the correct NSX Manager is selected in the drop-down. Select the NSX Edge to upgrade and from the Actions menu click Upgrade Version.

NSX641_9

The upgraded version will be deployed from OVF, you can follow the progress in the Recent Tasks pane and also the Status column for the Edge. Repeat this process for each Edge Services Gateway (ESG) and Distributed Logical Router (DLR) you wish to upgrade.

NSX641_10

The final stage is to upgrade Guest Introspection. This can either be done in the vSphere web client or by going back into the HTML5 web client. From the Menu drop-down select Networking and Security, click Installation and Upgrade, and select the Service Deployment tab. Existing service deployments are listed, and the Installation Status for Guest Introspection shows Upgrade Available. Select the Guest Introspection deployment and click Upgrade; once complete verify the Installation Status and Service Status are both green and the version number is correct.

NSX641_11

After all NSX components are upgraded if you want to follow additional verification steps then see the upgrade validation KB here, or the post upgrade tasks listed here. You should take a further backup of NSX Manager after completion of the upgrade. Any third party appliances for Guest Introspection or Network Introspection that require an update can now be upgraded.

NSX 6.4.x Upgrade Coordinator

This post will walk through an upgrade to NSX 6.4.1 using the new Upgrade Coordinator feature allowing simultaneous upgrade planning of multiple NSX components. If you are upgrading from an earlier version of NSX, see the NSX 6.4.1 Upgrade Guide post for details on upgrading individual components. From version 6.4 onwards upgrade plans can be used to upgrade host clusters, controller clusters, Edge Service Gateways (ESGs), Distributed Logical Routers – including Universal (DLRs and UDLRs), and Service Virtual Machines such as Guest Introspection. Upgrade plans consist of either a one click system managed upgrade, or planning your own upgrade where objects and options can be customised.

Review the operational impacts of NSX upgrades for each component here when planning your upgrade; it is best practice to limit all operations in the environment until the upgrade is complete. Make sure NSX Manager is backed up before starting an upgrade, and be aware that after a successful upgrade NSX cannot be downgraded. You should also review the VMware NSX for vSphere 6.4.1 Release Notes here and the NSX for vSphere Documentation Center here.

Requirements

Requirements specific to NSX 6.4.1 are listed below. As we are doing an upgrade the assumption is that the vSphere and NSX environment is already set up and working; you can validate the existing NSX configuration here. You should also ensure that an underlying network with IP connectivity and an MTU of 1600 or above; FQDN resolution, connectivity, and time synchronisation between NSX and vSphere components; and syslog, monitoring, and backups are all in place. In addition review the basic system requirements for NSX here and the full list of network port requirements here.

  • NSX 6.4.1 is compatible with vSphere 6.0 U2 and above. Note that if you are using 6.0 then U3 is recommended, the minimum supported 6.5 version is 6.5a, and support for vSphere 5.5 has been removed
  • Supported upgrade paths to NSX 6.4.1 are from 6.2.4 onwards, there is a workaround for upgrading from 6.2.0, 6.2.1, or 6.2.2 which can be found here
  • Review the VMware Upgrade Path page here and also fully review the NSX 6.4.1 Release Notes here, as there are a number of things to be aware of when upgrading from 6.2.x or 6.3.x
  • Check compatibility with VMware products using the VMware Interoperability page here
  • Check compatibility with other third party products such as partner services for Guest Introspection using the VMware Compatibility Guide here
  • Before starting the upgrade make sure existing appliances meet the recommended hardware requirements:
    • NSX Manager 16 GB RAM (24 GB for large deployments), 4 vCPU (8 vCPU for large deployments), and 60 GB disk, a large deployment is typically 256+ hosts or 2000+ VMs
    • NSX Controllers 4 GB RAM, 4 vCPU, and 28 GB disk
    • NSX Edge Compact: 512 MB RAM, 1 vCPU, 584 MB + 512 MB disks. Large: 1 GB RAM, 2 vCPU, 584 MB + 512 MB disks. Quad Large: 2 GB RAM, 4 vCPU, 584 MB + 512 MB disks. X-Large: 8 GB RAM, 6 vCPU, 584 MB + 2 GB + 256 MB disks.
  • Verify the existing NSX Manager has sufficient space by connecting to the CLI (the SSH service may need starting from the summary page of the NSX Manager appliance page) and running show filesystems
  • Maximum latency between NSX components and NSX and vSphere components should be 150 ms RTT or below
  • NSX Data Security is no longer supported, it should be removed if installed prior to the upgrade
  • If you are using Cross-vCenter NSX then each component should be upgraded in the order listed here
  • Enabling DRS on the vSphere cluster allows running VMs to be automatically migrated when each host is placed into maintenance mode for the NSX VIB upgrades. This process can of course be undertaken manually if DRS is not in use
  • A completed upgrade can be validated following the steps listed here

Backups

Before we start take a backup of the vCenter Server and NSX Manager. NSX configuration can be backed up using FTP/SFTP, see this post for more information. From version 6.4.1 a configuration backup is automatically taken at the start of the upgrade process; this is intended as a fall back and you should still take your own backup before beginning. You can also take a snapshot of the NSX Manager in case we need to roll back the NSX Manager upgrade. For extra peace of mind export the vSphere Distributed Switch configuration by following the instructions here.

In the event you do need to restore from an NSX backup a new appliance should be deployed and the configuration restored, click here for further details.

Upgrade Process

As noted above make sure you have read all the linked documentation, specifically the release notes and operational impacts for each component upgrade. The steps below will not list the operational impact for each step of the upgrade.

Download the NSX for vSphere 6.4.1 Upgrade Bundle from the download page here to a location accessible from the NSX Manager. Browse to the NSX Manager and log in as admin. From the home page click Upgrade.

Click Upload Bundle and browse to the upgrade bundle downloaded earlier, then click Continue. Once the bundle is uploaded you can optionally select to enable SSH and/or join the Customer Experience Improvement Program. Click Upgrade to start the upgrade.

NSX64_1

The installer will now upgrade NSX Manager; once complete you will be returned to the login page.

NSX64_2

Log back into NSX Manager and click Upgrade. Verify the upgrade state is complete and the version number is correct. Click Summary and verify the health of the NSX Manager.

NSX64_3

Log into the vSphere Client; if you were already logged in then log out and back in, or you may need to clear your browser cache. From the Menu drop-down select Networking and Security.

For any upgrade plan the NSX Controller Cluster upgrade is mandatory and performed first. On the Dashboard tab confirm there are 3 controller nodes, all connected; the upgrade cannot commence if any nodes are in a disconnected state.

NSX64_5

Click Installation and Upgrade and select the Upgrade tab. Review the components, any warnings, and current and target version details.

NSX64_4

To start an upgrade plan click Plan Upgrade.

Upgrade Coordinator puts objects of the same type in default upgrade groups when planning an upgrade. These groups and other settings can be modified by planning your own upgrade (controller upgrades are mandatory) or you can allow the system to upgrade everything using a one click upgrade. Select the desired upgrade plan and click Next.

NSX64_7

The default options for the one click upgrade are to upgrade Host Clusters and Service VMs individually (serial), and to upgrade NSX Edges all together (parallel). There is no pause between components or pause on error. If you are happy with these settings then click Start Upgrade to begin the upgrade process, otherwise go back to Plan Your Upgrade.

NSX64_8

Select your own upgrade to choose which components are upgraded; controller upgrades are mandatory and are done first. You can also pause the upgrade between components or pause the upgrade if an error is returned.

NSX64_9

The next 3 pages of the Upgrade Coordinator allow you to manage upgrade groups for Host Clusters, NSX Edges, and Service VMs. When planning your upgrade take into consideration the following:

  • Objects of the same type can be added to or removed from an upgrade group
  • The order of object upgrades within a group can be changed
  • All components included in an upgrade group must be upgraded before the next component type can be upgraded, e.g. all hosts included in an upgrade plan must be upgraded before moving onto Edges, and so on
  • Excluding an object within an upgrade group is useful for multiple maintenance windows, where you want to add an object to an upgrade plan but exclude them from this upgrade session
  • If the upgrade order within a group is set to Serial then each object is upgraded one at a time; if it is Parallel then multiple objects within that group are upgraded at the same time

Controller Upgrades: each controller is upgraded and rebooted one at a time. From NSX 6.3.3 onwards the underlying operating system of the controller nodes changed to Photon-OS. If you are upgrading from 6.3.3 onwards an in-place upgrade is applied. If you are upgrading from 6.3.2 or earlier then the controller nodes are redeployed, and any DRS anti-affinity rules are lost and will need to be reapplied.

Host Upgrades: hosts running NSX 6.2.x require a reboot for the installation of new VIBs; hosts running NSX 6.3.0 and above do not need a reboot but must be placed into maintenance mode. You can either manually place hosts into maintenance mode and vMotion / power off VMs yourself, or allow DRS to live migrate VMs and remediate hosts one at a time. Monitor the status of the NSX Installations on the Upgrade tab. You can also monitor Recent Tasks to make sure a host is not taking too long to enter maintenance mode; if a host cannot be evacuated due to DRS rules, or a VM cannot be migrated, then manual intervention may be required (in this case see here).

If you are using stateless images with Auto Deploy you should also update your ESXi image with the latest NSX VIBs or they will be lost at next reboot, for guidance see this post.

NSX64_10

Configure your upgrade plan based on the components you want to upgrade in this session, and review the final plan. When you're ready click Start Upgrade to begin the upgrade process.

NSX64_13

Monitor the status of the upgrade on the Upgrade page. If any warnings or errors are displayed during the upgrade process see the Monitor and Troubleshoot Your Upgrade page here. If you selected Pause between components you must Resume or Replan after each stage of the upgrade.

nsx64_14

An in-progress upgrade plan can still be paused to make modifications; when paused the object currently being upgraded will continue and the upgrade plan pauses when this object upgrade succeeds or fails.

nsx64_15

After the upgrade is complete verify the Upgrade page shows the system upgrade status successful.

nsx64_16

Verify the NSX health from the Dashboard page. After all NSX components are upgraded if you want to follow additional verification steps then see the upgrade validation KB here, or the post upgrade tasks listed here. You should take a further backup of NSX Manager after completion of the upgrade. Any third party appliances for Guest Introspection or Network Introspection that require an update can now be upgraded.

VMware Cloud on AWS Demo

This opening post will give an overview and demo of VMware Cloud on AWS. VMware Cloud on AWS provides on-demand, scalable cloud environments based on existing vSphere Software-Defined Data Center (SDDC) products. VMware and AWS have worked together to optimise running vSphere, vSAN and NSX directly on dedicated, elastic, bare-metal AWS infrastructure without the need for nested virtualization. An SDDC cloud can be deployed in a few hours and then capacity scaled up and down within minutes.

Key Benefits

There are a number of benefits and use cases for extending on-premise data centers to the cloud with VMware Cloud on AWS:

  • VMware maintains software updates, emergency software patches, and auto-remediation of hardware failures
  • Increasing capacity in the cloud is generally quicker, easier, and sometimes more cost effective than increasing physical capacity in the data center
  • Scale capacity to protect services when met with temporary or unplanned demand
  • Improve business continuity by using the cloud for Disaster Recovery (DR) with VMware Site Recovery
  • Consistent operating environments allow for simplified cloud migrations with minimal re-training for system administrators
  • Transfer your existing operating system and third party licensing to the cloud and make use of existing support contracts with VMware
  • Expand footprint into additional geographical locations without needing to provision new data centers

Key Details

The following links contain enough reading to plan your VMware Cloud on AWS implementation and cloud migration strategy, the points below should be enough to get you started.

VMware Cloud on AWS: Product Documentation | Technical Overview | VMware Product Page | VMware FAQ | AWS Product Page | AWS FAQ | Roadmap | Case Study

  • Each SDDC supports 4 to 32 hosts, each with 512 GB of memory and 2.3 GHz CPUs (custom-built Intel Xeon Processor E5-2686 v4 CPU package) with 18 cores per socket for a total of 36 cores
  • Each SDDC cluster uses an all-flash vSAN configuration utilising NVMe storage
  • An initial 4 host cluster provides roughly 21 TB usable capacity
  • Each ESXi host is connected to an Amazon Virtual Private Cloud (VPC) through Elastic Networking Adapter (ENA), which supports throughput up to 25 Gbps
  • Hybrid Cloud Extension allows stretched subnets between on-premise and cloud data centers for live migration of virtual machines
  • Hybrid Linked Mode allows administrators to connect vCenter Server running in VMware Cloud on AWS to an on-premises vCenter server to view both cloud and on-premises resources from a single interface
  • VMware Cloud on AWS complies with ISO 27001, ISO 27017, ISO 27018, SOC 1, SOC 2, SOC 3, HIPAA, and GDPR
  • VMware Cloud on AWS is managed from a web-based console or RESTful API
  • At the time of writing VMware Cloud on AWS is available in the AWS Europe (Frankfurt and London), AWS US East (N. Virginia) and AWS US West (Oregon) Regions
  • Basic pricing before discount can be calculated here

VMware_AWS

Product Demo

The demo below creates an SDDC in the cloud for lab purposes. Before deploying your own environment you should review all the above linked documentation and do your own research to plan your cloud strategy, as well as the following:

  • Identify or create an AWS account and ensure that all technical personnel have access to the account
  • Identify a VPC and subnet for cross-linking the AWS account to the SDDC
  • Allocate IP ranges for the SDDC, and determine a DNS strategy
  • Identify the authentication model for the SDDC
  • Plan connectivity to the SDDC
  • Develop a network security policy for the SDDC

Browse to the VMware Cloud Services portal (https://console.cloud.vmware.com) and log in using your VMware ID. At the time of writing, to access VMware Cloud on AWS you need to be invited, or you can register for a 30 day single host trial here.

VMware_Cloud

Select VMware Cloud on AWS. If you have not used the service before you will be prompted to create a new organisation. Enter a name for your new organisation and accept the terms of service, click Continue.

AWS_1

Add a credit card to be billed if you use the service. If you are using one of the free or trial methods outlined above you will not be billed.

aws_2.png

After you have created the organisation and added payment information you will be sent to the VMware Cloud on AWS dashboard. The first step is to create our SDDC in the cloud, click Create SDDC.

Billing: annual subscriptions are listed under the Subscriptions tab. You can see other billing information from the drop-down menu next to your organisation name: select Organisation Settings, then View Organisation. From here you have services, identity and access management, billing and subscriptions, and support options.

AWS_3

Select a region and deployment model for the SDDC, enter a name and the number of hosts if you are not using the single host deployment. Click Next.

AWS_4

Follow the instructions to connect an AWS account and assign the relevant capabilities.

AWS_5

Once the connection is successfully established click Next.

AWS_7

Select the VPC and subnet to use then click Next.

AWS_8

Specify a private subnet range for the management subnet or leave blank to use default addressing. As mentioned above ensure you have planned accordingly and are not using any ranges that will conflict with other networks you may connect in the future. Click Deploy SDDC.

AWS_9

The SDDC will now be deployed; it takes around 2 hours to provision the ESXi hosts and all management components.

AWS_10

Once the deployment is complete the dashboard will show the new SDDC and assigned resources. Click View Details (you can toggle the web portal theme using the Dark/Light options in the top right hand corner).

AWS_14

From either the SDDC Summary tab or back on the SDDC dashboard you can seamlessly add additional hosts or clusters at any time.

AWS_15

If needed the chat bubble in the bottom right hand corner of the screen will take you through to support.

AWS_Support

The Network tab shows the network topology and is where you can configure firewall rules, NAT rules, VPN, Direct Connect, etc.

AWS_12

To access the vCenter Server through the vSphere client the relevant port needs opening; a VPN can also be used. Under Management Gateway select Firewall Rules and click Add Rule. Configure the rule to allow access to the vCenter on port 443 and click Save.

AWS_11

Click Open vCenter from either the Summary or Network tab; if access is in place you are given the cloudadmin@vmc.local credentials to open vCenter. Active Directory can also be configured as an identity source later on.

Once you are logged into the vSphere client you will see the familiar vSphere layout.

vCenter_AWS

It is also possible to see your on-premise vCenter Server(s) in the same pane of glass using Hybrid Linked Mode, click here for more information.

Back in the VMware Cloud on AWS portal the Add Ons tab features Site Recovery and Hybrid Cloud Extension for protecting and migrating workloads to your SDDC in the cloud.

AWS_16

You can delete an SDDC from the Actions drop-down menu in either the SDDC Summary tab or the SDDC dashboard. Once an SDDC is deleted all workloads, data, and interfaces are destroyed and any public IP addresses released.

AWS_17

VMware vSAN 6.7 Install Guide


VMware vSAN utilises server attached flash devices and local hard disk drives to create a shared datastore across hosts in a vSphere cluster. VMware vSAN achieves high availability by adding a software layer leveraging existing server hardware to provide the same resiliency and features as expensive SAN, NAS, or DAS arrays. Further to this vSAN is uniquely embedded within the hypervisor kernel, directly in the I/O path allowing it to make rapid data placement decisions without the installation of additional VIBs or virtual appliances. This post intends to give an overview of vSAN 6.5/6.7 and how to enable it.

VSAN

For further reading visit the VMware Documentation Centre and expand vSAN under the relevant version.


Migrating Windows vCenter Server to VCSA 6.7

VMware vCenter Server pools ESXi host resources to provide a rich feature set delivering high availability and fault tolerance to virtual machines. The vCenter Server is a centralised management application and can be deployed as a virtual appliance or Windows machine. It should be noted that vCenter 6.7 is the final release where Windows modules will be available, see here for more information. All future releases will only be available as vCenter Server Appliance (VCSA) which is the preferred deployment method of vCenter Server. This post gives a walk through on migrating from a Windows based vCenter Server (VCS) to the Photon OS based vCenter Server Appliance (VCSA).

vCenter 6.7: Download | Release Notes | What’s New | VMware Docs | vSphere Central

About VCSA

migrate2vcsa

The VCSA is a pre-configured virtual appliance built on Project Photon OS. Since the OS has been developed by VMware it benefits from enhanced performance and boot times over the previous Linux based appliance. Furthermore the embedded vPostgres database means VMware have full control of the software stack, resulting in significant optimisation for vSphere environments and quicker release of security patches and bug fixes. The VCSA scales up to 2000 hosts and 35,000 virtual machines. A couple of releases ago the VCSA reached feature parity with its Windows counterpart, and is now the preferred deployment method for vCenter Server. Features such as Update Manager are bundled into the VCSA, as well as file based backup and restore, and vCenter High Availability. The appliance also saves operating system license costs and is quicker and easier to deploy and patch.

Migrating to VCSA involves the deployment of a new appliance and migration of all configuration (including distributed switches) and historical data using the upgrade installer. The VCSA uses a temporary IP address during migration before switching to the IP and host name of the VCS, the Windows box is then powered off.

Software Considerations

  • The Windows VCS must be v6.0 or v6.5 (any build / patch) to migrate to VCSA 6.7. Both physical and virtual vCenter Server installations are compatible.
  • Any database, internal or external, supported by VCS can be migrated to the embedded vPostgres database within the target VCSA.
  • The ESXi host or vCenter where VCSA will be deployed must be running v5.5 or above. However, all hosts you intend to connect to vCenter Server 6.7 should be running ESXi 6.0 or above, hosts running 5.5 and earlier cannot be managed by vCenter 6.7 and do not have a direct upgrade path to 6.7.
  • The Windows server is powered off once the VCSA is brought online, this means any other components, VMware or third party, need to be migrated off the Windows server in advance or they will no longer work (don’t forget to move and update any scripts that may live on the Windows server).
  • If you are using Update Manager the VCSA now includes an embedded Update Manager instance.
  • You must check compatibility of any third party products and plugins that might be used for backups, anti-virus, monitoring, etc. as these may need upgrading for vSphere 6.7 compatibility.
  • To check version compatibility with other VMware products see the Product Interoperability Matrix.
  • The points above are especially important since at the time of writing vSphere 6.7 is new enough that other VMware and third party products may not have released compatible versions. Verify before installing vSphere 6.7 and review the Release Notes and Important information before upgrading to vSphere 6.7 KB.

Hardware Considerations

  • The VCSA with embedded PSC requires the following hardware resources (disk can be thin provisioned)
    • Tiny (up to 10 hosts, 100 VMs) – 2 CPUs, 10 GB RAM.
    • Small (up to 100 hosts, 1000 VMs) – 4 CPUs, 16 GB RAM.
    • Medium (up to 400 hosts, 4000 VMs) – 8 CPUs, 24 GB RAM.
    • Large (up to 1000 hosts, 10,000 VMs) – 16 CPUs, 32 GB RAM.
    • X-Large (up to 2000 hosts, 35,000 VMs) – 24 CPUs, 48 GB RAM – new to v6.5.
  • Storage requirements for the smallest environments start at 250 GB and increase depending on your specific database requirements. See the Storage Requirements document for further details.
  • Where the PSC is deployed as a separate appliance this requires 2 CPUs, 4 GB RAM, 60 GB disk.
  • Environments with ESXi host(s) with more than 512 LUNs and 2048 paths should be sized large or x-large.
  • To help with selecting the appropriate storage size for the appliance calculate the size of your existing VCS database here.

Architectural Considerations

  • The migration tool supports different deployment topologies but cannot make changes to the topology or SSO domain configuration.
  • For more information on the deployment topologies available with vCenter 6.x see vCenter Server and Platform Services Controller Deployment Types.
  • A series of videos covering vCenter Server and Platform Services Controller architecture can be found here. If you require further assistance with vCenter planning see also the vSphere Topology and Upgrade Planning Tool here.
  • Most deployments will include the vCenter Server and PSC in one appliance, following the embedded deployment model, which I will use in this guide.
  • Consider if the default self-signed certificates are sufficient or if you want to replace with custom CA or VMware CA signed certs, see Installing vCenter Internal CA signed SSL Certificates for more information.

embedded

Other Considerations

  • Ensure you have a good backup of the vCenter Server and the database.
  • Variables such as FQDN resolution, database permissions and access to the licensing portal should all be in place since we are upgrading an existing vCenter solution.
  • All vSphere components should be configured to use an NTP server. The installation can fail or the vCenter Server Appliance vpxd service may not be able to start if the clocks are unsynchronized.
  • The ESXi host on which you deploy the VCSA should not be in lockdown or maintenance mode.
  • You will need the SSO administrator login details, and if the Windows VCS service runs as a service account then the account must have the 'Replace a process level token' permission.
  • Local Windows users that have vSphere permissions are not migrated since they are specific to the Windows server, all SSO users and permissions are migrated.
  • The upgrade can be easily rolled back by following this KB.
  • Migration of vCenter using DHCP, or services with custom ports, is not supported. The settings of only one physical network adapter are migrated.
  • Downtime varies depending on the amount of data you are migrating and is calculated when running the migration wizard.
  • A list of Required Ports for vCenter Server and PSC can be found here.
  • The configuration maximums for vSphere 6.7 can be found here.
  • In vSphere 6.7 TLS 1.2 is enabled by default. TLS 1.0 and TLS 1.1 are disabled by default, review the Release Notes for more information.
  • There are a number of Intel and AMD CPUs no longer supported with vSphere 6.7, review the Release Notes for a full list of unsupported processors.

Process

Before we begin if your existing Windows vCenter is virtual it may be beneficial to rename the vCenter virtual machine name in the vSphere inventory to include -old or equivalent. While the hostname and IP are migrated the vSphere inventory name of the VM cannot be a duplicate. The old server is powered down but not deleted so that we have a back out.

Download the VMware vCenter Server Appliance 6.7 ISO from VMware downloads: v6.7.0. Mount the ISO on your computer. The VCSA 6.7 installer is compatible with Mac, Linux, and Windows. Copy the migration-assistant folder to the Windows vCenter Server (and the PSC server if external). If the PSC is running on a different Windows server then you must run the Migration Assistant on the PSC server first and migrate following the instructions below, then complete the same process on the Windows vCenter Server.

Start the VMware-Migration-Assistant and enter the SSO Administrator credentials to start running pre-checks.

VCSA_Migration_1

If all checks complete successfully the Migration Assistant will finish at ‘waiting for migration to start’.

On a different machine from your Windows vCenter and PSC server(s), open the vcsa-ui-installer folder located in the root of the ISO. Browse to the corresponding directory for your operating system, e.g. \vcsa-ui-installer\win32. Right click Installer and select Run as administrator. The vCenter Server Appliance Installer will open; click Migrate.

VCSA_Migration_2

The migration is split into 2 stages; stage 1 deploys the new appliance with temporary network settings, there is no outage to the Windows vCenter at this stage. Stage 2 migrates data and network settings over to the new appliance and shuts down the Windows server. We begin with deploying the appliance. Click Next.

VCSA_Migration_3

Accept the license terms and click Next.

VCSA_Migration_4

Enter the details of the vCenter Server to migrate, then click Next.

VCSA_Migration_5

Enter the FQDN or IP address of the host, or vCenter, upon which you wish to deploy the new VCSA. Enter the credentials of an administrative or root user and click Next. The installer will validate access; if prompted with an untrusted SSL certificate message click Yes to continue. Tip – connect to the vCenter for visibility of any networks using a distributed switch; connecting to the host directly will only pull back networks using a standard switch.

VCSA_Migration_6

Enter the virtual appliance VM name, this is the name that appears in the vSphere inventory as mentioned earlier. The host name of the vCenter Server will automatically be migrated. Click Next.

VCSA_Migration_7

Select the appropriate deployment size for your environment and click Next.

VCSA_Migration_8

Select the datastore to locate the virtual appliance and click Next. Configure the temporary network settings for the appliance. These will only be used during migration of the data, once complete the temporary settings are discarded and the VCSA assumes the identity, including IP settings, of the Windows vCenter Server. Click Next.

VCSA_Migration_9

Review the settings on the summary page and click Finish. The VCSA will now be deployed. Once complete click Continue to begin the second stage of the migration.

VCSA_Migration_10

Click Next to begin the migration wizard.

VCSA_Migration_11

The source vCenter details are imported from stage 1.

VCSA_Migration_12

As my source Windows vCenter was joined to a domain I am prompted for credentials to join the VCSA to the domain.

VCSA_Migration_13

Select the data to migrate and click Next.

VCSA_Migration_14

Select whether or not to join the VMware Customer Experience Improvement Program and click Next.

VCSA_Migration_15

Review the summary page and click Finish. Data will now be migrated to the VCSA, once complete the Windows vCenter Server will be powered off and the network settings transferred to the VCSA. If you urgently need to power back on the Windows server to retrieve files or such like, then do so with the vNICs disconnected, otherwise you will cause an IP/host name conflict on the network.

VCSA_Migration_16

Post-Installation

Connect to the vCenter post install using the IP or FQDN of the vCenter. Access vSphere by clicking either Launch vSphere Client (HTML5) or Launch vSphere Web Client (FLEX). As the web client will be deprecated in future versions, and the HTML5 client is now nearly at full feature parity, we will use the HTML5 vSphere client.

Windows_vCenter67_14

Management features of the VCSA can be accessed by browsing to the IP or FQDN of the vCenter on port 5480. The login is the root account we configured a password for during the migration wizard.

VCSA_Management