Tag Archives: VMware

Upgrading to vCenter Server 6.5 Update 1

This post will walk through how to update the vCenter Server Appliance (VCSA) from 6.5 to the first major update 6.5 U1. The new features in the latest release are listed here. The official VMware blog goes into further detail here, and of course the release notes cover the important technical information here.

The latest vSphere version is now 6.7, updated posts:

vCenter Server Appliance 6.7 Install Guide

Windows vCenter Server 6.7 Install Guide

Migrating Windows vCenter Server to VCSA 6.7

Prior to updating vCenter ensure you have verified the compatibility of any third party products such as backups, anti-virus, monitoring, etc. Also cross-check the compatibility of other VMware products using the Product Interoperability Matrix. Since we are updating vCenter Server 6.5 to 6.5 U1 I am assuming the usual pre-requisites such as FQDN resolution, time synchronization, relevant ports open, etc. are already in place, and all hosts are running at least ESXi version 5.5. For more information on the requirements for vCenter Server 6.5, or if you are upgrading from an earlier version, the following posts may be of use:

Before beginning the update process take a backup and snapshot of the vCenter Server Appliance. There is downtime during the update but this is minimal: around 10 minutes to update and reboot using an ISO as an update source. When using the online repository the update time will vary depending on your internet connection.

VAMI Update

The easiest way of updating the vCenter Server is through the VAMI (vCenter Server Appliance Management Interface). Browse to https://vCenter:5480, where vCenter is the FQDN or IP address of the vCenter Server. Log in as the root user.


Select the Update option from the navigator.


Click the Check Updates drop-down. If the VCSA has internet access then select Check Repository to pull the update direct from the VMware online repository.

If the VCSA does not have internet access, or you’d prefer to provide the patch manually, then download the relevant patch from VMware here (in this case VMware-vCenter-Server-Appliance-) and attach the ISO to the CD/DVD drive of the VCSA in the virtual machine settings. Back in the VAMI update page select the Check Updates drop-down and click Check CDROM.


Details of the available update from either the online repository or attached ISO are displayed. Click Install Updates.


Accept the EULA and click Install to begin the installation.


When the update process has completed click OK. From an attached ISO the installation took around 5 minutes.


The updated version and release date should now be displayed in the current version details. Finally, to complete the upgrade reboot the vCenter Server Appliance. Select Summary from the navigator and click Reboot.


CLI Update

Alternatively the vCenter Server Appliance can be updated from the command line. Again, either use the online repository, or download the patch from VMware here (VMware-vCenter-Server-Appliance-, or the latest version) and attach the ISO to the CD/DVD drive of the VCSA in the virtual machine settings. For more information on patching the vCenter Server Appliance using the appliance shell see this section of the VMware docs.

Log in to the vCenter Server appliance as root. First stage the patches from your chosen source using either:

  • software-packages stage --iso --acceptEulas stages software packages from the attached ISO and accepts the EULA.
  • software-packages stage --url --acceptEulas stages software packages from the default VMware online repository and accepts the EULA.

Next, review the staged packages, install the update, and reboot the VCSA.

  • software-packages list --staged lists the details of the staged software package.
  • software-packages install --staged installs the staged software package.
  • shutdown reboot -r update reboots the VCSA where ‘update’ is the reboot reason. Use -d to add a delay.
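Putting the CLI steps together, a typical ISO-based patch session in the appliance shell looks like the sketch below (the reboot reason and delay values are examples):

```shell
# Stage the update from the ISO attached to the VCSA CD/DVD drive,
# accepting the EULA non-interactively
software-packages stage --iso --acceptEulas

# Review exactly what has been staged before committing
software-packages list --staged

# Install the staged packages
software-packages install --staged

# Reboot with a reason of 'update' and a one-minute delay (example values)
shutdown reboot -r update -d 1
```

These commands only exist in the VCSA appliance shell, so run them over an SSH session to the appliance rather than locally.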


ESXi 6.5 FCoE Adapters Missing

After installing or upgrading to ESXi 6.5 FCoE adapters and datastores are missing. In this case the hardware in use is an HP ProLiant BL460c Gen9 server with HP FlexFabric 10Gb 2-port 536FLB adapters, although this seems to have been a problem for other vendors (see here) and versions too.

This issue should be resolved with a driver provided by the vendor which has the FCoE auto discovery on boot parameter enabled. Cross reference your hardware against the VMware Hardware Compatibility Guide here, and confirm you are using the correct version of the bnx2fc driver and firmware. If no updated driver is available from the vendor then review the workarounds outlined below.

Stateful Installs

Credit to this article. SSH onto the host and run the following commands.

esxcli fcoe adapter list lists the discovered FCoE adapters; at this stage there will be no results.

esxcli fcoe nic list lists the adapters available as potential FCoE candidates. Locate the name of the adapter.

esxcli fcoe nic enable -n vmnicX enables the adapter; replace vmnicX with the adapter name, for example vmnic2.

esxcli fcoe nic discover -n vmnicX enables discovery on the adapter; replace vmnicX with the adapter name.

esxcli fcoe adapter list lists the discovered FCoE adapters; you should now see the FCoE adapters listed.

The storage adapters should now be showing in the vSphere web client. However, if you are using stateless installs with Auto Deploy, this workaround is not persistent and is lost at reboot.
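If more than one NIC is an FCoE candidate, the enable and discover commands can be looped over each one. This is a sketch for the ESXi shell; it assumes the candidate adapter names appear at the start of their own lines in the esxcli fcoe nic list output, which is worth verifying on your own host first:

```shell
# Enable and discover every FCoE candidate NIC reported by the host
# (assumes candidate names are printed on lines beginning 'vmnic')
for nic in $(esxcli fcoe nic list | grep '^vmnic'); do
  esxcli fcoe nic enable -n "$nic"
  esxcli fcoe nic discover -n "$nic"
done

# The adapters should now be listed
esxcli fcoe adapter list
```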


Stateless Installs

Credit to this article. Using the steps below we were able to create a custom script bundle that enables discovery on the FCoE adapters as part of the deploy rule. Custom script bundles open up a lot of possibilities with Auto Deploy, but at this stage they are CLI only. I also noticed that if you create a deploy rule with a script bundle from the CLI, it shows in the GUI; but if you then edit that rule in the GUI (for something unrelated, e.g. an updated host profile) the script bundle is removed without warning. This is something you would need to weigh up against your environment; if you are already using the CLI to configure deploy rules it shouldn’t be a problem.

PowerCLI can now be installed directly through PowerShell, if you don’t already have PowerCLI installed see here.

  • First up we’ll need to create the script on a Linux / Unix system. I just used a test ESXi host we had kicking about, over SSH. Type vi scriptname.sh, replacing scriptname with an appropriate name for your script.
  • The file will open; type i to begin editing.
  • On the first line enter #!/bin/ash, followed by the relevant enable and discover commands from the section above, for example the commands for enabling vmnic2 and vmnic3 as FCoE adapters.


  • Press escape to leave the text editor and type :wq to save changes to the file and close.
  • Next we need to create the script bundle that will be imported into Auto Deploy. Type tar -cvzf bundlename.tgz scriptname.sh


  • Copy the script bundle with the .tgz extension to your local machine, or the computer from where you will be using PowerCLI to create the deploy rule. In my case I copied the file over with WinSCP.
  • You should also have an ESXi image in zip format; make a note of the location. Add the script bundle and the ESXi software depot by running the following commands: Add-ScriptBundle location\bundlename.tgz and Add-EsxSoftwareDepot location\file.zip. If you need further assistance with building custom images or using PowerCLI to manage Auto Deploy see the VMware Auto Deploy 6.x Guide and How to Create Custom ESXi Images posts.


  • Build the deploy rule using your own variables; if you’re already using Auto Deploy I’m assuming you know this bit, we’re just adding an additional item for the script bundle. See the guide referenced above if you need assistance creating deploy rules. I have used:
    • New-DeployRule -Name "Test Rule" -Item "autodeploy-script","HPE-ESXi-6.5.0-Build-5146846","LAB_Cluster" -Pattern "ipv4=" | Add-DeployRule


  • The deploy rule is created and activated; I can now see it in the Auto Deploy GUI in the vSphere web client, with the associated script bundle. When the host boots from the deploy rule the script is extracted and executed, and the FCoE adapters are automatically enabled and discovered on boot.


  • If you don’t use the | Add-DeployRule parameter then the deploy rule will be created but show as inactive. You can activate it using the GUI, but do not edit the rule using the GUI or the script bundle will break.
  • If you are updating an existing image then don’t forget to remove cached rules by remediating host associations, under the Deployed Hosts tab.
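For reference, the script and bundle creation steps above condense to a few commands on any Linux or ESXi shell. The file names and vmnic numbers are the examples used in this walkthrough, not requirements:

```shell
# Write the Auto Deploy script: ash shebang plus the FCoE enable and
# discover commands for the example adapters vmnic2 and vmnic3
cat > autodeploy-script.sh <<'EOF'
#!/bin/ash
esxcli fcoe nic enable -n vmnic2
esxcli fcoe nic discover -n vmnic2
esxcli fcoe nic enable -n vmnic3
esxcli fcoe nic discover -n vmnic3
EOF

# Package the script as the .tgz bundle that Add-ScriptBundle expects
tar -cvzf autodeploy-script.tgz autodeploy-script.sh
```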

VMware vRealize Business for Cloud Install

VMware vRealize Business for Cloud provides automated cost analysis and consumption metering, allowing administrators to make workload placement decisions between private and public clouds based on cost and available services. Furthermore, infrastructure stakeholders have full visibility of virtual machine provisioning costs and are able to accurately manage capital and operating expenditure. For more information see the vRealize Business product page; you can try vRealize Business for Cloud using the Hands on Labs available here.

This post will walk through the installation of vRealize Business for Cloud 7.3; we’ll be provisioning to a vSphere environment running vRealize Automation 7.3. Each vRealize Business instance scales up to 20,000 virtual machines and 10 vCenter Servers; remote data collectors can be deployed to distributed geographical sites. vRealize Business is deployed in OVA format as a virtual appliance, and you should ensure this appliance is backed up appropriately. There is no built-in HA or DR functionality within vRealize Business, but you can take advantage of VMware components such as High Availability, Fault Tolerance, or Site Recovery Manager. Logs can be output to a syslog server such as vRealize Log Insight.



  • vRealize Business for Cloud must be deployed to an ESXi host, and can be used to manage vCenter Server, vCloud Director, vCloud Air, vRealize Automation, and vRealize Operations Manager.
  • vRB 7.3 is compatible with vCenter and ESXi versions 5.5 through to 6.5, and vRealize Automation versions 6.2.4 through to 7.3 (latest versions at the time of writing).
  • For compatibility with other VMware products see the VMware Product Interoperability Matrix.
  • The vRB appliance requires 8 GB memory, 4 vCPU and 50 GB disk (thick provisioned).
  • If you use any remote data collectors the memory on these appliances can be reduced to 2 GB.
  • vRealize Business for Cloud is licensed as part of the vRealize suite, per CPU, or in packs of 25 OSIs.
  • There are two available editions: Standard and Advanced. Features such as public cloud costing require the Advanced edition; for more information see the feature comparison section of the product page.
  • The web UI can be accessed from IE 10 or later, Chrome 36.x or later, and Firefox 31.x or later.
  • Time synchronization and name resolution should be in place across all VMware components.
  • For a full list of pre-requisites including port requirements see here.

Before beginning review the following VMware links:

Installing vRB

Download the VMware vRealize Business for Cloud 7.3 OVA file here. Log into the vSphere web client and right click the datastore, cluster, or host where you want to deploy the virtual appliance. Select Deploy OVF Template and browse to the location of the OVA file.

  • Enter a name for the virtual appliance and select the deployment location, click Next.
  • Confirm the compute resource and click Next.
  • Review the details of the OVF template and click Next.
  • Accept the end user license agreement and click Next.
  • Select the storage for the virtual appliance, ensure the virtual disk format is set to Thick provision eager zeroed, and click Next.
  • Select the network to attach to the virtual appliance and click Next.
  • Set the Currency; note that at this time the currency cannot be changed after deployment. Ensure Enable Server is checked, and select or de-select SSH and the customer experience improvement program based on your own preferences. Configure a Root user password for the virtual appliance and enter the network settings in the Networking Properties fields.
  • Click Next and review the summary page. Click Finish to deploy the virtual appliance.

Once the virtual appliance has been deployed and powered on open a web browser to https://vRB:5480, where vRB is the IP address or FQDN of the appliance. Log in with the root account configured during setup.


Verify the settings under Administration, Time Settings, and Network. At this stage the appliance is ready to be registered with a cloud solution. In this example I will be using vRealize Automation; for other products or further information see the install guide referenced above. Return to the Registration tab and ensure vRA is selected.


Enter the host name or IP address of the vRA appliance or load balancer. Enter the name of the vRA default tenant and the default tenant administrator username and password. Select Accept vRealize Automation certificate and click Register.

Accessing vRB

vRealize Business for Cloud can be integrated into vRealize Automation, or you can enable stand-alone access. To access vRB after integrating with vRA, log in to the vRA portal. First open the Administration tab, select Directory Users and Computers, search for a user or group and assign the relevant business management roles. A user with a business management role has access to the Business Management tab in vRA.


Optional: to enable stand-alone access first enable SSH from the Administration tab. Use a client such as PuTTY to open an SSH connection to the virtual appliance and log in with the root account. Enter cd /usr/ITFM-Cloud/va-tools/bin to change directory, then enter sh manage-local-user.sh and select the operation, in this case 5 to enable local authentication.


If you want to create new local users use option 1 and enter the username and password; when prompted for permissions, VCBM_ALL provides administrator access and VCBM_VIEW read-only access. You can also log in to the web UI with the root account, although it would be better practice to create a separate account.
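As a single SSH session on the appliance, the stand-alone access steps above look like this (the menu option numbers are as seen on the 7.3 appliance and may differ between builds):

```shell
# From an SSH session to the vRB appliance, logged in as root
cd /usr/ITFM-Cloud/va-tools/bin
sh manage-local-user.sh
# At the menu: option 5 enables local authentication; option 1 creates
# a new user (grant VCBM_ALL for admin or VCBM_VIEW for read-only)
```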

Disable SSH from the Administration tab if required. Wait a few minutes for the services to restart and then browse to https://IP/itfm-cloud/login.html, where IP is the IP address of your appliance. If you try to access this URL without enabling stand-alone access you will receive an HTTP Status 401 – Authentication required error message.

vRB Configuration

We will continue with the configuration in the vRA portal, open the Administration tab and click Business Management.


Expand License Information, enter a license key and click Save. Expand Manage Private Cloud Connections and configure the required connections. In this example I have added multiple vCenter Server endpoints. Open the Business Management tab and the Launchpad will load.


Select Expenses, Private Cloud (vSphere) and click Edit Expenses. At this stage you will need the figures associated with hardware, storage, and licensing for the environment. You can also add costs for maintenance, labour, network, facilities, and any other additional costs.


Once vRB is populated with the new infrastructure costs, utilisation and projected pricing will start to be updated. Consumption showback, what-if analysis, and public cloud comparisons can all be accessed from the navigation menu on the left hand side. For further guidance on getting the most out of vRB see the vRealize Business for Cloud User Guide.


Vembu Install Guide – Part 2 vSphere Configuration

Following on from the previous installation of Vembu Backup Server v3.7 on Windows Server 2016, this post will walk through the configuration steps to add a vCenter target and configure virtual machine backup jobs.

Vembu Install Guide – Part 1 Installation

Vembu Install Guide – Part 2 vSphere Configuration

Initial Configuration

On the Windows server running Vembu Backup Server browse to https://localhost:6061 and log in with the default username and password of admin / admin. Alternatively, from a remote machine browse to the FQDN or IP address of the Vembu Backup Server over port 6061. If you changed the web UI port or default credentials during setup then update accordingly.


The Vembu BDR dashboard will load.


When logging in you will notice the license notification in the bottom left hand corner. The full feature set is available for 30 days, after which you can continue with Vembu BDR Suite free edition. To purchase licenses you can register the server from the Management tab, under Server Management, and Server Registration.

Email notifications can also be configured from the Management tab, under Settings, and Email Settings.

If you didn’t configure a backup repository during installation then we’ll need to do this now before continuing. From the Management drop-down menu select Storage Management.

Select the appropriate location for the backup repository; this can be a local disk:


Or a SAN / NAS hosted network share:


Configuring vSphere Backups

Once a valid storage repository is in place we can begin configuring backup jobs. First let’s add the vCenter: from the Backup tab select VMware vSphere. Click Add VMware vSphere Server and enter the vCenter FQDN or IP address, and credentials. Click Save.


Added vCenter Servers can be viewed in the List VMware Servers page.


Set up a backup job by clicking Backup next to the relevant vCenter Server; the backup wizard loads.

Select the clusters, hosts or VMs to back up. You can also exclude VMs or individual disks here. When you’re ready, click Next.


Set the backup schedule to your own preferences, and click Next.


Configure the appropriate retention policy and click Next.


Application aware backups can be configured here; select any servers that require application-consistent snapshots, such as Exchange, SQL, etc. and select whether transaction logs should be truncated post backup. You need to enter valid guest OS credentials; create a service account for this purpose if one doesn’t already exist.


Review the settings, and click Next. You can run the backup straight away by ticking the option if required.


Click Ok to confirm creation of the new backup job; you will now see this in the list of backup jobs.


If you opted to run the backup immediately you will see the backup progress page.


You can run the backup at any time from the List all Backups page, and also edit or delete backup jobs.



Vembu Install Guide – Part 1 Installation

Vembu Install Guide – Part 2 vSphere Configuration


vCenter Server Appliance Integrated TFTP Server

This post covers the steps required to use the vCenter Server Appliance for Auto Deploy, with the built in TFTP server in vSphere 6.5. For more information on Auto Deploy, and to see the process for creating ESXi images and deploy rules to boot hosts, see the VMware Auto Deploy 6.x Guide. This post assumes that you have a working vCenter Server Appliance, and may be of use if you have recently migrated from Windows vCenter to VCSA.

Enable Auto Deploy

Open the vSphere web client and click System Configuration, Nodes. Select the vCenter Server and open the Related Objects tab. The Auto Deploy, ImageBuilder Service, and VMware vSphere ESXi Dump Collector services should all be set to Automatic and Running.

To start a service right-click and select Start, then select Edit Startup Type and choose Automatic.

Log out of the web client and log back in. You should now see the Auto Deploy icon on the home page.


Enable TFTP

Now that Auto Deploy is enabled we can configure the TFTP server. Enable SSH on the VCSA by browsing to the Appliance Management page: https://VCSA:5480 where VCSA is the IP or FQDN of your appliance.

Log in as the root account. From the Access page enable SSH Login and Bash Shell.


SSH onto the vCenter Appliance using a client such as PuTTY, and log in with the root account. First type shell and hit enter to launch Bash.

To start the TFTP service enter service atftpd start. Check the service is started using service atftpd status.


To allow TFTP traffic through the firewall on port 69, we must run iptables -A port_filter -p udp -m udp --dport 69 -j ACCEPT. Validate traffic is being accepted over port 69 using iptables -nL | grep 69.


The TFTP server will now work; however, we need to make a couple of additional changes so the configuration persists after the VCSA is rebooted. There isn’t an official VMware way of doing this, and as it’s done in Linux there may be more than one way of achieving what we want. Basically I am going to back up iptables and create a script that restores iptables and starts the TFTP service when the appliance boots. The steps below worked for me; as a reminder this is not supported by VMware, and if you are a Linux expert you’ll probably find a better way round it.

The following commands are all run in Bash on the vCenter Server Appliance; you can stay in the existing session we were using above.

First make a copy of the existing iptables config by running iptables-save > /etc/iptables.rules.

Next change the directory by running cd /etc/init.d, and create a new script: vi scriptname.sh, for example: vi starttftp.sh.

Press i to begin typing. I used the following, which was copied from the Image Builder Service startup script and modified for TFTP.

#! /bin/sh
# TFTP Start/Stop the TFTP service and allow port 69
# chkconfig: 345 80 05
# description: atftpd

# Provides: atftpd
# Required-Start: $local_fs $remote_fs $network
# Required-Stop:
# Default-Start: 3 5
# Default-Stop: 0 1 2 6
# Description: TFTP

service atftpd start
iptables-restore -c < /etc/iptables.rules

The file must be in the above format to be compatible with chkconfig which runs the script at startup. I left the defaults in from the Image Builder Service as it made sense they started at the same time and had the same dependencies. If you wish to modify further see the following sources: Bash commands, Script Options, Startup, and vmwarebits.com for the iptables commands.

Press escape to leave the editor and :wq to save the file and quit.

Next set execute permissions on the script by running chmod +x scriptname.sh, for example: chmod +x starttftp.sh.

To set the script to run at startup use chkconfig --add scriptname.sh, for example: chkconfig --add starttftp.sh.

Reboot the vCenter appliance to test the script is running. If successful the atftpd service will be started and port 69 allowed, you can check these with service atftpd status and iptables -nL | grep 69.
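If you prefer not to use vi, the same startup script can be written in one pass with a heredoc. The sketch below writes to a scratch directory for illustration; on the appliance the target path would be /etc/init.d/starttftp.sh, followed by the chkconfig --add step described above:

```shell
# Write the startup script (scratch path for illustration; on the VCSA
# use /etc/init.d/starttftp.sh)
mkdir -p /tmp/vcsa-tftp
cat > /tmp/vcsa-tftp/starttftp.sh <<'EOF'
#! /bin/sh
# TFTP Start/Stop the TFTP service and allow port 69
# chkconfig: 345 80 05
# description: atftpd

service atftpd start
iptables-restore -c < /etc/iptables.rules
EOF

# Make the script executable so chkconfig can register it at startup
chmod +x /tmp/vcsa-tftp/starttftp.sh
```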

Close the session and disable SSH if required.

Configure DHCP

In this example I will be using PXE boot to boot the ESXi hosts using a DHCP reservation. On the DHCP scope that will be serving the hosts I have configured reservations and enabled options 066 and 067. In the value for option 066 (Boot Server Host Name) goes the IP address or FQDN of the vCenter Server where TFTP is running. In the value for option 067 (Bootfile Name) I have entered the BIOS DHCP File Name (undionly.kpxe.vmw-hardwired).


Now that Auto Deploy is up and running using the built-in components of VCSA 6.5 you can begin creating ESXi images and deploy rules to boot hosts; using the Auto Deploy GUI. See the VMware Auto Deploy 6.x Guide.

Add a User Defined Windows Administrator to a vRA Blueprint

This post will walk through implementing a process allowing a vRA portal user to specify a user account to be added to the local administrators group on a Windows server provisioned by vRA. There are plenty of posts out there, including a KB article, on adding the virtual machine requester (owner) to the administrators group if that is what you need to do. Before beginning I am assuming you have a fully working vRA installation (I’m using v7.2), and Windows templates with the vRealize Automation Guest Agent installed. Some blueprints would also be handy, but you can create those after.

We’ll need a script on the template Windows machine. In this example I’ve created a Scripts sub-folder within the VRMGuestAgent folder, and a new text file which I’ve saved as AdminUser.cmd. The full path therefore is C:\VRMGuestAgent\Scripts\AdminUser.cmd.


Copy and paste the following line into the batch file: Net localgroup administrators /add %1.


Log in to the vRA portal, for example https://*loadbalancer*/vcac/org/*tenant*. Open the Administration tab and select Property Dictionary. We need to provide the user with a field in the virtual machine request process for them to specify an account to be added as a local administrator. Click Property Definitions and New.

  • Enter a name; it is best practice to use the tenant name, a dot, and then the name of the property definition, for example YourTenant.AdminUser.
  • Enter a useful description, this text will be displayed when the user points to the help symbol next to the field we’re adding in the virtual machine request.
  • Change the Data type to String, and select whether you want the field to be mandatory.
  • From the Display as drop-down menu select Textbox. Click Ok to save.


Next click Property Groups. If your blueprints are using an existing property group then click the property group. If you need to create a new property group click New and enter a name. The following lines need adding to the property group that is used, or will be used, by a blueprint.

  • Name:   VirtualMachine.Software0.Name
  • Value:   AdminUser
    • Replace the value with an appropriate name for the property, I have used the same name as the script but it doesn’t have to match up.
  • Name:   VirtualMachine.Software0.ScriptPath
  • Value:   C:\VRMGuestAgent\Scripts\AdminUser.cmd {YourTenant.AdminUser}
    • Replace the value with the location of the script on the template OS and include the squiggly brackets, with the name of the property definition we created earlier inside.
  • Name:   YourTenant.AdminUser
  • Value:
  • Show in Request:   Yes
    • Enter the name of the property definition we created earlier and leave the value blank (this will be entered by the user). Ensure Show in Request is ticked.

If you are already using VirtualMachine.Software0 for something else, such as adding the virtual machine owner to the local administrators group, then you can amend to VirtualMachine.Software1 and so on. When you’re done the entries should look something like this; click Ok.


If you haven’t yet assigned a property group to your blueprint then click the Design tab and Blueprints. Click the blueprint to edit, select the vSphere_Machine and click the Properties tab; from the Property Groups tab click Add.


Select the property group we recently created or changed and click Ok. Click Save and Finish. The values in the property group will now be applied to any virtual machines deployed from this blueprint, repeat as required for any other vSphere_Machines or blueprints.

Assuming your blueprint is published and has the necessary entitlements, click the Catalog tab. Locate the catalog item linked to the blueprint and click Request. Select the vSphere_Machine component and you’ll see the new field for the requester to enter the domain\user or user@domain account to be added to the Windows local Administrators group. If you opted to make data input mandatory you’ll see an asterisk next to the new field.


Site Recovery Manager Configuration and Failover Guide

This post will walk through the configuration of Site Recovery Manager; we’ll protect some virtual machines with a Protection Group, and then fail over to the DR site using a Recovery Plan. The pre-requisites for this post are for Site Recovery Manager (SRM) and the Storage Replication Adapter (SRA) to be installed at both sites along with the corresponding vSphere infrastructure, and replication to be configured on the storage array. It is also possible to use vSphere Replication, for more information see the previous posts referenced below.

Part 1 – Nimble Storage Integration with SRM

Part 2 – Site Recovery Manager Install Guide

Part 3 – Site Recovery Manager Configuration and Failover Guide

Site Recovery Manager now has integration with the HTML5 vSphere client, see VMware Site Recovery Manager 8.x Upgrade Guide for more information.

Before creating a Recovery Plan ensure that you have read the documentation listed in the installation guide above and have the required components for each site. You should also make further design considerations around compute, storage, and network. In this post we will be using storage based replication and stretched VLANs to ensure resources are available at both sites. If you want to assign a different VLAN at the failover site then you can use SRM to reconfigure the network settings, see this section of the documentation center.


Configuring SRM

Log into the vSphere web client for the primary site as an administrator, and click the Site Recovery Manager icon.


The first step is to pair the sites together. When sites are paired either site can be configured as the protected site.

  • Click Sites, both installed sites should be listed, select the primary site.
  • On the Summary tab, in the Guide to configuring SRM box, click 1. Pair sites.
  • The Pair Site Recovery Manager Servers wizard will open. Enter the IP address or FQDN of the Platform Services Controller for the recovery site, and click Next.
  • The wizard then checks the referenced PSC for a registered SRM install. Select the corresponding vCenter Server from the list and enter SSO administrator credentials.
  • Click Finish to pair the sites together.

Now the sites are paired they should both show as connected. When we configure protection one will be made the protected site and the other the failover site.


Next we will configure mappings to determine which resources, folders, and networks will be used at both sites.

  • Locate the Guide to configuring SRM box and the subheading 2. Configure inventory mappings.
  • Click 2.1 Create resource mappings.
  • Expand the vCenter servers and select the resources, then click Add mappings and Next.
  • On the next page you can choose to add reverse mappings too, using the tick box if required.
  • Click Finish to add the resource mappings.


  • Click 2.2 Create folder mappings.
  • Select whether you want the system to automatically create matching folders in the failover site for storing virtual machines, or if you want to manually choose which folders at the protected site map to which folders at the failover site. Click Next.
  • Select the folders to map for both sites, including reverse mappings if required, and click Finish.


  • Click 2.3 Create network mappings.
  • Select whether you want the system to automatically create networks, or if you want to manually choose which networks at the protected site map to which networks at the failover site. Click Next.
  • Select the networks to map for both sites and click Next.
  • Review the test networks, these are isolated networks used for SRM test failovers. It is best to leave these as the default settings unless you have a specific isolated test network you want to use. Click Next.
  • Include any reverse mappings if required, then click Finish.

Next we will configure a placeholder datastore. SRM creates placeholder virtual machines at the DR site, when a failover is initiated the placeholder virtual machines are replaced with the live VMs. A small datastore is required at each site for the placeholder data, placeholder VMs are generally a couple of KBs in size.

  • Click 3. Configure placeholder datastore.
  • Select the datastore to be used for placeholder information and click Ok.

The screenshot below shows the placeholder VMs in the failover site on the left, and the live VMs in the protected site on the right.


Although we followed the wizard on the site summary page for the above tasks, it is also possible to configure or change these settings later: select the site, then the Manage tab, where all the different mappings are listed.


Site Protection

The following steps will configure site protection. We'll start by adding the storage arrays.

  • Click 4. Add array manager and enable array pair.
  • Select whether to use a single array manager, or add a pair of arrays, depending on your environment, and click Next. I’m adding two separate arrays.


  • Select the site pairing and click Next.
  • Select the installed Storage Replication Adapter and click Next.


  • Enter the details for the two storage arrays where volumes are replicated and click Next.
  • Select the array pair to enable and click Next.
  • Confirm the details on the review page and click Finish.

An array pair can be managed by selecting the SRM site and clicking the Related Objects tab, then Array Based Replication. If you add new datastores to the datastore group, you can check they have appeared by selecting Array Based Replication from the Site Recovery Manager home page, selecting the array, and clicking the Manage tab. Array pairs and replicated datastores will be listed; click the blue sync icon to discover new devices.

Now the storage arrays are added we can create a Protection Group.

  • Click 5. Create a Protection Group.
  • Enter a name for the protection group and select the site pairing, click Next.


  • Select the direction of protection and the type of protection group. In this example I am using datastore groups provided by array based replication, so I'll select the array pair configured above and click Next.


  • Select the datastore groups to protect; the datastores and the virtual machines they contain will be listed. Click Next.
  • Review the configuration and click Finish.
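A protection group ties a replicated datastore group to the VMs that live on it, so everything in the group fails over together. A rough model of that relationship (illustrative only):

```python
# Illustrative model: a protection group covers whole datastore groups,
# so every VM on those datastores is included in the group.

def protection_group(name, datastore_groups, vm_inventory):
    """vm_inventory maps datastore name -> list of VMs stored on it."""
    vms = [vm for ds in datastore_groups for vm in vm_inventory.get(ds, [])]
    return {"name": name, "datastores": datastore_groups, "vms": vms}

inventory = {"Replicated-DS01": ["app01", "app02"], "Replicated-DS02": ["db01"]}
pg = protection_group("PG-Prod", ["Replicated-DS01", "Replicated-DS02"], inventory)
print(pg["vms"])  # ['app01', 'app02', 'db01']
```

This is why protection is granted per datastore group rather than per VM: the unit of replication on the array decides the unit of failover.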

The final step is to group our settings together in a Recovery Plan.

  • Click 6. Create a Recovery Plan.
  • Enter a name for the recovery plan and select the site pairing, click Next.
  • From the sites detected select the recovery site and click Next.
  • Select the Protection Group we created above and click Next.
  • Review the test networks; these are isolated networks used for SRM test failovers. It is best to leave the default settings unless you have a specific isolated test network you want to use. Click Next.
  • Review the configuration and click Finish.
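A recovery plan wraps one or more protection groups and powers VMs on in priority order at the recovery site (SRM uses priority groups 1 to 5, with group 1 started first). A simplified sketch of that ordering, with illustrative names:

```python
# Illustrative sketch of recovery-plan startup ordering:
# lower priority number boots first (SRM priority groups 1-5).

def startup_order(vm_priorities):
    """vm_priorities: dict of VM name -> priority group (1-5)."""
    return sorted(vm_priorities, key=lambda vm: vm_priorities[vm])

order = startup_order({"app01": 3, "db01": 1, "web01": 5})
print(order)  # ['db01', 'app01', 'web01']
```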

Now that we have green ticks against each item in the Guide to configuring SRM box, we can move on to testing site failover. The array based replication, Protection Group, and Recovery Plan settings can all be changed, or new ones created, using the menus on the left-hand side of the Site Recovery Manager home page.


Site Failover

SRM allows us to do a test failover, as well as an actual failover in the event of a planned or unplanned site outage. The test failover brings online the replicated volumes and starts up the virtual machines, using VMware Tools to confirm the OS is responding. It does not connect the network or impact the production VMs.

  • Log in to the vSphere web client for the vCenter Server located at the DR site.
  • Click Site Recovery, click Recovery Plans and select the appropriate recovery plan.
    • To test the failover plan click the green start button (Test Recovery Plan).
    • Once the test has completed click the cleanup icon (Cleanup Recovery Plan) to remove the test data, previous results can still be viewed under History.
  • To initiate an actual failover click the white start button inside a red circle (Run Recovery Plan).
  • Select the tick-box to confirm you understand the virtual machines will be moved to different infrastructure.
  • Select the recovery type. If the primary site is available use Planned migration; datastores will be synchronized before failover. If the primary site is unavailable use Disaster recovery; datastores will be brought online using the most recent replica on the storage array.
  • Click Next and then Finish.
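The three ways of running a plan differ mainly in how the replicated storage is handled. The decision logic described above can be sketched as follows (the function and step names are illustrative, not the SRM workflow engine):

```python
# Illustrative decision logic for running an SRM recovery plan.

def run_plan(mode, primary_site_up):
    """mode: 'test', 'planned', or 'disaster'. Returns the steps taken."""
    if mode == "test":
        # Test failovers use isolated test networks and never touch production.
        return ["snapshot replica volumes", "power on VMs on test network",
                "wait for VMware Tools heartbeat"]
    if mode == "planned":
        if not primary_site_up:
            raise RuntimeError("Planned migration needs the primary site online")
        # Sync storage and shut down cleanly before promoting the replicas.
        return ["final storage sync", "shut down protected VMs",
                "promote replica volumes", "power on VMs at DR site"]
    if mode == "disaster":
        # Primary may be gone: start from the most recent replica on the array.
        return ["promote most recent replica", "power on VMs at DR site"]
    raise ValueError(f"unknown mode: {mode}")

print(run_plan("disaster", primary_site_up=False)[0])  # promote most recent replica
```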


During the failover you will see the various tasks taking place in vSphere. Once complete the placeholder virtual machines in the DR site are replaced with the live virtual machines. The virtual machines are brought online in the priority specified when we created the Recovery Plan.


Ensure the virtual machines are protected again as soon as the primary site is available by following the re-protection steps below.

Site Re-Protection

When the primary site is available the virtual machines must be re-protected to allow failback. Likewise after failing back to the primary site the virtual machines must be re-protected to allow failover again to the DR site.

  • Log in to the vSphere web client for either site and click Site Recovery, Recovery Plans and select the appropriate Recovery Plan.
  • Under Monitor, Recovery Steps, the Plan status needs to show Recovery complete before we can re-protect.


If the status shows incomplete you can troubleshoot which virtual machine(s) are causing the problem under Related Objects, Virtual Machines. VMware Tools must be running on the VMs for SRM to confirm the recovery has completed.

  • To re-protect virtual machines click Reprotect from the Actions menu at the top of the page.
  • Click the tick-box to confirm you understand the machines will be protected based on the sites specified.


  • Click Next and Finish. The re-protect job will now run, follow the status in the Monitor tab.


Once complete the Plan Status and Recovery Status will show Complete, and the virtual machine Protection Status will show Ok. The VMs are now protected and can be failed over to the recovery site. If you are failing back to the primary site, follow the same steps as outlined in the Site Failover section above, and remember to then re-protect the VMs so they can fail over to the DR site again in the event of an outage. When a Recovery Plan is active the status will show Ready, meaning the plan is ready for test or recovery.
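The status values above form a simple cycle: a Ready plan can be run, and a completed recovery must be re-protected before the plan is Ready again. A sketch of those transitions using the labels from this guide (not an API):

```python
# Illustrative state cycle for an SRM recovery plan, using the status
# labels described in this guide.

TRANSITIONS = {
    ("Ready", "run recovery"): "Recovery complete",
    ("Recovery complete", "reprotect"): "Ready",
}

def next_status(status, action):
    """Return the plan status after an action, or raise if it is not allowed."""
    try:
        return TRANSITIONS[(status, action)]
    except KeyError:
        raise ValueError(f"Cannot '{action}' while plan is '{status}'")

print(next_status("Recovery complete", "reprotect"))  # Ready
```

This is why attempting a re-protect before the recovery shows complete fails: the transition simply is not available from that state.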



Part 1 – Nimble Storage Integration with SRM

Part 2 – Site Recovery Manager Install Guide

Part 3 – Site Recovery Manager Configuration and Failover Guide