Nimble Storage Integration with SRM

This post will walk through the steps required to prepare Nimble Storage arrays at primary and secondary sites for VMware Site Recovery Manager (SRM) using array-based replication. The following posts in this Site Recovery Manager series detail the end-to-end installation and configuration process.

Part 1 – Nimble Storage Integration with SRM

Part 2 – Site Recovery Manager Install Guide

Part 3 – Site Recovery Manager Configuration and Failover Guide


Before beginning, ensure that time synchronization and DNS are in place across the Nimble arrays and the vCenter / SRM servers. It is best practice to have Active Directory and DNS servers already running at the secondary site, and it is also recommended that virtual machine swap files be stored in a dedicated, non-replicated datastore. Make sure you review the Nimble Storage Best Practices for VMware Site Recovery Manager guide.
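
These prerequisites are easy to sanity-check before you start. The short Python sketch below simply confirms that each management FQDN resolves in DNS and responds on HTTPS (443); the hostnames are examples only, so substitute your own arrays and vCenter / SRM servers.

```python
# check_prereqs.py - quick DNS and reachability check before starting.
# Hostnames below are examples; replace them with your own environment.
import socket

HOSTS = [
    "nimble-primary.lab.local",
    "nimble-secondary.lab.local",
    "vcenter-primary.lab.local",
    "vcenter-secondary.lab.local",
]

def check(host, port=443, timeout=5):
    try:
        ip = socket.gethostbyname(host)                    # DNS resolution
        with socket.create_connection((ip, port), timeout=timeout):
            return f"{host} -> {ip} reachable on port {port}"
    except OSError as err:
        return f"{host} FAILED: {err}"

if __name__ == "__main__":
    for host in HOSTS:
        print(check(host))
```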

All Nimble Storage arrays are listed on the VMware Compatibility Guide, and Nimble have provided a VMware-specific Storage Replication Adapter (SRA) since version 5.1 of SRM. The SRA is the main integration point between SRM and Nimble, allowing storage-level workflows to be initiated from SRM. In my environment I will only be using VMDKs for virtual storage, and can utilise the built-in Nimble vCenter synchronization to quiesce I/O during snapshots. This means the replica is in an application-consistent state and can be cleanly brought back online in the event of a failover. Nimble arrays are supplied inclusive of all features, so there are no additional licensing costs for replication.

VMware Integration

If you’re using Nimble to present LUNs to VMware then it’s likely you configured VMware integration during the initial setup. However, to check, log into the web UI of both the replication source and target Nimble arrays by browsing to the IP address or FQDN. From the Administration drop-down menu select VMware Integration.

If the correct vCenter Server is already registered, confirm the settings using the Test Status button. Otherwise, enter the required vCenter information to register it with the Nimble Storage array.


Furthermore, any ESXi hosts connected to Nimble volumes should have the Nimble Connection Manager (NCM) installed, which includes the Nimble Path Selection Policy (PSP) for VMware. Installing the Nimble VIBs is outside the scope of this article; however, I have briefly outlined the process below.

  • Log in to InfoSight and select Software Downloads from the Resources drop-down menu.
  • Click Connection Manager (NCM) for VMware and download the appropriate version.
  • The downloaded zip file contains the Nimble VIBs, which can then be installed on each host, for example with esxcli or vSphere Update Manager; a rough esxcli sketch follows this list.
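
If you prefer to script the install, the sketch below pushes the esxcli install command to each host over SSH. It is a rough outline only: the host names, credentials, and bundle path are placeholders, hosts should be in maintenance mode first, and the NCM release notes should be checked for the supported method on your ESXi version.

```python
# install_ncm.py - run the esxcli offline bundle install on each host over SSH.
# All names, credentials, and paths below are placeholders for your environment.
import paramiko

HOSTS = ["esxi01.lab.local", "esxi02.lab.local"]        # example ESXi hosts
USER, PASSWORD = "root", "your-password"                # use real credentials
BUNDLE = "/vmfs/volumes/datastore1/ncm-for-esxi.zip"    # uploaded NCM zip (example path)

for host in HOSTS:
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host, username=USER, password=PASSWORD)
    # Install the offline bundle; the output states whether a reboot is required.
    _, stdout, stderr = ssh.exec_command(f"esxcli software vib install -d {BUNDLE}")
    print(host, stdout.read().decode(), stderr.read().decode())
    ssh.close()
```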


Configure Replication Partners

Log into the Nimble web UI using the management IP address or FQDN of the desired replication source array. From the Manage drop-down menu, select Protection, then Replication Partners.


Existing replication partners will be listed; at this stage we don’t have any. Click New Replication Partner.


The replication partner wizard will load.

  • Enter the group name of the replication target in the Partner Name field. The group name can be obtained by logging into the web UI of the target Nimble array and clicking the group referenced in the top right hand corner, or by navigating to Manage, Arrays.
  • Enter a description if required, then enter the hostname or management IP address of the target Nimble array.
  • Enter a shared secret; this must be configured the same on both arrays (a quick way of generating one is sketched after this list).
  • Specify whether replication traffic should use the existing management network or dedicated data IPs.
  • Specify any folder assignments if required.
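
For the shared secret, anything you paste identically into both arrays will do, but it is worth generating a strong value rather than typing one by hand. The snippet below is one way of doing that; check the Nimble documentation for any length or character restrictions before using it (the assumption here is that a 20-character alphanumeric value is acceptable).

```python
# Generate a strong shared secret to paste into both replication partners.
import secrets
import string

alphabet = string.ascii_letters + string.digits
print("".join(secrets.choice(alphabet) for _ in range(20)))
```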


If you want to configure bandwidth limits, this is done on the QoS Policy page. Click Finish once complete.


The replication partner will now be listed with a status of OK, and the Test function should return a success message. Repeat the process on the replication target Nimble array, adding a replication partner for the replication source array.

Configure Volume Replication

Now that we have an available replication target, we can configure replication of important volumes. In the Nimble web UI for the replication source array, navigate to Manage, Volumes. Replication is configured on the Protection tab, either during the New Volume wizard, or for an existing volume by clicking the volume, selecting Edit, and then Protection. Select Create new volume collection and enter a name.


Something to be aware of: later in the SRM configuration stage we will create protection groups, which use consistency groups to group datastores for protection. These SRM consistency groups map to the volume collections in Nimble, so if you want to configure different protection settings for different virtual machines, their volumes will need to use separate volume collections. See here for more information.
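
To make the mapping concrete, the sketch below models it with plain data structures: datastores are assigned to volume collections, and every virtual machine whose datastore sits in the same volume collection will fail over together in the same SRM protection group. All of the names are made up purely for illustration.

```python
# Illustration only: the datastore-to-volume-collection layout drives SRM failover grouping.
volume_collections = {
    "VolColl-Gold":   ["DS-SQL-01", "DS-SQL-02"],   # e.g. 15-minute schedule
    "VolColl-Silver": ["DS-General-01"],            # e.g. hourly schedule
}
vm_datastores = {
    "sql-prod-01": "DS-SQL-01",
    "sql-prod-02": "DS-SQL-02",
    "file-server": "DS-General-01",
}

# VMs grouped by volume collection = VMs that fail over together in SRM.
for collection, datastores in volume_collections.items():
    vms = [vm for vm, ds in vm_datastores.items() if ds in datastores]
    print(f"{collection}: {vms}")
```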

If application or hypervisor synchronization is required, enter the appropriate details. In this case, since we are integrating with SRM, we will select VMware vCenter and enter the vCenter details, ensuring application-consistent copies are replicated to the secondary site.


Configure the protection schedule and how many snapshots to keep locally and on the replication target array, making sure you select the replication partner we created earlier from the Replicate to drop-down menu. When you have entered the required details, click Save.
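
If you are unsure how many snapshots to retain on the target, it is simply the schedule frequency multiplied by the history you want to keep. A quick worked example, using made-up figures:

```python
# How many replica snapshots are needed for a given history on the DR array?
snapshots_per_day = 24     # e.g. an hourly protection schedule
retention_days = 2         # history required on the replication target
replica_count = snapshots_per_day * retention_days
print(f"Keep {replica_count} snapshots on the target for {retention_days} days of hourly restore points.")
```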


Repeat this process for any other volumes requiring replication. Once a volume collection has been created, you can reuse the same protection schedules by selecting the existing volume collection on the Protection tab of a volume.


To view or edit a volume collection, navigate to Manage, Protection, Volume Collections in the Nimble web UI.


Existing replication snapshots are displayed on the Replication tab when selecting a volume, or on the volume collections page referenced above. On the replication target array, replica volumes are displayed on the Volumes page with a grey coupled LUN icon.

We can now move on to installing Site Recovery Manager in Part 2. The only other Nimble-specific step is to install the Storage Replication Adapter (SRA) on the same Windows server as SRM, after SRM has been installed. The Nimble SRA can be downloaded here from InfoSight, and uses a simple next-and-finish installer. After SRM is installed you can confirm the SRA status in the vSphere web client by browsing to Site Recovery Manager, Sites, selecting the site, opening the Monitor tab, and clicking SRAs.


_______________


Configuring EMC Unity Replication

Following on from the EMC Unity Setup Guide and EMC Unity Configuration Guide, we will walk through setting up replication between two Unity arrays. For Remote Office and Branch Office deployments, replication can be configured between the Unity VSA and a physical Unity array in the datacentre.

Replication between storage devices provides data redundancy and protects against storage system failures. EMC Unity provides synchronous and asynchronous replication. Synchronous replication is only available on physical arrays and can protect LUNs, Consistency Groups, and VMware VMFS datastores. Asynchronous replication is available across the entire Unity range and can protect the storage resources listed previously, as well as File Systems, NAS Servers, and VMware NFS datastores. Replication can be configured within the same system, to a different system locally, or to a remote location. All EMC Unity systems are licensed for replication as standard. For more information on EMC Unity replication technologies see the Unity Replication White Paper.

Establishing Replication Connections

Before configuring replication, a secure link between the Unity systems must be established. All tasks are carried out using the HTML5 Unisphere web client. Browse to the IP address or FQDN of either Unity system.


Under Data Protection in the left-hand navigation pane, select Replication. First we need to configure the interfaces to use for replication traffic, so click the Interfaces tab.


Click the plus symbol (Create Replication Interface). Select the interface to use on each storage processor; if you have created link aggregation groups these are also listed. For assistance with creating link aggregation groups see the EMC Unity Configuration Guide. Enter the IP address to use for replication traffic on each storage processor, configure a VLAN ID if required, and click OK.


The created replication interfaces will now be listed; you can also edit and delete replication interfaces from this tab. The replication interfaces need to be configured on both the source and destination systems.

Next we set up a remote connection between the storage systems. From either device, select the Connections tab.


Enter the management IP address and credentials of the remote system, and the admin password for the local system. Select the replication type; in this example we will use Asynchronous. Click OK.


A replication connection will now be established on both the local and remote systems. Once complete, click Close.

Configuring Replication

Select the storage resource for which to configure replication; in this post we will replicate a file system. The NAS Server hosting the file system must itself be configured for replication. Browse to File under the Storage menu, open the NAS Servers tab, and select the NAS server to replicate. Click Edit and select the Replication tab.


Select the replication mode and RPO (Recovery Point Objective) time. The replication destination is the remote system connection we established earlier. Click Next.
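
When choosing an asynchronous RPO it is worth sanity-checking the replication link: the data changed within each RPO interval has to finish transferring before the next sync is due. The back-of-envelope sketch below illustrates the arithmetic; all of the figures are made up, so substitute your own change rate and bandwidth.

```python
# Back-of-envelope check that an RPO is realistic over the available WAN link.
daily_change_gb = 50      # estimated data change per day (example figure)
rpo_minutes = 60          # desired Recovery Point Objective
link_mbps = 100           # usable replication bandwidth in megabits per second

change_per_sync_gb = daily_change_gb * rpo_minutes / (24 * 60)
transfer_minutes = change_per_sync_gb * 8 * 1024 / link_mbps / 60
print(f"~{change_per_sync_gb:.1f} GB per sync, ~{transfer_minutes:.1f} minutes to transfer")
print("RPO looks", "achievable" if transfer_minutes < rpo_minutes else "too aggressive")
```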


Select the storage pool and storage processor for the destination storage system; the NAS Server name will auto-populate. Any existing file systems stored on the NAS Server will be listed for replication. Click Next.


Review the summary page and click Finish and Close.


When creating new file systems, they can be configured for replication on the Replication page of the Create a File System wizard.


A destination file system is automatically created on the destination storage system.


Alternatively, we can configure replication at a later date. To do this, open the File Systems tab, select the file system to replicate, and click Edit.


Select the Replication tab. Click Configure Replication.


The replication wizard will open. The replication session inherits the configuration from the NAS Server. Click Next.


A file system will automatically be created on the destination storage system. Select the storage pool to use on the destination system, then click Next.


Review the summary page and click Finish. The replication session will be established; once complete, click Close.


We can confirm the replication status by going back into the properties of the file system and opening the Replication tab, where the replication status is displayed. The replication role of the file system on the source storage system is Source; on the remote system the file system role is Destination. We can also go back to the Replication page and open Sessions, where the replication sessions will be listed.
