Nimble Storage Integration with SRM

This post will walk through the steps required to prepare Nimble Storage arrays at primary and secondary sites for VMware Site Recovery Manager (SRM) using array-based replication. The following posts in this Site Recovery Manager series detail the end-to-end installation and configuration process.

Part 1 – Nimble Storage Integration with SRM

Part 2 – Site Recovery Manager Install Guide

Part 3 – Site Recovery Manager Configuration and Failover Guide

SRM

Before beginning, ensure that time synchronization and DNS are in place across the Nimble arrays and vCenter / SRM servers. It is best practice to have Active Directory and DNS servers already running at the secondary site, and it is also recommended that virtual machine swap files are stored in a dedicated, non-replicated datastore. Make sure you review the Nimble Storage Best Practices for VMware Site Recovery Manager guide.
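
As a quick sanity check, the following can be run from the vCenter / SRM Windows servers to confirm time synchronization, name resolution, and reachability of the array management interfaces. This is just a sketch; the array names used (nimble-dc01 / nimble-dr01) are placeholders for your own FQDNs.

    # Confirm the Windows time service is synchronised
    w32tm /query /status

    # Confirm DNS resolution for the source and target Nimble arrays (placeholder names)
    Resolve-DnsName nimble-dc01.lab.local
    Resolve-DnsName nimble-dr01.lab.local

    # Confirm the array management interfaces are reachable over HTTPS
    Test-NetConnection nimble-dc01.lab.local -Port 443
    Test-NetConnection nimble-dr01.lab.local -Port 443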

All Nimble Storage arrays are listed on the VMware Compatibility Guide, and Nimble have been providing a VMware-specific Storage Replication Adapter (SRA) since version 5.1 of SRM. The SRA is the main integration point between SRM and Nimble, allowing storage workflows to be initiated from within SRM. In my environment I will only be using VMDK for the virtual storage, and can utilise the Nimble built-in vCenter synchronization to quiesce I/O during snapshots. This means the replica is in an application-consistent state and can be cleanly brought back online in the event of failover. Nimble arrays are supplied inclusive of all features, so there are no additional licensing costs for replication.

VMware Integration

If you’re using Nimble to present LUNs to VMware then it’s likely you configured VMware integration during the initial configuration. To check, log into the web UI of both the replication source and target Nimble arrays by browsing to the IP address or FQDN. From the drop-down Administration menu select VMware Integration.

If the correct vCenter Server is already registered, confirm the settings using the Test Status button. Otherwise, enter the required vCenter information to register with the Nimble Storage array.

vcenter

Furthermore, any ESXi hosts connected to Nimble volumes should have the Nimble Connection Manager installed, which includes the Nimble Path Selection Policy (PSP) for VMware. Installing the Nimble VIBs is not covered in detail in this article; however, I have briefly outlined the process below.

  • Log in to InfoSight and select Software Downloads from the Resources drop down.
  • Click Connection Manager (NCM) for VMware and download the appropriate version.
  • The downloaded zip file contains the Nimble VIBs; you can install these using VMware Update Manager, by installing the VIBs manually on each host, or by building them into your image if you are using Auto Deploy (a sketch of the manual method follows below).
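
As a rough illustration of the manual method, the PowerCLI sketch below pushes the NCM offline bundle to a single host using the esxcli interface. The vCenter, host, datastore path, and bundle file name are placeholders; the exact bundle name varies by NCM and ESXi version, and hosts should normally be in maintenance mode when installing VIBs.

    # Connect to vCenter (placeholder name)
    Connect-VIServer vcenter.lab.local

    # Install the NCM offline bundle on one host via the esxcli v2 interface
    # The depot path is a placeholder - copy the bundle to a datastore first
    $esxcli = Get-EsxCli -VMHost esxi01.lab.local -V2
    $esxcli.software.vib.install.Invoke(@{depot = "/vmfs/volumes/datastore1/nimble-ncm-for-esx.zip"})

    # Confirm the Nimble VIBs are now present on the host
    $esxcli.software.vib.list.Invoke() | Where-Object { $_.Vendor -match "Nimble" }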

nmp

Configure Replication Partners

Log into the Nimble web UI using the management IP address or FQDN of the desired replication source array. From the drop-down Manage menu, select Protection, and Replication Partners.

replication

Existing replication partners will be listed; at this stage we don’t have any. Click New Replication Partner.

replication1

The replication partner wizard will load.

  • Enter the group name of the replication target in the partner name field. The group name can be obtained by logging into the web UI of the target Nimble array and clicking the Group referenced in the top right-hand corner, or by navigating to Manage, Arrays. Then fill in the remaining details.
  • Enter a description if required. Enter the hostname or management IP address of the target Nimble array.
  • Enter a shared secret; this must be configured the same on both arrays.
  • Specify whether replication traffic should use the existing management network or dedicated data IPs.
  • Specify any folder assignments if required.

replication2

If you want to configure bandwidth limits, this is done on the QoS Policy page. Click Finish once complete.

replication3

The replication partner will now be listed with a status of OK, and the Test function should come back with a success message. Repeat the process on the replication target Nimble array, adding a replication partner for the replication source array.

Configure Volume Replication

Now that we have an available replication target, we can configure replication of important volumes. In the Nimble web UI for the replication source array, navigate to Manage, Volumes. Replication is configured on the Protection tab, either during the New Volume wizard, or for an existing volume by clicking the volume and Edit, then selecting Protection. Select Create new volume collection and enter a name.

rep1

Something to be aware of: later, in the SRM configuration stage, we will create protection groups, which use consistency groups to group datastores for protection. These SRM consistency groups map to volume collections in Nimble, so if you want to configure different protection settings for different virtual machines, they will need to be in volumes using separate volume collections. See here for more information.

If application or hypervisor synchronization is required then enter the appropriate details. In this case, since we are integrating with SRM, we will select VMware vCenter and enter the vCenter details, ensuring application-consistent copies are replicated to the secondary site.

rep2

Configure the protection schedule and how many snapshots to keep locally and on the replication target array, making sure you select the replication partner we created earlier from the Replicate to drop-down menu. When you have entered the required details click Save.

rep3

Repeat this process for any other volumes requiring replication. Once a volume collection has been created you can use the same protection schedules by selecting the existing volume collection on the Protection tab of a volume.

replication5

To view or edit a volume collection navigate to Manage, Protection, Volume Collections in the Nimble web UI.

replication6

Existing replication snapshots are displayed on the Replication tab when selecting a volume, or on the volume collections page referenced above. On the replication target array, replicated volumes are displayed on the volumes page with a grey coupled LUN icon.

We can now move on to installing Site Recovery Manager in Part 2. The only other Nimble-specific step is to install the Storage Replication Adapter (SRA) on the same Windows server as SRM, after SRM has been installed. The Nimble SRA can be downloaded here from InfoSight, and is a simple next-and-finish installer. Once installed, you can confirm the SRA status in the vSphere web client by browsing to Site Recovery Manager, Sites, selecting the site, opening the Monitor tab, and clicking SRAs.

SRA

_______________

Part 1 – Nimble Storage Integration with SRM

Part 2 – Site Recovery Manager Install Guide

Part 3 – Site Recovery Manager Configuration and Failover Guide

Nimble Storage Setup Guide

Nimble Storage is built on the unique Cache Accelerated Sequential Layout (CASL) architecture: a CPU-driven storage architecture capable of optimising performance and increasing usable capacity through dynamic caching, sequential data layout, and inline compression. Nimble devices are renowned for the simplicity of their setup and administration; in this post we’ll put that to the test and walk through the installation of a Nimble CS700 array.

Nimble arrays come configured with triple-parity RAID as standard, which offers greater protection of data in the event of a drive failure without impacting performance or the overall capacity of the array. Furthermore, should a drive fail, the rebuild process is significantly quicker since only the individual compressed blocks in use are rebuilt.

nimble

Requirements

  • Cabinet vertical space of 3U or 4U for each array, depending on model. The new arrays are 4U each.
  • Cat 6 and Gigabit Ethernet switch ports x 2 for management connections.
  • Cables and ports for your chosen connectivity. For both protocols you should use all of the available ports and spread connections across 2 switches; the number of ports available depends on your ordered configuration. For iSCSI, at least 2 additional Cat 6 cables and GbE or 10GbE (recommended) ports; for FC, at least 2 OM3 or better Fibre Channel cables and ports.
  • At least 3 static IP addresses for FC setups or 5 for iSCSI.
  • Phillips screwdriver for installation.
  • A Windows based computer to run the initialisation and setup.
  • It’s worth checking the Nimble Storage Documentation site as there are lots of environment and product-specific best practice guides available.

Nimble arrays are monitored remotely by Nimble Storage; you will need to have the following ports open:

  • SSH: 2222 hogan.nimblestorage.com – Secure Tunnel connection to Nimble Storage Support.
  • HTTPS: 443 nsdiag.nimblestorage.com – AutoSupport and heartbeat monitoring.
  • HTTPS: 443 update.nimblestorage.com – Software updates.
  • HTTPS: 443 nsstats.nimblestorage.com – InfoSight analysis.
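
If you want to confirm these endpoints are reachable from the management network before racking the array, a quick check from a Windows machine on the same network might look like the following, using the hostnames and ports listed above.

    # Check outbound connectivity to the Nimble Storage monitoring endpoints
    Test-NetConnection hogan.nimblestorage.com -Port 2222
    Test-NetConnection nsdiag.nimblestorage.com -Port 443
    Test-NetConnection update.nimblestorage.com -Port 443
    Test-NetConnection nsstats.nimblestorage.com -Port 443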

Unboxing

The following components are included:

  • Nimble 3U/4U array or 3U/4U expansion shelf.
  • Nimble front bezel, rail kit, and screws.
  • Nimble accessory kit containing Phillips screwdriver, KVM adapters, round-to-square hole rack adapters.
  • Expansion shelves include 1m and 3m SAS cables.
  • Power cables.

Racking

Separate the inner rails from the rail assemblies using the catch at the front end of the middle rail. Slide the inner rails into the retaining hooks on the side of the chassis and install the set screws to secure in place.

rack1

Install the rail assemblies by hooking each rail into the rack and sliding it down to lock into position. If the rack has round holes then use the square-hole adapter and secure into place inside the front and back posts of the rack with the screws included.

rack2

Slide the chassis into the rack; when you hear a click the chassis is locked into place. There are 2 built-in screws in the front handles to secure the array.

rack3

Cabling

Connect the cables for management and your chosen connectivity protocol, i.e. Fibre Channel or Ethernet, using all available ports where possible. For redundancy, connect one member of each interface pair to one switch and the second member to a second switch with the same port configuration.

If you do not have a standard network configuration to follow, or are unsure about cabling the array, see the Nimble network topology options. The most common networking topology is likely to be similar to the image below, but with 4 data ports used for each controller (also applicable to Fibre Channel, swapping out for FC switches, ports, and HBAs).

network

Plug the power cables into both power supplies for the array and any expansion shelves, using separate Power Distribution Units (PDUs) for redundancy. Once power is connected the storage should come online automatically, but failing that there is a power button located on the front of the array.

Before connecting any additional expansion shelves make sure the array and expansion shelves are all powered on. Connect the SAS cables in the order below, repeating steps 3 and 4 for any additional expansion shelves. Wait at least 3 minutes between connecting each expansion shelf to ensure firmware updates are complete. You can daisy-chain up to 3 shelves per bus; Nimble recommend that all-flash shelves are cabled directly to the array (head unit) where possible.

  1. Connect the SAS OUT (expansion) port of controller A on the array to the SAS IN port of expander A on the first expansion shelf.
  2. Connect the SAS OUT (expansion) port of controller B on the array to the SAS IN port of expander B on the first expansion shelf.
  3. Connect the SAS OUT (expansion) port of expander A on the first expansion shelf to the SAS IN port of expander A on the next expansion shelf.
  4. Connect the SAS OUT (expansion) port of expander B on the first expansion shelf to the SAS IN port of expander B on the next expansion shelf.

sas

Setup

There are two methods of applying a management IP to the new array: using the GUI from a Windows machine on the same subnet, or directly using the CLI. To use the GUI, download the latest version of the Windows Toolkit from InfoSight, which includes Nimble Setup Manager. Note that if you are using a 32-bit version of Windows you will need to select a previous version that is 32-bit compatible.

Nimble Setup Manager scans the subnet for unconfigured Nimble arrays. Select the array to configure and click Next. Enter the array name, group name, network settings, and admin password, then click Next. (Groups are used to manage up to 4 arrays and pool storage. You can add the array as standalone by having it as the only array in its group). Accept the license agreement and click Finish.

nsm2

Alternatively, you can use the CLI by connecting directly to the console of the active controller using a keyboard and monitor. Log in with the default username and password (admin / admin) and execute the setup command to launch the CLI-based setup. Accept the license agreement and configure the network settings at the relevant prompts, then opt to continue the setup using the GUI.

Once the management network has been configured open a web browser to the IP address. Log in to the Nimble OS web client with the admin password configured, the setup wizard will auto start.

nim1

The configuration settings in the first two pages of the setup wizard differ slightly depending on whether you are using FC or iSCSI. If you’ve ever set up an array with either protocol before you’ll find this process very straightforward; I’ll make reference to both protocols just in case.

The first thing we need to do is to configure subnets for the required networks. For FC arrays this is easy as you’ll just have to confirm the management subnet. Ensure management only is selected as the traffic type.

If you are using iSCSI then in addition to the management subnet you will also configure a data subnet, or subnets, in accordance with your iSCSI fabric design. It is recommended that the management and data networks are separate subnets. Each subnet requires an iSCSI discovery IP address. IP Address Zones are used to divide a data subnet into two, typically split using odd and even addresses, to avoid bottlenecks on interconnect links; you don’t need to worry about this unless you are implementing an advanced solution for a specific use case. Ensure data is selected as the traffic type. Once the subnet configuration is complete click Next.

nimble1

On the interfaces page assign each interface to one of the subnets. Both controllers should be configured with a diagnostic IP address, whether you are using FC or iSCSI. Click Next.

nimble2

On the domain page configure the domain and DNS server settings, then click Next.

nim3

Configure the time zone and NTP server settings and click Next. Enter the email and auto support settings and click Finish. The initial setup is now complete and the browser will return to the management web client.

nim4

Before going any further you should ensure the Nimble OS is up to date by following the Updating Nimble OS post.

If you have cabled additional expansion shelves then these need to be activated. Browse to Manage, Arrays and click the array name. Notice that the expansion shelves are shown in orange; click Activate Now.

activateshelves1

If you’re using Fibre Channel you’re probably wondering why the ports are named fc1, fc2, fc5, and fc6. This is to future-proof the array for the release of quad-port FC HBAs by leaving an upgrade path open (fc3, fc4, fc7, and fc8). Hosts will need to be zoned as normal for FC connectivity and then added as initiators. You can create initiator groups and volumes by following the Configuring Nimble Storage guide. You may also want to see Configuring VVOLs with Nimble Storage.

Configuring VVOLs with Nimble Storage

This post will walk through the setup of VMware VVOLs with Nimble Storage. If you are unfamiliar with the concept of Virtual Volumes then see this KB. If you are already up to speed on Virtual Volumes but are looking for further information on why they work so well with Nimble storage then have a look at the Nimble Storage & VVOLs demo below, courtesy of Nimble. There are also free online VVOL labs available at Nimble Storage University.

Nimble VVOL Components

The Nimble OS includes the vStorage APIs for Storage Awareness (VASA) provider and the PE (Protocol Endpoint) built into the operating system. This means that the VASA provider and Protocol Endpoint run natively from the controller, so there is no additional installation or configuration required. This design also offers high availability due to the active / passive setup of Nimble controllers.

Virtual machines are provisioned based on the VMware Storage Policy Based Management (SPBM) framework, which uses the VASA client; both features are key to VVOLs and were introduced with vSphere 6. Nimble folders were added in v3 of the OS and represent a logical allocation of capacity; vSphere sees the folders as containers where virtual volumes can reside.

Prerequisites

  • Before you can implement VVOLs you need to be running vSphere 6 or above.
  • If you have already licensed vSphere Standard or above there is no additional cost.
  • Nimble arrays must be running OS v3.x or above.
  • Nimble have included the vStorage APIs for Storage Awareness (VASA) in the software. Check with your storage provider that they support VASA 2.0 (vSphere 6.0) or VASA 3.0 (vSphere 6.5).
  • At the time of writing all Nimble storage arrays support VVOLs. If you are using an alternative storage provider, cross check your hardware with VVOLs in the VMware compatibility checker.
  • There are no additional licensing requirements or costs to use VVOLs with Nimble. Check with your storage provider that this is the case.

Nimble Configuration

Log into the web interface of the Nimble device. The first thing we need to do is integrate the VASA provider with vSphere. From the drop-down Administration menu select VMware Integration. Edit the vCenter server integration to select the VASA Provider (VVOLs) tick box and click Save.

enablevvol

The next task is to set up a Nimble folder, or container, to house the virtual volumes. From the drop-down Manage menu select Volumes, then click New Folder.

Enter a name and description for the new folder. If you have set up multiple pools of storage then select the pool to use, otherwise leave at default. If required you can set a limit on the folder, which puts a cap on the amount of storage vSphere sees as available (note this doesn’t create a reservation); otherwise leave at no limit. From the Management Type drop-down menu select VVOLs, enter the vCenter server that will be managing the virtual volumes, and click Create.

newfolder

The new folder should now be visible in the folder tree in the left-hand pane. Once vSphere starts using this container for virtual volumes you will see VMDK and other files stored natively within the folder.

The final step is to examine the existing performance policies. From the Manage drop-down menu select Performance Policies. There are multiple pre-configured performance policies, and if required you can create your own. These policies can later be defined in vSphere as part of a VM storage policy.

performancepolicies

The configuration on the Nimble array is now complete and we can progress with the vSphere steps.

vSphere Configuration

Since VVOLs are a new feature of vSphere 6, all configuration is done in the vSphere web client. The first task is to add the Nimble folder we just created to vSphere. From the home page in the vSphere web client click Storage, then Add Datastore. Pick the datacentre location and click Next, select VVOL as the type of datastore, and click Next.

vvoldatastore

The available Nimble folder should now be highlighted, verify the name and size, enter a name for your new datastore and click Next.

vvol

Select the hosts that require access and click Next, review the details in the final screen, and click Finish. You may need to do a rescan on the hosts, but at this stage we are ready to provision a new virtual machine to the virtual volume datastore with the default storage policy. This represents VVOLs in its simplest form; the virtual machine files are now thin provisioned and stored natively in the folder we created on the Nimble array.
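
If you prefer to rescan from PowerCLI rather than the web client, a minimal sketch is shown below; the vCenter and cluster names are placeholders for your environment.

    # Rescan storage on every host in the cluster so the new VVOL datastore is visible
    Connect-VIServer vcenter.lab.local
    Get-Cluster "Production" | Get-VMHost | Get-VMHostStorage -RescanAllHba -RescanVmfs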

The next phase depends entirely on the configuration that is going to best suit your environment. Chances are you’ll already know which storage-based policies you have a requirement for; if not, you can still browse through both the features on the Nimble array and the options in VM Storage Policies.

From the home page of the vSphere web client click Policies and Profiles, then select VM Storage Policies; the default policies are listed.

vsphere1

Click the Create New VM Storage Policy icon. Enter a name and description and click Next. Under rule-sets change the data services to NimbleStorage. Configure the storage policy as required by adding rule-sets, either selecting the performance policies we saw earlier in the Nimble array, or adding additional features. Once you have added all applicable rule-sets click Next.

vsphere2

In the storage compatibility screen choose the virtual volume datastore listed as compatible and click Next. Review the storage policy details and click Finish. Now that the new policy is set up, you can select it from the drop-down VM Storage Policy menu when provisioning a virtual machine.
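
If you’d rather check compatibility or apply the policy from PowerCLI, the SPBM cmdlets (available in PowerCLI 6.0 and later) can be used as sketched below; the policy and VM names are placeholders.

    # List VM storage policies and the datastores compatible with a given policy
    Get-SpbmStoragePolicy
    Get-SpbmCompatibleStorage -StoragePolicy (Get-SpbmStoragePolicy "Nimble-Gold")

    # Apply the policy to an existing virtual machine
    Get-VM "TestVM01" | Get-SpbmEntityConfiguration |
      Set-SpbmEntityConfiguration -StoragePolicy (Get-SpbmStoragePolicy "Nimble-Gold")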

At the time of writing VMware recommend a halfway house for VVOLs, i.e. no one is suggesting you migrate your entire production environment to virtual volumes straight away. Have a play about with the policies in dev, and from there start to migrate the virtual machines that would benefit from the storage-based policies and functionality.

The release of vSphere 6.5 included VVOLs 2 built on VASA 3.0 which features support for array based replication. You can read more about VVOLs 2 with Nimble here, or read more about what’s new here.

Updating Nimble OS

Although non-disruptive upgrades are a great feature of Nimble storage arrays, I’ll not be throwing administrative best practices out of the window just yet, and will be completing this update outside of core business hours.

At the time of writing Nimble software updates are only available by downloading directly on the storage device. This shouldn’t be a problem since Nimble support is centred around remote monitoring; however, if you do have an array without internet connectivity then the software can be manually obtained from Nimble support. The release notes are available on the InfoSight portal but there are no download links.

Log into the web interface of the Nimble device. Select the Administration drop down menu and click Software.

osupdate1

On the software page click Download to download the latest operating system version; this does not begin the update process but downloads the software to the array. If you have obtained the software from Nimble support, click Upload and browse to the folder location.

osupdate2

Once the software has downloaded click Update. In this example we are upgrading from v2.3.14 to v3.4.1. The process involves updating one controller at a time; since they run as active / passive there should be no outage.

When prompted with the license agreement click Agree. Click Ok to the message about clearing your browser cache and reloading the page once the update is complete.

nimbleupdate1

The update will now commence. Keep in mind that you may see the process stages (‘1 of 7’) jump forward and then back a step; this is normal. Once the first controller has finished updating the browser will reload during failover.

nimbleupdate2

Once the update is complete you will be returned to the software page, where the current version number should now be updated. Some of the menu pages look a little different, such as the home page. If you have any issues with the web client at this stage then delete your browsing history and restart the browser.

nimbleupdate3

Check in recent events; you should see the software successfully updated message for both controllers. If you have email notifications set up you will also have been notified of this via email. It’s worth checking over the servers that have Nimble storage volumes mounted to confirm that all paths are available.
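
For VMware hosts, a quick PowerCLI sketch to summarise path states after the controller failover is shown below (it assumes an existing vCenter connection); any paths reported as Dead warrant investigation.

    # Summarise SCSI path states per host - paths should return to an Active or Standby state
    foreach ($vmhost in Get-VMHost) {
      "{0}:" -f $vmhost.Name
      $vmhost | Get-ScsiLun -LunType disk | Get-ScsiLunPath |
        Group-Object State | Select-Object Name, Count
    }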

Configuring Nimble Storage

Once your Nimble storage array is set up and zoned with the appropriate servers, you can start presenting volumes. The first step is to create an initiator group; a volume or LUN is mapped to an initiator group, which contains the initiator World Wide Port Names (WWPNs).

Open a browser and navigate to the IP address of the Nimble device. Select Manage and Initiator Groups, then click Create. Enter a name and add the aliases and WWPNs of the servers you want to present storage to. You can create multiple initiator groups and a server can be a member of multiple initiator groups.
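
If the servers are ESXi hosts, one way to collect the WWPNs for the initiator group is with PowerCLI; the sketch below formats the port WWNs as hex (the vCenter name is a placeholder).

    # List the Fibre Channel HBA port WWNs for each host
    Connect-VIServer vcenter.lab.local
    Get-VMHost | Get-VMHostHba -Type FibreChannel |
      Select-Object VMHost, Device, @{N="WWPN"; E={ "{0:x}" -f $_.PortWorldWideName }}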

nimble1

When all your initiators are added and grouped you can proceed to create and map a volume. Click Manage and Volumes, select New Volume.

Enter a volume name and choose a performance policy based on the application or platform you are using, or create your own. From the drop-down list of initiator groups you created earlier, select the initiator group to present the new volume to. The LUN number is auto-populated but can be changed; click Next.

nimble-2

Enter the size of the volume and any reserves or quotas you wish to apply, click Next. Volumes are thin provisioned by default.

If you want to configure snapshots to protect your volume, or volume caching, you can do so here by selecting the relevant options; otherwise click No volume collection, Next, and Finish.

The volume is now mapped to the initiator. There are a couple of further steps you should take on your server before using the newly presented storage.

nimble4

Windows Initiator

If you are presenting Nimble storage direct to a Windows host you should install the Nimble Windows Toolkit. The toolkit includes the Nimble Device Specific Module (DSM), and you will need the Multipath I/O (MPIO) feature installed. For the purpose of this post I will be installing version 2.3.2 of the Nimble Windows Toolkit, which requires Windows 2008 R2 or later and .NET Framework 4.5.2.
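
The MPIO feature can be checked and installed from PowerShell ahead of the toolkit installation; a minimal sketch for Windows Server 2012 R2 or later is below (enabling MPIO may require a reboot).

    # Check for, and install, the Multipath I/O feature
    Get-WindowsFeature -Name Multipath-IO
    Install-WindowsFeature -Name Multipath-IO

    # After the Nimble Windows Toolkit install and reboot, list the disks claimed by MPIO
    mpclaim -s -d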

Log in to InfoSight and select Software Downloads from the Resources drop-down. Click Windows Toolkit and download the appropriate version. Check for any required Windows hotfixes in the release notes.

nwt

On your Windows server run the setup as an administrator and follow the onscreen prompts. Once the Nimble Windows Toolkit is installed the server will require a reboot. You will then see Nimble Connection Manager listed in programs, and the Nimble DSM in use on the properties of the disk in Disk Management.

ESXi Initiator

If you are presenting Nimble storage to an ESXi host you can install Nimble Connection Manager which includes the Nimble Path Selection Policy (PSP) for VMware. You will need ESXi 5.x or above.

Log in to Infosight.nimblestorage.com and select Software Downloads from the Resources drop down. Click Connection Manager (NCM) for VMware and download the appropriate version.

Within the zip file that is downloaded you will find the Nimble VIBs; you can either deploy the zip file using VMware Update Manager, install the VIBs on your hosts manually, or build them into your image if you are using VMware Auto Deploy.
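
Whichever deployment method you choose, it’s worth confirming afterwards that the VIBs are installed and the Nimble path selection policy is registered. A PowerCLI sketch is below; the host name is a placeholder and the exact VIB and PSP names vary by NCM release, so the string matches are assumptions.

    # Check the Nimble VIBs and path selection policy on a host
    $esxcli = Get-EsxCli -VMHost esxi01.lab.local -V2
    $esxcli.software.vib.list.Invoke() | Where-Object { $_.Vendor -match "Nimble" }
    $esxcli.storage.nmp.psp.list.Invoke() | Where-Object { $_.Name -match "NIMBLE" }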

nmp