Tag Archives: Nimble


Site Recovery Manager 6.x Install Guide

This post will walk through the installation of Site Recovery Manager (SRM) to protect virtual machines from site failure. SRM plugs into vCenter to protect virtual machines replicated to a failover site using array based replication or vSphere Replication. In the event of a site outage, or an outage of components within a site that means production virtual machines can no longer run there, SRM brings the replicated datastores and VMs online in vSphere, with a whole bunch of automated customisation options such as assigning new IP addresses, boot orders, dependencies, running scripts, and so on. After a failover SRM can reverse the replication direction and protect virtual machines ready to fail back, all from within the vSphere web client.

Site Recovery Manager now has integration with the HTML5 vSphere client, see VMware Site Recovery Manager 8.x Upgrade Guide for more information.

Requirements

  • SRM is installed on a Windows machine at the protected site and the recovery site. SRM requires an absolute minimum of 2 vCPU, 2 GB RAM, and 5 GB of available disk; more is recommended for large environments and installations with an embedded database.
  • The Windows server should have User Account Control (UAC) disabled (in the registry, not just set to never notify) as this interferes with the install.
  • Each SRM installation requires its own database; this can be embedded for small deployments, or external for large deployments.
  • A vCenter Server must be in place at both the protected site and the recovery site.
  • SRM supports both embedded and external Platform Services Controller deployments. If the external deployment method is used ensure the vCenter at the failover site is able to connect to the Platform Services Controller (i.e. it isn’t in the primary site). For more information click here.
  • The vCenter Server, Platform Services Controller, and SRM versions must be the same on both sites.
  • You will need the credentials of the vCenter Server SSO administrator for both sites.
  • For vCenter Server 6.0 U2 use SRM 6.1.1, for vCenter Server 6.0 U3 use SRM 6.1.2, and for vCenter Server 6.5 or 6.5 U1 use SRM 6.5 or 6.5.1.
  • Check compatibility of other VMware products using the Product Interoperability Matrix.
  • If there are any firewalls between the management components, review the ports required for SRM in this KB.
  • SRM can be licensed in packs of 25 virtual machines, or for unlimited virtual machines on a per CPU basis with vCloud Suite. Read more about SRM licensing here.
  • Array based replication or vSphere Replication should be in place before beginning the SRM install. If you are using array based replication, contact your storage vendor for their best practices guide and the Storage Replication Adapter, which is installed on the same server as SRM.

As well as the requirements listed above, the following best practices should also be taken into consideration:

  • Small environments can host the SRM installation on the same server as vCenter Server; for large environments SRM should be installed on a different system.
  • For vCenter Server, Platform Services Controller, Site Recovery Manager servers, and vSphere Replication (if applicable) use FQDN where possible rather than IP addresses.
  • Time synchronization should be in place across all management nodes and ESXi hosts.
  • It is best practice to have Active Directory and DNS servers already running at the failover site.

Installation

In this example we will be installing Site Recovery Manager using Nimble array based replication. There is a vCenter Server with embedded Platform Services Controller already installed at each site. The initial screenshots are from an SRM v6.1.1 install, but I have also validated the process with SRM v6.5.1 and vCenter 6.5 U1.

SRM

The virtual machines we want to protect are in datastores replicated by the Nimble array. For more information on the storage array pre-installation steps see the Nimble Storage Integration post referenced below. The Site Recovery Manager install, configuration, and failover guides have no further references to Nimble and are the same for all vendors and replication types.

Part 1 – Nimble Storage Integration with SRM

Part 2 – Site Recovery Manager Install Guide

Part 3 – Site Recovery Manager Configuration and Failover Guide

Installing SRM

The installation is straightforward: download the SRM installer and follow the steps below for each site. We’ll install SRM on the Windows server at the primary / protected site first, then repeat the process for the DR / failover site. We can then pair the two sites together and create recovery plans.

SRM 6.5.1 (vSphere 6.5 U1) Download | Release Notes | Documentation

SRM 6.5 (vSphere 6.5) Download | Release Notes | Documentation

SRM 6.1.2 (vSphere 6.0 U3) Download | Release Notes | Documentation

SRM 6.1.1 (vSphere 6.0 U2) Download | Release Notes | Documentation

Log into the Windows server where SRM will be installed as an administrator, right click the downloaded VMware-srm-version.exe file, and select Run as administrator. If you are planning on using an external database then the ODBC data source must be configured first; for SQL with integrated Windows authentication, make sure you log into the Windows server using the account that has database permissions, configure the ODBC data source, and then run the SRM installer.
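If you are going down the external database route, it can be worth confirming the DSN actually works before launching the installer. The snippet below is a minimal sketch, assuming Python with the pyodbc module is available on the SRM server; the DSN name and credentials are placeholders rather than values from this guide, and with integrated Windows authentication you would use Trusted_Connection=yes instead of a username and password.

```python
# Illustrative check that the ODBC DSN the SRM installer will use is reachable.
# "SRM_DB", "srm_user" and "srm_password" are placeholders for your own values;
# with integrated Windows authentication use "Trusted_Connection=yes" instead.
import pyodbc

conn = pyodbc.connect("DSN=SRM_DB;UID=srm_user;PWD=srm_password", timeout=5)
print("Connected to:", conn.getinfo(pyodbc.SQL_DBMS_NAME), conn.getinfo(pyodbc.SQL_DBMS_VER))
conn.close()
```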

Select the installer language and click OK.

SRM1

Click Next to begin the install wizard.

SRM2

Review the patent information and click Next.

SRM3

Accept the EULA and click Next.

SRM4

Confirm you have read the prerequisites located at http://pubs.vmware.com/srm-61/index.jsp by clicking Next.

SRM5

Select the destination drive and folder, then click Next.

SRM6

Enter the IP address or FQDN of the Platform Services Controller that will be registered with this SRM instance, in this case the primary site. If possible use the FQDN to make IP address changes easier if required at a later date. Enter valid credentials to connect to the PSC and click Next. If your vCenter Server is using an embedded deployment model then enter your vCenter Server information.

SRM7

Accept the PSC certificate when prompted. The vCenter Server will be detected from the PSC information provided. Confirm this is correct and click Next. Accept the vCenter certificate when prompted.

SRM8

Enter the site name that will appear in the Site Recovery Manager interface, and the SRM administrator email address. Enter the IP address or FQDN of the local server, again use the FQDN if possible, and click Next.

SRM11

In this case as we are using a single protected site and recovery site we will use the Default Site Recovery Manager Plug-in Identifier. For environments with multiple protected sites create a custom identifier. Click Next.

SRM12

Select Automatically generate a certificate, or upload one of your own if required, and click Next.

SRM13

Select an embedded or external database server. If you are using an external database you will need a DSN entry configured in ODBC Data Sources on the local Windows server, referencing the external database. Click Next.

SRM14

If you opted for the embedded database you will be prompted to enter a new database name and create new database credentials. Click Next.

SRM15

Configure the account to run the SRM services, if applicable, and click Next.

SRM10

Click Install to begin the installation.

SRM9

Site Recovery Manager is now installed. Repeat the process to install SRM on the Windows server in the DR / recovery site, referencing the local PSC and changing the site names as appropriate. If you are using storage based replication you also need to install the Storage Replication Adapter (SRA) on the same server as Site Recovery Manager. In this example I have installed the Nimble SRA, available from InfoSight downloads, which is just a next and finish installer.

After each site installation of SRM you will see the Site Recovery Manager icon appear in the vSphere web client for the corresponding vCenter Server.

SRMvsphere

SRMvsphere2
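If you prefer to confirm the registration programmatically, the vCenter extension manager can be queried with pyVmomi. This is a hedged sketch: the hostname and credentials are placeholders, and com.vmware.vcDr is the extension key SRM normally registers under, so verify it against your own environment.

```python
# Illustrative check that the SRM extension is registered with vCenter.
# Hostname, credentials and the extension key are assumptions to adjust.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab only; use valid certificates in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
try:
    ext = si.RetrieveContent().extensionManager.FindExtension("com.vmware.vcDr")
    if ext:
        print("SRM extension registered:", ext.description.label, ext.version)
    else:
        print("SRM extension not found")
finally:
    Disconnect(si)
```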

Providing the datastores are replicated, either using vSphere replication or array based replication, we can now move on to pairing the sites and creating recovery plans in Part 3.

_______________

Part 1 – Nimble Storage Integration with SRM

Part 2 – Site Recovery Manager Install Guide

Part 3 – Site Recovery Manager Configuration and Failover Guide

Nimble Storage Integration with SRM

This post will walk through the steps required to prepare Nimble Storage arrays at primary and secondary sites for VMware Site Recovery Manager (SRM) using array based replication. The following posts in this Site Recovery Manager series detail the end to end installation and configuration process.

Part 1 – Nimble Storage Integration with SRM

Part 2 – Site Recovery Manager Install Guide

Part 3 – Site Recovery Manager Configuration and Failover Guide

SRM

Before beginning ensure that time synchronization and DNS are in place across the Nimble arrays and vCenter / SRM servers. It is best practice to have Active Directory and DNS servers already running at the secondary site, and also recommended for virtual machine swap files to be stored in a dedicated datastore without replication. Make sure you review the Nimble Storage Best Practices for VMware Site Recovery Manager guide.

All Nimble Storage arrays are listed on the VMware Compatibility Guide, and Nimble have provided a VMware specific Storage Replication Adapter (SRA) since version 5.1 of SRM. The SRA is the main integration point between SRM and Nimble, allowing storage-level workflows to be initiated from SRM. In my environment I will only be using VMDK for the virtual storage and can utilise the Nimble built-in vCenter synchronization to quiesce I/O during snapshots. This means the replica is in an application-consistent state and can be cleanly brought back online in the event of failover. Nimble arrays are supplied inclusive of all features, so there are no additional licensing costs for replication.

VMware Integration

If you’re using Nimble to present LUNs to VMware then it’s likely you configured VMware integration during the initial configuration. However, to check, log into the web UI of both the replication source and target Nimble arrays by browsing to the IP address or FQDN. From the drop-down Administration menu select VMware Integration.

If the correct vCenter Server is already registered confirm the settings using the Test Status button. Otherwise, enter the required vCenter information to register with the Nimble Storage array.

vcenter

Furthermore, any ESXi hosts connected to Nimble volumes should have the Nimble Connection Manager installed, which includes the Nimble Path Selection Policy (PSP) for VMware. Installing the Nimble VIBs is not included in the scope of this article; however, I have briefly outlined the process below, along with a sketch for verifying the path selection policy afterwards.

  • Log in to InfoSight and select Software Downloads from the Resources drop down.
  • Click Connection Manager (NCM) for VMware and download the appropriate version.
  • The downloaded zip file contains the Nimble VIBs; these can be installed using the standard VIB installation methods, for example vSphere Update Manager or esxcli.

nmp
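To verify the PSP after installing NCM, one option is to list the path selection policy claimed for each device with pyVmomi. Treat this as a sketch: the vCenter name and credentials are placeholders, and the expected policy name (commonly NIMBLE_PSP_DIRECTED) should be confirmed in the NCM documentation for your release.

```python
# Illustrative listing of the path selection policy per device on each host,
# useful for confirming the Nimble PSP after installing Nimble Connection Manager.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for lun in host.config.storageDevice.multipathInfo.lun:
            print(host.name, lun.id, lun.policy.policy)
    view.DestroyView()
finally:
    Disconnect(si)
```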

Configure Replication Partners

Log into the Nimble web UI using the management IP address or FQDN of the desired replication source array. From the drop-down Manage menu, select Protection, and Replication Partners.

replication

Existing replication partners will be listed; at this stage we don’t have any. Click New Replication Partner.

replication1

The replication partner wizard will load.

  • Enter the group name of the replication target in the partner name field. The group name can be obtained by logging into the web UI of the target Nimble array and clicking the Group referenced in the top right hand corner, or by navigating to Manage, Arrays.
  • Enter a description if required. Enter the hostname or management IP address of the target Nimble array.
  • Enter a shared secret; this must be configured the same on both arrays.
  • Specify if replication traffic should use the existing management network, or specified data IPs.
  • Specify any folder assignments if required.

replication2

If you want to configure bandwidth limits this is done on the QoS Policy page; click Finish once complete.

replication3

The replication partner will now be listed with a status of OK, and the Test function should come back with a success message. Repeat the process on the replication target Nimble array, adding a replication partner for the replication source array.
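The partner configuration can also be checked from a script against the Nimble REST API rather than the web UI. The sketch below assumes the NimbleOS 5.x REST API (port 5392 with token based authentication); the array address and credentials are placeholders, and the endpoint names should be verified against the REST API reference for your NimbleOS version.

```python
# Illustrative check of configured replication partners via the Nimble REST API.
# Array address, credentials and endpoint names are assumptions to verify.
import requests
import urllib3

urllib3.disable_warnings()  # arrays typically present a self-signed certificate

ARRAY = "https://nimble-array.lab.local:5392"
auth = {"data": {"username": "admin", "password": "admin"}}

token = requests.post(f"{ARRAY}/v1/tokens", json=auth, verify=False).json()["data"]["session_token"]
headers = {"X-Auth-Token": token}

partners = requests.get(f"{ARRAY}/v1/replication_partners", headers=headers, verify=False).json()
for partner in partners.get("data", []):
    print(partner["name"], partner["id"])
```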

Configure Volume Replication

Now that we have an available replication target we can configure replication of important volumes. In the Nimble web UI for the replication source array, navigate to Manage, Volumes. Replication is configured in the Protection tab, either during the New Volume wizard, or for an existing volume by clicking the volume and Edit, then selecting Protection. Select Create new volume collection and enter a name.

rep1

Something to be aware of – later in the SRM configuration stage we will create protection groups, which use consistency groups to group datastores for protection. These SRM consistency groups map to volume collections in Nimble, so if you want to configure different protection settings for different virtual machines they will need to be in volumes using separate volume collections. See here for more information.

If application or hypervisor synchronization is required then enter the appropriate details. In this case since we are integrating with SRM we will select VMware vCenter and enter the vCenter details, ensuring application-consistent copies are replicated to the secondary site.

rep2

Configure the protection schedule and how many snapshots to keep locally and on the replication target array, make sure you select the replication partner we created earlier from the Replicate to drop-down menu. When you have entered the required details click Save.

rep3

Repeat this process for any other volumes requiring replication. Once a volume collection has been created you can use the same protection schedules by selecting the existing volume collection on the Protection tab of a volume.

replication5

To view or edit a volume collection navigate to Manage, Protection, Volume Collections in the Nimble web UI.

replication6

Existing replication snapshots are displayed on the Replication tab when selecting a volume, or in the volume collections page referenced above. On the replication target array replication volumes are displayed in the volumes page with a grey coupled LUN icon.
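Volume collections can be listed in the same way over the REST API, which is a quick way to confirm the collection exists before moving on to SRM. As with the earlier sketch, the endpoint name is taken from the NimbleOS 5.x REST API reference and the connection details are placeholders.

```python
# Illustrative listing of volume collections via the Nimble REST API.
import requests
import urllib3

urllib3.disable_warnings()

ARRAY = "https://nimble-array.lab.local:5392"
auth = {"data": {"username": "admin", "password": "admin"}}
token = requests.post(f"{ARRAY}/v1/tokens", json=auth, verify=False).json()["data"]["session_token"]

volcolls = requests.get(f"{ARRAY}/v1/volume_collections",
                        headers={"X-Auth-Token": token}, verify=False).json()
for vc in volcolls.get("data", []):
    print(vc["name"], vc["id"])
```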

We can now move on to installing Site Recovery Manager in Part 2, the only other Nimble specific step is to install the Storage Replication Adapter (SRA) on the same Windows server as SRM, after SRM has been installed. The Nimble SRA can be downloaded here from InfoSight, and is a simple next and finish installer. After SRM is installed you can confirm the SRA status in the vSphere web client by browsing to Site Recovery Manager, Sites, select the site, open the Monitor tab, and click SRAs.

SRA

_______________

Part 1 – Nimble Storage Integration with SRM

Part 2 – Site Recovery Manager Install Guide

Part 3 – Site Recovery Manager Configuration and Failover Guide

Nimble Storage Setup Guide

Nimble Storage is built on the unique Cache Accelerated Sequential Layout (CASL) architecture: a CPU-driven storage architecture capable of optimising performance and increasing usable capacity through dynamic caching, sequential data layout, and inline compression. Nimble devices are renowned for the simplicity of their setup and administration; in this post we’ll put that to the test and walk through the installation of a Nimble CS700 array.

Nimble arrays come configured with triple parity RAID as standard, which offers greater protection of data in the event of a drive failure without impacting performance or overall capacity of the array. Furthermore, should a drive fail, the rebuild process is significantly quicker since it only rebuilds the individual compressed blocks in use.

Requirements

  • Cabinet vertical space of 3U or 4U for each array, depending on model. The new arrays are 4U each.
  • 2 x Cat 6 cables and Gigabit Ethernet switch ports for the management connections.
  • Cables and ports for your chosen connectivity; note that for both protocols you should use the full resources available and spread connections across 2 switches, the number of ports available being dependent on your ordered configuration. For iSCSI, at least 2 additional Cat 6 cables and GbE or 10GbE (recommended) ports; for FC, at least 2 OM3 or better Fibre Channel cables and ports.
  • At least 3 static IP addresses for FC setups or 5 for iSCSI.
  • Phillips screwdriver for installation.
  • A Windows based computer to run the initialisation and setup.
  • It’s worth checking the Nimble Storage Documentation site as there are lots of environment and product specific best practice guides available.

Nimble arrays are monitored remotely by Nimble Storage, so you will need to have the following ports open (a quick connectivity check sketch follows the list):

  • SSH: 2222 hogan.nimblestorage.com – Secure Tunnel connection to Nimble Storage Support.
  • HTTPS: 443 nsdiag.nimblestorage.com – AutoSupport and heartbeat monitoring.
  • HTTPS: 443 update.nimblestorage.com – Software updates.
  • HTTPS: 443 nsstats.nimblestorage.com – InfoSight analysis.
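A simple TCP check from the management network is enough to sanity check these endpoints before the array attempts to call home. The sketch below uses only the Python standard library; note that it tests reachability from wherever you run it, which only approximates the array’s own outbound path through your firewall rules.

```python
# Illustrative reachability check for the Nimble Storage support endpoints.
import socket

ENDPOINTS = [
    ("hogan.nimblestorage.com", 2222),
    ("nsdiag.nimblestorage.com", 443),
    ("update.nimblestorage.com", 443),
    ("nsstats.nimblestorage.com", 443),
]

for host, port in ENDPOINTS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"{host}:{port} reachable")
    except OSError as err:
        print(f"{host}:{port} not reachable ({err})")
```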

Unboxing

The following components are included:

  • Nimble 3U/4U array or 3U/4U expansion shelf.
  • Nimble front bezel, rail kit, and screws.
  • Nimble accessory kit containing Phillips screwdriver, KVM adapters, round-to-square hole rack adapters.
  • Expansion shelves include 1m and 3m SAS cables.
  • Power cables.

Racking

Separate the inner rails from the rail assemblies using the catch at the front end of the middle rail. Slide the inner rails into the retaining hooks on the side of the chassis and install the set screws to secure in place.

rack1

Install the rail assemblies by hooking each rail into the rack and sliding it down to lock into position. If the rack has round holes then use the square-hole adapter and secure into place inside the front and back posts of the rack with the screws included.

rack2

Slide the chassis into the rack, when you hear a click the chassis is locked into place. There are 2 built in screws in the front handles to secure the array.

rack3

Cabling

Connect the cables for management and your chosen connectivity protocol, i.e. Fibre Channel or Ethernet, using all available ports where possible. For redundancy connect one member of each interface pair to the same switch and the second member to a second switch with the same port configuration.

If you do not have a standard network configuration to follow or are unsure about cabling the array see the Nimble network topology options. The most common networking topology is likely to be similar to the image below, however with 4 data ports used for each controller (also applicable to Fibre Channel, swapping out for FC switches, ports and HBAs).

network

Plug the power cables into both power supplies for the array and any expansion shelves, using separate Power Distribution Units (PDUs) for redundancy. Once power is connected the storage should come online automatically; failing that, there is a power button located on the front of the array.

Before connecting any additional expansion shelves make sure the array and expansion shelves are all powered on. Connect SAS cables in the order below, repeating steps 3 and 4 for any additional expansion shelves. Wait at least 3 minutes between connecting each expansion shelf to ensure firmware updates are complete. You can daisy-chain up to 3 shelves per bus; Nimble recommend that all flash shelves are cabled directly to the head shelf where possible.

  • Connect the SAS OUT (expansion) port of controller A on the array to the SAS IN port of expander A on the first expansion shelf.
  • Connect the SAS OUT (expansion) port of controller B on the array to the SAS IN port of expander B on the first expansion shelf.
  • Connect the SAS OUT (expansion) port of expander A on the first expansion shelf to the SAS IN port of expander A on the next expansion shelf.
  • Connect the SAS OUT (expansion) port of expander B on the first expansion shelf to the SAS IN port of expander B on the next expansion shelf.

sas

Setup

There are two methods of applying a management IP to the new array: using the GUI from a Windows machine on the same subnet, or directly using the CLI. To use the GUI download the latest version of the Windows Toolkit from InfoSight, which includes Nimble Setup Manager. Note that if you are using a 32-bit version of Windows you will need to select a previous version that is 32-bit compatible.

Nimble Setup Manager scans the subnet for unconfigured Nimble arrays. Select the array to configure and click Next. Enter the array name, group name, network settings, and admin password, then click Next. (Groups are used to manage up to 4 arrays and pool storage. You can add the array as standalone by having it as the only array in its group). Accept the license agreement and click Finish.

nsm2

Alternatively you can use the CLI by connecting directly to the console of the active controller using a keyboard and monitor. Log in with the default username and password (admin / admin) and execute the setup command to launch the CLI based setup. Accept the license agreement and configure the network settings at the relevant prompts, then opt to continue the setup using the GUI.

Once the management network has been configured, open a web browser to the IP address. Log in to the Nimble OS web client with the admin password configured; the setup wizard will start automatically.

nim1

The configuration settings in the first two pages of the setup wizard differ slightly depending on whether you are using FC or iSCSI. If you’ve ever set up an array with either protocol before you’ll find this process very straightforward; I’ll make references to both protocols just in case.

The first thing we need to do is to configure subnets for the required networks. For FC arrays this is easy as you’ll just have to confirm the management subnet. Ensure management only is selected as the traffic type.

If you are using iSCSI then in addition to the management subnet you will also configure a data subnet, or subnets, in accordance with your iSCSI fabric design. It is recommended that the management and data networks are separate subnets. Each subnet requires an iSCSI discovery IP address. IP Address Zones are used to divide data subnets into two, typically split using odd and even addresses, to avoid bottlenecks on interconnect links. You don’t need to worry about this unless you are implementing an advanced solution for a specific use case. Ensure data is selected as the traffic type. Once the subnet configuration is complete click Next.

nimble1

On the interfaces page assign each interface to one of the subnets. Both controllers should be configured with a diagnostic IP address, whether you are using FC or iSCSI. Click Next.

nimble2

On the domain menu configure the domain and DNS server settings, click Next.

nim3

Configure the time zone and NTP server settings and click Next. Enter the email and auto support settings and click Finish. The initial setup is now complete and the browser will return to the management web client.

nim4

Before going any further you should ensure the Nimble OS is up to date.

If you have cabled additional expansion shelves then these need to be activated. Browse to Manage, Arrays and click the array name. Notice that the expansion shelves are orange; click Activate Now.

activateshelves1

If you’re using Fibre Channel you’re probably wondering why the ports are named fc1, fc2, fc5, and fc6. This is to future-proof the array for the release of quad-port FC HBAs by leaving an upgrade path open (fc3, fc4, fc7, and fc8). Hosts will need to be zoned as normal for FC connectivity and then added as initiators or initiator groups before you can present volumes. See also: Configuring VVols with HPE Nimble Storage.

Configuring VVols with HPE Nimble Storage

This post will walk through the setup of VMware Virtual Volumes (VVols) with HPE Nimble Storage. The post was originally published in September 2016 and has subsequently been brought up to date; the process remains largely the same, and in this example we will use the vSphere 6.7 HTML5 client and Nimble software version 5.0.4. The terminology and features of Virtual Volumes are detailed in KB 2113013 (Understanding Virtual Volumes (VVols) in VMware vSphere 6.7).

HPE_Nimble

Nimble VVol Components

Nimble software includes the vStorage APIs for Storage Awareness (VASA) provider and the PE (Protocol Endpoint) built into the operating system. This means that the VASA provider and Protocol Endpoint run natively from the controller, so there is no additional installation or configuration required. This design also benefits from the highly available setup of Nimble controllers.

Virtual machines are provisioned based on the VMware Storage Policy Based Management (SPBM) framework, which uses the VASA client; both features are key to VVols and were introduced with vSphere 6. Nimble folders were added in v3 of the OS and represent a logical allocation of capacity; vSphere sees the folders as containers where virtual volumes can reside.

Prerequisites

  • Before you can implement VVols you need to be running vSphere 6 or above.
  • If you have already licensed vSphere for standard or above there is no additional cost.
  • Nimble arrays must be running OS v3.x or above.
  • There are no additional licensing requirements or costs to use VVols with Nimble.
  • Nimble have included the vStorage APIs for Storage Awareness (VASA) in the software. Check with your storage provider that they support VASA 2.0 (vSphere 6.0) or VASA 3.0 (vSphere 6.5).
  • At the time of writing all Nimble arrays support VVols. If you are using an alternative storage provider, cross check your hardware against VVols in the VMware Compatibility Guide.

Nimble Configuration

Log into the web interface of the Nimble device.

Nimble_VVols_1

The first thing we need to do is integrate the VASA provider with vSphere. From the drop down Administration menu select VMware Integration. Add the vCenter Server or edit an existing vCenter integration to select the VASA Provider (VVols) check box and click Save.

Nimble_VVols_2

The next task is to set up a folder for VMware to use as a storage container. From the drop down Manage menu select Data Storage, change the view to Folder and click the Add button.

Enter a name and description for the new folder. If you have set up multiple pools of storage then select the pool to use, otherwise leave at default. If required you can set a usage limit on the folder; this does not create a reservation but puts a cap on the amount of storage VMware can use for Virtual Volumes. From the Management Type drop down menu select VMware Virtual Volumes (VVols), select the vCenter server that will be managing the Virtual Volumes, and click Create.

Nimble_VVols_3

The new folder should now be visible in the folder tree on the left hand pane. Once vCenter starts using this container for Virtual Volumes you will see VMDK and other files stored natively within the folder.

The final step is to examine the existing performance policies. From the Manage drop down menu select Performance Policies. There are multiple pre-configured performance policies, and if required you can create your own. These policies can later be defined in vCenter as part of a Virtual Machine (VM) Storage policy.

Nimble_VVols_4
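If you want to confirm the VVol folder and the available performance policies outside of the web UI, the same NimbleOS 5.x REST API pattern applies. As before, the array address and credentials are placeholders and the endpoint names should be checked against the API reference for your version.

```python
# Illustrative listing of folders and performance policies via the Nimble REST API.
import requests
import urllib3

urllib3.disable_warnings()

ARRAY = "https://nimble-array.lab.local:5392"
auth = {"data": {"username": "admin", "password": "admin"}}
token = requests.post(f"{ARRAY}/v1/tokens", json=auth, verify=False).json()["data"]["session_token"]
headers = {"X-Auth-Token": token}

for endpoint in ("folders", "performance_policies"):
    items = requests.get(f"{ARRAY}/v1/{endpoint}", headers=headers, verify=False).json()
    print(endpoint + ":")
    for item in items.get("data", []):
        print("  " + item["name"])
```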

vCenter Configuration

Configuring Virtual Volumes can be done in either the vSphere web client or HTML5 client. The first task is to add the folder we just created as a VVol storage container; from the home page click Storage, expand the vCenter and right click the data center, select Storage, and New Datastore. Set the datastore type to VVol and click Next.

Nimble_VVols_5

The available Nimble folder should now be highlighted, verify the name and size, enter a name for your new datastore and click Next.

Nimble_VVols_6

Select the hosts that require access and click Next, review the details in the final screen and click Finish.

Nimble_VVols_8

You may need to do a rescan on the hosts, but at this stage we are ready to provision a new virtual machine to the Virtual Volume datastore with the default storage policy.
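The rescan can be done from each host’s Storage Adapters view, or scripted across all hosts with pyVmomi as in the sketch below; the vCenter address and credentials are placeholders.

```python
# Illustrative storage rescan across all hosts so the new VVol datastore is visible.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        storage = host.configManager.storageSystem
        storage.RescanAllHba()   # rescan HBAs for new devices / protocol endpoints
        storage.RescanVmfs()     # pick up new datastores
        print("Rescanned", host.name)
    view.DestroyView()
finally:
    Disconnect(si)
```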

Nimble_VVols_12

This represents VVols in its simplest form: the virtual machine files are now thin provisioned and stored natively in the folder we created on the Nimble array.

Nimble_VVols_13

The final phase is to configure VM Storage Policies to meet your own requirements; there are a number of policies built into the Nimble software, which we examined earlier, or you can create your own. To map a VM Storage Policy through to a Nimble Performance Policy, browse to the vSphere client and click Policies and Profiles, then select VM Storage Policies; the default policies are listed.

Click Create New VM Storage Policy, enter a name and description for the policy and click Next.

Nimble_VVols_9

Under Policy Structure select Enable rules for “NimbleStorage” storage and click Next. Under NimbleStorage Rules add the desired rule that exists on the Nimble array, then click Next. From the Storage Compatibility table select the datastore we added for Virtual Volumes and click Next. Review the Storage Policy details and click Finish.

Nimble_VVols_10

The policy can now be selected from the drop-down VM Storage Policy menu when provisioning a virtual machine.