Configuring EMC Unity Replication

Following on from the EMC Unity Setup Guide and EMC Unity Configuration Guide, we will walk through setting up replication between two Unity arrays. For Remote Office/Branch Office (ROBO) deployments, replication can also be configured between a Unity VSA and a physical Unity array in the datacentre.

Replication between storage devices provides data redundancy and protects against storage system failures. EMC Unity provides synchronous and asynchronous replication. Synchronous replication is only available on physical arrays and can protect LUNs, Consistency Groups, and VMware VMFS datastores. Asynchronous replication is applicable to all products in the Unity range and can protect the storage resources listed previously, as well as File Systems, NAS Servers, and VMware NFS datastores. Replication can be configured within the same system, to a different system at the same site, or to a system in a remote location. All EMC Unity systems are licensed for replication as standard. For more information on EMC Unity replication technologies see the Unity Replication White Paper.

Establishing Replication Connections

Before configuring replication, a secure link between the Unity systems must be established. All tasks are carried out using the HTML5 Unisphere web client. Browse to the IP address or FQDN of either Unity system.

replication1

Under Data Protection in the left-hand navigation pane, select Replication. First we need to configure the interfaces to use for replication traffic, so click the Interfaces tab.

interfaces

Click the plus symbol (Create Replication Interface). Select the interface to use on each storage processor; if you have created link aggregation groups these are also listed. For assistance with creating link aggregation groups see the EMC Unity Configuration Guide. Enter the IP address to use for replication traffic on each storage processor, configure a VLAN ID if required, and click OK.

replication2

The created replication interfaces will now be listed; you can also edit and delete replication interfaces from this tab. Replication interfaces need to be configured on both the source and destination systems.

Next we set up a remote connection between the storage systems. From either device select the Connections tab.

connections

Enter the management IP address and credentials of the remote system, and the admin password for the local system. Select the replication type; in this example we will use Asynchronous. Click OK.

replication3

A replication connection will now be established on both the local and remote systems. Once complete click Close.
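The connection can also be verified outside of Unisphere using the Unity REST API. The sketch below is a minimal example, assuming the documented REST conventions (basic authentication on a GET request with the X-EMC-REST-CLIENT header, and the remoteSystem resource type); the hostname and credentials are placeholders, and any fields beyond id and name should be checked against the API reference for your Unity OE version.

```python
# Minimal sketch: list replication (remote system) connections on a Unity
# array via the Unisphere REST API. Only GET requests are used, so no CSRF
# token handling is needed. Hostname and credentials are placeholders.
import requests

requests.packages.urllib3.disable_warnings()  # arrays commonly use self-signed certs

UNITY = "https://unity-mgmt.example.local"    # hypothetical management address
session = requests.Session()
session.auth = ("admin", "password")
session.verify = False
session.headers.update({"X-EMC-REST-CLIENT": "true", "Accept": "application/json"})

# Query the remoteSystem collection; id and name are safe fields to request.
resp = session.get(f"{UNITY}/api/types/remoteSystem/instances",
                   params={"fields": "id,name"})
resp.raise_for_status()

for entry in resp.json().get("entries", []):
    content = entry.get("content", {})
    print(content.get("id"), content.get("name"))
```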

Configuring Replication

Select the storage resource to configure for replication; in this post we will replicate a file system. The NAS Server hosting the file system must first be configured for replication. Browse to File under the Storage menu, open the NAS Servers tab, and select the NAS Server to replicate. Click Edit and select the Replication tab.

replication7

Select the replication mode and RPO (Recovery Point Objective) time. The replication destination is the remote system connection we established earlier. Click Next.

nas2

Select the storage pool and storage processor for the destination storage system; the NAS Server name will auto-populate. Any existing file systems stored on the NAS Server will be listed for replication. Click Next.

nas3

Review the summary page and click Finish and Close.

nas4

When creating new file systems, replication can be configured on the Replication page of the Create a File System wizard.

replication4

A destination file system is automatically created on the destination storage system.

replication5

Alternatively we can configure replication at a later date. To do this, open the File Systems tab, select the file system to replicate, and click Edit.

replication6

Select the Replication tab. Click Configure Replication.

replication7

The replication wizard will open. The replication session inherits the configuration from the NAS Server. Click Next.

replication8

A file system will automatically be created on the destination storage system. Select the storage pool to use on the destination storage system and click Next.

replication9

Review the summary page and click Finish. The replication session will be established; once complete, click Close.

replication10

We can confirm the replication status by going back into the properties of the file system and opening the Replication tab, where the replication status is displayed. The replication role of the file system on the source storage system is Source; on the remote system the role is Destination. We can also go back to the Replication page and open the Sessions tab, where the replication sessions will be listed.

properties
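The replication sessions can also be listed over the REST API using the same pattern as earlier, which is useful for scripted health checks. As before this is a hedged sketch: replicationSession is the expected resource type, but verify any additional status fields against the API reference for your Unity OE version.

```python
# Compact sketch: list replication sessions on a Unity array via the REST API,
# using the same conventions as the remote system example above.
import requests

requests.packages.urllib3.disable_warnings()
UNITY = "https://unity-mgmt.example.local"   # hypothetical management address

with requests.Session() as s:
    s.auth = ("admin", "password")
    s.verify = False
    s.headers.update({"X-EMC-REST-CLIENT": "true", "Accept": "application/json"})
    resp = s.get(f"{UNITY}/api/types/replicationSession/instances",
                 params={"fields": "id,name"})
    resp.raise_for_status()
    for entry in resp.json().get("entries", []):
        print(entry.get("content", {}))
```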

EMC VNXe Setup Guide

The VNXe is the most affordable hybrid and all-flash array across the EMC product range. Although the future potentially sits with the newly released Unity line, the VNXe remains a popular, flexible, and efficient storage solution for SMBs and ROBOs. This post will walk through the setup of an EMC VNXe device.

vnxe

Architecture

The VNXe 3200 is powered by dual Intel Xeon E5-2407 4-core processors, providing up to 3x the performance of its 3150 predecessor. The Disk Processor Enclosure (DPE) leverages dual controllers and 6 Gb SAS back-end connectivity to deliver high levels of availability and efficiency, whilst lowering storage cost per I/O. Disk Array Enclosures (DAEs) are added to scale capacity up to 500 TB. There is concurrent support for NAS and SAN, with CIFS, SMB3, NFS, iSCSI, and Fibre Channel (up to 8 Gb) protocols, whilst the unit itself has a small datacentre footprint. For more information see the EMC VNXe Data Sheet.

Some considerations when creating storage pools: typically we want to configure fewer storage pools to reduce complexity and increase flexibility; however, multiple storage pools may be required if you want to separate workloads. Storage pools must maintain free capacity to operate; EMC recommends at least 10%. You will need to make design decisions based on your environment around storage pool capacities and configured RAID protection. The VNXe range offers multicore RAID 1/10/5/6 configured at storage pool level. EMC generally recommends smaller RAID widths as providing the best performance and availability, at the cost of slightly less usable capacity, e.g. for RAID 6 use 4+2 or 6+2 instead of 10+2 or 14+2.
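To put the RAID width recommendation in numbers, the quick calculation below compares usable capacity for different RAID 6 widths (two parity drives per group in every case). The drive size and drive count are example figures only.

```python
# Illustration of the RAID 6 width trade-off: wider groups yield more usable
# capacity per drive, narrower groups rebuild faster and are the EMC-preferred
# balance of performance and availability. Example figures only.
drive_tb = 1.8        # example raw drive size in TB
total_drives = 48     # example number of drives dedicated to the pool

for data, parity in [(4, 2), (6, 2), (10, 2), (14, 2)]:
    width = data + parity
    groups = total_drives // width        # whole RAID groups that fit
    usable = groups * data * drive_tb     # capacity excluding parity drives
    print(f"RAID 6 {data}+{parity}: {groups} groups, "
          f"{usable:.1f} TB usable ({data / width:.0%} efficiency)")
```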

VNXe arrays use the first 4 drives to store configuration information and critical system data. These are known as the system or vault drives and run from DPE Disk 0 through to DPE Disk 3. The system drives can be added to storage pools; however, their usable capacity is reduced, so storage pools utilising system drives should use a smaller RAID width. For larger configurations with high drive counts EMC does not recommend using the system drives in storage pools, as heavy client workload may slow down management operations (this does not apply to all-flash configurations).

Requirements

In addition to the boxed system components you will need:

  • Cabinet vertical space of 2U for the DPE, and 2U for each optional 25-drive DAE.
  • 2 x Cat 5e or better GbE management connections.
  • Between 2 and 8 Cat 5e or better GbE or 10 GbE data connections, or between 2 and 8 Fibre Channel connections (up to 8 Gb), depending on your chosen connection protocol.
  • A Windows based computer to run the initialisation and setup.
  • If you are unable to connect the Windows computer to the same subnet as the EMC VNXe then you will need a USB drive to configure the array with a management IP address.
  • Phillips screwdriver for installation.

Unboxing

The VNXe base comes with the following:

  • Disk Processor Enclosure (DPE) 2U component consisting of 12 x 3.5″ bays or 25 x 2.5″ bays.
  • Rail kit consisting of 2 adjustable rails and 10 screws, or 2 snap-in rails and 6 screws.
  • Accessory kit consisting of a mini-USB adaptor, cable ties, stickers, etc.
  • Front bezel for DPE.
  • Power cords.

Any additional disk shelves contain:

  • Disk Array Enclosure (DAE) 2U component consisting of 12 x 3.5″ bays or 25 x 2.5″ bays.
  • Rail kit consisting of 2 adjustable rails and 10 screws, or 2 snap-in rails and 6 screws.
  • Front bezel for DAE.
  • Power cords.
  • Mini-SAS and mini-SAS HD to mini-SAS cables.

Racking

EMC recommends installing the DPE at the bottom of the cabinet and installing any additional DAEs above. The snap-in rails are the most commonly used rail set and the ones we will use here. For assistance with racking the adjustable rails see page 16 of the EMC VNXe Install Guide.

Locate the left and right markings on each rail. Align the 2U key tabs with the U-space in the rear rack channel. Push the key tabs and adaptors into the rear mounting holes until the spring clips snap into place. At the front, push in the spring clip and release it once the rail is lined up with the mounting holes. Secure the rear of the rail using 1 x M5 screw on each side.

Slide the DPE into the rails until it clicks into the rear tabs on each rail. The tabs secure and support the rear of the enclosure; the front is secured using 2 x M5 screws on each side. Repeat the process for any additional DAEs.


racking

Cabling

First connect the 2 management ports to the switch; management ports have a LAN/management symbol above them. Do not use the service ports, which have a wrench/spanner symbol above them. Next plug in the cables for your chosen front-end connectivity, i.e. Fibre Channel or Ethernet. Front-end ports need to be connected and configured symmetrically across both storage processors to facilitate high availability. Furthermore, you should use all front-end ports that are installed in the system, so that workload is spread across as many resources as possible. NAS ports should also be grouped with LACP per storage processor, to provide path redundancy and performance improvements.

If you have purchased additional DAEs then these need to be connected using the included SAS cables. There are 2 on-board 6Gb SAS ports in each storage processor in the DPE. When cabling DAEs to the DPE, balance them as evenly as possible across all available buses. The drives in the DPE are serviced by SAS Bus 0; therefore, the first DAE should be cabled to SAS Bus 1. Connect SP A SAS Port 1 to DAE 1 Link Controller Card (LCC) A (cable 1 in the image below). Connect SP B SAS Port 1 to DAE 1 LCC B (cable 2).

sas

The mini-SAS HD connectors are used for the DPE ports; the mini-SAS connectors are used for DAE ports. Mini-SAS to mini-SAS cables are used for cabling DAEs together. If you are attaching additional DAEs see page 28 of the EMC VNXe Install Guide.

The power cables included with the array are colour coded: grey for Power Distribution Unit (PDU) A, black for PDU B. Once connected to power, the array will take approximately 10 to 15 minutes to power up. Finally, clip the front bezels into place and secure them with the key included.

power

Setup

To access the web UI for setup we have a couple of options for automatic or manual IP addressing.

Automatic – if the array has access to network DHCP and DNS servers (with dynamic DNS enabled) then it will automatically be assigned an IP address. After power up, if the SP Fault LED is solid blue, a management address has been assigned. This IP is dynamically added to DNS in the format serialnumber.domain. If the SP Fault LED alternates between solid blue and flashing amber, a management address has not been assigned because the DHCP or DNS server could not be reached.

Manual – download and install the Connection Utility from EMC Downloads. The Connection Utility gives you two options: automatically detect unconfigured storage systems in the same subnet as your Windows client, or manually configure an IP address in a configuration file for use with a USB flash drive, which the array automatically reads.
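Either way, it is worth confirming that the management address resolves and that the Unisphere web service is responding before moving on. The snippet below is a simple check, assuming the array registered itself in DNS in the serialnumber.domain format described above; substitute your own hostname or the IP address you configured manually.

```python
# Simple reachability check for the VNXe management interface.
# 'serialnumber.example.local' stands in for the dynamic DNS entry
# (serialnumber.domain) or the IP configured with the Connection Utility.
import socket
import requests

requests.packages.urllib3.disable_warnings()
host = "serialnumber.example.local"

ip = socket.gethostbyname(host)      # confirm the DNS entry resolves
print(f"{host} resolves to {ip}")

# Unisphere is served over HTTPS, typically with a self-signed certificate.
resp = requests.get(f"https://{host}/", verify=False, timeout=10)
print(f"Unisphere responded with HTTP {resp.status_code}")
```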

Depending on how IP addressing has been assigned, open a browser and enter the IP address manually configured, or the DNS entry (serialnumber.dnszone). Log in to Unisphere using the default credentials admin / Password123#.

unisphere1

The Initial Configuration Wizard launches the first time you log in. This self-explanatory wizard guides you through the basic setup of the array; any settings you skip here can be configured later through the appropriate menus.

unisphere2

Once the configuration wizard is complete you will be returned to the home dashboard. It is recommended that the operating system is updated straight away. This can be achieved from the Settings drop-down menu by selecting Update software. Software can either be obtained online directly from the VNXe, or downloaded from EMC Downloads and then uploaded to the array. If you skipped the configuration wizard there are some basic configuration settings below to get you started.

First browse to the Management Settings page of the Settings drop-down menu. Under the General tab we can configure the system name and management network settings. The Network tab features DNS settings, NTP settings, and remote logging.

network

To apply a license (.lic file provided by EMC) go to Settings, Manage Licenses; upload and install the license file. Also under Settings, select Configure alerts, connect to EMC, and configure SMTP and alert settings here.

alerts.png

It is recommended that physical network interfaces are pooled together. To configure link aggregation browse to Settings, More Configuration, Advanced Configuration, and tick the Aggregation box.

linkaggregation

Storage pools are configured under System, Storage Pools. You will see 2 default pools: Hot Spare Pool and Unconfigured Disks. To configure the number of hot spares, or to configure a storage pool and RAID group, select the appropriate pool, click Configure Disks, and follow the Disk Configuration Wizard.

diskconfig

To change the admin password at any time go to Settings, User administration. To enable SSH (optional) navigate to Settings, Service System and enter the service password. Select Enable SSH and click Execute service action.

You can now move on to configure the chosen protocol for the array, whether that be creating CIFS/NFS servers and shares through Settings, Manage Shared Folder Server Settings, or presenting iSCSI or FC storage through Hosts or Settings, iSCSI Server Settings. For further assistance with the VNXe GUI see EMC Unisphere for VNXe.

Configuring VVOLs with EMC Unity

This post will walk through the setup of VMware VVOLs with EMC Unity. If you are unfamiliar with the concept of Virtual Volumes then see this KB. You can read more about the EMC Unity physical array by reviewing the EMC Unity Setup Guide, or the Unity Virtual Appliance by reviewing the Deploying EMC Unity VSA post.

EMC Unity VVOL Components

The vStorage APIs for Storage Awareness (VASA) provider is built into the controller, so there is no additional installation or configuration required. This design also offers high availability of VVOLs, which is native to the dual-controller configuration of the Unity product line. Virtual machines are provisioned using the VMware Storage Policy Based Management (SPBM) framework, which uses the VASA client; both are key components of VVOLs, which were introduced with vSphere 6.

The Unisphere interface was rebuilt when EMC introduced Unity, the first midrange EMC product to officially support VVOLs. Unity provides both NAS and SAN connectivity for VVOLs, meaning virtual volumes can be provisioned via Fibre Channel, iSCSI, or NFS. The Protocol Endpoints are the NAS Server interfaces, iSCSI interfaces, and Fibre Channel ports zoned to the ESXi hosts. VVOLs reside in VVOL datastores, known as storage containers, which are made up of storage allocations from one or more capability profiles. A capability profile is built on top of one or more underlying storage pools; a storage pool can contain different disk types.

emcvvols

Prerequisites

  • Before you can implement VVOLs you need to be running vSphere 6.
  • If you have already licensed vSphere Standard or above there is no additional cost.
  • At the time of writing all products in the EMC Unity range support VVOLs. If you are using an alternative storage provider, cross-check your hardware against the VMware Compatibility Guide for VVOLs, and confirm with your storage vendor that they support VASA.
  • Check that the license pack for your Unity array covers VVOLs; this will be listed in the feature table on the licensing email from EMC. If you are unsure check with your account manager.
  • The Unity 300 and 400 arrays support up to 9,000 VVOLs. The Unity 500 supports 13,500 VVOLs and the Unity 600 supports 30,000 VVOLs.

EMC Unity Configuration

First let’s add the vCenter Server to Unity so that ESXi hosts can be discovered. Log in to the Unisphere web client and select VMware from the Access menu on the left-hand side. Select vCenters and click the add symbol to add the vCenter Server. Enter the vCenter details to discover ESXi hosts that are connected via the Protocol Endpoints.

hosts

To deliver virtual volumes we need a storage pool. A storage pool was most likely configured during the setup of the Unity array; if not, select Pools from the Storage menu and create a storage pool using the Create Pool wizard.
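Existing pools and their free capacity can also be checked over the Unity REST API, using the same request pattern shown in the replication section. The pool resource type is documented, but treat the sizeFree and sizeTotal attribute names as assumptions to confirm against the API reference for your Unity OE version.

```python
# Compact sketch: list storage pools and capacity via the Unity REST API.
# sizeFree/sizeTotal (assumed to be reported in bytes) should be verified.
import requests

requests.packages.urllib3.disable_warnings()
UNITY = "https://unity-mgmt.example.local"   # hypothetical management address

with requests.Session() as s:
    s.auth = ("admin", "password")
    s.verify = False
    s.headers.update({"X-EMC-REST-CLIENT": "true", "Accept": "application/json"})
    resp = s.get(f"{UNITY}/api/types/pool/instances",
                 params={"fields": "id,name,sizeFree,sizeTotal"})
    resp.raise_for_status()
    for entry in resp.json().get("entries", []):
        c = entry.get("content", {})
        print(f"{c.get('name')}: {c.get('sizeFree', 0) / 1024**4:.1f} TiB free "
              f"of {c.get('sizeTotal', 0) / 1024**4:.1f} TiB")
```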

If you already have a storage pool, select VMware from the Storage menu and open the Capability Profiles tab. A capability profile is used to advertise the available characteristics of a storage pool, in this case for virtual volumes. Click the add symbol to create a new capability profile. Give the profile a name and click Next.

vvol1

Select the storage pool the capability profile should use and click Next.

vvol2

Review the summary page and click Finish.

vvol3

The capability profile will now be created.

vvol4

Once complete we can go ahead and create a storage container for virtual volumes; in EMC terminology this is called a VVOL datastore. Select the Datastores tab and click the add symbol to create a new VMware datastore. Select VVOL and click Next.

vvol5

Enter a name for the virtual volume datastore and click Next.

vvol6

Select the capability profile we created earlier and click Next; multiple capability profiles can be assigned.

vvol7

Configure the hosts that should have access to the virtual volume datastore and click Next.

vvol8

Review the summary page and click Finish. Storage containers are now presented to the vCenter hosts specified during access configuration; these are thin provisioned by default. For further details see the official EMC Unity VVOLs White Paper.

vSphere Configuration

Since VVOLs are a new feature of vSphere 6, all configuration is done in the vSphere web client. The first task is to register the Unity VASA provider; from the home page in the vSphere web client click vCenter Inventory Lists, vCenter Servers, select the vCenter Server, click Manage, and open the Storage Providers tab. Click the green add symbol to add a new VASA provider. Enter the URL of the Unity system and the admin credentials, then click OK. The URL should be in the format https://<management IP or FQDN>:8443/vasa/version.xml, where <management IP or FQDN> is the management address of the Unity system.

storageprovider
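Before registering the provider it can be useful to confirm that the VASA endpoint responds at that URL. The short check below assumes only the URL format given above; the hostname is a placeholder and certificate verification is disabled because the array presents a self-signed certificate by default.

```python
# Quick check that the Unity VASA provider endpoint is reachable before
# registering it in the vSphere web client. The hostname is a placeholder.
import requests

requests.packages.urllib3.disable_warnings()
url = "https://unity-mgmt.example.local:8443/vasa/version.xml"

resp = requests.get(url, verify=False, timeout=10)
print(f"HTTP {resp.status_code}")
print(resp.text[:200])   # expect a small XML document describing the VASA version
```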

Next we can provision VVOLs from the storage container (or VVOL datastore in EMC Unity terms) that we just created. From the home page in the vSphere web client click Storage, then Add Datastore. Pick the datacentre location and click Next, then select VVOL as the type of datastore and click Next.

vvoldatastore

The available storage container should now be highlighted; verify the name and size, enter a name for your new datastore, and click Next.

vvoldatastore2

Select the hosts that require access and click Next, then review the details in the final screen and click Finish. You may need to rescan storage on the hosts, but at this stage we are ready to provision a new virtual machine to the virtual volume datastore with the default storage policy. This represents VVOLs in its simplest form; the virtual machine files are now thin provisioned and stored natively in the storage container we created on the Unity array. You can create additional storage-based policies using the vSphere 6.0 Documentation Centre.
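To confirm the new virtual volume datastore is mounted on the hosts, datastores and their types can also be listed programmatically. The sketch below uses pyVmomi; the vCenter name and credentials are placeholders, and the 'VVOL' type string reported for virtual volume datastores is an assumption to verify in your environment.

```python
# Sketch: list datastores and their types with pyVmomi to confirm the new
# virtual volume datastore is visible. Names and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab-style handling of self-signed certs
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        summary = ds.summary
        print(f"{summary.name}: type={summary.type}, "
              f"capacity={summary.capacity / (1024 ** 3):.0f} GiB")
finally:
    Disconnect(si)
```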

The release of vSphere 6.5 included VVOLs 2.0, built on VASA 3.0, which adds support for array-based replication. You can read more about what’s new here.