EMC VNXe Setup Guide

The VNXe is the most affordable hybrid and all-flash array across the EMC product range. Although the future potentially sits with the newly released Unity line, the VNXe remains a popular, flexible, and efficient storage solution for SMBs and ROBOs. This post will walk through the setup of an EMC VNXe device.

vnxe

Architecture

The VNXe 3200 is powered by dual Intel Xeon E5-2407 4-Core processors, providing up to 3x the performance of its 3150 predecessor. The Disk Processor Enclosure (DPE) leverages dual controllers and 6-Gb SAS back-end connectivity to deliver high levels of availability and efficiency, whilst lowering storage costs per I/O. Disk Array Enclosures (DAE) are added to scale out capacity up to 500 TB top end. There is concurrent support for NAS and SAN, with CIFS, SMB3, NFS, iSCSI and Fibre Channel (up to 8Gb) protocols, whilst the unit itself has a small datacentre footprint. For more information see the EMC VNXe Data Sheet.

Some considerations when creating storage pools: typically we want to configure fewer storage pools to reduce complexity and increase flexibility, however configuring multiple storage pools may be required if you want to separate workloads. Storage pools must maintain free capacity to operate; EMC recommends at least 10%. You will need to make design decisions based on your environment around storage pool capacities and configured RAID protection. The VNXe range offers multicore RAID 1/10/5/6 configured at storage pool level. EMC generally recommends smaller RAID widths as providing the best performance and availability, at the cost of slightly less usable capacity, e.g. for RAID 6 use 4+2 or 6+2 instead of 10+2 or 14+2.
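To make the RAID width trade-off concrete, here is a minimal Python sketch comparing usable capacity and parity overhead for the RAID 6 widths mentioned above. The 1.2 TB drive size is an example value only, not a sizing recommendation.

```python
# Rough usable-capacity comparison for the RAID 6 widths discussed above.
# DRIVE_TB is an example drive size; use your actual drive sizes when planning.
DRIVE_TB = 1.2

for data, parity in [(4, 2), (6, 2), (10, 2), (14, 2)]:
    raw = (data + parity) * DRIVE_TB
    usable = data * DRIVE_TB
    overhead = parity / (data + parity) * 100
    print(f"RAID 6 {data}+{parity}: {usable:.1f} TB usable of {raw:.1f} TB raw "
          f"({overhead:.0f}% parity overhead)")
```

Smaller widths lose a larger fraction of raw capacity to parity, which is the capacity cost EMC trades for better performance and availability. Remember to also leave at least 10% of the pool free.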

VNXe arrays use the first 4 drives to store configuration information and critical system data. These are known as the system or vault drives and run from DPE Disk 0 through to DPE Disk 3. The system drives can be added to storage pools, however the usable capacity of system drives is reduced, so storage pools utilising system drives should use a smaller RAID width. For larger configurations with high drive counts EMC does not recommend using the system drives, as heavy client workload may slow down management operations (this does not apply to all-flash).

Requirements

In addition to the boxed system components you will need:

  • Cabinet vertical space of 2U for the DPE, and 2U for each optional 25-drive DAE.
  • 2 x Cat 5e or better GbE management connections.
  • Between 2 and 8 Cat 5e or better GbE or 10GbE data connections, or between 2 and 8 Fibre Channel connections (up to 8 Gb), depending on your chosen connection protocol.
  • A Windows based computer to run the initialisation and setup.
  • If you are unable to connect the Windows computer to the same subnet as the EMC VNXe then you will need a USB drive to configure the array with a management IP address.
  • Phillips screwdriver for installation.

Unboxing

The VNXe base comes with the following:

  • Disk Processor Enclosure (DPE) 2U component consisting of 12 x 3.5″ bays or 25 x 2.5″ bays.
  • Rail kit consisting of 2 adjustable rails and 10 screws, or 2 snap-in rails and 6 screws.
  • Accessory kit consisting of a mini-USB adaptor, cable ties, stickers, etc.
  • Front bezel for DPE.
  • Power cords.

Any additional disk shelves contain:

  • Disk Array Enclosure (DAE) 2U component consisting of 12 x 3.5″ bays or 25 x 2.5″ bays.
  • Rail kit consisting of 2 adjustable rails and 10 screws, or 2 snap-in rails and 6 screws.
  • Front bezel for DAE.
  • Power cords.
  • Mini-SAS and mini-SAS HD to mini-SAS cables.

Racking

EMC recommends installing the DPE at the bottom of the cabinet and installing any additional DAEs above. The snap-in method is the most commonly used rail set and the one we will use here. For assistance with racking the adjustable rails see page 16 of the EMC VNXe Install Guide.

Locate the left and right markings on each rail. Align the 2U key tabs with the U-space in the rear rack channel. Push the key tabs and adaptors into the rear mounting holes until the spring clips snap into place. At the front, push in the spring clip and release it once the rail is lined up with the mounting holes. Secure the rear of the rail using 1 x M5 screw on each side.

Slide the DPE into the rails until they click into the rear tabs on each rail. The tabs secure and support the rear of the enclosure; the front is secured using 2 x M5 screws on each side. Repeat the process for any additional DAEs.

 

racking

Cabling

First connect the 2 management ports to the switch, management ports have a LAN/management symbol above them. Do not use the service ports, service ports have a wrench/spanner symbol above them. Next plug in the cables for your chosen front end connectivity, i.e. Fibre Channel or Ethernet. Front end ports need to be connected and configured symmetrically across both storage processors to facilitate high availability. Furthermore you should use all front-end ports that are installed in the system, so that workload is spread across as many resources as possible. NAS ports should also be configured with LACP grouped per storage processor, to provide path redundancy and performance improvements.

If you have purchased additional DAEs then these need to be connected using the included SAS cables. There are 2 on-board 6Gb SAS ports in each storage processor in the DPE. When cabling DAEs to the DPE, balance them as evenly as possible across all available buses. The drives in the DPE are serviced by SAS Bus 0; therefore, the first DAE should be cabled to SAS Bus 1. Connect SP A SAS Port 1 to DAE 1 Link Controller Card (LCC) A (cable 1 in the image below). Connect SP B SAS Port 1 to DAE 1 LCC B (cable 2).

sas

The mini-SAS HD connectors are used for the DPE ports; the mini-SAS connectors are used for DAE ports. Mini-SAS to mini-SAS cables are used for cabling DAEs together. If you are attaching additional DAEs see page 28 of the EMC VNXe Install Guide.

The power cables included with the array are colour coded with an intended use of: grey for Power Distribution Unit (PDU) A, black for PDU B. Once the array has power it will take approximately 10 – 15 minutes to power up. Finally, clip the front bezels into place and secure with the key included.

power

Setup

To access the web UI for setup we have a couple of options for automatic or manual IP addressing.

Automatic – if the array has access to network DHCP and DNS servers (with dynamic DNS enabled) then it will automatically be assigned an IP address. After power up, if the SP Fault LED is solid blue then a management address has been assigned. This IP is dynamically added to DNS in the format serialnumber.dnszone. If the SP Fault LED alternates between solid blue and flashing amber then a management address has not been assigned because the DHCP or DNS server could not be reached.

Manual – download and install the Connection Utility from EMC Downloads. The Connection Utility gives you two options; automatically detect unconfigured storage systems in the same subnet as your Windows client, or manually configure an IP in a configuration file for use with a USB flash drive which the array automatically reads.

Depending on how IP addressing has been assigned open a browser and enter the IP address manually configured, or the DNS entry (serialnumber.dnszone). Log in to Unisphere using the default credentials admin Password123#.
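If you used the automatic method, a quick way to confirm the array registered itself in dynamic DNS is to resolve the expected record before opening a browser. Below is a minimal Python sketch; the serial number and DNS zone are placeholders for your own values.

```python
import socket

serial = "APM00161234567"   # placeholder - your array's serial number
dns_zone = "example.local"  # placeholder - your dynamic DNS zone

hostname = f"{serial}.{dns_zone}"
try:
    ip = socket.gethostbyname(hostname)
    print(f"{hostname} resolved to {ip} - browse to https://{ip}")
except socket.gaierror:
    print(f"{hostname} did not resolve - check DHCP/dynamic DNS or use the Connection Utility")
```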

unisphere1

The Initial Configuration Wizard launches the first time you log in. This self-explanatory wizard guides you through the basic setup of the array; any settings you skip here can be configured later through the appropriate menus.

unisphere2

Once the configuration wizard is complete you will be returned to the home dashboard. It is recommended that the operating system is updated straight away. This can be done from the Settings drop-down menu by selecting Update software. Software can either be obtained online direct from the VNXe, or downloaded from EMC Downloads and then uploaded to the array. If you skipped the configuration wizard there are some basic configuration settings below to get you started.

First browse to the Management Settings page of the Settings drop down menu. Under the General tab we can configure the system name and management network settings. The Network tab features DNS settings, NTP settings, and remote logging.

network

To apply a license (.lic file provided by EMC) go to Settings, Manage Licenses; upload and install the license file. Also under Settings select Configure alerts, connect to EMC and configure SMTP and alert settings here.

alerts.png

It is recommended that physical network interfaces are pooled together. To configure link aggregation browse to Settings, More Configuration, Advanced Configuration. Tick the Aggregation box.

linkaggregation

Storage pools are configured under System, Storage Pools. You will see 2 default pools: Hot Spare Pool and Unconfigured Disks. To configure the number of hot spares, or configure a storage pool and RAID group, select the appropriate pool and click Configure Disks. Follow the Disk Configuration Wizard.

diskconfig

To change the admin password at any time go to Settings, User administration. To enable SSH (optional) navigate to Settings, Service System and enter the service password. Select Enable SSH and click Execute service action.

You can now move on to configure the chosen protocol for the array, whether that be creating CIFS/NFS servers and shares through Settings, Manage Shared Folder Server Settings, or presenting iSCSI or FC storage through Hosts or Settings, iSCSI Server Settings. For further assistance with the VNXe GUI see EMC Unisphere for VNXe.

Configuring VVOLs with EMC Unity

This post will walk through the setup of VMware VVOLs with EMC Unity. If you are unfamiliar with the concept of Virtual Volumes then see this KB. You can read more about the EMC Unity physical array by reviewing the EMC Unity Setup Guide, or the Unity Virtual Appliance by reviewing the Deploying EMC Unity VSA post.

EMC Unity VVOL Components

The vStorage APIs for Storage Awareness (VASA) provider is built into the controller, so there is no additional installation or configuration required. This design also offers high availability of VVOLs which is native to the controller configuration of the Unity product line. Virtual machines are provisioned based on the VMware Storage Policy Based Management (SPBM) framework which uses the VASA client, both features are key to VVOLs and were introduced with vSphere 6.

The Unisphere interface was rebuilt when EMC introduced Unity, the first midrange EMC product to officially support VVOLs. Unity provides both NAS and SAN connectivity for VVOLs, meaning virtual volumes can be provisioned via Fibre Channel, iSCSI, or NFS. The Protocol Endpoints are the NAS Server interfaces, iSCSI initiators, and Fibre Channel ports zoned to the ESXi hosts. VVOLs reside in VVOL datastores, known as storage containers, which are made up of storage allocations from one or more capability profiles. A capability profile is built on top of one or more underlying storage pools – a storage pool can contain different disk types.

emcvvols

Prerequisites

  • Before you can implement VVOLs you need to be running vSphere 6.
  • If you have already licensed vSphere for standard or above there is no additional cost.
  • At the time of writing all products in the EMC Unity range support VVOLs. If you are using an alternative storage provider cross check your hardware with VVOLs in the VMware compatibility checker, and check with your storage provider that they support VASA.
  • Check the license pack for your Unity array covers VVOLs, this will be listed in the feature table on the licensing email from EMC. If you are unsure check with your account manager.
  • The Unity 300 and 400 arrays support up to 9,000 VVOLs, the Unity 500 supports 13,500 VVOLs, and the Unity 600 supports 30,000 VVOLs.

EMC Unity Configuration

First let’s add the vCenter Server to Unity so that ESXi hosts can be discovered. Log into the Unisphere web client and select VMware from the Access menu on the left hand side. Select vCenters and click the add symbol to add the vCenter Server. Enter the vCenter details to discover ESXi hosts that are connected via the Protocol Endpoints.

hosts

To deliver virtual volumes we need a storage pool. A storage pool was most likely configured during the setup of the Unity array. If not, select Pools from the Storage menu and create a storage pool using the create pool wizard.

If you already have a storage pool select VMware from the Storage menu and open the Capability Profiles tab. A capability profile is used to advertise the available characteristics of a storage pool, in this case virtual volumes. Click the add symbol to create a new capability profile. Give the profile a name and click Next.

vvol1

Select the storage pool the capability profile should use and click Next.

vvol2

Review the summary page and click Finish.

vvol3

The capability profile will now be created.

vvol4

Once complete we can go ahead and create a storage container for virtual volumes, which in EMC terms is called a VVOL datastore. Select the Datastores tab and click the add symbol to create a new VMware datastore. Select VVOL and click Next.

vvol5

Enter a name for the virtual volume datastore and click Next.

vvol6

Select the capability profile we created earlier and click Next; multiple capability profiles can be assigned.

vvol7

Configure the hosts that should have access to the virtual volume datastore and click Next.

vvol8

Review the summary page and click Finish. Storage containers are now presented to the vCenter hosts specified during access configuration; these are thin provisioned by default. For further details see the official EMC Unity VVOLs White Paper.

vSphere Configuration

Since VVOLs are a new feature of vSphere 6 all configuration is done in the vSphere web client. The first task is to register the Unity VASA provider; from the home page in the vSphere web client click vCenter Inventory Lists, vCenter Servers, select the vCenter Server, click Manage and open the Storage Providers tab. Click the green add symbol to add a new VASA provider. Enter the URL of the Unity system and admin credentials, then click OK. The URL should be in the format https://<management_IP>:8443/vasa/version.xml, where <management_IP> is the management IP address or FQDN of the Unity system.
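Before or after registering the provider it can be useful to confirm the VASA service is actually responding at that URL. Below is a minimal Python sketch using the requests library; the management IP is a placeholder, and certificate verification is disabled only because a new array presents a self-signed certificate.

```python
import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

unity_mgmt = "192.168.1.50"  # placeholder - Unity management IP or FQDN
url = f"https://{unity_mgmt}:8443/vasa/version.xml"

# New arrays use a self-signed certificate, so skip verification for this quick check only
resp = requests.get(url, verify=False, timeout=10)
print(resp.status_code)   # expect 200 if the VASA provider is up
print(resp.text[:200])    # start of the VASA version XML
```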

storageprovider

Next we can provision VVOLs from the storage container (or VVOL datastore in EMC Unity) that we just created. From the home page in the vSphere web client click Storage, and Add Datastore. Pick the datacentre location and click Next, then select VVOL as the type of datastore and click Next.

vvoldatastore

The available storage container should now be highlighted, verify the name and size, enter a name for your new datastore and click Next.

vvoldatastore2

Select the hosts that require access and click Next, review the details in the final screen, and click Finish. You may need to do a rescan on the hosts, but at this stage we are ready to provision a new virtual machine to the virtual volume datastore with the default storage policy. This represents VVOLs in its simplest form: the virtual machine files are now thin provisioned and stored natively in the storage container we created on the Unity array. You can create additional storage-based policies using the vSphere 6.0 Documentation Centre.

The release of vSphere 6.5 included VVOLs 2.0, built on VASA 3.0, which adds support for array-based replication. You can read more about what’s new here.

EMC Unity Configuration Guide

Following on from the EMC Unity Setup Guide this post will walk through the configuration of an EMC Unity array with iSCSI connectivity using the management web interface. Before beginning, ensure your Unity device is up to date by following the EMC Unity Update Guide. The EMC Unity is also available as a Virtual Storage Appliance.

unityhybrid

Architecture

The EMC Unity hybrid and all flash storage range implements an integrated architecture for block, file, and VMware VVOLs powered by Intel E5-2600 processors. The Disk Processor Enclosure (DPE) leverages dual storage processors and full 12-Gb SAS back-end connectivity to deliver high levels of performance and efficiency. Disk Array Enclosures (DAE) are added to scale out capacity up to 3 PB top end. There is concurrent support for native NAS, iSCSI, and Fibre Channel protocols, whilst the unit itself takes up less rack space than its competitors. Unity arrays can be managed from the HTML5 web client, or through the CloudIQ service, and offer a full range of enterprise storage features. For more information see the Unity platform white paper.

Some considerations when creating storage pools: typically we want to configure fewer storage pools to reduce complexity and increase flexibility. However configuring multiple storage pools may be required if you want to separate workloads for different I/O profiles or use FAST Cache. When sizing a storage pool remember that all data written to LUNs, file systems, and datastores is stored in the pool, as well as configuration information, change tracking, and snapshots. Storage pools must maintain free capacity to operate; EMC recommends at least 10%.

You will need to make design decisions based on your environment around storage pool capacities and configured RAID protection. The Unity range offers RAID 1/0, RAID 5, or RAID 6 configured at storage pool level. EMC generally recommends smaller RAID widths as providing the best performance and availability, at the cost of slightly less usable capacity, e.g. for RAID 6 use 4+2 or 6+2 instead of 10+2 or 14+2. Unity automatically reserves 1 out of every 30 drives of the same type for use as a hot spare; you can reduce the number of hot spare drives by decreasing the number of individual drive types.

Unity arrays use the first 4 drives to store configuration information and critical system data. These are known as the system drives and run from DPE Disk 0 through to DPE Disk 3. The system drives cannot be used as hot spares but can be added to storage pools in smaller configurations, if no other disks are available. The usable capacity of system drives is reduced by around 100 GB, so storage pools utilising system drives should use a smaller RAID width. For larger configurations with high drive counts EMC does not recommend using the system drives, as heavy client workload may slow down management operations. This restriction does not apply to all-flash.
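Two of the figures above feed straight into capacity planning: one hot spare reserved per 30 drives of each type, and roughly 100 GB lost from each of the four system drives. Here is a small Python sketch of that arithmetic, using example drive counts.

```python
import math

# Example drive population - adjust to your own configuration
drives = {"1.8TB_SAS": 45, "400GB_Flash": 10}

for drive_type, count in drives.items():
    spares = math.ceil(count / 30)  # 1 spare reserved per 30 drives of the same type
    print(f"{drive_type}: {count} drives -> {spares} reserved as hot spare(s)")

# Approximate capacity consumed by system data on the first four DPE drives
system_drives = 4
vault_overhead_gb = 100  # roughly, per system drive
print(f"System drive overhead: ~{system_drives * vault_overhead_gb} GB")
```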

Configuration Settings

Browse to the management IP address of the Unity array configured during installation. If you have not changed the admin password the default login is admin Password123#.

The welcome dashboard gives an overview of health and capacity. Note the icons in the top right hand corner. The first symbol shows the overall system state, if there are no issues this will be a green tick. The second icon lists active jobs and the third any active alarms. Next is the settings menu, logged in user menu, and help.

icons

Let’s start by opening the settings menu using the gear icon. The Software and Licenses page lists the licensed features enabled on the system. To install a license click Install License and upload the .lic file provided by EMC. You can also view system limits, install language packs, software updates, and disk firmware.

settings1

The Users and Groups page can be used to add local users or an LDAP identity source.

settings2

Use the Management page to configure NTP servers and DNS. The host name and management address can also be changed here if required as well as optional services such as Unisphere Central (centralised management), remote logging, and encryption.

settings3.png

The Storage Configuration page allows for configuration of FAST cache; FAST cache extends existing cache using enterprise flash drives to provide instant access to frequently used data. You can also view the spare disks in the system, but it’s best to come back to this after we’ve configured our storage pool.

settings4

Configure auto-support on the Support Configuration page by entering the support credentials and contact details. Make sure you use the EMC support account where the support contract is associated.

settings5

The Access page lists the iSCSI (Ethernet) and FC ports. Double click a port to view further details; all ports should be connected and green.

For Ethernet ports it is good practice to create a link aggregation where more than one port is used for the same traffic, e.g. iSCSI data, or replication. Aggregating ports together pools the resources to create a highly available configuration; iSCSI or other services then use the port aggregation group to distribute I/O and provide redundancy. Select the first port for the group and click Link Aggregation, Create Link Aggregation. You can add or remove additional ports by selecting the port and clicking Link Aggregation, and Add to Link Aggregation or Remove from Link Aggregation.

settings6

Configure email alerts, and SNMP traps if required, using the Alerts page.

settings7

Next we’ll go through the menu options in the left hand navigation pane.

System

The System View page lists basic system information such as the model, serial number, and software version. If any hardware issues are detected they will be listed here.

unisphere1

The Performance page shows IOPS and bandwidth; you can also create I/O limits.

The Service page shows a number of service related tasks and logs, as well as any technical advisories issued by EMC. Auto-support functionality should already be enabled as we configured it earlier using the Support Configuration page of the Settings menu. The support contract will auto-populate once refreshed providing the correct support settings have been entered.

unisphere2

Access

The Hosts page allows for configuration of network hosts, such as Windows or Linux machines, for storage access. An individual host can be added, or a subnet or netgroup to allow access to multiple hosts or network segments. The VMware page provides a single workflow for adding vCenter servers and ESXi host discovery. Virtual machine and VMDK information can also be imported.

For block storage resources you must register initiators using the Initiators tab. Initiators are servers initiating Fibre Channel or iSCSI sessions, and are identified by a unique World Wide Name (WWN) or iSCSI Qualified Name (IQN). The link between the initiator and the port on the storage system is called the initiator path; an initiator can be associated with multiple initiator paths. For iSCSI paths to show up, iSCSI interfaces must first be configured on the Block page; see the Storage section below for further details. For FC paths the appropriate zoning on the FC switch must be complete before the initiator paths can be seen by the storage system.
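When registering iSCSI initiators manually it is easy to mistype an identifier. The rough Python check below matches the general iqn./eui. naming formats; the regexes are simplified approximations rather than a full validator.

```python
import re

# Simplified patterns: iqn.YYYY-MM.reversed.domain[:identifier], or eui. plus 16 hex digits
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.\-]+(?::[\w.\-:]+)?$", re.IGNORECASE)
EUI_RE = re.compile(r"^eui\.[0-9a-f]{16}$", re.IGNORECASE)

def looks_like_initiator(name: str) -> bool:
    """Return True if the string resembles an iSCSI IQN or EUI identifier."""
    return bool(IQN_RE.match(name) or EUI_RE.match(name))

print(looks_like_initiator("iqn.1998-01.com.vmware:esxi01-12345678"))  # True
print(looks_like_initiator("iqn.199801.com.vmware"))                   # False - bad date format
```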

Data Protection

The Data Protection section gives you two ways of protecting data on the array. The first is Snapshots; snapshots are used to create point-in-time copies of your data. There are 3 built-in snapshot policies with different retention periods, or you can create your own by clicking the add symbol.

The second option is Replication; replication allows data to be copied to a different Unity array or Virtual Storage Appliance, on or off-site. To facilitate replication you must first create an interface by clicking the Interfaces tab and the add symbol. Choose an Ethernet interface, or link aggregation group, to use and configure the network settings. Next click the Connections tab and the add symbol. Enter the details of the remote Unity system to be a replication target and the connection mode: asynchronous replication, which takes an initial copy and then sends only incremental (changed) data on a schedule (recommended for most use cases), or synchronous replication, which mirrors each write to the remote system before it is acknowledged to the host. Finally configure replication on the storage resource you wish to replicate, as outlined under the Storage section below.

To configure replication see the Configuring EMC Unity Replication post.

data

Storage

Before using any disks in the system they must be allocated to a storage pool. When creating storage pools take into consideration the notes in the Architecture section above. To create a storage pool click Pools and the add symbol. Assign disks to the storage pool and select a RAID configuration; a storage pool can be made up of multiple performance tiers (disk types), each with its own RAID type.

The Unity array is able to provide both block level and file level storage. For block level resources click Block and iSCSI Interfaces. Use the add button to add iSCSI interfaces for use with block level storage, choose the interface(s), storage pool, and configure the networking settings. LUNs can be created and mapped to a host, subnet, or netgroup using the LUNs tab.

For file level resources click File and NAS Servers, click the add symbol to create a NAS server, choose the interface(s), storage pool, configure the networking settings, and select the sharing protocols to use. It is good practice to create at least one NAS server each on SPA and SPB, and distribute resources evenly. Once your NAS servers are ready you can create File Systems, and then SMB Shares or NFS Shares using the appropriate tabs.

During the creation of storage objects such as LUNs or file systems, you have the option to configure snapshots and replication. These features can also be configured at a later date by selecting the storage object and clicking the edit icon. Snapshots can be configured using one of the built-in policies or by creating your own, as described under the Data Protection section above. When creating replication sessions you need to specify a replication schedule and target.

The VMware page can be used to configure VVOLs, read more about this at Configuring VVOLs with EMC Unity.

pools

Events and Support

The Events page lists all alerts from information to critical, as well as a record of all jobs that have been initiated on the device. The Support page provides links to documentation, training, and support.

support

Upgrading EMC Unity OE

The EMC Unity features an active/active controller configuration designed to allow for non-disruptive software updates. However, it is still best practice to mitigate the risk by performing software updates out of core business hours. In this post we will quickly run through an Operating Environment (OE) upgrade for a newly commissioned Unity 300 array, which was installed using the EMC Unity Setup Guide. Arrays shipped with v4.0.1.8404134 include a letter advising the administrator to upgrade the software due to an issue with this version of the OE. The latest OE can be downloaded from EMC Downloads; you will need an EMC account for access.

From the Unity dashboard select the settings gear and click Software Upgrades; the current version will be listed. Click Start Upgrade. To ensure the system is ready to be upgraded click Perform Health Checks and address any issues that arise from the health check, otherwise click Next.

update1

Browse to the .gpg file downloaded earlier; once uploaded click Next.

update2

Confirm you are happy for the storage processors to individually reboot and click Next.

update3

Review the details on the summary page and click Finish.

update4

The software update will now commence; an ETA will be displayed in the top right hand corner.

update5

When the upgrade has completed click Reload Unisphere and you will be returned to the dashboard. Click the settings gear again and Software Upgrades, then verify that the installed version number is correct.

update6

The software update is now complete. You can also update disk firmware by selecting Disk Firmware from the settings menu and following the same steps outlined above.
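If you want to confirm the running OE version outside of Unisphere, the Unity REST API can be queried. The sketch below assumes the basicSystemInfo resource type, which recent Unity releases expose without authentication; treat the endpoint and field names as assumptions and verify them against the Unisphere Management REST API guide for your OE version.

```python
import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

unity = "192.168.1.50"  # placeholder - Unity management IP or FQDN

# basicSystemInfo is assumed to be the unauthenticated system summary resource;
# confirm the resource and field names against the REST API guide for your release.
url = f"https://{unity}/api/types/basicSystemInfo/instances?fields=name,model,softwareVersion"
resp = requests.get(url, headers={"X-EMC-REST-CLIENT": "true"}, verify=False, timeout=10)

for entry in resp.json().get("entries", []):
    content = entry.get("content", {})
    print(content.get("name"), content.get("model"), content.get("softwareVersion"))
```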

Nimble Storage Setup Guide

Nimble Storage is built on the unique Cache Accelerated Sequential Layout (CASL) architecture: a CPU-driven storage architecture capable of optimising performance and increasing usable capacity through dynamic caching, sequential data layout, and inline compression. Nimble devices are renowned for the simplicity of their setup and administration; in this post we’ll put that to the test and walk through the installation of a Nimble CS700 array.

Nimble arrays come configured with triple parity RAID as standard, which offers greater protection of data in the event of a drive failure, without impacting performance or overall capacity of the array. Furthermore should a drive fail then the rebuild process is significantly quicker since it only rebuilds the individual compressed blocks in use.

nimble

Requirements

  • Cabinet vertical space of 3U or 4U for each array, depending on model. The new arrays are 4U each.
  • Cat 6 and Gigabit Ethernet switch ports x 2 for management connections.
  • Cables and ports for your chosen connectivity. For both protocols you should use the full resources available and spread connections across 2 switches; the number of ports available depends on your ordered configuration. For iSCSI use at least 2 additional Cat 6 cables and GbE or 10GbE (recommended) ports; for FC use at least 2 OM3 or better Fibre Channel cables and ports.
  • At least 3 static IP addresses for FC setups or 5 for iSCSI.
  • Phillips screwdriver for installation.
  • A Windows based computer to run the initialisation and setup.
  • It’s worth checking the Nimble Storage Documentation site as there are lots of environment and product specific best practice guides available.

Nimble arrays are monitored remotely by Nimble Storage, so you will need to have the following ports open (a quick connectivity check is sketched after the list):

  • SSH: 2222 hogan.nimblestorage.com – Secure Tunnel connection to Nimble Storage Support.
  • HTTPS: 443 nsdiag.nimblestorage.com – AutoSupport and heartbeat monitoring.
  • HTTPS: 443 update.nimblestorage.com – Software updates.
  • HTTPS: 443 nsstats.nimblestorage.com – InfoSight analysis.
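Before relying on auto-support, you can confirm these outbound rules from an admin workstation on the management network (the array itself initiates the real connections) with a simple TCP connect test. A minimal Python sketch using only the standard library, with the destinations taken from the list above:

```python
import socket

# Destinations and ports from the list above
targets = [
    ("hogan.nimblestorage.com", 2222),
    ("nsdiag.nimblestorage.com", 443),
    ("update.nimblestorage.com", 443),
    ("nsstats.nimblestorage.com", 443),
]

for host, port in targets:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"OK    {host}:{port}")
    except OSError as err:
        print(f"FAIL  {host}:{port} ({err})")
```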

Unboxing

The following components are included:

  • Nimble 3U/4U array or 3U/4U expansion shelf.
  • Nimble front bezel, rail kit, and screws.
  • Nimble accessory kit containing Phillips screwdriver, KVM adapters, round-to-square hole rack adapters.
  • Expansion shelves include 1m and 3m SAS cables.
  • Power cables.

Racking

Separate the inner rails from the rail assemblies using the catch at the front end of the middle rail. Slide the inner rails into the retaining hooks on the side of the chassis and install the set screws to secure in place.

rack1

Install the rail assemblies by hooking each rail into the rack and sliding it down to lock into position. If the rack has round holes then use the square-hole adapter and secure into place inside the front and back posts of the rack with the screws included.

rack2

Slide the chassis into the rack; when you hear a click the chassis is locked into place. There are 2 built-in screws in the front handles to secure the array.

rack3

Cabling

Connect the cables for management and your chosen connectivity protocol, i.e. Fibre Channel or Ethernet, using all available ports where possible. For redundancy connect one member of each interface pair to one switch and the second member to a second switch with the same port configuration.

If you do not have a standard network configuration to follow or are unsure about cabling the array see the Nimble network topology options. The most common networking topology is likely to be similar to the image below, however with 4 data ports used for each controller (also applicable to Fibre Channel, swapping out for FC switches, ports and HBAs).

network

Plug the power cables into both power supplies for the array and any expansion shelves, use separate Power Distribution Units (PDUs) for redundancy. Once power is connected the storage should come online automatically but failing that there is a power button located on the front of the array.

Before connecting any additional expansion shelves make sure the array and expansion shelves are all powered on. Connect SAS cables in the order below, repeating steps 3 and 4 for any additional expansion shelves. Wait at least 3 minutes between connecting each expansion shelf to ensure firmware updates are complete. You can daisy-chain up to 3 shelves per bus; Nimble recommends that all-flash shelves are cabled directly to the array head where possible.

  • Connect the SAS OUT (expansion) port of controller A on the array to the SAS IN port of expander A on the first expansion shelf.
  • Connect the SAS OUT (expansion) port of controller B on the array to the SAS IN port of expander B on the first expansion shelf.
  • Connect the SAS OUT (expansion) port of expander A on the first expansion shelf to the SAS IN port of expander A on the next expansion shelf.
  • Connect the SAS OUT (expansion) port of expander B on the first expansion shelf to the SAS IN port of expander B on the next expansion shelf.

sas

Setup

There are two methods of applying a management IP to the new array; using the GUI from a Windows machine on the same subnet, or directly using the CLI. To use the GUI download the latest version of the Windows Toolkit from InfoSight, this includes Nimble Setup Manager. Note that if you are using a 32-bit version of Windows you will need to select a previous version that is 32-bit compatible.

Nimble Setup Manager scans the subnet for unconfigured Nimble arrays. Select the array to configure and click Next. Enter the array name, group name, network settings, and admin password, then click Next. (Groups are used to manage up to 4 arrays and pool storage. You can add the array as standalone by having it as the only array in its group). Accept the license agreement and click Finish.

nsm2

Alternatively you can use the CLI by connecting directly to the console of the active controller using a keyboard and monitor. Log in with the default username and password admin admin and execute the setup command to launch the CLI based setup. Accept the license agreement and configure the network settings at the relevant prompts, then opt to continue the setup using the GUI.

Once the management network has been configured open a web browser to the IP address. Log in to the Nimble OS web client with the admin password configured, the setup wizard will auto start.

nim1

The configuration settings in the first two pages of the setup wizard differ slightly depending on whether you are using FC or iSCSI. If you’ve ever set up an array with either protocol before you’ll find this process very straightforward; I’ll make references to both protocols just in case.

The first thing we need to do is to configure subnets for the required networks. For FC arrays this is easy as you’ll just have to confirm the management subnet. Ensure management only is selected as the traffic type.

If you are using iSCSI then in addition to the management subnet you will also configure a data subnet, or subnets, in accordance with your iSCSI fabric design. It is recommended that the management and data networks are separate subnets. Each subnet requires an iSCSI discovery IP address. IP Address Zones are used to divide data subnets into two, typically split using odd and even addresses, to avoid bottlenecks on interconnect links. You don’t need to worry about this unless you are implementing an advanced solution for a specific use case. Ensure data is selected as the traffic type. Once the subnet configuration is complete click Next.
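If you do implement IP Address Zones, the split really is just odd and even addresses within the data subnet. A minimal Python sketch of that split, using an example subnet:

```python
import ipaddress

data_subnet = ipaddress.ip_network("192.168.50.0/24")  # example iSCSI data subnet

# Zone by the last bit of the address: evens in one zone, odds in the other
even_zone = [ip for ip in data_subnet.hosts() if int(ip) % 2 == 0]
odd_zone = [ip for ip in data_subnet.hosts() if int(ip) % 2 == 1]

print(f"Even zone: {even_zone[0]} ... {even_zone[-1]} ({len(even_zone)} addresses)")
print(f"Odd zone:  {odd_zone[0]} ... {odd_zone[-1]} ({len(odd_zone)} addresses)")
```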

nimble1

On the interfaces page assign each interface to one of the subnets. Both controllers should be configured with a diagnostic IP address, whether you are using FC or iSCSI. Click Next.

nimble2

On the domain menu configure the domain and DNS server settings, click Next.

nim3

Configure the time zone and NTP server settings and click Next. Enter the email and auto support settings and click Finish. The initial setup is now complete and the browser will return to the management web client.

nim4

Before going any further you should ensure the Nimble OS is up to date by following the Updating Nimble OS post.

If you have cabled additional expansion shelves then these need to be activated. Browse to Manage, Arrays and click the array name. Notice that the expansion shelves are orange; click Activate Now.

activateshelves1

If you’re using Fibre Channel you’re probably wondering why the ports are named fc1, fc2, fc5, and fc6. This is to future-proof the array for the release of quad port FC HBAs by leaving an upgrade path open (fc3, fc4, fc7, and fc8). Hosts will need to be zoned as normal for FC connectivity and then added as initiators. You can create initiator groups and volumes by following the Configuring Nimble Storage guide. You may also want to see Configuring VVOLs with Nimble Storage.

EMC Unity Setup Guide

The EMC Unity product line is a flexible storage solution with a rich feature set and small datacentre footprint. One of EMC’s headline claims is that the product installs in 2 minutes and configures in 15; in this post we’ll put that to the test and walk through the setup of an EMC Unity 300 array.

EMC also offer a software defined version of the Unity technology in the form of a virtual storage appliance, read more about it at Deploying EMC Unity VSA.

unityhybrid

Architecture

The EMC Unity hybrid and all flash storage range implements an integrated architecture for block, file, and VMware VVOLs powered by Intel E5-2600 processors. The Disk Processor Enclosure (DPE) leverages dual storage processors and full 12-Gb SAS back-end connectivity to deliver high levels of performance and efficiency. Disk Array Enclosures (DAE) are added to scale out capacity up to 3 PB top end. There is concurrent support for native NAS, iSCSI, and Fibre Channel protocols, whilst the unit itself takes up less rack space than its competitors. Unity arrays can be managed from the HTML5 web client, or through the CloudIQ service, and offer a full range of enterprise storage features. For more information see the Unity platform white paper.

Some considerations when creating storage pools: typically we want to configure fewer storage pools to reduce complexity and increase flexibility. However configuring multiple storage pools may be required if you want to separate workloads for different I/O profiles or use FAST Cache. When sizing a storage pool remember that all data written to LUNs, file systems, and datastores is stored in the pool, as well as configuration information, change tracking, and snapshots. Storage pools must maintain free capacity to operate; EMC recommends at least 10%.

You will need to make design decisions based on your environment around storage pool capacities and configured RAID protection. The Unity range offers RAID 1/0, RAID 5, or RAID 6 configured at storage pool level. EMC generally recommends smaller RAID widths as providing the best performance and availability, at the cost of slightly less usable capacity, e.g. for RAID 6 use 4+2 or 6+2 instead of 10+2 or 14+2. Unity automatically reserves 1 out of every 30 drives of the same type for use as a hot spare; you can reduce the number of hot spare drives by decreasing the number of individual drive types.

Unity arrays use the first 4 drives to store configuration information and critical system data. These are known as the system drives and run from DPE Disk 0 through to DPE Disk 3. The system drives cannot be used as hot spares but can be added to storage pools in smaller configurations, if no other disks are available. The usable capacity of system drives is reduced by around 100 GB, so storage pools utilising system drives should use a smaller RAID width. For larger configurations with high drive counts EMC does not recommend using the system drives, as heavy client workload may slow down management operations. This restriction does not apply to all-flash.

Requirements

In addition to the boxed system components you will need:

  • Cabinet vertical space of 2U for the DPE, 2U for each optional 25-drive DAE, or 3U for each 15-drive DAE.
  • Cat 5 or better and Gigabit Ethernet switch ports x 2 for management connections.
  • Cables and ports for your chosen connectivity: 4 x Converged Network Adapter (CNA) ports, which can be set to 10GbE or to 4, 8, or 16 Gbps Fibre Channel (once set they cannot be changed), and 4 x 10GbE ports for file/iSCSI.
  • Slotted or Phillips screwdriver for installation.
  • A Windows based computer to run the initialisation and setup.
  • If you are unable to connect the Windows computer to the same subnet as the EMC Unity then you will need a USB drive to configure the array with a management IP address.

Unboxing

The Unity 300 base comes with the following:

  • Disk Processor Enclosure (DPE) 2U component.
  • Front bezel for DPE.
  • Rail kit consisting of 2 rails and 6 screws.
  • Accessory kit consisting of an anti-static wrist strap, cable ties, stickers, etc.
  • Power cords.

Any additional disk shelves contain:

  • Disk Array Enclosure (DAE) 2U component (each).
  • Front bezel for DAE.
  • 2 snap in rails, 3 screws per rail.
  • Power cords.
  • Mini-SAS HD cables (1 metre cables connect DAEs together, 2 metre cables connect to the DPE).

Racking

EMC recommends installing the DPE at the bottom of the cabinet and installing any additional DAEs above.

The rails clip into the rack using spring clips at the front and rear. Start with the rear and secure with 1 x M5 screw on each side once the rails are in place. The array then slides in and is secured with 2 x M5 screws per rail at the front. Do not tighten the screws until they are all in place. Once the array is racked clip on the front bezel; a key is also enclosed.

If you require further assistance racking the devices see page 19 of the EMC Unity Installation Guide.

Cabling

First connect the 2 management ports to the switch, management ports have a white border around them, service ports yellow. Next plug in the cables for your chosen front end connectivity, i.e. Fibre Channel or Ethernet. Front end ports need to be connected and configured symmetrically across both storage processors to facilitate high availability. Furthermore you should use all front-end ports that are installed in the system, so that workload is spread across as many resources as possible.

When configuring switch ports for iSCSI and NAS configure Jumbo frames (MTU 9000) for optimum performance. NAS ports should also be configured with LACP grouped per storage processor, to provide path redundancy and performance improvements.
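Once switch ports and host interfaces are set to MTU 9000 it is worth validating jumbo frames end to end. The usual trick is a ping with the don't-fragment bit set and an 8972-byte payload (9000 bytes minus the 20-byte IP header and 8-byte ICMP header). Below is a minimal Python wrapper around the Linux ping utility; the target address is an example and a Linux admin host is assumed (on Windows the equivalent is ping -f -l 8972).

```python
import subprocess

target = "192.168.50.10"  # example iSCSI/NAS interface on the array

# MTU 9000 minus the 20-byte IP header and 8-byte ICMP header leaves 8972 bytes of payload
payload = 9000 - 20 - 8

# -M do sets the don't-fragment flag on Linux; the ping fails if jumbo frames
# are not enabled on every hop between this host and the array
result = subprocess.run(
    ["ping", "-M", "do", "-c", "3", "-s", str(payload), target],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)
```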

If you have purchased additional DAEs then these need to be connected using the included SAS cables. There are 2 on-board 12Gb SAS ports in each storage processor in the DPE. An additional 4-port 12 Gb SAS I/O module can be provisioned with the higher end Unity products but in general this is only required for extremely high bandwidth.

When cabling DAEs to the DPE, balance them as evenly as possible across all available buses. The drives in the DPE are serviced by SAS Bus 0; therefore, the first DAE should be cabled to SAS Bus 1. Daisy chain additional DAEs in a continuation of the following order (a small helper sketch of the pattern follows the list):

  • DAE 1 connects to SAS Bus 1 (on-board port 1)
  • DAE 2 connects to SAS Bus 0 (on-board port 0)
  • DAE 3 connects to SAS Bus 1 (on-board port 1)
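The pattern simply alternates between the two on-board ports. A tiny Python sketch of the bus assignment above, extended to additional shelves:

```python
# DPE drives sit on SAS Bus 0, so DAEs alternate starting with Bus 1
def sas_bus_for_dae(dae_number: int) -> int:
    """Return the SAS bus (on-board port) a DAE should be cabled or daisy-chained to."""
    return dae_number % 2  # DAE 1 -> bus 1, DAE 2 -> bus 0, DAE 3 -> bus 1, ...

for dae in range(1, 7):
    print(f"DAE {dae} -> SAS Bus {sas_bus_for_dae(dae)}")
```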

cabling

If you are attaching a large number of DAEs see page 33 of the EMC Unity Installation Guide for further cabling examples and a guide to the stickers included.

The power cables included with the array are colour coded with an intended use of: grey for Power Distribution Unit (PDU) A, black for PDU B. Once the array has power it will take approximately 10 – 15 minutes to power up.

unitypower

Setup

To access the web UI for setup we have a couple of options for automatic or manual IP addressing.

Automatic – if the array has access to network DHCP and DNS servers (with dynamic DNS enabled) then it will automatically be assigned an IP address. After power up if the SP Fault LED is solid blue then a management address has been assigned. This IP is dynamically added to DNS in the format of serialnumber.dnszone. If the SP Fault LED alternates between solid blue and flashing amber then a management address has not been assigned as the DHCP or DNS server could not be reached.

Manual – download and install the EMC Connection Utility. The Connection Utility gives you two options; automatically detect unconfigured storage systems in the same subnet as your Windows client, or manually configure an IP in a configuration file for use with a USB flash drive which the array automatically reads.

connection1

Depending on how IP addressing has been assigned open a browser and enter the IP address manually configured, or the DNS entry (serialnumber.dnszone). Log in to Unisphere using the default credentials admin Password123#.

vsa1

The Initial Configuration Wizard launches the first time you log in. This self-explanatory wizard guides you through the basic setup of the array; any settings you skip here can be configured later through the appropriate menus.

For a more in depth look at the configuration settings and Unisphere interface see the EMC Unity Configuration Guide, otherwise continue with the configuration wizard as outlined below.

vsa2

Accept the license agreement and click Next.

vsa3

Configure the admin and service passwords and click Next.

vsa4

Install the license file provided by your EMC vendor and click Next.

vsa5

Configure DNS settings and click Next.

vsa6

Configure NTP server settings and click Next.

vsa7

Create the storage pools required for your environment, see the notes on storage pools above under the Architecture heading. Click Next.

vsa8

Configure the email alert settings for your system and click Next.

vsa9

If applicable configure the iSCSI interfaces for use with the Unity system and click Next.

vsa10

If you intend on creating File level storage resources on the Unity system then configure at least one NAS server for each storage processor. NAS Servers require a separate IP to be configured for network access.

vsa11

The configuration wizard is now complete, click Close.

vsa12

It is good practice to update the Unity Operating Environment (OE) upon install of the new system. Arrays shipped with v4.0.1.8404134 will include a letter advising the administrator to upgrade the software due to an issue with this version of the OE. See Upgrading EMC Unity OE for further assistance.

dashboard

That’s it: the initial configuration is complete, and it is incredibly quick and easy provided all the pre-prep is done beforehand. You can now begin the process of adding hosts and presenting LUNs. Any configuration of additional features is done through the HTML5 Unisphere web client; for more information see the EMC Unity Configuration Guide. Once storage resources are created you can configure replication between Unity systems by following the Configuring EMC Unity Replication guide.

See also Configuring VVOLs with EMC Unity.