Tag Archives: Storage

ESXi 6.5 FCoE Adapters Missing

After installing or upgrading to ESXi 6.5, FCoE adapters and datastores are missing. In this case the hardware in use is an HP ProLiant BL460c Gen9 server with HP FlexFabric 10Gb 2-port 536FLB adapters, although this seems to have been a problem for other vendors (see here) and versions too.

This issue should be resolved with a vendor-provided driver that has the FCoE auto-discovery on boot parameter enabled. Cross-reference your hardware against the VMware Hardware Compatibility Guide here, and confirm you are using the correct version of the bnx2fc driver and firmware. If no updated driver is available from the vendor, review the workarounds outlined below.
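To check which driver and firmware a NIC is currently running, you can query it from the ESXi shell. A quick sketch, assuming vmnic2 is one of the FCoE-capable ports:

esxcli network nic get -n vmnic2
esxcli software vib list | grep bnx2

The first command reports the driver name, driver version, and firmware version for the NIC; the second lists any bnx2 driver VIBs installed on the host.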

Stateful Installs

Credit to this article. SSH onto the host and run the following commands.

esxcli fcoe adapter list lists the discovered FCoE adapters; at this stage there will be no results.

esxcli fcoe nic list lists the adapters available as potential FCoE candidates. Locate the name of the adapter.

esxcli fcoe nic enable -n vmnicX enables the adapter; replace vmnicX with the adapter name, for example vmnic2.

esxcli fcoe nic discover -n vmnicX enables discovery on the adapter; replace vmnicX with the adapter name.

Run esxcli fcoe adapter list again; you should now see the FCoE adapters listed.
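Putting the workaround together, a typical session looks like the following, assuming vmnic2 turned out to be the FCoE-capable NIC:

esxcli fcoe adapter list            # no results yet
esxcli fcoe nic list                # identify the candidate NIC
esxcli fcoe nic enable -n vmnic2
esxcli fcoe nic discover -n vmnic2
esxcli fcoe adapter list            # the FCoE adapter should now be listed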

The storage adapters should now be showing in the vSphere web client. However, if you are using stateless installs with Auto Deploy, this workaround is not persistent and is lost at reboot.


Stateless Installs

Credit to this article. We were able to create a custom script bundle that enables discovery on the FCoE adapters as part of the deploy rule, using the steps below. Custom script bundles open up a lot of possibilities with Auto Deploy, but at this stage they are CLI only. I also noticed that if you create a deploy rule with a script bundle from the CLI, it shows in the GUI, but if you then edit that rule in the GUI (for something unrelated, e.g. an updated host profile) the script bundle is removed without warning. This is something you would need to weigh up against your environment; if you are already using the CLI to configure deploy rules it shouldn't be a problem.

PowerCLI can now be installed directly through PowerShell; if you don't already have PowerCLI installed, see here.

  • First up we'll need to create the script on a Linux/Unix system. I just used a test ESXi host we had kicking about, over SSH. Type vi scriptname.sh, replacing scriptname with an appropriate name for your script.
  • The file will open, type i to begin editing.
  • On the first line enter #!/bin/ash followed by the relevant enable and discover commands from the section above. You can see in the example below the commands for enabling vmnic2 and vmnic3 as FCoE adapters.

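A minimal version of the script, based on the commands above and assuming vmnic2 and vmnic3 are the FCoE-capable adapters, looks like this:

#!/bin/ash
# Enable and discover FCoE on each converged network adapter
esxcli fcoe nic enable -n vmnic2
esxcli fcoe nic discover -n vmnic2
esxcli fcoe nic enable -n vmnic3
esxcli fcoe nic discover -n vmnic3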

  • Press Escape to leave insert mode, then type :wq to save the changes and close the file.
  • Next we need to create the script bundle that will be imported into Auto Deploy. Type tar -cvzf bundlename.tgz scriptname.sh


  • Copy the script bundle with the .tgz extension to your local machine, or the computer from where you will be using PowerCLI to create the deploy rule. In my case I copied the file over with WinSCP.
  • You should also have an ESXi image in zip format; make a note of its location. Add the script bundle and the ESXi software depot by running the following commands: Add-ScriptBundle location\bundlename.tgz and Add-EsxSoftwareDepot location\file.zip (see the example below). If you need further assistance with building custom images or using PowerCLI to manage Auto Deploy see the VMware Auto Deploy 6.x Guide and How to Create Custom ESXi Images posts.

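For example, with illustrative paths and file names (substitute your own):

Add-ScriptBundle C:\AutoDeploy\autodeploy-script.tgz
Add-EsxSoftwareDepot C:\AutoDeploy\HPE-ESXi-6.5.0-Build-5146846-depot.zip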

  • Build the deploy rule using your own variables; again, if you're already using Auto Deploy I'm assuming you know this bit, we're just adding an additional item in for the script bundle. See the guide referenced above if you need assistance creating deploy rules. I have used:
    • New-DeployRule -Name "Test Rule" -Item "autodeploy-script", "HPE-ESXi-6.5.0-Build-5146846", LAB_Cluster -Pattern "ipv4=192.168.0.101" | Add-DeployRule


  • The deploy rule is created and activated; I can now see it in the Auto Deploy GUI in the vSphere web client, with the associated script bundle. When the host boots from the deploy rule the script is extracted and executed, and the FCoE adapters are automatically enabled and discovered.


  • If you don't use the | Add-DeployRule parameter then the deploy rule will be created but show as inactive. You can activate it using the GUI (or from PowerCLI, as shown below), but do not edit the rule using the GUI or the script bundle will break.
  • If you are updating an existing image then don’t forget to remove cached rules by remediating host associations, under the Deployed Hosts tab.
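If you prefer to activate an inactive rule from PowerCLI rather than the GUI, something along these lines should work, using the rule name from the example above:

Get-DeployRule -Name "Test Rule" | Add-DeployRule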

VMware vSAN 6.7 Install Guide

VMware vSAN utilises server-attached flash devices and local hard disk drives to create a shared datastore across hosts in a vSphere cluster. VMware vSAN achieves high availability by adding a software layer that leverages existing server hardware to provide the same resiliency and features as expensive SAN, NAS, or DAS arrays. Further to this, vSAN is uniquely embedded within the hypervisor kernel, directly in the I/O path, allowing it to make rapid data placement decisions without the installation of additional VIBs or virtual appliances. This post intends to give an overview of vSAN 6.5/6.7 and how to enable it.


For further reading visit the VMware Documentation Centre and expand vSAN under the relevant version.


Key Features

  • Data protection and availability with built-in failure tolerance, asynchronous long distance replication, and stretched clusters between geographically separate sites.
  • Leverages distributed RAID and cache mirroring to protect data against loss of a disk, host, network, or rack.
  • Minimises storage latency by accelerating read/write disk I/O traffic with built-in caching on server attached flash devices.
  • Software based deduplication, compression, and data-at-rest encryption (v6.6 and higher) with minimal CPU and memory overhead.
  • Easily scale storage capacity and performance by adding new nodes or drives without disruption.
  • VM-centric storage policies to automate balancing and provisioning of storage resources and QoS.
  • Fully integrates with the VMware stack including vMotion, DRS, High Availability, Fault Tolerance, Site Recovery Manager, vRealize Automation, vRealize Operations, and vSphere Integrated Containers.

Requirements

  • Between 3 and 64 hosts for a standard cluster; a two-node cluster can also be implemented with the use of an offsite witness host.
  • Each capacity-contributing host in the cluster must contain at least one flash device for cache and at least one flash device or HDD for persistent storage.
  • SATA/SAS HBA or RAID controller in pass-through mode or RAID 0 mode.
  • All hosts participating in a vSAN cluster must be connected to a Layer 2 or Layer 3 network using either IPv4 or IPv6.
  • If you are using vSAN 6.5 or earlier then multicast must be enabled on the physical switches that handle vSAN traffic; vSAN 6.6 and higher uses unicast.
  • Host bandwidth to the vSAN network must be at least 1Gbps for Hybrid configurations or 10Gbps for All-Flash.
  • If you are deploying vSAN to your existing hardware and not using the VMware hyper-converged software stack then check the Hardware Compatibility Guide.
  • For compatibility with additional VMware products see the Product Interoperability Matrix.
  • Before implementing vSAN review Designing and Sizing a Virtual SAN Cluster.

Licensing

VMware vSAN can be added to any version of vSphere and is licensed per CPU, per VM, or per concurrent user. The current licensing model comes in three tiers: standard, advanced, and enterprise, as well as standard and advanced ROBO (Remote Office/Branch Office) versions. Features such as data-at-rest encryption and stretched clusters need enterprise licensing. RAID 5/6 erasure coding, deduplication and compression require advanced licensing. For full details see the licensing guide for the relevant vSAN version: vSAN 6.5 | vSAN 6.6 | vSAN 6.7.

vSAN Ports

Before configuring vSAN each host in the cluster must be configured with a VMkernel port for use with vSAN traffic.

In the vSphere client (HTML5) or vSphere web client, browse to each of the hosts in the designated cluster on which you intend to use vSAN, open the Configure tab and select Networking. Click VMkernel Adapters and then the Add Networking icon. Ensure the connection type is VMkernel Network Adapter and click Next.


Select a New standard switch and click Next.


Assign physical adapters to the switch using the green plus symbol. For production environments make sure multiple physical network adapters are assigned for redundancy. When you have finished the network adapter configuration click Next.


Configure a name for the VMkernel port and a VLAN ID if required. Ensure Virtual SAN is selected under enabled services and click Next.


Configure the network settings for the VMkernel port and click Next.


On the Summary page click Finish.

For lab environments with limited physical interfaces, you can instead select the Management Network and click the Edit Settings icon. Add Virtual SAN traffic to the list of enabled services and click Ok. The vSAN traffic will now share the management network; this is obviously not recommended for production workloads.
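Alternatively, vSAN traffic can be tagged on an existing VMkernel port from the ESXi shell. A quick sketch, assuming the port in question is vmk1:

esxcli vsan network ip add -i vmk1
esxcli vsan network list

The second command confirms which VMkernel interfaces are carrying vSAN traffic.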

vSAN 6.7 Configuration

In vSphere 6.7 the HTML5 client now includes support for vSAN. To enable vSAN browse to the appropriate cluster in the vSphere client and click the Configure tab. Expand vSAN and select Services; vSAN is turned off by default, so click Configure.


Select the vSAN configuration type and click Next. The standard option is a single site cluster where all hosts are at one site. A two host cluster with a third witness host (not contributing capacity), or a stretched cluster across sites, can also be used.


Enable any additional services that are required, these can also be enabled later. Click Next.


Select the disks to use in the vSAN configuration and click Next. For each capacity-contributing host, one flash device should be selected for the cache tier, and at least one more device for the capacity tier.


If your vSAN cluster spans multiple racks or chassis then you may have included fault domains in your vSAN design. Configure any required fault domains here and then click Next.


Review the settings in the summary page and click Finish. The selected resources are pooled into a single vSAN datastore and you can start provisioning machines right away.
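To double-check the configuration from the command line, cluster membership and the disks claimed by vSAN can be verified from the ESXi shell of any member host, for example:

esxcli vsan cluster get
esxcli vsan storage list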


Additional vSAN services such as deduplication and compression can be configured after initial setup using the menu options under vSAN in the cluster Configuration tab. The vSAN menu options in the cluster Monitor tab also provide a number of good monitoring tools and dashboards.


vSAN 6.5/6.6 Configuration

For vSAN 6.6.2 and earlier the required features need enabling from the vSphere web client; only vSAN 6.7 has HTML5 support.

To enable vSAN browse to the appropriate cluster in the vSphere web client and click the Configure tab. Expand Virtual SAN and select General; you will see a message that Virtual SAN is not enabled, so click Configure.


By default any suitable disks will be added to the vSAN datastore. To manually select disks, change the disk claiming setting to Manual. Review the other capability options by hovering over the grey information circle, select any appropriate features, and click Next. If you change any settings on the capabilities page, additional pages will be added to the wizard for configuring those settings.


The network validation page will confirm that each host in the cluster has a valid vSAN VMkernel port; click Next.


Review the details on the summary page and click Finish. vSAN will now pool the selected resources into the vSAN datastore and you can start provisioning machines right away. vSAN creates and presents a single datastore containing all disks for each vSphere cluster. You can amend vSAN settings or add additional capabilities at a later date using the menu options under the Virtual SAN heading of the Configure tab of a vSphere cluster.

EMC VNXe Setup Guide

The VNXe is the most affordable hybrid and all-flash array across the EMC product range. Although the future potentially sits with the newly released Unity line, the VNXe remains a popular, flexible, and efficient storage solution for SMBs and ROBOs. This post will walk through the setup of an EMC VNXe device.


Architecture

The VNXe 3200 is powered by dual Intel Xeon E5-2407 4-core processors, providing up to 3x the performance of its 3150 predecessor. The Disk Processor Enclosure (DPE) leverages dual controllers and 6Gb SAS back-end connectivity to deliver high levels of availability and efficiency, whilst lowering storage costs per I/O. Disk Array Enclosures (DAEs) can be added to scale out capacity to a top end of 500 TB. There is concurrent support for NAS and SAN, with CIFS, SMB3, NFS, iSCSI and Fibre Channel (up to 8Gb) protocols, whilst the unit itself has a small datacentre footprint. For more information see the EMC VNXe Data Sheet.

Some considerations when creating storage pools: typically we want to configure fewer storage pools to reduce complexity and increase flexibility; however, configuring multiple storage pools may be required if you want to separate workloads. Storage pools must maintain free capacity to operate; EMC recommends at least 10%. You will need to make design decisions based on your environment around storage pool capacities and configured RAID protection. The VNXe range offers multicore RAID 1/10/5/6 configured at storage pool level. EMC generally recommends smaller RAID widths as providing the best performance and availability, at the cost of slightly less usable capacity, e.g. for RAID 6 use 4+2 or 6+2 instead of 10+2 or 14+2.
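As a rough illustration of the capacity trade-off, assuming 1.2 TB drives (figures purely illustrative): a RAID 6 4+2 group yields roughly 4 x 1.2 TB = 4.8 TB usable from 6 drives (about 67%), whereas a 14+2 group yields roughly 14 x 1.2 TB = 16.8 TB usable from 16 drives (about 87%), so the wider group is more space efficient but each failure and rebuild involves more drives.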

VNXe arrays use the first 4 drives to store configuration information and critical system data; these are known as the system or vault drives and run from DPE Disk 0 through to DPE Disk 3. The system drives can be added to storage pools, however the usable capacity of the system drives is reduced, therefore storage pools utilising system drives should use a smaller RAID width. For larger configurations with high drive counts EMC does not recommend using the system drives, as heavy client workload may slow down management operations (this does not apply to all-flash).

Requirements

In addition to the boxed system components you will need:

  • Cabinet vertical space of 2U for the DPE, and 2U for each optional 25-drive DAE.
  • 2 x Cat 5e or better GbE management connections.
  • Between 2 and 8 Cat 5e or better GbE or 10GbE data connections, or between 2 and 8 FC connections (up to 8Gb), depending on your chosen connection protocol.
  • A Windows based computer to run the initialisation and setup.
  • If you are unable to connect the Windows computer to the same subnet as the EMC VNXe then you will need a USB drive to configure the array with a management IP address.
  • Phillips screwdriver for installation.

Unboxing

The VNXe base comes with the following:

  • Disk Processor Enclosure (DPE) 2U component consisting of 12 x 3.5″ bays or 25 x 2.5″ bays.
  • Rail kit consisting of 2 adjustable rails and 10 screws, or 2 snap-in rails and 6 screws.
  • Accessory kit consisting of a mini-USB adaptor, cable ties, stickers, etc.
  • Front bezel for DPE.
  • Power cords.

Any additional disk shelves contain:

  • Disk Array Enclosure (DAE) 2U component consisting of 12 x 3.5″ bays or 25 x 2.5″ bays.
  • Rail kit consisting of 2 adjustable rails and 10 screws, or 2 snap-in rails and 6 screws.
  • Front bezel for DAE.
  • Power cords.
  • Mini-SAS and mini-SAS HD to mini-SAS cables.

Racking

EMC recommends installing the DPE at the bottom of the cabinet and installing any additional DAEs above. The snap-in method is the most commonly used rail set and the one we will use here. For assistance with racking the adjustable rails see page 16 of the EMC VNXe Install Guide.

Locate the left and right markings on each rail. Align the 2U key tabs with the U-space in the rear rack channel. Push the key tabs and adaptors into the rear mounting holes until the spring clips snap into place. Round the front, push in the spring clip and release it once the rail is lined up with the mounting holes. Secure the rear of the rail using 1 x M5 screw on each side.

Slide the DPE into the rails until it clicks into the rear tabs on each rail. The tabs secure and support the rear of the enclosure; the front is secured using 2 x M5 screws on each side. Repeat the process for any additional DAEs.

 


Cabling

First connect the 2 management ports to the switch; management ports have a LAN/management symbol above them. Do not use the service ports, which have a wrench/spanner symbol above them. Next plug in the cables for your chosen front-end connectivity, i.e. Fibre Channel or Ethernet. Front-end ports need to be connected and configured symmetrically across both storage processors to facilitate high availability. Furthermore, you should use all front-end ports that are installed in the system, so that workload is spread across as many resources as possible. NAS ports should also be configured with LACP, grouped per storage processor, to provide path redundancy and performance improvements.

If you have purchased additional DAEs then these need to be connected using the included SAS cables. There are 2 on-board 6Gb SAS ports in each storage processor in the DPE. When cabling DAEs to the DPE, balance them as evenly as possible across all available buses. The drives in the DPE are serviced by SAS Bus 0; therefore, the first DAE should be cabled to SAS Bus 1. Connect SP A SAS Port 1 to DAE 1 Link Controller Card (LCC) A, and SP B SAS Port 1 to DAE 1 LCC B.


The mini-SAS HD connectors are used for the DPE ports; the mini-SAS connectors are used for DAE ports. Mini-SAS to mini-SAS cables are used for cabling DAEs together. If you are attaching additional DAEs see page 28 of the EMC VNXe Install Guide.

The power cables included with the array are colour coded: grey for Power Distribution Unit (PDU) A, black for PDU B. Once the array has power it will take approximately 10 – 15 minutes to power up. Finally, clip the front bezels into place and secure them with the key included.


Setup

To access the web UI for setup we have a couple of options for automatic or manual IP addressing.

Automatic – if the array has access to network DHCP and DNS servers (with dynamic DNS enabled) then it will automatically be assigned an IP address. After power up, if the SP Fault LED is solid blue, a management address has been assigned. This IP is dynamically added to DNS in the format serialnumber.domain. If the SP Fault LED alternates between solid blue and flashing amber, a management address has not been assigned because the DHCP or DNS server could not be reached.

Manual – download and install the Connection Utility from EMC Downloads. The Connection Utility gives you two options: automatically detect unconfigured storage systems in the same subnet as your Windows client, or manually configure an IP in a configuration file for use with a USB flash drive, which the array automatically reads.

Depending on how IP addressing has been assigned, open a browser and enter either the manually configured IP address or the DNS entry (serialnumber.dnszone). Log in to Unisphere using the default credentials admin / Password123#.


The Initial Configuration Wizard launches the first time you log in. This self-explanatory wizard guides you through the basic setup of the array; any settings you skip here can be configured later through the appropriate menus.


Once the configuration wizard is complete you will be returned to the home dashboard. It is recommended that the operating system is updated straight away. This can be achieved from the Settings drop-down menu by selecting Update software. Software can either be obtained online direct from the VNXe, or downloaded from EMC Downloads and then uploaded to the array. If you skipped the configuration wizard there are some basic configuration settings below to get you started.

First browse to the Management Settings page of the Settings drop-down menu. Under the General tab we can configure the system name and management network settings. The Network tab features DNS settings, NTP settings, and remote logging.


To apply a license (.lic file provided by EMC) go to Settings, Manage Licenses; upload and install the license file. Also under Settings select Configure alerts, connect to EMC and configure SMTP and alert settings here.


It is recommended that physical network interfaces are aggregated together. To configure link aggregation browse to Settings, More Configuration, Advanced Configuration. Tick the Aggregation box.


Storage pools are configured under System, Storage Pools. You will see 2 default pools: Hot Spare Pool and Unconfigured Disks. To configure the number of hot spares, or to configure a storage pool and RAID group, select the appropriate pool and click Configure Disks. Follow the Disk Configuration Wizard.


To change the admin password at any time go to Settings, User administration. To enable SSH (optional) navigate to Settings, Service System and enter the service password. Select Enable SSH and click Execute service action.

You can now move on to configure the chosen protocol for the array, whether that be creating CIFS/NFS servers and shares through Settings, Manage Shared Folder Server Settings, or presenting iSCSI or FC storage through Hosts or Settings, iSCSI Server Settings. For further assistance with the VNXe GUI see EMC Unisphere for VNXe.