ESXi 6.5 FCoE Adapters Missing

After installing or upgrading to ESXi 6.5, FCoE adapters and datastores are missing. In this case the hardware in use is an HP ProLiant BL460c Gen9 server with HP FlexFabric 10Gb 2-port 536FLB adapters, although this seems to have been a problem for other vendors (see here) and versions too.

This issue should be resolved with a driver provided by the vendor which has the FCoE auto discovery on boot parameter enabled. Cross-reference your hardware against the VMware Hardware Compatibility Guide here, and confirm you are using the correct version of the bnx2fc driver and firmware. If no updated driver is available from the vendor, review the workarounds outlined below.
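
To confirm the driver and firmware versions currently in use, you can query the NIC directly from the host over SSH. A minimal sketch, assuming vmnic2 is the adapter in question:

# Show driver name, driver version and firmware version for the NIC
esxcli network nic get -n vmnic2

# List installed driver VIBs and filter for the bnx2 family
esxcli software vib list | grep bnx2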

Stateful Installs

Credit to this article: SSH onto the host and run the following commands.

esxcli fcoe adapter list lists the discovered FCoE adapters; at this stage there will be no results.

esxcli fcoe nic list lists the adapters available as potential FCoE candidates. Locate the name of the adapter.

esxcli fcoe nic enable -n vmnicX enables the adapter; replace vmnicX with the adapter name, for example vmnic2.

esxcli fcoe nic discover -n vmnicX enables discovery on the adapter; replace vmnicX with the adapter name.

esxcli fcoe adapter list lists the discovered FCoE adapters; you should now see the FCoE adapters listed.
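
Putting the sequence together for a single adapter, with vmnic2 as an example name:

esxcli fcoe adapter list             # no results yet
esxcli fcoe nic list                 # identify the candidate NIC, e.g. vmnic2
esxcli fcoe nic enable -n vmnic2     # enable the adapter for FCoE
esxcli fcoe nic discover -n vmnic2   # trigger discovery
esxcli fcoe adapter list             # the FCoE adapter should now appear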

The storage adapters should now be showing in the vSphere web client. However, if you are using stateless installs with Auto Deploy, this workaround is not persistent and is lost at reboot.


Stateless Installs

Credit to this article: we were able to create a custom script bundle that enables discovery on the FCoE adapters as part of the deploy rule, using the steps below. Custom script bundles open up a lot of possibilities with Auto Deploy, but at this stage they are CLI only. I also noticed that if you create a deploy rule with a script bundle from the CLI, the bundle shows in the GUI, but if you then edit that rule in the GUI (for something unrelated, e.g. an updated host profile) it removes the script bundle without warning. This is something you would need to weigh up against your environment; if you are already using the CLI to configure deploy rules it shouldn't be a problem.

PowerCLI can now be installed directly through PowerShell; if you don't already have PowerCLI installed see here.
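
For reference, a minimal sketch of installing PowerCLI from the PowerShell Gallery:

Install-Module -Name VMware.PowerCLI -Scope CurrentUser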

  • First up we'll need to create the script on a Linux / Unix system. I just used a test ESXi host we had kicking about, over SSH. Type vi scriptname.sh, replacing scriptname.sh with an appropriate name for your script.
  • The file will open; type i to begin editing.
  • On the first line enter #!/bin/ash, followed by the relevant enable and discover commands from the section above. The example below shows the commands for enabling vmnic2 and vmnic3 as FCoE adapters.

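A minimal sketch of the script, assuming vmnic2 and vmnic3 are the FCoE-capable adapters:

#!/bin/ash
# Enable and discover FCoE on each adapter at boot
esxcli fcoe nic enable -n vmnic2
esxcli fcoe nic discover -n vmnic2
esxcli fcoe nic enable -n vmnic3
esxcli fcoe nic discover -n vmnic3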

  • Press escape to leave the text editor and type :wq to save changes to the file and close.
  • Next we need to create the script bundle that will be imported into Auto Deploy. Type tar -cvzf bundlename.tgz scriptname.sh
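
For example, using hypothetical file names (the bundle name autodeploy-script matches the item referenced in the deploy rule later in this post):

tar -cvzf autodeploy-script.tgz enable-fcoe.sh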


  • Copy the script bundle with the .tgz extension to your local machine, or the computer where you will be using PowerCLI to create the deploy rule. In my case I copied the file over with WinSCP.
  • You should also have an ESXi image in zip format; make a note of the location. Add the script bundle and the ESXi software depot by running the following commands: Add-ScriptBundle location\bundlename.tgz and Add-EsxSoftwareDepot location\file.zip. If you need further assistance with building custom images or using PowerCLI to manage Auto Deploy see the VMware Auto Deploy 6.x Guide and How to Create Custom ESXi Images posts.
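
A sketch of the two commands with hypothetical paths and file names:

Add-ScriptBundle C:\AutoDeploy\autodeploy-script.tgz
Add-EsxSoftwareDepot C:\AutoDeploy\HPE-ESXi-6.5.0-Build-5146846-depot.zip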


  • Build the deploy rule using your own variables; if you're already using Auto Deploy I'm assuming you know this bit, we're just adding an additional item for the script bundle. See the guide referenced above if you need assistance creating deploy rules. I have used:
    • New-DeployRule -Name "Test Rule" -Item "autodeploy-script","HPE-ESXi-6.5.0-Build-5146846",LAB_Cluster -Pattern "ipv4=192.168.0.101" | Add-DeployRule


  • The deploy rule is created and activated; I can now see it in the Auto Deploy GUI in the vSphere web client, with the associated script bundle. When the host boots from the deploy rule the script is extracted and executed, and the FCoE adapters are automatically enabled and discovered on boot.


  • If you don't pipe the new rule to Add-DeployRule then the deploy rule will be created but show as inactive. You can activate it using the GUI, but do not edit the rule using the GUI or the script bundle will break.
  • If you are updating an existing image then don’t forget to remove cached rules by remediating host associations, under the Deployed Hosts tab.

vCenter Server Appliance Integrated TFTP Server

This post covers the steps required to use the vCenter Server Appliance for Auto Deploy, with the built-in TFTP server in vSphere 6.5. For more information on Auto Deploy, and to see the process for creating ESXi images and deploy rules to boot hosts, see the VMware Auto Deploy 6.x Guide. This post assumes that you have a working vCenter Server Appliance, and may be of use if you have recently migrated from Windows vCenter to VCSA.

Enable Auto Deploy

Open the vSphere web client and click System Configuration, Nodes. Select the vCenter Server and open the Related Objects tab. The Auto Deploy, ImageBuilder Service, and VMware vSphere ESXi Dump Collector services should all be set to Automatic and Running.

To start a service right click and select Start, then select Edit Startup Type and choose Automatic.

Log out of the web client and log back in. You should now see the Auto Deploy icon on the home page.


Enable TFTP

Now that Auto Deploy is enabled we can configure the TFTP server. Enable SSH on the VCSA by browsing to the Appliance Management page: https://VCSA:5480, where VCSA is the IP or FQDN of your appliance.

Log in as the root account. From the Access page enable SSH Login and Bash Shell.


SSH onto the vCenter Appliance using a client such as PuTTY, and log in with the root account. First type shell and hit enter to launch Bash.

To start the TFTP service enter service atftpd start. Check the service is started using service atftpd status.


To allow TFTP traffic through the firewall on port 69, run iptables -A port_filter -p udp -m udp --dport 69 -j ACCEPT. Validate traffic is being accepted over port 69 using iptables -nL | grep 69.


The TFTP server will now work; however, we need to make a couple of additional changes to make the configuration persistent after the VCSA is rebooted. There isn't an official VMware way of doing this, and as it's done in Linux there may be more than one way of achieving what we want. Basically I am going to back up iptables and create a script to restore iptables and start the TFTP service when the appliance boots. The steps are outlined below and this worked for me; however, as a reminder, this is not supported by VMware, and if you are a Linux expert you'll probably find a better way round it.

The following commands are all run in Bash on the vCenter Server Appliance; you can stay in the existing session we were using above.

First make a copy of the existing iptables config by running iptables-save > /etc/iptables.rules.

Next change directory by running cd /etc/init.d, and create a new script: vi scriptname.sh, for example vi starttftp.sh.

Press I to begin typing. I used the following, which was copied from the Image Builder Service startup script, and modified for TFTP.

#! /bin/sh
#
# TFTP Start/Stop the TFTP service and allow port 69
#
# chkconfig: 345 80 05
# description: atftpd

### BEGIN INIT INFO
# Provides: atftpd
# Required-Start: $local_fs $remote_fs $network
# Required-Stop:
# Default-Start: 3 5
# Default-Stop: 0 1 2 6
# Description: TFTP
### END INIT INFO

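# Start the TFTP daemon and restore the firewall rules saved earlier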
service atftpd start
iptables-restore -c < /etc/iptables.rules

The file must be in the above format to be compatible with chkconfig, which runs the script at startup. I left the defaults in from the Image Builder Service as it made sense that they started at the same time and had the same dependencies. If you wish to modify further see the following sources: Bash commands, Script Options, Startup, and vmwarebits.com for the iptables commands.

Press escape to leave the editor and :wq to save the file and quit.

Next set execute permissions on the script by running chmod +x scriptname.sh, for example: chmod +x starttftp.sh.

To set the script to run at startup use chkconfig --add scriptname.sh, for example: chkconfig --add starttftp.sh.

Reboot the vCenter appliance to test that the script runs. If successful, the atftpd service will be started and port 69 allowed; you can check these with service atftpd status and iptables -nL | grep 69.

Close the session and disable SSH if required.

Configure DHCP

In this example I will be using PXE boot to boot the ESXi hosts using a DHCP reservation. On the DHCP scope that will be serving the hosts I have configured reservations and enabled options 066 and 067. Option 066 (Boot Server Host Name) is set to the IP address or FQDN of the vCenter Server where TFTP is running. Option 067 (Bootfile Name) is set to the BIOS DHCP file name (undionly.kpxe.vmw-hardwired).
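
If you are using the Windows DHCP server role, a sketch of the equivalent configuration with the DhcpServer PowerShell module (the scope ID and TFTP server address below are placeholders):

Set-DhcpServerv4OptionValue -ScopeId 192.168.0.0 -OptionId 66 -Value "192.168.0.50"
Set-DhcpServerv4OptionValue -ScopeId 192.168.0.0 -OptionId 67 -Value "undionly.kpxe.vmw-hardwired"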


Now that Auto Deploy is up and running using the built-in components of VCSA 6.5, you can begin creating ESXi images and deploy rules to boot hosts using the Auto Deploy GUI. See the VMware Auto Deploy 6.x Guide.

VMware Auto Deploy 6.x Guide

Auto Deploy has some great new features in vSphere 6.5, including a hugely improved and feature-rich GUI built into the web client. The Auto Deploy Guide has been updated for vSphere 6.5, including a walk-through of the Auto Deploy GUI and step-by-step creation of a custom ESXi image with deploy rules to boot ESXi hosts.


What is Auto Deploy?

VMware Auto Deploy works with Host Profiles to provision and customise your ESXi estate. When a host is deployed using Auto Deploy the state information is loaded into memory upon boot; it is not permanently stored on the physical host by default. In theory this means there is no requirement for local storage; however, we can configure stateless caching or stateful installs for use with a local drive if required.

What are the benefits?

After the initial setup you'll benefit from quick and easy ESXi deployments, patching, and driver updates; provisioning of hosts on a mass scale all running the same version of code; huge savings in configuration time for new hosts; and a more reliable and stable environment.

What do I need to make it work?

  • An Active Directory DNS, DHCP infrastructure already in place, you’ll need to enable DHCP options 66 and 67…
