In Part 1 we looked at the overall design of the environment and the official documentation, as well as preparing the management and control planes by installing NSX Manager and an NSX Controller cluster. In this post we will focus on installing components in the data plane: preparing the hosts for NSX, configuring VXLAN, VTEP interfaces and Transport Zones, and creating a logical switch. To complete the routing configuration, Part 3 will walk through NSX Edge and Distributed Logical Routers.
- Host preparation involves the installation of NSX kernel modules upon ESXi hosts in a vSphere cluster.
- If you are using stateless environments then you should update the Auto Deploy image with the NSX VIBs so they are not lost on reboot.
Log into the vSphere web client and select Networking & Security. From the left-hand navigator pane click Installation and select the Host Preparation tab. Highlight the cluster you want to prepare for NSX, click Actions, and from the drop-down menu click Install.
Click Yes to confirm the install, and the NSX kernel modules will be pushed out to the hosts in the selected cluster. The installation status will change to Installing, and then to a green tick with the version number.
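You can also confirm from the command line that the modules landed on each host; a quick check over SSH once preparation completes:

```shell
# List the NSX kernel module VIBs installed on an ESXi host
# (run over SSH on the host itself after host preparation finishes)
esxcli software vib list | grep -E 'esx-(vdpi|vsip|vxlan)'
```

Each of the three VIBs should appear with its version and install date; a missing entry indicates the preparation did not complete on that host.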
To download a zip file containing the VIBs for manual install, or for updating an ESXi image, browse to https://NSX/bin/vdn/nwfabric.properties, where NSX is the IP address or FQDN of the NSX Manager, and locate the VIB URL for your version of ESXi. Opening the relevant URL will download vxlan.zip. For assistance with updating Auto Deploy images see the VMware Auto Deploy 6.x Guide.
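The same file can be fetched from the command line rather than a browser; a minimal sketch, where the NSX Manager FQDN is a placeholder for your environment:

```shell
# Fetch the properties file listing per-ESXi-version VIB download URLs
# (nsxmanager.lab.local is a placeholder for your NSX Manager FQDN;
#  -k skips certificate validation if the Manager uses a self-signed cert)
curl -k https://nsxmanager.lab.local/bin/vdn/nwfabric.properties
```

The output lists the VIB URL per ESXi version; download the one matching your hosts to obtain vxlan.zip.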
Host preparation should normally be controlled via the web UI. If you do need to manually install the modules you can download the VIBs as outlined above and upload them to a vSphere datastore. Open an SSH connection to the host using a client such as PuTTY and run the following command, replacing datastore with the name of the datastore:
esxcli software vib install -d /vmfs/volumes/datastore/vxlan.zip

For example:

esxcli software vib install -d /vmfs/volumes/Datastore01/vxlan.zip
You can also copy the package to a local directory on the ESXi host, such as /tmp, using WinSCP, and run the install from there, changing the file path accordingly. Update Manager is another option. The vxlan.zip package contains the esx-vdpi, esx-vsip, and esx-vxlan VIBs. Check the VIBs installed on an ESXi host by running esxcli software vib list.
For more information on installing kernel modules using CLI see this post.
- VXLAN Tunnel Endpoints are required for ESXi hosts to talk to each other across the physical network.
- Tunnel Endpoints are configured on each host using a VMkernel interface, which requires an IP Pool and an existing distributed switch. New distributed port groups are added to the distributed switch.
- The Tunnel Endpoint interface encapsulates the L2 data packet with a VXLAN header and sends this on to the transport network, as well as removing the VXLAN header at the other end.
Staying within the Host Preparation tab, in the VXLAN column click Not Configured for each cluster. Select the distributed switch and VLAN to use for the VTEP interfaces. Ensure the MTU size of the VTEP configuration and the underlying network is at least 1600. Select the IP addressing option (you can create a new IP Pool from the Use IP Pool drop-down menu). Specify the VMkernel NIC teaming policy and click Ok.
The VMkernel interfaces will now be configured on the specified distributed switch. Once complete, the VXLAN configuration of the cluster will show Configured with a green tick.
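The new VTEP interfaces can also be verified from the host command line; they are created on a dedicated vxlan TCP/IP stack (the interface name below is just an example):

```shell
# List VMkernel interfaces bound to the dedicated vxlan TCP/IP stack
esxcli network ip interface list --netstack=vxlan

# Show the IPv4 address assigned to the VTEP interface
# (vmk4 is an example name - use the interface returned above)
esxcli network ip interface ipv4 get -i vmk4
```

The IP addresses shown should fall within the IP Pool (or DHCP range) specified during VXLAN configuration.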
Tip: you can test the network is correctly configured with 1600 MTU by pinging between VTEP interfaces on different hosts with an increased packet size:
Browse to one of the ESXi hosts in the vSphere web client and click Manage, Networking, VMkernel Adapters. Locate the NIC(s) in use for VXLAN and make a note of the IP addresses; these are the VXLAN Tunnel Endpoints. For example, host 1 may have vmk4 configured for VXLAN with IP 192.168.30.11, and host 2 may have vmk4 configured for VXLAN with IP 192.168.30.12. In this case we can SSH onto host 1 and run the following command:
ping ++netstack=vxlan -d -s 1572 -I <vmk> <IP>
where <vmk> is the VMkernel interface to use, in this case vmk4, and <IP> is the IP address to ping, in this case the VTEP interface of host 2 (192.168.30.12).
ping ++netstack=vxlan -d -s 1572 -I vmk4 192.168.30.12
If the ping is successful then we know the MTU is set correctly, since the command specifies a packet size of 1572 (1572 plus the 28 byte header overhead = 1600). If the packet is dropped then we can try reducing the packet size to 1472:
ping ++netstack=vxlan -d -s 1472 -I vmk4 192.168.30.12

(again, 1472 plus the 28 byte overhead = 1500). If the smaller packet is successful but the larger packet is dropped then we know the MTU is not set correctly.
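The arithmetic behind the two packet sizes is simple, and can be restated as a small sketch: the ping payload is always the target MTU minus the 28 bytes of IP and ICMP headers.

```shell
# ICMP echo payload size = target MTU minus 28 bytes of headers
# (20-byte IP header + 8-byte ICMP header)
payload_for_mtu() {
  echo $(( $1 - 28 ))
}

payload_for_mtu 1600   # 1572 - only succeeds if 1600 MTU is set end to end
payload_for_mtu 1500   # 1472 - succeeds on a standard 1500 MTU network
```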
VXLAN Network Identifiers
- For NSX Manager to isolate network traffic we must configure a Segment ID Pool.
- Each VXLAN is assigned a unique network identifier from the Segment ID Pool.
- When sizing the pool use a small subset of the available range; do not use more than 10,000 Segment IDs per vCenter.
- If you have multiple NSX Managers in an environment make sure the pools do not overlap.
Switch to the Logical Network Preparation tab and click Segment ID. Click Edit to create the Segment ID Pool. Enter a range within 5000-16777215, for example 5000-5999, and click Ok. (We are not using Multicast addressing in this deployment; however, if required you can select the tick box and enter the Multicast address range here.)
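As a sanity check on pool sizing, the constraints above can be expressed as a small sketch (the range values are just examples):

```shell
# Validate a proposed Segment ID pool against the sizing rules:
#  - must sit within the VXLAN range 5000-16777215
#  - should not exceed 10,000 Segment IDs per vCenter
check_segment_pool() {
  local start=$1 end=$2
  local size=$(( end - start + 1 ))
  if [ "$start" -lt 5000 ] || [ "$end" -gt 16777215 ]; then
    echo "invalid: outside 5000-16777215"
  elif [ "$size" -gt 10000 ]; then
    echo "invalid: more than 10000 IDs"
  else
    echo "ok: $size IDs"
  fi
}

check_segment_pool 5000 5999   # a 1000-ID pool, well within the limits
```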
- Transport zones control which hosts a logical switch can reach.
- Virtual machines must be in the same transport zone to be able to connect to one another.
- Multiple transport zones can be configured to suit your requirements; however, a single global transport zone is typically sufficient.
Still under the Logical Network Preparation tab, select Transport Zones. Click the green plus symbol to add a new zone. In this example we will use Unicast mode, allowing the NSX Controllers to handle the control plane. There are further network and IP requirements if you want to use Multicast or Hybrid modes. Add the clusters you want your VXLAN networks to span to the transport zone and click Ok.
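The resulting configuration can also be inspected through the NSX Manager REST API; transport zones are exposed as "scopes". A minimal sketch, with a placeholder hostname and credentials:

```shell
# List configured transport zones (vdnScopes) from NSX Manager
# (nsxmanager.lab.local and admin:password are placeholders for your environment)
curl -k -u admin:password https://nsxmanager.lab.local/api/2.0/vdn/scopes
```

The XML response includes each transport zone's object ID (for example vdnscope-1), its control plane mode, and the clusters it spans.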
- A logical switch reproduces switching functionality, decoupled from the underlying hardware.
- Logical switches are assigned a Segment ID from the Segment ID Pool, similar to the concept of VLAN IDs.
- Virtual machines connected to the same logical switch can connect to one another over VXLAN.
From the left hand navigation pane locate Logical Switches.
Click the green plus symbol to add a new logical switch. Enter a name and description; the Transport Zone we created earlier will automatically be selected. Tick Unicast mode and click Ok. Enable IP Discovery is enabled by default; this option minimises ARP traffic flooding within the logical switch.
Now the logical switch is created we will see the new port group listed under the relevant distributed switch. Port groups created as NSX logical switches start with vxw-dvs- and include -virtualwire- in the name. Virtual machines can be added to a logical switch using the Add Virtual Machine icon, or by selecting the port group in the traditional Edit Settings option directly on the VM.
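Logical switch creation can also be scripted against the NSX Manager REST API, where logical switches are "virtual wires" created within a transport zone scope. A hedged sketch; the hostname, credentials, switch name and scope ID are all placeholders (the scope ID for your transport zone can be found under /api/2.0/vdn/scopes):

```shell
# Create a logical switch (virtual wire) in transport zone vdnscope-1
# (nsxmanager.lab.local, admin:password, LS-Web and vdnscope-1 are placeholders)
curl -k -u admin:password -X POST \
  -H 'Content-Type: application/xml' \
  -d '<virtualWireCreateSpec>
        <name>LS-Web</name>
        <description>Example logical switch</description>
        <tenantId>virtual wire tenant</tenantId>
      </virtualWireCreateSpec>' \
  https://nsxmanager.lab.local/api/2.0/vdn/scopes/vdnscope-1/virtualwires
```

On success the API returns the new virtual wire ID, and the corresponding port group appears on the distributed switch just as it does when created through the web client.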
Virtual machines added to a logical switch at this stage only have connectivity with each other. To create VXLAN subnets, route traffic between different logical switches, and route traffic outside of VXLAN subnets, we need an Edge Services Gateway and a Distributed Logical Router. Both of these components will be covered in Part 3.