Category Archives: NSX

NSX 6.4.1 Upgrade Guide

This post walks through upgrading to NSX 6.4.1. If you are upgrading from 6.4.0 the new Upgrade Coordinator feature can be used, allowing simultaneous upgrade planning of multiple NSX objects; see the NSX 6.4.x Upgrade Coordinator post for more information. If you are upgrading from a version earlier than 6.4.0 the steps outlined below apply. When performing an upgrade the NSX components must be upgraded in the following order: NSX Manager, NSX Controllers, Host Clusters, NSX Edge, Service Virtual Machines (such as Guest Introspection).

Review the operational impacts of NSX upgrades for each component here when planning your upgrade; it is best practice to limit all operations in the environment until the upgrade is complete. Make sure NSX Manager is backed up before starting an upgrade, and be aware that after a successful upgrade NSX cannot be downgraded. You should also review the VMware NSX for vSphere 6.4.1 Release Notes here and the NSX for vSphere Documentation Center here.

Requirements

Requirements specific to NSX 6.4.1 are listed below. As we are doing an upgrade the assumption is that the vSphere and NSX environment is already set up and working; you can validate the existing NSX configuration here. You should also ensure the underlying network provides IP connectivity with an MTU size of 1600 or above, and that FQDN resolution, connectivity, and time synchronisation between NSX and vSphere components, syslog, monitoring, and backups are all in place. In addition review the basic system requirements for NSX here and the full list of network port requirements here.

  • NSX 6.4.1 is compatible with vSphere versions 6.0 U2 and above. Note that if you are using 6.0 then U3 is recommended, the minimum supported version of 6.5 is 6.5a, and support for 5.5 has been removed
  • Supported upgrade paths to NSX 6.4.1 are from 6.2.4 onwards; a workaround for upgrading from 6.2.0, 6.2.1, or 6.2.2 can be found here
  • Review the VMware Upgrade Path page here and also fully review the NSX 6.4.1 Release Notes here, as there are a number of things to be aware of when upgrading from 6.2.x or 6.3.x
  • Check compatibility with VMware products using the VMware Interoperability page here
  • Check compatibility with other third party products such as partner services for Guest Introspection using the VMware Compatibility Guide here
  • Before starting the upgrade make sure existing appliances meet the recommended hardware requirements:
    • NSX Manager 16 GB RAM (24 GB for large deployments), 4 vCPU (8 vCPU for large deployments), and 60 GB disk; a large deployment is typically 256+ hosts or 2000+ VMs
    • NSX Controllers 4 GB RAM, 4 vCPU, and 28 GB disk
    • NSX Edge Compact: 512 MB RAM, 1 vCPU, 584 MB + 512 MB disks. Large: 1 GB RAM, 2 vCPU, 584 MB + 512 MB disks. Quad Large: 2 GB RAM, 4 vCPU, 584 MB + 512 MB disks. X-Large: 8 GB RAM, 6 vCPU, 584 MB + 2 GB + 256 MB disks.
  • Verify the existing NSX Manager has sufficient space by connecting to the CLI (the SSH service may need to be started from the Summary page of the NSX Manager appliance) and running show filesystems (a short example follows this list)
  • Maximum latency between NSX components and NSX and vSphere components should be 150 ms RTT or below
  • NSX Data Security is no longer supported, it should be removed if installed prior to the upgrade
  • If you are using Cross-vCenter NSX then each component should be upgraded in the order listed here
  • Enabling DRS on the vSphere cluster allows running VMs to be automatically migrated when each host is placed into maintenance mode for the NSX VIB upgrades. This process can of course be undertaken manually if DRS is not in use
  • A completed upgrade can be validated following the steps listed here
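
As a quick pre-check for the space and latency requirements above, the commands below are a minimal sketch; the NSX Manager FQDN is a placeholder and should be replaced with your own.

# NSX Manager CLI (SSH as admin) - confirm there is sufficient free space before uploading the upgrade bundle
show filesystems

# From a host or jump box with connectivity to NSX Manager - confirm round-trip latency is 150 ms or below
ping nsxmanager.lab.local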

Backups

Before we start, take a backup of the vCenter Server and NSX Manager. NSX configuration can be backed up using FTP/SFTP, see this post for more information. From version 6.4.1 a configuration backup is automatically taken at the start of the upgrade process; this is intended as a fall back and you should still take your own backup before beginning. You can also take a snapshot of the NSX Manager in case the NSX Manager upgrade needs to be rolled back. For extra peace of mind export the vSphere Distributed Switch configuration by following the instructions here.

In the event you do need to restore from an NSX backup a new appliance should be deployed and the configuration restored, click here for further details.
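
As a sketch, an on-demand configuration backup can also be triggered through the NSX Manager appliance management API once the FTP/SFTP backup settings are in place. The endpoint paths below should be confirmed against the NSX for vSphere API guide for your version; the hostname and credentials are placeholders.

# Trigger an on-demand NSX Manager configuration backup (FTP/SFTP backup settings must already be configured)
curl -k -u 'admin:Password123!' -X POST https://nsxmanager.lab.local/api/1.0/appliance-management/backuprestore/backup

# List the backup files available on the configured FTP/SFTP server
curl -k -u 'admin:Password123!' https://nsxmanager.lab.local/api/1.0/appliance-management/backuprestore/backups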

Upgrade Process

As noted above make sure you have read all the linked documentation, specifically the release notes and operational impacts for each component upgrade. The steps below will not list the operational impact for each step of the upgrade.

Download the NSX for vSphere 6.4.1 Upgrade Bundle from the download page here to a location accessible from the NSX Manager. Browse to the NSX Manager and log in as admin. From the home page click Upgrade.

Click Upload Bundle and browse to the upgrade bundle downloaded earlier, then click Continue. Once the bundle is uploaded you can optionally select to enable SSH and/or join the Customer Experience Improvement Program. Click Upgrade to start the upgrade.

NSX64_1

The installer will now upgrade NSX Manager; once complete you will be returned to the login page.

NSX64_2

Log back into NSX Manager and click Upgrade. Verify the upgrade state is complete and the version number is correct. Click Summary and verify the health of the NSX Manager.
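
If you prefer to check the version programmatically, the NSX Manager appliance management API can be queried as sketched below; the hostname and credentials are placeholders and the endpoint should be verified against the API guide for your version.

# Return NSX Manager version and build information
curl -k -u 'admin:Password123!' https://nsxmanager.lab.local/api/1.0/appliance-management/global/info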

NSX64_3

Log into the vSphere Client; if you were already logged in then log out and back in, or you may need to clear your browser cache. From the Menu drop-down select Networking and Security.

Before upgrading any other components we need to upgrade the NSX Controller Cluster. On the Dashboard tab confirm there are 3 controller nodes, all connected; the upgrade cannot commence if any nodes are in a disconnected state.

NSX64_5

Click Installation and Upgrade and select the Management and NSX Managers tab. Check the NSX Manager version is correct, then in the Controller Cluster Status column click Upgrade Available.

NSX641_1

Each controller is upgraded and rebooted one at a time. From NSX 6.3.3 onwards the underlying operating system of the controller nodes changed to Photon OS. If you are upgrading from 6.3.3 or later an in-place upgrade is applied. If you are upgrading from 6.3.2 or earlier the controller nodes are redeployed, and any DRS anti-affinity rules are lost and will need to be reapplied.

Click Yes to begin the Controller Cluster upgrade.

NSX641_2

Monitor the status in the NSX Controller Nodes tab. After all the controller nodes have been upgraded validate that the Status, Peers, and Upgrade Status are all green. Confirm the Software Version is correct.
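
The controller state can also be confirmed from the NSX Manager CLI (see the CLI reference further down this page); a minimal check is sketched below.

# NSX Manager CLI - confirm all three controller nodes are connected and running the expected version
show controller list all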

NSX641_3

Next we can upgrade the host NSX VIBs; click the Host Preparation tab. Clusters running NSX are displayed and upgrades are initiated on a per cluster basis. Select the cluster and click Upgrade to begin the upgrade.

Hosts running NSX 6.2.x require a reboot for the installation of new VIBs, hosts running NSX 6.3.0 and above do not need a reboot but must be placed into maintenance mode. You can either manually place hosts into maintenance mode and vMotion / power off VMs yourself, or allow DRS to live migrate VMs and remediate hosts one at a time.

NSX641_4

Click Yes to commence the cluster upgrade.

NSX641_5

At this stage if hosts are not in maintenance mode the NSX Installation will show Not Ready. If you have DRS enabled on the cluster click Actions and Resolve All; this will automatically vMotion running machines off a host, place it into maintenance mode, update the VIBs, and exit maintenance mode, one host at a time. Alternatively you can select individual hosts and click Resolve if you want to control the order of the upgrades.

NSX641_6

Monitor the status of the NSX Installations in the Hosts table. You can also monitor Recent Tasks to make sure a host is not taking too long to enter maintenance mode; if a host cannot be evacuated due to DRS rules, or a VM cannot be migrated, then manual intervention may be required (in this case see here).

If you are using stateless images with Auto Deploy you should also update your ESXi image with the latest NSX VIBs or they will be lost at next reboot, for guidance see this post.
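
To double check an individual host after remediation, the NSX VIB and kernel module versions can be verified from the ESXi shell; a minimal sketch using the commands referenced elsewhere in this page is shown below.

# List the installed NSX VIBs and confirm the version matches the 6.4.1 bundle
esxcli software vib list | grep esx

# Confirm the NSX kernel modules are loaded
vmkload_mod -l | grep nsx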

NSX641_7

The next step is to upgrade NSX Edges. Before commencing, validate that the status of all NSX prepared hosts is green and that they show as successfully upgraded to the correct version. During Edge upgrades a replacement appliance is deployed, which means 2 appliances (or 4 if running in HA mode) are powered on at the same time, so ensure your cluster has sufficient compute resource.

NSX641_8

At the time of writing (v6.4.1) NSX Edges still need to be upgraded using the vSphere web client. Log into the vSphere web client and click Networking & Security, then NSX Edges; deployed Edges are displayed. If you have multiple NSX Managers ensure the correct NSX Manager is selected in the drop-down. Select the NSX Edge to upgrade and from the Actions menu click Upgrade Version.

NSX641_9

The upgraded version will be deployed from OVF, you can follow the progress in the Recent Tasks pane and also the Status column for the Edge. Repeat this process for each Edge Services Gateway (ESG) and Distributed Logical Router (DLR) you wish to upgrade.
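
The deployed Edge versions can also be checked in bulk against the NSX API; the sketch below assumes a placeholder hostname and credentials, and the endpoint should be confirmed against the API guide for your version.

# List all NSX Edges, including the version reported for each ESG and DLR
curl -k -u 'admin:Password123!' https://nsxmanager.lab.local/api/4.0/edges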

NSX641_10

The final stage is to upgrade Guest Introspection. This can either be done in the vSphere web client or by going back into the HTML5 vSphere Client. From the Menu drop-down select Networking and Security, click Installation and Upgrade, and select the Service Deployment tab. Existing service deployments are listed, and the Installation Status for Guest Introspection shows Upgrade Available. Select the Guest Introspection deployment and click Upgrade; once complete verify the Installation Status and Service Status are both green and the version number is correct.

NSX641_11

After all NSX components are upgraded, if you want to follow additional verification steps see the upgrade validation KB here, or the post upgrade tasks listed here. You should take a further backup of NSX Manager after completion of the upgrade. Any third party appliances for Guest Introspection or Network Introspection that require an update can now be upgraded.

NSX 6.4.x Upgrade Coordinator

This post will walk through an upgrade to NSX 6.4.1 using the new Upgrade Coordinator feature, which allows simultaneous upgrade planning of multiple NSX components. If you are upgrading from an earlier version of NSX, see the NSX 6.4.1 Upgrade Guide post for details on upgrading individual components. From version 6.4 onwards upgrade plans can be used to upgrade host clusters, controller clusters, Edge Services Gateways (ESGs), Distributed Logical Routers – including Universal (DLRs and UDLRs) – and Service Virtual Machines such as Guest Introspection. Upgrade plans consist of either a one click system managed upgrade, or planning your own upgrade where objects and options can be customised.

Review the operational impacts of NSX upgrades for each component here when planning your upgrade; it is best practice to limit all operations in the environment until the upgrade is complete. Make sure NSX Manager is backed up before starting an upgrade, and be aware that after a successful upgrade NSX cannot be downgraded. You should also review the VMware NSX for vSphere 6.4.1 Release Notes here and the NSX for vSphere Documentation Center here.

Requirements

Requirements specific to NSX 6.4.1 are listed below. As we are doing an upgrade the assumption is that the vSphere and NSX environment is already set up and working; you can validate the existing NSX configuration here. You should also ensure the underlying network provides IP connectivity with an MTU size of 1600 or above, and that FQDN resolution, connectivity, and time synchronisation between NSX and vSphere components, syslog, monitoring, and backups are all in place. In addition review the basic system requirements for NSX here and the full list of network port requirements here.

  • NSX 6.4.1 is compatible with vSphere versions 6.0 U2 and above. Note that if you are using 6.0 then U3 is recommended, the minimum supported version of 6.5 is 6.5a, and support for 5.5 has been removed
  • Supported upgrade paths to NSX 6.4.1 are from 6.2.4 onwards; a workaround for upgrading from 6.2.0, 6.2.1, or 6.2.2 can be found here
  • Review the VMware Upgrade Path page here and also fully review the NSX 6.4.1 Release Notes here, as there are a number of things to be aware of when upgrading from 6.2.x or 6.3.x
  • Check compatibility with VMware products using the VMware Interoperability page here
  • Check compatibility with other third party products such as partner services for Guest Introspection using the VMware Compatibility Guide here
  • Before starting the upgrade make sure existing appliances meet the recommended hardware requirements:
    • NSX Manager 16 GB RAM (24 GB for large deployments), 4 vCPU (8 vCPU for large deployments), and 60 GB disk; a large deployment is typically 256+ hosts or 2000+ VMs
    • NSX Controllers 4 GB RAM, 4 vCPU, and 28 GB disk
    • NSX Edge Compact: 512 MB RAM, 1 vCPU, 584 MB + 512 MB disks. Large: 1 GB RAM, 2 vCPU, 584 MB + 512 MB disks. Quad Large: 2 GB RAM, 4 vCPU, 584 MB + 512 MB disks. X-Large: 8 GB RAM, 6 vCPU, 584 MB + 2 GB + 256 MB disks.
  • Verify the existing NSX Manager has sufficient space by connecting to the CLI (the SSH service may need to be started from the Summary page of the NSX Manager appliance) and running show filesystems
  • Maximum latency between NSX components and NSX and vSphere components should be 150 ms RTT or below
  • NSX Data Security is no longer supported, it should be removed if installed prior to the upgrade
  • If you are using Cross-vCenter NSX then each component should be upgraded in the order listed here
  • Enabling DRS on the vSphere cluster allows running VMs to be automatically migrated when each host is placed into maintenance mode for the NSX VIB upgrades. This process can of course be undertaken manually if DRS is not in use
  • A completed upgrade can be validated following the steps listed here

Backups

Before we start, take a backup of the vCenter Server and NSX Manager. NSX configuration can be backed up using FTP/SFTP, see this post for more information. From version 6.4.1 a configuration backup is automatically taken at the start of the upgrade process; this is intended as a fall back and you should still take your own backup before beginning. You can also take a snapshot of the NSX Manager in case the NSX Manager upgrade needs to be rolled back. For extra peace of mind export the vSphere Distributed Switch configuration by following the instructions here.

In the event you do need to restore from an NSX backup a new appliance should be deployed and the configuration restored, click here for further details.

Upgrade Process

As noted above make sure you have read all the linked documentation, specifically the release notes and operational impacts for each component upgrade. The steps below will not list the operational impact for each step of the upgrade.

Download the NSX for vSphere 6.4.1 Upgrade Bundle from the download page here to a location accessible from the NSX Manager. Browse to the NSX Manager and log in as admin. From the home page click Upgrade.

Click Upload Bundle and browse to the upgrade bundle downloaded earlier, then click Continue. Once the bundle is uploaded you can optionally select to enable SSH and/or join the Customer Experience Improvement Program. Click Upgrade to start the upgrade.

NSX64_1

The installer will now upgrade NSX Manager; once complete you will be returned to the login page.

NSX64_2

Log back into NSX Manager and click Upgrade. Verify the upgrade state is complete and the version number is correct. Click Summary and verify the health of the NSX Manager.

NSX64_3

Log into the vSphere Client; if you were already logged in then log out and back in, or you may need to clear your browser cache. From the Menu drop-down select Networking and Security.

For any upgrade plan the NSX Controller Cluster upgrade is mandatory and performed first. On the Dashboard tab confirm there are 3 controller nodes, all connected; the upgrade cannot commence if any nodes are in a disconnected state.

NSX64_5

Click Installation and Upgrade and select the Upgrade tab. Review the components, any warnings, and current and target version details.

NSX64_4

To start an upgrade plan click Plan Upgrade.

Upgrade Coordinator puts objects of the same type in default upgrade groups when planning an upgrade. These groups and other settings can be modified by planning your own upgrade (controller upgrades are mandatory) or you can allow the system to upgrade everything using a one click upgrade. Select the desired upgrade plan and click Next.

NSX64_7

The default options for the one click upgrade are to upgrade Host Clusters and Service VMs individually (serial), and to upgrade NSX Edges all together (parallel). There is no pause between components or pause on error. If you are happy with these settings then click Start Upgrade to begin the upgrade process, otherwise go back to Plan Your Upgrade.

NSX64_8

Select your own upgrade to choose which components are upgraded, controller upgrades are mandatory and are done first. You can also pause the upgrade between components or pause the upgrade if an error is returned.

NSX64_9

The next 3 pages of the Upgrade Coordinator allow you to manage upgrade groups for Host Clusters, NSX Edges, and Service VMs. When planning your upgrade take into consideration the following:

  • Objects of the same type can be added to or removed from an upgrade group
  • The order of object upgrades within a group can be changed
  • All components included in an upgrade group must be upgraded before the next component type can be upgraded, e.g. all hosts included in an upgrade plan must be upgraded before moving onto Edges, and so on
  • Excluding an object within an upgrade group is useful for multiple maintenance windows, where you want to add an object to an upgrade plan but exclude it from this upgrade session
  • If the upgrade order within a group is set to Serial then each object is upgraded one at a time; if it is Parallel then multiple objects within that group are upgraded at the same time

Controller Upgrades: each controller is upgraded and rebooted one at a time. From NSX 6.3.3 onwards the underlying operating system of the controller nodes changed to Photon OS. If you are upgrading from 6.3.3 or later an in-place upgrade is applied. If you are upgrading from 6.3.2 or earlier the controller nodes are redeployed, and any DRS anti-affinity rules are lost and will need to be reapplied.

Host Upgrades: hosts running NSX 6.2.x require a reboot for the installation of new VIBs; hosts running NSX 6.3.0 and above do not need a reboot but must be placed into maintenance mode. You can either manually place hosts into maintenance mode and vMotion / power off VMs yourself, or allow DRS to live migrate VMs and remediate hosts one at a time. Monitor the status of the NSX Installations on the Upgrade tab. You can also monitor Recent Tasks to make sure a host is not taking too long to enter maintenance mode; if a host cannot be evacuated due to DRS rules, or a VM cannot be migrated, then manual intervention may be required (in this case see here).

If you are using stateless images with Auto Deploy you should also update your ESXi image with the latest NSX VIBs or they will be lost at next reboot, for guidance see this post.

NSX64_10

Configure your upgrade plan based on the components you want to upgrade in this session, and review the final plan. When you're ready click Start Upgrade to begin the upgrade process.

NSX64_13

Monitor the status of the upgrade on the Upgrade page. If any warnings or errors are displayed during the upgrade process see the Monitor and Troubleshoot Your Upgrade page here. If you selected Pause between components you must Resume or Replan after each stage of the upgrade.

nsx64_14

An in-progress upgrade plan can still be paused to make modifications; when paused the object currently being upgraded will continue and the upgrade plan pauses when this object upgrade succeeds or fails.

nsx64_15

After the upgrade is complete verify the Upgrade page shows the system upgrade status successful.

nsx64_16

Verify the NSX health from the Dashboard page. After all NSX components are upgraded, if you want to follow additional verification steps see the upgrade validation KB here, or the post upgrade tasks listed here. You should take a further backup of NSX Manager after completion of the upgrade. Any third party appliances for Guest Introspection or Network Introspection that require an update can now be upgraded.

Setting Manual DFW Override for NSX Restore

The recommended restore process for NSX Manager is to deploy a new OVA of the same version and restore the configuration. After a recent failed upgrade we needed to restore NSX Manager, so we deployed a new OVA with the same network settings. After the new NSX Manager was powered on we were unable to ping its IP address; this was because there were no default rules allowing access to the VM, and since the existing NSX Manager was down we were unable to connect to the UI or API to add the required firewall rules. NSX Manager is normally excluded from the Distributed Firewall (DFW) by default, however at this point the hosts saw it as any other VM, since we had not yet restored the configuration. Therefore we needed to add a manual override to clear the filters applied to the new NSX Manager, allowing us to connect and restore the configuration. The following commands were run over SSH on the host where the new NSX Manager OVA was deployed. For further guidance on the backup and restore process of NSX see the NSX Backup and Restore post.

Disclaimer: the steps below are advanced commands using vsipfwcli, which is an extremely powerful tool. You should engage VMware GSS if doing this on anything other than a lab environment, and you should understand the impact of stopping the vsfwd service on the host and the effect this may have on any other VMs with a DFW policy of fail closed.

net-stats -l lists the NIC details of the VMs running on the host, verify the new NSX Manager is present.

/etc/init.d/vShield-Stateful-Firewall stop stops the vsfwd user world agent, you can also use status to display the status.

vsfwd

summarize-dvfilter lists port and filter details, we need the port name for the VM, e.g. nic-38549-eth0-vmware-sfw.2.

DFW_1

vsipioctl getrules -f nic-38549-eth0-vmware-sfw.2 lists the existing filters applied to the port, replace the port name with your own, from the output check to confirm the ruleset name, e.g. ruleset domain-c17.

DFW_2

vsipioctl vsipfwcli -f nic-38549-eth0-vmware-sfw.2 -c "create ruleset domain-c17;" creates a new empty ruleset with the same name, overriding the previous ruleset applied to the port. Replace the port name with your own and the ruleset name if it is different.

vsipioctl getrules -f nic-38549-eth0-vmware-sfw.2 again lists the existing filters applied to the port, the ruleset should now be empty as no filters are applied.

DFW_3

The NSX Manager is now pinging and the normal restore process can resume; connect to the web interface by browsing to the IP address or FQDN of the NSX Manager.

Restore_NSX_1

Select Backup & Restore.

Restore_NSX_2

Select the appropriate restore point and click Restore. Click Yes to confirm.

Restore_NSX_3

The restore generally takes 5-10 minutes; once complete you will see a restore completed successfully message in a blue banner on the Summary page. You may need to log out and log back in after the config is restored.

Restore_NSX_4

Once the NSX Manager services have started you can manage the DFW from the vSphere web client as normal. Remember to start the vsfwd service again on the host; after the vsfwd service is started the empty ruleset we created earlier is replaced with the original ruleset when the host syncs with NSX Manager.

/etc/init.d/vShield-Stateful-Firewall start starts the vsfwd user world agent, you can also use status to display the status.

CLI Reference for Troubleshooting NSX

Quick post documenting some useful CLI commands for troubleshooting NSX, mainly for my own reference. Other useful information can be found at NSX CLI Cheat Sheet and NSX for vSphere Command Line Interface Reference.

ESXi Hosts

Open an SSH session to an ESXi host. The SSH service can be started from the Configure, System, Security Profile page in the vSphere web client, or under Manage, Services when logging into the host UI.

ESXi_SSH

  • esxcli software vib list displays installed vibs, add | grep esx to filter.
  • vmkload_mod -l displays the loaded kernel modules, add | grep nsx to filter; the nsx-vdl2, nsx-vdrb, and nsx-vsip kernel modules should be loaded.
  • /etc/init.d/vShield-Stateful-Firewall status displays the status of user world agent vsfwd which connects the host to NSX Manager.
  • /etc/init.d/netcpad status displays the status of user world agent netcpa which connects the host to the controller cluster.

ESXi_SSH_1

  • tail -f /var/log/netcpa.log tails the user world agent netcpa log.
  • Note – to change the logging level for netcpa execute the following commands on the ESXi host:
    • chmod +wt /usr/lib/vmware/netcpa/etc/netcpa.xml gives write permissions to the file.
    • vi /usr/lib/vmware/netcpa/etc/netcpa.xml opens the file in an editor. Find <level>info</level>, press insert to edit the line and replace info with verbose. Press escape twice and enter :wq to save the file and quit.
    • /etc/init.d/netcpad restart restarts netcpad.

ESXi_SSH_3.png

  • esxcfg-advcfg -g /UserVars/RmqIpAddress lists the IP address of the registered NSX Manager.
  • esxcli network ip connection list lists active TCP/IP connections, add | grep 5671 to filter port 5671 used to connect to NSX Manager.
  • ping ++netstack=vxlan -d -s 1572 -I <VMK> <VTEP> can be used to ping a VTEP IP address using an increased packet size, where <VMK> is the VMkernel adapter to use on the source host, and <VTEP> is the destination VTEP IP address to ping.
    • For example ping ++netstack=vxlan -d -s 1572 -I vmk4 192.168.30.12
    • If the ping comes back successful then we know the MTU is set correctly, since the command specifies a packet size of 1572 and there is a 28 byte overhead (1572 + 28 = 1600). If the ping drops the packet then try reducing the packet size to 1472: ping ++netstack=vxlan -d -s 1472 -I <VMK> <VTEP> (again, 1472 + 28 byte overhead = 1500). If the smaller ping packet is successful but the larger packet is dropped then we know there is an MTU mismatch.
  • pktcap-uw can be used for packet capturing, full syntax here.
  • esxtop is a useful host troubleshooting tool, type n to switch to network view.

ESXi_SSH_2

NSX Manager Appliance

Open an SSH session to the NSX Manager. The SSH service can be started from the Summary page of the NSX Manager Virtual Appliance Management page.

Enable_SSH

  • show interface displays information for the NSX Manager management interface.
  • show ip route displays NSX Manager route information.
  • show filesystems displays NSX Manager file system capacity.
  • show log manager follow follows the NSX Manager log.

NSX_SSH_1

  • show controller list all displays the controller nodes status.
  • show cluster all displays vSphere clusters managed by the vCenter Server.
  • show logical-switch list all displays all logical switch information.
  • show logical-switch controller master vni 5001 connection displays the hosts connected to segment ID 5001; replace connection with vtep, mac, or arp for other outputs.
  • show logical-router list all displays all distributed logical router information.

Deploying an NSX Load Balancer with vRA

In this post we will walk through the process of deploying an NSX Load Balancer using vRealize Automation. We will also cover high availability and post deployment scaling. In order to take advantage of the direct NSX API integration with vRA you will need to be running at least v7.3; read more about the enhancements made in vRA 7.3 in the release notes or what's new. In the example we'll work towards, multiple web servers are provisioned with an On-Demand Load Balancer, along with app servers and a database server. The On-Demand Load Balancer deploys an NSX Edge for load balancing and adds the web servers as pool members. There are a number of available customisations which we'll cover in the configuration process below.

Blueprint_2

Adding Endpoints

The following process assumes that you have a fully deployed vRA topology with all the components required to provision virtual machines: vCenter endpoint(s), reservations, compute resources, and a published catalog with entitlements. It would also be beneficial to have an understanding of using an NSX Edge for load balancing, or to have deployed an Edge manually to see the corresponding deployment options.

The first step is to add the NSX Manager as a vRA endpoint. From the Infrastructure tab select Endpoints and Endpoints again. Click New and select Networking and Security, NSX. Enter the details for the NSX Manager. Before adding the NSX endpoint we can create an association with the registered vCenter Server. From the Associations tab, click New. Select the vCenter Server from the dropdown, the platform type will auto-populate to vSphere and the description vSphere to NSX Association. Click Test Connection and then Ok to save the configuration.

NSX_Endpoint

Blueprint Modifications

After NSX has been added as an endpoint navigate to Blueprints under the Design tab. From the design canvas of a new or existing blueprint select Network & Security, drag and drop the On-Demand Load Balancer onto the canvas.

Blueprint_1

Click the On-Demand Load Balancer that has been added to the canvas. When the load balancer is provisioned in NSX the servers associated with the load balancer in the blueprint are automatically added as members in the pool. This is set in the Member field, in the example below the web servers in the blueprint are added as members of the load balancer.

Blueprint_3

The network for the member servers and the network for the VIP address are configured in the appropriate fields. Leave the IP address blank to automatically assign an IP address from the associated VIP network. Under Virtual servers click New, here you can configure the protocol settings for the load balancer, and the algorithm/persistence, health check, and connection settings by selecting Customize.

Customize_LB

Before saving the blueprint click the settings cog at the top of the page, this opens the blueprint properties. From the NSX Settings tab set the Transport zone to attach the load balancer to, this can be a local or universal transport zone. Next select the Edge and routed gateway reservation policy, this is the reservation policy (compute, storage) that will be used when provisioning the edge.

Blueprint_Properties_1

Click the Properties tab and select Custom Properties. There are a number of optional parameters we can add here, as shown in the example after the list below.

  • NSX.Edge.ApplianceSize sets the appliance size of the edge, accepted values are compact, large, quadlarge and xlarge.
  • NSX.Edge.HighAvailability deploys the edge appliance in HA mode when the value is true. Without this property only a single appliance is deployed.
  • NSX.Edge.HighAvailability.PortGroup references the port group to use for the heartbeat network of the edge appliances deployed in HA mode.
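
As an illustrative example only (the values, in particular the port group name, are hypothetical and should be adjusted for your environment), the custom properties on the blueprint might look like this:

NSX.Edge.ApplianceSize = large
NSX.Edge.HighAvailability = true
NSX.Edge.HighAvailability.PortGroup = dvPG-Edge-HA-Heartbeat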

Blueprint_Properties_2

Click Ok and Finish to save the blueprint. Make the blueprint available as a catalog item and request a test deployment. In vSphere you will see the edge and VMs being provisioned and, once complete, the virtual machines will be added as members in the load balancer pool. You can view the settings of the deployed edge in the vSphere web client under Networking & Security, NSX Edges, double click the edge and select Load Balancer.

NSX_Load_Balancer

Post Deployment

When the deployment is destroyed the edge appliances are removed along with the VMs as part of the cleanup process. If the deployment is scaled out then the new server is added as a member to the existing load balancer pool, likewise if the deployment is scaled in then the server deleted is also removed from the pool.

Scale_Out

The scale in and scale out actions are assigned as entitled actions from within the relevant entitlement. As well as having the permissions to perform the scale actions, the blueprint must also contain a higher number of maximum instances. In the example below 2 web servers will be deployed with an On-Demand Load Balancer; as the maximum number of instances is set to 10, the requester can scale out the number of web servers and pool members to a maximum of 10 servers.

Blueprint_Scale

Configuring VMware Cross-vCenter NSX

This post provides an overview of cross-vCenter NSX and walks through the configuration steps. Cross-vCenter NSX allows central management of network virtualization and security policies across multiple vCenter Server systems. Cross vCenter NSX introduces universal objects; such as universal logical switches, universal logical routers, and universal distributed firewall rules. Universal objects are able to span multiple sites or vCenter Server instances, enhancing workload mobility by allowing cross vCenter and long distance vMotion for virtual machines, whilst keeping the same network settings and firewall rules. This improves DR capabilities, overcomes scale limits of vCenter Server, and gives administrators more control over resource pooling and the separation of environments.

Cross-vCenter NSX was introduced in NSX v6.2 and requires vSphere v6.0 or later. As normal, NSX Manager is deployed with vCenter Server in a 1:1 pairing. In a single site NSX deployment the NSX Manager is given the standalone role by default. When configuring cross-vCenter NSX one NSX Manager is assigned the primary role, and up to seven other NSX Managers are assigned the secondary role. NSX Managers configured for cross-vCenter NSX must all be running the same version. The primary NSX Manager is responsible for deploying the Universal Controller Cluster, forming the control plane across the NSX Managers. The Universal Controller Cluster runs in the site of the primary NSX Manager. Universal objects are created on the primary NSX Manager and automatically synchronized across the multi-site NSX environment.

Configuring Cross-vCenter NSX

The steps below assume you have already deployed and registered the NSX Managers, and have a good understanding of NSX. This post is intended as an add-on to the NSX Install Guide, providing an outline of the additional or different steps required for a cross-vCenter NSX install; further resources are listed at the bottom of the page. If you are using vCenter enhanced linked mode then multiple NSX Manager instances are displayed within the same interface, or single pane of glass, when managing the Network & Security section of the vSphere web client. Enhanced linked mode is not a requirement for cross-vCenter NSX however, and vCenter Server systems not in enhanced linked mode can still be configured for cross-vCenter NSX.

From the Networking & Security page of the vSphere web client select Installation, highlight the NSX Manager in the primary site, from the Actions menu select Assign Primary Role.

NSX_Promote

The secondary NSX Manager(s) synchronize with the primary using the Universal Synchronization Service. These sites do not run any NSX Controllers, although the controllers can be redeployed easily in the event of a primary site outage. Before assigning the secondary role you should ensure there are no existing NSX Controllers deployed in the associated vCenter. If you have already assigned a segment ID pool to the NSX Managers then ensure the segment ID pools do not overlap. Select the primary NSX Manager and from the Actions menu click Add Secondary NSX Manager. Enter the secondary NSX Manager information and admin password.

NSX2

Review the table of NSX Managers, the roles have now changed accordingly.

NSX_Roles

The universal controller cluster is formed by individually deploying the NSX Controllers from the primary NSX Manager; the method of deploying the controllers is the same (see NSX Install Guide Part 1 – Mgmt and Control Planes for further assistance). Once the controllers are deployed you will notice placeholder controllers listed against the secondary NSX Manager; these are not connected or deployed. In the event of a site failure the configuration is synchronized between NSX Managers so you can simply re-deploy the controllers in the DR site. To see the failover process review this blog post. VMware recommend deploying 3 controllers on different hosts with anti-affinity rules.

NSX_Controllers

The next part of the install process is to follow the host preparation and VXLAN configuration steps as normal (see NSX Install Guide Part 2 – Data Plane for further assistance). Create the segment ID pools for each NSX Manager, making sure they do not overlap. On the primary NSX Manager you will also assign a universal segment ID pool.
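
As a hypothetical illustration of non-overlapping pools (the ranges below are examples only and should be planned for your own environment):

Site A (primary NSX Manager)    - local segment ID pool:     5000-5999
Site B (secondary NSX Manager)  - local segment ID pool:     6000-6999
Primary NSX Manager             - universal segment ID pool: 10000-10999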

In order for us to deploy universal logical switches we need to create a universal transport zone. A universal transport zone determines which hosts a universal logical switch can reach, spanning multiple vCenters. From the Logical Network Preparation tab open Transport Zones, ensure the primary NSX Manager is selected and click the plus symbol. Select Mark this object for Universal Synchronization, and enter the configuration for the universal transport zone. All universal objects must be created on the primary NSX Manager, change the NSX Manager to the secondary site and you will see the universal transport zone has synchronized there also.

NSX_TZ

Next we will create a universal logical switch for the transit network. Local objects such as logical switches, logical routers, and Edge Services Gateways can still be deployed from each NSX Manager, although by design they are only local to the vCenter linked to that specific NSX Manager, and cannot be deployed or edited elsewhere. From the left hand navigation pane in Networking & Security select Logical Switches, ensure the primary NSX Manager is selected and click the plus symbol. Enter a name for the transit network and select the universal transport zone we created earlier.

NSX_Universal_Transit

At this stage you can also deploy another universal logical switch, connecting a couple of test VMs on a private subnet, and have them ping one another to confirm connectivity. Now that we have a transit network and test universal logical switches connected to our universal transport zone we can go ahead and create a universal DLR. In this particular environment we have already deployed an ESG in each site. For further assistance with deploying an ESG and DLR see NSX Install Guide Part 3 – Edge and DLR.

From the Networking & Security page click NSX Edges, ensure the primary NSX Manager is selected and click the plus symbol. The control VM for the DLR is deployed to the primary site, again the configuration is synchronized and this can be re-deployed to the DR site in the event of a primary site outage. Select Universal Logical Router and follow the wizard as normal, if local egress is required then check the appropriate box. Sites configured in a cross-vCenter NSX environment can use the same physical routers for egress traffic, or have the local egress feature enabled within a universal logical router. The local egress feature allows routes to be customized at host, cluster, or router level.

NSX_UDLR

From the NSX Edges page double click the new universal DLR, select Manage, Settings, Interfaces and click the add button. In order for traffic to route from the universal DLR to the ESG(s) we must add an uplink interface connecting them to the universal transit network. Change the logical router interface to Uplink, in the Connected To field select the transit network universal logical switch we created earlier. Configure the IP and MTU settings of the interface per your own environment.

NSX_UDLR_Interface

You can also add Internal interfaces here corresponding with universal logical switches for virtual machine subnets. Before these subnets can route out follow the same process to add an Internal interface to the ESG(s) connecting them to the same transit network.

A virtual machine connected to the test universal logical switch can now vMotion between sites keeping the same IP addressing, providing L2 over L3 capability. As well as remaining on the same logical network, a virtual machine can also be migrated across sites without any additional firewall rules; this is achieved with the use of universal firewall rules. Universal firewall rules require a dedicated section to be created under the Firewall section of Networking & Security, where you must select Mark this section for Universal Synchronization. For assistance with creating universal firewall rules see here.

NSX_Universal_Firewall

Additional Resources

To plan a cross-vCenter NSX installation review the VMware Cross-vCenter NSX Design Guide, Cross-vCenter NSX Topologies Guide, and the VMware Cross-vCenter Installation Guide.

VM Security Tags with NSX Firewall

This post will walk through virtual machine security tags; how we can create tags, automatically add virtual machines with tags to a specific security group, and build associated NSX firewall rules. As a bonus we’ll also apply a security tag to a vRA blueprint, allowing vRA provisioned machines to automatically receive a security tag and apply any corresponding NSX firewall rules.

Security tags and groups allow us to identify virtual machines with a common value, such as business department, support group, workloads, and so on. By applying security tags to virtual machines, and/or adding virtual machines to security groups, we can control security at a custom defined level, independent of the underlying infrastructure. Virtual machines can have multiple tags, allowing administrators to identify different values upon which to act. Many third party anti-virus solutions with NSX integration use security tags to protect and quarantine virtual machines depending on their health status.

The steps below assume that NSX is installed and working, for more information on installing the required components see the following series of posts.

NSX Install Guide Part 1 – Mgmt and Control Planes

NSX Install Guide Part 2 – Data Plane

NSX Install Guide Part 3 – Edge and DLR

Creating Security Tags and Groups

From the vSphere web client browse to Networking & Security, click NSX Managers, and select the appropriate NSX Manager. Open the Manage tab, then Security Tags. Existing security tags are listed; some third party plugins, such as AV solutions, may also add and use security tags. To create a new security tag click the New Security Tag icon. Enter a name for the security tag, and a description if required, then click Ok.

Tags1

Security tags can be applied to virtual machines manually in the page referenced above, or through an automatic provisioning solution such as vRealize Automation.
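
Tags can also be applied programmatically through the NSX API, which is effectively what provisioning tools do under the covers. The sketch below uses a placeholder hostname, credentials, tag ID, and VM managed object ID, and the endpoint paths should be confirmed against the NSX for vSphere API guide for your version.

# List existing security tags and note the objectId of the tag to apply (e.g. securitytag-10)
curl -k -u 'admin:Password123!' https://nsxmanager.lab.local/api/2.0/services/securitytags/tag

# Attach the tag to a virtual machine using its vCenter managed object ID (e.g. vm-123)
curl -k -u 'admin:Password123!' -X PUT https://nsxmanager.lab.local/api/2.0/services/securitytags/tag/securitytag-10/vm/vm-123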

Next, select the Grouping Objects page. Under Security Groups, click the Add New Security Group icon. Enter a name and description if required.

Group1

Go to the third option: Select objects to include. Change the Object Type in the drop-down to Security Tag. Select the new security tag we created earlier.

Group2

Review the details on the summary page and click Finish.

Group3

The security group has now been created, and any virtual machines that use the security tag we included are automatically added to the group. You can create multiple security tags and groups for different departments, applications, or however you want to segregate these out.

Group4

Creating NSX Firewall Rules

Our new security group / tag setup can be used to configure NSX firewall rules. Still under Networking & Security in the vSphere web client, select Firewall.

If you have already configured NSX firewall rules you’ll be familiar with this page, and likely have a number of sections and rules already configured. You can edit an existing rule or create a new one in the relevant section. To create a new section use the Add Section (folder) icon. Click the green plus icon to add a new rule, or the edit icon to edit an existing rule.

Firewall1

When configuring the rule you can set the source, destination, or both to use a security group. Change the Object Type drop-down to Security Group, and select the new security group we created earlier.

Firewall2

Remember to click Publish Changes when you’re done. For assistance with creating NSX firewall rules see this section of the NSX Documentation Center.

Adding Tags to vRA Blueprints

The use of security tags with blueprints requires NSX to be integrated with vRA. If you haven’t already done so you can follow the steps outlined in the VMware post Part 1 of Integrating NSX with vRealize Automation. You’ll also need an understanding of how to create blueprints, again there is more information on this in the VMware post Part 2 of Integrating NSX with vRealize Automation if you need it.

To add a security tag to a virtual machine provisioned by vRA we must add it to the appropriate blueprint. After adding security tags and/or groups to NSX Manager we need to run a data collection so that vRA is showing up to date information. From the vRA portal browse to the Infrastructure tab, select Compute Resources, Compute Resources. Move the mouse cursor over the compute resource and click Data Collection. Scroll down to Network and Security Inventory and click Request now. The sync will take a couple of minutes; you can leave the page during this time.

vra1

Next open the Design tab and select Blueprints. We can add a security tag to an existing blueprint, or create a new one. In the design canvas click Network & Security from the list of categories. Locate Existing Security Tag and drag this onto the canvas. Alternatively you can use a security group at this stage if you’d prefer.

vra2

Select the security tag from the list of existing tags. From the design canvas select the virtual machine and open the Security tab. Tick the referenced security tag to associate it with the virtual machine. Click Save and Finish to save the changes to the blueprint. Any virtual machines provisioned from this blueprint are now tagged with the security tag (or group) selected.