Category Archives: NSX

McAfee MOVE 4.5.0 Upgrade Guide with NSX

This post walks through the upgrade of McAfee MOVE to version 4.5.0 with NSX Manager, and can be used when upgrading McAfee MOVE Agentless versions 3.5.x, 3.6.x, and 4.0.0. The upgrade of versions 3.5.x or 3.6.x involves migrating all custom settings, policies and tasks with the McAfee MOVE Migration Assistant (these are retained by default when upgrading from version 4.0.0).

Pre-Requisites

The benefits and architecture of offloading AV to a dedicated Service Virtual Machine (SVM) with McAfee MOVE and NSX are covered in the McAfee MOVE with NSX Install Guide. The scope of this guide is to upgrade an existing McAfee MOVE installation, and as such it is assumed that NSX Manager, IP Pools, service deployments (i.e. Guest Introspection), policies, and ePO integration are all in place. Furthermore it is assumed that network connectivity between components, time sync, DNS, vSphere access, etc. are also configured. For a full list of prerequisites see the above install guide. The requirements below are specific to the McAfee MOVE 4.5 upgrade:

2

Update Extensions

The first step is to update the extensions on the ePO server. When upgrading from versions 3.5.x or 3.6.x the existing extensions are left in place to facilitate the migration of data, which we’ll cover later. When upgrading from version 4.0.0 the extensions are replaced with the new versions; all settings and policies remain.

I am going to use Software Manager to download, install, and check in the software directly in the ePO web UI. If you prefer, you can manually download the extensions on your own machine and then install them through the Extensions page (more info on this below). To use Software Manager click the drop down Menu option in the top left hand corner of the page and select Software Manager. Use the search function to find McAfee MOVE AntiVirus 4.5. Browse through the components and you will notice the Migration Assistant is included; click Check In All.

migration1

Accept the license agreement and click Ok. The extensions are downloaded and installed.

migration2

An alternative way of installing or updating extensions is to browse to McAfee Downloads, enter your grant number when prompted, and then select McAfee MOVE AV for Virtual Servers, McAfee MOVE AntiVirus 4.5. Download the required files and then browse to the web interface of the ePO server (https://EPO:8443/ where EPO is the name of your ePO server). Log in as an administrator and click the drop down Menu option in the top left hand corner of the page. Locate Software, and select Extensions. Click Install Extension and install the downloaded zip files in the following order:

  • Cloud Workload Discovery: Cloud_Workload_Discovery_Hybrid_4.5.0.zip (note that the CommonUI bundle; the mfs-commonui-core-ui, commonui-core-common, and commonui-core-rest extensions; is a pre-requisite for Cloud Workload Discovery 4.5 on ePO 5.1.3 and 5.3.1)
  • McAfee MOVE AntiVirus extension: MOVE-AV_Ext_4.5.0_Licensed.zip
  • Product Help extension: MOVE-AV_HELP_EXT_4.5.0.zip

Whichever way you install the extensions, ensure you download MOVE-AL-AL_SVM_OVF_4.5.0.148 (or the most recent version). This zip file contains the Service Virtual Machine (SVM), which we’ll need to add to the SVM repository later.

Once the extensions are installed the new version of MOVE AntiVirus will be visible in the Data Center Security group, under Menu > Software > Extensions.

migration6

For those upgrading from versions 3.5.x or 3.6.x, the old extensions remain in place in the MOVE AV group.

migration7

You will also notice an additional option in the Automation menu; MOVE AV Agentless remains as the legacy option for versions 3.5.x or 3.6.x, and MOVE AntiVirus Deployment is created for version 4.5.0. The legacy MOVE AV Agentless option is deleted when the old extensions are removed at the end of the process. Again, this doesn’t apply to 4.0.0, because in that case the extensions are upgraded in place rather than running side by side.

versions

Migration Assistant

The Migration Assistant can be used when upgrading from MOVE versions 3.5.x or 3.6.x; if you are upgrading from 4.0.0 then this step is not necessary. Use one of the methods outlined above to install the Migration Assistant extension. If you used Software Manager to install the full McAfee MOVE AntiVirus 4.5 package then the Migration Assistant should already be installed. If you need to manually download and install the extension from McAfee Downloads, change the Software Downloads tab to Extensions to view the Migration extension, as shown below.

migration

When the install is complete, in the ePO web UI click the drop down Menu option and, under Software, click Extensions. The MOVE Migration Assistant 4.5 is listed under Data Center Security.

migration4

We can now go ahead and run the Migration Assistant; from the drop down Menu, under Policy, select MOVE Migration Assistant.

migration5

Select Automatic migration to migrate all settings for supported products (note that unassigned policies are not migrated) and click Next. To select only certain policies, or to edit policies, you can use the Manual migration option; for more information see page 10 of the McAfee MOVE Migration Guide.

migration8

Review the items to be migrated; you can rename policies and edit the policy notes if required by clicking Rename and Edit Notes. When you’re ready to start migrating click Save.

migration9

Once the migration job has finished go back into the MOVE Migration Assistant, next to Migrate Agentless Deployment Configuration Details (Agentless Only) select Run, and click Next. Click Ok to confirm migrating configuration details.

migration10

When the config migration has completed click the drop down Menu option and under Automation select MOVE AntiVirus Deployment. You will see the SVM configuration and NSX registrations have all been migrated across.

Note that if you are upgrading from 3.5.x then the NSX certificate and credential data is migrated across; however, you still need to enter the SVM configuration under Menu, Automation, MOVE AntiVirus Deployment, Configuration, General.

Upgrade SVM Registration

Now we need to add version 4.5.0 of the Service Virtual Machine (SVM) to the SVM repository, and update the registered SVM version with NSX Manager. In the ePO web UI click Menu, under Automation select MOVE AntiVirus Deployment. From the Configuration tab select SVM Repository, click Actions, Add SVM. Browse to the zip file containing the SVM we downloaded earlier and click Ok.

svm1

The new version of the SVM will now be listed in the repository.

svm2

Next go to Menu, Automation, MOVE AntiVirus Deployment. In the Configuration tab the NSX Manager details and credentials should still be in place. Click the Service tab. The Registered SVM Version will still show the old version; from the Actions column for the NSX Manager click Upgrade. Select the new SVM version and click Ok. The latest version of the MOVE SVM is now registered with the selected NSX Manager.

Upgrade NSX Components

The final stage is to update the NSX security policy and service deployments. Log into the vSphere web client and click Networking & Security from the home page. Select Service Composer and then the Security Policies tab. As we’re upgrading an existing McAfee MOVE solution you should already have an AV related policy or policies configured; we need to reconfigure those to point at the new MOVE policies that were migrated across in ePO. Select the security policy to update and click the Edit icon.

editpolicy

Click Guest Introspection Services and select the existing guest introspection service, click the edit icon and make a note of the existing settings. Cancel out of the edit window and click the red cross to delete the guest introspection service. Click the green plus symbol to add a new service.

policy1

Enter a name for the service and ensure Apply is selected. Use the McAfee MOVE AV service and select the ePO policy from the Service Profile drop down. The state should be set to Enabled, and select Yes to enforce the policy. Use the same settings as the previous service if you like; the only difference will be the new service profile (ePO policy). Click Ok.

policy2

Select the Security Groups tab. Confirm that existing security groups are in place with the NSX security policy associated with the McAfee ePO policy applied. If needed you can select a group and click the apply policy icon to apply the security policy edited above to a security group.

policy3

Finally, we can update the Service Virtual Machines deployed on the ESXi hosts. From the left hand navigation pane select Installation and the Service Deployments tab. Existing installations will be listed here with an Upgrade Available status. Service deployments are installed at vSphere cluster level; select the vSphere cluster to upgrade and click the Upgrade icon.

moveupgrade

New versions of the SVM are pushed out to each ESXi host in the selected cluster, replacing the old versions using the same configuration details (datastore, port group, IP address range). Once complete the new version number is listed, the installation status is Succeeded, and the service status is Up.

movesuccess

If you upgraded version 3.5.x or 3.6.x you can remove the legacy MOVE extensions once you have updated the SVM registration and service deployments on each vCenter. In the ePO web UI open the Extensions page, locate the old version of the McAfee MOVE extension and click Remove.

If any of the components referenced above are not in place, or you need to deploy McAfee MOVE AV to a new vSphere cluster, see the McAfee MOVE with NSX Install Guide post. The only other thing worth noting is that I had a vCenter where the MOVE service registration was failing; I had to remove the MOVE service deployments and service definition from NSX Manager, remove the vCenter from cloud accounts in ePO, and then add it all back in as a new install, deploying the SVM as a fresh 4.5 install rather than an upgrade.

NSX Install Guide Part 3 – Edge and DLR

In the final installment of this 3 part guide we will configure the Edge Services Gateway (ESG) and Distributed Logical Router (DLR). The NSX installation and relevant logical switches must be in place before continuing, for further information see NSX Install Guide Part 1 – Mgmt and Control Planes and NSX Install Guide Part 2 – Data Plane. It is important to note that depending on your network configuration and NSX design, additional steps may be required to integrate with your chosen routing protocol.

First we will create an Edge Services Gateway, providing access to the physical network (north-south traffic), followed by a Distributed Logical Router, which will provide connectivity for virtual machines using different logical switches (east-west traffic). The DLR will connect to the ESG to provide external routing using a transit logical switch. The image below shows the topology of the described components (from the VMware Documentation Centre, which also provides more information on advanced features and routing configurations).

topology

Edge Services Gateway

  • The Edge Services Gateway allows virtual machines to route to external devices, in other words to access the physical network.
  • The ESG is deployed as a virtual appliance in 4 different sizes:
    • Compact: 512 MB RAM, 1 vCPU, 500 MB disk.
    • Large: 1 GB RAM, 2 vCPU, 500 MB disk + 512 MB disk.
    • Quad Large: 2 GB RAM, 4 vCPU, 500 MB disk + 512 MB disk.
    • X-Large: 8 GB RAM, 6 vCPU, 500 MB disk + 2 GB disk.
  • Each ESG can have a total of 10 interfaces (internal and uplinks).

From the left hand navigation pane select NSX Edges, and click the green plus symbol to create a new Edge. Select Edge Services Gateway. Assign a name that will be displayed in the vSphere inventory and click Next. The hostname will be displayed in the CLI but is an optional field (the Edge-ID will be displayed if no hostname is specified). Should you require HA, which deploys a secondary appliance, tick Enable High Availability.

esg1

Configure the admin password (must be at least 12 characters and meet the usual complexity requirements) and the logging level. You may want to enable SSH for troubleshooting purposes; this can also be enabled at a later date if required. Click Next.

esg2

Select the datacentre and appliance size as per the recommendation from VMware below:

The Large NSX Edge has more CPU, memory, and disk space than the Compact NSX Edge, and supports a larger number of concurrent SSL VPN-Plus users. The X-Large NSX Edge is suited for environments that have a load balancer with millions of concurrent sessions. The Quad Large NSX Edge is recommended for high throughput and requires a high connection rate.

Large should be OK for most environments; Compact shouldn’t be used for production. Click the green plus symbol to add an Edge appliance.

esg3

Configure the vSphere placement parameters and click Ok. If you are using HA then add a second appliance, using a different datastore. DRS rules will automatically be added to keep the 2 appliances apart. If you do not deploy any appliances then the ESG will be created in an offline mode, until appliances are deployed. When you have finished adding the Edge appliances click Next.

esg4

We must now add the Edge interfaces, click the green plus symbol.

esg5

Configure the NSX Edge interfaces:

  • Add the physically connected distributed port group (click Select) to an Uplink interface, and enter the network details of your physical router. This provides a route to the physical network for north-south traffic.
  • If you selected HA then at least one internal interface must be configured to use a logical switch for heartbeat traffic, change the type to Internal and leave the IP address table blank.
  • If you will be adding a Distributed Logical Router then add an internal interface to the TRANSIT logical switch where the DLR will also be attached. The subnets to be routed externally are added to the DLR later.
  • Lab only: if you are not using a Distributed Logical Router, i.e. in a very small lab environment, then add the subnets for external connectivity and their associated logical switches as internal interfaces (east-west traffic).

esg6

When the required interfaces have been added click Next. Depending on your routing configuration you may need to add a default gateway; click Next.

esg7

Tick Configure Firewall default policy and set the default traffic policy to Accept, enabling logging if required. The firewall policy can be changed or configured later if required; however, if you do not configure the firewall policy the default policy is set to deny all traffic.

If you have deployed HA each appliance will be assigned a link local IP address on the heartbeat network we created earlier. You can manually override these settings if required in the Configure HA parameters section, otherwise leave the defaults and click Next.

esg8

On the summary page click Finish to finalise the installation. The ESG will now be deployed and the details listed on the NSX Edges page; note the type is NSX Edge. If you used HA then two ESG appliances will be deployed. You’ll notice in the vSphere inventory the virtual machine names end with -0 and -1; -0 is the active ESG appliance by default until a failover occurs.

edges1

Once an Edge is deployed you can add or change the existing configuration, such as interfaces, by double clicking the Edge. Depending on your design and network configuration additional routing settings may be required; these can be found under Manage, Routing.

Routing.PNG

Distributed Logical Router

  • A Distributed Logical Router allows connectivity between virtual machines using different logical switches.
  • Distributed Routing allows for communication between virtual machines on different subnets, on the same host, without the need to leave the hypervisor level.
  • The DLR control VM sits in the control plane; routing itself is performed by kernel modules running within the hypervisor on each host, which are kept up to date by the NSX Controllers.

From the left hand navigation pane select NSX Edges, and click the green plus symbol to create a new Edge. Select Logical (Distributed) Router. Enter a name, which will appear in the vSphere inventory. If required you can also enter a hostname (which will appear in the CLI), a description, and a tenant. An Edge Appliance is deployed by default; this is needed unless you are only using static routes. For dynamic routing and production environments Enable High Availability should also be selected, which deploys a standby virtual appliance. Click Next to continue.

dlr1

Configure the local admin password (minimum 12 characters plus the usual complexity requirements); it may also be worthwhile enabling SSH for future troubleshooting purposes. Note the logging level and change if required, otherwise click Next.

dlr2

If you chose to deploy an Edge appliance click the green plus symbol, select the vSphere options for the Edge appliance, and click Ok. Remember to add an additional appliance for HA using a different host and datastore, then click Next.

dlr3

If you are using HA connect the interface to a distributed port group by clicking Select next to the HA Interface Configuration connection box.

Under the Configure interfaces of this NSX Edge section click the green plus symbol to add an interface. Configure the interfaces as required: internal interfaces are for east-west (VM to VM) traffic, while uplinks are for north-south traffic and will typically connect to an external network through an Edge Services Gateway or third-party router VM. Uplink interfaces will appear as vNICs on the DLR appliance. Add the interfaces associated with all the relevant networks and subnets you want to be routable, and when you’re ready click Next.

dlr4

In this installation I have created three internal interfaces connected to their own dedicated logical switches; WEB, APP, and DB, configured with different subnets. Furthermore an uplink interface connected to the TRANSIT logical switch will be created; this will provide the link to the ESG for external routing.

web

Depending on your routing configuration you may need to add a default gateway (usually the ESG); in my case the ESG will publish the default route via our routing protocol. Click Next, then Finish. The DLR control VM will now be deployed and the details listed on the NSX Edges page; note the type is Logical Router. If you used HA then two VMs will be deployed. You’ll notice in the vSphere inventory the virtual machine names end with -0 and -1; -0 is the active control VM by default until a failover occurs.

edges2

Once a Logical Router is deployed you can add or change the existing configuration by double clicking the Logical Router. Depending on your design and network configuration additional routing settings may be required; these can be found under Manage, Routing. You will most likely need to add more subnets later on; this can be done under the Manage tab, then Settings, Interfaces. Click the green plus symbol and you will get the same Add Logical Router Interface wizard we have used above.

interfaces

Ensure any virtual machines connected to the logical switches have their default gateway set to the DLR interface IP address. Virtual machines using logical switches now have connectivity through the DLR, despite being attached to different logical switches, and are able to route out to the physical network through the ESG.
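To sanity check the routing configuration you can log on to the ESG and DLR control VM consoles (or SSH, if enabled) with the admin account. A couple of basic verification commands are sketched below; the exact output varies by NSX version.

show interface
show ip route

The first command confirms the interfaces and IP addresses configured above; the second lists connected, static, and dynamically learned routes, which should include the subnets attached to the DLR and the default route published by the ESG.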

_______________

NSX Install Guide Part 1 – Management and Control Planes

NSX Install Guide Part 2 – Data Plane

NSX Install Guide Part 3 – Edge and DLR

NSX with Log Insight Integration

This post covers the steps required to configure NSX with Log Insight integration. The versions used are NSX 6.2.5 and Log Insight 4.0, for assistance with getting these products up and running see the NSX Install Guide and vRealize Log Insight Install Guide posts. Log Insight is available to NSX customers entitled to use v6.2.4 and above, at no extra cost. The Log Insight for NSX license allows for the collection of vSphere and NSX log data.

The first step is to install the NSX Content pack on the Log Insight instance, then we’ll configure NSX Manager, the NSX Controllers, and any NSX Edges to use Log Insight as a syslog server.

NSX Content Pack

Browse to the IP address or FQDN of the Log Insight appliance and log in as admin.

loginsight

Click the menu option in the top right hand corner of the page.

admin

If you need to configure vSphere integration click Administration and vSphere under the Integration menu on the left hand navigation pane. Enter the connection details of the vCenter Server. To configure only specific hosts to send logs to Log Insight click Advanced options. Test the connection and when you’re ready click Save.

vsphereint

To install the NSX Content Pack select Content Packs from the menu option in the right hand corner of the page. Under Marketplace locate the VMware NSX-vSphere Content Pack.

contentpacks

Select the content pack, accept the license agreement and click Install.

contentpacksinstall

The next message informs you to setup vSphere Integration, which we covered above, and log forwarding for the NSX Manager, Controllers, and Edge components, which we’ll cover next. Click Ok.

contentpacksinstall2

The NSX Content Pack gives us additional dashboards accessible by clicking the drop down menu next to General on the Dashboards page. We won’t see any data there yet, as we need to configure the NSX components to use syslog.

nsxcontent

NSX Manager

Browse to the IP address or FQDN of the NSX Manager and login as admin.

nsxmanager

Click Manage Appliance Settings.

log1

From the General tab locate Syslog Server and click Edit.

log2

Enter the syslog server name or IP address and use port 514 protocol UDP. Click Ok to save the settings.

log3

NSX Controllers

Configuration of a syslog server for NSX Controllers is done through an API call. For the initial configuration a REST client is required. In this example we’ll use Postman for Google Chrome. Download the Postman app from the Chrome Web Store. When you first open the app click skip to use without creating an account. On the Authorisation tab set the authorisation type to Basic Auth. Enter the admin username and password of the NSX Manager.

log7

Click the Headers tab, in the key field type Content-Type, in the value field type application/xml. (The Authorization key in the screenshot automatically generates after configuring authorisation).

headers

To view the configured syslog server of an NSX Controller enter the URL https://NSX/api/2.0/vdn/controller/controller-1/syslog, replacing NSX with the NSX Manager name; you can also change the controller ID if required (i.e. controller-2, controller-3, and so on). Ensure Get is selected and click Send; the syslog configuration is displayed in the Response field.

log7

To configure the syslog server change Get to Post in the drop down menu. Then click the Body tab and select raw. Enter the following text, replacing LOG with the correct syslog server.

<controllerSyslogServer>
  <syslogServer>LOG</syslogServer>
  <port>514</port>
  <protocol>UDP</protocol>
  <level>INFO</level>
</controllerSyslogServer>

Click Send. The new syslog server will be set. Change the controller-1 section of the URL to controller-2 and click Send to configure the same syslog server for controller-2, and again for controller-3. It is important that each NSX Controller is configured with the IP address of the Log Insight server. You can change Post to Get to view the syslog server configuration again once complete.
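If you prefer the command line to Postman, the same API calls can be made with curl. This is a minimal sketch; the NSX Manager name and the syslog IP address below are examples and should be replaced with your own values.

# View the current syslog configuration of controller-1 (prompts for the admin password)
curl -k -u admin https://nsxmgr.lab.local/api/2.0/vdn/controller/controller-1/syslog

# Set the syslog server for controller-1 (repeat for controller-2 and controller-3)
curl -k -u admin -X POST -H "Content-Type: application/xml" -d '<controllerSyslogServer><syslogServer>192.168.10.50</syslogServer><port>514</port><protocol>UDP</protocol><level>INFO</level></controllerSyslogServer>' https://nsxmgr.lab.local/api/2.0/vdn/controller/controller-1/syslog

The -k switch skips certificate validation, which is common in lab environments with self-signed certificates.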

NSX Edges

NSX Edge Service Gateways and Distributed Logical Routers can be configured for syslog in the vSphere web client. From the home page click Networking & Security, select NSX Edges.

log4

Double click the ESG or DLR and open the Manage tab, Settings, Configuration. In the Details pane next to Syslog servers click Change.

log5

Enter the syslog server name or IP, ensure the protocol is UDP and click Ok.

log6

The syslog configuration is now complete, after a few minutes you should see events start to appear in the Log Insight dashboards.

loginsightnsx

VMware NSX Manager Upgrade

This post covers the upgrade of NSX Manager only, and is not a full NSX environment upgrade guide. After NSX Manager is upgraded any other NSX components will also need upgrading. In this example NSX Manager is being used for offloaded AV, so we will also upgrade Guest Introspection. If other components are in use they should be upgraded in the order below; for more information see the NSX 6.4.1 Upgrade Guide post.

  • NSX Controllers
  • ESXi hosts in the prepared clusters
  • NSX Edges / Logical Distributed Routers
  • Guest Introspection, any third party service deployments

Pre-Requisites

  • Take a snapshot of the NSX Manager virtual machine
  • Take a backup of NSX Manager data using the built-in VMware NSX Backup and Restore feature
  • Check compatibility of other VMware products using the Product Interoperability Matrixes
  • Check the compatibility of any other products interacting with NSX, such as Anti-Virus
  • NSX Manager system requirements are 16 GB RAM and 4 vCPU (or 24 GB RAM and 8 vCPU for environments with over 256 hosts). Ensure the existing NSX Manager meets these requirements
  • Check your existing NSX Manager version in the upgrade path matrix here

The versions of NSX used in this post are 6.2.4 to 6.2.5, the appropriate release notes for NSX 6.2.5 can be found here. Note that NSX 6.2.x is not compatible with vSphere 6.5. NSX 6.2.x requires vSphere 5.5 to vSphere 6.0 U2. NSX 6.3.0 has been released and is compatible with vSphere 6.5a, download NSX 6.3 here, NSX 6.3 release notes here, NSX 6.3 what’s new here.

For more up to date versioning and information see the NSX 6.4.1 Upgrade Guide post

Upgrade Process

Download the NSX for vSphere Bundle from VMware: v6.2.5, v6.3.0.

nsxupgrade1

Browse to the IP address or FQDN of the NSX Manager and log in. From the home page click Upgrade.

nsxupgrade2

The current software version will be listed. In the top right hand corner click Upgrade.

nsxupgrade3

Click Browse and locate the .tar.gz bundle downloaded earlier, then click Continue.

nsxupgrade5

Answer the SSH and Customer Experience Improvement Program questions (note that enabling SSH isn’t an upgrade requirement) and click Upgrade.

nsxupgrade6

The upgrade will now commence; there is no status bar. You will be logged out of the NSX Manager once the upgrade is complete.

nsxupgrade7

Log back into the NSX Manager and verify the NSX Manager version in the top right hand corner. Return to the Upgrade page to verify the upgrade status now shows complete.

complete

Before going any further restart the vSphere web client service on the vCenter Server to update the plug-in. You may also need to restart your browser or clear the cache if the plug-in does not display correctly in the vSphere web client.

If you are using Guest Introspection then the ESX Agents must also be upgraded. Log into the vSphere web client and click Networking & Security.

vsphere1

From the left hand navigation pane select Installation and the Service Deployments tab. Existing Guest Introspection deployments will show a status of Upgrade Available. To upgrade Guest Introspection select the service deployment and click the blue upgrade arrow.

guestintrospection1

When prompted confirm the deployment options (datastore, port group, and IP addressing) and click Ok. New ESX Agents will be deployed and the old versions deleted. Once complete the installation status will change to Succeeded.

guestintrospection2

After NSX Manager is upgraded you may notice that the vApp details on the virtual machine’s Summary page still show the old version. You can manually change this by right clicking the virtual machine and selecting Edit Settings. Open vApp Options and scroll down to Authoring, then expand Product. Enter the correct version number and click Ok; the updated version will now be displayed in the vApp details.

vapp

VMware NSX Backup and Restore

This post will detail how to back up and restore NSX. NSX configuration and components configurable through the NSX Manager UI or API are included in the NSX Manager backup. This includes controller nodes, Edge configurations (Distributed Logical Router, Edge Services Gateway), firewall rules, etc., as well as all events and audit log tables. Virtual switches are included in the vCenter Server database backup. When planning a backup strategy for NSX consider the following:

  • NSX Manager backups can be taken on demand, or scheduled at hourly, daily, or weekly intervals.
  • Ensure that the vCenter Server (and database if external) are backed up and factored into the NSX backup schedule. For example if you need to restore the entire environment it is recommended that the NSX Manager backup is taken at the same time as vCenter Server.
  • The only method of backup and restore for NSX Manager is with FTP/SFTP. Using any other method to backup NSX components could result in errors when restoring objects and is not supported by VMware.
  • When restoring NSX Manager a new NSX Manager is deployed and the configuration restored. The restore process of NSX Manager is only compatible from an NSX Manager of the same version. Therefore it is important to backup NSX Manager both before and after upgrades.

NSX Manager Backup

As outlined above, NSX configuration is backed up using the NSX Manager. Open a web browser to the IP address or FQDN of your NSX Manager. Log in with the admin user.

nsx1

From the home page select Backup and Restore.

nsx2

Click Change next to the FTP Server Settings row.

backup1

Enter the details for the destination FTP/SFTP server, add a filename prefix for the backup files, and configure a pass phrase. Make a note of the pass phrase in a password safe since this will be needed for restores.

nsx3

Optional – next to Scheduling click Change to configure a backup schedule.

nsx4

Optional – next to Exclude click Change to exclude logs or events from the backup.

nsx5

Once the backup server is configured backups will run as scheduled, or click Backup to backup NSX Manager now.

backup

Click Start to confirm the backup job.

backup2

Completed backups will be listed in the Backup History table.

backup3

To remove a backup, delete the files from the FTP/SFTP server; the entry will be removed from the Backup History table when the browser page is refreshed.

files

NSX Manager Restore

To restore NSX configuration a new NSX Manager must first be deployed. While it is possible to restore NSX configuration to an existing NSX Manager, it is assumed that since a restore is required NSX Manager has failed, and it is best practice to deploy a new instance. For assistance with deploying NSX Manager see this post.

Ensure the old NSX Manager is powered off. Deploy a new NSX Manager instance and configure it with a management IP address (this will be temporary, all settings will be reverted back to the previous NSX Manager after the restore is complete).

Log into the newly deployed NSX Manager and select Backup and Restore.

nsx2

Click Change next to the FTP Server Settings row.

backup1

When configuring the FTP Server Settings ensure the same settings are configured as when the NSX Manager was backed up. This includes the hostname, username, password, backup directory, filename prefix, and pass phrase.

Select the backup from the Backup History table and click Restore.

restore1

Confirm the restore when prompted. NSX Manager is unavailable during the restore process.

restore2

When the NSX Manager is available log in, the summary page will display a System restore completed message.

NSX Install Guide Part 2 – Data Plane

In Part 1 we looked at the overall design of the environment and official documentation, as well as preparing the management and control planes by installing NSX Manager and an NSX Controller cluster. In this post we will focus on installing components in the data plane; preparing the hosts for NSX, configuring VXLAN, VTEP interfaces, Transport Zones, and creating a logical switch. To complete the routing configuration Part 3  will walk through NSX Edge and Distributed Logical Routers.

Host Preparation

  • Host preparation involves the installation of NSX kernel modules upon ESXi hosts in a vSphere cluster.
  • If you are using stateless environments then you should update the Auto Deploy image with the NSX VIBs so they are not lost on reboot.

Log into the vSphere web client and select Networking & Security. From the left hand navigator pane click Installation. Select the Host Preparation tab. Highlight the cluster you want to prepare for NSX and click Actions, from the drop down menu click Install.

hostdeployment

Click Yes to confirm the install, the NSX kernel modules will now be pushed out to the hosts in the selected cluster. The installation status will change to Installing, and then a green tick with the version number.

hostdeploymentcomplete

To download a zip file containing the VIBs for manual install or for updating an ESXi image, browse to https://NSX/bin/vdn/nwfabric.properties, where NSX is the IP address or FQDN of the NSX Manager, and locate the VIB URL for your version of ESXi. Open the relevant URL, which will automatically download vxlan.zip. For assistance with updating Auto Deploy images see the VMware Auto Deploy 6.x Guide.
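If you want to script the download instead of using a browser, a hedged curl sketch is shown below; the NSX Manager name is an example, and it is assumed the page accepts the NSX Manager admin account via basic authentication.

# List the VIB download URLs for each ESXi version (prompts for the admin password)
curl -k -u admin https://nsxmgr.lab.local/bin/vdn/nwfabric.properties

# Download the bundle from the VIB_PATH shown for your ESXi version (path below is illustrative only)
curl -k -O https://nsxmgr.lab.local/bin/vdn/vibs-6.2.5/6.0/vxlan.zip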

Host preparation should be controlled via the web UI. If you do need to manually install the modules you can download the VIBs as outlined above and upload the zip file to a vSphere datastore. Open an SSH connection to the host using a client such as Putty and run the following command, replacing datastore with the name of the datastore:

esxcli software vib install -d /vmfs/volumes/datastore/vxlan.zip

For example:

esxcli software vib install -d /vmfs/volumes/Datastore01/vxlan.zip

You can also copy the package to a local directory on the ESXi host, such as /tmp, using WinSCP and run the install from there, changing the file path accordingly. Update Manager is another option. The vxlan.zip package contains the esx-vdpi, esx-vsip, and esx-vxlan VIBs. Check the VIBs installed on an ESXi host by running esxcli software vib list.
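As a quick check that the modules made it onto a host, the filtered list below should return the three NSX VIBs (run from the ESXi shell); the grep pattern is just an example.

esxcli software vib list | grep -E 'esx-vdpi|esx-vsip|esx-vxlan'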

vibs

For more information on installing kernel modules using CLI see this post.

VTEP Interfaces

  • VXLAN Tunnel Endpoints are required for ESXi hosts to talk to each other across the physical network.
  • Tunnel Endpoints are configured on each host using a VMkernel interface, which requires an IP Pool and an existing distributed switch. New distributed port groups are added to the distributed switch.
  • The Tunnel Endpoint interface encapsulates the L2 data packet with a VXLAN header and sends this on to the transport network, as well as removing the VXLAN header at the other end.

Stay within the Host Preparation tab; in the VXLAN column for each cluster click Not Configured. Select the distributed switch and VLAN to use for the VTEP interfaces. Ensure the MTU size of the VTEP configuration and the underlying network is at least 1600. Select the IP addressing option (you can create a new IP Pool from the Use IP Pool drop down menu). Specify the VMkernel NIC teaming policy and click Ok.

vxlanconfigure

The VMkernel interfaces will now be configured on the specified distributed switch; once complete the VXLAN configuration of the cluster will show Configured with a green tick.

vxlan
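As an optional host-side check before the ping test below, the command below (run from the ESXi shell) lists the VMkernel NICs; the new VTEP vmk should show an MTU of 1600 and the vxlan netstack instance.

esxcfg-vmknic -l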

Tip: you can test the network is correctly configured with 1600 MTU by pinging between VTEP interfaces on different hosts with an increased packet size:

Browse to one of the ESXi hosts in the vSphere web client and click Manage, Networking, VMkernel Adapters. Locate the NIC(s) in use for VXLAN and make a note of the IP addresses, these are the VXLAN Tunnel Endpoints. For example on host 1 we may have vmk4 configured for VXLAN with IP 192.168.30.11, and host 2 with vmk4 configured for VXLAN with IP 192.168.30.12. In this case we can SSH onto host 1 and run the following command:

ping ++netstack=vxlan -d -s 1572 -I <vmk> <IP>

Where <vmk> is the VMkernel to use, in this case vmk4, and <IP> is the IP address to ping, in this case the VTEP interface of host 2 (192.168.30.12).

ping ++netstack=vxlan -d -s 1572 -I vmk4 192.168.30.12

If the ping comes back successful then we know the MTU is set correctly, since the command specifies a packet size of 1572 (plus the 28 byte overhead = 1600). If the ping drops the packet then we can try reducing the packet size to 1472: ping ++netstack=vxlan -d -s 1472 -I vmk4 192.168.30.12 (again plus the 28 byte overhead = 1500). If the smaller ping packet is successful but the larger packet is dropped then we know the MTU is not set correctly.
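If you have several hosts to validate, a simple loop from the ESXi shell saves some typing. This is a minimal sketch; vmk4 and the VTEP addresses are examples from this lab and should be replaced with your own.

for ip in 192.168.30.12 192.168.30.13 192.168.30.14; do
  echo "Testing VTEP $ip"
  ping ++netstack=vxlan -d -s 1572 -I vmk4 $ip
done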

VXLAN Network Identifiers

  • For NSX Manager to isolate network traffic we must configure a Segment ID Pool.
  • Each VXLAN is assigned a unique network identifier from the Segment ID Pool.
  • When sizing the pool use a small subset of the available range; do not use more than 10,000 segment IDs per vCenter.
  • If you have multiple NSX Managers in an environment make sure the pools do not overlap.

Switch to the Logical Network Preparation tab and click Segment ID. Click Edit to create the Segment ID Pool. Enter the range for the pool within 5000-16777215, for example 5000-5999, and click Ok. (We are not using Multicast addressing in this deployment, however if required you can select the tick box and enter the Multicast address range here).

segmentid

Transport Zone

  • Transport zones control which hosts a logical switch can reach.
  • Virtual machines must be in the same transport zone to be able to connect to one another.
  • Multiple transport zones can be configured as per your own requirements, however typically a single global transport zone is enough.

Still under the Logical Network Preparation tab, select Transport Zones. Click the green plus symbol to add a new zone. In this example we will be using Unicast mode, allowing the NSX Controllers to look after the control plane. There are further network and IP requirements if you want to use multicast or hybrid modes. Add the clusters you want your VXLAN networks to span to the transport zone, click Ok.

transportzone

Logical Switches

  • A logical switch reproduces switching functionality, decoupled from the underlying hardware.
  • Logical switches are assigned a Segment ID from the Segment ID Pool, similar to the concept of VLAN IDs.
  • Virtual machines connected to the same logical switch can connect to one another over VXLAN.

From the left hand navigation pane locate Logical Switches.

logicalswitches

Click the green plus symbol to add a new logical switch. Enter a name and description; the Transport Zone we created earlier will automatically be selected. Tick Unicast mode and click Ok. Enable IP Discovery is enabled by default; this option minimises ARP traffic flooding within the logical switch.

logicalswitch

Now the logical switch is created we will see the new port group listed under the relevant distributed switch. Port groups created as NSX logical switches start with vxw-dvs–virtualwire-. Virtual machines can be added to a logical switch using the Add Virtual Machine icon, or by selecting the port group in the traditional Edit Settings option direct on the VM.

editsettings

Virtual machines added to a logical switch at this stage only have connectivity with each other. To create VXLAN subnets, route traffic between different logical switches, and route traffic outside of VXLAN subnets, we need an Edge Services Gateway and Distributed Logical Router. Both of these components will be covered in Part 3.

_______________

NSX Install Guide Part 1 – Management and Control Planes

NSX Install Guide Part 2 – Data Plane

NSX Install Guide Part 3 – Edge and DLR

NSX Install Guide Part 1 – Mgmt and Control Planes

In this 3 part guide we will focus on the installation of NSX 6.2.5. Part 1 provides details of the deployment and official documentation, we’ll build the management and control planes by deploying NSX Manager and an NSX Controller cluster. Part 2 will walk through the data plane components; host preparation, VTEP / VXLAN configuration, transport zone, and logical switches. Finally in Part 3 we’ll create and configure NSX Edge and Distributed Logical Routers.

The latest NSX version is currently 6.4.1, see also the NSX 6.4.x Upgrade Coordinator and NSX 6.4.1 Upgrade Guide posts for more information.

In order to get NSX up and running we’ll need appropriate licensing and resources:

NSX can run on any edition of vSphere from v5.5 Update 3 onwards. For versions 5.5 Update 1 and 2 Enterprise Plus licensing is required. NSX comes in Standard, Advanced, and Enterprise editions, the feature differences between editions can be found on the product page here.

There is a great hardware calculator available on virten.net here, useful for calculating the resource requirements of your design. You can also view NSX version history on the same site, here. NSX reference poster (below) available here.

nsxref

Installation

The focus of these guides will be on the deployment and configuration of the components which make up the NSX installation. The NSX model can be broken down into the following sections:

NSX Install Guide Part 1 – Management and Control Planes

  • Management Plane: provides the UI and REST API interfaces. Consists of the NSX Manager and vCenter Server, as well as a message bus agent to carry communication between other planes in the model.
  • Control Plane: runs in the NSX Controller cluster, which manages the run-time state of logical networks. Does not carry data traffic but connects to the management and data planes using the user world agent.

NSX Install Guide Part 2 – Data Plane

NSX Install Guide Part 3 – Edge and DLR

  • Data Plane: contains the NSX vSwitch and NSX Edge. The NSX vSwitch is made up of the distributed switch and kernel modules running in the hypervisor, enabling VXLAN bridging capabilities and distributed services. The NSX Edge acts as a gateway device providing L2 bridging from VXLAN to the physical network, as well as other services such as perimeter firewall, load balancing, VPN, and so on.
  • After NSX is installed you may want to use Guest Introspection to offload AV scanning to a dedicated Service Virtual Machine (SVM) provided by a third party. For more information on Guest Introspection and service deployments see the NSX Manager Guest Introspection post.

There are a number of supported topologies for the vSphere environment; review the resources listed below for more details. In this example a vCenter Server Appliance has been deployed with 2 vSphere clusters; the management cluster is made up of 3 hosts and will be hosting the vCenter Server, NSX Manager, NSX Controllers, and NSX Edge gateways. The compute cluster is made up of 4 hosts running virtual machine workloads. Distributed switches (vDS) are configured, with the usual redundancies: 1 for management and vMotion traffic (different VMkernel ports), 1 for VXLAN, and 1 for NSX Edges to connect out to external networks.

You’ll also need an IP addressing scheme in place, IP Pools are required for the deployment of NSX Controllers (minimum 3 recommended) and VTEP interfaces (1 per host, can also be DHCP). The setup wizards allow you to create an IP Pool at the time of deployment, however if you need to extend an existing IP Pool, or want to create your IP Pools in advance, see this post.

hosts

If you want to span virtual networks and objects (logical switches, routers, distributed firewall rules) across multiple sites or vCenter Server instances see the Configuring VMware Cross-vCenter NSX post.

Resources

All links referenced below are official VMware resources:

  • NSX 6.2.5 download link (NSX 6.3.0 download link)
  • NSX 6.2.5 release notes link (NSX 6.3.0 release notes link)
  • NSX 6.2 documentation centre link (NSX 6.3 documentation centre link)
  • NSX Hands on Labs link
  • NSX technical white paper link
  • NSX product walkthrough link
  • NSX design guide link
  • NSX validated designs link
  • NSX icons are available here for designing your own solution.

Installing NSX Manager

NSX Manager is deployed and registered with vCenter Server on a 1:1 mapping. Upon registration a plug-in is injected into the vSphere web client to enable deployment and management of logical networks and services.

Before beginning add a DNS entry for the NSX Manager in the relevant zone. Download the NSX Manager OVA file here. The NSX Manager appliance is preconfigured with 16 GB RAM, 4 vCPU and 60 GB disk. VMware recommend a memory reservation for NSX Manager in production environments.

download

Deploy the OVA file to the vCenter, and in the customisation options configure the appliance network settings. Once the NSX Manager appliance is deployed and powered on, open a web browser to the configured IP address. Log in with the admin account; if you didn’t change the password during deployment the default password is default.

nsx2

Click Manage vCenter Registration, and under vCenter Server click Edit. Enter the name of the vCenter Server to register NSX Manager with, along with the relevant credentials, and click Ok. Configure the Lookup Service URL by clicking Edit, enter the vCenter host name and SSO details, and click Ok.

Note the Backup & Restore option, this method uses FTP / SFTP and is currently the only supported way of backing up and restoring NSX Manager, for more information see VMware NSX Backup and Restore. Browse to the Manage Appliance Settings page, configure a time server and check DNS and host name settings are correct. You can also configure a syslog server, such as vRealize Log Insight, and change network settings if required.

nsxmanager

After configuring NSX Manager restart the VMware vSphere Web Client on the vCenter Server the NSX Manager was registered with. You may also need to restart your browser. Log in to the vSphere web client and browse to Networking & Security, click NSX Managers and verify the newly deployed NSX Manager is present.

ip1

To configure additional permissions select the NSX Manager and click Manage, Users. Here you can add, edit, and remove users and permissions. Each role provides a description of the level of access, for more information on NSX permissions click here. To add Active Directory permissions to NSX Manager select the Domains tab, and click the green plus symbol to add the LDAP details.

To apply a license key to NSX Manager select the Administration option from the home page of the vSphere web client, click Licenses, Assets, Solutions. Highlight NSX for vSphere and click the All Actions drop down menu, select Assign License. Add a new license key or assign an existing license key and click Ok.

Installing NSX Controllers

  • The NSX Controller cluster is made up of 3 NSX Controllers for high availability, ideally these should be deployed to 3 different hosts in a management cluster, and 3 different datastores, to provide redundancy.
  • The NSX Controller cluster provides the control plane and is responsible for maintaining information of logical switches and distributed logical routers, as well as hosts and virtual machines.
  • NSX Controllers are deployed as Linux based virtual machines with 2 vCPU, 4 GB RAM, and 20 GB disk.
  • NSX Controllers require an IP address from a defined IP Pool, preferably on the same subnet as the ESXi management addresses.

Log into the vSphere web client and select Networking & Security. From the left hand navigator pane click Installation. In the NSX Controllers section click the green plus symbol to add a controller.

controllers1

Populate the fields in the Add Controller wizard, deploying each controller to a different host where possible. For the IP Pool click Select; each NSX Controller uses an IP address from the IP Pool. If you have created IP Pools previously they will be listed here; if you need to extend an existing IP Pool see this post. Alternatively, to create a new pool click New IP Pool.

controllers2

To create a new IP Pool fill in the details in the Add Static IP Pool wizard. The IP Pool used can be shared with other services (i.e. doesn’t have to be dedicated to NSX Controllers) as long as there are enough free IP addresses in the pool for all 3 controllers.

controllers3

When you have configured the IP Pool click Ok on the Add Controller wizard. The first controller will now be deployed.

deploying

When the deployment has completed repeat the process a further two times, using the same IP Pool. You may notice the password field is absent when deploying the second and third controllers; subsequent NSX controllers are configured with the same root password as the first deployed controller.

deployed

The NSX Controllers are ready to use. If you do have enough hosts to run the controllers on separate hosts then configure a DRS anti-affinity rule to keep them apart with the following steps:

  • In the vSphere web client click on the cluster where the NSX Controllers reside.
  • Open the Manage tab and select VM/Host Rules under Configuration.
  • Click Add to create a new rule.
  • Choose to Separate Virtual Machines and add the NSX Controllers, click Ok.

drs
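Optionally, you can confirm the cluster has formed correctly from each controller’s console or SSH session; the command below is a quick health check and should report the controller as joined and connected to the cluster majority.

show control-cluster status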

Firewall Exclusions

VMware recommend that any management virtual machines are excluded from the NSX distributed firewall. Likely candidates are the vCenter Server, vCenter database (if external), Platform Services Controller (if external), etc. If you have a separate management cluster and are not preparing the management hosts for NSX then you do not need to worry about this step.

If you have management VMs running on hosts with NSX installed, i.e. host preparation, which we’ll do next, and distributed firewall enabled; then we need to exclude those virtual machines. By default the NSX Manager, NSX Controllers, and NSX Edge virtual machines are automatically excluded from the NSX distributed firewall.

  • From the vSphere web client select Networking & Security, click NSX Managers.
  • Select the NSX Manager and click the Manage tab, then Exclusions List.
  • Click the green plus symbol, select the virtual machines to add to the exclusions list and click Add, and Ok.

exclusions

The virtual machines are now excluded from distributed firewall protection. It’s also worth noting that if a new vNIC is added to any of the VMs after adding them to the exclusions list, then the distributed firewall is still automatically deployed to that new vNIC. In this case you need to either power cycle the virtual machine, or remove and re-add it to the exclusion list.

The management and control planes are now setup. The next step is to prepare the ESXi hosts for NSX which falls under the data plane in Part 2.

_______________

NSX Install Guide Part 1 – Management and Control Planes

NSX Install Guide Part 2 – Data Plane

NSX Install Guide Part 3 – Edge and DLR