Setting Manual DFW Override for NSX Restore

The recommended restore process for NSX Manager is to deploy a new OVA of the same version and restore the configuration. After a recent failed upgrade we needed to restore NSX Manager, so we deployed a new OVA with the same network settings. Once the new NSX Manager was powered on we were unable to ping its IP address. There were no default rules allowing access to the VM, and since the existing NSX Manager was down we could not connect to the UI or API to add the required firewall rules. NSX Manager is normally excluded from the Distributed Firewall (DFW) by default, but at this point the hosts treated it like any other VM because the configuration had not yet been restored. We therefore needed a manual override to clear the filters applied to the new NSX Manager, allowing us to connect and restore the configuration. The following commands were run over SSH on the host where the new NSX Manager OVA was deployed. For further guidance on the backup and restore process of NSX see the NSX Backup and Restore post.

Disclaimer: the steps below are advanced commands using vsipfwcli, which is an extremely powerful tool. You should engage VMware GSS before doing this on anything other than a lab environment. You should also understand the impact of stopping the vsfwd service on the host, and the effect this may have on any other VMs with a DFW policy of fail closed.

net-stats -l lists the NIC details of the VMs running on the host; verify the new NSX Manager is present.

/etc/init.d/vShield-Stateful-Firewall stop stops the vsfwd user world agent; you can also use status to display the service status.

vsfwd

summarize-dvfilter lists port and filter details; we need the port name for the VM, e.g. nic-38549-eth0-vmware-sfw.2.

DFW_1

vsipioctl getrules -f nic-38549-eth0-vmware-sfw.2 lists the existing filters applied to the port (replace the port name with your own). From the output confirm the ruleset name, e.g. ruleset domain-c17.

DFW_2

vsipioctl vsipfwcli -f nic-38549-eth0-vmware-sfw.2 -c "create ruleset domain-c17;" creates a new empty ruleset with the same name, overriding the previous ruleset applied to the port. Replace the port name with your own and the ruleset name if it is different.

vsipioctl getrules -f nic-38549-eth0-vmware-sfw.2 lists the filters applied to the port again; the ruleset should now be empty as no filters are applied.

DFW_3

The NSX Manager now responds to ping and the normal restore process can resume; connect to the web interface by browsing to the IP address or FQDN of the NSX Manager.

Restore_NSX_1

Select Backup & Restore.

Restore_NSX_2

Select the appropriate restore point and click Restore. Click Yes to confirm.

Restore_NSX_3

The restore generally takes 5-10 minutes. Once complete you will see a restore completed successfully message in a blue banner on the Summary page. You may need to log out and log back in after the configuration is restored.

Restore_NSX_4

Once the NSX Manager services have started you can manage the DFW from the vSphere web client as normal. Remember to start the vsfwd service again on the host; once it is running, the empty ruleset we created earlier is replaced with the original ruleset when the host syncs with NSX Manager.

/etc/init.d/vShield-Stateful-Firewall start starts the vsfwd user world agent; you can also use status to display the service status.
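For reference, the full override and cleanup sequence run over SSH on the host is sketched below. The filter name (nic-38549-eth0-vmware-sfw.2) and ruleset name (domain-c17) are taken from this example and will differ in your environment.

  # confirm the new NSX Manager VM is present on this host
  net-stats -l
  # stop the vsfwd user world agent (understand the impact on fail-closed VMs first)
  /etc/init.d/vShield-Stateful-Firewall stop
  # find the dvfilter port name for the NSX Manager vNIC
  summarize-dvfilter
  # list the filters applied to the port and note the ruleset name
  vsipioctl getrules -f nic-38549-eth0-vmware-sfw.2
  # overwrite the ruleset with an empty one of the same name
  vsipioctl vsipfwcli -f nic-38549-eth0-vmware-sfw.2 -c "create ruleset domain-c17;"
  # confirm the ruleset is now empty, then restore NSX Manager
  vsipioctl getrules -f nic-38549-eth0-vmware-sfw.2
  # after the restore completes, start vsfwd so the host resyncs with NSX Manager
  /etc/init.d/vShield-Stateful-Firewall start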

CLI Reference for Troubleshooting NSX

Quick post documenting some useful CLI commands for troubleshooting NSX, mainly for my own reference. Other useful information can be found at NSX CLI Cheat Sheet and NSX for vSphere Command Line Interface Reference.

ESXi Hosts

Open an SSH session to an ESXi host. The SSH service can be started from the Configure, System, Security Profile page in the vSphere web client, or under Manage, Services when logging into the host UI.

ESXi_SSH

  • esxcli software vib list displays installed vibs, add | grep esx to filter.
  • vmkload_mod -l displays the loaded kernel modules, add | grep nsx to filter; the nsx-vdl2, nsx-vdrb, and nsx-vsip modules should be loaded.
  • /etc/init.d/vShield-Stateful-Firewall status displays the status of user world agent vsfwd which connects the host to NSX Manager.
  • /etc/init.d/netcpad status displays the status of user world agent netcpa which connects the host to the controller cluster.

ESXi_SSH_1

  • tail -f /var/log/netcpa.log tails the user world agent netcpa log.
  • Note – to change the logging level for netcpa execute the following commands on the ESXi host:
    • chmod +wt /usr/lib/vmware/netcpa/etc/netcpa.xml gives write permissions to the file.
    • vi /usr/lib/vmware/netcpa/etc/netcpa.xml opens the file in an editor. Find <level>info</level>, press insert to edit the line and replace info with verbose. Press escape twice and enter :wq to save the file and quit.
    • /etc/init.d/netcpad restart restarts netcpad.

ESXi_SSH_3.png

  • esxcfg-advcfg -g /UserVars/RmqIpAddress lists the IP address of the registered NSX Manager.
  • esxcli network ip connection list lists active TCP/IP connections, add | grep 5671 to filter port 5671 used to connect to NSX Manager.
  • ping ++netstack=vxlan -d -s 1572 -I <VMK> <VTEP> can be used to ping a VTEP IP address using an increased packet size, where <VMK> is the VXLAN VMkernel interface to use on the source host, and <VTEP> is the destination VTEP IP address to ping.
    • For example ping ++netstack=vxlan -d -s 1572 -I vmk4 192.168.30.12.
    • If the ping is successful then we know the MTU is set correctly, since the command specifies a packet size of 1572 with a 28 byte overhead (1572 + 28 = 1600). If the packet is dropped then try reducing the packet size to 1472 (1472 + 28 = 1500), e.g. ping ++netstack=vxlan -d -s 1472 -I vmk4 192.168.30.12. If the smaller packet is successful but the larger packet is dropped then there is an MTU mismatch; see the sketch after this list.
  • pktcap-uw can be used for packet capturing, full syntax here.
  • esxtop is a useful host troubleshooting tool, type n to switch to network view.
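As a quick illustration of the MTU test, the sketch below checks two VTEPs at both packet sizes from the ESXi shell; vmk3 and the VTEP addresses are placeholders that should be replaced with your own VXLAN VMkernel interface and destination VTEP IPs.

  # placeholders: replace vmk3 and the VTEP list with your own values
  for vtep in 192.168.30.12 192.168.30.13; do
    echo "Testing $vtep with a 1572 byte payload (1600 with overhead)"
    ping ++netstack=vxlan -d -s 1572 -I vmk3 $vtep
    echo "Testing $vtep with a 1472 byte payload (1500 with overhead)"
    ping ++netstack=vxlan -d -s 1472 -I vmk3 $vtep
  done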

ESXi_SSH_2

NSX Manager Appliance

Open an SSH session to the NSX Manager. The SSH service can be started from the Summary page of the NSX Manager Virtual Appliance Management page.

Enable_SSH

  • show interface displays information for the NSX Manager management interface.
  • show ip route displays NSX Manager route information.
  • show filesystem displays NSX Manager file system capacity.
  • show log manager follow follows the NSX Manager log.

NSX_SSH_1

  • show controller list all displays the controller nodes status.
  • show cluster all displays vSphere clusters managed by the vCenter Server.
  • show logical-switch list all displays all logical switch information.
  • show logical-switch controller master vni 5001 connection displays the hosts connected to segment ID 5001; replace connection with vtep, mac, or arp for further detail.
  • show logical-router list all displays all distributed logical router information.
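The same information can also be pulled from the NSX Manager REST API, which is handy when scripting checks; below is a minimal sketch using curl, assuming the NSX-v 6.x API paths for transport zones, logical switches, and controllers, with a placeholder hostname and credentials.

  # transport zones (network scopes)
  curl -k -u admin:password https://nsxmgr.lab.local/api/2.0/vdn/scopes
  # logical switches
  curl -k -u admin:password https://nsxmgr.lab.local/api/2.0/vdn/virtualwires
  # controller cluster
  curl -k -u admin:password https://nsxmgr.lab.local/api/2.0/vdn/controller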

Deploying an NSX Load Balancer with vRA

In this post we will walk through the process of deploying an NSX Load Balancer using vRealize Automation. We will also cover high availability and post-deployment scaling. In order to take advantage of the direct NSX API integration with vRA you will need to be running at least v7.3; read more about the enhancements made in vRA 7.3 in the release notes or what’s new. In the example we’ll work towards, multiple web servers are provisioned with an On-Demand Load Balancer, along with app servers and a database server. The On-Demand Load Balancer deploys an NSX edge for load balancing and adds the web servers as pool members. There are a number of available customisations which we’ll cover in the configuration process below.

Blueprint_2

Adding Endpoints

The following process assumes that you have a fully deployed vRA topology with all the components required to provision virtual machines; vCenter endpoint(s), reservations, compute resources, and a published catalog with entitlements. It would also be beneficial to have an understanding of using an NSX edge for load balancing or have deployed an edge manually to see the corresponding deployment options.

The first step is to add the NSX Manager as a vRA endpoint. From the Infrastructure tab select Endpoints and then Endpoints again. Click New and select Networking and Security, NSX. Enter the details for the NSX Manager. Before saving the NSX endpoint we can create an association with the registered vCenter Server. From the Associations tab, click New. Select the vCenter Server from the dropdown; the platform type will auto-populate to vSphere and the description to vSphere to NSX Association. Click Test Connection and then Ok to save the configuration.

NSX_Endpoint

Blueprint Modifications

After NSX has been added as an endpoint navigate to Blueprints under the Design tab. From the design canvas of a new or existing blueprint select Network & Security, drag and drop the On-Demand Load Balancer onto the canvas.

Blueprint_1

Click the On-Demand Load Balancer that has been added to the canvas. When the load balancer is provisioned in NSX the servers associated with the load balancer in the blueprint are automatically added as members in the pool. This is set in the Member field, in the example below the web servers in the blueprint are added as members of the load balancer.

Blueprint_3

The network for the member servers and the network for the VIP address are configured in the appropriate fields. Leave the IP address blank to automatically assign an IP address from the associated VIP network. Under Virtual servers click New, here you can configure the protocol settings for the load balancer, and the algorithm/persistence, health check, and connection settings by selecting Customize.

Customize_LB

Before saving the blueprint click the settings cog at the top of the page; this opens the blueprint properties. From the NSX Settings tab set the Transport zone to attach the load balancer to; this can be a local or universal transport zone. Next select the Edge and routed gateway reservation policy; this is the reservation policy (compute, storage) that will be used when provisioning the edge.

Blueprint_Properties_1

Click the Properties tab and select Custom Properties. There are a number of optional parameters we can add here.

  • NSX.Edge.ApplianceSize sets the appliance size of the edge, accepted values are compact, large, quadlarge and xlarge.
  • NSX.Edge.HighAvailability deploys the edge appliance in HA mode when the value is true. Without this property only a single appliance is deployed.
  • NSX.Edge.HighAvailability.PortGroup references the port group to use for the heartbeat network of the edge appliances deployed in HA mode.

Blueprint_Properties_2
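For illustration, the completed custom properties might look something like the values below; the port group name is a placeholder and the appliance size should match your own sizing requirements.

  NSX.Edge.ApplianceSize                 large
  NSX.Edge.HighAvailability              true
  NSX.Edge.HighAvailability.PortGroup    <heartbeat port group name>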

Click Ok and Finish to save the blueprint. Make the blueprint available as a catalog item and request a test deployment. In vSphere you will see the edge and VMs being provisioned and, once complete, the virtual machines will be added as members in the load balancer pool. You can view the settings of the deployed edge in the vSphere web client under Networking & Security, NSX Edges, double click the edge and select Load Balancer.

NSX_Load_Balancer

Post Deployment

When the deployment is destroyed the edge appliances are removed along with the VMs as part of the cleanup process. If the deployment is scaled out then the new server is added as a member to the existing load balancer pool, likewise if the deployment is scaled in then the server deleted is also removed from the pool.

Scale_Out

The scale in and scale out actions are assigned as entitled actions from within the relevant entitlement. As well as having the permissions to perform the scale actions, the blueprint must also allow a higher maximum number of instances. In the example below 2 web servers will be deployed with an On-Demand Load Balancer; as the maximum number of instances is set to 10, the requester can scale out the number of web servers and pool members to a maximum of 10 servers.

Blueprint_Scale

Configuring VMware Cross-vCenter NSX

This post provides an overview of cross-vCenter NSX and walks through the configuration steps. Cross-vCenter NSX allows central management of network virtualization and security policies across multiple vCenter Server systems. Cross vCenter NSX introduces universal objects; such as universal logical switches, universal logical routers, and universal distributed firewall rules. Universal objects are able to span multiple sites or vCenter Server instances, enhancing workload mobility by allowing cross vCenter and long distance vMotion for virtual machines, whilst keeping the same network settings and firewall rules. This improves DR capabilities, overcomes scale limits of vCenter Server, and gives administrators more control over resource pooling and the separation of environments.

Cross-vCenter NSX was introduced in NSX 6.2 and requires vSphere 6.0 or later. As normal, NSX Manager is deployed with vCenter Server in a 1:1 pairing. In a single site NSX deployment the NSX Manager is given the standalone role by default. When configuring cross-vCenter NSX one NSX Manager is assigned the primary role, and up to seven other NSX Managers are assigned the secondary role. NSX Managers configured for cross-vCenter NSX must all be running the same version. The primary NSX Manager is responsible for deploying the Universal Controller Cluster, forming the control plane across the NSX Managers. The Universal Controller Cluster runs in the site of the primary NSX Manager. Universal objects are created on the primary NSX Manager and automatically synchronized across the multi-site NSX environment.

Configuring Cross-vCenter NSX

The steps below assume you have already deployed and registered the NSX Managers, and have a good understanding of NSX. This post is intended as an add-on to the NSX Install Guide, providing an outline of the additional or different steps required for a cross-vCenter NSX install; further resources are listed at the bottom of the page. If you are using vCenter enhanced linked mode then multiple NSX Manager instances are displayed within the same interface, or single pane of glass, when managing the Networking & Security section of the vSphere web client. Enhanced linked mode is not a requirement for cross-vCenter NSX, however, and vCenter Server systems not in enhanced linked mode can still be configured for cross-vCenter NSX.

From the Networking & Security page of the vSphere web client select Installation, highlight the NSX Manager in the primary site, and from the Actions menu select Assign Primary Role.

NSX_Promote

The secondary NSX Manager(s) synchronize with the primary using the Universal Synchronization Service. The secondary sites do not run any NSX Controllers, although the controllers can easily be redeployed there in the event of a primary site outage. Before assigning the secondary role you should ensure there are no existing NSX Controllers deployed in the associated vCenter. If you have already assigned a segment ID pool to the NSX Managers then ensure the segment ID pools do not overlap. Select the primary NSX Manager and from the Actions menu click Add Secondary NSX Manager. Enter the secondary NSX Manager information and admin password.

NSX2

Review the table of NSX Managers; the roles have now changed accordingly.

NSX_Roles

The universal controller cluster is formed by individually deploying the NSX Controllers from the primary NSX Manager; the method of deploying the controllers is the same as a standalone install (see NSX Install Guide Part 1 – Mgmt and Control Planes for further assistance). Once the controllers are deployed you will notice placeholder controllers listed against the secondary NSX Manager; these are not connected or deployed. In the event of a site failure the configuration is synchronized between NSX Managers, so you can simply re-deploy the controllers in the DR site. To see the failover process review this blog post. VMware recommend deploying 3 controllers on different hosts, with anti-affinity rules.

NSX_Controllers
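Once the three controllers are deployed, their status, and later the universal objects synchronized to each site, can be confirmed from the NSX Manager central CLI, for example:

  show controller list all
  show logical-switch list all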

The next part of the install process is to follow the host preparation and VXLAN configuration steps as normal (see NSX Install Guide Part 2 – Data Plane for further assistance). Create the segment ID pools for each NSX Manager, making sure they do not overlap. On the primary NSX Manager you will also assign a universal segment ID pool.

In order for us to deploy universal logical switches we need to create a universal transport zone. A universal transport zone determines which hosts a universal logical switch can reach, spanning multiple vCenters. From the Logical Network Preparation tab open Transport Zones, ensure the primary NSX Manager is selected and click the plus symbol. Select Mark this object for Universal Synchronization, and enter the configuration for the universal transport zone. All universal objects must be created on the primary NSX Manager, change the NSX Manager to the secondary site and you will see the universal transport zone has synchronized there also.

NSX_TZ

Next we will create a universal logical switch for the transit network. Local objects such as logical switches, logical routers, and Edge Services Gateways can still be deployed from each NSX Manager, although by design they are only local to the vCenter linked to that specific NSX Manager, and cannot be deployed or edited elsewhere. From the left hand navigation pane in Networking & Security select Logical Switches, ensure the primary NSX Manager is selected and click the plus symbol. Enter a name for the transit network and select the universal transport zone we created earlier.

NSX_Universal_Transit

At this stage you can also deploy another universal logical switch, connecting a couple of test VMs on a private subnet, and have them ping one another to confirm connectivity. Now that we have a transit network and test universal logical switches connected to our universal transport zone we can go ahead and create a universal DLR. In this particular environment we have already deployed an ESG in each site. For further assistance with deploying an ESG and DLR see NSX Install Guide Part 3 – Edge and DLR.

From the Networking & Security page click NSX Edges, ensure the primary NSX Manager is selected and click the plus symbol. The control VM for the DLR is deployed to the primary site, again the configuration is synchronized and this can be re-deployed to the DR site in the event of a primary site outage. Select Universal Logical Router and follow the wizard as normal, if local egress is required then check the appropriate box. Sites configured in a cross-vCenter NSX environment can use the same physical routers for egress traffic, or have the local egress feature enabled within a universal logical router. The local egress feature allows routes to be customized at host, cluster, or router level.

NSX_UDLR

From the NSX Edges page double click the new universal DLR, select Manage, Settings, Interfaces and click the add button. In order for traffic to route from the universal DLR to the ESG(s) we must add an uplink interface connecting them to the universal transit network. Change the logical router interface to Uplink, in the Connected To field select the transit network universal logical switch we created earlier. Configure the IP and MTU settings of the interface per your own environment.

NSX_UDLR_Interface

You can also add Internal interfaces here corresponding with universal logical switches for virtual machine subnets. Before these subnets can route out follow the same process to add an Internal interface to the ESG(s) connecting them to the same transit network.

A virtual machine connected to the test universal logical switch can now vMotion between sites keeping the same IP addressing, providing L2 over L3 capability. As well as remaining on the same logical network, a virtual machine can also be migrated across sites without any additional firewall rules; this is achieved with the use of universal firewall rules. Universal firewall rules require a dedicated section to be created under the Firewall section of Networking & Security; you must select Mark this section for Universal Synchronization. For assistance with creating universal firewall rules see here.

NSX_Universal_Firewall
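After a test vMotion between sites you can confirm the universal rules are still applied by checking the filters on the destination ESXi host over SSH; the filter name below is an example and should be replaced with your own.

  # find the dvfilter port name for the migrated VM's vNIC
  summarize-dvfilter
  # list the rules applied to that filter
  vsipioctl getrules -f nic-38549-eth0-vmware-sfw.2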

Additional Resources

To plan a cross-vCenter NSX installation review the VMware Cross-vCenter NSX Design Guide, Cross-vCenter NSX Topologies Guide, and the VMware Cross-vCenter Installation Guide. For more information on cross-vCenter NSX design see the following blog posts:

VM Security Tags with NSX Firewall

This post will walk through virtual machine security tags; how we can create tags, automatically add virtual machines with tags to a specific security group, and build associated NSX firewall rules. As a bonus we’ll also apply a security tag to a vRA blueprint, allowing vRA provisioned machines to automatically receive a security tag and apply any corresponding NSX firewall rules.

Security tags and groups allow us to identify virtual machines with a common value, such as business department, support group, workloads, and so on. By applying security tags to virtual machines, and/or adding virtual machines to security groups, we can control security at a custom defined level, independent of the underlying infrastructure. Virtual machines can have multiple tags, allowing administrators to identify different values upon which to act. Many third party anti-virus solutions with NSX integration use security tags to protect and quarantine virtual machines depending on their health status.

The steps below assume that NSX is installed and working, for more information on installing the required components see the following series of posts.

NSX Install Guide Part 1 – Mgmt and Control Planes

NSX Install Guide Part 2 – Data Plane

NSX Install Guide Part 3 – Edge and DLR

Creating Security Tags and Groups

From the vSphere web client browse to Networking & Security, click NSX Managers, and select the appropriate NSX Manager. Open the Manage tab, then Security Tags. Existing security tags are listed; some third-party plugins, such as AV solutions, may also add and use security tags. To create a new security tag click the New Security Tag icon. Enter a name for the security tag, and a description if required, then click Ok.

Tags1

Security tags can be applied to virtual machines manually in the page referenced above, or through an automatic provisioning solution such as vRealize Automation.
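Tags can also be applied programmatically over the NSX REST API, which is how automation tools typically attach them; a minimal sketch is below, assuming the NSX-v 6.x securitytags endpoints and using placeholder tag and VM IDs.

  # list existing security tags and note the objectId, e.g. securitytag-10
  curl -k -u admin:password https://nsxmgr.lab.local/api/2.0/services/securitytags/tag
  # attach the tag to a virtual machine by its managed object ID (placeholders shown)
  curl -k -u admin:password -X PUT https://nsxmgr.lab.local/api/2.0/services/securitytags/tag/securitytag-10/vm/vm-123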

Next, select the Grouping Objects page. Under Security Groups, click the Add New Security Group icon. Enter a name and description if required.

Group1

Go to the third option: Select objects to include. Change the Object Type in the drop-down to Security Tag. Select the new security tag we created earlier.

Group2

Review the details on the summary page and click Finish.

Group3

The security group has now been created, and any virtual machines that use the security tag we included are automatically added to the group. You can create multiple security tags and groups for different departments, applications, or however you want to segregate these out.

Group4

Creating NSX Firewall Rules

Our new security group / tag setup can be used to configure NSX firewall rules. Still under Networking & Security in the vSphere web client, select Firewall.

If you have already configured NSX firewall rules you’ll be familiar with this page, and likely have a number of sections and rules already configured. You can edit an existing rule or create a new one in the relevant section. To create a new section use the Add Section (folder) icon. Click the green plus icon to add a new rule, or the edit icon to edit an existing rule.

Firewall1

When configuring the rule you can set the source, destination, or both to use a security group. Change the Object Type drop-down to Security Group, and select the new security group we created earlier.

Firewall2

Remember to click Publish Changes when you’re done. For assistance with creating NSX firewall rules see this section of the NSX Documentation Center.
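If you want to review the published rule base outside the UI, the DFW configuration can also be read back from the NSX API; a minimal sketch, assuming the NSX-v firewall configuration endpoint and placeholder credentials.

  # retrieve the full distributed firewall configuration, including the new section and rules
  curl -k -u admin:password https://nsxmgr.lab.local/api/4.0/firewall/globalroot-0/config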

Adding Tags to vRA Blueprints

The use of security tags with blueprints requires NSX to be integrated with vRA. If you haven’t already done so you can follow the steps outlined in the VMware post Part 1 of Integrating NSX with vRealize Automation. You’ll also need an understanding of how to create blueprints, again there is more information on this in the VMware post Part 2 of Integrating NSX with vRealize Automation if you need it.

To add a security tag to a virtual machine provisioned by vRA we must add it to the appropriate blueprint. After adding security tags and/or groups to NSX Manager we need to run a data collection so that vRA shows up-to-date information. From the vRA portal browse to the Infrastructure tab and select Compute Resources, Compute Resources. Move the mouse cursor over the compute resource and click Data Collection. Scroll down to Network and Security Inventory and click Request now. The sync takes a couple of minutes; you can leave the page during this time.

vra1

Next open the Design tab and select Blueprints. We can add a security tag to an existing blueprint, or create a new one. In the design canvas click Network & Security from the list of categories. Locate Existing Security Tag and drag this onto the canvas. Alternatively you can use a security group at this stage if you’d prefer.

vra2

Select the security tag from the list of existing tags. From the design canvas select the virtual machine and open the Security tab. Tick the referenced security tag to associate it with the virtual machine. Click Save and Finish to save the changes to the blueprint. Any virtual machines provisioned from this blueprint are now tagged with the security tag (or group) selected.

McAfee MOVE 4.5.0 Upgrade Guide with NSX

This post walks through the upgrade of McAfee MOVE to version 4.5.0 with NSX Manager, and can be used when upgrading McAfee MOVE Agentless versions 3.5.x, 3.6.x, and 4.0.0. The upgrade of versions 3.5.x or 3.6.x involves migrating all custom settings, policies and tasks with the McAfee MOVE Migration Assistant (these are retained by default when upgrading from version 4.0.0).

Pre-Requisites

The benefits and architecture of offloading AV to a dedicated Service Virtual Machine (SVM) with McAfee MOVE and NSX are covered in the McAfee MOVE with NSX Install Guide. The scope of this guide is to upgrade an existing McAfee MOVE installation, so it is assumed that NSX Manager, IP Pools, service deployments (i.e. Guest Introspection), policies, and ePO integration are all in place. Furthermore it is assumed that network connectivity between components, time sync, DNS, vSphere access, etc. are also configured. For a full list of prerequisites see the above install guide. The requirements below are specific to the McAfee MOVE 4.5 upgrade:

2

Update Extensions

The first step is to update the extensions on the ePO server. When upgrading versions 3.5.x or 3.6.x the existing extensions are left in place to facilitate the migration of data, which we’ll cover later. When upgrading version 4.0.0 the extensions are replaced with the new versions; all settings and policies remain.

I am going to use Software Manager to download, install, and check in the software directly in the ePO web UI. If you prefer, you can manually download the extensions on your own machine and then install them through the Extensions page (more on this below). To use Software Manager click the drop-down Menu option in the top left hand corner of the page and select Software Manager. Use the search function to find McAfee MOVE AntiVirus 4.5. Browse through the components (you will notice the Migration Assistant is included) and click Check In All.

migration1

Accept the license agreement and click Ok. The extensions are downloaded and installed.

migration2

An alternative way of installing or updating extensions is to browse to McAfee Downloads, enter your grant number when prompted, and then select McAfee MOVE AV for Virtual Servers, McAfee MOVE AntiVirus 4.5. Download the required files and then browse to the web interface of the ePO server (https://EPO:8443/ where EPO is the name of your ePO server). Log in as an administrator and click the drop-down Menu option in the top left hand corner of the page. Locate Software, and select Extensions. Click Install Extension and install the downloaded zip files in the following order: the Cloud Workload Discovery bundle Cloud_Workload_Discovery_Hybrid_4.5.0.zip (note that the CommonUI bundle, consisting of the mfs-commonui-core-ui, commonui-core-common, and commonui-core-rest extensions, is a prerequisite for Cloud Workload Discovery 4.5 on ePO 5.1.3 and 5.3.1), then the McAfee MOVE AntiVirus extension MOVE-AV_Ext_4.5.0_Licensed.zip, and finally the Product Help extension MOVE-AV_HELP_EXT_4.5.0.zip.

Whichever way you install the extensions, ensure you download MOVE-AL-AL_SVM_OVF_4.5.0.148 (or the most recent version). This zip file contains the Service Virtual Machine (SVM), which we’ll need to add to the SVM repository later.

Once the extensions are installed the new version of MOVE AntiVirus will be visible in the Data Center Security group, under Menu > Software > Extensions.

migration6

For those upgrading versions 3.5.x or 3.6.x the old extensions remain in place in the MOVE AV group.

migration7

You will also notice an additional option in the Automation menu; MOVE AV Agentless remains as the legacy option for versions 3.5.x or 3.6.x, and MOVE AntiVirus Deployment is created for version 4.5.0. The legacy MOVE AV Agentless option is deleted upon removal of the old extensions at the end of the process. Again, this doesn’t apply to 4.0.0 because in that case the extensions are upgraded rather than running side by side.

versions

Migration Assistant

The Migration Assistant can be used when upgrading from MOVE versions 3.5.x or 3.6.x; if you are upgrading from 4.0.0 then this step is not necessary. Use one of the methods outlined above to install the Migration Assistant extension. If you used Software Manager to install the full McAfee MOVE AntiVirus 4.5 package then the Migration Assistant should already be installed. If you need to manually download and install the extension from McAfee Downloads, change the Software Downloads tab to Extensions to view the Migration extension, as shown below.

migration

When the install is complete, in the ePO web UI click the drop-down Menu option and, under Software, click Extensions. The MOVE Migration Assistant 4.5 is listed under Data Center Security.

migration4

We can now go ahead and run the Migration Assistant; from the drop down Menu, under Policy, select MOVE Migration Assistant.

migration5

Select Automatic migration to migrate all settings for supported products (note that unassigned policies are not migrated) and click Next. To select only certain policies or edit policies you can use the Manual migration option, for more information see page 10 of the McAfee MOVE Migration Guide.

migration8

Review the items to be migrated; you can rename and edit the policy notes if required by clicking Rename and Edit Notes. When you’re ready to start migrating click Save.

migration9

Once the migration job has finished go back into the MOVE Migration Assistant, next to Migrate Agentless Deployment Configuration Details (Agentless Only) select Run, and click Next. Click Ok to confirm migrating configuration details.

migration10

When the config migration has completed click the drop down Menu option and under Automation select MOVE AntiVirus Deployment. You will see the SVM configuration and NSX registrations have all been migrated across.

Note that if you are upgrading from 3.5.x then the NSX certificate and credential data is migrated across, however you still need to enter the SVM configuration under Menu, Automation, MOVE AntiVirus Deployment, Configuration, General.

Upgrade SVM Registration

Now we need to add version 4.5.0 of the Service Virtual Machine (SVM) to the SVM repository, and update the registered SVM version with NSX Manager. In the ePO web UI click Menu, under Automation select MOVE AntiVirus Deployment. From the Configuration tab select SVM Repository, click Actions, Add SVM. Browse to the zip file containing the SVM we downloaded earlier and click Ok.

svm1

The new version of the SVM will now be listed in the repository.

svm2

Next go to Menu, Automation, MOVE AntiVirus Deployment. In the Configuration tab the NSX Manager details and credentials should still be in place. Click the Service tab. The Registered SVM Version will still show the old version; from the Actions column for the NSX Manager click Upgrade. Select the new SVM version and click Ok. The latest version of the MOVE SVM is now registered with the selected NSX Manager.

Upgrade NSX Components

The final stage is to update the NSX security policy and service deployments. Log into the vSphere web client and click Networking & Security from the home page. Select Service Composer and then the Security Policies tab. As we’re upgrading an existing McAfee MOVE solution you should already have an AV-related policy or policies configured; we need to reconfigure these to point at the new MOVE policies that were migrated across in ePO. Select the security policy to update and click the Edit icon.

editpolicy

Click Guest Introspection Services and select the existing guest introspection service; click the edit icon and make a note of the existing settings. Cancel out of the edit window and click the red cross to delete the guest introspection service. Click the green plus symbol to add a new service.

policy1

Enter a name for the service and ensure Apply is selected. Use the McAfee MOVE AV service and select the ePO policy from the Service Profile drop-down. Set the state to Enabled and select Yes to enforce the policy. You can use the same settings as the previous service; the only difference will be the new service profile (ePO policy). Click Ok.

policy2

Select the Security Groups tab. Confirm that existing security groups are in place with the NSX security policy associated with the McAfee ePO policy applied. If needed you can select a group and click the apply policy icon to apply the security policy edited above to a security group.

policy3

Finally, we can update the Service Virtual Machines deployed on the ESXi hosts. From the left hand navigation pane select Installation and the Service Deployments tab. Existing installations will be listed here, with an Upgrade Available status. Service deployments are installed at vSphere cluster level, select the vSphere cluster to upgrade and click the Upgrade icon.

moveupgrade

New versions of the SVM are pushed out to each ESXi host in the selected cluster, replacing old versions using the same configuration details (datastore, port group, IP address range). Once complete the new version number is listed, the installation status is succeeded, and the service status is up.

movesuccess
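If you want to double-check at the host level, the Guest Introspection components the SVM relies on can be inspected over SSH on an ESXi host; a quick sketch, assuming the NSX-v MUX VIB and service names (epsec-mux and vShield-Endpoint-Mux) apply to your build.

  # confirm the Guest Introspection MUX VIB is installed
  esxcli software vib list | grep epsec
  # check the MUX service status on the host
  /etc/init.d/vShield-Endpoint-Mux status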

If you upgraded version 3.5.x or 3.6.x you can remove the legacy MOVE extensions once you have updated the SVM registration and service deployments on each vCenter. In the ePO web UI open the Extensions page, locate the old version of the McAfee MOVE extension and click Remove.

If any of the components referenced above are not in place, or you need to deploy McAfee MOVE AV to a new vSphere cluster, see the McAfee MOVE with NSX Install Guide post. The only other thing worth noting is that I had a vCenter where the MOVE service registration was failing; I had to remove the MOVE service deployments and service definition from NSX Manager, remove the vCenter from cloud accounts in ePO, and then add it all back in as a new install, deploying the SVM as a fresh 4.5 install rather than an upgrade.

NSX Install Guide Part 3 – Edge and DLR

In the final installment of this 3 part guide we will configure the Edge Services Gateway (ESG) and Distributed Logical Router (DLR). The NSX installation and relevant logical switches must be in place before continuing, for further information see NSX Install Guide Part 1 – Mgmt and Control Planes and NSX Install Guide Part 2 – Data Plane. It is important to note that depending on your network configuration and NSX design, additional steps may be required to integrate with your chosen routing protocol.

First we will create an Edge Services Gateway, providing access to the physical network (north-south traffic), followed by a Distributed Logical Router, which will provide connectivity for virtual machines using different logical switches (east-west traffic). The DLR will connect to the ESG to provide external routing using a transit logical switch. The image below shows the topology of the described components (from the VMware Documentation Centre, which also provides more information on advanced features and routing configurations).

topology

Edge Services Gateway

  • The Edge Services Gateway allows virtual machines to route to external devices, in other words to access the physical network.
  • The ESG is deployed as a virtual appliance in 4 different sizes:
    • Compact: 512 MB RAM, 1 vCPU, 500 MB disk.
    • Large: 1 GB RAM, 2 vCPU, 500 MB disk + 512 MB disk.
    • Quad Large: 1 GB RAM, 4 vCPU, 500 MB disk + 512 MB disk.
    • X-Large: 8 GB RAM, 6 vCPU, 500 MB disk + 2 GB disk.
  • Each ESG can have a total of 10 interfaces (internal and uplinks).

From the left hand navigation pane select NSX Edges and click the green plus symbol to create a new Edge. Select Edge Services Gateway. Assign a name that will be displayed in the vSphere inventory and click Next. The hostname will be displayed in the CLI but is an optional field (the Edge ID will be displayed if no hostname is specified). Should you require HA, and a secondary appliance to be deployed, tick Enable High Availability.

esg1

Configure the admin password (at least 12 characters plus the usual complexity requirements) and logging level. You may want to enable SSH for troubleshooting purposes; this can also be enabled at a later date if required. Click Next.

esg2

Select the datacentre and appliance size as per the recommendation from VMware below:

The Large NSX Edge has more CPU, memory, and disk space than the Compact NSX Edge, and supports a larger number of concurrent SSL VPN-Plus users. The X-Large NSX Edge is suited for environments that have a load balancer with millions of concurrent sessions. The Quad Large NSX Edge is recommended for high throughput and requires a high connection rate.

Large should be ok for most environments; Compact shouldn’t be used for production. Click the green plus symbol to add an Edge appliance.

esg3

Configure the vSphere placement parameters and click Ok. If you are using HA then add a second appliance, using a different datastore; DRS rules will automatically be added to keep the two appliances apart. If you do not deploy any appliances then the ESG will be created in an offline mode until appliances are deployed. When you have finished adding the Edge appliances click Next.

esg4

We must now add the Edge interfaces; click the green plus symbol.

esg5

Configure the NSX Edge interfaces:

  • Add the physically connected distributed port group (click Select) to an Uplink interface, and enter the network details of your physical router. This provides a route to the physical network for north-south traffic.
  • If you selected HA then at least one internal interface must be configured to use a logical switch for heartbeat traffic; change the type to Internal and leave the IP address table blank.
  • If you will be adding a Distributed Logical Router then add an internal interface to the TRANSIT logical switch where the DLR will also be attached. The subnets to be routed externally are added to the DLR later.
  • Lab only: if you are not using a Distributed Logical Router, i.e. in a very small lab environment, then add the subnets for external connectivity and their associated logical switches as internal interfaces (east-west traffic).

esg6

When the required interfaces have been added click Next. Depending on your routing configuration you may need to add a default gateway, click Next.

esg7

Tick Configure Firewall default policy and set the default traffic policy to Accept; enable logging if required. The firewall policy can be changed or configured later if required, however if you do not configure the firewall policy now, the default policy is set to deny all traffic.

If you have deployed HA, each appliance will be assigned a link local IP address on the heartbeat network we created earlier. You can manually override these settings in the Configure HA parameters section if required, otherwise leave the defaults and click Next.

esg8

On the summary page click Finish to finalize the installation. The ESG will now be deployed; the details are listed on the NSX Edges page, note the type is NSX Edge. If you used HA then two ESG appliances will be deployed, and you’ll notice in the vSphere inventory the virtual machine names end in -0 and -1, with -0 being the active ESG appliance by default until a failover occurs.

edges1

Once an Edge is deployed you can add or change the existing configuration, such as interfaces, by double clicking the Edge. Depending on your design and network configuration additional routing settings may be required; these can be found under Manage, Routing.

Routing.PNG
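Once routing is configured, the ESG routing table can also be checked from the NSX Manager central CLI (NSX 6.2 and later); a quick sketch using a placeholder edge ID.

  # list all edges and note the edge ID, e.g. edge-1
  show edge all
  # display the routing table of a specific edge
  show edge edge-1 ip route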

Distributed Logical Router

  • A Distributed Logical Router allows connectivity between virtual machines using different logical switches.
  • Distributed Routing allows for communication between virtual machines on different subnets, on the same host, without the need to leave the hypervisor level.
  • The DLR control VM sits in the control plane, while routing itself is performed by kernel modules in the data plane on each host; these are kept up to date by the NSX Controllers.
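Since routing is performed in the kernel, the DLR instance and its routes can also be checked directly on an ESXi host over SSH; a brief sketch, assuming the net-vdr utility present on NSX-prepared hosts and a placeholder instance name.

  # list the DLR instances present on this host
  net-vdr --instance -l
  # list the routes held by a specific instance (use the name from the previous output)
  net-vdr --route -l default+edge-2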

From the left hand navigation pane select NSX Edges and click the green plus symbol to create a new Edge. Select Logical (Distributed) Router. Enter a name; this will appear in the vSphere inventory. If required you can also enter a hostname (displayed in the CLI), a description, and a tenant. An Edge Appliance is deployed by default and is needed unless you are using only static routes. For dynamic routing and production environments Enable High Availability should also be selected; this deploys a standby virtual appliance. Click Next to continue.

dlr1

Configure the local admin password (minimum 12 characters plus the usual complexity requirements); it may also be worthwhile enabling SSH for future troubleshooting purposes. Note the logging level and change it if required, otherwise click Next.

dlr2

If you chose to deploy an Edge appliance click the green plus symbol, select the vSphere options for the Edge appliance and click Ok. Remember to add an additional appliance for HA using a different host and datastore, then click Next.

dlr3

If you are using HA connect the interface to a distributed port group by clicking Select next to the HA Interface Configuration connection box.

Under the Configure interfaces of this NSX Edge section click the green plus symbol to add an interface. Configure the interfaces as required; internal interfaces are for east-west (VM to VM) traffic, while uplinks are for north-south traffic and will typically connect to an external network through an Edge Services Gateway or a third-party router VM. Uplink interfaces will appear as vNICs on the DLR appliance. Add the interfaces associated with all the relevant networks and subnets you want to be routable, and when you’re ready click Next.

dlr4

In this installation I have created three internal interfaces connected to their own dedicated logical switches (WEB, APP, and DB), each configured with a different subnet. Furthermore an uplink interface connected to the TRANSIT logical switch will be created; this provides the link to the ESG for external routing.

web
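For illustration, the interface layout in this lab looks something like the plan below; the subnets are example values only and should be adjusted to your own addressing scheme.

  Interface   Type       Connected To               Example IP
  WEB         Internal   WEB logical switch         10.0.1.1/24
  APP         Internal   APP logical switch         10.0.2.1/24
  DB          Internal   DB logical switch          10.0.3.1/24
  TRANSIT     Uplink     TRANSIT logical switch     10.0.0.2/29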

Depending on your routing configuration you may need to add a default gateway (usually the ESG); in my case the ESG will publish the default route via our routing protocol. Click Next, then Finish. The DLR control VM will now be deployed; the details are listed on the NSX Edges page, note the type is Logical Router. If you used HA then two VMs will be deployed, and you’ll notice in the vSphere inventory the virtual machine names end in -0 and -1, with -0 being the active control VM by default until a failover occurs.

edges2

Once a Logical Router is deployed you can add or change the existing configuration by double clicking the Logical Router. Depending on your design and network configuration additional routing settings may be required; these can be found under Manage, Routing. You will most likely need to add more subnets later on; this can be done under the Manage tab, then Settings, Interfaces. Click the green plus symbol and you will get the same Add Logical Router Interface wizard we used above.

interfaces

Ensure any virtual machines connected to the logical switches have their default gateway set to the DLR interface IP address. Virtual machines using logical switches now have connectivity through the DLR, despite being attached to different logical switches, and are able to route out to the physical network through the ESG.

_______________

NSX Install Guide Part 1 – Management and Control Planes

NSX Install Guide Part 2 – Data Plane

NSX Install Guide Part 3 – Edge and DLR