Deploying an NSX Load Balancer with vRA

In this post we will walk through the process of deploying an NSX Load Balancer using vRealize Automation. We will also cover high availability and post-deployment scaling. In order to take advantage of the direct NSX API integration with vRA you will need to be running at least v7.3; you can read more about the enhancements made in vRA 7.3 in the release notes or the what’s new announcement. In the example we’ll work towards, multiple web servers are provisioned with an On-Demand Load Balancer, along with app servers and a database server. The On-Demand Load Balancer deploys an NSX edge for load balancing and adds the web servers as pool members. There are a number of available customisations which we’ll cover in the configuration process below.

Blueprint_2

Adding Endpoints

The following process assumes that you have a fully deployed vRA topology with all the components required to provision virtual machines: vCenter endpoint(s), reservations, compute resources, and a published catalog with entitlements. It would also be beneficial to have an understanding of using an NSX edge for load balancing, or to have deployed an edge manually, to see the corresponding deployment options.

The first step is to add the NSX Manager as a vRA endpoint. From the Infrastructure tab select Endpoints, then Endpoints again. Click New and select Networking and Security, NSX. Enter the details for the NSX Manager. Before saving the NSX endpoint we can create an association with the registered vCenter Server. From the Associations tab, click New. Select the vCenter Server from the dropdown; the platform type will auto-populate to vSphere and the description to vSphere to NSX Association. Click Test Connection and then Ok to save the configuration.

NSX_Endpoint

Blueprint Modifications

After NSX has been added as an endpoint navigate to Blueprints under the Design tab. From the design canvas of a new or existing blueprint select Network & Security, drag and drop the On-Demand Load Balancer onto the canvas.

Blueprint_1

Click the On-Demand Load Balancer that has been added to the canvas. When the load balancer is provisioned in NSX, the servers associated with the load balancer in the blueprint are automatically added as members in the pool. This is set in the Member field; in the example below the web servers in the blueprint are added as members of the load balancer.

Blueprint_3

The network for the member servers and the network for the VIP address are configured in the appropriate fields. Leave the IP address blank to automatically assign an IP address from the associated VIP network. Under Virtual servers click New; here you can configure the protocol settings for the load balancer, and the algorithm/persistence, health check, and connection settings by selecting Customize.

Customize_LB

Before saving the blueprint click the settings cog at the top of the page; this opens the blueprint properties. From the NSX Settings tab set the Transport zone to attach the load balancer to; this can be a local or universal transport zone. Next select the Edge and routed gateway reservation policy; this is the reservation policy (compute, storage) that will be used when provisioning the edge.

Blueprint_Properties_1

Click the Properties tab and select Custom Properties. There are a number of optional parameters we can add here; an example set of values is shown after the list below.

  • NSX.Edge.ApplianceSize sets the appliance size of the edge, accepted values are compact, large, quadlarge and xlarge.
  • NSX.Edge.HighAvailability deploys the edge appliance in HA mode when the value is true. Without this property only a single appliance is deployed.
  • NSX.Edge.HighAvailability.PortGroup references the port group to use for the heartbeat network of the edge appliances deployed in HA mode.
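
As a minimal illustration, the properties above might be entered on the Custom Properties tab as the following name/value pairs. The port group name is hypothetical and should be replaced with a port group from your own environment.

NSX.Edge.ApplianceSize = compact
NSX.Edge.HighAvailability = true
NSX.Edge.HighAvailability.PortGroup = dvPG-EdgeHA-Heartbeat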

Blueprint_Properties_2

Click Ok and Finish to save the blueprint. Make the blueprint available as a catalog item and request a test deployment. In vSphere you will see the edge and VMs being provisioned and, once complete, the virtual machines will be added as members in the load balancer pool. You can view the settings of the deployed edge in the vSphere web client under Networking & Security, NSX Edges; double click the edge and select Load Balancer.

NSX_Load_Balancer

Post Deployment

When the deployment is destroyed the edge appliances are removed along with the VMs as part of the cleanup process. If the deployment is scaled out then the new server is added as a member to the existing load balancer pool; likewise if the deployment is scaled in then the deleted server is also removed from the pool.

Scale_Out

The scale in and scale out actions are assigned as entitled actions from within the relevant entitlement. As well as having the permissions to perform the scale actions, the blueprint must also allow a higher maximum number of instances. In the example below 2 web servers will be deployed with an On-Demand Load Balancer; as the maximum number of instances is set to 10, the requester can scale out the number of web servers and pool members to a maximum of 10 servers.

Blueprint_Scale

Configuring VMware Cross-vCenter NSX

This post provides an overview of cross-vCenter NSX and walks through the configuration steps. Cross-vCenter NSX allows central management of network virtualization and security policies across multiple vCenter Server systems. Cross-vCenter NSX introduces universal objects, such as universal logical switches, universal logical routers, and universal distributed firewall rules. Universal objects are able to span multiple sites or vCenter Server instances, enhancing workload mobility by allowing cross vCenter and long distance vMotion for virtual machines, whilst keeping the same network settings and firewall rules. This improves DR capabilities, overcomes scale limits of vCenter Server, and gives administrators more control over resource pooling and the separation of environments.

Cross-vCenter NSX was introduced in NSX v6.2 and requires vSphere v6.0 or later. As normal, NSX Manager is deployed with vCenter Server in a 1:1 pairing. In a single site NSX deployment the NSX Manager is given the standalone role by default. When configuring cross-vCenter NSX one NSX Manager is assigned the primary role, and up to seven other NSX Managers are assigned the secondary role. NSX Managers configured for cross-vCenter NSX must all be running the same version. The primary NSX Manager is responsible for deploying the Universal Controller Cluster, forming the control plane across the NSX Managers. The Universal Controller Cluster runs in the site of the primary NSX Manager. Universal objects are created on the primary NSX Manager and automatically synchronized across the multi-site NSX environment.

Configuring Cross-vCenter NSX

The steps below assume you have already deployed and registered the NSX Managers, and have a good understanding of NSX. This post is intended as an add-on to the NSX Install Guide, providing an outline of the additional or different steps required for a cross-vCenter NSX install; further resources are listed at the bottom of the page. If you are using vCenter enhanced linked mode then multiple NSX Manager instances are displayed within the same interface, or single pane of glass, when managing the Network & Security section of the vSphere web client. Enhanced linked mode is not a requirement for cross-vCenter NSX however, and vCenter Server systems not in enhanced linked mode can still be configured for cross-vCenter NSX.

From the Networking & Security page of the vSphere web client select Installation, highlight the NSX Manager in the primary site, and from the Actions menu select Assign Primary Role.

NSX_Promote

The secondary NSX Manager(s) synchronize with the primary using the Universal Synchronization Service. These sites do not run any NSX Controllers, although the controllers can be redeployed there easily in the event of a primary site outage. Before assigning the secondary role you should ensure there are no existing NSX Controllers deployed in the associated vCenter. If you have already assigned a segment ID pool to the NSX Managers then ensure the segment ID pools do not overlap. Select the primary NSX Manager and from the Actions menu click Add Secondary NSX Manager. Enter the secondary NSX Manager information and admin password.

NSX2

Review the table of NSX Managers; the roles have now changed accordingly.

NSX_Roles

The universal controller cluster is formed by individually deploying the NSX Controllers from the primary NSX Manager; the method of deploying the controllers is the same as a standalone install (see NSX Install Guide Part 1 – Mgmt and Control Planes for further assistance). Once the controllers are deployed you will notice placeholder controllers listed against the secondary NSX Manager; these are not connected or deployed. In the event of a site failure the configuration is synchronized between NSX Managers, so you can simply re-deploy the controllers in the DR site. To see the failover process review this blog post. VMware recommend deploying 3 controllers on different hosts with anti-affinity rules.

NSX_Controllers

The next part of the install process is to follow the host preparation and VXLAN configuration steps as normal (see NSX Install Guide Part 2 – Data Plane for further assistance). Create the segment ID pools for each NSX Manager, making sure they do not overlap. On the primary NSX Manager you will also assign a universal segment ID pool.

In order for us to deploy universal logical switches we need to create a universal transport zone. A universal transport zone determines which hosts a universal logical switch can reach, spanning multiple vCenters. From the Logical Network Preparation tab open Transport Zones, ensure the primary NSX Manager is selected and click the plus symbol. Select Mark this object for Universal Synchronization, and enter the configuration for the universal transport zone. All universal objects must be created on the primary NSX Manager; change the NSX Manager to the secondary site and you will see the universal transport zone has synchronized there also.

NSX_TZ

Next we will create a universal logical switch for the transit network. Local objects such as logical switches, logical routers, and Edge Services Gateways can still be deployed from each NSX Manager, although by design they are only local to the vCenter linked to that specific NSX Manager, and cannot be deployed or edited elsewhere. From the left hand navigation pane in Networking & Security select Logical Switches, ensure the primary NSX Manager is selected and click the plus symbol. Enter a name for the transit network and select the universal transport zone we created earlier.

NSX_Universal_Transit
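
If you prefer to script the creation of universal logical switches rather than use the web client, the same operation can be driven through the NSX REST API on the primary NSX Manager. The sketch below uses Python and the requests library; it assumes the universal transport zone scope ID is universalvdnscope (the ID NSX assigns to the universal transport zone), and the manager FQDN, credentials, and switch name are placeholders, so verify the call against the NSX API guide for your version.

# Sketch: create a universal logical switch via the primary NSX Manager API.
# Assumes the universal transport zone scope ID is "universalvdnscope"; names
# and credentials below are placeholders for illustration only.
import requests

NSX_MANAGER = "nsxmanager.lab.local"   # primary NSX Manager (hypothetical FQDN)
AUTH = ("admin", "password")           # replace with real credentials

body = """<virtualWireCreateSpec>
  <name>Universal-Web-LS</name>
  <description>Universal logical switch created via API</description>
  <tenantId>virtual wire tenant</tenantId>
</virtualWireCreateSpec>"""

resp = requests.post(
    f"https://{NSX_MANAGER}/api/2.0/vdn/scopes/universalvdnscope/virtualwires",
    data=body,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,  # lab only; NSX Manager typically uses a self-signed certificate
)
resp.raise_for_status()
print("New logical switch ID:", resp.text)  # the API returns the new virtualwire ID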

At this stage you can also deploy another universal logical switch, connecting a couple of test VMs on a private subnet, and have them ping one another to confirm connectivity. Now that we have a transit network and test universal logical switches connected to our universal transport zone we can go ahead and create a universal DLR. In this particular environment we have already deployed an ESG in each site. For further assistance with deploying an ESG and DLR see NSX Install Guide Part 3 – Edge and DLR.

From the Networking & Security page click NSX Edges, ensure the primary NSX Manager is selected and click the plus symbol. The control VM for the DLR is deployed to the primary site; again the configuration is synchronized and this can be re-deployed to the DR site in the event of a primary site outage. Select Universal Logical Router and follow the wizard as normal; if local egress is required then check the appropriate box. Sites configured in a cross-vCenter NSX environment can use the same physical routers for egress traffic, or have the local egress feature enabled within a universal logical router. The local egress feature allows routes to be customized at host, cluster, or router level.

NSX_UDLR

From the NSX Edges page double click the new universal DLR, select Manage, Settings, Interfaces and click the add button. In order for traffic to route from the universal DLR to the ESG(s) we must add an uplink interface connecting them to the universal transit network. Change the logical router interface to Uplink, in the Connected To field select the transit network universal logical switch we created earlier. Configure the IP and MTU settings of the interface per your own environment.

NSX_UDLR_Interface

You can also add Internal interfaces here corresponding with universal logical switches for virtual machine subnets. Before these subnets can route out follow the same process to add an Internal interface to the ESG(s) connecting them to the same transit network.

A virtual machine connected to the test universal logical switch can now vMotion between sites keeping the same IP addressing, providing L2 over L3 capability. As well as remaining on the same logical network, a virtual machine can also be migrated across sites without any additional firewall rules; this is achieved with the use of universal firewall rules. Universal firewall rules require a dedicated section to be created under the Firewall section of Networking & Security, with Mark this section for Universal Synchronization selected. For assistance with creating universal firewall rules see here.

NSX_Universal_Firewall

Additional Resources

To plan a cross-vCenter NSX installation review the VMware Cross-vCenter NSX Design Guide, Cross-vCenter NSX Topologies Guide, and the VMware Cross-vCenter Installation Guide.

VM Security Tags with NSX Firewall

This post will walk through virtual machine security tags; how we can create tags, automatically add virtual machines with tags to a specific security group, and build associated NSX firewall rules. As a bonus we’ll also apply a security tag to a vRA blueprint, allowing vRA provisioned machines to automatically receive a security tag and apply any corresponding NSX firewall rules.

Security tags and groups allow us to identify virtual machines with a common value, such as business department, support group, workloads, and so on. By applying security tags to virtual machines, and/or adding virtual machines to security groups, we can control security at a custom defined level, independent of the underlying infrastructure. Virtual machines can have multiple tags, allowing administrators to identify different values upon which to act. Many third party anti-virus solutions with NSX integration use security tags to protect and quarantine virtual machines depending on their health status.

The steps below assume that NSX is installed and working; for more information on installing the required components see the following series of posts.

NSX Install Guide Part 1 – Mgmt and Control Planes

NSX Install Guide Part 2 – Data Plane

NSX Install Guide Part 3 – Edge and DLR

Creating Security Tags and Groups

From the vSphere web client browse to Networking & Security, click NSX Managers, and select the appropriate NSX Manager. Open the Manage tab, then Security Tags. Existing security tags are listed; some third party plugins such as AV may also add and use security tags. To create a new security tag click the New Security Tag icon. Enter a name for the security tag, and a description if required, then click Ok.

Tags1

Security tags can be applied to virtual machines manually in the page referenced above, or through an automatic provisioning solution such as vRealize Automation.
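
Tags can also be created and attached programmatically. The sketch below uses Python and the requests library against the NSX 6.x security tag API calls; the NSX Manager FQDN, credentials, tag name, and VM ID are placeholders, and the endpoints should be checked against the API guide for your NSX version.

# Sketch: create a security tag and attach it to a VM via the NSX API.
# Endpoint paths are the NSX 6.x security tag calls; values are placeholders.
import requests

NSX_MANAGER = "nsxmanager.lab.local"   # hypothetical FQDN
AUTH = ("admin", "password")           # replace with real credentials

tag_spec = """<securityTag>
  <objectTypeName>SecurityTag</objectTypeName>
  <name>ST-Web-Servers</name>
  <description>Tag for web tier virtual machines</description>
</securityTag>"""

# Create the tag; the response body contains the new tag ID (e.g. securitytag-10)
resp = requests.post(
    f"https://{NSX_MANAGER}/api/2.0/services/securitytags/tag",
    data=tag_spec,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,  # lab only; self-signed certificate
)
resp.raise_for_status()
tag_id = resp.text

# Attach the tag to a virtual machine using its managed object ID (placeholder)
vm_id = "vm-123"
requests.put(
    f"https://{NSX_MANAGER}/api/2.0/services/securitytags/tag/{tag_id}/vm/{vm_id}",
    auth=AUTH,
    verify=False,
).raise_for_status()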

Next, select the Grouping Objects page. Under Security Groups, click the Add New Security Group icon. Enter a name and description if required.

Group1

Go to the third option: Select objects to include. Change the Object Type in the drop-down to Security Tag. Select the new security tag we created earlier.

Group2

Review the details on the summary page and click Finish.

Group3

The security group has now been created, and any virtual machines that use the security tag we included are automatically added to the group. You can create multiple security tags and groups for different departments, applications, or however you want to segregate these out.

Group4

Creating NSX Firewall Rules

Our new security group / tag setup can be used to configure NSX firewall rules. Still under Networking & Security in the vSphere web client, select Firewall.

If you have already configured NSX firewall rules you’ll be familiar with this page, and likely have a number of sections and rules already configured. You can edit an existing rule or create a new one in the relevant section. To create a new section use the Add Section (folder) icon. Click the green plus icon to add a new rule, or the edit icon to edit an existing rule.

Firewall1

When configuring the rule you can set the source, destination, or both to use a security group. Change the Object Type drop-down to Security Group, and select the new security group we created earlier.

Firewall2

Remember to click Publish Changes when you’re done. For assistance with creating NSX firewall rules see this section of the NSX Documentation Center.

Adding Tags to vRA Blueprints

The use of security tags with blueprints requires NSX to be integrated with vRA. If you haven’t already done so you can follow the steps outlined in the VMware post Part 1 of Integrating NSX with vRealize Automation. You’ll also need an understanding of how to create blueprints, again there is more information on this in the VMware post Part 2 of Integrating NSX with vRealize Automation if you need it.

To add a security tag to a virtual machine provisioned by vRA we must add it to the appropriate blueprint. After adding security tags and/or groups to NSX Manager we need to run a data collection so that vRA is showing up to date information. From the vRA portal browse to the Infrastructure tab, select Compute Resources, Compute Resources. Move the mouse cursor over the compute resource and click Data Collection. Scroll down to Network and Security Inventory and click Request now. The sync will take a couple of minutes; you can leave the page during this time.

vra1

Next open the Design tab and select Blueprints. We can add a security tag to an existing blueprint, or create a new one. In the design canvas click Network & Security from the list of categories. Locate Existing Security Tag and drag this onto the canvas. Alternatively you can use a security group at this stage if you’d prefer.

vra2

Select the security tag from the list of existing tags. From the design canvas select the virtual machine and open the Security tab. Tick the referenced security tag to associate it with the virtual machine. Click Save and Finish to save the changes to the blueprint. Any virtual machines provisioned from this blueprint are now tagged with the security tag (or group) selected.

VMware vRealize Network Insight Overview

This post will walk through the installation and configuration of VMware vRealize Network Insight (vRNI). The latest version is currently v3.5.0; you can see what’s new in v3.5.0 in this VMware blog post. Network Insight integrates with NSX to deliver intelligent operations for software defined networking. The key features and use cases of vRNI include 360 degree visibility and end-to-end troubleshooting across converged infrastructure and physical and virtual networks; performance optimization and topology mapping; physical switch vendor integration; advanced monitoring to ensure health and availability of NSX; rich traffic analytics; change tracking; planning and monitoring of micro-segmentation; and best practice compliance checking. The VMware graphic below shows where vRNI sits in the Software Defined Data Center.

vrni1

Resources

Requirements

  • At least v5.5 of vCenter Server is required; Network Insight versions 3.3.0 and above support vCenter Server 6.5 and 6.5 U1.
    • HTTPS connectivity to vCenter is required to fetch virtual environment information.
  • Distributed switches must be vDS v5.5 or above. The configuration of NetFlow is a requirement but this can be done automatically when adding vCenter as a data source.
  • The screenshot below shows the compatible versions of NSX with Network Insight v3.3.0 through to v3.5.0. For the latest version of NSX (v6.3.3) Network Insight v3.5.0 is needed.
    • HTTPS connectivity to NSX Manager, SSH connectivity to NSX Controller(s), and SSH or Central CLI connectivity to NSX Edge(s) is also required.

NSX_NetworkInsight

Installation

The installation consists of deploying the vRealize Network Insight Platform OVA, preconfigured with 8 vCPU, 32 GB RAM, and 750 GB HDD, plus the vRealize Network Insight Proxy OVA, preconfigured with 4 vCPU, 10 GB RAM, and 150 GB HDD. Disks can be thin provisioned. A memory and CPU reservation of 50% of the specifications listed is recommended for production environments. The deployment can also be automated using PowerCLI, covered in this blog post by William Lam.

  • Using the download links referenced above, download the vRealize Network Insight – Platform OVA file and the vRealize Network Insight – Proxy OVA file.
  • Manually add DNS entries for the host names and planned IP addresses of the appliances.
  • In the vSphere web client right click the datacenter, cluster, or host to deploy the appliance to, and select Deploy OVF Template. Browse to the downloaded platform OVA file.
  • Follow the standard OVF deployment wizard, selecting the compute, storage, and network configuration to use. Ensure DNS and time settings are configured.
  • Before clicking Finish select Power on after deployment.

When the appliance has deployed navigate to the IP address or FQDN in a web browser. Enter your license key and click Validate, then Activate. On the setup proxy virtual appliance page click Generate to generate a shared secret. Copy the shared secret, as you will need this for the proxy deployment, and leave the web browser open.

  • In the vSphere web client right click the datacenter, cluster, or host to deploy the appliance to, and select Deploy OVF Template. Browse to the downloaded proxy OVA file.
  • Follow the standard OVF deployment wizard, selecting the compute, storage, and network configuration to use. Ensure DNS and time settings are configured.
  • During the template customization, in the Shared Secret for vRealize Network Insight Proxy field, enter the shared secret generated earlier.
  • Before clicking Finish select Power on after deployment.

Go back to the web browser; after the proxy appliance has powered on it will automatically detect the platform appliance. When this happens the web page will show a proxy detected message; click Finish and you are redirected to the login page. If the deployed proxy is not detected within 5 minutes follow the validation steps outlined in the FAQ document referenced above.

login

Configuration

Log into Network Insight using the default username admin@local and default password password. Select the settings icon in the far right hand corner and click Settings. The Install and Support tab lists the health of the appliances, additional nodes can also be added here.

Settings1

The password of the logged in user, in this case admin@local, can be changed under My Profile.

Click Data Sources and Add new source. This is where we will add the data sources for Network Insight to monitor. First we’ll add vCenter, so select VMware vCenter from the drop-down Source Type list.

Settings2

Enter the vCenter IP address or FQDN and credentials with distributed switch and dvPort group modify permissions, then click Validate. Enter a friendly name and click Submit to add the data source. In the vSphere client tasks pane you will see NetFlow being configured on the distributed switches. Repeat the process to add the NSX Manager, selecting VMware NSX Manager from the drop-down Source Type list and entering the NSX Manager credentials. You can add multiple vCenter Servers and NSX Managers.

If applicable add any converged infrastructure and physical networking hardware; accounts with read access are required. Once a data source is added information will start trickling in within a few minutes; however, the first full data collection can take up to 2 hours. You should also wait at least 24 hours before generating reports.

Examples

When logged in to the web UI, the home page displays a dashboard of problems and events you should be aware of, as well as quick links to plan, operate, and troubleshoot the environment. Return to the home page at any time by clicking the VM icon in the top left hand corner.

Home

Move the mouse cursor over the left hand navigation pane to expand the menu. Navigate through the different options to view path topologies, port and network metrics, and events.

VMPathsHostVLANs

Nearly all components can be selected for deep dive views or path mappings. We can analyse services and flows and troubleshoot problems from within the same interface.

NSXNSXGroupsPlan

Events and Entities allow us to drill down further; when viewing an event, problem, or change, click the alarm bell symbol to create a notification for that item. You can also use the search bar, which auto-prompts as you type, visible in the screenshot below. Save a search term using the pin icon; saved searches can be accessed in the left hand navigation window at any time. For further use cases consult the user guides referenced above.

Search

McAfee MOVE 4.5.0 Upgrade Guide with NSX

This post walks through the upgrade of McAfee MOVE to version 4.5.0 with NSX Manager, and can be used when upgrading McAfee MOVE Agentless versions 3.5.x, 3.6.x, and 4.0.0. The upgrade of versions 3.5.x or 3.6.x involves migrating all custom settings, policies and tasks with the McAfee MOVE Migration Assistant (these are retained by default when upgrading from version 4.0.0).

Pre-Requisites

The benefits and architecture of offloading AV to a dedicated Service Virtual Machine (SVM) with McAfee MOVE and NSX are covered in the McAfee MOVE with NSX Install Guide. The scope of this guide is to upgrade an existing McAfee MOVE installation, and as such it is assumed that NSX Manager, IP Pools, service deployments (i.e. Guest Introspection), policies, and ePO integration are all in place. Furthermore it is assumed that network connectivity between components, time sync, DNS, vSphere access, etc. are also configured. For a full list of pre-requisites see the above install guide. The requirements below are specific to the McAfee MOVE 4.5 upgrade:

2

Update Extensions

The first step is to update the extensions on the ePO server. When upgrading versions 3.5.x or 3.6.x the existing extensions are left in place to facilitate the migration of data, which we’ll cover later. When upgrading version 4.0.0 the extensions are replaced with the new versions; all settings and policies remain.

I am going to use Software Manager to download, install, and check in the software directly in the ePO web UI. If you prefer you can manually download the extensions on your own machine and then install them through the Extensions page (more info on this below). To use Software Manager click the drop down Menu option in the top left hand corner of the page and select Software Manager. Use the search function to find McAfee MOVE AntiVirus 4.5. Browse through the components, you will notice the Migration Assistant is included, and click Check In All.

migration1

Accept the license agreement and click Ok. The extensions are downloaded and installed.

migration2

An alternative way of installing or updating extensions is to browse to McAfee Downloads, enter your grant number when prompted, and then select McAfee MOVE AV for Virtual Servers, McAfee MOVE AntiVirus 4.5. Download the required files and then browse to the web interface of the ePO server (https://EPO:8443/ where EPO is the name of your EPO server). Log in as an administrator and click the drop down Menu option in the top left hand corner of the page. Locate Software, and select Extensions. Click Install Extension and install the downloaded zip files in the following order:

  • Cloud Workload Discovery: Cloud_Workload_Discovery_Hybrid_4.5.0.zip (note that the CommonUI bundle, consisting of the mfs-commonui-core-ui, commonui-core-common, and commonui-core-rest extensions, is a pre-req for Cloud Workload Discovery 4.5 on ePO 5.1.3 and 5.3.1).
  • McAfee MOVE AntiVirus extension: MOVE-AV_Ext_4.5.0_Licensed.zip.
  • Product Help extension: MOVE-AV_HELP_EXT_4.5.0.zip.

Whichever way you install the extensions, ensure you download MOVE-AL-AL_SVM_OVF_4.5.0.148 (or the most recent version). This zip file contains the Service Virtual Machine (SVM), which we’ll need to add to the SVM repository later.

Once the extensions are installed the new version of MOVE AntiVirus will be visible in the Data Center Security group, under Menu > Software > Extensions.

migration6

For those upgrading versions 3.5.x or 3.6.x the old extensions remain in place in the MOVE AV group.

migration7

You will also notice an additional option in the Automation menu; MOVE AV Agentless remains as the legacy option for versions 3.5.x or 3.6.x, and MOVE AntiVirus Deployment is created for version 4.5.0. The legacy MOVE AV Agentless option is deleted upon removal of the old extensions at the end of the process. Again, this doesn’t apply to 4.0.0 because in that case the extensions are upgraded, rather than running side by side.

versions

Migration Assistant

The Migration Assistant can be used when upgrading from MOVE versions 3.5.x or 3.6.x; if you are upgrading from 4.0.0 then this step is not necessary. Use one of the methods outlined above to install the Migration Assistant extension. If you used Software Manager to install the full McAfee MOVE AntiVirus 4.5 package then the Migration Assistant should already be installed. If you need to manually download and install the extension, then when using McAfee Downloads change the Software Downloads tab to Extensions to view the Migration extension, as shown below.

migration

When the install is complete, in the ePO web UI click the drop down Menu option and, under Software, click Extensions. The MOVE Migration Assistant 4.5 is listed under Data Center Security.

migration4

We can now go ahead and run the Migration Assistant; from the drop down Menu, under Policy, select MOVE Migration Assistant.

migration5

Select Automatic migration to migrate all settings for supported products (note that unassigned policies are not migrated) and click Next. To select only certain policies, or to edit policies, you can use the Manual migration option; for more information see page 10 of the McAfee MOVE Migration Guide.

migration8

Review the items to be migrated; you can rename and edit the policy notes if required by clicking Rename and Edit Notes. When you’re ready to start migrating click Save.

migration9

Once the migration job has finished go back into the MOVE Migration Assistant, next to Migrate Agentless Deployment Configuration Details (Agentless Only) select Run, and click Next. Click Ok to confirm migrating configuration details.

migration10

When the config migration has completed click the drop down Menu option and under Automation select MOVE AntiVirus Deployment. You will see the SVM configuration and NSX registrations have all been migrated across.

Note that if you are upgrading from 3.5.x then the NSX certificate and credential data is migrated across; however, you still need to enter the SVM configuration under Menu, Automation, MOVE AntiVirus Deployment, Configuration, General.

Upgrade SVM Registration

Now we need to add version 4.5.0 of the Service Virtual Machine (SVM) to the SVM repository, and update the registered SVM version with NSX Manager. In the ePO web UI click Menu, under Automation select MOVE AntiVirus Deployment. From the Configuration tab select SVM Repository, click Actions, Add SVM. Browse to the zip file containing the SVM we downloaded earlier and click Ok.

svm1

The new version of the SVM will now be listed in the repository.

svm2

Next go to Menu, Automation, MOVE AntiVirus Deployment. In the Configuration tab the NSX Manager details and credentials should still be in place. Click the Service tab. The Registered SVM Version will still show the old version; from the Actions column for the NSX Manager click Upgrade. Select the new SVM version and click Ok. The latest version of the MOVE SVM is now registered with the selected NSX Manager.

Upgrade NSX Components

The final stage is to update the NSX security policy and service deployments. Log into the vSphere web client and click Networking & Security from the home page. Select Service Composer and then the Security Policies tab. As we’re upgrading an existing McAfee MOVE solution you should already have an AV related policy or policies configured; we need to reconfigure those to point at the new MOVE policies that were migrated across in ePO. Select the security policy to update and click the Edit icon.

editpolicy

Click Guest Introspection Services and select the existing guest introspection service, click the edit icon and make a note of the existing settings. Cancel out of the edit window and click the red cross to delete the guest introspection service. Click the green plus symbol to add a new service.

policy1

Enter a name for the service and ensure Apply is selected. Use the McAfee MOVE AV service and select the ePO policy from the Service Profile drop down. The state should be set to Enabled; select Yes to enforce the policy. You can use the same settings as the previous service if you like, the only difference being the new service profile (ePO policy). Click Ok.

policy2

Select the Security Groups tab. Confirm that existing security groups are in place with the NSX security policy associated with the McAfee ePO policy applied. If needed you can select a group and click the apply policy icon to apply the security policy edited above to a security group.

policy3

Finally, we can update the Service Virtual Machines deployed on the ESXi hosts. From the left hand navigation pane select Installation and the Service Deployments tab. Existing installations will be listed here, with an Upgrade Available status. Service deployments are installed at vSphere cluster level, select the vSphere cluster to upgrade and click the Upgrade icon.

moveupgrade

New versions of the SVM are pushed out to each ESXi host in the selected cluster, replacing old versions using the same configuration details (datastore, port group, IP address range). Once complete the new version number is listed, the installation status is succeeded, and the service status is up.

movesuccess

If you upgraded version 3.5.x or 3.6.x you can remove the legacy MOVE extensions once you have updated the SVM registration and service deployments on each vCenter. In the ePO web UI open the Extensions page, locate the old version of the McAfee MOVE extension and click Remove.

If any of the components referenced above are not in place, or you need to deploy McAfee MOVE AV to a new vSphere cluster, see the McAfee MOVE with NSX Install Guide post. The only other thing worth noting is that I had a vCenter where the MOVE service registration was failing; I had to remove the MOVE service deployments and service definition from NSX Manager, remove the vCenter from cloud accounts in ePO, and then add it all back in as a new install, deploying the SVM as a fresh 4.5 install rather than an upgrade.

NSX Install Guide Part 3 – Edge and DLR

In the final installment of this 3 part guide we will configure the Edge Services Gateway (ESG) and Distributed Logical Router (DLR). The NSX installation and relevant logical switches must be in place before continuing, for further information see NSX Install Guide Part 1 – Mgmt and Control Planes and NSX Install Guide Part 2 – Data Plane. It is important to note that depending on your network configuration and NSX design, additional steps may be required to integrate with your chosen routing protocol.

First we will create an Edge Services Gateway, providing access to the physical network (north-south traffic), followed by a Distributed Logical Router, which will provide connectivity for virtual machines using different logical switches (east-west traffic). The DLR will connect to the ESG to provide external routing using a transit logical switch. The image below shows the topology of the described components (from the VMware Documentation Centre, which also provides more information on advanced features and routing configurations).

topology

Edge Services Gateway

  • The Edge Services Gateway allows virtual machines to route to external devices, in other words to access the physical network.
  • The ESG is deployed as a virtual appliance in 4 different sizes:
    • Compact: 512 MB RAM, 1 vCPU, 500 MB disk.
    • Large: 1 GB RAM, 2 vCPU, 500 MB disk + 512 MB disk
    • Quad Large: 1 GB RAM, 4 vCPU, 500 MB disk + 512 MB disk
    • X-Large: 8 GB RAM, 6 vCPU, 500 MB disk + 2 GB disk
  • Each ESG can have a total of 10 interfaces (internal and uplinks).

From the left hand navigation pane select NSX Edges, click the green plus symbol to create a new Edge. Select Edge Services Gateway. Assign a name that will be displayed in the vSphere inventory and click Next. The hostname will be displayed in the CLI but is an optional field (the Edge-ID will be displayed if no hostname is specified). Should you require HA, with a secondary appliance deployed, tick Enable High Availability.

esg1

Configure the admin password (needs to be 12 characters plus the usual requirements) and logging level. You may want to enable SSH for troubleshooting purposes; this can also be enabled at a later date if required. Click Next.

esg2

Select the datacentre and appliance size as per the recommendation from VMware below:

The Large NSX Edge has more CPU, memory, and disk space than the Compact NSX Edge, and supports a larger number of concurrent SSL VPN-Plus users. The X-Large NSX Edge is suited for environments that have a load balancer with millions of concurrent sessions. The Quad Large NSX Edge is recommended for high throughput and requires a high connection rate.

Large should be fine for most environments; Compact shouldn’t be used for production. Click the green plus symbol to add an Edge appliance.

esg3

Configure the vSphere placement parameters and click Ok. If you are using HA then add a second appliance, using a different datastore. DRS rules will automatically be added to keep the 2 appliances apart. If you do not deploy any appliances then the ESG will be created in an offline mode, until appliances are deployed. When you have finished adding the Edge appliances click Next.

esg4

We must now add the Edge interfaces, click the green plus symbol.

esg5

Configure the NSX Edge interfaces:

  • Add the physically connected distributed port group (click Select) to an Uplink interface, and enter the network details of your physical router. This provides a route to the physical network for north-south traffic.
  • If you selected HA then at least one internal interface must be configured to use a logical switch for heartbeat traffic, change the type to Internal and leave the IP address table blank.
  • If you will be adding a Distributed Logical Router then add an internal interface to the TRANSIT logical switch where the DLR will also be attached. The subnets to be routed externally are added to the DLR later.
  • Lab only: if you are not using a Distributed Logical Router, i.e. in a very small lab environment, then add the subnets for external connectivity and their associated logical switches as internal interfaces (east-west traffic).

esg6

When the required interfaces have been added click Next. Depending on your routing configuration you may need to add a default gateway; click Next.

esg7

Tick Configure Firewall default policy and set the default traffic policy to Accept, enabling logging if required. The firewall policy can be changed or configured later if required, however if you do not configure the firewall policy, the default policy is set to deny all traffic.

If you have deployed HA each appliance will be assigned a link local IP address on the heartbeat network we created earlier. You can manually override these settings if required in the Configure HA parameters section, otherwise leave as default and click Next.

esg8

On the summary page click Finish to finalize the installation. The ESG will now be deployed and the details are listed on the NSX Edges page; note the type is NSX Edge. If you used HA then two ESG appliances will be deployed; you’ll notice in the vSphere inventory the virtual machine names have -0 and -1 at the end, with -0 being the active ESG appliance by default until a failover occurs.

edges1

Once an Edge is deployed you can add or change the existing configuration, such as interfaces, by double clicking the Edge. Depending on your design and network configuration additional routing settings may be required, these can be found under Manage, Routing.

Routing.PNG

Distributed Logical Router

  • A Distributed Logical Router allows connectivity between virtual machines using different logical switches.
  • Distributed Routing allows for communication between virtual machines on different subnets, on the same host, without the need to leave the hypervisor level.
  • The DLR control VM sits in the control plane; the data plane kernel modules on each host allow routing to be done within the hypervisor itself, and these are kept up to date by the NSX Controllers.

From the left hand navigation pane select NSX Edges, click the green plus symbol to create a new Edge. Select Logical (Distributed) Router. Enter a name; this will appear in the vSphere inventory. If required you can enter a hostname, which will appear in the CLI, as well as a description and tenant. An Edge Appliance is deployed by default; this is needed unless you are using static routes. For dynamic routing and production environments Enable High Availability should also be selected, which deploys a standby virtual appliance. Click Next to continue.

dlr1

Configure the local admin password (minimum 12 characters plus the usual requirements); it may also be worthwhile enabling SSH for future troubleshooting purposes. Note the logging level and change if required, otherwise click Next.

dlr2

If you chose to deploy an Edge appliance click the green plus symbol, select the vSphere options for the Edge appliance and click Ok. Remember to add an additional appliance for HA using a different host and datastore, then click Next.

dlr3

If you are using HA connect the interface to a distributed port group by clicking Select next to the HA Interface Configuration connection box.

Under the Configure interfaces of this NSX Edge section click the green plus symbol to add an interface. Configure the interfaces as required; internal interfaces are for east-west traffic, or VM to VM. Uplinks are for north-south traffic, and will typically connect to an external network through an Edge Services Gateway or third-party router VM. Uplink interfaces added will appear as vNICs on the DLR appliance. Add the interfaces associated with all the relevant networks and subnets you want to be routable, and when you’re ready click Next.

dlr4

In this installation I have created three internal interfaces connected to their own dedicated logical switches (WEB, APP, and DB), configured with different subnets. Furthermore, an uplink interface connected to the TRANSIT logical switch will be created; this will provide the link to the ESG for external routing.

web

Depending on your routing configuration you may need to add a default gateway (usually the ESG); for me the ESG will publish the default route via our routing protocol. Click Next, then Finish. The DLR control VM will now be deployed and the details are listed on the NSX Edges page; note the type is Logical Router. If you used HA then two VMs will be deployed; you’ll notice in the vSphere inventory the virtual machine names have -0 and -1 at the end, with -0 being the active control VM by default until a failover occurs.

edges2

Once a Logical Router is deployed you can add or change the existing configuration by double clicking the Logical Router. Depending on your design and network configuration additional routing settings may be required; these can be found under Manage, Routing. You will most likely need to add more subnets later on; this can be done under the Manage tab, and Settings, Interfaces. Click the green plus symbol and you will get the same Add Logical Router Interface wizard as we have used above.

interfaces

Ensure any virtual machines connected to the logical switches have their default gateway set to the DLR interface IP address. Virtual machines using logical switches now have connectivity through the DLR, despite being attached to different logical switches, and are able to route out to the physical network through the ESG.

_______________

NSX Install Guide Part 1 – Management and Control Planes

NSX Install Guide Part 2 – Data Plane

NSX Install Guide Part 3 – Edge and DLR

NSX with Log Insight Integration

This post covers the steps required to configure NSX with Log Insight integration. The versions used are NSX 6.2.5 and Log Insight 4.0; for assistance with getting these products up and running see the NSX Install Guide and vRealize Log Insight Install Guide posts. Log Insight is available to NSX customers entitled to use v6.2.4 and above, at no extra cost. The Log Insight for NSX license allows for the collection of vSphere and NSX log data.

The first step is to install the NSX Content pack on the Log Insight instance, then we’ll configure NSX Manager, the NSX Controllers, and any NSX Edges to use Log Insight as a syslog server.

NSX Content Pack

Browse to the IP address or FQDN of the Log Insight appliance and log in as admin.

loginsight

Click the menu option in the top right hand corner of the page.

admin

If you need to configure vSphere integration click Administration and vSphere under the Integration menu on the left hand navigation pane. Enter the connection details of the vCenter Server. To configure only specific hosts to send logs to Log Insight click Advanced options. Test the connection and when you’re ready click Save.

vsphereint

To install the NSX Content Pack select Content Packs from the menu option in the right hand corner of the page. Under Marketplace locate the VMware NSX-vSphere Content Pack.

contentpacks

Select the content pack, accept the license agreement and click Install.

contentpacksinstall

The next message informs you to setup vSphere Integration, which we covered above, and log forwarding for the NSX Manager, Controllers, and Edge components, which we’ll cover next. Click Ok.

contentpacksinstall2

The NSX Content Pack gives us additional dashboards accessible by clicking the drop down menu next to General on the Dashboards page. We won’t see any data there yet, as we need to configure the NSX components to use syslog.

nsxcontent

NSX Manager

Browse to the IP address or FQDN of the NSX Manager and log in as admin.

nsxmanager

Click Manage Appliance Settings.

log1

From the General tab locate Syslog Server and click Edit.

log2

Enter the syslog server name or IP address and use port 514 protocol UDP. Click Ok to save the settings.

log3

NSX Controllers

Configuration of a syslog server for NSX Controllers is done through an API call. For the initial configuration a REST client is required. In this example we’ll use Postman for Google Chrome. Download the Postman app from the Chrome Web Store. When you first open the app click skip to use without creating an account. On the Authorisation tab set the authorisation type to Basic Auth. Enter the admin username and password of the NSX Manager.

log7

Click the Headers tab, in the key field type Content-Type, in the value field type application/xml. (The Authorization key in the screenshot automatically generates after configuring authorisation).

headers

To view the configured syslog server of an NSX Controller enter the URL https://NSX/api/2.0/vdn/controller/controller-1/syslog, replacing NSX with the NSX Manager name; you can also change the controller ID if required (i.e. controller-2, controller-3, and so on). Ensure Get is selected and click Send; the output lists the syslog configuration and is displayed in the Response field.

log7

To configure the syslog server change Get to Post in the drop down menu. Then click the Body tab and select raw. Enter the following text, replacing LOG with the correct syslog server.

<controllerSyslogServer>
<syslogServer>LOG</syslogServer>
<port>514</port>
<protocol>UDP</protocol>
<level>INFO</level>
</controllerSyslogServer>

Click Send. The new syslog server will be set. Change the controller-1 section of the URL to controller-2 and click Send to configure the same syslog server for controller-2, and again for controller-3. It is important that each NSX Controller is configured with the IP address of the Log Insight server. You can change Post to Get to view the syslog server configuration again once complete.
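
If you would rather script this than use a REST client, the same calls can be made with a few lines of Python using the requests library. This is a minimal sketch based on the endpoint and XML body shown above; the NSX Manager and syslog server names and the credentials are placeholders to substitute with your own values.

# Sketch: set the syslog server on each NSX Controller using the API shown above.
# Replace the NSX Manager FQDN, credentials, and syslog server with your own values.
import requests

NSX_MANAGER = "nsxmanager.lab.local"    # hypothetical FQDN
AUTH = ("admin", "password")            # NSX Manager admin credentials
SYSLOG_SERVER = "loginsight.lab.local"  # Log Insight FQDN or IP

body = f"""<controllerSyslogServer>
<syslogServer>{SYSLOG_SERVER}</syslogServer>
<port>514</port>
<protocol>UDP</protocol>
<level>INFO</level>
</controllerSyslogServer>"""

for controller in ("controller-1", "controller-2", "controller-3"):
    url = f"https://{NSX_MANAGER}/api/2.0/vdn/controller/{controller}/syslog"
    resp = requests.post(
        url,
        data=body,
        headers={"Content-Type": "application/xml"},
        auth=AUTH,
        verify=False,  # lab only; NSX Manager typically uses a self-signed certificate
    )
    resp.raise_for_status()
    # A GET against the same URL returns the configured syslog settings
    print(controller, requests.get(url, auth=AUTH, verify=False).text)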

NSX Edges

NSX Edge Service Gateways and Distributed Logical Routers can be configured for syslog in the vSphere web client. From the home page click Networking & Security, select NSX Edges.

log4

Double click the ESG or DLR and open the Manage tab, Settings, Configuration. In the Details pane next to Syslog servers click Change.

log5

Enter the syslog server name or IP, ensure the protocol is UDP and click Ok.

log6

The syslog configuration is now complete, after a few minutes you should see events start to appear in the Log Insight dashboards.

loginsightnsx