VMware vRealize Network Insight Overview

This post will walk through the installation and configuration of VMware vRealize Network Insight (vRNI). The latest version is currently v3.5.0; you can see what’s new in v3.5.0 in this VMware blog post. Network Insight integrates with NSX to deliver intelligent operations for software-defined networking. The key features and use cases of vRNI include:

  • 360-degree visibility and end-to-end troubleshooting across converged infrastructure and physical and virtual networks.
  • Performance optimization and topology mapping.
  • Physical switch vendor integration.
  • Advanced monitoring to ensure the health and availability of NSX.
  • Rich traffic analytics and change tracking.
  • Planning and monitoring of micro-segmentation.
  • Best practice compliance checking.

The VMware graphic below shows where vRNI sits in the Software Defined Data Center.

vrni1

Resources

Requirements

  • vCenter Server v5.5 or later is required; Network Insight versions 3.3.0 and above support vCenter Server 6.5 and 6.5 U1.
    • HTTPS connectivity to vCenter is required to fetch virtual environment information.
  • Distributed switches must be vDS v5.5 or above. NetFlow must be configured, but this can be done automatically when adding vCenter as a data source.
  • The screenshot below shows the compatible versions of NSX with Network Insight v3.3.0 through to v3.5.0. For the latest version of NSX (v6.3.3), Network Insight v3.5.0 is needed.
    • HTTPS connectivity to NSX Manager, SSH connectivity to NSX Controller(s), and SSH or Central CLI connectivity to NSX Edge(s) is also required.

NSX_NetworkInsight

Installation

The installation consists of deploying two appliances: the vRealize Network Insight Platform OVA, preconfigured with 8 vCPU, 32 GB RAM, and 750 GB HDD; and the vRealize Network Insight Proxy OVA, preconfigured with 4 vCPU, 10 GB RAM, and 150 GB HDD. Disks can be thin provisioned. A memory and CPU reservation at 50% of the specifications listed is recommended for production environments. The deployment can also be automated using PowerCLI, covered in this blog post by William Lam; a rough sketch follows.
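
As an illustration only, a minimal PowerCLI deployment of the platform appliance might look like the sketch below. All server, datastore, and network names are placeholders, and the OVF property names vary between OVA builds, so inspect the configuration object before setting anything.

```powershell
# Minimal sketch: deploy the vRNI Platform OVA with PowerCLI.
# Placeholder names throughout; adjust for your environment.
Connect-VIServer -Server 'vcenter.lab.local'

$ova = 'C:\Downloads\vRNI-Platform.ova'
$ovfConfig = Get-OvfConfiguration -Ovf $ova
$ovfConfig.ToHashTable()   # review the OVF properties your build exposes

Import-VApp -Source $ova `
    -OvfConfiguration $ovfConfig `
    -Name 'vrni-platform' `
    -VMHost (Get-VMHost 'esxi01.lab.local') `
    -Datastore (Get-Datastore 'Datastore01') `
    -DiskStorageFormat Thin

Get-VM -Name 'vrni-platform' | Start-VM
```

The same pattern applies to the proxy OVA; William Lam's post linked above covers a fully parameterised version.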

  • Using the download links referenced above, download the vRealize Network Insight – Platform OVA file and the vRealize Network Insight – Proxy OVA file.
  • Manually add DNS entries for the host names and planned IP addresses of the appliances (a scripted example follows this list).
  • In the vSphere web client right click the datacenter, cluster, or host to deploy the appliance to, and select Deploy OVF Template. Browse to the downloaded platform OVA file.
  • Follow the standard OVF deployment wizard, selecting the compute, storage, and network configuration to use. Ensure DNS and time settings are configured.
  • Before clicking Finish select Power on after deployment.
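
If the DNS zone is hosted on Windows Server, the DNS step above can be scripted with the DnsServer module. A minimal sketch, assuming a zone named lab.local and placeholder host names and addresses:

```powershell
# Forward and reverse records for the two appliances.
# Zone name, host names, and addresses are all placeholders.
Import-Module DnsServer

Add-DnsServerResourceRecordA -ZoneName 'lab.local' -Name 'vrni-platform' `
    -IPv4Address '192.168.1.50' -CreatePtr
Add-DnsServerResourceRecordA -ZoneName 'lab.local' -Name 'vrni-proxy' `
    -IPv4Address '192.168.1.51' -CreatePtr
```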

When the appliance has deployed, navigate to the IP address or FQDN in a web browser. Enter your license key and click Validate, then Activate. On the setup proxy virtual appliance page click Generate to generate a shared secret. Copy the shared secret, as you will need it for the proxy deployment, and leave the web browser open.

  • In the vSphere web client right click the datacenter, cluster, or host to deploy the appliance to, and select Deploy OVF Template. Browse to the downloaded proxy OVA file.
  • Follow the standard OVF deployment wizard, selecting the compute, storage, and network configuration to use. Ensure DNS and time settings are configured.
  • During the template customization, in the Shared Secret for vRealize Network Insight Proxy field, enter the shared secret generated earlier.
  • Before clicking Finish select Power on after deployment.

Go back to the web browser. After the proxy appliance has powered on it will automatically detect the platform appliance; when this happens the web page shows a proxy detected message. Click Finish and you are redirected to the login page. If the deployed proxy is not detected within 5 minutes, follow the validation steps outlined in the FAQ document referenced above.

login

Configuration

Log into Network Insight using the default username admin@local and default password password. Select the settings icon in the far right hand corner and click Settings. The Install and Support tab lists the health of the appliances; additional nodes can also be added here.

Settings1

The password of the logged in user, in this case admin@local, can be changed under My Profile.

Click Data Sources and Add new source. This is where we add the data sources for Network Insight to monitor. First we’ll add vCenter, so select VMware vCenter from the drop-down Source Type list.

Settings2

Enter the vCenter IP address or FQDN, and credentials with distributed switch and dvPort group modify permissions, then click Validate. Enter a friendly name and click Submit to add the data source. In the vSphere client tasks pane you will see NetFlow being configured on the distributed switches. Repeat the process to add the NSX Manager, selecting VMware NSX Manager from the drop-down Source Type list and entering the NSX Manager credentials. You can add multiple vCenter Servers and NSX Managers.
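
To confirm what was configured, you can read the IPFIX (NetFlow) collector settings back from each distributed switch; a minimal sketch, assuming an existing Connect-VIServer session:

```powershell
# List the IPFIX/NetFlow collector settings on each distributed switch.
Get-VDSwitch | ForEach-Object {
    $ipfix = $_.ExtensionData.Config.IpfixConfig
    [pscustomobject]@{
        Switch        = $_.Name
        CollectorIP   = $ipfix.CollectorIpAddress   # should point at the vRNI proxy
        CollectorPort = $ipfix.CollectorPort
        ActiveTimeout = $ipfix.ActiveFlowTimeout
    }
}
```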

If applicable, add any converged infrastructure and physical networking hardware; accounts with read access are required. Once a data source is added, information will start trickling in within a few minutes; however, the first full data collection can take up to 2 hours. You should also wait at least 24 hours before generating reports.

Examples

When logged in to the web UI, the home page displays a dashboard of problems and events you should be aware of, as well as quick links to plan, operate, and troubleshoot the environment. Return to the home page at any time by clicking the VM icon in the top left hand corner.

Home

Move the mouse cursor over the left hand navigation pane to expand the menu. Navigate through the different options to view path topologies, port and network metrics, and events.

VMPathsHostVLANs

Nearly all components can be selected for deep dive views or path mappings. We can analyse services and flows and troubleshoot problems from within the same interface.

NSXNSXGroupsPlan

Events and Entities allow us to drill down further. When viewing an event, problem, or change, click the alarm bell symbol to create a notification for that item. You can also use the search bar, which auto-prompts as you type, visible in the screenshot below. Save a search term using the pin icon; saved searches can be accessed in the left hand navigation window at any time. For further use cases consult the user guides referenced above.

Search

Defining vRealize Automation Datacenter Locations

This post will walk through defining datacenter locations for VMware vRealize Automation 7.2. The two primary use cases for additional datacenter locations are to allow users to select a datacenter for service deployments, or for the administrator to specify a fixed datacenter when configuring a blueprint. We will cover both scenarios below.

Adding Datacenter Locations

Datacenter locations are defined in an XML file on the IaaS server(s). If you have multiple IaaS servers the change must be made on each server individually, and each server should be disabled in the load balancing configuration before commencing. If you are only using a single IaaS server, such as in a lab environment, this is not necessary. For vRA installations using NSX as a load balancer you can follow the brief steps below; otherwise refer to the documentation for your load balancing solution.

  • Log into the vSphere web client as a user with NSX administrative privileges, select Networking & Security.
  • Click NSX Edges and then double click the NSX Edge containing the load balancing configuration.
  • From the Manage tab select Load Balancer and Pools. Select the pool configured for the IaaS web servers and click Edit.
  • Select one of the nodes in the Members table and click the edit symbol. Untick Enable Member and click Ok.
  • The server is now disabled from the load balancing configuration and you can go ahead and make the change outlined below. Once complete enable the member and disable the next node, repeating the process for each member of the pool.

When the IaaS server node has been disabled in the IaaS Web load balancing pool (if applicable), navigate to C:\Program Files (x86)\VMware\vCAC\Server\Website\XmlData, or replace with the installation directory as appropriate. Edit the DataCenterLocations.xml file, entering your datacenter names in the CustomDataType body, in place of London and Boston.

dcl

Save and close the file, then restart the VMware vCloud Automation Center Service.

service
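
Equivalently, the edit and service restart can be scripted on each node. A rough sketch, assuming the default installation path; the CustomDataType/Data layout mirrors the stock file but should be checked against your own copy, and the site names are examples only:

```powershell
# Sketch: rewrite DataCenterLocations.xml and restart the IaaS service.
# Default install path and example site names; adjust both as needed.
$path = 'C:\Program Files (x86)\VMware\vCAC\Server\Website\XmlData\DataCenterLocations.xml'

@'
<?xml version="1.0" encoding="utf-8"?>
<CustomDataType>
  <Data Name="Frankfurt" Description="Frankfurt datacenter" />
  <Data Name="Singapore" Description="Singapore datacenter" />
</CustomDataType>
'@ | Set-Content -Path $path -Encoding UTF8

# Restart the service so the new locations are picked up.
Restart-Service -DisplayName 'VMware vCloud Automation Center Service'
```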

If you removed the IaaS server from the load balancer remember to add it back in; you’ll then need to repeat the process for each instance. Once the change has been made on each IaaS node we can assign the locations to compute resources.

Log into the vRA tenant portal as a fabric administrator; you may need to clear your browser cache for the updated datacenter locations from the XML file to appear. Open the Infrastructure tab and browse to Compute Resources, Compute Resources. Move the mouse pointer over the compute resource and click Edit, select the site to associate with the compute resource from the drop-down Location menu, and click Ok. Repeat this for each compute resource requiring an assigned datacenter location.

compute

Selecting Datacenter Locations

Now that we have locations assigned to our compute resources we can make use of them in a blueprint. Log into the vRA tenant portal as a tenant administrator, and from the Design tab select Blueprints. Select the blueprint to edit and click Edit. The two main options we are concerned with for datacenter locations are:

  • Allow the user to select the datacenter location.
    • From the General tab select the Display location on request tickbox. Click Save and Finish. Assuming the blueprint is published with appropriate catalog entitlements, when the user requests the catalog item they can select from the drop-down Location menu in the vSphere machine General tab.

usersite

  • Set the datacenter location in the blueprint, and do not allow the user to change the location. This option is useful for when the administrator wants to set where certain blueprints are deployed.
    • Check that the setting mentioned above is unticked. Navigate to the Properties tab and select Custom Properties. Click New to add a new property. In the Name field enter Vrm.DataCenter.Location; in the Value field enter the site name, matching one of the site names we added previously, and click Ok. Click Save and Finish. When the user requests the catalog item it will be deployed at the datacenter defined by the blueprint custom property.

adminsite

vRealize Operations High Availability

VMware vRealize Operations 6.x has been removed from general support; for more information review the VMware Lifecycle Matrix. See also How to Install vRealize Operations 8.x.

Following on from the vRealize Operations 6.4 Install Guide, this post will detail High Availability (HA) for vRealize Operations Manager. By implementing HA the analytics cluster is protected against the loss of a single node. For example, should the master node fail, services will automatically fail over to the replica node within 2-3 minutes. Following a failover the cluster runs in degraded mode, and cannot tolerate the loss of another node until the cluster is returned to HA mode by repairing or replacing the failed node, or removing it if sufficient nodes exist within the cluster. The analytics cluster is made up of the master node, replica node, and data node or nodes. It does not include any remote collector nodes.

Requirements

  • There should be sufficient hosts in the vSphere cluster to have no more than one node running on each host. HA does not protect against the loss of more than one node, since only one replica node can be configured.
  • Each node in the analytics cluster requires a static IP address.
  • When adding additional nodes keep in mind the following:
    • All nodes must be running the same version.
    • All nodes must use the same deployment type, i.e. virtual appliance, Windows, or Linux.
    • All nodes must be sized the same in terms of CPU, memory, and disk.
    • Nodes can be in different vSphere clusters, but must be in the same physical location and subnet.
    • Time must be synchronised across all nodes.
    • See the vRealize Operations documentation for a full list of multiple node cluster requirements.
  • Note that for existing clusters the master node must be online to enable HA, and enabling HA restarts the cluster. This does not apply to new clusters where the cluster has not yet been started.

Deploy the Data Node

In order to configure HA a data node must be deployed and then converted into a replica node. The replica node holds a copy of all data stored in the master node. First let’s deploy our new node; download vRealize Operations Manager from the VMware website.

Navigate to the vSphere web client home page, click vRealize Operations Manager and select Deploy vRealize Operations Manager.

vro1

The OVF template wizard will open. Browse to the location of the OVA file and click Next.

vro2

Enter a name for the virtual appliance, and select a location. Click Next.

vro3

Select the host or cluster compute resources for the virtual appliance and click Next. Remember this should be on a different host to the master node.

vro4

Review the details of the OVA, click Next.

vro5

Accept the EULA and click Next.

vro6

Select the same configuration size as the master node and click Next.

vra7

Select the storage for the virtual appliance, click Next. For HA you should use a different datastore to the master node.

vra8

Select the network for the virtual appliance, click Next.

vra9

Configure the virtual appliance network settings, click Next. Click Finish on the final screen to begin deploying the virtual appliance.

vra10
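
If you’d rather script the deployment, the same Get-OvfConfiguration / Import-VApp pattern from the vRNI section applies; a sketch with placeholder names, making sure the data node lands on a different host and datastore to the master node:

```powershell
# Sketch: deploy the vROps data node OVA away from the master node.
# Assumes an existing Connect-VIServer session; all names are placeholders.
$ova = 'C:\Downloads\vRealize-Operations-Manager.ova'
$ovfConfig = Get-OvfConfiguration -Ovf $ova
$ovfConfig.ToHashTable()   # inspect network/IP properties for your build

Import-VApp -Source $ova -OvfConfiguration $ovfConfig `
    -Name 'vrops-data01' `
    -VMHost (Get-VMHost 'esxi02.lab.local') `
    -Datastore (Get-Datastore 'Datastore02') `
    -DiskStorageFormat Thin
```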

Configure the Replica Node

Once the virtual appliance has been deployed and is powered on, open a web browser to the FQDN or IP address configured during deployment. Select Expand Existing Installation.

install1

Click Next to begin the setup wizard.

expand1

Enter the name of the new node, ensure Data is selected as the node type. Enter the IP address or FQDN of the master node and click Validate. Tick Accept this certificate and click Next to continue.

expand2

Enter the admin password of the master node and click Next.

expand3

Click Finish.

expand4

Cluster Configuration

You will now be returned to the cluster admin page. Note the Waiting to finish cluster expansion. Installation in progress… message; the new node is being configured, which will likely take 5-10 minutes. If you want to add any additional data nodes or remote collector nodes you can repeat the process above. For the purposes of this post we are adding a single data node to be converted to a replica. When you’re ready click Finish Adding New Node(s) and Ok to continue.

finish

Once the node to be used as a replica is online we can configure HA. Locate the High Availability section at the top right of the admin page; the status will be set to disabled, so click Enable. Ensure the correct data node is selected to be converted to a replica node. Tick Enable High Availability for this cluster and click Ok.

enableha

Configuring HA can take up to 20 minutes and the cluster will restart. Click Yes to continue.

confirm

You may need to log back into the admin console. The admin console for vRealize Operations Manager can be accessed by browsing to https://<vROps>/admin where <vROps> is the IP address or FQDN of your vRealize Operations Manager appliance or server. The HA status will now show enabled and the cluster online. Note that the role of the node has now changed to Master Replica.

The same data can now be accessed via both the master node and the replica node. Consider implementing load balancing for larger environments; review the vRealize Operations Manager Load Balancing document.

DRS Anti-Affinity

The final step is to configure an anti-affinity rule to stop the master and replica nodes (and any data nodes) from running on the same hosts. Log into the vSphere web client and browse to Hosts and Clusters. Click the vSphere cluster and select the Manage tab. Under Configuration click VM/Host Rules, then click Add.

Enter a name for the rule, such as vRealize Operations Nodes, ensure Enable rule is ticked and select Separate Virtual Machines as the rule type. Click Add and select the vRealize Operations nodes. Click Ok.

drs

This rule will ensure DRS does not place nodes on the same hosts in a vSphere cluster.
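
The same rule can be created from PowerCLI; a minimal sketch, with placeholder cluster and VM names:

```powershell
# Sketch: separate-VMs (anti-affinity) rule for the vROps analytics nodes.
# Cluster and VM names are placeholders for your environment.
$nodes = Get-VM -Name 'vrops-master','vrops-replica','vrops-data01'

New-DrsRule -Cluster (Get-Cluster 'Management') `
    -Name 'vRealize Operations Nodes' `
    -KeepTogether:$false -Enabled:$true -VM $nodes
```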