In this post we will walk through the process of deploying an NSX Load Balancer using vRealize Automation. We will also cover high availability and post deployment scaling. In order to take advantage of the direct NSX API integration with vRA you will need to be running at least v7.3; you can read more about the enhancements made in vRA 7.3 in the release notes or what’s new. In the example we’ll work towards, multiple web servers are provisioned with an On-Demand Load Balancer, along with app servers and a database server. The On-Demand Load Balancer deploys an NSX edge for load balancing and adds the web servers as pool members. There are a number of available customisations which we’ll cover in the configuration process below.
Adding Endpoints
The following process assumes that you have a fully deployed vRA topology with all the components required to provision virtual machines: vCenter endpoint(s), reservations, compute resources, and a published catalog with entitlements. It is also beneficial to have an understanding of using an NSX edge for load balancing, or to have deployed an edge manually, so the corresponding deployment options are familiar.
The first step is to add the NSX Manager as a vRA endpoint. From the Infrastructure tab select Endpoints and then Endpoints again. Click New and select Networking and Security, NSX. Enter the details for the NSX Manager. Before saving the NSX endpoint we can create an association with the registered vCenter Server: from the Associations tab, click New. Select the vCenter Server from the dropdown; the platform type will auto-populate to vSphere and the description to vSphere to NSX Association. Click Test Connection and then Ok to save the configuration.
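If you want to sanity check the NSX Manager credentials before adding the endpoint, a quick call to the NSX REST API will do it. The sketch below is a minimal example for an NSX-v Manager in a lab; the hostname and credentials are placeholders, and the appliance-management path used here should be treated as an assumption for your NSX release.

```python
# Minimal sketch: confirm NSX Manager responds to the credentials that will
# be used for the vRA endpoint. Hostname and credentials are lab placeholders.
import requests
from requests.auth import HTTPBasicAuth

NSX_MANAGER = "nsxmgr.lab.local"   # assumed NSX Manager hostname
NSX_USER = "admin"                 # assumed credentials
NSX_PASS = "VMware1!"

# /api/1.0/appliance-management/global/info returns NSX Manager version
# details on NSX-v; treat the path as an assumption for your environment.
url = f"https://{NSX_MANAGER}/api/1.0/appliance-management/global/info"

resp = requests.get(url, auth=HTTPBasicAuth(NSX_USER, NSX_PASS),
                    verify=False)  # lab only; self-signed certificate
resp.raise_for_status()
print("NSX Manager reachable:", resp.json().get("versionInfo"))
```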
Blueprint Modifications
After NSX has been added as an endpoint, navigate to Blueprints under the Design tab. From the design canvas of a new or existing blueprint select Network & Security, then drag and drop the On-Demand Load Balancer onto the canvas.
Click the On-Demand Load Balancer that has been added to the canvas. When the load balancer is provisioned in NSX, the servers associated with the load balancer in the blueprint are automatically added as members in the pool. This is set in the Member field; in the example below the web servers in the blueprint are added as members of the load balancer.
The network for the member servers and the network for the VIP address are configured in the appropriate fields. Leave the IP address blank to automatically assign an IP address from the associated VIP network. Under Virtual servers click New; here you can configure the protocol settings for the load balancer, and the algorithm/persistence, health check, and connection settings by selecting Customize.
Before saving the blueprint, click the settings cog at the top of the page; this opens the blueprint properties. From the NSX Settings tab set the Transport zone to attach the load balancer to; this can be a local or universal transport zone. Next select the Edge and routed gateway reservation policy; this is the reservation policy (compute, storage) that will be used when provisioning the edge.
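If you’re not sure which transport zones exist, they can be listed directly from the NSX API rather than hunting through the web client. The sketch below assumes an NSX-v Manager; the hostname and credentials are placeholders, and /api/2.0/vdn/scopes is the transport zone (VDN scope) listing endpoint.

```python
# Minimal sketch: list the transport zones (VDN scopes) known to NSX Manager
# so the right one can be selected in the blueprint's NSX Settings tab.
# Hostname and credentials are lab placeholders.
import xml.etree.ElementTree as ET

import requests
from requests.auth import HTTPBasicAuth

NSX_MANAGER = "nsxmgr.lab.local"   # assumed NSX Manager hostname

resp = requests.get(
    f"https://{NSX_MANAGER}/api/2.0/vdn/scopes",
    auth=HTTPBasicAuth("admin", "VMware1!"),
    verify=False,  # lab only; self-signed certificate
)
resp.raise_for_status()

# The response is XML; each vdnScope element represents a transport zone.
for scope in ET.fromstring(resp.content).iter("vdnScope"):
    print(scope.findtext("objectId"), scope.findtext("name"))
```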
Click the Properties tab and select Custom Properties. There are a number of optional parameters we can add here, shown in the example after the list below.
- NSX.Edge.ApplianceSize sets the appliance size of the edge, accepted values are compact, large, quadlarge and xlarge.
- NSX.Edge.HighAvailability deploys the edge appliance in HA mode when the value is true. Without this property only a single appliance is deployed.
- NSX.Edge.HighAvailability.PortGroup references the port group to use for the heartbeat network of the edge appliances deployed in HA mode.
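As a quick illustration, the snippet below shows the three properties together with a simple check of the appliance size value before pasting them into the blueprint. The property names are the real vRA custom property keys; the values and the port group name are just lab examples, not requirements.

```python
# Minimal sketch of the optional edge sizing/HA custom properties described
# above. Values and the heartbeat port group name are lab examples.
VALID_SIZES = {"compact", "large", "quadlarge", "xlarge"}

edge_custom_properties = {
    "NSX.Edge.ApplianceSize": "large",
    "NSX.Edge.HighAvailability": "true",
    "NSX.Edge.HighAvailability.PortGroup": "dvs-HA-Heartbeat",  # assumed port group
}

size = edge_custom_properties["NSX.Edge.ApplianceSize"]
if size not in VALID_SIZES:
    raise ValueError(f"Unsupported NSX edge appliance size: {size}")
```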
Click Ok and Finish to save the blueprint. Make the blueprint available as a catalog item and request a test deployment. In vSphere you will see the edge and VMs being provisioned and, once complete, the virtual machines will be added as members in the load balancer pool. You can view the settings of the deployed edge in the vSphere web client under Networking & Security, NSX Edges; double-click the edge and select Load Balancer.
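As an alternative to the web client, the deployed edge’s load balancer configuration can also be pulled straight from the NSX API. The sketch below assumes an NSX-v Manager; the edge ID (visible in the NSX Edges list) and credentials are placeholders, and the response is the raw XML describing pools, members, and virtual servers.

```python
# Minimal sketch: retrieve the load balancer configuration of the deployed
# edge via the NSX API. Edge ID, hostname, and credentials are placeholders.
import requests
from requests.auth import HTTPBasicAuth

NSX_MANAGER = "nsxmgr.lab.local"   # assumed NSX Manager hostname
EDGE_ID = "edge-42"                # find this in the NSX Edges list

url = f"https://{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/loadbalancer/config"
resp = requests.get(url, auth=HTTPBasicAuth("admin", "VMware1!"),
                    verify=False)  # lab only; self-signed certificate
resp.raise_for_status()

# XML listing the pools, pool members, and virtual servers on the edge.
print(resp.text)
```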
Post Deployment
When the deployment is destroyed the edge appliances are removed along with the VMs as part of the cleanup process. If the deployment is scaled out, the new server is added as a member to the existing load balancer pool; likewise, if the deployment is scaled in, the deleted server is also removed from the pool.
The scale in and scale out actions are assigned as entitled actions from within the relevant entitlement. As well as having the permissions to perform the scale actions, the blueprint must also allow a higher maximum number of instances. In the example below 2 web servers will be deployed with an On-Demand Load Balancer; as the maximum number of instances is set to 10, the requester can scale out the number of web servers and pool members to a maximum of 10 servers.
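If you want to confirm which day-2 actions (such as Scale Out or Scale In) a requester is actually entitled to on a given deployment, the vRA consumer API can be queried as sketched below. The hostname, tenant, credentials, and resource ID are placeholders, and the token and actions paths shown are the vRA 7.x consumer API endpoints as I understand them; treat them as assumptions for your version.

```python
# Minimal sketch: list the day-2 actions entitled on a deployment resource
# via the vRA consumer API. All names and IDs below are lab placeholders.
import requests

VRA = "vra.lab.local"                                  # assumed vRA appliance
TENANT = "vsphere.local"                               # assumed tenant
RESOURCE_ID = "11111111-2222-3333-4444-555555555555"   # deployment resource ID

# Request a bearer token from the identity service
token = requests.post(
    f"https://{VRA}/identity/api/tokens",
    json={"username": "user@lab.local", "password": "VMware1!", "tenant": TENANT},
    verify=False,  # lab only; self-signed certificate
).json()["id"]

headers = {"Authorization": f"Bearer {token}", "Accept": "application/json"}

# List the actions the requester is entitled to run on this resource
actions = requests.get(
    f"https://{VRA}/catalog-service/api/consumer/resources/{RESOURCE_ID}/actions",
    headers=headers,
    verify=False,
).json()

for action in actions.get("content", []):
    print(action.get("name"))
```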