Configuring VMware Cross-vCenter NSX

This post provides an overview of cross-vCenter NSX and walks through the configuration steps. Cross-vCenter NSX allows central management of network virtualization and security policies across multiple vCenter Server systems. Cross-vCenter NSX introduces universal objects, such as universal logical switches, universal logical routers, and universal distributed firewall rules. Universal objects are able to span multiple sites or vCenter Server instances, enhancing workload mobility by allowing cross-vCenter and long-distance vMotion for virtual machines whilst keeping the same network settings and firewall rules. This improves DR capabilities, overcomes scale limits of vCenter Server, and gives administrators more control over resource pooling and the separation of environments.

Cross-vCenter NSX was introduced in NSX v6.2 and requires vSphere v6.0 or later. As normal, NSX Manager is deployed with vCenter Server in a 1:1 pairing. In a single-site NSX deployment the NSX Manager is given the standalone role by default. When configuring cross-vCenter NSX one NSX Manager is assigned the primary role, and up to seven other NSX Managers are assigned the secondary role. NSX Managers configured for cross-vCenter NSX must all be running the same version. The primary NSX Manager is responsible for deploying the Universal Controller Cluster, forming the control plane across the NSX Managers. The Universal Controller Cluster runs in the site of the primary NSX Manager. Universal objects are created on the primary NSX Manager and automatically synchronized across the multi-site NSX environment.

Configuring Cross-vCenter NSX

The steps below assume you have already deployed and registered the NSX Managers, and have a good understanding of NSX. This post is intended as an add-on to the NSX Install Guide, providing an outline of the additional or different steps required for a cross-vCenter NSX install; further resources are listed at the bottom of the page. If you are using vCenter enhanced linked mode then multiple NSX Manager instances are displayed within the same interface, or single pane of glass, when managing the Networking & Security section of the vSphere web client. Enhanced linked mode is not a requirement for cross-vCenter NSX, however, and vCenter Server systems not in enhanced linked mode can still be configured for cross-vCenter NSX.

From the Networking & Security page of the vSphere web client select Installation, highlight the NSX Manager in the primary site, and from the Actions menu select Assign Primary Role.

NSX_Promote

The secondary NSX Manager(s) synchronize with the primary using the Universal Synchronization Service. Secondary sites do not run any NSX Controllers, although the controllers can easily be redeployed there in the event of a primary site outage. Before assigning the secondary role you should ensure there are no existing NSX Controllers deployed in the associated vCenter. If you have already assigned a segment ID pool to the NSX Managers then ensure the segment ID pools do not overlap. Select the primary NSX Manager and from the Actions menu click Add Secondary NSX Manager. Enter the secondary NSX Manager information and admin password.

NSX2

Review the table of NSX Managers; the roles have now changed accordingly.

NSX_Roles

The universal controller cluster is formed by individually deploying the NSX controllers from the primary NSX Manager; the method of deploying the controllers is the same as a standalone deployment (see NSX Install Guide Part 1 – Mgmt and Control Planes for further assistance). Once the controllers are deployed you will notice placeholder controllers listed against the secondary NSX Manager; these are not connected or deployed. In the event of a site failure the configuration is synchronized between NSX Managers, so you can simply re-deploy the controllers in the DR site. To see the failover process review this blog post. VMware recommends deploying three controllers on different hosts, with anti-affinity rules to keep them apart.

NSX_Controllers

The next part of the install process is to follow the host preparation and VXLAN configuration steps as normal (see NSX Install Guide Part 2 – Data Plane for further assistance). Create the segment ID pools for each NSX Manager, making sure they do not overlap. On the primary NSX Manager you will also assign a universal segment ID pool.

To deploy universal logical switches we first need to create a universal transport zone. A universal transport zone determines which hosts a universal logical switch can reach, and spans multiple vCenter Server instances. From the Logical Network Preparation tab open Transport Zones, ensure the primary NSX Manager is selected and click the plus symbol. Select Mark this object for Universal Synchronization, and enter the configuration for the universal transport zone. All universal objects must be created on the primary NSX Manager; switch to the secondary NSX Manager and you will see that the universal transport zone has synchronized there too.

NSX_TZ

Next we will create a universal logical switch for the transit network. Local objects such as logical switches, logical routers, and Edge Services Gateways can still be deployed from each NSX Manager, although by design they are local to the vCenter linked to that specific NSX Manager and cannot be deployed or edited elsewhere. From the left-hand navigation pane in Networking & Security select Logical Switches, ensure the primary NSX Manager is selected and click the plus symbol. Enter a name for the transit network and select the universal transport zone we created earlier.

NSX_Universal_Transit

At this stage you can also deploy another universal logical switch, connecting a couple of test VMs on a private subnet, and have them ping one another to confirm connectivity. Now that we have a transit network and test universal logical switches connected to our universal transport zone we can go ahead and create a universal DLR. In this particular environment we have already deployed an ESG in each site. For further assistance with deploying an ESG and DLR see NSX Install Guide Part 3 – Edge and DLR.

From the Networking & Security page click NSX Edges, ensure the primary NSX Manager is selected and click the plus symbol. The control VM for the DLR is deployed to the primary site; again the configuration is synchronized, so it can be re-deployed to the DR site in the event of a primary site outage. Select Universal Logical Router and follow the wizard as normal; if local egress is required then check the appropriate box. Sites configured in a cross-vCenter NSX environment can use the same physical routers for egress traffic, or have the local egress feature enabled within a universal logical router. The local egress feature allows routes to be customized at host, cluster, or router level.

NSX_UDLR

From the NSX Edges page double-click the new universal DLR, select Manage, Settings, Interfaces and click the add button. In order for traffic to route from the universal DLR to the ESG(s) we must add an uplink interface connecting them to the universal transit network. Change the logical router interface to Uplink, and in the Connected To field select the transit network universal logical switch we created earlier. Configure the IP and MTU settings of the interface per your own environment.

NSX_UDLR_Interface

You can also add Internal interfaces here corresponding with universal logical switches for virtual machine subnets. Before these subnets can route out follow the same process to add an Internal interface to the ESG(s) connecting them to the same transit network.

A virtual machine connected to the test universal logical switch can now vMotion between sites keeping the same IP addressing, providing L2 over L3 capability. As well as remaining on the same logical network, a virtual machine can also be migrated across sites without needing additional firewall rules at the destination; this is achieved with the use of universal firewall rules. Universal firewall rules require a dedicated section to be created under the Firewall section of Networking & Security, with Mark this section for Universal Synchronization selected. For assistance with creating universal firewall rules see here.

NSX_Universal_Firewall

Additional Resources

To plan a cross-vCenter NSX installation review the VMware Cross-vCenter NSX Design Guide, Cross-vCenter NSX Topologies Guide, and the VMware Cross-vCenter Installation Guide. For more information on cross-vCenter NSX design see the following blog posts:

Installing vCenter Internal CA signed SSL Certificates

This post will walk through the process of replacing the default self-signed certificates in vCenter with SSL certificates signed by your own internal Certificate Authority (CA). In previous versions of vSphere the certificate replacement procedure was so complex that many administrators ignored it completely. Now, with the certificate tool improvements in vSphere 6.x and the ever-increasing security threats of today's digital world, applying SSL certificates takes on an enhanced significance for verifying servers, solutions, and users are who they say they are.

The procedure outlined below is specific to installing Microsoft intermediate CA signed certificates on VCSA 6.5 with an embedded PSC, protecting us against man-in-the-middle attacks with a secure connection, which we can see in the screenshot below. From v6.0 onwards the VMware Certificate Authority (VMCA) was also introduced; for more information on using the VMCA see this blog post, or to read how to use the VMCA as an intermediate CA see here. VMware documentation for replacing self-signed certificates can be reviewed from this KB article.

Trusted_vSphere

Before beginning the certificate replacement process ensure you have a good backup and snapshot of the VCSA. This blog post provides a good overview of the certificates we’re actually going to be replacing, and the following links are the official VMware guides:

  • Replacing default certificates with CA signed SSL certificates in vSphere 6.x (2111219)
  • Replacing a vSphere 6.x Machine SSL certificate with a Custom Certificate Authority Signed Certificate (2112277)
  • How to replace the vSphere 6.x Solution User certs with CA signed certs (2112278)

Generate CSR

The first thing we need to do is generate a Certificate Signing Request (CSR). Open an SSH connection to the VCSA using an SSH client such as Putty, and log in as root – if you need to enable SSH you can do so from the VAMI (https://vCenterIPorFQDN:5480) under Access; enable both SSH Login and Bash Shell. Run the following command to open the VMware built-in Certificate Manager tool:

/usr/lib/vmware-vmca/bin/certificate-manager

Cert_Tool_1

Select the appropriate option. In this case we first want to replace the machine SSL certificate with a custom certificate, option 1. When prompted enter the SSO administrator username and password. Enter 1 again to generate certificate signing request(s) and Key(s) for machine SSL certificate, and enter the output directory. In the example below we are using the /tmp directory. Fill in the required values for the certool.cfg file.
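
For reference, the values you enter at the prompts populate a certool.cfg along the lines of the sketch below. These are example values only; the host name, IP address, and organization details are placeholders to adjust for your own environment and vCenter FQDN.

Country = GB
Name = vcsa.lab.local
Organization = Lab
OrgUnit = IT
State = London
Locality = London
IPAddress = 192.168.0.50
Email = admin@lab.local
Hostname = vcsa.lab.local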

Cert_Tool_2

The CSR and key are generated in the location specified. Change the shell to /bin/bash using chsh -s "/bin/bash" root and open an SCP connection to the VCSA using WinSCP. Copy the vmca_issued_csr.csr file to your local machine; you can use Notepad to view the contents of the file. Leave the WinSCP session open as we’ll need it to copy the certificate chain back to the VCSA.

Request Certificate

The next step is to use the CSR to request a certificate from your internal Certificate Authority (official KB here). A Microsoft CA template needs to be created with the settings specified here (official KB here) before requesting the certificates. Once this is done open a web browser to the Microsoft Certificate Services page (normally https://CAServer/certsrv) and select Request a Certificate.

Internal_CA_1

Then we want to Submit a certificate request by using a base-64-encoded CMC or PKCS #10 file, or submit a renewal request by using a base-64-encoded PKCS #7 file. The next page allows us to enter the CSR generated earlier to request a certificate with the pre-configured vSphere 6.5 certificate template.

Internal_CA_2

Click Submit, then select Base 64 encoded and use both Download certificate and Download certificate chain. This downloads a .cer file, which I have renamed machine_name_ssl.cer, and a .p7b file. Double-click the .p7b file to open it in certmgr, locate and right-click the root certificate, then select All Tasks, Export. Export the root certificate in Base-64 encoded X.509 (.CER) format; in this example I have named the file Root64.cer. Using WinSCP copy the machine and root certificate files to the VCSA.
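
If you prefer the command line to certmgr, openssl can also pull the certificates out of the Base 64 encoded .p7b; the file names below are placeholders for whatever you downloaded.

openssl pkcs7 -print_certs -in certnew.p7b -out chain.cer

The chain.cer output contains each certificate in the chain as a separate Base-64 block; copy the root CA block into its own file (Root64.cer in this example).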

Install Certificate

Go back to Certificate Manager and enter 1 to continue to importing custom certificate(s) and key(s) for machine SSL certificate. Enter the file for the machine SSL certificate we copied, I have used /tmp/machine_name_ssl.cer. Enter the associated custom key that was generated with the CSR request, in this case /tmp/vmca_issued_key.key. Finally, enter the signing certificate of the machine SSL certificate, in this case /tmp/Root64.cer. When prompted enter y to replace the default machine SSL certificate with the custom certificate.

Cert_Tool_3

The certificate will now be installed; when finished, a success message is displayed. If certificate installation fails at 0% see this KB article.

Cert_Tool_4

To verify the machine certificate open a web browser to the vCenter FQDN; the connection will now show as secure. Depending on the browser used you can view the certificate properties to verify it is correct. Alternatively, browse to https://vCenterFQDN/psc and log in with an SSO administrator account. Open Certificate Management and Machine Certificates, select the installed machine certificate, click Show Details, and verify the certificate properties are correct.

Certificate_Management

Solution User Certificates

Repeat the steps above for the solution user certificates (official KB here). Replacing the solution user certificates may break some external plugins, such as SRM, in which case you should review this KB article for corrective action. To recap, run /usr/lib/vmware-vmca/bin/certificate-manager. This time select option 5 to replace solution user certificates with custom certificates. Generate the CSRs and keys; you will notice that for the solution user certs four CSR and key files are created: machine, vsphere-webclient, vpxd, and vpxd-extension.

Cert_Tool_5

Using WinSCP copy the files to your local machine and repeat the certificate request process from the Microsoft Certificate Services page. Copy the new certificates to the VCSA and repeat the install process. Solution User certificates can be viewed on the PSC web interface under Certificate Management, Solution User Certificates.

Solution_User_Management

Upgrading to vCenter Server 6.5 Update 1

VMware have released the first major update to vSphere 6.5. This post will walk through how to update the vCenter Server Appliance (VCSA) from 6.5 to 6.5 U1. The new features in the latest release are listed here. The official VMware blog goes into further detail here, and of course the release notes cover the important technical information here.

Prior to updating vCenter ensure you have verified the compatibility of any third party products such as backups, anti-virus, monitoring, etc. Also cross-check the compatibility of other VMware products using the Product Interoperability Matrix. Since we are updating vCenter Server 6.5 to 6.5 U1 I am assuming the usual pre-requisites such as FQDN resolution, time synchronization, relevant ports open, etc. are already in place, and all hosts are running at least ESXi version 5.5. For more information on the requirements for vCenter Server 6.5, or if you are upgrading from an earlier version, the following posts may be of use:

Before beginning the update process take a backup and snapshot of the vCenter Server Appliance. There is downtime during the update but this is minimal – around 10 minutes to update and reboot using an ISO as the update source; when using the online repository the update time may vary depending on your internet connection.

VAMI Update

The easiest way of updating the vCenter Server is through the VAMI (vCenter Server Appliance Management Interface). Browse to https://vCenter:5480, where vCenter is the FQDN or IP address of the vCenter Server. Log in as the root user.

VAMI1

Select the Update option from the navigator.

VAMI2

Click the Check Updates drop-down. If the VCSA has internet access then select Check Repository to pull the update direct from the VMware online repository.

If the VCSA does not have internet access, or you’d prefer to provide the patch manually then download the relevant patch from VMware here (in this case VMware-vCenter-Server-Appliance-6.5.0.10000-5973321-patch-FP.iso) and attach the ISO to the CD/DVD drive of the VCSA in the virtual machine settings. Back in the VAMI update page select the Check Updates drop-down and click Check CDROM.

VAMI3

Details of the available update from either the online repository or attached ISO are displayed. Click Install Updates.

VAMI4

Accept the EULA and click Install to begin the installation.

VAMI5

When the update process has completed click OK. From an attached ISO the installation took around 5 minutes.

VAMI7

The updated version and release date should now be displayed in the current version details. Finally, to complete the upgrade reboot the vCenter Server Appliance. Select Summary from the navigator and click Reboot.

VAMI8

CLI Update

Alternatively the vCenter Server Appliance can be updated from the command line, again either using the online repository or by downloading the patch from VMware here (VMware-vCenter-Server-Appliance-6.5.0.10000-5973321-patch-FP.iso or latest version) and attaching the ISO to the CD/DVD drive of the VCSA in the virtual machine settings. For more information on patching the vCenter Server Appliance using the appliance shell see this section of VMware docs.

Log in to the vCenter Server appliance as root. First stage the patches from your chosen source using either:

  • software-packages stage --iso --acceptEulas stages software packages from ISO and accepts EULA.
  • software-packages stage --url --acceptEulas stages software packages from the default VMware online repository and accepts EULA.

Next, review the staged packages, install the update, and reboot the VCSA.

  • software-packages list --staged lists the details of the staged software package.
  • software-packages install --staged installs the staged software package.
  • shutdown reboot -r update reboots the VCSA where ‘update’ is the reboot reason. Use -d to add a delay.
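
As an example, patching from an attached ISO end to end looks like this in the appliance shell, using the commands above:

software-packages stage --iso --acceptEulas
software-packages list --staged
software-packages install --staged
shutdown reboot -r update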

CLI4

ESXi 6.5 FCoE Adapters Missing

After installing or upgrading to ESXi 6.5, FCoE adapters and datastores are missing. In this case the hardware in use is an HP ProLiant BL460c Gen9 server with HP FlexFabric 10Gb 2-port 536FLB adapters, although this seems to have been a problem for other vendors (see here) and versions too.

This issue should be resolved with a driver provided by the vendor which has the FCoE auto discovery on boot parameter enabled. Cross reference your hardware against the VMware Hardware Compatibility Guide here, and confirm you are using the correct version of the bnx2fc driver and firmware. If no updated driver is available from the vendor then review the workarounds outlined below.

Stateful Installs

Credit to this article: SSH onto the host and run the following commands.

esxcli fcoe adapter list lists the discovered FCoE adapters; at this stage there will be no results.

esxcli fcoe nic list lists the adapters available as potential FCoE candidates. Locate the name of the adapter.

esxcli fcoe nic enable -n vmnicX enables the adapter; replace vmnicX with the adapter name, for example vmnic2.

esxcli fcoe nic discover -n vmnicX enables discovery on the adapter; replace vmnicX with the adapter name.

esxcli fcoe adapter list lists the discovered FCoE adapters; you should now see the FCoE adapters listed.
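
Putting this together, enabling FCoE on a single adapter (vmnic2 in this example) looks like the following; substitute the vmnic name reported by the nic list command:

esxcli fcoe nic list
esxcli fcoe nic enable -n vmnic2
esxcli fcoe nic discover -n vmnic2
esxcli fcoe adapter list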

The storage adapters should now be showing in the vSphere web client. However, if you are using stateless installs with Auto Deploy, this workaround is not persistent and is lost at reboot.

storageadapters2

Stateless Installs

Credit to this article: we were able to create a custom script bundle to enable discovery on the FCoE adapters as part of the deploy rule, using the steps below. Custom script bundles open up a lot of possibilities with Auto Deploy, but at this stage they are CLI only. I also noticed that if you create a deploy rule with a script bundle from the CLI it shows in the GUI, but if you then edit that rule in the GUI (for something unrelated, e.g. an updated host profile) the script bundle is removed without warning. This is something you need to weigh up against your environment; if you are already using the CLI to configure deploy rules it shouldn’t be a problem.

PowerCLI can now be installed directly through PowerShell; if you don’t already have PowerCLI installed see here.

  • First up we’ll need to create the script on a Linux / Unix system. I just used a test ESXi host we had kicking about, over SSH. Type vi scriptname.sh, replacing scriptname.sh with an appropriate name for your script.
  • The file will open, type i to begin editing.
  • On the first line enter #!/bin/ash followed by the relevant enable and discover commands from the section above. You can see in the example below the commands for enabling vmnic2 and vmnic3 as FCoE adapters.

ssh1
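
As a rough sketch, a script enabling vmnic2 and vmnic3 (as in the screenshot) would contain something like the following; adjust the vmnic names for your hardware:

#!/bin/ash
# Enable FCoE and trigger discovery on the relevant adapters
esxcli fcoe nic enable -n vmnic2
esxcli fcoe nic discover -n vmnic2
esxcli fcoe nic enable -n vmnic3
esxcli fcoe nic discover -n vmnic3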

  • Press escape to leave the text editor and type :wq to save changes to the file and close.
  • Next we need to create the script bundle that will be imported into Auto Deploy. Type tar -cvzf bundlename.tgz scriptname.sh

ssh2

  • Copy the script bundle with the .tgz extension to your local machine, or the computer from where you will be using PowerCLI to create the deploy rule. In my case I copied the file over with WinSCP.
  • You should also have an ESXi image in zip format; make a note of the location. Add the script bundle and the ESXi software depot by running the following commands: Add-ScriptBundle location\bundlename.tgz and Add-EsxSoftwareDepot location\file.zip. If you need further assistance with building custom images or using PowerCLI to manage Auto Deploy see the VMware Auto Deploy 6.x Guide and How to Create Custom ESXi Images posts.

ps1

  • Build the deploy rule using your own variables; if you’re already using Auto Deploy I’m assuming you know this bit, as we’re just adding an additional item in for the script bundle. See the guide referenced above if you need assistance creating deploy rules. I have used:
    • New-DeployRule -Name "Test Rule" -Item "autodeploy-script","HPE-ESXi-6.5.0-Build-5146846","LAB_Cluster" -Pattern "ipv4=192.168.0.101" | Add-DeployRule

ps2

  • The deploy rule is created and activated; I can now see it in the Auto Deploy GUI in the vSphere web client, with the associated script bundle. When the host boots from the deploy rule the script is extracted and executed, and the FCoE adapters are automatically enabled and discovered on boot.

autodeployGUI

  • If you don’t pipe the new rule to Add-DeployRule then the deploy rule will be created but show as inactive. You can activate it using the GUI, but do not edit the rule using the GUI or the script bundle will break.
  • If you are updating an existing image then don’t forget to remove cached rules by remediating host associations, under the Deployed Hosts tab.

VMware vRealize Business for Cloud Install

VMware vRealize Business for Cloud provides automated cost analysis and consumption metering, allowing administrators to make workload placement decisions between private and public clouds based on cost and available services. Furthermore, infrastructure stakeholders have full visibility of virtual machine provisioning costs and are able to accurately manage capital expenditure and operating expenditure. For more information see the vRealize Business product page; you can try vRealize Business for Cloud using the Hands on Labs available here.

This post will walk through the installation of vRealize Business for Cloud 7.3; we’ll be provisioning to a vSphere environment running vRealize Automation 7.3. Each vRealize Business instance scales up to 20,000 virtual machines and 10 vCenter Servers, and remote data collectors can be deployed to distributed geographical sites. vRealize Business is deployed in OVA format as a virtual appliance, and you should ensure this appliance is backed up appropriately. There is no built-in HA or DR functionality within vRealize Business, but you can take advantage of VMware components such as High Availability, Fault Tolerance, or Site Recovery Manager. Logs can be output to a syslog server such as vRealize Log Insight.

vRB_Launchpad

Requirements

  • vRealize Business for Cloud must be deployed to an ESXi host, and can be used to manage vCenter Server, vCloud Director, vCloud Air, vRealize Automation, and vRealize Operations Manager.
  • vRB 7.3 is compatible with vCenter and ESXi versions 5.5 through to 6.5, and vRealize Automation versions 6.2.4 through to 7.3 (latest versions at the time of writing).
  • For compatibility with other VMware products see the VMware Product Interoperability Matrix.
  • The vRB appliance requires 8 GB memory, 4 vCPU and 50 GB disk (thick provisioned).
  • If you use any remote data collectors the memory on these appliances can be reduced to 2 GB.
  • vRealize Business for Cloud is licensed as part of the vRealize suite, per CPU, or in packs of 25-OSI.
  • There are two available editions: standard and advanced. Features such as public cloud costing require the advanced version; for more information see the feature comparison section of the product page.
  • The web UI can be accessed from IE 10 or later, Chrome 36.x or later, and Firefox 31.x or later.
  • Time synchronization and name resolution should be in place across all VMware components.
  • For a full list of pre-requisites including port requirements see here.

Before beginning review the following VMware links:

Installing vRB

Download the VMware vRealize Business for Cloud 7.3 OVA file here. Log into the vSphere web client and right click the datastore, cluster, or host where you want to deploy the virtual appliance. Select Deploy OVF Template and browse to the location of the OVA file.

  • Enter a name for the virtual appliance and select the deployment location, click Next.
  • Confirm the compute resource and click Next.
  • Review the details of the OVF template and click Next.
  • Accept the end user license agreement and click Next.
  • Select the storage for the virtual appliance, ensure the virtual disk format is set to Thick provision eager zeroed, and click Next.
  • Select the network to attach to the virtual appliance and click Next.
  • Set the Currency; note that at this time the currency cannot be changed after deployment. Ensure Enable Server is checked, and select or de-select SSH and the customer experience improvement program based on your own preferences. Configure a Root user password for the virtual appliance and enter the network settings for the virtual appliance in the Networking Properties fields.
  • Click Next and review the summary page. Click Finish to deploy the virtual appliance.

Once the virtual appliance has been deployed and powered on open a web browser to https://vRB:5480, where vRB is the IP address or FQDN of the appliance. Log in with the root account configured during setup.

vRB_Mgmt

Verify the settings under Administration, Time Settings, and Network. At this stage the appliance is ready to be registered with a cloud solution. In this example I will be using vRealize Automation; for other products or further information see the install guide referenced above. Return to the Registration tab and ensure vRA is selected.

vRB_Register

Enter the host name or IP address of the vRA appliance or load balancer. Enter the name of the vRA default tenant and the default tenant administrator username and password. Select Accept vRealize Automation certificate and click Register.

Accessing vRB

vRealize Business for Cloud can be integrated into vRealize Automation, or you can enable stand-alone access. To access vRB after integrating with vRA log into the vRA portal. First open the Administration tab, select Directory Users and Computers, search for a user or group and assign the relevant business management roles. A user with a business management role has access to the Business Management tab in vRA.

vRB_Roles

Optional: to enable stand-alone access first enable SSH from the Administration tab. Use a client such as Putty to open an SSH connection to the virtual appliance, log in with the root account. Enter cd /usr/ITFM-Cloud/va-tools/bin to change directory, enter sh manage-local-user.sh and select the operation, in this case 5 to enable local authentication.
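
To summarise, the SSH session for enabling stand-alone access looks like this:

cd /usr/ITFM-Cloud/va-tools/bin
sh manage-local-user.sh
# choose operation 5 to enable local authentication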

ssh

If you want to create new local users, use option 1 and enter the username and password; when prompted for permissions, VCBM_ALL provides administrator access and VCBM_VIEW read-only access. You can also log in to the web UI with the root account, although it would be better practice to create a separate account.

Disable SSH from the Administration tab if required. Wait a few minutes for the services to restart and then browse to https://IP/itfm-cloud/login.html, where IP is the IP address of your appliance. If you try to access this URL without enabling stand-alone access you will receive an HTTP Status 401 – Authentication required error message.

vRB Configuration

We will continue with the configuration in the vRA portal; open the Administration tab and click Business Management.

vRB_Connections

Expand License Information, enter a license key and click Save. Expand Manage Private Cloud Connections and configure the required connections; in this example I have added multiple vCenter Server endpoints. Open the Business Management tab and the Launchpad will load.

vRB_Launchpad

Select Expenses, Private Cloud (vSphere) and click Edit Expenses. At this stage you will need the figures associated with hardware, storage, and licensing for the environment. You can also add costs for maintenance, labour, network, facilities, and any other additional costs.

vRB_Expenses_vSphere

Once vRB is populated with the new infrastructure costs, utilisation and projected pricing will start to be updated. Consumption showback, what-if analysis, and public cloud comparisons can all be accessed from the navigation menu on the left hand side. For further guidance on getting the most out of vRB see the vRealize Business for Cloud User Guide.

vRB_Operational

vCenter Server Appliance Integrated TFTP Server

This post covers the steps required to use the vCenter Server Appliance for Auto Deploy, with the built in TFTP server in vSphere 6.5. For more information on Auto Deploy, and to see the process for creating ESXi images and deploy rules to boot hosts, see the VMware Auto Deploy 6.x Guide. This post assumes that you have a working vCenter Server Appliance, and may be of use if you have recently migrated from Windows vCenter to VCSA.

Enable Auto Deploy

Open the vSphere web client and click System Configuration, Nodes. Select the vCenter Server and open the Related Objects tab. The Auto Deploy, ImageBuilder Service, and VMware vSphere ESXi Dump Collector services should all be set to Automatic and Running.

To start a service right click and select Start, then select Edit Startup Type and choose Automatic.

services

Log out of the web client and log back in. You should now see the Auto Deploy icon on the home page.

autodeploy1

Enable TFTP

Now that Auto Deploy is enabled we can configure the TFTP server. Enable SSH on the VCSA by browsing to the Appliance Management page: https://VCSA:5480 where VCSA is the IP or FQDN of your appliance.

Log in as the root account. From the Access page enable SSH Login and Bash Shell.

SSH

SSH onto the vCenter Appliance, using a client such as Putty, and log in with the root account. First type shell and hit enter to launch Bash.

To start the TFTP service enter service atftpd start. Check the service is started using service atftpd status.

startservice

To allow TFTP traffic through the firewall on port 69 we must run iptables -A port_filter -p udp -m udp --dport 69 -j ACCEPT. Validate traffic is being accepted over port 69 using iptables -nL | grep 69.

firewall

The TFTP server will now work; however, we need to make a couple of additional changes to make the configuration persistent after the VCSA is rebooted. There isn’t an official VMware way of doing this, and as it’s done in Linux there may be more than one way of achieving what we want. Basically I am going to back up iptables and create a script to restore iptables and start the TFTP service when the appliance boots. The steps are outlined below and this worked for me, however as a reminder this is not supported by VMware, and if you are a Linux expert you’ll probably find a better way round it.

The following commands are all run in Bash on the vCenter Server Appliance; you can stay in the existing session we were using above.

First make a copy of the existing iptables config by running iptables-save > /etc/iptables.rules.

Next change the directory by running cd /etc/init.d, and create a new script: vi scriptname.sh, for example: vi starttftp.sh.

Press I to begin typing. I used the following, which was copied from the Image Builder Service startup script, and modified for TFTP.

#! /bin/sh
#
# TFTP Start/Stop the TFTP service and allow port 69
#
# chkconfig: 345 80 05
# description: atftpd

### BEGIN INIT INFO
# Provides: atftpd
# Required-Start: $local_fs $remote_fs $network
# Required-Stop:
# Default-Start: 3 5
# Default-Stop: 0 1 2 6
# Description: TFTP
### END INIT INFO

service atftpd start
iptables-restore -c < /etc/iptables.rules

The file must be in the above format to be compatible with chkconfig which runs the script at startup. I left the defaults in from the Image Builder Service as it made sense they started at the same time and had the same dependencies. If you wish to modify further see the following sources: Bash commands, Script Options, Startup, and vmwarebits.com for the iptables commands.

Press escape to leave the editor and :wq to save the file and quit.

Next set execute permissions on the script by running chmod +x scriptname.sh, for example: chmod +x starttftp.sh.

To set the script to run at startup use chkconfig --add scriptname.sh, for example: chkconfig --add starttftp.sh.

Reboot the vCenter appliance to test the script is running. If successful the atftpd service will be started and port 69 allowed, you can check these with service atftpd status and iptables -nL | grep 69.

Close the session and disable SSH if required.

Configure DHCP

In this example I will be using PXE boot to boot the ESXi hosts using a DHCP reservation. On the DHCP scope that will be serving the hosts I have configured reservations and enabled options 066 and 067. The value for option 066 (Boot Server Host Name) is the IP address or FQDN of the vCenter Server where TFTP is running. For option 067 (Bootfile Name) I have entered the BIOS DHCP file name (undionly.kpxe.vmw-hardwired).
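
The options above are set in the Windows DHCP console. Purely for illustration, if your DHCP service happened to be ISC dhcpd instead, the equivalent reservation would look something like this (the MAC and IP addresses are placeholders):

host esxi01 {
  hardware ethernet 00:50:56:01:02:03;      # MAC address of the host boot NIC
  fixed-address 192.168.0.101;              # reserved IP for the host
  next-server 192.168.0.50;                 # TFTP server, i.e. the VCSA (option 066)
  filename "undionly.kpxe.vmw-hardwired";   # BIOS boot file name (option 067)
}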

DHCP

Now that Auto Deploy is up and running using the built-in components of VCSA 6.5 you can begin creating ESXi images and deploy rules to boot hosts using the Auto Deploy GUI. See the VMware Auto Deploy 6.x Guide.

Add a User Defined Windows Administrator to a vRA Blueprint

This post will walk through implementing a process allowing a vRA portal user to specify a user account to be added to the local administrators group on a Windows server provisioned by vRA. There are plenty of posts out there, including a KB article, on adding the virtual machine requester (owner) to the administrators group if that is what you need to do. Before beginning I am assuming you have a fully working vRA installation (I’m using v7.2), and Windows templates with the vRealize Automation Guest Agent installed. Some blueprints would also be handy, but you can create those afterwards.

We’ll need a script on the template Windows machine; in this example I’ve created a Scripts sub-folder within the VRMGuestAgent folder, and a new text file which I’ve saved as AdminUser.cmd. The full path therefore is C:\VRMGuestAgent\Scripts\AdminUser.cmd.

Location

Copy and paste the following line into the batch file: Net localgroup administrators /add %1

Script

Log in to the vRA portal, for example https://*loadbalancer*/vcac/org/*tenant*. Open the Administration tab and select Property Dictionary. We need to provide the user with a field in the virtual machine request process for them to specify an account to be added as a local administrator. Click Property Definitions and New.

  • Enter a name; it is best practice to use the tenant name, a dot, and then the name of the property definition, for example YourTenant.AdminUser.
  • Enter a useful description; this text will be displayed when the user points to the help symbol next to the field we’re adding in the virtual machine request.
  • Change the Data type to String, and select whether you want the field to be mandatory.
  • From the Display as drop-down menu select Textbox. Click Ok to save.

Admin1

Next click Property Groups. If your blueprints are using an existing property group then click the property group. If you need to create a new property group click New and enter a name. The following lines need to be added to the property group that is used, or will be used, by a blueprint.

  • Name:   VirtualMachine.Software0.Name
  • Value:   AdminUser
    • Replace the value with an appropriate name for the property, I have used the same name as the script but it doesn’t have to match up.
  • Name:   VirtualMachine.Software0.ScriptPath
  • Value:   C:\VRMGuestAgent\Scripts\AdminUser.cmd {YourTenant.AdminUser}
    • Replace the value with the location of the script on the template OS and include the squiggly brackets; with the name of the property definition we created earlier inside.
  • Name:   YourTenant.AdminUser
  • Value:
  • Show in Request:   Yes
    • Enter the name of the property definition we created earlier and leave the value blank (this will be entered by the user). Ensure Show in Request is ticked.

If you are already using VirtualMachine.Software0 for something else, such as adding the virtual machine owner to the local administrators group, then you can amend to VirtualMachine.Software1 and so on. When you’re done the entries should look something like this; click Ok.

Properties

If you haven’t yet assigned a property group to your blueprint then click the Design tab and Blueprints. Click the blueprint to edit, select the vSphere_Machine and click the Properties tab, then from the Property Groups tab click Add.

CustomProperty

Select the property group we recently created or changed and click Ok. Click Save and Finish. The values in the property group will now be applied to any virtual machines deployed from this blueprint; repeat as required for any other vSphere_Machines or blueprints.

Assuming your blueprint is published and has the necessary entitlements, click the Catalog tab. Locate the catalog item linked to the blueprint and click Request. Select the vSphere_Machine component and you’ll see the new field for the requester to enter the domain\user or user@domain account to be added to the Windows local Administrators group. If you opted to make data input mandatory you’ll see an asterisk next to the new field.

Request