
vRealize Operations 6.4 Install Guide

The vRealize product suite is a complete enterprise cloud management and automation platform for private, public, and hybrid clouds. Specifically, vRealize Operations Manager provides intelligent operations management across heterogeneous physical, virtual, and cloud environments from a wide range of vendors. vRealize Operations Manager is able to deliver proactive and automated performance improvements by implementing resource reclamation, configuration standardisation, workload placement, planning, and forecasting techniques. By leveraging vRealize Operations Manager users can protect their environment from outages with preventative and predictive analytics and monitoring across the estate, utilising management packs to unify operations management. The image below is taken from the vRealize Operations Manager datasheet.

vro

vRealize Operations Manager can be deployed as a single node cluster or a multiple node cluster. In single node environments the master node is deployed with adapters installed, which collect data and perform analysis. For larger environments additional data nodes can be added to scale out the solution; these deployments are known as multiple node clusters. In a multiple node cluster the master node is responsible for the management of all other nodes, while data nodes handle data collection and analysis. High availability can be achieved by converting a data node into a replica of the master node. For distributed environments remote collector nodes can be deployed to gather inventory objects and navigate firewalls in remote locations; these nodes do not store data or perform analytics, and you can read more about remote collector nodes here. In this post we will deploy a single node cluster for small environments, proof of concept, test, or lab purposes, and link it to a vCenter Server instance. There will also be references to larger deployments and scaling out the application throughout the guide. If you have already deployed your vRealize cluster and want to add additional nodes or configure High Availability, click here.

Licensing is split into three editions: Standard, Advanced, and Enterprise. To view the full feature list of the different editions see the vRealize Operations page. A number of VMware product suites bundle vRealize Operations, or it can be purchased standalone. Licensing is allocated in portable license units (vCloud Suite and vRealize Suite only), per processor with unlimited VMs, or in packs of 25 VMs (or OS instances).

Design Considerations

  • Additional data nodes can be added at any time using the Expand an Existing Installation option.
  • When scaling out the cluster by 25% or more the cluster should be restarted to optimise performance.
  • The master node must be online before any other nodes are brought online (except for when adding nodes at first setup of the cluster).
  • When adding additional data nodes keep in mind the following:
    • All nodes must be running the same version
    • All nodes must use the same deployment type, i.e. virtual appliance, Windows, or Linux.
    • All nodes must be sized the same in terms of CPU, memory, and disk.
    • Nodes can be in different vSphere clusters, but must be in the same physical location and subnet.
    • Time must be synchronised across all nodes.
  • These rules also apply to replica nodes. Click here to see a full list of multiple node cluster requirements.
  • Remote collector nodes can be deployed to remote locations to gather objects for monitoring. These nodes do not store data or perform any analytics but connect remote data sources to the analytics cluster whilst reducing bandwidth and providing firewall navigation. Read more about remote collector nodes here.
  • When designing a larger vROps environment check the Environment Complexity guide to determine whether you should engage VMware Professional Services, and review the relevant VMware sizing and reference documentation.

Requirements

  • The vRealize Operations Manager virtual appliance can be deployed to hosts running ESXi 5.1 U3 or later, and requires vCenter Server 5.1 U3 or later (it is recommended that vSphere 5.5 or later is used).
  • The virtual appliance is the preferred deployment method; Windows and Linux installers are also available, however the Windows installer will no longer be offered after v6.4, and end of life for the Linux installer is also imminent.
  • A static IP address must be used for each node (to change the IP after deployment see this kb).
  • Review the list of Network Ports used by vRealize Operations Manager.
  • The following table is from the vRealize Operations Manager Sizing Guide and lists the hardware requirements, latency, and configuration maximums.

sizing

Installation

Download vRealize Operations Manager here, in virtual appliance, Windows, or Linux formats. Try for free with hands-on labs or a 60-day trial here.

In this example we are going to deploy as an appliance. Navigate to the vSphere web client home page, click vRealize Operations Manager and select Deploy vRealize Operations Manager.

vro1

The OVF template wizard will open. Browse to the location of the OVA file we downloaded earlier and click Next.

vro2

Enter a name for the virtual appliance, and select a location. Click Next.

vro3

Select the host or cluster compute resources for the virtual appliance and click Next.

vro4

Review the details of the OVA and click Next.

vro5

Accept the EULA and click Next.

vro6

Select the configuration size based on the considerations listed above, then click Next.

vra7

Select the storage for the virtual appliance and click Next.

vra8

Select the network for the virtual appliance and click Next.

vra9

Configure the virtual appliance network settings and click Next.

vra10

Click Finish on the final screen to begin deploying the virtual appliance.

vra11
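
If you prefer to script the appliance deployment rather than step through the web client wizard, the same OVA can be deployed with VMware PowerCLI. The following is a minimal sketch, assuming PowerCLI is installed and connected to your vCenter Server; the server names, datastore, appliance name, and OVF property values are placeholders, and the exact property keys exposed by the OVA vary between versions, so inspect the configuration object before assigning values.

# Connect to vCenter (prompts for credentials)
Connect-VIServer -Server vcenter.lab.local

# Read the OVF configuration from the downloaded OVA and list the available properties
$ovfConfig = Get-OvfConfiguration -Ovf "C:\Downloads\vRealize-Operations-Manager-Appliance.ova"
$ovfConfig.ToHashTable()

# Set the properties your appliance version exposes, e.g. deployment size and network mapping
# (the key names below are placeholders and may differ on your OVA)
$ovfConfig.DeploymentOption.Value = "small"
$ovfConfig.NetworkMapping.Network_1.Value = "VM Network"

# Deploy the appliance to a host and datastore, thin provisioned, then power it on
$vmHost = Get-VMHost -Name "esxi01.lab.local"
$datastore = Get-Datastore -Name "datastore1"
Import-VApp -Source "C:\Downloads\vRealize-Operations-Manager-Appliance.ova" -OvfConfiguration $ovfConfig -Name "vrops-01" -VMHost $vmHost -Datastore $datastore -DiskStorageFormat Thin
Start-VM -VM "vrops-01"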

Setup

Once the virtual appliance has been deployed and is powered on, open a web browser to the FQDN or IP address configured during deployment. Select New Installation.

install1

Click Next to begin the setup wizard.

install2

Configure a password for the admin account and click Next.

install3

On the certificate page select either the default certificates or custom. For assistance with adding custom certificates click here.

install4

Enter the host name for the master node and an NTP server, then click Next.

install5

Click Finish.

install6

If required you can add additional data nodes before starting the cluster, or add them at a later date. See the Design Considerations section of this post before scaling out. To add additional data nodes or configure High Availability follow the steps at vRealize Operations High Availability before starting the cluster. Alternatively, you can start the cluster as a single node cluster and add data nodes or High Availability at a later date.

Since we are deploying a single node cluster we will now click Start vRealize Operations Manager. Depending on the size of the cluster it may take 10-30 minutes to fully start up.

install7

Confirm that the cluster has adequate nodes for the environment and click Yes to start up the application.

install8

After the cluster has started you will be redirected to the user interface. Log in with the admin details configured earlier.

install9

The configuration wizard will automatically start, click Next.

install10

Accept the EULA and click Next.

install11

Enter the license key or use the 60-day product evaluation. Click Next.

install12

Select whether or not to join the VMware Customer Experience Improvement Program and click Next.

install13

Click Finish.

install14

The vRealize Operations Manager dashboard will be loaded. The installation process is now complete. The admin console can be accessed by browsing to https://<IP or FQDN>/admin, where <IP or FQDN> is the IP address or FQDN of your vRealize Operations Manager appliance or server.

install15

To add additional data nodes or configure High Availability see the vRealize Operations High Availability post.

Post Installation

After first setup we need to secure the console by setting a root password. Browse to the vROps appliance in vSphere and open the console. Press ALT + F1 and log in as root; you will be prompted to create a root password. All other work in this post is carried out using the vRealize Operations web interface.

The vRealize Operations web interface can be accessed by browsing to the IP address or FQDN of any node in the vRealize Operations management cluster (master node or replica node). During the installation process the admin interface is presented; after installation the IP address or FQDN resolves to the user interface. To access the admin interface browse to https://<IP or FQDN>/admin, where <IP or FQDN> is the IP address or FQDN of either node in the management cluster. For supported browsers see the vRealize Operations Manager 6.4 Release Notes.

The next step is to configure the vCenter Adapter to collect and analyse data. Select Administration from the left hand navigation pane. From the Solutions menu select VMware vSphere and click the Configure icon.

config1

Enter the vCenter Server details and credentials with administrator access.

config2

Click Test Connection to validate connectivity to the vCenter Server.

config3

Expand Advanced Settings and review the default settings; these can be changed if required. Click Define Monitoring Goals and review the default policy; again, this can be changed to suit your environment.

config4

When you’re ready click Save Settings and Close. The vCenter adapter will now begin collecting data. Collection cycles begin every 5 minutes; depending on the size of your environment the initial collection may take more than one cycle.

config5

Once data has been collected from the vCenter Server go back to the Home page and browse the different tabs and dashboards.

dashboard

Customise your vRealize Operations Manager instance to suit your environment using the VMware documentation.

Windows 2016 Storage Spaces Direct

Storage Spaces Direct for Windows Server 2016 is a software-defined storage solution providing pooled storage resources across industry-standard servers with attached local drives. Storage Spaces Direct (S2D) is able to provide scalability, built-in fault tolerance, resource efficiency, high performance, simplified management, and cost savings.

Storage Spaces Direct is a feature included at no extra cost with Datacentre editions of Windows Server 2016. S2D can be deployed across Windows clusters comprising between 2 and 16 physical servers, with over 400 drives, using the Software Storage Bus to establish a software-defined storage fabric spanning the cluster. Existing clusters can be scaled out by simply adding more drives, or more servers, to the cluster. Storage Spaces Direct will automatically detect the additional resources and absorb the drives into the pool, redistributing existing volumes. Resiliency is provided not only across drives, components, and servers, but can also be configured for chassis, rack, and site fault tolerance by creating fault domains to which data placement conforms. The video below, provided by Microsoft, goes into more detail about fault domains and how they provide resiliency.

Furthermore, volumes can be configured to use mirror resiliency or parity resiliency to protect data. Mirror resiliency protects against drive and server failures by storing a default of three copies of data across different drives in different servers; this is a simple deployment with minimal CPU overhead but a relatively inefficient use of storage, since a three-way mirror consumes roughly three times the raw capacity of the data it protects. Alternatively, parity resiliency spreads parity symbols across a larger set of data symbols, providing drive and server resiliency with a more efficient use of storage resources (this requires 4 physical servers). You can learn more about both of these methods at the Volume Resiliency blog by Microsoft.
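
As a rough illustration of the choice between the two, once Storage Spaces Direct has been enabled (the core cmdlets are sketched later in this post) the resiliency type can be selected per volume with the New-Volume cmdlet; the volume names and sizes below are placeholders.

# Mirror-resilient volume (a three-way mirror by default on clusters of three or more servers)
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName Mirror01 -FileSystem CSVFS_ReFS -Size 1TB -ResiliencySettingName Mirror

# Parity-resilient volume (requires at least 4 physical servers)
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName Parity01 -FileSystem CSVFS_ReFS -Size 1TB -ResiliencySettingName Parity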

The main use case for Storage Spaces Direct is a private cloud (either on or off-premises) using one of two deployment models. In a Hyper-Converged deployment compute and storage reside on the same servers; in this use case virtual machines sit directly on top of the volumes provided by S2D. In a Private Cloud Storage, or Converged, deployment S2D is disaggregated from the hypervisor, providing a separate storage cluster for larger-scale deployments such as IaaS (Infrastructure as a Service). Here a SoFS (Scale-out File Server) is built on top of S2D to provide network-attached storage over SMB3 file shares.

Storage Spaces Direct is configured using a number of PowerShell cmdlets, and utilises Failover Clustering and Cluster Shared Volumes; a sketch of the core cmdlets follows the requirements list below. For instructions on enabling and configuring S2D see Configuring Storage Spaces Direct – Step by Step, Robert Keith, Argon Systems. The requirements are as follows:

  • Windows Server 2016 Datacentre Edition.
  • Minimum of 2 servers, maximum of 16, with local-attached SATA, SAS, or NVMe drives.
  • Each server must have at least 2 solid-state drives plus at least 4 additional drives; the read/write cache uses the fastest media present by default.
  • The SATA and SAS devices should be behind an HBA and SAS expander.
  • Storage Spaces Direct uses SMB3, including SMB Direct and SMB Multichannel, over Ethernet to communicate between servers. 10 GbE or above is recommended for optimum performance.
  • All hardware must support SMB (Server Message Block) and RDMA (Remote Direct Memory Access).
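
The sketch below outlines the core PowerShell workflow referenced earlier: validate the candidate nodes, create the cluster, enable Storage Spaces Direct, and create a volume. The node names, cluster name, IP address, and volume size are placeholders for your environment; refer to the step-by-step guide linked above for a complete walkthrough.

# Validate the candidate nodes, including the Storage Spaces Direct test category
Test-Cluster -Node s2d-node1, s2d-node2, s2d-node3, s2d-node4 -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"

# Create the failover cluster without automatically claiming any shared storage
New-Cluster -Name S2D-CLU01 -Node s2d-node1, s2d-node2, s2d-node3, s2d-node4 -NoStorage -StaticAddress 10.0.0.50

# Enable Storage Spaces Direct; eligible local drives are claimed into a pool and the cache is configured
Enable-ClusterStorageSpacesDirect -CimSession S2D-CLU01

# Create a cluster shared volume on the new pool
New-Volume -CimSession S2D-CLU01 -StoragePoolFriendlyName "S2D*" -FriendlyName Volume01 -FileSystem CSVFS_ReFS -Size 2TB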

s2ddeployments

Deploying EMC Unity VSA

The EMC Unity product line is a mid-range storage platform built completely from the ground up as an eventual replacement for most VNX and VNXe use cases. The Unity virtual storage appliance is a software-defined storage platform bringing the software intelligence of Unity arrays to your existing storage infrastructure.

The Unity VSA is ideal for remote office and branch office (ROBO) environments as well as hardware consolidation and IT staging and testing. It comes in a 4 TB free community edition and a subscription-based professional edition which seamlessly scales up from 10 TB to 20 or 50 TB. The virtual storage appliance includes all the features of the Unity range such as replication, data protection snapshots, FAST VP auto-tiering and more.

See also EMC Unity Setup Guide, which covers a walkthrough on the setup of a physical Unity array.

vsa

Key features

  • Affordable software defined solution
  • Deploy to your existing storage infrastructure
  • Quick and easy setup of CIFS, NFS and iSCSI
  • Unified block, file and VMware VVOLs support
  • Allows VMware administrators to manage storage from vCenter
  • HTML5-enabled Unisphere management
  • Manage virtual storage and physical arrays together

Requirements

  • ESXi 5.5 or later (must be ESXi 6.0 or later for VVOLs)
  • The use of VMware vCenter Server to manage ESXi is optional but recommended
  • The Unity VSA requires 2 vCPU, 12 GB RAM and 6 NICs (4 ports for I/O, 1 for Unisphere, 1 for system use)

If you are deploying the Unity VSA in a production environment then you should consider how the data is stored across your existing hardware, ensuring RAID and HA are configured appropriately. If you are presenting VMware datastores or virtual volumes then consult EMC support for best practices around the VMware vStorage APIs for Array Integration (VAAI) and vStorage APIs for Storage Awareness (VASA).

Deploying Unity VSA

Download the OVA file from https://www.emc.com/products-solutions/trial-software-download/unity-vsa.htm and deploy the OVA to vSphere. Accept the extra configuration options; this just disables time synchronisation of the virtual machine, as time is controlled from within the appliance.

ovf1

The only customisation settings required are the system name and network settings.

ovf2

Once the appliance has been deployed right click the virtual machine and select Edit Settings. Add the virtual hard disks required for the file systems on your virtual appliance; this can be done later, but you will not be able to create any storage pools until additional disks are added. Note that virtual hard disks 1 – 3 are for system use and should not be modified.
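
If you would rather script this step, additional virtual disks can be attached with PowerCLI. The following is a minimal sketch; the VM name, disk count, and sizes are placeholders, and hard disks 1 – 3 of the appliance are left untouched.

# Attach additional virtual disks to the Unity VSA for use in storage pools
$vsa = Get-VM -Name "UnityVSA-01"

# Add three 100 GB thick-provisioned disks (adjust count and size to suit your pools)
1..3 | ForEach-Object {
    New-HardDisk -VM $vsa -CapacityGB 100 -StorageFormat Thick
}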

Power on the appliance; when it has fully booted, browse to the IP address configured during the OVF deployment process. Log in with the default username admin and the password Password123#.

vsa1

The Unisphere configuration wizard will auto start, click Next.

vsa2

Accept the license agreement and click Next.

vsa3

Configure the admin and service passwords and click Next.

vsa4

Obtain a license key from https://www.emc.com/auth/elmeval.htm and click Install License to upload the .lic file, then click Next.

vsa5

Configure the DNS servers and click Next.

vsa6

Configure the NTP servers and click Next.

vsa7

You can create a pool now or later. To create a storage pool now click Create Pools. Unisphere scans for virtual disks available to the VM that can be used for a storage pool. Once the storage pool has been created click Next.

vsa8

Configure the SMTP server and recipients for email alerts, then click Next.

vsa9

Add the network interfaces to use for iSCSI and click Next.

vsa10

Add a NAS server to store metadata and click Next.

vsa11

This concludes the Unisphere configuration wizard.

vsa12

You will be returned to the Unisphere dashboard.

vsa13

The virtual storage appliance has now been deployed and uses the same software and Unisphere interface as its hardware counterpart. From here you can go ahead and set up CIFS and NFS shares or present iSCSI targets.
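
As an example of consuming this storage from vSphere, once an NFS file system has been shared from the VSA it can be mounted as a datastore with PowerCLI. The sketch below is hypothetical; the cluster name, datastore name, NFS server address, and export path are placeholders for your environment.

# Mount an NFS export from the Unity VSA as a datastore on every host in a cluster
Get-Cluster -Name "Lab-Cluster" | Get-VMHost | ForEach-Object {
    New-Datastore -Nfs -VMHost $_ -Name "unity-nfs-01" -NfsHost "192.168.1.60" -Path "/unity-fs01"
}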