How to Install vSphere 7.0 – vRealize Operations Manager 8.2

Introduction

In this post we take a look at a vRealize Operations (vROps) deployment for vSphere 7, building on the installation of vCenter 7.0 U1 and vSAN 7.0 U1. Shortly after this vROps 8.2 install, vRealize Operations 8.3 was released. The install process is similar; you can read what’s new here and see the upgrade process here.

vRealize Operations is an IT operations management tool for monitoring full-stack physical, virtual, and cloud infrastructure, providing virtual machine, container, operating system, and application level insights. vROps delivers performance and capacity optimisation, monitoring and alerting, troubleshooting and remediation, and dashboards and reporting. It also handles private cloud costings, showback, and what-if scenarios for VMware, VMware Cloud, and public cloud workloads. Many of these costing features arrived with version 8.2 and are now fully integrated into the vROps user interface rather than delivered as a standalone product; previously vRealize Business catered for similar costing requirements, but it has since been declared end of life.

vRealize Operations can be deployed on-premises to an existing VMware environment, or consumed as Software-as-a-Service (SaaS). vRealize Operations Cloud has the same functionality, with the ongoing operational overhead of lifecycle management and maintenance taken care of by VMware. Multiple vCenter Servers or cloud accounts can be managed and monitored from a single vROps instance. For more information on vROps see the What is vRealize Operations product page.

vRealize Operations Manager 8.2 Install Guide

The vRealize Operations Manager installation for a single instance is straightforward, as is applying management packs for monitoring additional environments. The installation gets more complex when multiple cluster nodes need to be deployed, along with remote collector nodes and/or multiple instances. If you think this may apply to you, review the complexity levels outlined in the vRealize Operations Manager 8.2 Deployment Guide.

The installation steps below walk through the process of installing vROps using the master node. All deployments start out with a master node, which in some cases is sufficient to manage itself and perform all data collection and analysis operations. Optional nodes can be added in the form of further data nodes for larger deployments, replica nodes for highly available deployments, and remote collector nodes for distributed deployments. Remote collector nodes, for example, can be used to compress and encrypt data collected at another site or on another VMware Cloud platform. This could be an architecture where a solution like Azure VMware Solution is in use with an on-premises installation of vROps. For more information on the different node types and availability setups see the deployment guide linked above.

When considering the deployment size and node design for vROps, review the VMware KB vRealize Operations Manager Sizing Guidelines, which is kept up to date with sizing requirements for the latest versions. The compute and storage allocations needed depend on your environment, the type of data collected, the data retention period, and the deployment type.

Installation

Before starting, ensure you have a static IP address ready for the master node or, ideally, both a static IP address and a Fully Qualified Domain Name (FQDN) with forward and reverse DNS entries. For larger than single node deployments check the Cluster Requirements section of the deployment guide.
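If you want to sanity-check the name resolution before deploying the appliance, a short script can confirm the forward and reverse lookups agree. The sketch below uses Python’s standard library only; the FQDN and IP address are placeholders for your own values.

```python
import socket

# Placeholder values - replace with the FQDN and static IP planned for the master node
fqdn = "vrops.lab.local"
expected_ip = "192.168.1.50"

# Forward lookup: the FQDN should resolve to the static IP reserved for the appliance
resolved_ip = socket.gethostbyname(fqdn)
print(f"Forward lookup: {fqdn} -> {resolved_ip}")
assert resolved_ip == expected_ip, "Forward DNS record does not match the expected IP"

# Reverse lookup: the IP should resolve back to the same FQDN (PTR record)
reverse_name, _, _ = socket.gethostbyaddr(expected_ip)
print(f"Reverse lookup: {expected_ip} -> {reverse_name}")
assert reverse_name.rstrip(".").lower() == fqdn.lower(), "Reverse DNS record does not match the FQDN"

print("Forward and reverse DNS entries look consistent")
```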

The vRealize Operations Manager appliance can be downloaded in Open Virtualisation Format (OVF) here, and the release notes for v8.2.0 here. As with many VMware products, a 60-day evaluation period is applied. The vRealize Operations Manager OVF needs to be deployed for each vROps cluster node in the environment. Deployment and configuration of vRealize Operations Manager can also be automated using vRealize Suite Lifecycle Manager.

vRealize Operations Manager download

Log into the vSphere client and deploy the OVF (right click the data centre, cluster, or host object and select Deploy OVF Template).

The deployment interface prompts for the usual options like compute, storage, and IP address allocation, as well as the appliance size based on the sizing guidelines above. Do not include an underscore (_) in the hostname. The disk sizes (20 GB, 250 GB, 4 GB) are the same regardless of the appliance size configured. New disks can be added, but extending existing disks is not supported. Also be aware that snapshots can cause performance degradation and should not be used. For this deployment I have selected the small appliance size: 4 CPU, 16 GB RAM.
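For repeat deployments the same wizard inputs can be supplied from the command line with VMware’s ovftool, wrapped here in a small Python sketch. Treat it as illustrative only: the vCenter address, inventory path, datastore, network name, and in particular the OVF property keys and deployment option value are placeholders and assumptions for this post; probe the downloaded OVA with ovftool first to list the exact properties and sizes the appliance expects.

```python
import subprocess

# All values below are placeholders for your environment. The --prop keys and the
# deployment option value are assumptions - run "ovftool <ova file>" on its own to
# list the real property names and deployment options before using them.
cmd = [
    "ovftool",
    "--acceptAllEulas",
    "--name=vrops-master",
    "--deploymentOption=small",      # assumed key for the small (4 CPU / 16 GB) size
    "--datastore=vsanDatastore",
    "--network=Management",
    "--diskMode=thin",
    "--prop:vami.ip0.vRealize_Operations_Manager_Appliance=192.168.1.50",        # assumed key
    "--prop:vami.netmask0.vRealize_Operations_Manager_Appliance=255.255.255.0",  # assumed key
    "--prop:vami.gateway.vRealize_Operations_Manager_Appliance=192.168.1.1",     # assumed key
    "--powerOn",
    "vRealize-Operations-Manager-Appliance-8.2.0.ova",
    "vi://administrator%40vsphere.local@vcenter.lab.local/Datacenter/host/Cluster/",
]

subprocess.run(cmd, check=True)
```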

Once deployed, browse to the appliance FQDN or IP address to complete the setup. You can double-check the IP address from the virtual machine page in vSphere or the remote console. For larger environments and additional settings like custom certificates, high availability, and multiple nodes, select New Installation. In this instance, since vROps will be managing only a single vCenter Server with 3 or 4 hosts, I select the Express Installation.

vRealize Operations Manager start page

The vRealize Operations Manager appliance will be set as the master node; this configuration can be scaled out later if needed. Click Next to continue.

vRealize Operations Manager new cluster setup

Set an administrator password at least 8 characters long, containing an uppercase letter, a lowercase letter, a number, and a special character, then click Next. Note that the user name is admin, not administrator.

vRealize Operations Manager administrator credentials

Click Finish to apply the configuration. A loading bar preparing vRealize Operations Manager for first use will appear. This stage can take up to 15 minutes.

vRealize Operations Manager initial setup

Log in with the username admin and the password set earlier.

vRealize Operations Manager login page

There are a few final steps to configure before gaining access to the user interface. Click Next.

vRealize Operations Manager final setup

Accept the End User License Agreement (EULA) and click Next.

vRealize Operations Manager terms and conditions

Enter the license information and click Next.

vRealize Operations Manager license information

Select or deselect the Customer Experience Improvement Program (CEIP) option and click Next. Click Finish to progress to the vROps user interface.

vRealize Operations Manager final setup

Finally we’re into the vRealize Operations home page; take a look around, or go straight to Add Cloud Account.

vRealize Operations Manager home page

Select the account type; in this case we’re adding a vCenter Server.

vRealize Operations Manager account types

Enter a name for the account, and the vCenter Server FQDN or IP address. I’m using the default collector group since we are only monitoring a small lab environment. You can test using Validate Connection, then click Add.

vRealize Operations Manager add vCenter Server
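The same cloud account can also be created programmatically through the vROps Suite API once the cluster is online. The sketch below is a rough outline only, assuming the token and adapters endpoints documented on the appliance under /suite-api/; the payload field names for the vCenter adapter instance are assumptions, so verify them against the API reference before relying on this.

```python
import requests

VROPS = "https://vrops.lab.local"   # placeholder FQDN
VERIFY = False                      # lab only - validate certificates properly in production

# 1. Acquire an API token with the admin account created during setup
auth = requests.post(
    f"{VROPS}/suite-api/api/auth/token/acquire",
    json={"username": "admin", "password": "REPLACE-ME"},   # placeholder credentials
    headers={"Accept": "application/json"},
    verify=VERIFY,
)
auth.raise_for_status()
token = auth.json()["token"]

headers = {
    "Authorization": f"vRealizeOpsToken {token}",
    "Accept": "application/json",
}

# 2. Create a vCenter adapter instance (cloud account) - field names are assumptions
payload = {
    "name": "Lab vCenter",
    "adapterKindKey": "VMWARE",
    "resourceIdentifiers": [
        {"name": "VCURL", "value": "vcenter.lab.local"},
    ],
    # credential details for the vCenter account would normally be supplied here too
}
response = requests.post(
    f"{VROPS}/suite-api/api/adapters",
    json=payload,
    headers=headers,
    verify=VERIFY,
)
print(response.status_code, response.text)
```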

Give the vCenter account a few minutes to sync up and the status should change to OK. A message in the right-hand corner will notify you that the vCenter collection is in progress.

vRealize Operations Manager vCenter collection

Back at the home page a prompt is displayed to set the currency, configurable under Administration, Management, Global Settings, Currency. In this case I’ve set GBP (£). For accurate cost comparisons and environment-specific optimisations you can also add your own costs for things like hardware, software, facilities, and labour. Cost data can be customised under Administration, Configuration, Cost Settings.

vRealize Operations Manager quick start page

A common next step is to configure access using your corporate Identity Provider, such as Active Directory. Click Administration, Access, Authentication Sources, Add, and configure the relevant settings.

Multiple vCenter Servers can be managed from the vRealize Operations Manager interface. Individual vCenter Servers can also access vROps data from the vSphere client, under the Menu dropdown and vRealize Operations. A number of nested ESXi hosts are shut down in this environment, which is generating the critical errors in the screenshot.

vRealize Operations Manager overview page

Featured image by Jonas Svidras on Unsplash

What is Oracle Cloud VMware Solution?

Introduction

Oracle Cloud VMware Solution (OCVS) provides high performance dedicated hardware on Oracle Cloud Infrastructure (OCI), running the full VMware software stack. Announced in August 2020, OCVS lets existing VMware and Oracle customers take advantage of:

  • Infrastructure-as-a-Service (IaaS) model with VMware overlay – abstracting functionality into the software for the customer to control, whilst consuming the underlying infrastructure as a service. This removes the overhead of traditional data centre maintenance tasks such as hardware and firmware patching, or failure remediation.
  • Cloud migrations with reduced risk and operational continuity – example use cases include data centre exits, data centre scale out, and disaster recovery or increased availability.
  • Hybrid applications and outcome focused refactoring – VMware workloads can run natively on Oracle Cloud Infrastructure, but can also be refactored gradually over time, or where there are clear business drivers and benefits in doing so.
  • Secure single tenancy infrastructure – tier 3 and 4 secure cloud data centres, as well as an exclusive dual-region Government Cloud for the UK public sector in London and Newport, connected through a high speed private network.
Example application migration and modernisation using Oracle Cloud VMware Solution

Oracle Cloud VMware Solution Explained

Oracle Cloud VMware Solution wraps up the automated deployment and configuration of VMware Cloud Foundation (VCF) onto physical hardware in Oracle’s cloud data centres. Oracle is the VMware Cloud Provider Partner (VCPP) and the single point of support for the whole stack. VCF is a standardised architecture made up of VMware’s market leading Software-Defined Data Centre (SDDC) stack, consisting of vSphere, vSAN, NSX-T, and SDDC Manager.

Oracle Cloud VMware Solution is priced per OCPU for physical nodes, with all hardware, support, and VMware licensing included. VMware’s HCX capabilities are also included, and features like L2 network extension combined with VCF provide seamless migrations of VMware workloads to the cloud. This not only removes the need to refactor or rehost, it also means no changes to the virtual machine file format or network settings. VMware administrators can leverage a common software-based infrastructure across on-premises and the cloud, which they continue to manage using their existing tools and skills.

OCVS is deployed to the customer’s existing Oracle Cloud account using an existing Virtual Cloud Network (VCN). Virtual machines residing in the SDDC can then integrate with native cloud services, like Oracle RAC, Exadata, and Database Cloud Services, over the Oracle Cloud backbone network. Along with full control over the Oracle Cloud account, the customer retains full administrative/root access within the VMware stack, giving end-to-end control over the environment and the ability to implement zero trust security protocols and policies.

Announcing the Global Availability of Oracle Cloud VMware Solution

During the deployment process, OCVS asks which region will host the SDDC. An Oracle Cloud region is a geographic area containing at least one availability domain, with regions being built out to three. An availability domain is a data centre or site, and within it are three fault domains used to spread workloads across hardware and racks to protect against common failures. The vSAN element of VCF replicates storage between physical hosts for high availability across fault domains.
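To see how your own tenancy is laid out before deploying, the OCI Python SDK can list the availability domains in the subscribed region and the fault domains within each. A minimal sketch, assuming the oci package is installed and a standard ~/.oci/config profile exists:

```python
import oci

# Assumes a standard ~/.oci/config file with a DEFAULT profile
config = oci.config.from_file()
identity = oci.identity.IdentityClient(config)
tenancy_id = config["tenancy"]

# Each region contains one or more availability domains (data centres)...
for ad in identity.list_availability_domains(tenancy_id).data:
    print(f"Availability domain: {ad.name}")
    # ...and each availability domain contains three fault domains (rack groupings)
    for fd in identity.list_fault_domains(tenancy_id, ad.name).data:
        print(f"  Fault domain: {fd.name}")
```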

Although the management of the underlying infrastructure hardware is carried out by Oracle, the VMware stack is managed by the customer; giving them the choice of product versions to run and when to upgrade. This allows full interoperability with existing third party solutions, like backups, monitoring, and security products, whilst reducing the risk of cloud migrations and data breaches. At the same time, removing the burden of hardware lifecycle management means engineers can focus on service improvements and project delivery.

A minimum of 3 and a maximum of 64 physical nodes can be deployed in each vSphere cluster, using the BM.DenseIO2.52 bare metal instance type. The minimum 3-node cluster provides 156 OCPUs, 2,304 GB RAM, and around 153 TB of raw NVMe storage.
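As a quick back-of-the-envelope check of those figures, and of how far a cluster can scale, the published per-node specification of the BM.DenseIO2.52 shape (52 OCPUs, 768 GB RAM, roughly 51.2 TB of raw NVMe) multiplies out as follows:

```python
# Per-node specification of the BM.DenseIO2.52 shape used by OCVS
OCPUS_PER_NODE = 52
RAM_GB_PER_NODE = 768
NVME_TB_PER_NODE = 51.2   # raw capacity, before vSAN policies and overheads

MIN_NODES, MAX_NODES = 3, 64

for nodes in (MIN_NODES, MAX_NODES):
    print(
        f"{nodes} nodes: "
        f"{nodes * OCPUS_PER_NODE} OCPUs, "
        f"{nodes * RAM_GB_PER_NODE} GB RAM, "
        f"{nodes * NVME_TB_PER_NODE:.1f} TB raw NVMe"
    )
```

For the minimum cluster this gives 156 OCPUs, 2,304 GB RAM, and around 153.6 TB of raw NVMe, matching the figures above; usable vSAN capacity will be lower once failure tolerance policies are applied.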

To read more about OCVS take a look at the Oracle documentation, as well as Simon Long’s blog where you can see an OCVS deployment and example integration with OCI services. Steve Nelson provides a great in-depth overview of Oracle Cloud VMware Solution available on YouTube below, and through the Oracle Cloud presents at Cloud Field Day 10 page.

Cloud Field Day 10 – Oracle Cloud VMware Solution

Oracle Cloud VMware Solution is the latest VMware Cloud Foundation hyperscaler offering, accelerating cloud migrations through familiar technologies and investments. OCVS complements VMware’s partnerships with AWS (VMware Cloud on AWS), Microsoft (Azure VMware Solution), and Google (Google Cloud VMware Engine), allowing organisations to select the right public cloud for their VMware workloads. If you’re interested in relocating VMware workloads to public cloud check out The Complete Guide to VMware Hybrid Cloud.

Relocating UK Public Sector to the Cloud

Introduction

A recent guidance paper published by The Commission for Smart Government urges the UK Government to take action towards transforming public services into intrinsically digital services. The Commission advises the government to move all services to the cloud by 2023.

It is clear from the paper that strong leadership and digital understanding amongst decision makers is incredibly important. This is something I noted when writing this post on defining a cloud strategy for public sector organisations. The cloud strategy should set out how technology supports and delivers the overall organisational goals.

If implemented correctly, cloud computing can maximise security and business benefits, automating and streamlining many tasks that are currently manual and slow. Published by the National Cyber Security Centre in November 2020, the Security Benefits of Good Cloud Service whitepaper provides some great pointers that should be incorporated into any cloud migration strategy.

This article discusses how to achieve a common cloud infrastructure, focusing on brownfield environments where local government, and other public sector organisations like the NHS, need to address some of the challenges below.

Common Challenges

  • IT is rarely seen as delivering value to end users, citizens, patients, etc. Often budgets are being reduced but IT are being asked to deliver more, faster. In general, people have higher demands of technology and digital services. Smart phones are now just called phones. Internet-era companies like Amazon, Google, and Netflix provide instant access to products, services, and content. Consumer expectations have shifted and the bar is raised for public services.
  • IT staff are under pressure to maintain infrastructure hardware and software. More vulnerabilities are being exposed, and more targeted cyber attacks launched, than ever before, which means constant security patching and fire-fighting. I’d like to say it also means more systems being architecturally reviewed and improved, but the reality is that most IT teams are still reacting. Running data centres comes with an incredible operational burden.
  • Understanding new technologies well enough to implement them confidently requires time and experience. There are more options than ever for infrastructure: on-premises, in the cloud, at the edge, or managed services such as Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS). Furthermore, applications are no longer just monolithic or 3-tier; they are becoming containerised, packaged, hybrid, or managed as Software-as-a-Service (SaaS). IT teams are expected to maintain and securely join up all these different services whilst repurposing existing investments in supporting software and technical knowledge.
  • Business models are changing at pace, successful organisations are able to react quickly and make use of data to predict and understand their customers and consumers. The emergence of smart cities and smart hospitals can improve public services and enable cost-savings, but needs to be delivered on a strong digital foundation with fast, reliable connectivity. This approach requires joined up systems that share a secure, scalable, and resilient platform. In an ideal world applications and data should be abstracted from the underlying infrastructure in a way that allows them to securely move or be redeployed with the same policies and user experience, regardless of the hardware or provider. Legacy hardware and older systems are mostly disjointed, built in silos, with single points of failure and either non-existent or expensive business continuity models.
  • Innovation typically takes longer when the risk extends beyond monetary value. The ideas of agile development and fail-fast experimentation will naturally be challenged more for public facing services. A 999 operator locating a specialist hospital for an ambulance response unit cannot afford unpredictability or instability because developers and engineers were failing-fast. Neither can a family dependent on a welfare payment system. In environments where services are stable and reliable there is less appetite for change, even when other areas of the organisation are crying out for fast and flexible delivery.

Cloud Migration Strategies

Greater economic and technical benefits can be achieved at scale. Hyperscalers have access to cheaper commodity hardware and renewable energy sources. They are able to invest more in physical security and auditing. Infrastructure operations that are stood up and duplicated thousands of times over across the UK by individual public sector organisations can shift to the utility-based model of the cloud, freeing IT staff from fire-fighting to focus on delivering quality digital services at speed.

There are 7 R’s widely accepted as cloud migration strategies. These are listed below with a particular focus on relocate. Whilst a brand new startup might go straight into a cloud-native architecture by deploying applications through micro-services, those with existing customers and users have additional considerations. Migrating to the cloud will in most cases use more than one of the options below. Implementing the correct migration strategy for existing environments, alongside new cloud-native services, can reduce the desire for people to use shadow IT. Finding the right balance is about understanding the trade-off between risk, cost, time, and the core organisational drivers mentioned earlier.

  1. Retire. No longer needed – shut it down. Don’t know what it is – shut it down. This is a very real option for infrastructure teams hosting large numbers of Virtual Machines. VM sprawl that has built up over the years could surprise you.
  2. Retain. Leaving applications on-premises. This doesn’t necessarily mean doing nothing. For the most part your existing applications should run in the cloud, but the requirement for some applications to be closer to the action has driven edge computing forward. Hardware advancements in areas like Hyper-Converged Infrastructure (HCI) enable high performance computing in single socket, small footprint form factors, or hardware able to withstand higher operating temperatures for locations away from data centre cooling. The key is to maintain that common underlying infrastructure, enabling service deployment in the cloud or at the edge with consistent operations and technologies.
  3. Repurchase. For example, changing an on-premises, self-maintained application to a SaaS alternative. This could be the same product in SaaS form, or a competitor. The main technical consideration now becomes connectivity and how the application is accessed. Focus generally shifts away from the overall architecture of the application itself towards transitioning or onboarding users and importing data.
  4. Rehost. Changing a Virtual Machine to run on a different hypervisor. This could be a VMware or Hyper-V VM converted to run on a cloud provider’s hypervisor as a particular instance type. This can be relatively straightforward for small numbers of Virtual Machines, but consider other dependencies that will need building out, such as networking, security, load balancing, backups, and Disaster Recovery. Although not huge, this potential change in architecture adds more time, complexity, and risk as the size of the environment grows.
  5. Replatform. Tweaking elements of an application to run as a cloud service. This is often shifting from self-hosted to managed services, such as migrating a database from a VM with an Operating System to a managed database service. Replatform is a common approach for like-for-like infrastructure services like databases and storage.
  6. Refactor. The big bang. Rearchitecting an entire application to run as a cloud-native app. This normally means rewriting source code from scratch using a micro-services architecture or serverless / function based deployment. Infrastructure is deployed and maintained as code and can be stateless and portable. A desirable end state for modern applications.
  7. Relocate. Moving applications and Virtual Machines to a hyperscaler / cloud provider without changing network settings, dependencies, or the underlying VM file format and hypervisor. This results in a seamless transition without business disruption.

Why Relocate Virtual Machines?

Relocating Virtual Machines is a great ‘lift-and-shift’ method for moving applications into the cloud. To get the most value out of this migration strategy it can be combined with one or more of the other approaches: generally replatforming some of the larger infrastructure components like database and file storage, or refactoring a certain part of an application, such as a component that is problematic, one that will provide a commercial or functional benefit, or one that improves the end user experience. By auditing the whole infrastructure and applying this blueprint we can strike the right balance between moving to the cloud and protecting existing services.

For existing VMware customers, VMware workloads can be moved to AWS (VMware Cloud on AWS), Azure (Azure VMware Solution), Google Cloud (Google Cloud VMware Engine), Oracle Cloud (Oracle Cloud VMware Solution), as well as IBM Cloud and UK based VMware Cloud Provider Partners without changing the workload format or network settings. This provides the following benefits:

  • Standardised software stack – A Software-Defined Data Centre (SDDC) that can be deployed across commodity hardware in public and private clouds or at the edge, creating a common software-based cloud infrastructure.
  • Complete managed service – The hardware and software stack is managed from the infrastructure down, removing the operational overhead of patching, maintenance, troubleshooting, and failure remediation. Data centre tasks become automated workflows, allowing for on-demand scaling of compute and storage.
  • Operational continuity – Retain skills and investment for managing applications and supporting software (backups, monitoring, security, etc.), allowing solution replacement and application refactoring to take place at a gradual pace and with lower risk, for example when contracts expire.
  • Full data control – Everything from the Virtual Machine up is managed by the customer: security policies, data location (UK), and VM and application configuration, providing the best of both worlds. Cloud security guardrails can be implemented to standardise and enforce policies and prevent insecure configurations. These same policies can extend into native cloud services and across different cloud providers using CloudHealth Secure State.
  • Sensible transformation – Although a longer term switch from capex investment to opex expenditure is required, due to the on-demand subscription based nature of many cloud services, dedicated hardware lease arrangements in solutions like those listed above can potentially be billed as capital costs. This gives finance teams time to adapt and change, along with the wider business culture and processes.
  • Hybrid applications – Running applications that make use of native cloud services in conjunction with existing components, such as Virtual Machines and containers, supports a gradual refactoring process and de-risks the overall project.
Azure VMware Solution Basic Architecture
Example application migration and modernisation using Azure VMware Solution

To read more about the information available from the Government Digital Service and other UK sources see Helping Public Sector Organisations Define Cloud Strategy.

If you’re interested in seeing VMware workloads relocated to public cloud check out The Complete Guide to VMware Hybrid Cloud.