
How to Deploy VMware Horizon Cloud on Microsoft Azure

This post provides a high-level walkthrough of a VMware Horizon Cloud on Azure deployment, with some additional gotchas and insights. As with any cloud computing article, features and functionality change quickly, so use multiple information sources; some are listed in this article.

Introduction

VMware Horizon Cloud is a cloud-native virtual desktop platform that transforms an organisation’s digital workspace experience. Virtual desktops and applications can be accessed by end-users securely from any device, anywhere, with a cost-effective subscription-based model. Infrastructure administrators can deploy highly available and distributed environments consuming capacity from Microsoft Azure, VMware Cloud on AWS, or on-premises infrastructure. VMware Horizon Cloud can also be deployed to IBM Cloud. You can read more about the Horizon Cloud service offerings in the VMware Horizon Cloud Service Documentation.

Horizon on Azure allows customers to deploy Horizon Cloud as a VMware managed service using Infrastructure-as-a-Service (IaaS) from their own Microsoft Azure subscription. Horizon Cloud on Azure delivers virtual applications and dedicated or floating Windows 10 desktops, leveraging Azure cloud resources for multiple scalable deployment options. The Horizon Cloud admin console provides a single interface to manage virtual desktops and users with built-in service monitoring, logs, and analytics. You can see a full list of features in the VMware Horizon Cloud on Azure FAQs.

Example Design

AD Req

During pod deployment, Horizon Cloud deploys a pair of Unified Access Gateways in an Azure Virtual Machine Scale Set behind an Azure Load Balancer assigned a public IP address. The Unified Access Gateways provide secure external access from a demilitarised zone (DMZ) subnet and direct authenticated requests accordingly. The public load balancer IP address is visible from the Horizon Cloud management portal and will need to be added, with the FQDN, to a public DNS zone. A certificate matching the FQDN is uploaded in PEM format during the UAG deployment wizard.

A further Azure Load Balancer with a private IP address is automatically deployed, with an Azure Virtual Machine Scale Set for the pod's manager instances, into a management subnet. The manager IP address is also visible from the Horizon Cloud portal and will need to be added, with the FQDN, to a private DNS zone. A certificate chain matching the internal load balancer FQDN and DNS resolution is necessary if integrating Horizon with Workspace ONE; you can read more in the Overview of Configuring SSL Certificates on the Horizon Cloud Pod's Manager VMs documentation.
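For reference, the DNS records can be added with the Azure CLI; a minimal sketch assuming the public and private zones already exist, with illustrative zone names, record names, and IP addresses:

    # Public A record pointing the external FQDN at the UAG load balancer's public IP
    az network dns record-set a add-record \
      --resource-group dns-rg --zone-name example.com \
      --record-set-name horizon --ipv4-address 52.0.0.10

    # Private A record pointing the internal FQDN at the manager load balancer's IP
    az network private-dns record-set a add-record \
      --resource-group dns-rg --zone-name corp.example.com \
      --record-set-name horizon-manager --ipv4-address 10.0.0.10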

The gold image(s) and virtual desktop pools are deployed and managed from the Horizon Cloud portal and use a dedicated private tenant subnet. Each of the components mentioned is provisioned to the customer's Azure subscription, organised in Azure Resource Groups along with supporting resources such as databases, Key Vaults, disks, Storage Accounts, network interfaces, and Network Security Groups.

Workspace ONE and True SSO

Each pod deployment in the example design can serve up to 2000 virtual desktops and can scale out to multiple pods across additional regions to provide extra capacity and resilience. Using Workspace ONE with Horizon Cloud enables a single URL for all users to access regardless of where their virtual desktop is located. Workspace ONE Access, formerly VMware Identity Manager, adds a further layer of security with Multi-Factor Authentication (MFA).

To allow for Single Sign-On (SSO), VMware's True SSO needs to be used. True SSO removes the need to enter the username and password more than once while accessing virtual desktops and published applications. True SSO comes with an additional set of requirements which you should review in full before starting, along with the Integrate a Horizon Cloud Pod in Microsoft Azure with Workspace ONE Access documentation. At a high level, Active Directory (AD) with DNS and an enterprise Certificate Authority (CA) are needed. If you are deploying a greenfield environment, without an existing federated Azure Active Directory (AAD), then you may need to manually install Active Directory Domain Services on Virtual Machines in the Azure subscription, as portrayed in the example design above. At the time of writing, Azure Active Directory Domain Services (AAD DS) cannot be used with an enterprise CA, which is a requirement for True SSO. Configuration of Workspace ONE and True SSO is beyond the scope of this document, but it is recognised as a component in the overall design.

Workspace-One-Verify

Azure Pre-Requisites

Review the VMware Horizon Cloud Service on Microsoft Azure Requirements Checklist. Before Horizon pod deployment, you will need to configure the following Azure resources (a CLI sketch of these steps follows the list):

  • Azure subscription with available capacity
  • The following resource providers registered in each Azure subscription:
    • microsoft.authorization, microsoft.keyvault, microsoft.storage, microsoft.sql, microsoft.dbforpostgresql, microsoft.insights (registers automatically when a service using insights is deployed)
  • Azure Active Directory (AAD) App registration (service principal) with an authentication key for each subscription
    • You will need the Subscription ID, Directory ID, Application ID and key to hand
  • Contributor role assigned to the subscription access control (IAM) for the above service principal
  • A VNet created with the Microsoft.Sql service endpoint enabled, DNS configured, internet access, and 3 non-overlapping address ranges (subnets can be added in advance or at pod deployment)
    • Management subnet, minimum /27
    • Tenant subnet, minimum /27 up to /21 based on the number of virtual desktops
    • DMZ subnet, minimum /28
  • Any required VNet peering should be in place for line of sight Active Directory access, and optional Express Route or VPN for on-premises connectivity
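Several of these pre-requisites can be scripted with the Azure CLI. Below is a minimal sketch rather than a definitive implementation: the subscription ID, service principal name, resource group, and address ranges are illustrative placeholders, and the service endpoint placement should be verified against the current checklist.

    # Select the target subscription (placeholder ID)
    az account set --subscription "00000000-0000-0000-0000-000000000000"

    # Register the required resource providers
    for rp in Microsoft.Authorization Microsoft.KeyVault Microsoft.Storage \
              Microsoft.Sql Microsoft.DBforPostgreSQL microsoft.insights; do
      az provider register --namespace "$rp"
    done

    # App registration (service principal) with Contributor on the subscription;
    # note the appId and password returned for use in the deployment wizard
    az ad sp create-for-rbac --name "horizon-cloud-sp" --role Contributor \
      --scopes "/subscriptions/00000000-0000-0000-0000-000000000000"

    # VNet with three non-overlapping subnets; Microsoft.Sql service endpoint
    # enabled on the management subnet for the pod's managed database
    az network vnet create -g horizon-rg -n horizon-vnet --address-prefixes 10.0.0.0/16
    az network vnet subnet create -g horizon-rg --vnet-name horizon-vnet \
      -n management --address-prefixes 10.0.0.0/27 --service-endpoints Microsoft.Sql
    az network vnet subnet create -g horizon-rg --vnet-name horizon-vnet \
      -n dmz --address-prefixes 10.0.0.32/28
    az network vnet subnet create -g horizon-rg --vnet-name horizon-vnet \
      -n tenant --address-prefixes 10.0.8.0/21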

    Horizon Pod Deployment

    Access to Horizon Cloud is provided through an email invite from your VMware representative. After logging in, the first step is to add pod capacity; the Getting Started page defaults to the Capacity section. Next to Microsoft Azure, click Add.

    The Add Microsoft Azure Capacity wizard opens. Follow the instructions to associate the Horizon Cloud control plane with the Azure subscription, using the Subscription ID, and with the Azure AD (AAD) App Registration, using the Directory ID together with the Application ID and key for the service principal created during the pre-requisite configuration.

    Horizon-Cloud-Capacity-1
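    If you need the IDs to hand, the Azure CLI can retrieve them; a quick sketch, reusing the illustrative service principal name from the pre-requisites section:

        # Subscription ID and Directory (tenant) ID
        az account show --query "{subscriptionId:id, directoryId:tenantId}" --output table

        # Application (client) ID of the service principal created earlier
        az ad sp list --display-name "horizon-cloud-sp" --query "[].appId" --output tsv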

    On the Pod Setup page, configure the pod details. Enter the network settings, including the VNet and subnets to use, as discussed in the design section above.

    Horizon-Cloud-Capacity-2

    Configure the external Unified Access Gateways (UAGs) with the public FQDN and the DMZ subnet. Upload the certificate to be applied to both UAGs in PEM format; the certificate must use the FQDN specified on this page and must be signed by a trusted Certificate Authority.

    Horizon-Cloud-Capacity-3
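    If the certificate was issued as a PFX/PKCS#12 bundle, OpenSSL can convert it to the PEM format the wizard expects and confirm the names it covers; a minimal sketch with placeholder file names:

        # Convert PFX to an unencrypted PEM containing the certificate, chain, and private key
        openssl pkcs12 -in horizon-uag.pfx -out horizon-uag.pem -nodes

        # Check the subject and Subject Alternative Names against the public FQDN
        openssl x509 -in horizon-uag.pem -noout -subject
        openssl x509 -in horizon-uag.pem -noout -text | grep -A1 "Subject Alternative Name"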

    If the pod and gateway configurations validate successfully, review the summary details and click Submit to begin the pod deployment.

    Horizon-Cloud-Capacity-4

    The screenshot below shows the completed post-setup dashboard. In this instance, 3 pods for Azure capacity have been configured.

    Horizon-Cloud-UI

    After adding capacity to Horizon Cloud, the next step is to configure Active Directory. Review in full the Horizon Cloud service account requirements before starting. If you are using a third-party identity source, validate that the permissions outlined are acceptable, along with the enterprise CA requirement mentioned above. Cross-check with the Active Directory Requirements section of the VMware Horizon Cloud Service on Microsoft Azure Requirements Checklist.

    Click Configure next to Active Directory to register your domain, add the domain bind and domain join accounts, and define the AD group for Horizon Cloud administrators. After applying the Active Directory configuration, you will need to log back into the portal with a domain account that has Horizon Cloud administrative permissions, as well as your My VMware account. You can configure additional My VMware accounts under Settings and then General Settings.

    Publish a Horizon Desktop Image

    With Active Directory configured, we can go ahead and add the first gold image. As an optional configuration item, you can specify the allowed Virtual Machine types for deployments under Settings and then VM Types & Sizes.

    Images can be uploaded, or imported from the Azure Marketplace, under Inventory and Imported VMs. When you select an image from the Marketplace, choose an OS and configure settings such as domain join and Horizon Agent features like Smart Card and USB redirection. You can enable a public IP address to access the image over RDP, or you can use Azure Bastion from the Azure Portal, to apply software and configuration to the base build. During the import process, Horizon Cloud enables the RDS role, automates the agent installation, and performs a bootstrap process to securely pair the agent and the Horizon Cloud pod.

    Horizon-Cloud-Import-VM

    Click Import; after a few minutes the image Status changes to green and the Agent Status to Active. With the image imported, you can carry out any customisations required to the base build. When complete, select the image and, from the More drop-down menu, click Convert to image. Horizon runs Sysprep for you, seals the OS, and publishes the build to the Images section.

    This example uses a single gold image, but you can use multiple images and farms to publish many desktops and applications to end-users. The final step is to configure a new Desktop Assignment enabling users to deploy the image from Horizon Cloud. Click Assignments and New, then select Desktops. Choose floating or dedicated desktop types and fill in the fixed attributes; fixed attributes on the assignment cannot be changed after publishing. Complete the flexible attributes, such as the minimum and maximum number of desktops and the machine prefix. Flexible attributes can be updated later.

    Horizon-Cloud-Desktop-Assignment

    Virtual desktops are powered off and deallocated when they are not being used, to balance infrastructure costs. You can configure power-off protection timings or add power management schedules. On the Users page, add the Active Directory users or user groups that will have access to the desktop pool. After the Assignment is created and online, it is available for use.

    Horizon-Cloud-Created-Assignment

    Users given access to the Assignment can now log in directly to the public FQDN for the pod.

    Horizon-Splash-Screen

    Entitled images and applications are shown in the Horizon client.

    Horizon-Desktops

    Back in the Horizon Cloud admin portal, the dashboard and reports functionality can be used for general monitoring of the service. At the time of writing, there is no syslog forwarding feature available from the Horizon Cloud portal. Report downloads in CSV format can be scripted, or an agent, such as a Splunk forwarder, can be used on the image build itself.

    Horizon-Cloud-Dashboard

    Here are some additional gotchas found at the time of deployment. These are expected to be fixed in future releases.

    • Communicating with the internal IP address of a basic Azure Load Balancer across regions with VNet peering is not supported; a standard Azure Load Balancer is needed. At the time of writing, the Horizon Cloud pod deployment uses basic load balancers for the internal manager VMs.
    • When applying a certificate to the internal load balancer for the manager VMs to facilitate Workspace ONE integration, at the time of writing, a common name in the certificate is ignored if Subject Alternative Names (SANs) are present. All pod FQDNs should be added as SANs (see the sketch below).
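    When requesting a replacement certificate for the internal load balancer, every pod manager FQDN can be included as a SAN in the CSR. A minimal OpenSSL sketch, assuming OpenSSL 1.1.1 or later for the -addext option; the FQDNs and file names are placeholders:

        # Generate a key pair and a CSR whose SANs cover each pod manager FQDN
        openssl req -new -newkey rsa:2048 -nodes \
          -keyout horizon-manager.key -out horizon-manager.csr \
          -subj "/CN=horizon-manager.corp.example.com" \
          -addext "subjectAltName=DNS:pod1-manager.corp.example.com,DNS:pod2-manager.corp.example.com"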


    vSphere 7 and vSAN 7 Headline New Features

    vSphere 7 Cloud Infrastructure for Modern Applications Part 1

    vSphere 7 Cloud Infrastructure for Modern Applications Part 2

    vSphere 7 with Kubernetes

    Now and over the next 5 years, we will see a shift in how applications are built and run. In 2019, Line of Business (LOB) IT, or shadow IT, spend exceeded Infrastructure and Operations IT spend for the first time*. Modern applications are distributed systems built across serverless functions or managed services, containers, and Virtual Machines (VMs), replacing typical monolithic VM application and database deployments. The VMware portfolio is expanding to meet the needs of customers building modern applications, with services from Pivotal, Wavefront, CloudHealth, Bitnami, Heptio, Bitfusion, and more. In the container space, VMware is strongly positioned to address modern application challenges for developers, business leaders, and infrastructure administrators.

    Launched on March 10 2020, with expected April 2020 availability, vSphere 7 with Kubernetes is powering VMware Cloud Foundation 4. vSphere 7 with Kubernetes integration, the first product including capabilities announced as part of Project Pacific, provides real-time access to infrastructure in the Software-Defined Data Centre (SDDC) through familiar Kubernetes APIs, delivering security and performance benefits even over bare-metal hardware. The Kubernetes integration enables the full SDDC stack to utilise the Hybrid Infrastructure Services from ESXi, vSAN, and NSX-T, which provide the Storage Service, Registry Service, Network Service, and Container Service. Developers do not need to translate applications to infrastructure, instead leveraging existing APIs to provision micro-services, while infrastructure administrators use existing vCenter Server tooling to support Kubernetes workloads alongside Virtual Machines.

    At this point, on-premises Kubernetes orchestration is available through VMware Cloud Foundation 4. You can read more about Kubernetes with vSphere 7 in vSphere 7 with Kubernetes and Tanzu on VMware Cloud Foundation. Continue reading this post to review the additional functionality introduced with vSphere 7 and vSAN 7 around lifecycle management, scalability, and security and compliance; you can also check the full vSphere 7 introduction here.

    vSphere 7 Headline New Features

    vCenter Server Profiles

    vCenter Server Profiles are introduced in vSphere 7, enabling consistent configuration across the vCenter Server estate. vCenter Server Profiles export management, network, authentication, and user configurations into JSON format. The configurations can be edited, validated, and imported or pushed to up to 100 vCenter Servers, providing version control and a consistent last-known-good state. vCenter Server Profiles are accessible via 4 new REST APIs: list, export, validate, and import. This also means they can be consumed with DCLI, PowerCLI, or other automation tools such as Ansible, Puppet, and Chef. Behind the scenes, vCenter Server Profiles are known as Infrastructure Profiles and can be found under infra-profiles in the vCenter Developer Center API Explorer. Note that vCenter Server Profiles do not replace file-based backups for vCenter; the profile exports do not contain GUIDs and other data that would be required for a full and supported vCenter Server restore.

    vCenter Server Profiles
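    As a rough illustration of driving the profile APIs with curl; this is a sketch only, assuming the infra-profiles endpoints as surfaced in the Developer Center API Explorer, with placeholder hostname, credentials, and profile names, so verify the exact paths and request bodies against your vCenter build:

        # Create an API session (vSphere REST authentication endpoint)
        SESSION=$(curl -sk -X POST -u 'administrator@vsphere.local:VMware1!' \
          https://vcsa.lab.local/rest/com/vmware/cis/session | jq -r '.value')

        # List the available infrastructure profile configurations
        curl -sk -H "vmware-api-session-id: $SESSION" \
          https://vcsa.lab.local/rest/appliance/infraprofile/configs

        # Export selected profiles to JSON for version control
        curl -sk -X POST -H "vmware-api-session-id: $SESSION" \
          -H 'Content-Type: application/json' \
          -d '{"spec":{"profiles":["ApplianceManagement"]}}' \
          "https://vcsa.lab.local/rest/appliance/infraprofile/configs?action=export" \
          > vcenter-profile.json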

    vCenter Server Update Planner

    The new vCenter Server Update Planner provides native tooling to help with discovering, planning, and upgrading vCenter Server and connected products successfully. VMware administrators can receive notifications in the vSphere client when an upgrade or update is available. VMware product interoperability is built in and automatically detects installed products to provide monitoring and checks against the current vCenter Server version, showing compatible upgrades and removing guesswork and complicated interoperability questions for complex environments. To further validate upgrade paths, 'what-if' workflows and pre-update checks can be run against the selected target vCenter. The vCenter Server Update Planner also links to applicable release notes and Knowledge Base (KB) articles. An extra benefit of the vCenter Server 7 upgrade process is the consolidation of the external Platform Services Controller (PSC), which is now built into the upgrade; more on this further down the post.

    vCenter Server Cluster Image Management

    Cluster Image is the new model for managing the ESXi lifecycle, providing consistency of ESXi hosts across a cluster. The cluster image comprises specific firmware, drivers, or vendor software add-ons, creating a desired state model with multi-host remediation capabilities. The Cluster Image feature is exposed through the vSphere client and REST API, and also integrates with third-party vendor management tools such as Dell OpenManage and HPE OneView. This means host firmware can now be managed and upgraded from within vSphere, removing the risk of unsupported drivers and firmware. To use this feature, all hosts in a cluster must be the same hardware type and must all be running ESXi 7.0.

    New vSphere DRS Improvements

    Distributed Resource Scheduler (DRS) is evolving to meet Virtual Machine needs and has undergone several improvements. DRS now makes workload-centric placement decisions based on VM data gathered every minute, as opposed to cluster-centric decisions based on 5 minutes of data. Placement decisions are now based on the individual VM DRS scores and granted memory. This shifts the focus onto workload resource fulfilment, rather than the balance of the whole cluster. The VM DRS score is calculated using CPU %RDY (Ready) time, memory swap, CPU cache behaviour, headroom for the workload to burst, and migration cost. VM DRS scores are grouped into buckets of 20% increments.

    Improved DRS vSphere 7

    Improved DRS in vSphere 7 now includes Scalable Shares, providing relative resource entitlements across Resource Pools. Setting a share level to ‘high’ now ensures prioritisation over lower share VM entitlements, whereas previously, the higher share level did not guarantee a higher resource entitlement. Scalable Shares need to be enabled and can be configured on a cluster level and/or Resource Pool level. Share allocations are dynamically changed depending on the number of VMs in a Resource Pool. The only exception to this rule is vSphere with Kubernetes where a Resource Pool is used as a Namespace, in this instance, Scalable Shares are used by default.

    DRS placement now includes assignable hardware: support for hardware accelerators, for both DRS initial placement and vSphere High Availability (HA). When adding a new device, Dynamic DirectPath I/O and NVIDIA GRID vGPU devices are supported. DRS works with the assignable hardware framework to find a host with an available PCIe device configured, or a matching hardware profile, when making initial placement decisions. The functionality requires the new VM hardware version 17.

    New vSphere vMotion Improvements

    Increased workload resource consumption, as applications change over time, has started to present performance challenges during vMotion, including long stun times for large or monster VMs. To address these challenges, vMotion has been refactored as part of vSphere 7, bringing vMotion capabilities back to workloads like SAP HANA or Oracle.

    During vMotion, a page tracer is installed so vSphere can keep track of the memory pages that are overwritten by the guest OS while the VM is in a vMotion state. To install the page tracer, the vCPU is stopped (for microseconds), allowing the monitoring of memory page overwrites. These overwrites are referred to as page fires, which are replicated to the destination ESXi host. The page tracer was previously installed on all vCPUs in a VM; in vSphere 7, only one vCPU is claimed and dedicated to all the page tracing work during a vMotion operation. This improves the efficiency of page tracing and dramatically reduces the performance impact on a workload. When all memory pages have been migrated, the last memory bitmap is transferred. In previous versions, the entire bitmap was transferred; in vSphere 7 the bitmap is compacted and only the last pages are sent, cutting down the switch-over and stun time.

    vMotion Improvements

    Enhanced vMotion Capability (EVC) has been updated with new baselines for CPU packages: Intel Cascade Lake generation and AMD Zen2 generation (EPYC Rome).

    New vSphere Security & Compliance Features

    vSphere 7 now supports Intel Software Guard Extensions (SGX), which allows applications to work with the underlying hardware to create a secure enclave that cannot be viewed by the guest OS or hypervisor. The application can store secrets or data in the enclave, which is an important feature for risk management, although currently there is minimal hardware support. Intel Ice Lake CPUs will have dual-socket implementations of SGX. If implementing SGX, remember that you will lose certain features, such as vMotion and snapshots, because the hypervisor cannot see everything in the VM; this becomes very much an application design decision.

    vSphere 7 introduces vSphere Trust Authority (vTA), providing trusted hosts and encryption key management. Previous trust models in vSphere had the potential for running secure workloads on untrusted hosts, with no repercussions for failing secure baselines. Attestation and key management were done by vCenter Server, which itself could not be encrypted. The dependencies on the vCenter Server itself made it difficult to implement the principle of least privilege. With vTA, a hardware root of trust is created using a separate ESXi host cluster; this can also be your management cluster. The key manager only talks directly to trusted hosts, rather than the vCenter Server. Workloads running on the trusted cluster, now including vCenter Server, can be encrypted. A smaller number of administrators can be given access to the trusted hosts, with regular admins maintaining access to the workload hosts. Currently, vTA is still foundational, so expect more functionality to be available in future releases. It is important to note that to use the trusted host model, the physical server must have a TPM 2.0 chip, which is cryptographically bonded to the host.

    vSphere 7 Trust Authority

    Identity Federation is introduced in vSphere 7 to modernise vSphere authentication, utilising standards-based federated authentication with enterprise Identity Providers. Using Identity Federation, organisations can benefit from reduced audit scope and administrative workload, as well as security enhancements such as Multi-Factor Authentication (MFA). Initial integration will be with Active Directory Federation Services (ADFS) / Azure Active Directory, which alongside MFA is great for compliance and security. Identity Federation will also work with the Supervisor Cluster for Kubernetes, which inherits a lot of the security and functional controls from vCenter to help bridge the gap between developing modern applications and existing processes and infrastructure.

    vSphere 7 Identity Federation

    There are hundreds of improvements in vSphere 7 to drive consistency and trust in the environment. For example, the default settings for the vSwitch are now secure by default, enforcing security settings, the Certificate Management UI has been consolidated and simplified, and so on. You can review the vSphere 7 release notes for full information.

    Additional Noteworthy vSphere 7 Features

    • vCenter Server Content Library: a new interface provides vast improvements in template management. Virtual Machine templates are now checked out to edit and checked in to save, facilitating version control, quick historical view of edits, and ability to restore to previous versions. You can switch between the new view and classic view in the vSphere client. Additional features such as versioning are only available when the VM template is stored in a Content Library. Advanced configuration now allows an update of auto-sync frequency and performance optimisation.
    • vCenter Server Multi-Homing: vCenter Server 7 now supports multiple network adaptors, the maximum supported vNIC limit is 4 per vCenter Server, with NIC1 reserved for vCenter HA.
    • vCenter Server SSO Domain Consolidation: vSphere SSO domain or external PSC consolidation has been simplified with new tooling commands for domain re-pointing or un-registering: cmsso-util unregister and domain-repoint (see the sketch after this list).
    • vCenter Server External PSC Consolidation: the Upgrade and Migration setup no longer allows the deployment of an external PSC. Furthermore, the external PSC consolidation process is now automatically built into the upgrade, reducing administrative time and effort during the upgrade process. This means the vCenter Server Converge Tool has been removed from the ISO. The external PSC consolidation during an upgrade is also a supported configuration in JSON format when upgrading using the CLI.
    • VM Hardware v17: the new VM hardware version features a virtual Watchdog Timer providing guest OS and application monitoring, especially important for clustered applications like databases and filesystems. Precision Time Protocol (PTP) now provides sub-millisecond timekeeping, helpful for financial and scientific applications. PTP requires both an in-guest device and the ESXi service to be enabled.
    • vCenter Server Configuration Maximums: further enhancements to vCenter Server scalability:
      • vCenter Server (standalone): hosts per vCenter Server: 2,500; powered-on VMs per vCenter Server: 30,000
      • Linked Mode vCenter Servers (15 per SSO domain): hosts: 15,000; powered-on VMs: 150,000
      • vCenter Server latency requirements: vCenter Server to vCenter Server: 150 ms; vCenter Server to ESXi hosts: 150 ms; vSphere Client to vCenter Server: 100 ms

    vCenter Server Config Maximums
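    For reference, the SSO domain tooling runs from the vCenter Server shell. Below is a minimal sketch of the unregister operation with placeholder hostname and credentials; the full domain-repoint parameters should be taken from the VMware documentation:

        # Unregister a decommissioned external PSC from the SSO domain
        cmsso-util unregister --node-pnid psc01.lab.local \
          --username administrator@vsphere.local --passwd 'VMware1!'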

    You can read the full vSphere 7 release information at Introducing vSphere 7: Essential Services for the Modern Hybrid Cloud as well as the vSphere 7 Data Sheet and vSphere 7 Product Page.

    vSAN 7 Headline New Features

    Several new features have been added to vSAN 7 alongside the vSphere 7 announcement, here are the key product enhancements:

    Simplified Lifecycle Management

    vSphere Lifecycle Manager (vLCM) is a new approach to unified software and firmware management, increasing reliability and decreasing the number of update tools. vLCM is built around the desired state model, and monitors and remediates compliance drift. Desired state and desired images are applied at the cluster level and manage the server stack as a whole, across hypervisor, drivers, and firmware. Furthermore, the modular framework supports vendor firmware plugins such as Dell and HPE.

    Unified Block and File Storage

    Fully Integrated File Services provide a native file service built into the hypervisor through vSAN. Cluster capacity for vSAN can be provisioned into file shares, with support for NFS v4.1 and v3 and file share quotas, unifying management of block and file storage. vSAN file shares are aimed at ease of use for both cloud-native and traditional workloads running in the cluster; they are not necessarily a replacement for large-scale filers.

    vSANFile
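    Once a file share has been provisioned from vSAN cluster capacity, clients consume it over standard NFS. A minimal sketch from a Linux client; the server name and export path are placeholders:

        # Mount a vSAN file share over NFS v4.1
        sudo mkdir -p /mnt/vsan-share
        sudo mount -t nfs -o vers=4.1 vsan-fs.lab.local:/vsanfs/share01 /mnt/vsan-share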

    Expanded Data Services

    Continued integration of Cloud Native Storage provides the control plane and storage service for the vSphere with Kubernetes integration, and offers file-based persistent volumes easily accessible and managed within vCenter. This now includes support for vVols, persistent volume encryption and snapshots, volume resizing, and a mixture of tooling, such as application monitoring with Wavefront, next-generation monitoring solutions like Prometheus, and infrastructure analytic solutions like vRealize Operations, providing an advanced level of visibility for vSphere administrators.

    vSANCloudNative

    Improved Efficiency and Operations

    • Enhancements for Stretched Cluster and 2-Node Topologies: such as support for overriding the default gateway used by vSAN hosts, to simplify deployments and routed topologies, and an immediate repair operation after a witness host appliance is replaced.
    • Intelligent capacity management for stretched cluster topologies; when the cluster is in a capacity-constrained state, for example, due to host failure, objects in a critical condition are marked by vSAN as absent, allowing I/O to be processed at another site. The degraded state of the object in terms of resilience still stands, but the VM uptime is improved by allowing the continuation of read/write operations. The object is updated when the capacity strain condition is removed.
    • Stretched cluster awareness for DRS placement decisions; enables prioritisation of I/O read locality over VM site affinity rules, completion of vSAN resync before DRS migrations, and a reduction in I/O across the ISL in recovery conditions.
    • Improved accuracy in VM capacity reporting across vCenter UI and APIs when working with thin-provisioned VMs, swap objects, and namespace objects; reducing confusion and inconsistency over provided and used space for a given VM.
    • A new vSAN memory metric has been added in the vSAN performance service to display memory consumption of vSAN operations such as hardware and software configuration changes. The additional vSAN memory metric shows time-based memory consumption per host and is available in the vCenter UI and API.
    • New vSphere Replication object identity types to easily identify objects created by or using vSphere Replication, replacing the previous unknown object type.
    • Additional support for larger storage devices; up to 32 TB physical drives, and up to 1 PB in logical capacity. This gives the potential for improved deduplication ratios when using larger devices for the capacity tier and deduplication domain.
    • Native support for NVMe hot-plug through vSAN and vSphere for selected OEM platforms. This feature reduces host restarts and administrative complexity when carrying out planned or unplanned maintenance.
    • Removal of Eager Zero Thick (EZT) requirement for vSAN shared disks, improving application consumption and flexibility.
    • The full vSAN announcement can be found here

    vSphere 7 with Kubernetes and vSAN 7 are built into VMware Cloud Foundation 4, read more on the March 10 2020 announcement in vSphere 7 with Kubernetes and Tanzu on VMware Cloud Foundation.

    *LOB spend 51% to infrastructure operations spend 49% – source IDC WW Semiannual IT Spending Guide: Line of Business, 09 April 2018 (HW, SW and services; excludes Telecom)

    Understand VMware Tanzu, Pacific, and Kubernetes for VMware Administrators

    This post was last updated 26/10/2019 and provides an overview of VMware Tanzu and Project Pacific.

    Peanut Butter & Jelly VMware and Kubernetes

    There will be more apps deployed in the next 5 years than in the last 40 years (source: Introducing Project Pacific: Transforming vSphere into the App Platform of the Future). The VMware strategy of late has seen a significant shift towards cloud-agnostic software and the integration of cloud-native application development. In November 2018, VMware announced the Acquisition of Heptio to help accelerate enterprise adoption of Kubernetes on-premises and across multi-cloud environments. In May and August 2019, VMware announced its intent to Acquire Bitnami and Pivotal Software, following the successful launch of Pivotal Container Service (PKS), which was later re-branded VMware Enterprise PKS.

    To help better address application support complexities between development and operations teams, VMware has now announced VMware Tanzu:

    “In Swahili, ’tanzu’ means the growing branch of a tree. In Japanese, ’tansu’ refers to a modular form of cabinetry. At VMware, Tanzu represents our growing portfolio of solutions to help you build, run and manage modern apps.”

    VMware Tanzu is a portfolio of capabilities that empowers cloud-native development by enabling build, run, and manage operations across platforms. Using VMware Tanzu Mission Control, Kubernetes clusters can be created and managed from a single control point.

    Another key announcement alongside VMware Tanzu was code-named Project Pacific, enabling IT operators and developers to build and run modern applications with VMware vSphere and native Kubernetes. Project Pacific is focused on re-architecting vSphere so that Kubernetes containers run alongside VMware Virtual Machines (VMs) in ESXi, enabling the development of portable cloud-native applications and micro-services, while protecting existing investments in products and skills. You can review the press release of all products in the VMware Tanzu portfolio here, and the split of build, run, manage products here.

    Introduction to Kubernetes

    Kubernetes is an open-source orchestration and management tool that provides a simple Application Programming Interface (API), exposing a set of capabilities for defining workloads and services. Kubernetes enables containers to run and operate in a production-ready environment at an enterprise scale by managing and automating resource utilisation, failure handling, availability, configuration, scale, and desired state. Micro-services can be rapidly published, maintained, and updated.

    Containers package applications and their dependencies into a distributed image that can run almost anywhere, simplifying the application path to live. Kubernetes makes it easier to run applications across multiple cloud platforms, accelerates application development and deployment, and increases agility, flexibility, and the ability to adapt to change.
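    To make the orchestration model concrete, here is a minimal kubectl sketch of declaring a workload and letting Kubernetes maintain the desired state; the deployment name and image are illustrative:

        # Declare a deployment and scale it; Kubernetes maintains this desired state,
        # rescheduling pods automatically if a node or container fails
        kubectl create deployment web --image=nginx
        kubectl scale deployment web --replicas=3

        # Observe the reconciled state via the pods' app label
        kubectl get pods -l app=web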

    For VMware administrators with little exposure to DevOps, the following high-level resources can help set a foundational understanding of Kubernetes, and why VMware is making some of these critical changes in architecture and strategy. You can try Kubernetes for yourself using the Kubernetes Academy by VMware, or a Kind Way to Learn Kubernetes.

    Kubernetes for Executives: "Containers encapsulate an application in a form that's portable and easy to deploy. Containers can run on any compatible system—in any cloud—without changes. Containers consume resources efficiently, enabling high density and utilization. Kubernetes makes it possible to deploy and run complex applications requiring multiple containers by clustering physical or virtual resources for application hosting. Kubernetes is extensible, self-healing, scales applications automatically, and is inherently multi-cloud."

    Introduction to Project Pacific (Run)

    Kubernetes uses a cluster of nodes to distribute container instances. The master node is the management plane containing the API server and scheduling capabilities. Worker nodes act as compute resources for running workloads (known as pods). VMware has re-designed vSphere to include a Kubernetes control plane for managing Kubernetes workloads on ESXi hosts. The control plane is made up of a supervisor cluster using ESXi hosts as the worker nodes, allowing workloads or pods to be deployed and run natively in the hypervisor, alongside traditional Virtual Machine workloads. This new functionality is provided by a new container runtime built into ESXi called CRX. CRX optimises the Linux kernel and hypervisor, and strips out some of the traditional heavy configuration of a Virtual Machine, enabling the binary image and executable code to be quickly loaded and booted. The container runtime, in combination with ESXi's powerful scheduler, produces some of the performance benchmarks VMware has been publishing, such as improvements even over bare metal.

    To ensure containers are running in pods, an agent called the Kubelet runs on Kubernetes cluster nodes. With the supervisor cluster, the role of the Kubelet agent is handled by a new 'Spherelet' running on each ESXi host. Pods are created on a network internal to the Kubernetes nodes and, by default, cannot be reliably reached across the cluster of nodes unless a Service is created. A Service in Kubernetes allows a group of pods to be exposed by a common IP address, helping define network routing and load balancing policies without having to understand the IP addressing of individual pods.
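    A short kubectl sketch of exposing a group of pods behind a Service, continuing the illustrative 'web' deployment from above:

        # Expose the deployment's pods behind a single, stable cluster IP on port 80
        kubectl expose deployment web --port=80 --target-port=80

        # The Service load balances across the pods matching its selector
        kubectl get service web
        kubectl get endpoints web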

    Another great feature of Kubernetes is namespaces. Namespaces are commonly used to provide multi-tenancy across applications or users, and to manage resource quotas (backed in this instance by vSphere Resource Pools). Kubernetes namespaces segment resources for large teams working on a single Kubernetes cluster. Resources can have the same name as long as they belong to different namespaces; think of namespaces as sub-domains and the Kubernetes cluster as the root domain they are attached to. Multiple namespaces can exist within the supervisor cluster, with different storage policies assigned to them for persistent storage, etc.
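    And a brief kubectl sketch of namespaces and quotas; the names and limits are illustrative:

        # Create a namespace and attach a resource quota to it
        kubectl create namespace team-a
        kubectl create quota team-a-quota --namespace=team-a \
          --hard=cpu=4,memory=8Gi,pods=20

        # Resources are scoped to the namespace they are created in
        kubectl get pods --namespace=team-a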

    Kubernetes can be accessed through a GUI known as the Kubernetes dashboard, or through a command-line tool called kubectl. Both query the Kubernetes API server to get or manage the state of various resources like pods, deployments, and services. Labels assigned to pods can be used to look up pods belonging to the same application, tier, or service. With Project Pacific, developers use Kubernetes APIs to access the Software-Defined Data Centre (SDDC) and ultimately consume Kubernetes clusters as a service, using the same application deployment tools they use currently. This service is delivered by Infrastructure Operations teams using existing vSphere tools, with the flexibility of running Kubernetes workloads and Virtual Machine workloads side by side.

    By applying application-focused management, Project Pacific allows application-level control over policies, quota, and role-based access for Developers. Service features provided by vSphere, such as High Availability (HA), Distributed Resource Scheduler (DRS), and vMotion, can be applied at the application level across Virtual Machines and containers. Unified visibility in vCenter for Kubernetes clusters, containers, and existing Virtual Machines is provided for a consistent view between Developers and Infrastructure Operations alike.

    The following resources provide further reading on Project Pacific for enabling Kubernetes on vSphere.

    Project_Pacific

    Introduction to VMware Tanzu Mission Control (Manage)

    VMware Tanzu Mission Control brings together Kubernetes clusters, providing operator consistency for deployment, configuration, security, and policy enforcement across multiple clouds, while maintaining developer independence and self-service.

    VMware Tanzu Mission Control is a Software as a Service (SaaS) control plane offering, allowing administrators to deploy, monitor, and manage all Kubernetes clusters from a single point of control. The beauty of this approach is that lifecycle management, access management, health and diagnostics, security and configuration policies, quota management, and backup or restore capabilities are all consolidated into a single toolset. Kubernetes clusters running on vSphere, VMware Enterprise or Essential PKS, Public Cloud (AWS, Microsoft Azure, Google Cloud Platform), and managed services or other implementations can all be attached to VMware Tanzu Mission Control. New Kubernetes clusters can also be deployed to all of these platforms from the Tanzu Mission Control interface.

    For more information on VMware Tanzu Mission Control, see the product page here, and Introducing VMware Tanzu Mission Control to Bring Order to Cluster Chaos. If you are attending VMworld Europe 2019, have a look through the VMware Tanzu Sessions in the content catalog and also Explore Kubernetes at VMworld 2019. At the time of writing, VMware Tanzu and Project Pacific are in tech preview; this post will be updated when more information is released. Please use the comments section below if you feel any key elements are missing or not explained clearly. There are some additional useful video tutorials available from Project Pacific at Tech Field Day Extra at VMworld 2019.