April 2022 VMware Multi-Cloud Briefing

The VMware Multi-Cloud Briefing is an online quarterly series, now in its fifth iteration, that brings vision, technology, and customer stories to the table. The briefing series has evolved through cloud platform, operations, and application development topics since its introduction in the summer of 2020. Both cloud technology and cloud adoption are advancing at a fast pace, and this April briefing provides an opportunity to see what’s new directly from VMware engineering, independent industry experts, and customers.

The latest session opens with Joel Neeb, VP Execution and Transformation, VMware, and former F-15 pilot. Joel talks through the history of aviation and the advancements in the cockpit, from having limited technology to running over 300 different instruments. With so many new features and capabilities there comes a tipping point where the cockpit cannot practically be managed by a single operator, or managing it takes more time than it offers value. Those instruments have since been streamlined into a handful of features, displayed on screens instead of through switches and dials, with the computer systems surfacing what’s important to the operator at a given time.

We can learn from this approach and apply similar models to abstract and simplify multi-cloud complexity across different environments and locations. VMware Cross-Cloud Services can remove complexity, whilst enabling the agility of different cloud providers and the freedom to choose the right target environment for each application. Standardisation and consistency at the infrastructure layer allow scale and flexibility. Then, as requirements change and new use cases are uncovered, IT teams and developers can move quickly to accelerate overall business transformation.

VMware Cross-Cloud Services

The session continues with quick-fire customer stories around streamlining operations with VMware technology, and a customer interview with S&P Global covering their approach to solving multi-cloud complexity. Later, we’ll also hear a partner perspective from DXC Technology on how they work with customers to deliver multi-cloud outcomes, and what trends they are seeing across the market.

Next is a technology deep dive, starting with an examination of how we’ve arrived at the complexity of running environments across public cloud, private cloud, and the edge. You can then expect to see:

  • How easy it is to add a new VMware environment to a hyperscaler, using vRealize Automation. In this demo we’ll start with an on-premises hosted environment, and scale out by spinning up new environments in the cloud, with the same management tooling and policies.
  • How to manage multiple cloud environments from a single tool, using vRealize Operations. In this demo we’ll look at a consistent way of managing and optimising resources, performance, capacity, and costs, with a unified troubleshooting interface.
  • How to add Kubernetes clusters in different hyperscalers to a common management plane, using Tanzu Mission Control. In this demo we’ll see how you can standardise the management of Kubernetes services, which will likely complement your existing virtual machine infrastructure. Furthermore, we’ll find out how Tanzu Service Mesh can secure the communication of micro-services between environments and across clouds. Tanzu Service Mesh is able to bring micro-services under the same security umbrella, and automate features like mutual TLS encryption across all services.

The final segment is an industry interview with IDC and VMware, talking about what it means for customers to standardise their infrastructure and cloud platforms. There are multiple layers of abstraction and standardisation, covering the likes of management, optimisation, and security. IDC will detail where you can start, and what they see as good first steps.

The April 2022 VMware Multi-Cloud Briefing and associated launch blog are now live, and the video is embedded below. You can watch the current and previous briefings on the VMware Multi-Cloud Briefing page; each video is between 30 and 40 minutes long.

VMware Multi-Cloud Briefing April 2022

vSphere 7 with Kubernetes and Tanzu on VMware Cloud Foundation

vSphere 7 Cloud Infrastructure for Modern Applications Part 1

vSphere 7 Cloud Infrastructure for Modern Applications Part 2

VMware Cloud Foundation 4

VMware Cloud Foundation (VCF) 4 delivers vSphere with Kubernetes at cloud scale, bringing together developers and IT operations by providing a full-stack Hyper-Converged Infrastructure (HCI) for Virtual Machines (VMs) and containers. By utilising software-defined infrastructure for compute, storage, network, and management, IT operations can provide agility, flexibility, and security for modern applications. The automated provisioning and maintenance of Kubernetes clusters through vCenter Server means that developers can rapidly deploy new applications or micro-services with cloud agility, scale, and simplicity. At the same time, IT operations continue supporting the modern application framework by leveraging existing vSphere functionality and tooling.

VMware has always been an effective abstraction provider, and VCF 4 with Tanzu services takes infrastructure abstraction to the next level. Within vSphere, underlying infrastructure components are abstracted into a set of services exposed through APIs, allowing the developer to look down from the application layer and consume the hybrid infrastructure services. Meanwhile, IT operations can build out policies and manage pods alongside VMs at scale using vSphere.

[Image: VMware Cloud Foundation with Kubernetes]

vSphere 7 with Kubernetes

Now and over the next five years, we will see a shift in how applications are built and run. In 2019, Line of Business (LOB) IT, or shadow IT, spend exceeded Infrastructure and Operations IT spend for the first time*. Modern applications are distributed systems built across serverless functions or managed services, containers, and Virtual Machines (VMs), replacing typical monolithic VM application and database deployments. The VMware portfolio is expanding to meet the needs of customers building modern applications, with services from Pivotal, Wavefront, CloudHealth, Bitnami, Heptio, Bitfusion, and more. In the container space, VMware is strongly positioned to address modern application challenges for developers, business leaders, and infrastructure administrators.

Launched on March 10, 2020, with expected April 2020 availability, vSphere 7 with Kubernetes powers VMware Cloud Foundation 4. vSphere 7 with Kubernetes, the first product including capabilities announced as part of Project Pacific, provides real-time access to infrastructure in the Software-Defined Data Centre (SDDC) through familiar Kubernetes APIs, delivering security and performance benefits even over bare-metal hardware. The Kubernetes integration enables the full SDDC stack to utilise the Hybrid Infrastructure Services from ESXi, vSAN, and NSX-T, which provide the Storage Service, Registry Service, Network Service, and Container Service. Developers do not need to translate applications to infrastructure, instead leveraging existing APIs to provision micro-services, while infrastructure administrators use existing vCenter Server tooling to support Kubernetes workloads alongside Virtual Machines.

You can read more about the workings of vSphere with Kubernetes in this Project Pacific Technical Overview for New Users by Ellen Mei. Initial Placement of a vSphere Pod by Frank Denneman is another useful article detailing the process behind the ESXi container runtime.

VMware Application Modernisation Portfolio with Tanzu

VMware Cloud Foundation Services is the first manifestation of Project Pacific, now released as vSphere with Kubernetes, and provides developers with consistent services managed by IT operations, ultimately anywhere the VMware Cloud platform is running.

In the past, VMware was efficient at taking many Virtual Machines and running them across multiple hypervisors in a cluster; the challenge then was consolidating numerous physical servers for cost-saving, management, and efficiency gains. Today, application deployments are groups of VMs, which presents a new challenge of consolidating distributed applications across multiple clouds. Project Pacific brings Kubernetes and Tanzu to the vSphere environment, making it operationally easier not only to get upstream Kubernetes running, but also to upgrade and maintain Kubernetes clusters in place. This functionality evolves vSphere into a much more modern, API-driven, self-service, fast-provisioning interface backed by an ESXi optimised for all workloads.

Tanzu Kubernetes Grid (TKG) is a Kubernetes runtime built into VMware Cloud Foundation Services, allowing installation and maintenance of multi-cluster Kubernetes environments across different infrastructure. Tanzu Kubernetes Grid also provides operational consistency across Amazon Web Services (AWS), Azure, and Google Compute Engine (GCE). This differs from public cloud managed Kubernetes services such as EKS, AKS, and GKE, as it integrates natively into the existing infrastructure, meeting the needs of organisations that require abstracted logging, events, governance policies, and admission policies. This capability delivers not just Kubernetes but also a set of management services to provision, deploy, upgrade, and maintain Kubernetes clusters. With this granular level of control over the underlying VMs or cloud environment, customers can implement, monitor, or enforce their own security policies and governance.

Tanzu Mission Control provides operator consistency for deployment, configuration, security, and policy enforcement for Kubernetes across multiple clouds, simplifying the management of Kubernetes at scale. Tanzu Mission Control is a Software as a Service (SaaS) control plane offering that allows VMware administrators to standardise Identity and Access Management (IAM), configuration and policy control, backup and recovery, ingress control, cost accounting, and more. The multi-cluster control plane supports the propagation of Kubernetes across vSphere, Pivotal Container Service (PKS), AWS, Azure, and Google Cloud Platform (GCP), all from a single point of control.

VMware have announced the availability of Tanzu Kubernetes Grid, Tanzu Mission Control, and Tanzu Application Catalog (open-source software catalog powered by Bitnami), providing a unified platform to build, run, and manage modern applications.

VMware Cloud Foundation Additional Updates

VMware Cloud Foundation (VCF) 4 is expected in April 2020 and includes fully updated software building blocks for the private cloud, including vCenter, ESXi, and vSAN 7.0, plus the addition of NSX-T.

VCF with NSX-T is made up of workload domain constructs, and by default every architecture starts with a management domain, which hosts vCenter, the private NSX managers, and the edge cluster. There are a couple of changes in VCF 4 that reduce the footprint of the management domain: NSX-T is being fully utilised for the first time, the NSX Edge cluster can be deployed as a day X action, NSX Manager and controllers are now integrated, and the Platform Services Controller (PSC) now uses the embedded model with vCenter. Additionally, there is the capability to use Application Virtual Networks (AVN) with BGP peering on deployment, or again as a day X action. As a side note, Log Insight has changed from a default deployment requirement to an optional day X action.

Workload domains are built out to serve vSphere with Kubernetes and expose the network services for developers to use. Workload domains can be built on new or existing NSX-T managers, offering the choice of a one-to-one or one-to-many relationship for NSX-T instances with VCF 4. This provides customers with the option of separating out NSX-T instances, while simultaneously protecting the management domain. Day X automation can then be used to place edge deployments in the appropriate cluster:

[Image: NSX-T workload domains]

SDDC Manager and Lifecycle Manager (LCM) provide automated provisioning and upgrades. Lifecycle Manager eases upgrade and patching by providing automated lifecycle management, with update notifications, review and schedule options, and monitoring and reporting. LCM can also manage all the inter-dependencies of versioning at a cluster level, from vSphere right through to the Kubernetes runtime. SDDC Manager orchestrates and automates the provisioning of vSphere with Kubernetes workload domains and, crucially, enables the LCM functionality for maintaining upgrades across the entire software stack, eliminating typical day 2 challenges for developers.

[Image: SDDC Manager]

Multi-Instance Management: multiple VCF instances can now be federated to provide a global view of workload domains without the installation of any additional components. Administrators can click through to any VCF data centre to centrally view patching, upgrades, maintenance, and remediation operations.

New Security Enhancements: native Workspace ONE Access identity integration for the vRealize suite and NSX-T using AD or LDAP identity sources. Admin and Operator roles are available for both API and UI, with the Operator role providing all privileges except password management, user management, and backup and restore. Token-based authentication is also now enforced across all APIs.

You can find out more about the VMware Cloud Foundation 4 update at What’s New in VMware Cloud Foundation 4 and Delivering Kubernetes at Cloud Scale with VMware Cloud Foundation 4.

VMware vRealize Cloud Management Integration

The vRealize Cloud Management product suite has been comprehensively updated to include vSphere 7 with Kubernetes support. vRealize Operations (vROps) 8.1 is now available for the first time as a Software as a Service (SaaS) offering with an enhanced feature set. Key new functionality enables self-driving operations across multi-cloud, hybrid cloud, and data centre environments.

vROps 8.1 and vROps Cloud now fully support integrations with GCP, native VMware Cloud on AWS as a cloud account (including additional vSAN, NSX-T, and Cloud Services Portal information with billing), an enhanced portfolio of AWS objects, CloudHealth, and vSphere Kubernetes constructs. The latter crucially enables Kubernetes cluster onboarding, discovery, continuous performance optimisation, capacity and cost optimisation, monitoring and troubleshooting, and configuration and compliance management. Furthermore, new dashboards and topology views of workload management can be leveraged to display all Kubernetes objects visible from vCenter, for a complete end-to-end view of the infrastructure.

[Image: Kubernetes workload management dashboard in vRealize Operations]

vRealize Operations 8.1 and Cloud integration for vSphere with Kubernetes:

  • Automatically discover new constructs of supervisor cluster, namespaces, pods, and Tanzu Kubernetes clusters.
  • New dashboards and summary pages for performance, capacity, utilisation, and configuration management of Kubernetes constructs, with full topology views from Kubernetes substrate to physical infrastructure.
  • Capacity forecasting detects utilisation and potential bottlenecks for supervisor clusters and pods, and shows time remaining projections for CPU, memory, and disk space (see the simplified sketch after this list).
  • Out of the box reporting functionality for workload management, inventory, configuration, and capacity, with configurable alerting to operationalise the workload platform and provide complete visibility and control.
  • Container management pack extends visibility to monitor and visualise multiple Kubernetes clusters, map and correlate virtual infrastructure to Kubernetes infrastructure, set up alerts and monitoring, and provide support for PKS.
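
As a simplified illustration of a time remaining projection (vRealize Operations uses its own analytics; the assumed linear model below only conveys the idea), capacity forecasting boils down to dividing the remaining headroom by the rate of consumption:

# Assumed linear model of a "time remaining" projection, for illustration only.
def time_remaining_days(capacity: float, used: float, daily_growth: float) -> float:
    """Days until the resource is exhausted at the current growth rate."""
    if daily_growth <= 0:
        return float("inf")  # flat or shrinking usage never runs out
    return (capacity - used) / daily_growth

# Example: 2048 GB datastore, 1434 GB used, growing 15 GB per day -> ~41 days.
print(round(time_remaining_days(2048, 1434, 15)))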

You can find out more about what’s new in the vRealize suite at Delivering Modern Infrastructure for Modern Apps with vRealize Suite.

*LOB spend 51% to infrastructure operations spend 49% – source IDC WW Semiannual IT Spending Guide: Line of Business, 09 April 2018 (HW, SW and services; excludes Telecom)

Understand VMware Tanzu, Pacific, and Kubernetes for VMware Administrators

This post was last updated 26/10/2019 and provides an overview of VMware Tanzu and Project Pacific.

Peanut Butter & Jelly: VMware and Kubernetes

There will be more apps deployed in the next 5 years than in the last 40 years (source: Introducing Project Pacific: Transforming vSphere into the App Platform of the Future). The VMware strategy of late has seen a significant shift towards cloud-agnostic software and the integration of cloud-native application development. In November 2018, VMware announced the acquisition of Heptio to help accelerate enterprise adoption of Kubernetes on-premises and across multi-cloud environments. In May and August 2019, VMware announced its intent to acquire Bitnami and Pivotal Software respectively, following the successful launch of Pivotal Container Service (PKS), which was later re-branded VMware Enterprise PKS.

To help better address application support complexities between development and operations teams, VMware has now announced VMware Tanzu:

“In Swahili, ’tanzu’ means the growing branch of a tree. In Japanese, ’tansu’ refers to a modular form of cabinetry. At VMware, Tanzu represents our growing portfolio of solutions to help you build, run and manage modern apps.”

VMware Tanzu is a portfolio of capabilities that empowers cloud-native development by enabling build, run, and manage operations across platforms. Using VMware Tanzu Mission Control, Kubernetes clusters can be created and managed from a single control point.

Another key announcement alongside VMware Tanzu was code-named Project Pacific, enabling IT operators and developers to build and run modern applications with VMware vSphere and native Kubernetes. Project Pacific is focused on re-architecting vSphere so that Kubernetes containers run alongside VMware Virtual Machines (VMs) in ESXi, enabling the development of portable cloud-native applications and micro-services while protecting existing investments in products and skills. You can review the press release of all products in the VMware Tanzu portfolio here, and the split of build, run, and manage products here.

Introduction to Kubernetes

Kubernetes is an open-source orchestration and management tool that provides a simple Application Programming Interface (API), exposing a set of capabilities for defining workloads and services. Kubernetes enables containers to run and operate in a production-ready environment at an enterprise scale by managing and automating resource utilisation, failure handling, availability, configuration, scale, and desired state. Micro-services can be rapidly published, maintained, and updated.
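
As an illustration of defining a workload through that API, the short sketch below uses the official Kubernetes Python client (the kubernetes package); the deployment name, labels, container image, and replica count are arbitrary examples for this post, not values prescribed by vSphere or Tanzu.

# Minimal sketch: declaring a workload through the Kubernetes API.
# All names, labels, and the container image are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # use the current kubectl context

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-web"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # desired state: keep two pods running
        selector=client.V1LabelSelector(match_labels={"app": "demo-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)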

Containers package applications and their dependencies into a distributable image that can run almost anywhere, simplifying the application’s path to live. Kubernetes makes it easier to run applications across multiple cloud platforms, accelerates application development and deployment, and increases agility, flexibility, and the ability to adapt to change.

For VMware administrators with little exposure to DevOps, the following high-level resources can help set a foundational understanding of Kubernetes, and why VMware are making some of these critical changes in architecture and strategy. You can try Kubernetes for yourself using the Kubernetes Academy by VMware, or a Kind Way to Learn Kubernetes.

Kubernetes for Executives: “Containers encapsulate an application in a form that’s portable and easy to deploy. Containers can run on any compatible system—in any cloud—without changes. Containers consume resources efficiently, enabling high density and utilization. Kubernetes makes it possible to deploy and run complex applications requiring multiple containers by clustering physical or virtual resources for application hosting. Kubernetes is extensible, self-healing, scales applications automatically, and is inherently multi-cloud.”


Introduction to Project Pacific (Run)

Kubernetes uses a cluster of nodes to distribute container instances. The master node provides the management (control) plane, containing the API server and scheduling capabilities, while worker nodes act as compute resources for running workloads (known as pods). VMware has re-designed vSphere to include a Kubernetes control plane for managing Kubernetes workloads on ESXi hosts. This control plane is made up of a supervisor cluster using ESXi hosts as the worker nodes, allowing workloads or pods to be deployed and run natively in the hypervisor, alongside traditional Virtual Machine workloads. This new functionality is provided by a new container runtime built into ESXi called CRX. CRX pairs an optimised Linux kernel with the hypervisor and strips out much of the traditional heavy configuration of a Virtual Machine, enabling the binary image and executable code to be loaded and booted quickly. In combination with ESXi’s powerful scheduler, this container runtime is behind some of the performance benchmarks VMware has been publishing, such as improvements even over bare metal.
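
For reference, the node split described above can be seen by querying the standard Kubernetes API; the sketch below lists nodes and their conventional node-role.kubernetes.io labels, and is a generic illustration rather than anything supervisor-cluster specific.

# Sketch: listing cluster nodes and their advertised roles.
from kubernetes import client, config

config.load_kube_config()
for node in client.CoreV1Api().list_node().items:
    roles = [
        key.split("/")[1]
        for key in node.metadata.labels
        if key.startswith("node-role.kubernetes.io/")
    ]
    print(node.metadata.name, roles or ["worker"])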

To ensure containers are running in pods, an agent called the kubelet runs on each Kubernetes cluster node. With the supervisor cluster, the role of the kubelet is handled by a new ‘Spherelet’ running on each ESXi host. Pods are created on a network internal to the Kubernetes nodes, and individual pod IP addresses are ephemeral. A Service in Kubernetes allows a group of pods to be exposed by a common IP address, helping define network routing and load balancing policies without having to track the IP addressing of individual pods.
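
To illustrate the Service concept, the hedged sketch below uses the same Python client to put the pods labelled app=demo-web from the earlier example behind one stable address; the Service name and ports are assumptions for illustration only.

# Sketch: exposing a group of pods behind a common address with a Service.
# Assumes pods labelled app=demo-web already exist (see the earlier sketch).
from kubernetes import client, config

config.load_kube_config()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="demo-web-svc"),
    spec=client.V1ServiceSpec(
        selector={"app": "demo-web"},  # route to any pod carrying this label
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)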

Another great feature of Kubernetes is namespaces. Namespaces are commonly used to provide multi-tenancy across applications or users, and to manage resource quotas (backed in this instance by vSphere Resource Pools). Kubernetes namespaces segment resources for large teams working on a single Kubernetes cluster. Resources can have the same name as long as they belong to different namespaces; think of namespaces as sub-domains and the Kubernetes cluster as the root domain they attach to. Multiple namespaces can exist within the supervisor cluster, each with different storage policies assigned for persistent storage and so on.
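
A short sketch of that multi-tenancy model, again using the Kubernetes Python client; the namespace name and quota values are illustrative assumptions, and in vSphere with Kubernetes a supervisor namespace would typically be created from vCenter rather than through the API like this.

# Sketch: a namespace for one team plus a quota capping its aggregate requests.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

core.create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name="team-a"))
)

quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="team-a-quota"),
    spec=client.V1ResourceQuotaSpec(
        hard={"requests.cpu": "4", "requests.memory": "8Gi", "pods": "20"}
    ),
)
core.create_namespaced_resource_quota(namespace="team-a", body=quota)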

Kubernetes can be accessed through a GUI known as the Kubernetes dashboard, or through a command-line tool called kubectl. Both query the Kubernetes API server to get or manage the state of various resources like pods, deployments, and services. Labels assigned to pods can be used to look up pods belonging to the same application, tier, or service. With Project Pacific, developers use Kubernetes APIs to access the Software-Defined Data Centre (SDDC) and ultimately consume Kubernetes clusters as a service using the same application deployment tools they use today. This service is delivered by Infrastructure Operations teams using existing vSphere tools, with the flexibility of running Kubernetes workloads and Virtual Machine workloads side by side.
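
For example, the label-based lookup described above, the programmatic equivalent of kubectl get pods -l app=demo-web, looks like this with the Python client (the label value is the assumed example label from the earlier sketches):

# Sketch: querying the API server for pods by label.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pods = core.list_namespaced_pod(namespace="default", label_selector="app=demo-web")
for pod in pods.items:
    print(pod.metadata.name, pod.status.phase)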

By applying application-focused management, Project Pacific allows application-level control over policies, quota, and role-based access for developers. Service features provided by vSphere, such as High Availability (HA), Distributed Resource Scheduler (DRS), and vMotion, can be applied at the application level across Virtual Machines and containers. Unified visibility in vCenter for Kubernetes clusters, containers, and existing Virtual Machines provides a consistent view for Developers and Infrastructure Operations alike.

The following resources provide further reading on Project Pacific for enabling Kubernetes on vSphere.

[Image: Project Pacific]

Introduction to VMware Tanzu Mission Control (Manage)

VMware Tanzu Mission Control brings together Kubernetes clusters providing operator consistency for deployment, configuration, security, and policy enforcement across multiple clouds while maintaining developer independence and self-service.

VMware Tanzu Mission Control is a Software as a Service (SaaS) control plane offering that allows administrators to deploy, monitor, and manage all Kubernetes clusters from a single point of control. The beauty of this approach is that lifecycle management, access management, health and diagnostics, security and configuration policies, quota management, and backup or restore capabilities are all consolidated into a single toolset. Kubernetes clusters running on vSphere, VMware Enterprise or Essential PKS, public cloud (AWS, Microsoft Azure, Google Cloud Platform), and managed services or other implementations can all be attached to VMware Tanzu Mission Control. New Kubernetes clusters can also be deployed to all of these platforms from the Tanzu Mission Control interface.

For more information on VMware Tanzu Mission Control, see the product page here, and Introducing VMware Tanzu Mission Control to Bring Order to Cluster Chaos. If you are attending VMworld Europe 2019, have a look through the VMware Tanzu Sessions in the content catalog and also Explore Kubernetes at VMworld 2019. At the time of writing, VMware Tanzu and Project Pacific are in tech preview; this post will be updated when more information is released. Please use the comments section below if you feel any key elements are missing or not explained clearly. There are also some useful video tutorials available from Project Pacific at Tech Field Day Extra at VMworld 2019.