Tag Archives: VMware Cloud Foundation

The Evolution of vSphere: Project Pacific Graduates and Project Monterey is Launched

Project Pacific Introduction

Earlier this year VMware announced the next generation of vSphere: vSphere 7.0 with Kubernetes, an end product released directly from the Project Pacific vision. To transform vSphere into a platform for modern applications, it was rearchitected from the ground up. Initially the Kubernetes functionality was only available through VMware Cloud Foundation (VCF) 4.0, since it relied on NSX-T and vSAN as the network and storage providers.

Modern applications span Virtual Machines and containers, stateful and stateless workloads. A single platform is needed to run these types of applications whilst removing silos, operational complications, and workload sprawl. With the introduction of vSphere 7 Update 1, the Kubernetes integration is renamed vSphere with Tanzu and is the major focus of this release. To provide developer-ready infrastructure to existing customers, vSphere with Tanzu now runs on top of vSphere and vSphere Distributed Switches (vDS) without the need for the full VCF stack.

Why vSphere is Relevant to Application Modernisation

Over the years organisations have accumulated millions of lines of custom software code, including the mission-critical systems that run their organisations. In most cases this code was built to do a job reliably, but not to quickly change and evolve. In today's society, businesses need to convert an idea into a feature or production software faster, and rapidly change services based on customer feedback. Failure to take products or functionality to market quickly enough can impact business operations and revenue.

In addition, a build-up of tech debt has left IT teams maintaining vast amounts of software and hardware, meaning refactoring applications to run in native cloud services is often slower, more complex, and more expensive than anticipated.

In contrast to legacy applications designed around the physical infrastructure, modern applications are typically deployed using Infrastructure as Code (IaC). VMware customers are depending on applications as the primary part of their business model more than ever before, to ensure they stay relevant in the market.

Over 70 million workloads run on vSphere today, along with over 80% of enterprise applications. vSphere with Tanzu embeds a Kubernetes runtime in the hypervisor, allowing developers to consume infrastructure through the Kubernetes API, while IT administrators manage containers as first-class workloads alongside Virtual Machines in vCenter. This makes vSphere with Tanzu the fastest way to get started with Kubernetes and modernise workloads or deploy new applications, without having to procure new products or upskill IT staff.

What’s New with vSphere with Tanzu

vSphere with Tanzu brings Kubernetes to your existing VMware infrastructure, allowing you to bring your own storage, network, and load balancing solutions and get up and running with Kubernetes in around an hour. To run vSphere with Tanzu, a small add-on licence calculated per CPU, called Tanzu Basic, is needed. Customers must have vSphere Enterprise Plus licensing with vSphere Distributed Switches, and must have upgraded to vSphere 7 Update 1.

vSphere with Tanzu utilises any existing block or file storage to present persistent storage volumes. The existing network infrastructure can provide Kubernetes networking on top of vSphere Distributed Switches, using port groups for Kubernetes namespaces, or NSX. Load balancing can be provided by NSX or your own L4 load balancer. At the initial release the supported customer load balancer is HAProxy, although the open API will eventually allow any load balancer provider to be integrated. The presence of the Tanzu Kubernetes Grid (TKG) service ensures conformity with upstream Kubernetes, enabling migration of container-based applications across different Kubernetes-based platforms without refactoring.

VMware Cloud Foundation with Tanzu is still the preferred approach for running Kubernetes at scale, since deployment and policies can be automated and centralised. VCF with Tanzu uses NSX for virtual networking and load balancing with vSAN for storage.

VMware Tanzu is now available in multiple editions:

  • Tanzu Basic: enables Kubernetes in vSphere, improves resource utilisation, and embraces a move towards container-based workloads
  • Tanzu Standard: enables Kubernetes in VCF and multi-cloud for centralised deployments and policies across platforms

vSphere with Tanzu has a built-in 60-day evaluation; to get started, use the vSphere with Tanzu Quick Start Guide.
Tanzu Editions from VMware Tanzu Blog

Further Tanzu Advanced and Tanzu Enterprise editions, focused on DevOps delivery of workloads on Kubernetes and automation and aimed at refactoring applications, are expected to be made available as the product expands.

Project Monterey Introduction

At VMworld 2020, VMware announced the technical preview of Project Monterey, continuing the rearchitecture of vSphere towards fully composable infrastructure. Kit Colbert announced Project Monterey whilst talking about the need for modern applications to consume more CPU cycles and enforce zero-trust security locally, yet across distributed environments. Faster network speeds and cross-VM or container network traffic all add CPU overhead. Enter the SmartNIC.

A SmartNIC (Network Interface Card) runs an onboard DPU (Data Processing Unit) capable of offloading x86 cycles from the main server CPU. By taking care of network, security, and storage tasks like network I/O, micro-segmentation, and encryption, more compute power is made available for VMs and applications. As well as high-speed Ethernet ports, the SmartNIC features an out-of-band management port, and is able to expose or directly pass through PCI bus devices, like NVMe, to the OS on the main CPU. The SmartNIC is essentially a mini server inside a NIC.

Project Monterey is another fundamental shift in the VMware Cloud Foundation architecture, moving core CPU tasks to run on the SmartNIC. To do this, VMware have partnered with hardware and SmartNIC vendors such as NVIDIA, Intel, Dell, Lenovo, and HPE. The SmartNIC will run ESXi simultaneously with the main instance; the two can be managed separately or, for most customers, as a single logical instance. The SmartNIC instance of ESXi handles the network and storage tasks mentioned above, reducing the burden on the main CPU. Furthermore, this second ESXi instance creates a security air gap in the event the hypervisor is ever compromised. Now that the virtualisation layer is separate from the application layer, apps sit in a trust domain where they cannot be impacted by things like side-channel attacks from the virtualisation layer.

In addition to ESXi, the SmartNIC is also capable of running a bare-metal Operating System (OS), which opens up the possibility of VCF managing bare-metal OSes and delivering the same network and storage services as for VMs or containers. Lifecycle management for the SmartNIC and ESXi is rolled up into a single task, so there will be no extra operational overhead for VMware admins. Although there is no product release directly tied to Project Monterey, it is expected that, much like Project Pacific, functionality could materialise over the next 12-18 months.

Future composable infrastructure from VMware vSphere Blog

Finally, since most SmartNICs are based on 64-bit ARM processors, VMware have successfully ported ESXi to ARM. Whilst ESXi-based compute virtualisation has traditionally run on x86 platforms, VMware and ARM have worked together to release ESXi on ARM, initially as a Fling.

VMware and NVIDIA

Although there are multiple SmartNIC providers and collaborations in flight with Project Monterey, the big one to come out of VMworld 2020 was between VMware and NVIDIA. NVIDIA invented and brought to market the GPU, transforming computer graphics and the gaming industry. Since then, the GPU itself has evolved into a diverse and powerful coprocessor, now popular in particular with Artificial Intelligence (AI).

The NVIDIA Mellanox BlueField-2 SmartNIC will eventually be integrated with VMware Cloud Foundation. This provides customers with data centre infrastructure on a chip in each compute node, accelerating network, storage, and security services for high-performance hybrid cloud workloads. An example of this technology is using the DPU to handle firewall capabilities, controlling which traffic is actually passed to the host itself to improve threat protection and mitigate attacks like DDoS.

The data centre architecture will incorporate three core processors: the standard CPU, the GPU for accelerated computing, and the new DPU for offloading core infrastructure and security tasks. Since VMware Cloud Foundation can be consumed on-premises, at the edge, or in public cloud, VMware and NVIDIA are hopeful this technology makes it as easy as possible for organisations to consume AI with a consistent experience. You can read the full announcement here, and more information about Project Monterey with BlueField-2 here.

Google Cloud VMware Engine Explained

Google Cloud VMware Engine (GCVE) is a fully managed VMware-as-a-Service solution provided and managed by Google, or a third-party VMware Cloud Provider, that can be deployed in as little as 30 minutes. VMware Engine runs VMware Cloud Foundation on dedicated Google Cloud bare metal servers, with native cloud integrations including Google’s innovative big data and machine-learning services. The VMware Cloud Foundation stack is made up of VMware vSphere, vCenter, NSX-T, and vSAN. The platform is VMware Cloud Verified and includes Hybrid Cloud Extension (HCX) to facilitate data centre network extension and migration. You can read the full Google announcement from May 2020 here.

Google Cloud Platform

Google Cloud Platform (GCP) offers a wide variety of services from Infrastructure-as-a-Service (IaaS) to Platform-as-a-Service (PaaS) running on the same infrastructure Google uses to provide global end-user services. Google's cloud services are built on data centres designed to save water and electricity; Google has been carbon-neutral since 2007, has powered data centre operations with 100% renewable energy since 2017, and has a target of running on carbon-free energy, 24/7, by 2030.

As an organisation, Google is all about big data at huge scale. Google has one of the largest and most advanced private Software-Defined Networks in the world, stretching across thousands of miles of fibre-optic cable, reaching over 200 countries with 140 network edge locations.

Google-Global-Locations

Perhaps the key differentiator for Google as a cloud services provider is the commercialisation of some innovative big data and machine-learning tools they use internally to serve billions of search results and billions of YouTube videos every day. Google’s focus is really to allow developers to think about the code and applications they develop, and not about operations.

Of course, like all the major cloud providers, Google provides you with the functionality to spin up Virtual Machines, and this is a completely different service to Google Cloud VMware Engine. Google Compute Engine (GCE) supplies the raw building blocks for Virtual Machine instances and networks. GCE enables performance-optimised fast-booting instances in an Infrastructure-as-a-Service (IaaS) model, similar to AWS' Elastic Compute Cloud (EC2). In addition to standard pre-configured instance types, GCE allows you to customise CPU and RAM configurations and save money on 'always-on' VMs with sustained usage discounts. GCE is part of the Google Cloud compute suite of services alongside Platform-as-a-Service offerings like Google App Engine and Google Kubernetes Engine. The comprehensive list of Google Cloud products can be found here; VMware Engine is categorised as compute.
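To illustrate how sustained usage discounts reward always-on VMs, the sketch below applies the tiered multipliers Google documented for general-purpose machine types at the time of writing (each successive quarter of the month billed at 100%, 80%, 60%, and 40% of the base rate). The tier values and eligibility are assumptions to verify against current Google Cloud pricing; the hourly rate used is purely hypothetical.

```python
# Illustrative sketch of GCE sustained use discounts. Assumed tiers:
# each successive quarter of the month is billed at 100%, 80%, 60%,
# then 40% of the base rate. Verify against current Google Cloud
# pricing before relying on these figures.

TIERS = [1.0, 0.8, 0.6, 0.4]  # multiplier per quarter of the month used

def sustained_use_cost(base_hourly, hours_used, hours_in_month=730):
    """Return the discounted cost for a VM running hours_used this month."""
    quarter = hours_in_month / 4
    cost = 0.0
    remaining = hours_used
    for multiplier in TIERS:
        billable = min(remaining, quarter)
        cost += billable * base_hourly * multiplier
        remaining -= billable
        if remaining <= 0:
            break
    return cost

# Hypothetical $0.10/hour instance running the whole month
full = sustained_use_cost(0.10, 730)
print(f"Full month: ${full:.2f} vs ${730 * 0.10:.2f} undiscounted")
```

For an always-on VM the tiers average out to a 30% effective discount, which matches the headline figure Google advertised for sustained use.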

GCP-Example

You can try out Google Cloud here with certain always free products and $300 free credit.

Google Cloud VMware Engine

Google Cloud VMware Engine runs on high-performance bare metal hosts in Google Cloud locations. At the time of writing the service is available from Los Angeles, Virginia, Frankfurt, and Tokyo, with London just launched and Sydney to follow. Further regions, Montreal, the Netherlands, Sao Paulo, and Singapore, are due in Q4 2020. The full VMware Cloud Foundation stack is utilised to provide a secure, scalable, consistent environment for VMware workloads, with Google managing the lifecycle of the VMware stack and all related infrastructure.

By running VMware Cloud Foundation in Google Cloud customers are able to migrate workloads to the cloud without having to refactor applications, replace third-party integrated products, or reskill teams. The existing vSphere network design can be migrated or extended with minimal re-architecture using HCX, and taking advantage of Google Cloud’s edge network security and advanced DDoS protection. The dedicated VMware stack in Google Cloud can be linked back to the on-premises VMware environment using a VPN or high-speed, low-latency private interconnect, with HCX adding hybrid-connectivity for seamless workload and subnet migration.

VMware Engine enables deep integration with third-party services for backup and storage such as Veeam, NetApp, Dell, Cohesity, and Zerto. Infrastructure administrators can leverage the scale and agility of the cloud whilst maintaining operational continuity of tools, policies, and processes.

The Google Cloud console has a built-in VMware Engine User Interface (UI) that integrates with billing and Identity and Access Management. VMware workloads in the VMware Engine environment can connect into native Google Cloud services like BigQuery, Anthos, and Cloud Storage using a private interconnect into Google’s 100Gbps backbone network. While the Google Cloud UI integration provides unified management of VMware workloads and native cloud services, access to vCenter Server enables consistent operations and investment protection for IT support personnel. The familiar vCenter and ESXi host model also helps with licensing through the VMware partner ecosystem.

As with other VMware Cloud platforms, the customer retains control of their Virtual Machines; deciding upon the data location, authorisation and access policies, and the networking and firewall configuration of both north-south and east-west traffic, with separate Layer 2 networks within a private cloud environment. With VMware Engine, Google also allows 24-hour privilege elevation for installing and managing tools requiring vCenter administrative access.

Google-Cloud-VMware-Engine

Technical specification for Google Cloud VMware Engine:

VMware Cloud Foundation in Google Cloud is built on isolated single-tenancy bare-metal infrastructure. All-flash NVMe storage in a hyper-converged setup provides the speed and performance required for the most demanding workloads like Oracle, VDI, Microsoft Exchange, and SQL. Data is encrypted at rest using vSAN, with support for customer-managed keys. Google Cloud Storage or third-party solutions can be leveraged for lower-cost and secondary storage tiers. The standard node size is Google's ve1-standard-72 with the following specifications:

  • CPU: Intel Xeon Gold 6240 (Skylake) 2.6 GHz (3.9 GHz Turbo) x2, 36 cores/72 hyper-threads
  • Memory: 768 GB
  • Data: 19.2 TB (6 x 3.2 TB NVMe)
  • Cache: 3.2 TB (2 x 1.6 TB NVMe)
  • Network: 100 Gbps throughput (4 x Mellanox ConnectX-4 Lx Dual Port 25 GbE)


The minimum configuration is 3 hosts, up to 16 in a cluster, with a 64 host maximum per private cloud (soft limit) and any number of private clouds. A private cloud can be deployed in around 30 minutes while adding hosts to an existing cloud can be done in 15 minutes. Hosts can be purchased as a 1 or 3-year commitment or using on-demand per-hour pricing with all infrastructure costs and associated licenses included.
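As a quick sanity check, the node specifications and cluster limits above can be turned into raw capacity totals. These are raw figures only: vSAN RAID/FTT protection overheads, slack space, and management VM reservations all reduce what is actually usable, so treat the output as an upper bound.

```python
# Raw capacity arithmetic for GCVE clusters built from ve1-standard-72
# nodes (figures taken from the spec list above). Raw totals only:
# vSAN protection overheads and management reservations reduce what is
# actually usable.

NODE = {"cores": 36, "threads": 72, "ram_gb": 768, "data_tb": 19.2}

def cluster_raw(hosts):
    """Return raw per-cluster totals for a valid GCVE cluster size."""
    if not 3 <= hosts <= 16:
        raise ValueError("a GCVE cluster is 3 to 16 hosts")
    return {k: v * hosts for k, v in NODE.items()}

print(cluster_raw(3))   # minimum configuration
print(cluster_raw(16))  # largest single cluster
```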

VMware administrators can use Storage Policy-Based Management (SPBM) to set policies defining RAID or protection configuration, and IOPS based performance, using the vCenter Server interface. Storage policies can be applied to many objects or as granular as an individual VMDK file. GCVE enables the bring your own Key Management Service (KMS) model, allowing the customer to maintain and manage vSAN encryption keys.

Access to the NSX-T Manager means customers can make use of the full suite of L2-L7 services available, including load balancing, perimeter and distributed firewalls, and full control over private networks.

With Google’s backbone 100 Gbps network taking care of GCVE private cloud to VPC connectivity (in the same region or between regions, without a VPN), there are a couple of options for on-premises connectivity. Hybrid cloud connectivity is achieved using either a private interconnect or a secure VPN over the Internet. The interconnect is a low latency, typically high bandwidth connection; available in 10 Gbps or 100 Gbps, or 50 Mbps to 10 Gbps through a partner.

Google sell and support VMware Engine; the customer's contract is with Google, while the VMware Cloud Verified accreditation gives existing VMware customers peace of mind that hybrid environments are supported end to end. Google provide 24×7 support with a 99.99% SLA on the network and storage infrastructure, and 99.9% for the management components.
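Those SLA percentages translate into concrete downtime allowances. The arithmetic below assumes a 30-day month for simplicity; the actual SLA terms define their own measurement window and exclusions, so this is illustrative only.

```python
# Convert an availability SLA percentage into allowed downtime per
# month, assuming a 30-day month (illustrative; the real SLA terms
# define the measurement window).

def downtime_minutes_per_month(sla_percent, days=30):
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

print(f"99.99%: {downtime_minutes_per_month(99.99):.1f} minutes/month")  # infrastructure
print(f"99.9%:  {downtime_minutes_per_month(99.9):.1f} minutes/month")   # management plane
```

In other words, the 99.99% infrastructure SLA allows roughly four minutes of downtime in a 30-day month, while the 99.9% management SLA allows around three quarters of an hour.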

Example use cases for Google Cloud VMware Engine:

  • Data Centre Extension or Migration: extend data centre boundaries and scale to the cloud or additional regions quickly with guaranteed compatibility of existing workloads. Achieve true workload mobility between VMware environments for high availability and demand-based scalability. Migrate Virtual Machines to the cloud, and back if needed, without refactoring applications or even changing network settings.
  • Disaster Recovery (DR): backup and DR targets can be moved to the cloud to improve availability options and reduce total cost of ownership. By taking advantage of Google’s global infrastructure organisations can improve system availability by deploying across multiple zones or regions. Business-critical applications can be scaled on-demand, either through native services or SDDC expansion in minutes. VMware Site Recovery Manager (SRM) can automate failover for use cases where the customer data centre is the primary site, and GCVE is the DR site.
  • Global Expansion and Virtual Desktop Infrastructure (VDI): expansion of business and services into new regions without having to commission new data centres and hardware. Burst capacity in the locations needed and provide low latency, local content delivery at a global scale. This use case is highlighted further with the need for many organisations to provide remote working, often in the form of virtual desktops. VMware Horizon 7 can provide a highly available pod architecture deployment using GCVE infrastructure, with customer managed desktops.
  • Data Analytics and Innovation: access to Google’s internal big data services for querying massive data-sets in seconds, with actionable insights from serverless and machine-learning data analytics platforms. IT staff can concentrate on new projects, or improving systems and processes, whilst Google maintains upgrades, updates, and security patches for all the underlying infrastructure.
  • Hybrid Applications: high-speed, low-latency (<2 ms) access to native Google Cloud services with Virtual Private Cloud (VPC) peering enables hybrid applications across platforms. For example, front-end web and application servers migrated from on-premises data centres to Google Cloud VMware Engine, with large databases in a dedicated VPC and millisecond response times.

Google Cloud VMware Engine provides secure, compliance-ready infrastructure; globally with ISO 27001/27017/27018, SOC 1/2/3, PCI DSS, CSA STAR, MPAA, GxP, and an Independent Security Evaluators (ISE) audit. In the UK and Europe the platform is also compliant with the NCSC Cloud Security Principles, GDPR, Privacy Shield, and EU Model Contract Clauses. Under the shared security model the above is caveated as compliance ready: the necessary processes and controls have been implemented, but a customer could still implement poor security controls and governance in their own environment.

If you’re interested in learning more about Google Cloud VMware Engine take a look at the useful links below, along with the GCVE documentation page, and the GCVE product page; which lists features, reference architecture, and pricing. VMware Engine is also listed in the Google Cloud pricing calculator.

vSphere 7 with Kubernetes and Tanzu on VMware Cloud Foundation

vSphere 7 Cloud Infrastructure for Modern Applications Part 1

vSphere 7 Cloud Infrastructure for Modern Applications Part 2

VMware Cloud Foundation 4

VMware Cloud Foundation (VCF) 4 delivers vSphere with Kubernetes at cloud scale, bringing together developers and IT operations by providing a full-stack Hyper-Converged Infrastructure (HCI) for Virtual Machines (VMs) and containers. By utilising software-defined infrastructure for compute, storage, network, and management IT operations can provide agility, flexibility, and security for modern applications. The automated provisioning and maintenance of Kubernetes clusters through vCenter Server means that developers can rapidly deploy new applications or micro-services with cloud agility, scale, and simplicity. At the same time, IT operations continue supporting the modern application framework by leveraging existing vSphere functionality and tooling.

VMware has always been an effective abstraction provider, VCF 4 with Tanzu Services View takes infrastructure abstraction to the next level. Within vSphere underlying infrastructure components are abstracted into a set of services exposed to APIs, allowing the developer to look down from the application layer to consume the hybrid infrastructure services. Meanwhile, IT operations can build out policies and manage pods alongside VMs at scale using vSphere.

VCFwKubernetes

vSphere 7 with Kubernetes

Now and over the next 5 years, we will see a shift in how applications are built and run. In 2019, Line of Business (LOB) IT, or shadow IT, spend exceeded Infrastructure and Operations IT spend for the first time*. Modern applications are distributed systems built across serverless functions or managed services, containers, and Virtual Machines (VMs), replacing typical monolithic VM application and database deployments. The VMware portfolio is expanding to meet the needs of customers building modern applications, with a portfolio of services from Pivotal, Wavefront, CloudHealth, Bitnami, Heptio, Bitfusion, and more. In the container space, VMware is strongly positioned to address modern application challenges for developers, business leaders, and infrastructure administrators.

Launched on March 10 2020, with expected April 2020 availability, vSphere 7 with Kubernetes is powering VMware Cloud Foundation 4. The vSphere 7 Kubernetes integration, the first product including capabilities announced as part of Project Pacific, provides real-time access to infrastructure in the Software-Defined Data Centre (SDDC) through familiar Kubernetes APIs, delivering security and performance benefits even over bare-metal hardware. The Kubernetes integration enables the full SDDC stack to utilise the Hybrid Infrastructure Services from ESXi, vSAN, and NSX-T, which provide the Storage Service, Registry Service, Network Service, and Container Service. Developers do not need to translate applications to infrastructure, instead leveraging existing APIs to provision micro-services, while infrastructure administrators use existing vCenter Server tooling to support Kubernetes workloads alongside Virtual Machines.

You can read more about the workings of vSphere with Kubernetes in this Project Pacific Technical Overview for New Users by Ellen Mei. Also, Initial Placement of a vSphere Pod by Frank Denneman is another useful and recent article detailing the process behind the ESXi container runtime.

VMware Application Modernisation Portfolio with Tanzu

VMware Cloud Foundation Services is the first manifestation of Project Pacific, now vSphere with Kubernetes, and provides consistent services, managed by IT operations, to developers, ultimately anywhere the VMware Cloud platform is running.

In the past, VMware was efficient at taking many Virtual Machines and running them across multiple hypervisors in a cluster; the challenge then was consolidating numerous physical servers for cost-saving, management, and efficiency gains. Today application deployments are groups of VMs, which presents a new challenge: consolidating distributed applications across multiple clouds. Project Pacific brings Kubernetes and Tanzu to the vSphere environment, making it operationally easier to get upstream Kubernetes running, but also to effortlessly in-place upgrade and maintain Kubernetes clusters. This functionality accelerates vSphere into a much more modern, API-driven, self-service, fast-provisioning interface backed by ESXi optimised for all workloads.

Tanzu Kubernetes Grid (TKG) is a Kubernetes runtime built into VMware Cloud Foundation Services, allowing installation and maintenance of multi-cluster Kubernetes environments across different infrastructure. Tanzu Kubernetes Grid also provides operational consistency across Amazon Web Services (AWS), Azure, and Google Compute Engine (GCE). This is different to public cloud-managed Kubernetes services such as EKS, AKS, GKE, etc., as it integrates natively into the existing infrastructure, meeting the needs of organisations who require abstracted logging, events, governance policies, and admission policies. This capability delivers not just Kubernetes but a set of management services to provision, deploy, upgrade, and maintain Kubernetes clusters. By having this granular level of control over the underlying VMs or cloud environment, customers can implement, monitor, or enforce their own security policies and governance.

Tanzu Mission Control provides operator consistency for deployment, configuration, security, and policy enforcement for Kubernetes across multiple clouds, simplifying the management of Kubernetes at scale. Tanzu Mission Control is a Software as a Service (SaaS) control plane offering allowing VMware administrators to standardise Identity Access Management (IAM), configuration and policy control, backup and recovery, ingress control, cost accounting, and more. The multi-cluster control plane supports the propagation of Kubernetes across vSphere, Pivotal Container Services (PKS), AWS, Azure, and Google Cloud Platform (GCP), all from a single point of control.

VMware have announced the availability of Tanzu Kubernetes Grid, Tanzu Mission Control, and Tanzu Application Catalog (open-source software catalog powered by Bitnami), providing a unified platform to build, run, and manage modern applications.

VMware Cloud Foundation Additional Updates

VMware Cloud Foundation (VCF) 4 is expected in April 2020 and includes fully updated software building blocks for the private cloud, including vCenter, ESXi, and vSAN 7.0, plus the addition of NSX-T.

VCF with NSX-T is made up of workload domain constructs, and by default every architecture starts with a management domain, which hosts vCenter, private NSX Managers, and the edge cluster. There are a couple of changes in VCF 4 that reduce the footprint of the management domain: NSX-T is being fully utilised for the first time, the NSX Edge cluster can be deployed at day X, NSX Manager and controllers are now integrated, and the Platform Services Controller (PSC) now uses the embedded model with vCenter. Additionally, we have the capability to use Application Virtual Networks (AVN) with BGP peering on deployment, or again as a day X action. Another side note is that Log Insight has been changed from a default deployment requirement to an optional day X action.

Workload domains are built out to serve vSphere with Kubernetes and expose the network services for developers to use. Workload domains can be built on new or existing NSX-T Managers, offering the choice of a one-to-one or one-to-many relationship for NSX-T instances with VCF 4. This provides customers with the option of separating out NSX-T instances, while simultaneously protecting the management domain. Day X automation can then be used to place edge deployments in the appropriate cluster:

NSXWorkloadDomains

SDDC Manager and Lifecycle Manager (LCM) provide automated provisioning and upgrades. Lifecycle Manager enhances ease of upgrade and patching by providing automated lifecycle management; with update notifications, review and schedule options, and monitoring and reporting. Also, LCM can manage all inter-dependencies of versioning at a cluster level, from vSphere right through to the Kubernetes runtime. SDDC Manager orchestrates and automates the provisioning of vSphere with Kubernetes workload domains, and crucially enables the LCM functionality for maintaining upgrades across the entire software stack, eliminating typical day 2 challenges for developers.

SDDCManager

Multi-Instance Management: multiple VCF instances can now be federated to provide a global view of workload domains without the installation of any additional components. Administrators can click-through to any VCF data centre to centrally view patching, upgrades, maintenance, and remediation operations.

New Security Enhancements: native Workspace ONE Identity Access integration for vRealize suite and NSX-T using AD or LDAP identity sources. Admin and Operator roles for API and UI, with the operator role providing all privileges minus password management, user management, and backup and restore. Token-based authentication is also now enforced across all APIs.

You can find out more about the VMware Cloud Foundation 4 update at What's New in VMware Cloud Foundation 4 and Delivering Kubernetes at Cloud Scale with VMware Cloud Foundation 4.

VMware vRealize Cloud Management Integration

The vRealize Cloud Management product suite has been comprehensively updated to include vSphere 7 with Kubernetes support. vRealize Operations (vROps) 8.1 is now available for the first time as a SaaS (Software as a Service) offering with an enhanced feature-set. Some of the key new functionality enables self-driving operations across multi-cloud, hybrid-cloud and data centre environments.

vROps 8.1 and vROps Cloud now fully support integrations with GCP, native VMware Cloud on AWS as a cloud account (including additional vSAN, NSX-T, and Cloud Services Portal information with billing), an enhanced portfolio of AWS objects, CloudHealth, and vSphere Kubernetes constructs. The latter crucially enables Kubernetes cluster onboarding, discovery, continuous performance optimisation, capacity and cost optimisation, monitoring and troubleshooting, and configuration and compliance management. Furthermore, new dashboards and topology views of workload management can be leveraged to display all Kubernetes objects visible from vCenter, for a complete end-to-end view of the infrastructure.

K8s_Dashboard

vRealize Operations 8.1 and Cloud integration for vSphere with Kubernetes:

  • Automatically discover new constructs of supervisor cluster, namespaces, pods, and Tanzu Kubernetes clusters.
  • New dashboards and summary pages for performance, capacity, utilisation, and configuration management of Kubernetes constructs, with full topology views from Kubernetes substrate to physical infrastructure.
  • Capacity forecasting detects utilisation and potential bottlenecks for supervisor clusters and pods and shows time remaining projections for CPU, memory, and disk space.
  • Out of the box reporting functionality for workload management, inventory, configuration, and capacity, with configurable alerting to operationalise the workload platform and provide complete visibility and control.
  • Container management pack extends visibility to monitor and visualise multiple Kubernetes clusters, map and correlate virtual infrastructure to Kubernetes infrastructure, set up alerts and monitoring, and provide support for PKS.
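The "time remaining" projections described above boil down to trend extrapolation. The sketch below fits a least-squares line to daily utilisation samples and extrapolates to a capacity ceiling; it is a simplified stand-in, as vROps itself uses more sophisticated forecasting models, and the sample values are purely hypothetical.

```python
# Simplified "time remaining" capacity projection: fit a least-squares
# line to daily utilisation samples and extrapolate to a ceiling.
# Purely illustrative; vROps uses more sophisticated models.

def time_remaining(samples, capacity):
    """Days until the linear trend of daily samples crosses capacity.

    Returns None if utilisation is flat or falling."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    denom = sum((x - mean_x) ** 2 for x in xs)
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(xs, samples)) / denom
    if slope <= 0:
        return None  # no growth trend to extrapolate
    intercept = mean_y - slope * mean_x
    crossing_day = (capacity - intercept) / slope
    return crossing_day - (n - 1)  # days beyond the latest sample

# Hypothetical CPU utilisation (%) sampled daily over a week,
# with an alerting ceiling at 90%
print(round(time_remaining([60, 62, 63, 65, 68, 70, 71], 90), 1))
```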

You can find out more about what’s new in the vRealize suite at Delivering Modern Infrastructure for Modern Apps with vRealize Suite.

*LOB spend 51% to infrastructure operations spend 49% – source IDC WW Semiannual IT Spending Guide: Line of Business, 09 April 2018 (HW, SW and services; excludes Telecom)