VMware Cloud on Dell EMC Overview

Introduction

Managed and as-a-service models are a growing trend among infrastructure consumers. Customers generally want ease and consistency across both IT and finance, for example by shifting towards OpEx funding models.

For large or enterprise organisations with significant investments in existing technologies, processes, and skills, refactoring everything into cloud native services can be complex and expensive. For these types of environments the strategy has sharpened from Cloud-First to Cloud-Smart. A Cloud-Smart approach enables customers to transition to the cloud quickly where it makes sense to do so, without uprooting existing live services, or workloads and data that have no natural path to public cloud.

In addition to the operational complexities of rearchitecting services, many industries have strict regulatory and compliance rules that must be adhered to. Customers may have specific security standards or customised policies requiring sensitive data to be located on-premises, under their own physical control. Applications may also have low latency requirements or the need to be located in close proximity to data processing or back-end systems. This is where VMware Local Cloud as a Service (LCaaS) can help, combining the key benefits of both public cloud and on-premises environments.

What is VMware Cloud on Dell EMC?

VMware Cloud on Dell EMC is a fully managed Infrastructure-as-a-Service (IaaS) local-cloud deployment. A dedicated rack with all supporting hardware and equipment is wheeled into the customer site where it is maintained directly by VMware Site Reliability Engineering (SRE). The customer provides the physical location for the rack to sit, the power source, and the existing network for the data plane switches to plug into.

VMware Cloud on Dell EMC delivers a fully integrated software and hardware stack, jointly engineered by VMware and Dell EMC.

VMware Cloud on Dell EMC Overview

The VMware Software Defined Data Centre (SDDC) overlay, and hardware underlay, comprises:

  • VMware vSphere and vCenter for compute virtualisation and management
  • VMware vSAN for storage virtualisation
  • VMware NSX-T for network virtualisation
  • VMware HCX for live migration of virtual machines with stretched Layer 2 capability
  • 3-26 Dell VxRail Hyper-Converged Infrastructure (HCI) nodes per full-height rack (and currently up to 3 racks per SDDC)
  • 1 non-chargeable standby VxRail node per rack for service continuity
  • Redundant Power Distribution Units (PDUs)
  • Uninterruptible Power Supply (UPS) for half-height rack configurations
  • Redundant Top of Rack (ToR) data plane switches
  • Redundant VMware SD-WAN appliances for remote management

All of this is delivered in a dedicated rack, as a fully managed service, with a single point of support directly with VMware. VMware SRE will take care of updating and maintaining all components of the software overlay, firmware updates, and management or repair of the underlying hardware. The customer retains responsibility for the virtual machines they run on the infrastructure, plus configuration such as network and storage policies. Let’s take a deeper dive. You can also find out more from the VMware Cloud on Dell EMC product page, or the VMware Cloud on Dell EMC Solution Overview Brief.

VMware Cloud on Dell EMC can be used in any location the customer has authority to land equipment into. A site survey needs to be carried out before kit is shipped and installed. VMware is the single point of contact for support (unless you are purchasing through Dell APEX, more on that at the end of this post). For support issues that require an on-site fix, a Dell engineer will attend, but VMware will manage that support case directly. The subscription price per node is inclusive of all hardware, software, licensing, support, and services, outlined in the graphic below.

VMware Cloud on Dell EMC What’s Included

The VMware SRE boundary ends at the LAN link into the customer's network (beyond the ToR switches); VMware teams have no access beyond this point. Equally, the customer boundary ends at the LAN link between the SDDC and the VeloCloud Edge devices in the rack. The VeloCloud Edge devices provide connectivity over VMware's SD-WAN using a secure IPsec tunnel, and will need outbound connectivity on ports TCP 443 and UDP 2426.
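
As a quick pre-installation check, outbound TCP 443 reachability from the management network can be verified with a short Python sketch (the endpoint below is a placeholder; the actual SD-WAN gateway addresses are provided during onboarding). UDP 2426 is connectionless, so a simple connect test cannot confirm it and it should be validated against the firewall rules instead.

    import socket

    # Placeholder endpoint; substitute the SD-WAN gateway/orchestrator
    # addresses supplied by VMware during onboarding.
    ENDPOINTS = [("vcgw-example.velocloud.net", 443)]

    def check_tcp(host: str, port: int, timeout: float = 5.0) -> bool:
        """Return True if an outbound TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as err:
            print(f"  {host}:{port} unreachable ({err})")
            return False

    for host, port in ENDPOINTS:
        status = "OK" if check_tcp(host, port) else "BLOCKED"
        print(f"TCP {host}:{port} -> {status}")

    # UDP 2426 (VMware SD-WAN data path) is connectionless, so it cannot be
    # confirmed with a simple connect test; verify it via firewall rules instead.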

There are multiple security processes in place to protect against unauthorised access. For example, in order to access a customer environment, a support engineer must generate one-time, time-sensitive credentials, which require a support case to be raised in the system. All activity is logged and monitored by VMware's Cyber Security Operations Centre (CSOC), and can also be forwarded to an equivalent logging setup on the customer side. Further references and information can be found in the VMware Cloud on Dell EMC Shared Responsibility Model Overview.

VMware Cloud on Dell EMC hosts come in standardised ‘T-Shirt’ sizes to optimise CPU, memory, and storage resources. Currently there are six different node sizes, from extra small through to extra large. You can find full specifications of the node sizes and rack types in the VMware Cloud on Dell EMC Service Data Sheet. Here is a quick rundown of the sizing naming convention:

VMware Cloud on Dell EMC Node Sizing Guide

Why VMware Cloud on Dell EMC?

You’ll see me advocate public cloud a lot on this blog, but on-premises infrastructure often has its use cases. Data sovereignty, regulatory and compliance, workload to data proximity, latency requirements, local control, and existing investments all spring to mind. Running infrastructure at the edge is also becoming more prominent and overlaps with some of these use cases. As systems are more distributed, and consumers have more choice, there are many benefits in creating consistent application, infrastructure, and operating experiences across private cloud, public cloud, and edge locations.

VMware Cloud on Dell EMC benefits from a cloud operating and delivery model, whilst being classed as an on-premises service. This means that regulatory and data sovereignty requirements can be satisfied, as all customer data is held on the local hardware. The VMware SD-WAN appliances and VMware Cloud portal are only used for management, without any further access into the customer's network. VI admins continue to use vCenter Server as normal to manage virtual machines, however they no longer need to worry about maintaining the underlying infrastructure. IT teams benefit from a managed service operating model with a predictable monthly or annual subscription cost, without the depreciation and management overhead of hardware ownership.

VMware Cloud on Dell EMC Use Cases

A great use case for VMware Cloud on Dell EMC is VDI. Whether or not you have data or application proximity requirements, the Hyper-Converged Infrastructure (HCI) and node size configurations fit exceptionally well with virtual desktops utilising hyper-threading and instant clone technology. The SDDC can be built as a brand new pod, or used to extend an existing pod within the customer's environment.

At the time of writing Horizon perpetual licenses can be used to run virtual desktops on VMware Cloud on Dell EMC, along with existing Microsoft licensing. A common consideration when moving VDI to the cloud is Microsoft license mobility for Windows, Office 365, and SQL, and the requirement for Horizon Universal licensing. Microsoft treat this solution as customer on-premises, which means that implementing VMware LCaaS delivers the best of both worlds. You can read more about the VDI use case in the VMware Horizon Deployed on VMware Cloud on Dell EMC technical overview.

As well as VDI, other popular use cases for VMware Cloud on Dell EMC include data centre modernisation, a change in IT funding model, application modernisation, and services with low latency, sensitive data, or data sovereignty requirements. VMware Cloud on Dell EMC integrates seamlessly with existing on-premises environments, with continuity of third party tools and processes already in place, such as backups, monitoring, and security. Hybrid Linked Mode allows single pane of glass management of vCenter Servers across IaaS and self-managed infrastructure. You can find out more about the benefits of VMware Cloud on Dell EMC, including Total Cost of Ownership (TCO) improvements, in the VMware Cloud Economics data sheet.

VMware Local Cloud as a Service (LCaaS)

Getting Started with VMware Cloud on Dell EMC

VMware Cloud on Dell EMC can be ordered, customised, and scaled through the VMware Cloud portal. Delivery and installation takes place in a matter of weeks, including the site survey. Check with your VMware or Dell account team for up-to-date timelines; I have been quoted between 4 and 8 weeks at the time of writing (early 2022), which may fluctuate depending on hardware availability. The service is available in the UK, USA, France, and Germany, with plans to roll out to further regions.

When ordering the service, the customer can select the rack type and see full details of the host capacity, network bandwidth, height in rack units, and power configuration. The customer will be asked to confirm that the site location meets the rack requirements, including rack dimensions, power source, and environmental variables such as temperature and humidity.

VMware Cloud on Dell EMC Example Requirements

Next the customer will be asked to select the host type, the number of hosts, and provide the networking settings. A CIDR block is needed for the management subnets, including rack out-of-band management, SDDC management, and the VMware SD-WAN appliances. It is very important that the IP ranges are correct and do not overlap with any existing networks. Changing these values post-order will cause additional complexity and delays.
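
Because overlapping ranges are painful to fix after the order is placed, it is worth validating the proposed management CIDR blocks against each other and against existing networks up front. A minimal sketch using Python's built-in ipaddress module (all of the subnets shown are made-up examples):

    import ipaddress
    from itertools import combinations

    # Hypothetical example ranges; replace with your own allocations.
    proposed = {
        "rack-oob-management": "192.168.10.0/26",
        "sddc-management":     "192.168.20.0/23",
        "sd-wan-appliances":   "192.168.30.0/28",
    }
    existing = ["10.0.0.0/8", "192.168.20.0/24"]  # networks already in use on-site

    nets = {name: ipaddress.ip_network(cidr) for name, cidr in proposed.items()}

    # Check the proposed subnets against each other.
    for (a, na), (b, nb) in combinations(nets.items(), 2):
        if na.overlaps(nb):
            print(f"Overlap between {a} ({na}) and {b} ({nb})")

    # Check the proposed subnets against existing on-premises networks.
    for name, net in nets.items():
        for cidr in existing:
            if net.overlaps(ipaddress.ip_network(cidr)):
                print(f"{name} ({net}) overlaps existing network {cidr}")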

Ports TCP 443 and UDP 2426 will need to be open outbound to connect to VMware Cloud. The term commitment is also selected during the order process, and the term begins when the SDDC is deployed and activated from the VMware Cloud console. You can track the status of the order at any time from the portal.

VMware Cloud on Dell EMC Example SDDC Order

When the rack arrives on-site it is fully cabled and ready to be connected to the customer environment. The ToR switches are physically connected to the existing upstream network using customer-provided SFP adapters and copper or fibre cables. Dynamic routing can be configured using eBGP, facilitating fast routing failover in the event of a ToR switch or upstream switch failure. Static routing can also be used but is less optimal.

Once the SDDC is deployed the L3 ECMP uplink connectivity between the ToR switches and the existing upstream network can be configured from the VMware Cloud console.

VMware Cloud on Dell EMC Example SDDC Summary

After setup is complete the service maintains operational consistency with existing VMware environments; for example virtual machines are managed using vCenter Server, and new networks are created using NSX-T. For more information review the VMware Cloud on Dell EMC Data Sheet, or the more comprehensive VMware Cloud on Dell EMC Technical Overview.

Another great place to get started is the VMware Cloud Tech Zone. You can find detailed white papers, reference architectures, technical demos, and hands on labs for VMware Cloud on Dell EMC specifically at the VMware Cloud on Dell EMC Tech Zone.

VMware Cloud on Dell EMC vs Dell APEX Cloud Services

At VMworld 2021, VMware and Dell announced general availability of Dell APEX Cloud Services With VMware Cloud.

As outlined in the introduction of this post, many organisations are moving to as-a-service and subscription models. Dell, along with VMware, have recognised this shift and made many of their compute and storage platforms available on managed and subscription-based plans. Dell APEX Cloud Services is the self-service portal where Dell customers can configure and order such solutions.

Dell APEX Cloud Services with VMware Cloud allows Dell customers to order VMware Cloud on Dell EMC directly through Dell. Although this may seem confusing, it gives customers an alternative purchasing route which can help leverage existing commercial agreements, credits, partners, and relationships.

The core technical concepts of the solution outlined above all remain the same. The key difference is that when purchasing through Dell APEX, the customer is buying directly from Dell (instead of VMware), and Dell are the single point of contact for all support and maintenance (instead of VMware). Whilst the order process remains fundamentally the same, the screenshots above are of the VMware Cloud portal, and so the Dell APEX portal will look slightly different.

Google Cloud VMware Engine Explained

Introduction

Google Cloud VMware Engine (GCVE) is a fully managed VMware-as-a-Service solution provided and managed by Google, or a third-party VMware Cloud Provider, that can be deployed in under 2 hours. VMware Engine runs VMware Cloud Foundation on dedicated Google Cloud bare metal servers, with native cloud integrations including Google’s innovative big data and machine-learning services. The VMware Cloud Foundation stack is made up of VMware vSphere, vCenter, NSX-T, and vSAN. The platform is VMware Cloud Verified and includes Hybrid Cloud Extension (HCX) to facilitate data centre network extension and migration. You can read the full Google announcement from May 2020 here.

Google Cloud VMware Engine

Google Cloud Platform

Google Cloud Platform (GCP) offers a wide variety of services, from Infrastructure-as-a-Service (IaaS) to Platform-as-a-Service (PaaS), running on the same infrastructure Google uses to provide global end-user services. Google's cloud services are built on data centres designed to save water and electricity; Google has been carbon-neutral since 2007, has powered data centre operations with 100% renewable energy since 2017, and has a target of running on carbon-free energy, 24/7, by 2030.

As an organisation, Google is all about big data at huge scale. Google has one of the largest and most advanced private Software-Defined Networks in the world, stretching across thousands of miles of fibre-optic cable through over 200 countries, with 140 network edge locations.

Google-Global-Locations

Perhaps the key differentiator for Google as a cloud services provider is the commercialisation of some innovative big data and machine-learning tools they use internally to serve billions of search results and billions of YouTube videos every day. Google’s focus is really to allow developers to think about the code and applications they develop, and not about operations.

Of course, like all the major cloud providers, Google provides you with the functionality to spin up Virtual Machines, and this is a completely different service to Google Cloud VMware Engine. Google Compute Engine (GCE) supplies the raw building blocks for Virtual Machine instances and networks. GCE enables performance-optimised fast-booting instances in an Infrastructure-as-a-Service (IaaS) model, similar to AWS' Elastic Compute Cloud (EC2). In addition to standard pre-configured instance types, GCE allows you to customise CPU and RAM configurations and save money on ‘always-on’ VMs with sustained use discounts. GCE is part of the Google Cloud compute suite of services alongside Platform-as-a-Service offerings like Google App Engine and Google Kubernetes Engine. The comprehensive list of Google Cloud products can be found here, VMware Engine is categorised as compute.

GCP-Example

You can try out Google Cloud here with certain always free products and $300 free credit.

Google Cloud VMware Engine

Google Cloud VMware Engine runs on high-performance bare metal hosts in Google Cloud locations. At the time of writing the service is available from Los Angeles, Virginia, Frankfurt, and Tokyo, with London just launched and Sydney to follow. Further regions, Montreal, Netherlands, Sao Paulo, and Singapore, are due in Q4 2020. The full VMware Cloud Foundation stack is utilised to provide a secure, scalable, consistent environment for VMware workloads, with Google managing the lifecycle of the VMware stack and all related infrastructure.

By running VMware Cloud Foundation in Google Cloud, customers are able to migrate workloads to the cloud without having to refactor applications, replace third-party integrated products, or reskill teams. The existing vSphere network design can be migrated or extended with minimal re-architecture using HCX, taking advantage of Google Cloud's edge network security and advanced DDoS protection. The dedicated VMware stack in Google Cloud can be linked back to the on-premises VMware environment using a VPN or a high-speed, low-latency private interconnect, with HCX adding hybrid connectivity for seamless workload and subnet migration.

VMware Engine enables deep integration with third-party services for backup and storage such as Veeam, NetApp, Dell, Cohesity, and Zerto. Infrastructure administrators can leverage the scale and agility of the cloud whilst maintaining operational continuity of tools, policies, and processes.

The Google Cloud console has a built-in VMware Engine User Interface (UI) that integrates with billing and Identity and Access Management. VMware workloads in the VMware Engine environment can connect into native Google Cloud services like BigQuery, Anthos, and Cloud Storage using a private interconnect into Google’s 100Gbps backbone network. While the Google Cloud UI integration provides unified management of VMware workloads and native cloud services, access to vCenter Server enables consistent operations and investment protection for IT support personnel. The familiar vCenter and ESXi host model also helps with licensing through the VMware partner ecosystem.
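
To illustrate the native service integration: once private connectivity to a VPC is in place, a workload running inside the VMware Engine private cloud can call Google APIs like any other client. A minimal sketch using the google-cloud-bigquery Python library (the project name is hypothetical, and credentials are assumed to come from a service account configured on the VM):

    from google.cloud import bigquery  # pip install google-cloud-bigquery

    # Hypothetical project; authentication uses the service account key or
    # Application Default Credentials configured on the VM.
    client = bigquery.Client(project="my-gcve-analytics-project")

    query = """
        SELECT name, SUM(number) AS total
        FROM `bigquery-public-data.usa_names.usa_1910_2013`
        WHERE state = 'TX'
        GROUP BY name
        ORDER BY total DESC
        LIMIT 5
    """

    # The query executes in BigQuery; only the small result set traverses the
    # private connection back to the VM in the VMware Engine private cloud.
    for row in client.query(query).result():
        print(f"{row['name']}: {row['total']}")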

As with other VMware Cloud platforms, the customer retains control of their Virtual Machines; deciding upon the data location, authorisation and access policies, and the networking and firewall configuration for both north-south and east-west traffic, with separate Layer 2 networks within a private cloud environment. With VMware Engine, Google also allows 24-hour privilege elevation for installing and managing tools requiring vCenter administrative access.

Google-Cloud-VMware-Engine

Technical specification for Google Cloud VMware Engine:

VMware Cloud Foundation in Google Cloud is built on isolated, single-tenancy bare-metal infrastructure. All-flash NVMe storage in a hyper-converged setup provides the speed and performance required for the most demanding workloads, like Oracle, VDI, Microsoft Exchange, and SQL. Data is encrypted at rest using vSAN, with support for customer-managed keys. Google Cloud Storage or third-party solutions can be leveraged for lower-cost and secondary storage tiers. The standard node size is Google's ve1-standard-72 with the following specifications:

  • CPU: Intel Xeon Gold 6240 (Skylake) 2.6 GHz (3.9 GHz Turbo) x2, 36 cores/72 hyper-threads
  • Memory: 768 GB
  • Data: 19.2 TB (6 x 3.2 TB NVMe)
  • Cache: 3.2 TB (2 x 1.6 TB NVMe)
  • Network: 100 Gbps throughput (4 x Mellanox ConnectX-4 Lx Dual Port 25 GbE)

The minimum configuration is 3 hosts, up to 16 in a cluster, with a 64 host maximum per private cloud (soft limit) and any number of private clouds. A private cloud can be deployed in around 30 minutes while adding hosts to an existing cloud can be done in 15 minutes. Hosts can be purchased as a 1 or 3-year commitment or using on-demand per-hour pricing with all infrastructure costs and associated licenses included.
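
Putting the node specification and cluster limits together, a rough sizing sketch (raw totals only, before vSAN policy overhead, deduplication, or slack space is taken into account) might look like this:

    # Per-node figures from the ve1-standard-72 specification above.
    NODE = {"cores": 36, "threads": 72, "memory_gb": 768,
            "capacity_tb": 19.2, "cache_tb": 3.2}

    def cluster_totals(nodes: int) -> dict:
        """Raw cluster totals; GCVE clusters support 3 to 16 nodes."""
        if not 3 <= nodes <= 16:
            raise ValueError("GCVE clusters support 3 to 16 nodes")
        return {key: value * nodes for key, value in NODE.items()}

    for size in (3, 8, 16):
        totals = cluster_totals(size)
        print(f"{size:>2} nodes: {totals['cores']} cores / {totals['threads']} threads, "
              f"{totals['memory_gb']} GB RAM, {totals['capacity_tb']:.1f} TB raw capacity")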

VMware administrators can use Storage Policy-Based Management (SPBM) to set policies defining RAID or protection configuration, and IOPS-based performance, using the vCenter Server interface. Storage policies can be applied to many objects or as granularly as an individual VMDK file. GCVE enables a bring-your-own Key Management Service (KMS) model, allowing the customer to maintain and manage vSAN encryption keys.
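
The protection level chosen in a storage policy directly affects how much raw vSAN capacity a workload consumes. As a rough illustration using the standard vSAN multipliers (ignoring checksum and metadata overhead):

    # Approximate raw-capacity multipliers for common vSAN storage policies.
    POLICY_MULTIPLIER = {
        "RAID-1 (FTT=1, mirroring)":      2.0,    # two full copies
        "RAID-5 (FTT=1, erasure coding)": 4 / 3,  # 3 data + 1 parity
        "RAID-6 (FTT=2, erasure coding)": 1.5,    # 4 data + 2 parity
    }

    usable_needed_tb = 40  # hypothetical usable capacity required by a workload

    for policy, multiplier in POLICY_MULTIPLIER.items():
        raw = usable_needed_tb * multiplier
        print(f"{policy}: ~{raw:.1f} TB raw vSAN capacity for {usable_needed_tb} TB usable")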

Access to the NSX-T Manager means customers can make use of the full suite of L2-L7 services available, including load balancing, perimeter and distributed firewalls, and full control over private networks.

With Google's backbone 100 Gbps network taking care of GCVE private cloud to VPC connectivity (in the same region or between regions, without a VPN), there are a couple of options for on-premises connectivity. Hybrid cloud connectivity is achieved using either a private interconnect or a secure VPN over the Internet. The interconnect is a low-latency, typically high-bandwidth connection, available in 10 Gbps or 100 Gbps, or from 50 Mbps to 10 Gbps through a partner.
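
When weighing up a VPN against an interconnect for a migration, a back-of-the-envelope transfer-time estimate is useful. A simple sketch, where the 70% efficiency factor is an assumption to account for protocol overhead and real-world throughput:

    def transfer_hours(data_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
        """Estimate hours to move data_tb terabytes over a link_gbps link."""
        data_bits = data_tb * 8 * 1000**4           # TB -> bits (decimal units)
        effective_bps = link_gbps * 1000**3 * efficiency
        return data_bits / effective_bps / 3600

    for label, gbps in [("1 Gbps VPN", 1), ("10 Gbps interconnect", 10),
                        ("100 Gbps interconnect", 100)]:
        print(f"50 TB over {label}: ~{transfer_hours(50, gbps):.1f} hours")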

Google sells and supports VMware Engine; the customer's contract is with Google, while the VMware Cloud Verified accreditation gives existing VMware customers peace of mind that hybrid environments are supported end to end. Google provides 24×7 support with a 99.99% SLA on the network and storage infrastructure, and 99.9% for the management components.

Example use cases for Google Cloud VMware Engine:

  • Data Centre Extension or Migration: extend data centre boundaries and scale to the cloud or additional regions quickly with guaranteed compatibility of existing workloads. Achieve true workload mobility between VMware environments for high availability and demand-based scalability. Migrate Virtual Machines to the cloud, and back if needed, without refactoring applications or even changing network settings.
  • Disaster Recovery (DR): backup and DR targets can be moved to the cloud to improve availability options and reduce total cost of ownership. By taking advantage of Google’s global infrastructure organisations can improve system availability by deploying across multiple zones or regions. Business-critical applications can be scaled on-demand, either through native services or SDDC expansion in minutes. VMware Site Recovery Manager (SRM) can automate failover for use cases where the customer data centre is the primary site, and GCVE is the DR site.
  • Global Expansion and Virtual Desktop Infrastructure (VDI): expansion of business and services into new regions without having to commission new data centres and hardware. Burst capacity in the locations needed and provide low latency, local content delivery at a global scale. This use case is highlighted further with the need for many organisations to provide remote working, often in the form of virtual desktops. VMware Horizon 7 can provide a highly available pod architecture deployment using GCVE infrastructure, with customer managed desktops.
  • Data Analytics and Innovation: access to Google’s internal big data services for querying massive data-sets in seconds, with actionable insights from serverless and machine-learning data analytics platforms. IT staff can concentrate on new projects, or improving systems and processes, whilst Google maintains upgrades, updates, and security patches for all the underlying infrastructure.
  • Hybrid Applications: high-speed, low-latency (<2 ms) access to native Google Cloud Services with Virtual Private Cloud (VPC) peering enables hybrid applications across platforms. For example, front-end web and application servers migrated from on-premises data centres to Google Cloud VMware Engine, with large databases in a dedicated VPC and millisecond response times.

Google Cloud VMware Engine provides secure, compliance-ready infrastructure; globally with ISO 27001/27017/27018, SOC 1/2/3, PCI DSS, CSA STAR, MPAA, GxP, and an Independent Security Evaluators (ISE) audit. In the UK and Europe the platform is also compliant with NCSC Cloud Security Principles, GDPR, Privacy Shield, and EU Model Contract Clauses. Because a shared security model applies, the above is caveated as compliance ready: the necessary processes and controls have been implemented on the platform side, but a customer could still implement poor security controls and governance in their own environment.

If you're interested in learning more about Google Cloud VMware Engine take a look at the useful links below, along with the GCVE documentation page and the GCVE product page, which lists features, reference architecture, and pricing. VMware Engine is also listed in the Google Cloud pricing calculator.

vSphere 7 with Kubernetes and Tanzu on VMware Cloud Foundation

vSphere 7 Cloud Infrastructure for Modern Applications Part 1

vSphere 7 Cloud Infrastructure for Modern Applications Part 2

VMware Cloud Foundation 4

VMware Cloud Foundation (VCF) 4 delivers vSphere with Kubernetes at cloud scale, bringing together developers and IT operations by providing a full-stack Hyper-Converged Infrastructure (HCI) for Virtual Machines (VMs) and containers. By utilising software-defined infrastructure for compute, storage, network, and management, IT operations can provide agility, flexibility, and security for modern applications. The automated provisioning and maintenance of Kubernetes clusters through vCenter Server means that developers can rapidly deploy new applications or micro-services with cloud agility, scale, and simplicity. At the same time, IT operations continue supporting the modern application framework by leveraging existing vSphere functionality and tooling.

VMware has always been an effective abstraction provider, and VCF 4 with the Tanzu Services View takes infrastructure abstraction to the next level. Within vSphere, underlying infrastructure components are abstracted into a set of services exposed through APIs, allowing the developer to look down from the application layer and consume the hybrid infrastructure services. Meanwhile, IT operations can build out policies and manage pods alongside VMs at scale using vSphere.

VCFwKubernetes

vSphere 7 with Kubernetes

Now and over the next 5 years, we will see a shift in how applications are built and run. In 2019 Line of Business (LOB) IT, or shadow IT, spend exceeded Infrastructure and Operations IT spend for the first time*. Modern applications are distributed systems built across serverless functions or managed services, containers, and Virtual Machines (VMs), replacing typical monolithic VM application and database deployments. The VMware portfolio is expanding to meet the needs of customers building modern applications, with a portfolio of services from Pivotal, Wavefront, CloudHealth, Bitnami, Heptio, Bitfusion, and more. In the container space, VMware is strongly positioned to address modern application challenges for developers, business leaders, and infrastructure administrators.

Launched on March 10 2020, with expected April 2020 availability, vSphere 7 with Kubernetes is powering VMware Cloud Foundation 4. vSphere 7 with Kubernetes integration, the first product including capabilities announced as part of Project Pacific, provides real-time access to infrastructure in the Software-Defined Data Centre (SDDC) through familiar Kubernetes APIs, delivering security and performance benefits even over bare-metal hardware. The Kubernetes integration enables the full SDDC stack to utilise the Hybrid Infrastructure Services from ESXi, vSAN, and NSX-T, which provide the Storage Service, Registry Service, Network Service, and Container Service. Developers do not need to translate applications to infrastructure, instead leveraging existing APIs to provision micro-services, while infrastructure administrators use existing vCenter Server tooling to support Kubernetes workloads alongside Virtual Machines.
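
To illustrate the developer experience, workloads are provisioned through standard Kubernetes APIs and tooling against a Supervisor Namespace, just as they would be against any other cluster. A minimal sketch using the official Kubernetes Python client (the namespace and image names are hypothetical):

    from kubernetes import client, config  # pip install kubernetes

    # Assumes the kubeconfig has already been pointed at a Supervisor
    # Namespace (or Tanzu Kubernetes cluster) by the platform team.
    config.load_kube_config()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="demo-web", labels={"app": "demo-web"}),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "demo-web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "demo-web"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(
                        name="web", image="nginx:1.19",
                        ports=[client.V1ContainerPort(container_port=80)])]
                ),
            ),
        ),
    )

    # vSphere schedules the resulting pods as vSphere Pods (or as nodes in a
    # Tanzu Kubernetes cluster); the developer only touches the Kubernetes API.
    apps = client.AppsV1Api()
    apps.create_namespaced_deployment(namespace="demo-namespace", body=deployment)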

You can read more about the workings of vSphere with Kubernetes in this Project Pacific Technical Overview for New Users by Ellen Mei. Also, Initial Placement of a vSphere Pod by Frank Denneman is another useful and recent article detailing the process behind the ESXi container runtime.

VMware Application Modernisation Portfolio with Tanzu

VMware Cloud Foundation Services is the first manifestation of Project Pacific (now vSphere with Kubernetes), providing consistent services, managed by IT operations, to developers, ultimately anywhere the VMware Cloud platform is running.

In the past, VMware excelled at taking many Virtual Machines and running them across multiple hypervisors in a cluster; the challenge then was consolidating numerous physical servers for cost-saving, management, and efficiency gains. Today, application deployments are groups of VMs, which presents a new challenge: consolidating distributed applications across multiple clouds. Project Pacific brings Kubernetes and Tanzu to the vSphere environment, making it operationally easier to get upstream Kubernetes running, but also to effortlessly in-place upgrade and maintain Kubernetes clusters. This functionality accelerates vSphere into a much more modern, API-driven, self-service, fast-provisioning interface backed by optimised ESXi for all workloads.

Tanzu Kubernetes Grid (TKG) is a Kubernetes runtime built into VMware Cloud Foundation Services, allowing installation and maintenance of multi-cluster Kubernetes environments across different infrastructure. Tanzu Kubernetes Grid also works for operational consistency across Amazon Web Services (AWS), Azure, and Google Compute Engine (GCE). This is different to public cloud-managed Kubernetes services such as EKS, AKS, GKE, etc., as it integrates natively into the existing infrastructure, meeting the needs of organisations who require abstracted logging, events, governance policies, and admission policies. This capability delivers not just Kubernetes but a set of management services to provision, deploy, upgrade, and maintain Kubernetes clusters. By having this granular level of control over the underlying VMs or cloud environment, customers can implement, monitor, or enforce their own security policies and governance.

Tanzu Mission Control provides operator consistency for deployment, configuration, security, and policy enforcement for Kubernetes across multiple clouds, simplifying the management of Kubernetes at scale. Tanzu Mission Control is a Software as a Service (SaaS) control plane offering allowing VMware administrators to standardise Identity Access Management (IAM), configuration and policy control, backup and recovery, ingress control, cost accounting, and more. The multi-cluster control plane supports the propagation of Kubernetes across vSphere, Pivotal Container Services (PKS), AWS, Azure, and Google Cloud Platform (GCP), all from a single point of control.

VMware have announced the availability of Tanzu Kubernetes Grid, Tanzu Mission Control, and Tanzu Application Catalog (an open-source software catalogue powered by Bitnami), providing a unified platform to build, run, and manage modern applications.

VMware Cloud Foundation Additional Updates

VMware Cloud Foundation (VCF) 4 is expected in April 2020 and includes fully updated software building blocks for the private cloud, including vCenter, ESXi, and vSAN 7.0, plus the addition of NSX-T.

VCF with NSX-T is made up of workload domain constructs, and by default every architecture starts with a management domain, which hosts vCenter, private NSX managers, and the edge cluster. There are a couple of changes in VCF 4 that reduce the footprint of the management domain: NSX-T is being fully utilised for the first time, the NSX Edge cluster can be deployed at day X, NSX Manager and controllers are now integrated, and the Platform Services Controller (PSC) now uses the embedded model with vCenter. Additionally, we have the capability to use Application Virtual Networks (AVN) with BGP peering on deployment, or again as a day X action. Another side note is that Log Insight has been changed from a default deployment requirement to an optional day X action.

Workload domains are built out to serve vSphere with Kubernetes and expose the network services for developers to use. Workload domains can be built on new or existing NSX-T managers, offering the choice of a one-to-one or one-to-many relationship for NSX-T instances with VCF 4. This provides customers with the option of separating out NSX-T instances, while simultaneously protecting the management domain. Day X automation can then be used to place edge deployments in the appropriate cluster:

NSXWorkloadDomains

SDDC Manager and Lifecycle Manager (LCM) provide automated provisioning and upgrades. Lifecycle Manager enhances ease of upgrade and patching by providing automated lifecycle management, with update notifications, review and schedule options, and monitoring and reporting. Also, LCM can manage all inter-dependencies of versioning at a cluster level, from vSphere right through to the Kubernetes runtime. SDDC Manager orchestrates and automates the provisioning of vSphere with Kubernetes workload domains, and crucially enables the LCM functionality for maintaining upgrades across the entire software stack, eliminating typical day 2 challenges for developers.

SDDCManager

Multi-Instance Management: multiple VCF instances can now be federated to provide a global view of workload domains without the installation of any additional components. Administrators can click-through to any VCF data centre to centrally view patching, upgrades, maintenance, and remediation operations.

New Security Enhancements: native Workspace ONE Identity Access integration for vRealize suite and NSX-T using AD or LDAP identity sources. Admin and Operator roles for API and UI, with the operator role providing all privileges minus password management, user management, and backup and restore. Token-based authentication is also now enforced across all APIs.

You can find out more about the VMware Cloud Foundation 4 update at What’s New in VMware Cloud Foundation 4  and Delivering Kubernetes at Cloud Scale with VMware Cloud Foundation 4.

VMware vRealize Cloud Management Integration

The vRealize Cloud Management product suite has been comprehensively updated to include vSphere 7 with Kubernetes support. vRealize Operations (vROps) 8.1 is now available for the first time as a SaaS (Software as a Service) offering with an enhanced feature-set. Some of the key new functionality enables self-driving operations across multi-cloud, hybrid-cloud and data centre environments.

vROps 8.1 and Cloud now fully support integrations with GCP, native VMware Cloud on AWS as a cloud account (including additional vSAN, NSX-T, and Cloud Services Portal information with billing), an enhanced portfolio of AWS objects, CloudHealth, and vSphere Kubernetes constructs. The latter crucially enables Kubernetes cluster onboarding, discovery, continuous performance optimisation, capacity and cost optimisation, monitoring and troubleshooting, and configuration and compliance management. Furthermore, new dashboards and topology views of workload management can be leveraged to display all Kubernetes objects visible from vCenter, for a complete end-to-end view of the infrastructure.

K8s_Dashboard

vRealize Operations 8.1 and Cloud integration for vSphere with Kubernetes:

  • Automatically discover new constructs of supervisor cluster, namespaces, pods, and Tanzu Kubernetes clusters.
  • New dashboards and summary pages for performance, capacity, utilisation, and configuration management of Kubernetes constructs, with full topology views from Kubernetes substrate to physical infrastructure.
  • Capacity forecasting detects utilisation and potential bottlenecks for supervisor clusters and pods, and shows time remaining projections for CPU, memory, and disk space (a simplified illustration of this type of projection follows this list).
  • Out of the box reporting functionality for workload management, inventory, configuration, and capacity, with configurable alerting to operationalise the workload platform and provide complete visibility and control.
  • Container management pack extends visibility to monitor and visualise multiple Kubernetes clusters, map and correlate virtual infrastructure to Kubernetes infrastructure, set up alerts and monitoring, and provide support for PKS.
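
As a simplified illustration of how a time remaining projection can work, the sketch below fits a straight line to recent utilisation samples and extrapolates to a threshold. This is illustrative only, not the actual vROps forecasting algorithm, which is considerably more sophisticated.

    # Hypothetical daily CPU utilisation samples for a supervisor cluster (%).
    samples = [52, 53, 55, 54, 57, 58, 60, 61, 63, 64]
    capacity_limit = 90  # alerting threshold (%)

    # Ordinary least-squares fit of utilisation against day index.
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
             / sum((x - mean_x) ** 2 for x in xs))

    if slope <= 0:
        print("Utilisation is flat or decreasing; no exhaustion projected.")
    else:
        days_remaining = (capacity_limit - samples[-1]) / slope
        print(f"Growing ~{slope:.2f}%/day; ~{days_remaining:.0f} days until "
              f"{capacity_limit}% is reached")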

You can find out more about what’s new in the vRealize suite at Delivering Modern Infrastructure for Modern Apps with vRealize Suite.

*LOB spend 51% to infrastructure operations spend 49% – source IDC WW Semiannual IT Spending Guide: Line of Business, 09 April 2018 (HW, SW and services; excludes Telecom)