VMware Project Arctic Graduates to vSphere+

Introduction

Today, June 28, 2022, VMware announced vSphere+ and vSAN+: subscription-based offerings of their enterprise compute and storage virtualisation solutions.

First mooted during VMworld 2021, Project Arctic promised to deliver a cloud operating model to customers' data centre and edge locations. At a high level, that means hands-off maintenance, proactive monitoring, pay-as-you-grow consumption, subscription billing, and a shift to opex funding.

Furthermore, vSphere subscriptions allow VMware to integrate products and services as features. VMware Cross-Cloud Services will enable on-demand scale out capacity and disaster recovery capabilities. We know from the general industry shift towards Software-as-a-Service (SaaS) that the frequency of development cycles and feature delivery increases, resulting in faster and greater value to the end customer.

The release of vSphere+ and vSAN+ is VMware’s first iteration of the Project Arctic feature set, with more capabilities and products to be added. In this release, customers can expect to benefit from simplified operations, faster time to value, and future investment in IT strategy. Find out more at the vSphere+ microsite.

What is vSphere+?

The launch of vSphere+ and vSAN+ provides customers with a subscription to compute and storage virtualisation solutions. It is aimed at organisations wanting to retain an on-premises footprint, either data centre or edge, with a consistent operating experience to their cloud infrastructure.

This means it is easy for brownfield environments to adopt, improving their operational processes and security posture. vSphere+ is more than just a subscription to an existing product; it also offers administrators the following benefits:

  • Aggregate vCenter Servers and global infrastructure into a single view
  • VMware assisted lifecycle management, initially for vCenter Server
  • Significantly lower maintenance touch, and reduced down time with vCenter Server Reduced Downtime Upgrades
  • Faster access to new features, fixes, and security patches
  • Check for configuration drift, security issues, consistent errors, and update status across all vCenters and clusters
  • Enable access to the embedded Tanzu services to build, run, and manage modern container-based applications
  • Global monitoring of VMware environments, see examples in this vSphere+ Tech Zone blog
  • Deploy virtual machines to multiple platforms from anywhere with the new cloud admin interface
  • Co-term licensing and support across VMware environments with flexible scaling options
  • Removes the need for individual vCenter Server licenses (see the licensing section below)

vSphere+ introduces a new cloud admin portal: an additional SaaS control plane that interacts with a gateway server on-premises. The sections below go into more technical detail, but the vCenter Servers do not talk directly out to the Internet, and no workloads or components are moved to the cloud as part of this operating model.

The term cloud-like operating model relates to features like the one-click vCenter updates, one-click Kubernetes cluster enablement (a cloud native container orchestration tool), and flexible subscription, or operating expenditure, nature of the service.

Many customers want the benefits of cloud, namely flexible consumption, minimal maintenance, built-in resilience, developer agility, and anywhere management. They may also need to retain some on-premises infrastructure, for data privacy, security, or sovereignty reasons, and for high-performance or low-latency requirements. The introduction of vSphere+ aims to provide these cloud benefits in the remaining data centre or edge locations.

You can read more about the admin services and developer services available through the new cloud portal, as well as the full range of benefits introduced by vSphere+, in the blog VMware vSphere+ Introducing The Multi-Cloud Workload Platform.

vSphere+ Benefits

How Does vSphere+ Work?

Beyond the licensing information in the section below, there are some further technical considerations and clarifications.

Since the vSphere infrastructure on-premises is already deployed, there is no impact to those existing vSphere, vCenter, or vSAN environments. The vCenter Server needs to be running a minimum of version 7.0.3, so there may be a vCenter upgrade, but there is no vSphere/ESXi update required. vCenter 7.0.3 is backwards compatible with vSphere 6.5 onwards, although note that vSphere 6.x reaches end of support on 15 October 2022.
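
The version prerequisites above can be expressed as a quick sketch. The thresholds (vCenter 7.0.3 minimum, ESXi 6.5 onwards) come from this post; the helper names are illustrative and not any VMware API.

```python
# Sketch of the vSphere+ onboarding version checks described above.
# Thresholds are from the post; these helpers are illustrative only.

def parse_version(v: str) -> tuple:
    """Turn '7.0.3' into (7, 0, 3) for simple tuple comparison."""
    return tuple(int(part) for part in v.split("."))

def vcenter_needs_upgrade(vcenter_version: str) -> bool:
    """vSphere+ requires vCenter Server 7.0.3 or later."""
    return parse_version(vcenter_version) < (7, 0, 3)

def esxi_supported(esxi_version: str) -> bool:
    """vCenter 7.0.3 manages ESXi 6.5 onwards (6.x support ends Oct 2022)."""
    return parse_version(esxi_version) >= (6, 5)

print(vcenter_needs_upgrade("7.0.2"))  # True: upgrade vCenter first
print(esxi_supported("6.7.0"))         # True: hosts need no immediate update
```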

A Cloud Gateway appliance is used to connect the on-premises vSphere estate with the VMware Cloud control plane. The appliance is a standard OVA; here is some additional information:

  • The appliance needs 8 CPU, ~24 GB RAM, 190 GB disk, and a secondary IP address
  • The appliance does not need backing up or HA deployment
  • The appliance is stateless and can easily be deleted and re-deployed in the event of any issues
  • There is an admin interface for setting minimal configuration such as Internet proxy
  • Lifecycle management of the appliance is automated from the cloud control plane
  • There is a maximum latency requirement of 100ms from the vCenter to the gateway appliance, and from the gateway to the cloud portal
  • The gateway appliance has limited access to the customer environment
  • Communication between the gateway appliance and cloud portal is fully encrypted and there is no VPN requirement
  • The gateway appliance needs outbound HTTPS connectivity only, and there are no network charges
  • The gateway appliance also uploads logs to VMware support, accelerating troubleshooting during incidents
  • The gateway appliance is the point of authentication, and no usernames and passwords are transmitted to the cloud
  • Data is not shared with third parties or used for marketing purposes
  • You can have multiple gateway appliances, with up to 4 vCenter Servers per gateway (note that there is no change in vCenter and vSphere configuration maximums)

vSphere+ Cloud Gateway Appliance High Level Architecture
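
The 100ms latency requirement on both hops (vCenter to gateway, gateway to cloud portal) can be sanity-checked before deployment. The measurement step is left out here; in practice you might time a TCP connect to each endpoint. The hop names and figures below are made up for illustration.

```python
# Evaluate measured round-trip times against the 100 ms requirement
# for both hops (vCenter -> gateway, gateway -> cloud portal).

MAX_LATENCY_MS = 100

def check_latency(hops: dict) -> list:
    """Return the names of any hops exceeding the 100 ms limit."""
    return [name for name, ms in hops.items() if ms > MAX_LATENCY_MS]

# Example with made-up measurements:
measured = {"vcenter->gateway": 12.0, "gateway->portal": 140.0}
print(check_latency(measured))  # ['gateway->portal']
```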

Subscription services for vSphere+ and vSAN+ can be activated from the cloud portal. Host billing and licensing is also managed here, with no need to install license keys. Outside of vCenter lifecycle management, and subtle differences like the removal of license keys, there is no day-to-day change in how you manage and operate the vSphere environment.

If the gateway appliance or Internet connection is lost, the vSphere environment continues to work as normal. If the gateway has not connected to the cloud control plane after 24 hours, vSphere administrators will see advisory messages on the login page bringing this to their attention.

For vCenter updates, VMware do not apply updates automatically without informing the customer. The customer has complete control over the planning and scheduling of updates across vCenter Servers. When a new update is available a notification is generated, and the customer chooses when to have the update applied. The inventory will display a traffic light status for vCenter instances depending on how many versions behind the latest release they are.
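
The traffic light idea can be sketched as a simple classifier. The thresholds below are assumptions for illustration; VMware does not publish them in this post.

```python
# Classify a vCenter instance by how many versions behind the latest
# release it is. Thresholds are assumed, not documented by VMware.

def update_status(versions_behind: int) -> str:
    if versions_behind == 0:
        return "green"   # up to date
    if versions_behind <= 2:
        return "amber"   # update available, schedule soon
    return "red"         # significantly behind, prioritise

print([update_status(n) for n in (0, 1, 3)])  # ['green', 'amber', 'red']
```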

How Does vSphere+ Licensing Work?

Previously, virtualisation customers would shell out a large upfront cost for perpetual licenses they would own outright. To deliver full value the perpetual license was supplemented with SnS (Support and Subscription), adding technical support, and access to the latest updates and security patches.

With perpetual licenses and SnS renewals, the vCenter Server license (per instance) and vSphere license (per CPU) were purchased separately. The vCenter Server provides overarching management capabilities, including enterprise features like resource balancing and High Availability (HA). The hypervisor vSphere, or ESXi, is installed on physical servers and facilitates compute virtualisation.

From July 2022, customers can upgrade to subscription-based offerings of vSphere+ and vSAN+ rather than the traditional SnS renewal. You may have seen a similar early access program branded vSphere Advantage. Both vSphere Advantage and Project Arctic are officially named vSphere+ at launch.

The vSphere+ license will include vSphere (for the core count stipulated), vCenter Server (for unlimited instances), the new vSphere admin service (SaaS-based), the Tanzu Standard runtime, and Tanzu Mission Control Essentials. Tanzu services enable build, run, and manage for modern applications through the use of containers and Kubernetes orchestration, directly within the hypervisor.

The version of vSphere included with vSphere+ has feature parity with vSphere Enterprise Plus, and production support. You can view the full vSphere Enterprise Plus feature set here.

Once a vCenter Server is registered with the cloud control plane, all connected hosts and their associated CPUs are counted as licensed physical cores. Note that 16 cores make up 1 CPU, a change from the existing perpetual model where 1 CPU is valid for up to 32 cores. As physical servers are added or removed, the corresponding core count is increased or decreased.
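
The core-counting rule above can be illustrated with a short sketch, on the assumption that each CPU is counted as a minimum of 16 cores (1 CPU = 16 cores in the new model).

```python
# Sketch of vSphere+ core counting: subscription capacity is counted in
# physical cores, with each CPU assumed to count as a minimum of 16 cores.

CORES_PER_CPU_MINIMUM = 16

def licensed_cores(hosts: list) -> int:
    """hosts: list of (cpu_count, cores_per_cpu) tuples."""
    return sum(
        cpus * max(cores, CORES_PER_CPU_MINIMUM)
        for cpus, cores in hosts
    )

# Two dual-socket hosts: one with 8-core CPUs (rounded up to 16 each),
# one with 24-core CPUs (counted as-is).
print(licensed_cores([(2, 8), (2, 24)]))  # 80
```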

Core commits can be made for 1, 3, or 5 year periods, with additional cores billed as overage (or the commit level increased). Any overage is calculated per hour and billed in arrears at the end of the month. A customer can run a combination of vSphere+ and perpetual vSphere, however they need to be registered with different vCenter Servers.
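
The commit-plus-overage model can be sketched as follows: overage is calculated per hour and settled in arrears at month end. The usage figures are invented and no VMware pricing is implied.

```python
# Illustration of hourly overage against a committed core level,
# billed in arrears at the end of the month. Figures are made up.

def monthly_overage_cores(hourly_usage: list, commit: int) -> int:
    """Sum of core-hours above the committed level for the month."""
    return sum(max(0, used - commit) for used in hourly_usage)

# Three sample hours against a 64-core commit:
usage = [60, 70, 80]
print(monthly_overage_cores(usage, commit=64))  # 22 core-hours (0 + 6 + 16)
```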

How Does vSAN+ Licensing Work?

The vSAN+ license is available as an add-on to vSphere+; it cannot be purchased separately. As the license is an add-on, it automatically co-terms with the vSphere+ duration. Commit and overage terms are the same as vSphere+.

Using vSAN+, customers benefit from centralised management, global inventory monitoring, and global alert status from the cloud console. Existing vSAN datastores are integrated into the cloud portal virtual machine provisioning workflow, to allow deployment of workloads to a vSAN cluster from anywhere. You can read more in the Introducing vSAN+ blog.

The vSAN+ license has feature parity with vSAN Enterprise; you can view the full vSAN feature list here. At initial release, lifecycle management only covers vCenter Server. It is likely that vSphere/vSAN lifecycle management will also be added to Project Arctic in the future.

VMware Cloud on AWS Outposts Overview

Introduction

Managed and as-a-service models are a growing trend across infrastructure consumers. Customers in general want ease and consistency within both IT and finance, for example opting to shift towards OpEx funding models.

For large or enterprise organisations with significant investments in existing technologies, processes, and skills, refactoring everything into cloud native services can be complex and expensive. For these types of environments the strategy has sharpened from Cloud-First to Cloud-Smart. A Cloud-Smart approach enables customers to transition to the cloud quickly where it makes sense to do so, without uprooting existing live services, or workloads and data that do not have a natural progression to traditional cloud.

In addition to the operational complexities of rearchitecting services, many industries have strict regulatory and compliance rules that must be adhered to. Customers may have specific security standards or customised policies requiring sensitive data to be located on-premises, under their own physical control. Applications may also have low latency requirements or the need to be located in close proximity to data processing or back end systems. This is where VMware Local Cloud as a Service (LCaaS) can help combine the key benefits from both public cloud and on-premises environments.

What is VMware Cloud on AWS Outposts?

VMware Cloud on AWS Outposts is a jointly engineered solution, bringing AWS hardware and the VMware Software Defined Data Centre (SDDC) to the customer premises. The relationship with AWS is VMware's longest standing hyperscaler partnership, with VMware Cloud on AWS the most mature of the multi-cloud offerings from VMware, having been available since August 2017. In October 2021, at VMworld, VMware announced general availability of VMware Cloud on AWS Outposts.

VMware Cloud on AWS Outposts is a fully managed service, as if it were in an AWS location, with consistent APIs. It is built on the same AWS-designed bare metal infrastructure using the AWS Nitro System, assembled into a dedicated rack, and then installed in the customer site ready to be plumbed into power and networking. The term Outpost is a logical construct that is used to pool capacity from 1 or more racks of servers.

The VMware SDDC overlay, and hardware underlay, comprises:

  • VMware vSphere and vCenter for compute virtualisation and management
  • VMware vSAN for storage virtualisation
  • VMware NSX-T for network virtualisation
  • VMware HCX for live migration of virtual machines with stretched Layer 2 capability
  • 3-8 AWS managed dedicated Nitro-based i3en.metal EC2 instances with local SSD storage
  • Non-chargeable standby node in each rack for service continuity
  • Fully assembled standard 42U rack
  • Redundant Top of Rack (ToR) data plane switches
  • Redundant power conversion unit and DC distribution system (with support for redundant power feeds)

At the time of writing the i3en.metal is the only node type available with VMware Cloud on AWS Outposts. The node specification is as follows:

  • 48 physical CPU cores, with hyperthreading enabled delivering 96 logical cores
  • 768 GiB RAM
  • 45.84 TiB (50 TB) raw capacity per host, delivering up to 40.35 TiB of usable storage capacity per host depending on RAID and FTT configuration
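
How RAID and FTT policy affect usable capacity can be sketched roughly. The overhead multipliers below are the commonly quoted vSAN space-efficiency figures, and the sketch ignores slack space, deduplication, and checksum overheads, so real usable figures (such as the 40.35 TiB above) will differ.

```python
# Rough sketch of usable vSAN capacity per RAID/FTT policy. Multipliers
# are commonly quoted vSAN figures and ignore slack space and other
# overheads, so real-world usable capacity will be lower.

RAW_TIB_PER_HOST = 45.84  # i3en.metal raw capacity from the post

OVERHEAD = {
    ("RAID-1", 1): 2.0,    # mirroring: 2x capacity consumed per TiB written
    ("RAID-5", 1): 4 / 3,  # erasure coding, 3+1 (needs at least 4 hosts)
    ("RAID-6", 2): 1.5,    # erasure coding, 4+2 (needs at least 6 hosts)
}

def usable_tib(hosts: int, raid: str, ftt: int) -> float:
    return hosts * RAW_TIB_PER_HOST / OVERHEAD[(raid, ftt)]

print(round(usable_tib(3, "RAID-1", 1), 2))  # 68.76 TiB across 3 hosts
```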

Both scale-out and multi-rack capabilities are currently not available, but are expected. It is also expected that the maximum node count will increase over time, check with your VMware or AWS teams for the most up to date information.

Once the rack is installed on-site, the customer is responsible for power, connectivity into the LAN, and environmental prerequisites such as temperature, humidity, and airflow. The customer is also responsible for the physical security of the Outpost location, however each rack has a lockable door and tamper detection features. Each server is protected by a removable and destroyable Nitro hardware security key. Data on the Outpost is encrypted both at-rest, and in-transit between nodes in the Outpost and back to the AWS region.

Inside the rack, all the hardware is managed and maintained by AWS and VMware; this includes things like firmware updates and failure replacements. VMware are the single support contact for the service regardless of whether the issue is hardware or software related. Additionally, VMware take on the lifecycle management of the full SDDC stack. Customers can run virtual machines using familiar tooling without having to worry about vSphere, vSAN, and NSX upgrades or security patches. Included in the 'per node' cost is all hardware within the rack, the VMware SDDC licensing, and the managed service and support.

Existing vCenter environments running vSphere 6.5 or later can be connected in Hybrid Linked Mode for ease of management. Unfortunately for consumers of Microsoft licensing, such as Windows and SQL, Outposts are still treated as AWS cloud infrastructure (in other words not customer on-premises).

Why VMware Cloud on AWS Outposts?

VMware Cloud on AWS Outposts provides a standardised platform with built-in availability and resiliency, continuous lifecycle management, proactive monitoring, and enhanced security. VMware Cloud on AWS delivers a developer ready infrastructure that can now be stood up in both AWS and customer locations in a matter of weeks. Using VMware Cloud on AWS, virtual machines can be moved bi-directionally across environments without the need for application refactoring or conversion.

The initial use case for VMware Cloud on AWS Outposts is existing VMware or VMware Cloud on AWS customers with workloads that must remain on-premises. This could be for regulatory and compliance reasons, or app/data proximity and latency requirements. As features and configurations start to scale, further use cases will no doubt become more prominent.

You can also use other AWS services with Outposts, however you have to decide on a per-rack basis whether that rack runs VMware Cloud on AWS or native AWS services. Each rack deployment is dedicated to one or the other.

VMware Cloud on AWS Outposts Network Connectivity

VMware Cloud on AWS Outposts requires a connection back to a parent VMware Cloud on AWS supported region, or more specifically an availability zone. Conceptually, you can think of the physical VMware Cloud on AWS Outposts installation as an extension of that availability zone. The connection back to AWS is used for the VMware Cloud control plane, also known as the service link.

The service link needs to be a minimum of 1Gbps with a maximum 150ms latency, either using a Direct Connect, or over the public internet using a VPN. Public Amazon Elastic IPs are used for the service link endpoint. Although the VMware Cloud on AWS Outposts service is not designed to operate in environments with limited or no connectivity, in the event of a service link outage the local SDDC will continue functioning as normal. This includes vCenter access and VM operations. A service link outage will prevent monitoring and access to configurations or other functionality from the VMware Cloud portal.

There is no charge for data transfer from VMware Cloud on AWS Outposts back to the connected region. Data transfer from the parent availability zone to the VMware Cloud on AWS Outposts environment will incur the standard AWS inter-AZ VPC data transfer charges.

Customers can use the connected VPC in the customer managed AWS account to access native AWS services in the cloud, either using the Elastic Network Interface (ENI) or VMware Transit Connect.

The Local Gateway (LGW) is an Outposts-specific logical construct used to route traffic to and from the existing on-premises network. This traffic stays within the local site allowing for optimised traffic flow and low latency communication. There is no data transfer cost for data traversing the LGW, either out to the internet or to your local network.

For more information on network connectivity and VMware Cloud on AWS Outposts in general, take a look at the AWS re:Invent 2021 session – A technical deep dive on VMware Cloud on AWS Outposts.

VMware Cloud on AWS Outposts LGW example

Getting Started with VMware Cloud on AWS Outposts

You can view a demo of the steps in the VMware Cloud on AWS Outposts: Order Flow video. At a high level, the process is as follows:

  • Extensive workshops are carried out between VMware and/or AWS and the customer
  • If the customer is a new VMware Cloud customer then a new org is created with a unique org ID
    • Customer pre-req: a VMware Cloud account and org is required
  • The customer receives an invite to join the VMware Cloud on AWS Outposts service through email
  • The customer places an order via the VMware Cloud console
    • Customer pre-req: customer AWS account with VPC and dedicated subnet, if using a private VIF for Direct Connect, then the VIF should already be created in the customer AWS account
    • Customer pre-req: knowledge of the facility, power, and network setup*
    • Customer pre-req: knowledge of desired instance count and configuration
  • The customer receives and responds to the request to submit logical networking information
    • This information will be gathered during the customer workshop; the service link requires a dedicated VLAN and /26 subnet, the SDDC management network requires a dedicated /23 minimum, and an additional CIDR block needs allocating for compute networks
  • AWS schedule and carry out a site survey
  • AWS builds and delivers the rack
  • Final onsite provisioning is carried out by AWS and validated by VMware
  • VMware notify the customer the environment is ready to use
  • The SDDC is provisioned through automated workflows as instructed by the customer

*full details of the facility, power, and network requirements for the local site can be found in the AWS Outposts requirements page
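
The logical networking prerequisites above can be sanity-checked with the standard library before submitting them. The prefix-length requirements come from this post; the example networks are placeholders.

```python
# Validate the subnet sizing prerequisites from the order flow:
# a dedicated /26 for the service link, and a /23 or larger for
# SDDC management. Example CIDRs are placeholders.

import ipaddress

def is_valid_service_link(cidr: str) -> bool:
    """The service link requires a dedicated /26."""
    return ipaddress.ip_network(cidr, strict=True).prefixlen == 26

def is_valid_management(cidr: str) -> bool:
    """The SDDC management network requires a /23 minimum (or larger)."""
    return ipaddress.ip_network(cidr, strict=True).prefixlen <= 23

print(is_valid_service_link("10.0.10.0/26"))  # True
print(is_valid_management("10.0.20.0/24"))    # False: smaller than a /23
```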

The VMware Cloud on AWS Outposts solution brief provides more information, and you can find an overview, pricing, and FAQ section on the VMware Cloud on AWS Outposts solution page. AWS also have their own version of the VMware Cloud on AWS Outposts FAQ page.

Another great place to get started is the VMware Cloud Tech Zone, and for AWS specifically the VMware Cloud on AWS Tech Zone.

VMware Cloud on AWS Tech Zone

How CloudHealth Optimises and Secures Your Cloud Assets

Introduction

Over the past 12 months we have seen further growth within the cloud, as many organisations scale or create new digital services in response to the coronavirus pandemic. Improved speed and agility have allowed businesses to pivot where traditional siloed infrastructure may have caused them to stall.

As the usage of cloud services expands, standardising and consolidating cloud tooling becomes important for financial management, operational governance, and security and compliance. Visibility into distributed system architectures across many accounts or subscriptions, or even multi-cloud, is another key challenge. For some customers cloud workloads are not optimised or configured to best standards, many will spend more than their anticipated budget, and others may accidentally expose data or services.

Those with an established cloud strategy may decide to implement a Cloud Centre of Excellence (CCoE), responsible for cloud operations, security, and financial management. The CCoE will navigate the security and configuration landscape of cloud assets, automating response and remediation to configuration drift or threats. As the team grows in maturity, optimisations are made continuously and automatically, in line with the key drivers of the business. This is where CloudHealth comes in.

CloudHealth by VMware is a multi-cloud SaaS solution managing more than $11B of public cloud spend for over 10,000 customers. CloudHealth accelerates business transformation in the cloud by providing a single platform solution for visibility into AWS, Microsoft Azure, Google Cloud Platform, Oracle Cloud Infrastructure, VMware Cloud on AWS, and on-premises VMware based environments. The key functionality is broken down into the 2 products we’ll look at below.

CloudHealth Multicloud Platform

CloudHealth takes data from cloud platforms, data centres, and third party tools for application, security, and configuration management. Data is ingested and aggregated using CloudHealth’s integrated data layer, which performs analysis on usage, performance, cost, and security posture. CloudHealth becomes a single source for multi-cloud management across environments, strengthening security and compliance, consolidating management, and improving collaboration between previously siloed teams of people and tools.

Data and assets can be categorised by tags or other metadata, and viewed in logical business groups known as perspectives. Perspectives provide a breakdown for cost allocation using dynamic groups such as line of business, department, cost centre, or project. The output can be used to identify trends and build dashboards and reports. This approach simplifies financial management, saves time, aids with budgeting and forecasting, and encourages accountability through accurate chargeback or showback.
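
A minimal sketch of what a perspective does is grouping cost records by a tag and totalling them for chargeback. The records and tag keys below are invented; CloudHealth performs this aggregation server-side across all connected clouds.

```python
# Group cost records by a tag key and total them, in the spirit of a
# CloudHealth perspective. Record structure and tag keys are invented.

from collections import defaultdict

def group_costs_by_tag(records: list, tag_key: str) -> dict:
    totals = defaultdict(float)
    for record in records:
        group = record.get("tags", {}).get(tag_key, "untagged")
        totals[group] += record["cost"]
    return dict(totals)

records = [
    {"cost": 120.0, "tags": {"cost_centre": "finance"}},
    {"cost": 80.0,  "tags": {"cost_centre": "finance"}},
    {"cost": 50.0,  "tags": {}},
]
print(group_costs_by_tag(records, "cost_centre"))
# {'finance': 200.0, 'untagged': 50.0}
```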

CloudHealth Cost Dashboard

Whilst visibility is great, to really have a positive impact on operations we need to know what to do with the data collected. CloudHealth presents back cost optimisation recommendations and security risks, but can also carry out remediation actions automatically.

Cost optimisation is where you can save money, using AWS as an example, based on things like: EC2 instances that are oversized or on an inefficient purchase plan; Elastic IP addresses or EBS volumes that are not attached to any resources; and snapshots that have not been deleted. In the physical on-premises world all of these issues were common as part of VM sprawl; they impacted capacity planning and resource consumption but were mostly hidden or swallowed as part of the wider infrastructure cost. As organisations shift from large capital investments to ongoing revenue and consumption-based pricing, oversized or unused resources literally convert to money going out of the door every single month.
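
One of the waste checks above, finding EBS volumes not attached to any instance, can be sketched as a pure function over describe_volumes-style records so it runs without AWS credentials; with boto3 you would feed it `ec2.describe_volumes()["Volumes"]`.

```python
# Find EBS volumes with no attachments ('available' state), one of the
# waste patterns CloudHealth surfaces. Works on describe_volumes-shaped
# records; sample data is invented.

def find_unattached_volumes(volumes: list) -> list:
    """Return IDs of volumes that are unattached and in 'available' state."""
    return [
        v["VolumeId"]
        for v in volumes
        if v.get("State") == "available" and not v.get("Attachments")
    ]

volumes = [
    {"VolumeId": "vol-1", "State": "in-use",
     "Attachments": [{"InstanceId": "i-1"}]},
    {"VolumeId": "vol-2", "State": "available", "Attachments": []},
]
print(find_unattached_volumes(volumes))  # ['vol-2']
```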

CloudHealth Health Check

Recommendations and actions are where CloudHealth carries out remediation for incorrectly configured or under-utilised resources. Policies can also be used to define desired states and ensure operational compliance. For example, an organisation may want to report on untagged resources, connected accounts, or open ports. Available actions currently appear to cover only AWS and Azure, but with support recently added for Oracle Cloud Infrastructure, and Google Cloud Platform before that, hopefully this functionality will continue to be built out.

CloudHealth Remediation Actions

At the time of writing CloudHealth is priced based on cloud spend, and can be purchased as a 1, 2, or 3 year prepaid commitment, or variable pricing based on the previous month's cloud spend. A free trial to uncover ROI in your own environment is available from CloudHealth here.

Where VMware environments are in use with vRealize Operations, the CloudHealth management pack for vRealize Operations can be installed. Bringing CloudHealth dashboards and perspectives into vROps allows IT ops teams to track on-premises infrastructure and public cloud costs from a single interface. The CloudHealth management pack for vROps can be downloaded from the VMware Marketplace, instructions are here.

CloudHealth Secure State

By default CloudHealth provides real-time information on security risk exposure, but for deep-dive visibility and remediation those who are serious about security will want to look at Secure State. CloudHealth Secure State is available with CloudHealth or standalone, and currently supports AWS, Azure, and GCP.

Dashboards within CloudHealth Secure State enable at-a-glance checks on security posture and compliance. There are over 700 built-in security rules and compliance frameworks that can be used as security guardrails, with the ability to add custom rules and frameworks on top.

As systems become distributed over multiple accounts, subscriptions, or even clouds, the dynamics of securing an organisation's assets shift significantly. Previously all services were contained within a data centre, secured first with perimeter firewalls and then with micro-segmentation. IT teams were generally in control and had visibility throughout the corporate network. Nowadays a developer or user responsible for a service can potentially open applications or data to the public, either on purpose or by accident. Cloud security guardrails form an important baseline for security posture and cloud strategy. Security guardrails are made up of critical must-have configurations in policies with auto-remediation actions attached; they help avoid mistakes or configuration drift to ultimately reduce security risk.
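
A toy guardrail in this spirit might flag security group rules open to the world on SSH. The rule structure below mirrors the AWS security-group shape, but the policy itself is illustrative, not a CloudHealth Secure State rule definition.

```python
# Toy guardrail: flag security group rules allowing 0.0.0.0/0 on port 22.
# Rule shape mirrors AWS security groups; the policy is illustrative only.

def flag_open_ssh(rules: list) -> list:
    """Return rules that allow 0.0.0.0/0 on a port range covering 22."""
    flagged = []
    for rule in rules:
        open_to_world = any(
            r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
        )
        covers_ssh = rule.get("FromPort", 0) <= 22 <= rule.get("ToPort", 0)
        if open_to_world and covers_ssh:
            flagged.append(rule)
    return flagged

rules = [
    {"FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    {"FromPort": 443, "ToPort": 443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
]
print(len(flag_open_ssh(rules)))  # 1
```

In a real guardrail the flagged rules would feed an auto-remediation action, such as replacing the open CIDR with an approved bastion range.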

CloudHealth Secure State gives further visibility into resource relationships and context, using the Explore UI. Explore enables a powerful model of multi-cloud or account architectures, with visual topology diagrams of complex environments. Cyber security analysts or operations centres can drill down into individual resources with all interoperable components and dependencies already mapped out.

CloudHealth Secure State Dashboard
CloudHealth Secure State Compliance

Featured image by Scott Webb on Unsplash