
The Evolution of vSphere: Project Pacific Graduates and Project Monterey is Launched

Project Pacific Introduction

Earlier this year VMware announced the next generation of vSphere: vSphere 7.0 with Kubernetes, an end product released directly from the Project Pacific vision. To transform vSphere into a platform for modern applications, it was rearchitected from the ground up. Initially the Kubernetes functionality was only available through VMware Cloud Foundation (VCF) 4.0, since it relied on NSX-T and vSAN as the network and storage providers.

Modern applications span Virtual Machines and containers, stateful and stateless workloads. A single platform is needed that can run all of these application types whilst removing silos, operational complications, and workload sprawl. With the introduction of vSphere 7 Update 1, the Kubernetes integration is renamed vSphere with Tanzu and is the major focus of this release. To provide developer-ready infrastructure to existing customers, vSphere with Tanzu now runs on top of vSphere with vSphere Distributed Switches (vDS), without the need for the full VCF stack.

Why vSphere is Relevant to Application Modernisation

Over the years organisations have accumulated millions of lines of custom software code, including the mission-critical systems that run their businesses. In most cases this code was built to do a job reliably, but not to change and evolve quickly. In today's market, businesses need to convert an idea into a feature or production software faster, and rapidly change services based on customer feedback. Failure to take products or functionality to market quickly enough can impact business operations and revenue.

In addition, a build-up of technical debt has left IT teams maintaining vast amounts of software and hardware, meaning refactoring applications to run in native cloud services is often slower, more complex, and more expensive than anticipated.

In contrast to legacy applications designed around the physical infrastructure, modern applications are typically deployed using Infrastructure as Code (IaC). More than ever before, VMware customers depend on applications as a primary part of their business model to stay relevant in the market.

Over 70 million workloads, and over 80% of enterprise applications, run on vSphere today. vSphere with Tanzu embeds a Kubernetes runtime in the hypervisor, allowing developers to consume infrastructure through the Kubernetes API while IT administrators manage containers as first-class workloads alongside Virtual Machines in vCenter. This makes vSphere with Tanzu the fastest way to get started with Kubernetes and modernise workloads or deploy new applications, without having to procure new products or upskill IT staff.
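As an illustration of what consuming infrastructure through the Kubernetes API looks like in practice, the minimal sketch below uses the official kubernetes Python client to create a Deployment in a vSphere Namespace on the Supervisor Cluster. The context name, namespace, and image are hypothetical placeholders, and the kubeconfig is assumed to have been produced by a kubectl vsphere login.

```python
# Minimal sketch: deploying a container through the standard
# Kubernetes API exposed by the Supervisor Cluster.
# Context, namespace, and image names are hypothetical examples.
from kubernetes import client, config

# Use the kubeconfig context created at login (assumed name).
config.load_kube_config(context="demo-namespace")

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web", namespace="demo-namespace"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.19")]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(
    namespace="demo-namespace", body=deployment
)
```

The point is that nothing here is vSphere-specific: the same manifest and client code work against any conformant Kubernetes cluster.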

What’s New with vSphere with Tanzu

vSphere with Tanzu brings Kubernetes to your existing VMware infrastructure, letting you bring your own storage, network, and load balancing solutions and get up and running with Kubernetes in around an hour. Running vSphere with Tanzu requires a small per-CPU add-on licence called Tanzu Basic. Customers must have vSphere Enterprise Plus licensing with vSphere Distributed Switches, and must have upgraded to vSphere 7 Update 1.

vSphere with Tanzu utilises any existing block or file storage to present persistent storage volumes. The existing network infrastructure can provide Kubernetes networking on top of vSphere Distributed Switches, using port groups for Kubernetes namespaces, or NSX can be used instead. Load balancing can be provided by NSX or by your own L4 load balancer. At the initial release the supported customer load balancer is HAProxy, although the open API will eventually allow any load balancer provider to be integrated. The presence of the Tanzu Kubernetes Grid (TKG) service ensures conformity with upstream Kubernetes, enabling migration of container-based applications across different Kubernetes-based platforms without refactoring.
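To make the TKG service workflow concrete, the sketch below requests a new Tanzu Kubernetes cluster by creating a TanzuKubernetesCluster custom resource with the Kubernetes Python dynamic client. The cluster name, namespace, VM class, storage class, and Kubernetes version are illustrative assumptions, not values taken from this post.

```python
# Minimal sketch: asking the TKG service for a guest cluster by
# creating a TanzuKubernetesCluster custom resource.
# All names, counts, and versions below are illustrative assumptions.
from kubernetes import config, dynamic
from kubernetes.client import api_client

dyn = dynamic.DynamicClient(
    api_client.ApiClient(configuration=config.load_kube_config())
)
tkc_api = dyn.resources.get(
    api_version="run.tanzu.vmware.com/v1alpha1",
    kind="TanzuKubernetesCluster",
)

manifest = {
    "apiVersion": "run.tanzu.vmware.com/v1alpha1",
    "kind": "TanzuKubernetesCluster",
    "metadata": {"name": "tkc-demo", "namespace": "demo-namespace"},
    "spec": {
        "distribution": {"version": "v1.18"},
        "topology": {
            "controlPlane": {"count": 1, "class": "best-effort-small",
                             "storageClass": "vsan-default-policy"},
            "workers": {"count": 3, "class": "best-effort-small",
                        "storageClass": "vsan-default-policy"},
        },
    },
}
tkc_api.create(body=manifest, namespace="demo-namespace")
```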

VMware Cloud Foundation with Tanzu is still the preferred approach for running Kubernetes at scale, since deployment and policies can be automated and centralised. VCF with Tanzu uses NSX for virtual networking and load balancing with vSAN for storage.

VMware Tanzu is now available in multiple editions:

  • Tanzu Basic: enables Kubernetes in vSphere, improves resource utilisation, and embraces a move towards container-based workloads
  • Tanzu Standard: enables Kubernetes in VCF and multi-cloud, for centralised deployments and policies across platforms

vSphere with Tanzu has a built-in 60-day evaluation; to get started, use the vSphere with Tanzu Quick Start Guide.
Tanzu Editions from VMware Tanzu Blog

Further editions, Tanzu Advanced and Tanzu Enterprise, focused on DevOps delivery of workloads on Kubernetes, automation, and refactoring applications, are expected to be made available as the product expands.

Project Monterey Introduction

At VMworld 2020, VMware announced the technical preview of Project Monterey, continuing the rearchitecture of vSphere towards fully composable infrastructure. Kit Colbert introduced Project Monterey while talking about the need for modern applications to consume more CPU cycles, and to enforce zero-trust security locally yet consistently across distributed environments. Faster network speeds and cross-VM or container network traffic all add CPU overhead. Enter the SmartNIC.

A SmartNIC (Network Interface Card) runs with an onboard DPU (Data Processing Unit) capable of offloading x86 cycles from the main server CPU. By taking care of network, security, and storage tasks like network I/O, micro-segmentation, and encryption, it frees up compute power for VMs and applications. As well as high-speed Ethernet ports, the SmartNIC features an out-of-band management port, and is able to expose or directly pass through PCI bus devices, like NVMe, to the core CPU OS. The SmartNIC is essentially a mini server inside a NIC.

Project Monterey is another fundamental shift in the VMware Cloud Foundation architecture, moving core CPU tasks to run on the SmartNIC. To do this, VMware have partnered with hardware and SmartNIC vendors such as NVIDIA, Intel, Dell, Lenovo, and HPE. The SmartNIC will run ESXi simultaneously with the main instance; the two can be managed separately or, for most customers, as a single logical instance. The SmartNIC instance of ESXi handles the network and storage tasks mentioned above, reducing the burden on the main CPU. Furthermore, this second ESXi instance creates a security air gap in the event the hypervisor is ever compromised. With the virtualisation layer separated from the application layer, applications sit in a trust domain where they cannot be impacted by things like side-channel attacks from the virtualisation layer.

In addition to ESXi, the SmartNIC is also capable of running a bare-metal Operating System (OS), which opens up the possibility of VCF managing bare-metal OSes and delivering the same network and storage services as for VMs or containers. Lifecycle management for the SmartNIC and ESXi is rolled up into a single task, so there is no extra operational overhead for VMware admins. Although there is no product release directly tied to Project Monterey, it is expected that, much like Project Pacific, functionality could materialise over the next 12-18 months.

Future composable infrastructure from VMware vSphere Blog

Finally, since most SmartNICs are based on 64-bit ARM processors, VMware have successfully ported ESXi to ARM. Whilst ESXi-based compute virtualisation has traditionally run on x86 platforms, VMware and ARM have worked together to release ESXi on ARM, initially as a Fling.

VMware and NVIDIA

Although multiple SmartNIC providers and collaborations are in flight with Project Monterey, the big one to come out of VMworld 2020 was between VMware and NVIDIA. NVIDIA invented and brought to market the GPU, transforming computer graphics and the gaming industry. Since then, the GPU has evolved into a diverse and powerful coprocessor, now particularly popular for Artificial Intelligence (AI).

The NVIDIA Mellanox BlueField-2 SmartNIC will eventually be integrated with VMware Cloud Foundation. This provides customers with data centre infrastructure on a chip in each compute node, accelerating network, storage, and security services for high-performance hybrid cloud workloads. An example of this technology is using the DPU to handle firewall capabilities, controlling which traffic is actually passed to the host itself to improve threat protection and mitigate attacks like DDoS.

The data centre architecture will incorporate three core processors: the standard CPU, the GPU for accelerated computing, and the new DPU for offloading core infrastructure and security tasks. Since VMware Cloud Foundation can be consumed on-premises, at the edge, or in public cloud, VMware and NVIDIA hope this technology makes it as easy as possible for organisations to consume AI with a consistent experience. You can read the full announcement here, and more information about Project Monterey with BlueField-2 here.

Migrate to Microsoft Azure with Azure VMware Solution

VMware and Azure sessions available free at VMworld 2020 (29 Sept – 1 Oct):

  • Breakout Session: Run VMware natively on Azure with the latest from Azure VMware Solution
  • Breakout Session: Enable Secure Remote Work: Windows Virtual Desktop & Horizon Cloud on Azure
  • On-Demand: Deep Dive on the Latest Updates from Azure VMware Solution
  • On-Demand: Optimize for the End User: Windows Virtual Desktop & Horizon Cloud on Azure
  • Roundtable: Expert Roundtable: Microsoft Azure VMware Solution

Introduction

Azure VMware Solution (AVS) is a private cloud VMware-as-a-service solution, allowing customers to retain VMware-related investments, tools, and skills, whilst taking advantage of the scale and performance of Microsoft Azure.

Microsoft announced the latest evolution of Azure VMware Solution in May 2020, with the major news that AVS is now a Microsoft first-party solution, endorsed and Cloud Verified by VMware. Microsoft are a VMware strategic technology partner, and will build and run the VMware Software Defined Data Centre (SDDC) stack and underlying infrastructure for you. The latest availability of Azure VMware Solution by region can be found here.

If you have looked at AVS by CloudSimple before, this is a new offering: consistent in architecture, but now sold and supported directly by Microsoft, providing a single point of support, and fully manageable from the Azure Portal. CloudSimple was acquired by Google in late 2019.

Azure VMware Solution Explained

Azure VMware Solution is the VMware Cloud Foundation (VCF) software stack built using dedicated bare-metal Azure infrastructure, allowing you to run VMware workloads on the Microsoft Azure cloud. AVS is designed for end-to-end High Availability with built-in redundancy. Microsoft own and manage all support cases, including any that may need input from VMware.

Microsoft are responsible for all the underlying physical infrastructure, including compute, network, and storage, as well as physical security of the data centres and environments. In addition to hardware failure remediation and lifecycle management, Microsoft are also responsible for the deployment, patching, and upgrading of ESXi, vCenter, vSAN, NSX-T, and Identity Management. This allows the customer to consume the VMware infrastructure as a service; rather than spending time firefighting or applying security updates, IT staff can concentrate on application improvements or new projects. Host maintenance and lifecycle activities, such as firmware upgrades or predictive failure remediation, are all carried out with no disruption or reduction in capacity.

Microsoft’s data centres meet the high levels of perimeter and access security you would expect: 24×7 security personnel, biometric and visual sign-in processes, strict requirements for visitors (including business justification, advance booking, and location tracking), metal detectors and security screening, and security cameras, with per-cabinet coverage and video archiving.

The customer is still responsible for Virtual Machines and everything running within them, including the guest OS, software, and VMware Tools. Furthermore, the customer retains control over the configuration of vCenter, vSAN, NSX-T, and Identity Management. VMware administrators have full control over where their data is and who has access to it, using Role Based Access Control (RBAC), Active Directory (AD) federation, customer-managed encryption keys, and Software Defined Network (SDN) configuration including gateway and distributed firewalls.

Elevated root access to vCenter Server is also supported with AVS, which helps protect existing investments in third-party solutions that may need certain vCenter permissions for services like backup, monitoring, anti-virus, or Disaster Recovery. By providing operational consistency, organisations are able to leverage existing VMware investments in both people skills and licensing across the VMware ecosystem, while reducing the risk of migrating to the cloud.

Connectivity between environments is visualised at a high level in the image below from Microsoft’s AVS documentation page. The orange box symbolises the VCF stack, made up of vSphere, vSAN, and NSX-T.

AVS Overview

Some example scenarios where AVS may be able to resolve IT issues are as follows:

  • Data centre contract is expiring or increasing in cost
  • Hardware or software end of life or expensive maintenance contracts
  • Capacity demand, scale, or business continuity
  • Security threats or compliance requirements
  • Cloud first strategy or desire to shift to a cloud consumption model
  • Local servers in offices are no longer needed as workforces become more remote

Azure Hybrid Benefit allows existing Microsoft customers with Software Assurance to bring on-premises Windows and SQL licences to AVS. Additionally, Microsoft are providing extended security updates for Windows and SQL 2008/R2 running on AVS.

There is a clear and proven migration path to AVS without refactoring whole applications and services, or even changing the VM file format or network settings. With AVS, a VM can be live migrated from on-premises to Azure. Hybrid Cloud Extension (HCX) is included with AVS and enables L2 networks to be stretched to the cloud. The Azure VMware Solution assessment appliance can be deployed to calculate the number of hosts needed for existing vSphere environments; full details can be found here.

AVS Technical Specification

Azure VMware Solution uses the customer's Azure account and subscription to deploy Private Cloud(s), providing a deep level of integration with Azure services and the Azure Portal. It also means tasks and features can be automated using the API, as shown in the sketch after the specification list below. Each Private Cloud contains a vCenter Server, NSX-T Manager, and at least one vSphere cluster using vSAN. A Private Cloud can have multiple clusters, up to a maximum of 64 hosts. Each vSphere cluster has a minimum host count of 3 and a maximum of 16. The standard node type used in Azure is the AV36, dedicated bare-metal hardware with the following specifications:

  • CPU: 2 x Intel Xeon Gold 6140 (2.3 GHz), 36 cores/72 hyper-threads
  • Memory: 576 GB
  • Data: 15.36 TB (8 x 1.92 TB SSD)
  • Cache: 3.2 TB (2 x 1.6 TB NVMe)
  • Network: 4 x Mellanox ConnectX-4 Lx Dual Port 25GbE
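Because each Private Cloud lives in the customer's own subscription, deployment can be automated as well as driven from the Azure Portal. The sketch below assumes the azure-mgmt-avs Python SDK and its PrivateCloud, Sku, and ManagementCluster models; the subscription ID, resource group, private cloud name, region, and CIDR block are hypothetical placeholders.

```python
# Minimal sketch (assumed SDK: azure-mgmt-avs): deploying an AVS
# Private Cloud programmatically. All resource names, the region,
# and the /22 network block are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.avs import AVSClient
from azure.mgmt.avs.models import ManagementCluster, PrivateCloud, Sku

avs = AVSClient(DefaultAzureCredential(), "<subscription-id>")

private_cloud = PrivateCloud(
    location="uksouth",
    sku=Sku(name="AV36"),  # the standard AVS node type
    management_cluster=ManagementCluster(cluster_size=3),  # minimum 3 hosts
    network_block="10.100.0.0/22",  # required non-overlapping /22
)

poller = avs.private_clouds.begin_create_or_update(
    resource_group_name="avs-rg",
    private_cloud_name="avs-demo",
    private_cloud=private_cloud,
)
print(poller.result().provisioning_state)
```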

AVS uses local all-flash vSAN storage with compression and de-duplication. Storage Policy Based Management (SPBM) allows customers to define policies for IOPS-based performance or RAID-based protection. Storage policies can be applied to multiple VMs or right down to an individual VMDK file. By default the vSAN datastore is encrypted, and AVS supports customer-managed external HSM or KMS solutions as well as integration with Azure Key Vault.

An AVS Private Cloud requires at least a /22 CIDR block on deployment, which must not overlap with any of your existing networks (a quick validation sketch follows the diagram below). You can view the full requirements in the tutorial section of the AVS documentation. Access to Azure services in your subscription and VNets is achieved using an Azure ExpressRoute connection: a high-bandwidth, low-latency, private connection with automatically provisioned Border Gateway Protocol (BGP) routing. Access to on-premises environments is enabled using ExpressRoute Global Reach. The diagram below shows the traffic flow from on-premises to AVS using ExpressRoute Global Reach. This hub-and-spoke network architecture also provides access to native Azure services in connected (peered) VNets; you can read the full detail here.

On-Prem to AVS
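As a quick sanity check on that /22 requirement, the following sketch uses only the Python standard library to confirm a proposed block does not overlap existing on-premises or VNet ranges; all address ranges shown are example values.

```python
# Minimal sketch: checking a proposed AVS /22 block for overlaps
# against existing networks. All ranges are example values.
import ipaddress

proposed = ipaddress.ip_network("10.100.0.0/22")
existing = [
    ipaddress.ip_network("10.0.0.0/16"),    # example on-premises range
    ipaddress.ip_network("172.16.0.0/24"),  # example Azure VNet range
]

conflicts = [net for net in existing if proposed.overlaps(net)]
if conflicts:
    print(f"{proposed} overlaps existing networks: {conflicts}")
else:
    print(f"{proposed} is safe to use for the Private Cloud")
```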

AVS Native Azure Integration

A great feature of AVS is the native integration with Azure services over Azure's private backbone network. Although the big selling point is of course operational consistency, applications can eventually be modernised in ways that provide business benefit or an improved user experience. Infrastructure administrators who no longer have to manage firmware updates and VMware lifecycle management are able to focus on upskilling to Azure.

Deployment of a Private Cloud with AVS takes as little as 2 hours. Some basic Azure knowledge is required, since the setup is done in the Azure Portal and you will also need to create a Resource Group, VNets, subnets, a VNet gateway, and most likely an ExpressRoute too.
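For repeatable deployments, those prerequisite resources can be scripted too. The sketch below assumes the azure-mgmt-resource and azure-mgmt-network Python SDKs; the resource group, VNet name, region, and address ranges are hypothetical placeholders, and GatewaySubnet is the reserved subnet name a VNet gateway requires.

```python
# Minimal sketch (assumed SDKs: azure-mgmt-resource and
# azure-mgmt-network): pre-creating the Resource Group and VNet
# an AVS deployment needs. Names and ranges are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
subscription_id = "<subscription-id>"

resources = ResourceManagementClient(credential, subscription_id)
resources.resource_groups.create_or_update("avs-rg", {"location": "uksouth"})

network = NetworkManagementClient(credential, subscription_id)
network.virtual_networks.begin_create_or_update(
    "avs-rg",
    "avs-vnet",
    {
        "location": "uksouth",
        "address_space": {"address_prefixes": ["172.16.0.0/16"]},
        # GatewaySubnet hosts the VNet gateway used with ExpressRoute
        "subnets": [{"name": "GatewaySubnet",
                     "address_prefix": "172.16.0.0/27"}],
    },
).result()
```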


To get the full value out of the solution, native Azure services can be used alongside Virtual Machines. Some example integrations that can be looked at straight away are: Blob storage, offering object storage at varying tiers of cost and performance; Azure Files, providing large-scale SMB file shares with AD authentication options; and Azure Backup, facilitating VM backups to an Azure Recovery Services Vault. Additional services like Azure Active Directory (AAD), Azure NetApp Files, and Azure Application Gateway may also help modernise your environment, along with Azure Log Analytics, Azure Security Center, and Azure Update Manager.
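As a small taste of that integration, the sketch below uses the azure-storage-blob Python SDK to copy a file from a VM into Blob storage; the connection string, container, and file names are example values.

```python
# Minimal sketch: archiving a file from an AVS-hosted VM to Azure
# Blob storage. Connection string and names are example values.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("vm-archives")

with open("app-logs.tar.gz", "rb") as data:
    container.upload_blob(name="app-logs.tar.gz", data=data, overwrite=True)
```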

VMware customers using, or interested in, Horizon will also note that the Horizon Cloud on Microsoft Azure service is available; if configured accordingly, it will have network line of sight to your VM workloads in AVS and to native Azure services.

For more information on Azure integration, see the Azure native integration section of the AVS documentation page, and the AVS blog site by Trevor Davis. Further detail on Azure VMware Solution can be found at the product page, FAQ page, or on-demand webinar.

Google Cloud VMware Engine Explained

Google Cloud VMware Engine (GCVE) has launched in the UK. This post has been updated now that more service information is available.

esxsi.com

Google Cloud VMware Engine (GCVE) is a fully managed VMware-as-a-service solution, provided and managed by Google or a third-party VMware Cloud Provider, that can be deployed in as little as 30 minutes. VMware Engine runs VMware Cloud Foundation on dedicated Google Cloud bare-metal servers, with native cloud integrations including Google's innovative big data and machine learning services. The VMware Cloud Foundation stack is made up of VMware vSphere, vCenter, NSX-T, and vSAN. The platform is VMware Cloud Verified and includes Hybrid Cloud Extension (HCX) to facilitate data centre extension or migration. You can read the full Google announcement from May 2020 here.

Google Cloud Platform

Google Cloud Platform (GCP) offers a wide variety of services, from Infrastructure-as-a-Service (IaaS) to Platform-as-a-Service (PaaS), running on the same infrastructure Google uses to provide global end-user services. Google's cloud services are built on data centres designed to save water and electricity; they have…
