Google Cloud VMware Engine Explained

Introduction

Google Cloud VMware Engine is a fully managed VMware-as-a-Service solution provided and managed by Google. VMware Engine runs VMware Cloud Foundation on dedicated Google Cloud bare metal servers, with native cloud integrations including Google’s innovative big data and machine-learning services. The VMware Cloud Foundation stack is made up of VMware vSphere, vCenter, NSX-T, and vSAN. The platform is VMware Cloud Verified and includes Hybrid Cloud Extension (HCX) to facilitate data centre extension or migration. You can read the full Google announcement from May 2020 here.

Google Cloud Platform

Google Cloud Platform (GCP) offers a wide variety of services, from Infrastructure-as-a-Service (IaaS) to Platform-as-a-Service (PaaS), running on the same infrastructure Google uses to deliver its global end-user services. Google's cloud services are built on data centres designed to save water and electricity; Google has been carbon-neutral since 2007 and aims to power all operations with 100% renewable energy.

As an organisation, Google is all about big data at huge scale. Google operates one of the largest and most advanced private software-defined networks in the world, stretching across thousands of miles of fibre-optic cable through over 200 countries, with 140 network edge locations.

Google-Global-Locations

Perhaps the key differentiator for Google as a cloud services provider is the commercialisation of the innovative big data and machine-learning tools it uses internally to serve billions of search results and YouTube videos every day. Google's focus is on letting developers think about the code and applications they develop, not about operations.

Of course, like all the major cloud providers, Google provides you with the functionality to spin up Virtual Machines, and this is a completely different service from Google Cloud VMware Engine. Google Compute Engine (GCE) supplies the raw building blocks for Virtual Machine instances and networks. GCE enables performance-optimised, fast-booting instances in an Infrastructure-as-a-Service (IaaS) model, similar to AWS' Elastic Compute Cloud (EC2). In addition to standard pre-configured instance types, GCE allows you to customise CPU and RAM allocations and to save money on 'always-on' VMs with sustained-use discounts. GCE is part of the Google Cloud compute suite of services alongside Platform-as-a-Service offerings like Google App Engine and Google Kubernetes Engine. The comprehensive list of Google Cloud products can be found here; VMware Engine is categorised under compute.
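Because GCE bills custom shapes per vCPU and per GB of memory, the interaction between a custom machine type and sustained-use discounts is easy to model. A rough sketch for illustration; the hourly rates and the 30% maximum discount below are assumed placeholders, not actual Google Cloud pricing:

```python
# Illustrative cost model for a GCE custom machine type with a
# sustained-use discount. Rates and discount are made-up placeholders,
# NOT actual Google Cloud pricing.

VCPU_RATE = 0.033   # assumed $/vCPU-hour
RAM_RATE = 0.0045   # assumed $/GB-hour

def monthly_cost(vcpus, ram_gb, hours=730, sustained_discount=0.30):
    """Cost of a custom shape; running a full month earns the
    (assumed) maximum sustained-use discount."""
    on_demand = (vcpus * VCPU_RATE + ram_gb * RAM_RATE) * hours
    return on_demand * (1 - sustained_discount)

# A custom 6-vCPU / 20 GB instance, always on for a month:
print(round(monthly_cost(6, 20), 2))  # → 147.17
```

The point of the model is the shape of the saving: the discount only applies to sustained usage, so bursty instances pay closer to the on-demand rate.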

GCP-Example

You can try out Google Cloud here with certain always free products and $300 free credit.

Google Cloud VMware Engine

VMware Engine runs on high-performance bare metal hosts in Google Cloud locations, initially in the us-east4 and us-west2 regions with a further 8 regions due in late 2020. The full VMware Cloud Foundation stack is used to provide a secure, scalable, consistent environment for VMware workloads, with Google managing the lifecycle of the VMware stack and all related infrastructure.

By running VMware Cloud Foundation in Google Cloud, customers can migrate workloads to the cloud without refactoring applications, replacing third-party integrated products, or reskilling teams. The existing vSphere network design can be migrated or extended with minimal re-architecture using HCX, while taking advantage of Google Cloud's edge network security and advanced DDoS protection. The dedicated VMware stack in Google Cloud can be linked back to the on-premises VMware environment using a VPN or a high-speed, low-latency private interconnect, with HCX adding hybrid connectivity for seamless workload and subnet migration.

VMware Engine enables deep integration with third-party services for backup and storage such as Veeam, NetApp, Dell, Cohesity, and Zerto. Infrastructure administrators can leverage the scale and agility of the cloud whilst maintaining operational continuity of tools, policies, and processes.

The Google Cloud console has a built-in VMware Engine User Interface (UI) that integrates with billing and Identity and Access Management. VMware workloads in the VMware Engine environment can connect into native Google Cloud services like BigQuery, Anthos, and Cloud Storage using a private interconnect into Google’s 100Gbps backbone network.

Google-Cloud-VMware-Engine

VMware Cloud Foundation in Google Cloud is built on isolated, single-tenancy bare-metal infrastructure. All-flash NVMe storage in a hyper-converged setup provides the speed and performance required for the most demanding workloads, such as Oracle, VDI, Microsoft Exchange, and SQL Server. Data is encrypted at rest and in transit, with support for customer-managed keys. Google Cloud Storage or third-party solutions can be leveraged for lower-cost and secondary storage tiers. The standard node size is Google's ve1-standard-72, with the following specifications:

  • CPU: Intel Xeon Gold 6240 (Cascade Lake) 2.6 GHz (3.9 GHz Turbo) x2, 36 cores/72 hyper-threads
  • Memory: 768 GB
  • Data: 19.2 TB NVMe
  • Cache: 3.2 TB NVMe
  • Network: 25Gbps NIC x4

The minimum configuration is 3 hosts, with a 64-host maximum per private cloud and any number of private clouds. Hosts can be purchased on a 1- or 3-year commitment or using on-demand per-hour pricing, with all infrastructure costs and associated licenses included. Google sells and supports VMware Engine, so the customer's contract is with Google, while the VMware Cloud Verified accreditation gives existing VMware customers peace of mind that hybrid environments are supported end to end.
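Given the node specifications above and the 3-to-64 host limits, the raw capacity of a private cloud is simple arithmetic. A sketch of the numbers; note the usable-storage figure assumes vSAN RAID-1 mirroring with FTT=1, which is an assumption for illustration, not a stated platform default:

```python
# Raw capacity of a VMware Engine private cloud built from
# ve1-standard-72 nodes, using the specs listed above.

NODE = {"cores": 36, "ram_gb": 768, "nvme_data_tb": 19.2}

def private_cloud_raw(hosts):
    if not 3 <= hosts <= 64:
        raise ValueError("a private cloud is 3 to 64 hosts")
    return {k: v * hosts for k, v in NODE.items()}

def usable_storage_tb(hosts, ftt_overhead=2.0):
    # Assumes vSAN RAID-1 mirroring with FTT=1 (2x raw consumption).
    return private_cloud_raw(hosts)["nvme_data_tb"] / ftt_overhead

minimum = private_cloud_raw(3)  # the 3-host minimum configuration
print(minimum["cores"], minimum["ram_gb"], round(minimum["nvme_data_tb"], 1))
print(round(usable_storage_tb(3), 1))
```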

The Google Cloud UI integration provides unified management of VMware workloads and native cloud services. Access to vCenter Server enables consistent operations and investment protection for IT support personnel and licensing through the VMware partner ecosystem. As with other VMware Cloud platforms, the customer retains control of their Virtual Machines; deciding upon data location, authorisation and access policies, and the networking and firewalling of both north-south and east-west traffic, with separate Layer-2 networks within a private cloud environment. With VMware Engine, Google also allows 24-hour privilege elevation for installing and managing tools that require vCenter administrative access.

Example use cases for Google Cloud VMware Engine:

  • Data Centre Extension or Migration: extend data centre boundaries and scale to the cloud or additional regions quickly with guaranteed compatibility of existing workloads. Achieve true workload mobility between VMware environments for high availability and demand-based scalability. Migrate Virtual Machines to the cloud, and back if needed, without refactoring applications or even changing network settings.
  • Disaster Recovery (DR): backup and DR targets can be moved to the cloud to improve availability options and reduce total cost of ownership. By taking advantage of Google’s global infrastructure organisations can improve system availability by deploying across multiple zones or regions. Business-critical applications can be scaled on-demand, either through native services or SDDC expansion in minutes.
  • Data Analytics and Innovation: access to Google’s internal big data services for querying massive data-sets in seconds, with actionable insights from serverless and machine-learning data analytics platforms. Operational staff can concentrate on improving systems and processes or commission new work, whilst Google maintains upgrades, updates, and security patches for all the underlying infrastructure.
  • Hybrid Applications: high-speed, low-latency access to native Google Cloud services with Virtual Private Cloud (VPC) peering enables hybrid applications across platforms. For example, front-end web and application servers can be migrated from on-premises data centres to Google Cloud VMware Engine while large databases run in a dedicated VPC, with millisecond response times.

Further reading on Google Cloud VMware Engine can be found at the product page here; a number of useful whitepapers are linked at the bottom of the page.

AWS FSx File Server Storage for VMware Cloud on AWS

Introduction

Amazon FSx for Windows File Server is an excellent example of quick and easy native AWS service integration with VMware Cloud on AWS. Hosting a Windows file share is a common setup in on-premises data centres; shares might be served from Windows Servers or from dedicated file-based storage presenting Server Message Block (SMB) / Common Internet File System (CIFS) shares over the network. When migrating Virtual Machines to VMware Cloud on AWS, an alternative solution may be needed if the data is large enough to impact capacity planning of VMware Cloud hosts, or if it already resides on a dedicated storage array.

AWS FSx

FSx is Amazon's fully managed file storage offering and comes in two flavours: FSx for Windows File Server and FSx for Lustre (for high-performance workloads). This post focuses on FSx for Windows File Server, which provides a managed file share capable of handling thousands of concurrent connections from Windows, Linux, and macOS clients that support the industry-standard SMB protocol.

FSx is built on Windows Server, with AWS managing all the underlying file system infrastructure, and can be consumed by users and compute services such as VMware Cloud on AWS VMs, Amazon WorkSpaces, or Elastic Compute Cloud (EC2). File-based backups are automated and use Simple Storage Service (S3) with configurable lifecycle policies for archiving data. FSx integrates with Microsoft Active Directory, enabling standardised user permissions and migration of existing Access Control Lists (ACLs) from on-premises using tools like Robocopy. As you would expect, file systems can be spun up and down on-demand, with a consumption-based pricing model and different performance tiers of disk. You can read more about the FSx service and additional features, such as user quotas and data deduplication, in the AWS FSx FAQs.

Example Setup

VMware-Cloud-FSx-Example

In the example above, FSx is deployed to the same Availability Zones as VMware Cloud on AWS for continuous availability. Disk writes are synchronously replicated across Availability Zones to a standby file server, and in the event of a service disruption FSx automatically fails over to the standby server. Data is encrypted in transit and at rest, and traffic uses the 25 Gbps Elastic Network Interface (ENI) between VMware Cloud and the AWS backbone network. There are no data egress charges for using the ENI connection, but there may be cross-AZ charges from AWS in multi-AZ configurations. For more information on the connected VPC and services, see AWS Native Services Integration With VMware Cloud on AWS.

A reference architecture for integrating Amazon FSx for Windows File Server with VMware Cloud on AWS is available from VMware, along with a write-up by Adrian Roberts here. AWS FSx allows single-AZ or multi-AZ deployments, with single-AZ file systems supporting Microsoft Distributed File System Replication (DFSR) compatible with your own namespace servers, which is the model used in the VMware reference architecture. At the time of writing, custom DNS names are still roadmapped for multi-AZ. You can see the full table of feature support by deployment type in the Amazon FSx for Windows File Server User Guide.

FSx Setup

To provide user-based authentication, access control, and DNS resolution for FSx file shares, you can use your existing Active Directory domain or deploy AWS Managed Microsoft AD using AWS Directory Services. You will need your Active Directory details ready before starting the FSx deployment, along with the Virtual Private Cloud (VPC) and subnet information to use.

Log in to the AWS console and locate FSx under Storage in the Services drop-down. On the FSx splash screen, click Create file system. On this occasion, we are creating a Windows file system.

FSx-Setup-1

Enter the file system details, starting with the file system name, deployment type, storage type, and capacity.

FSx-Setup-2

A throughput capacity value is recommended and can be customised based on the data requirements. Select the VPC, Security Group, and subnets to use. In this example, I have selected the subnets connected to VMware Cloud on AWS as defined in the ENI setup.

FSx-Setup-3

Enter the Active Directory details, including service accounts and DNS servers. If desired, you can make changes to the encryption keys, daily backup window, maintenance window, and add any required resource tags. Review the summary page and click Create file system.

FSx-Setup-4

The file system is created and will show a status of Available once complete.
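The same deployment can be scripted, since the console steps above map to the FSx CreateFileSystem API. A minimal boto3-style sketch of the request body; the subnet, security group, and directory IDs are placeholders, and your capacity and throughput values will differ:

```python
# Build the CreateFileSystem request for a multi-AZ FSx for Windows
# File Server; all resource IDs below are placeholders.

def fsx_windows_params(name, capacity_gib, throughput_mbps,
                       subnet_ids, sg_ids, ad_id):
    return {
        "FileSystemType": "WINDOWS",
        "StorageType": "SSD",
        "StorageCapacity": capacity_gib,
        "SubnetIds": subnet_ids,
        "SecurityGroupIds": sg_ids,
        "WindowsConfiguration": {
            "DeploymentType": "MULTI_AZ_1",
            "PreferredSubnetId": subnet_ids[0],  # the preferred (active) AZ
            "ThroughputCapacity": throughput_mbps,
            "ActiveDirectoryId": ad_id,
        },
        "Tags": [{"Key": "Name", "Value": name}],
    }

params = fsx_windows_params("vmc-file-share", 1024, 32,
                            ["subnet-aaaa", "subnet-bbbb"],
                            ["sg-0123456789"], "d-0123456789")
# To create it for real:
#   boto3.client("fsx").create_file_system(**params)
```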

FSx-Setup-5

If you're not using the default Security Group with FSx, then the following ports will need defining in rules for inbound and outbound traffic: TCP/UDP 445 (SMB), TCP 135 (RPC), TCP/UDP 1024-65535 (RPC ephemeral port range). There may be additional Active Directory ports required for the domain the file system is being joined to.

Further to the FSx Security Group, the ENI Security Group also needs the SMB and RPC port ranges added as inbound and outbound rules to allow communication between VMware Cloud on AWS and the FSx service in the connected VPC. In any case, when configuring Security Group or firewall rules, the source or destination should be the clients accessing the file system or, if applicable, any other file servers participating in DFS Replication. AWS Security Groups are accessible in the console under VPC. You can either create a dedicated Security Group or modify an existing ruleset. The Security Group in use by the VMware Cloud ENI can be found under EC2 > ENI.
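Since the same SMB and RPC rules apply to both the FSx and ENI Security Groups, they can be defined once in the EC2 IpPermissions structure and reused. A sketch, with a placeholder CIDR standing in for the clients accessing the file system:

```python
# The SMB and RPC rules from the text above as EC2 IpPermissions entries.

def fsx_permissions(client_cidr):
    def rule(proto, first, last):
        return {"IpProtocol": proto, "FromPort": first, "ToPort": last,
                "IpRanges": [{"CidrIp": client_cidr}]}
    return [
        rule("tcp", 445, 445),      # SMB
        rule("udp", 445, 445),      # SMB
        rule("tcp", 135, 135),      # RPC endpoint mapper
        rule("tcp", 1024, 65535),   # RPC ephemeral port range
        rule("udp", 1024, 65535),   # RPC ephemeral port range
    ]

perms = fsx_permissions("10.72.0.0/16")  # placeholder client range
# Apply to each Security Group, inbound and outbound, e.g.:
#   ec2.authorize_security_group_ingress(GroupId="sg-...", IpPermissions=perms)
#   ec2.authorize_security_group_egress(GroupId="sg-...", IpPermissions=perms)
```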

FSx-Security-Group

With the SMB ports open for the FSx and ENI Security Groups, remember that the traffic will also hit the VMware Cloud on AWS Compute Gateway. In the VMware Cloud Services Portal add the same rules to the Compute Gateway, and to the Distributed Firewall if you’re using micro-segmentation. The Compute Gateway Firewall is accessible from the Networking & Security tab of the SDDC.

VMC_GW_FW

Virtual Machines in VMware Cloud on AWS will now be able to access the FSx file shares across the ENI using the DNS name for the share or UNC path.

The FSx service in the AWS console provides some options for managing file systems. Storage capacity, throughput, and IOPS can be viewed quickly and added to a CloudWatch dashboard. CloudWatch Logs can also be ingested by vRealize Log Insight Cloud from the VMware Cloud Services Portal.
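Those figures can also be pulled programmatically: FSx publishes metrics under the AWS/FSx CloudWatch namespace, keyed on the file system ID. A sketch of a GetMetricStatistics request for free storage capacity; the file system ID below is a placeholder:

```python
from datetime import datetime, timedelta

# Build a GetMetricStatistics query for an FSx file system's free
# storage capacity over the last day; the ID below is a placeholder.

def free_capacity_query(file_system_id, hours=24):
    now = datetime.utcnow()
    return {
        "Namespace": "AWS/FSx",
        "MetricName": "FreeStorageCapacity",
        "Dimensions": [{"Name": "FileSystemId", "Value": file_system_id}],
        "StartTime": now - timedelta(hours=hours),
        "EndTime": now,
        "Period": 3600,            # hourly data points
        "Statistics": ["Average"],
    }

query = free_capacity_query("fs-0123456789abcdef0")
# To run it for real:
#   boto3.client("cloudwatch").get_metric_statistics(**query)
```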

FSx-Monitoring

vSphere 7 with Kubernetes and Tanzu on VMware Cloud Foundation

vSphere 7 Cloud Infrastructure for Modern Applications Part 1

vSphere 7 Cloud Infrastructure for Modern Applications Part 2

VMware Cloud Foundation 4

VMware Cloud Foundation (VCF) 4 delivers vSphere with Kubernetes at cloud scale, bringing together developers and IT operations by providing a full-stack Hyper-Converged Infrastructure (HCI) for Virtual Machines (VMs) and containers. By utilising software-defined infrastructure for compute, storage, network, and management, IT operations can provide agility, flexibility, and security for modern applications. The automated provisioning and maintenance of Kubernetes clusters through vCenter Server means that developers can rapidly deploy new applications or micro-services with cloud agility, scale, and simplicity. At the same time, IT operations continue supporting the modern application framework by leveraging existing vSphere functionality and tooling.

VMware has always been an effective abstraction provider, and VCF 4 with the Tanzu Services View takes infrastructure abstraction to the next level. Within vSphere, underlying infrastructure components are abstracted into a set of services exposed through APIs, allowing the developer to look down from the application layer and consume the hybrid infrastructure services. Meanwhile, IT operations can build out policies and manage pods alongside VMs at scale using vSphere.

VCFwKubernetes

vSphere 7 with Kubernetes

Now and over the next 5 years, we will see a shift in how applications are built and run. In 2019, Line of Business (LOB) IT, or shadow IT, spend exceeded Infrastructure and Operations IT spend for the first time*. Modern applications are distributed systems built across serverless functions or managed services, containers, and Virtual Machines (VMs), replacing typical monolithic VM application and database deployments. The VMware portfolio is expanding to meet the needs of customers building modern applications, with services from Pivotal, Wavefront, CloudHealth, Bitnami, Heptio, Bitfusion, and more. In the container space, VMware is strongly positioned to address modern application challenges for developers, business leaders, and infrastructure administrators.

Launched on March 10, 2020, with expected April 2020 availability, vSphere 7 with Kubernetes powers VMware Cloud Foundation 4. vSphere 7 with Kubernetes integration, the first product including capabilities announced as part of Project Pacific, provides real-time access to infrastructure in the Software-Defined Data Centre (SDDC) through familiar Kubernetes APIs, delivering security and performance benefits even over bare-metal hardware. The Kubernetes integration enables the full SDDC stack to utilise the Hybrid Infrastructure Services from ESXi, vSAN, and NSX-T, which provide the Storage Service, Registry Service, Network Service, and Container Service. Developers do not need to translate applications to infrastructure, instead leveraging existing APIs to provision micro-services, while infrastructure administrators use existing vCenter Server tooling to support Kubernetes workloads alongside Virtual Machines.

You can read more about the workings of vSphere with Kubernetes in this Project Pacific Technical Overview for New Users by Ellen Mei. Also, Initial Placement of a vSphere Pod by Frank Denneman is another useful and recent article detailing the process behind the ESXi container runtime.

VMware Application Modernisation Portfolio with Tanzu

VMware Cloud Foundation Services is the first manifestation of Project Pacific (now vSphere with Kubernetes), providing consistent services managed by IT operations to developers, ultimately anywhere the VMware Cloud platform runs.

In the past, VMware was efficient at taking many Virtual Machines and running them across multiple hypervisors in a cluster; the challenge then was consolidating numerous physical servers for cost-saving, management, and efficiency gains. Today application deployments are groups of VMs, which presents a new challenge: consolidating distributed applications across multiple clouds. Project Pacific brings Kubernetes and Tanzu to the vSphere environment, making it operationally easier to get upstream Kubernetes running, but also to in-place upgrade and maintain Kubernetes clusters effortlessly. This functionality accelerates vSphere into a much more modern, API-driven, self-service, fast-provisioning interface backed by ESXi optimised for all workloads.

Tanzu Kubernetes Grid (TKG) is a Kubernetes runtime built into VMware Cloud Foundation Services, enabling installation and maintenance of multi-cluster Kubernetes environments across different infrastructure. Tanzu Kubernetes Grid also provides operational consistency across Amazon Web Services (AWS), Azure, and Google Compute Engine (GCE). This is different to public cloud-managed Kubernetes services such as EKS, AKS, and GKE, as it integrates natively into the existing infrastructure, meeting the needs of organisations that require abstracted logging, events, governance policies, and admission policies. This capability delivers not just Kubernetes but a set of management services to provision, deploy, upgrade, and maintain Kubernetes clusters. With this granular level of control over the underlying VMs or cloud environment, customers can implement, monitor, or enforce their own security policies and governance.
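To make the self-service model concrete, here is a sketch of the kind of cluster specification the TKG Service in vSphere 7 consumes, expressed as a Python dict before YAML serialisation. The virtual machine class, storage class, and Kubernetes version strings are illustrative placeholders; check the supervisor cluster for the values available in your environment:

```python
# A minimal TanzuKubernetesCluster specification; class, storageClass,
# and version values are placeholders, not from the source text.

def tkc_manifest(name, namespace, control_plane=3, workers=3):
    node = {"class": "best-effort-small", "storageClass": "vsan-default"}
    return {
        "apiVersion": "run.tanzu.vmware.com/v1alpha1",
        "kind": "TanzuKubernetesCluster",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "distribution": {"version": "v1.17"},  # assumed version string
            "topology": {
                "controlPlane": {"count": control_plane, **node},
                "workers": {"count": workers, **node},
            },
        },
    }

cluster = tkc_manifest("demo-cluster", "dev-team")
# Serialised to YAML, this would be applied with kubectl against the
# supervisor cluster namespace.
```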

Tanzu Mission Control provides operator consistency for deployment, configuration, security, and policy enforcement for Kubernetes across multiple clouds, simplifying the management of Kubernetes at scale. Tanzu Mission Control is a Software as a Service (SaaS) control plane offering, allowing VMware administrators to standardise Identity Access Management (IAM), configuration and policy control, backup and recovery, ingress control, cost accounting, and more. The multi-cluster control plane supports the propagation of Kubernetes across vSphere, Pivotal Container Services (PKS), AWS, Azure, and Google Cloud Platform (GCP), all from a single point of control.

VMware has announced the availability of Tanzu Kubernetes Grid, Tanzu Mission Control, and Tanzu Application Catalog (an open-source software catalog powered by Bitnami), providing a unified platform to build, run, and manage modern applications.

VMware Cloud Foundation Additional Updates

VMware Cloud Foundation (VCF) 4 is expected in April 2020 and includes fully updated software building blocks for the private cloud, including vCenter, ESXi, and vSAN 7.0, plus the addition of NSX-T.

VCF with NSX-T is made up of workload domain constructs, and by default every architecture starts with a management domain, which hosts vCenter, the NSX Managers, and the edge cluster. There are a few changes in VCF 4 that reduce the footprint of the management domain: NSX-T is fully utilised for the first time, the NSX Edge cluster can be deployed as a day X action, NSX Manager and controllers are now integrated, and the Platform Services Controller (PSC) now uses the embedded model with vCenter. Additionally, there is the capability to use Application Virtual Networks (AVN) with BGP peering on deployment, or again as a day X action. As a side note, Log Insight has changed from a default deployment requirement to an optional day X action.

Workload domains are built out to serve vSphere with Kubernetes and expose the network services for developers to use. Workload domains can be built on new or existing NSX-T Managers, offering the choice of a one-to-one or one-to-many relationship for NSX-T instances with VCF 4. This provides customers with the option of separating out NSX-T instances while protecting the management domain. Day X automation can then be used to place edge deployments in the appropriate cluster:

NSXWorkloadDomains

SDDC Manager and Lifecycle Manager (LCM) provide automated provisioning and upgrades. Lifecycle Manager eases upgrade and patching by providing automated lifecycle management, with update notifications, review and schedule options, and monitoring and reporting. LCM can also manage all the versioning inter-dependencies at a cluster level, from vSphere right through to the Kubernetes runtime. SDDC Manager orchestrates and automates the provisioning of vSphere with Kubernetes workload domains, crucially enabling the LCM functionality for maintaining upgrades across the entire software stack and eliminating typical day-2 challenges for developers.

SDDCManager

Multi-Instance Management: multiple VCF instances can now be federated to provide a global view of workload domains without the installation of any additional components. Administrators can click through to any VCF data centre to centrally view patching, upgrades, maintenance, and remediation operations.

New Security Enhancements: native Workspace ONE Identity Access integration for the vRealize suite and NSX-T using AD or LDAP identity sources. Admin and Operator roles are available for the API and UI, with the Operator role providing all privileges except password management, user management, and backup and restore. Token-based authentication is also now enforced across all APIs.

You can find out more about the VMware Cloud Foundation 4 update at What's New in VMware Cloud Foundation 4 and Delivering Kubernetes at Cloud Scale with VMware Cloud Foundation 4.

VMware vRealize Cloud Management Integration

The vRealize Cloud Management product suite has been comprehensively updated to include vSphere 7 with Kubernetes support. vRealize Operations (vROps) 8.1 is now available for the first time as a Software as a Service (SaaS) offering with an enhanced feature set. Some of the key new functionality enables self-driving operations across multi-cloud, hybrid-cloud, and data centre environments.

vROps 8.1 and Cloud now fully support integrations with GCP, native VMware Cloud on AWS as a cloud account (including additional vSAN, NSX-T, and Cloud Services Portal information with billing), an enhanced portfolio of AWS objects, CloudHealth, and vSphere Kubernetes constructs, with the latter crucially enabling Kubernetes cluster onboarding, discovery, continuous performance optimisation, capacity and cost optimisation, monitoring and troubleshooting, and configuration and compliance management. Furthermore, new dashboards and topology views of workload management can be leveraged to display all Kubernetes objects visible from vCenter, for a complete end-to-end view of the infrastructure.

K8s_Dashboard

vRealize Operations 8.1 and Cloud integration for vSphere with Kubernetes:

  • Automatically discover new constructs of supervisor cluster, namespaces, pods, and Tanzu Kubernetes clusters.
  • New dashboards and summary pages for performance, capacity, utilisation, and configuration management of Kubernetes constructs, with full topology views from Kubernetes substrate to physical infrastructure.
  • Capacity forecasting detects utilisation and potential bottlenecks for supervisor clusters and pods and shows time remaining projections for CPU, memory, and disk space.
  • Out of the box reporting functionality for workload management, inventory, configuration, and capacity, with configurable alerting to operationalise the workload platform and provide complete visibility and control.
  • Container management pack extends visibility to monitor and visualise multiple Kubernetes clusters, map and correlate virtual infrastructure to Kubernetes infrastructure, set up alerts and monitoring, and provide support for PKS.
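At their simplest, "time remaining" projections like those above are trend extrapolations over utilisation history. A toy linear version for illustration only; it is not vROps' actual forecasting model:

```python
# Linear "time remaining" projection from evenly spaced usage samples.

def days_remaining(capacity, usage_history, interval_days=1):
    """Days until capacity is exhausted, assuming the average growth
    per sampling interval continues unchanged."""
    growth = (usage_history[-1] - usage_history[0]) / (len(usage_history) - 1)
    if growth <= 0:
        return float("inf")  # flat or shrinking usage never fills up
    headroom = capacity - usage_history[-1]
    return headroom / growth * interval_days

# 100 GB datastore with daily samples growing ~5 GB/day:
print(days_remaining(100, [60, 65, 70, 75, 80]))  # → 4.0
```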

You can find out more about what’s new in the vRealize suite at Delivering Modern Infrastructure for Modern Apps with vRealize Suite.

*LOB spend 51% to infrastructure operations spend 49% – source IDC WW Semiannual IT Spending Guide: Line of Business, 09 April 2018 (HW, SW and services; excludes Telecom)