
Yorkshire VMUG 2020 Announcement

Following a short hiatus, the Yorkshire VMUG is back with a bang in 2020.

The first confirmed event is on May 7th 2020 at the Malmaison Leeds. The venue is central and a 5-minute walk from Leeds train station; the full address is 1 Swinegate, Leeds LS1 4AG.

Lunch, teas, coffees, soft drinks, WiFi, and networking vBeers are all provided.

We are thrilled to partner with Softcat and Veeam as gold sponsors for this event.

  • 09:00-09:30 Arrivals and Registration
  • 09:30-10:10 Keynote Speaker: Joe Baguley, VP & CTO EMEA, VMware
  • 10:10-10:50 James Seaman, Chief Technologist Softcat
  • 10:50-11:00 Break
  • 11:00-11:40 Lee Dilworth, Chief Technologist Storage & Availability EMEA VMware
  • 11:40-12:00 Community Speaker TBA
  • 12:00-12:40 VMware Mental Health First Aider (MHFA) Team
  • 12:40-13:20 Lunch
  • 13:20-14:00 Martin Hosken, Principal Architect VMware
  • 14:00-14:20 Community Speaker TBA
  • 14:20-15:00 Russell Nolan, Senior Enterprise Systems Engineer Veeam
  • 15:00-15:10 Break
  • 15:10-15:50 Darren Hirons, Senior Solutions Engineer VMware
  • 15:50-16:30 Vern Bolinius, Cloud Architect VMware
  • 16:30-17:30 Closing Notes, Prize Giveaway & vBeers

Full session details will be released shortly. For the latest information follow @YorkshireVMUG on Twitter.

Registration is open on the Yorkshire VMUG Community Page.


The VMware User Group (VMUG) is an independent, global, customer-led organisation, created to maximise members’ use of VMware and partner solutions through knowledge sharing, training, collaboration, and events.

Understand VMware Tanzu, Pacific, and Kubernetes for VMware Administrators

This post was last updated 26/10/2019 and provides an overview of VMware Tanzu and Project Pacific.


There will be more apps deployed in the next 5 years than in the last 40 years (source: Introducing Project Pacific: Transforming vSphere into the App Platform of the Future). The VMware strategy of late has seen a significant shift towards cloud-agnostic software and the integration of cloud-native application development. In November 2018 VMware announced the acquisition of Heptio to help accelerate enterprise adoption of Kubernetes on-premises and across multi-cloud environments. In May and August 2019 VMware announced its intent to acquire Bitnami and Pivotal Software respectively, following the successful launch of Pivotal Container Service (PKS), which was later re-branded VMware Enterprise PKS.

To help better address application support complexities between development and operations teams, VMware have now announced VMware Tanzu:

“In Swahili, ’tanzu’ means the growing branch of a tree. In Japanese, ’tansu’ refers to a modular form of cabinetry. At VMware, Tanzu represents our growing portfolio of solutions to help you build, run and manage modern apps.”

VMware Tanzu is a portfolio of products and capabilities that empowers cloud-native development by covering the build, run, and manage stages of modern applications across platforms. Using VMware Tanzu Mission Control, Kubernetes clusters can be built and managed from a single control point.

Another key announcement alongside VMware Tanzu was code-named Project Pacific, which enables IT operators and developers to build and run modern applications with VMware vSphere and native Kubernetes. Project Pacific is focused on re-architecting vSphere so that Kubernetes containers run alongside VMware Virtual Machines (VMs) in ESXi, enabling the development of portable cloud-native applications and micro-services whilst protecting existing investments in products and skills. You can review the press release of all products in the VMware Tanzu portfolio here, and the split of build, run, manage products here.

Introduction to Kubernetes

Kubernetes is an open-source container orchestration and management tool that provides a simple Application Programming Interface (API) exposing a set of capabilities for defining workloads and services. Kubernetes enables containers to run and operate in a production-ready environment at enterprise scale by managing and automating resource utilisation, failure handling, availability, configuration, scale, and desired state. Micro-services can be rapidly published, maintained, and updated.

Kubernetes manages containers, and containers package applications and their dependencies into a portable image that can run almost anywhere, simplifying the application's path to live. Kubernetes makes it easier to run applications across multiple cloud platforms, accelerates application development and deployment, and increases agility, flexibility, and the ability to adapt to change.
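As a minimal sketch of how this looks in practice (assuming a working cluster and a configured kubectl client; the deployment name and image below are purely illustrative), a containerised application can be deployed, scaled out, and inspected with a handful of commands:

# create a deployment and scale it out to three replicas
kubectl create deployment demo-app --image=nginx:1.17
kubectl scale deployment demo-app --replicas=3
# confirm the desired state has been reached
kubectl get deployments,pods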

For VMware administrators with little exposure to DevOps, the following high-level resources can help build a foundational understanding of Kubernetes, and of why VMware are making some of these key changes in architecture and strategy. You can try Kubernetes for yourself using the Kubernetes Academy by VMware, or a Kind Way to Learn Kubernetes.
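For example, assuming Docker and the kind CLI are installed locally (the cluster name below is made up), a disposable lab cluster can be created and thrown away with:

# create a throwaway single-node cluster running inside a Docker container
kind create cluster --name vmug-lab
# remove it when finished experimenting
kind delete cluster --name vmug-lab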

Kubernetes for Executives: “Containers encapsulate an application in a form that’s portable and easy to deploy. Containers can run on any compatible system—in any cloud—without changes. Containers consume resources efficiently, enabling high density and utilization. Kubernetes makes it possible to deploy and run complex applications requiring multiple containers by clustering physical or virtual resources for application hosting. Kubernetes is extensible, self-healing, scales applications automatically, and is inherently multi-cloud.”

Introduction to Project Pacific (Run)

Kubernetes uses a cluster of nodes to distribute container instances. The master node hosts the control plane, containing the API server and scheduling capabilities. Worker nodes provide the compute resources for running workloads (known as pods). VMware have re-designed vSphere to include a Kubernetes control plane for managing Kubernetes workloads on ESXi hosts. This takes the form of a supervisor cluster that uses the ESXi hosts as its worker nodes, allowing workloads or pods to be deployed and run natively in the hypervisor, alongside traditional Virtual Machine workloads. This new functionality is provided by a container runtime built into ESXi called CRX. CRX pairs a slimmed-down, optimised Linux kernel with the hypervisor and strips out much of the traditional heavy configuration of a Virtual Machine, enabling the binary image and executable code to be loaded and booted quickly. Combined with ESXi’s powerful scheduler, this runtime produces the performance results VMware have been publishing, including improvements over bare metal in some benchmarks.
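For reference, in a generic Kubernetes cluster the master and worker node roles described above can be listed by querying the API server with kubectl (a simple sketch against a standard cluster; in the supervisor cluster model the ESXi hosts themselves take on the worker node role):

# list all nodes in the cluster with their roles and addresses
kubectl get nodes -o wide
# show detailed information, including capacity and allocated resources, for a single node
kubectl describe node <node-name>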

To ensure containers are running in pods, an agent called the kubelet runs on each Kubernetes cluster node. With the supervisor cluster, the role of the kubelet is handled by a new ‘Spherelet’ running on each ESXi host. Pods are created on a network internal to the Kubernetes cluster, and individual pod IP addresses are ephemeral. A Service in Kubernetes allows a group of pods to be exposed behind a common IP address, helping define network routing and load balancing policies without having to understand the IP addressing of individual pods.
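As a hedged illustration (the deployment name, ports, and Service name are hypothetical), the pods created by a deployment can be exposed behind a single stable IP address with:

# expose the pods of a deployment behind a cluster-internal virtual IP
kubectl expose deployment demo-app --port=80 --target-port=8080 --name=demo-service
# show the Service and the IP address assigned to it
kubectl get service demo-service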

Another great feature of Kubernetes is namespaces. Namespaces are commonly used to provide multi-tenancy across applications or users, and to manage resource quotas (backed in this instance by vSphere Resource Pools). Kubernetes namespaces segment resources for large teams working on a single Kubernetes cluster. Resources can have the same name as long as they belong to different namespaces; think of namespaces as sub-domains and the Kubernetes cluster as the root domain they attach to. Multiple namespaces can exist within the supervisor cluster, with different storage policies assigned to them for persistent storage, etc.
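As a quick sketch of the multi-tenancy point (the namespace and deployment names below are invented), two teams can run resources with identical names in the same cluster as long as they live in different namespaces:

kubectl create namespace team-a
kubectl create namespace team-b
# the same deployment name can exist once per namespace without conflict
kubectl create deployment web --image=nginx:1.17 -n team-a
kubectl create deployment web --image=nginx:1.17 -n team-b
kubectl get deployments --all-namespaces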

Kubernetes can be accessed through a GUI known as the Kubernetes dashboard, or through a command-line tool called kubectl. Both query the Kubernetes API server to get or manage the state of various resources such as pods, deployments, and services. Labels assigned to pods can be used to look up pods belonging to the same application, tier, or service. With Project Pacific, developers use Kubernetes APIs to access the Software Defined Data Centre (SDDC) and ultimately consume Kubernetes clusters as a service, using the same application deployment tools they use today. This service is delivered by Infrastructure Operations teams using existing vSphere tools, with the flexibility of running Kubernetes workloads and Virtual Machine workloads side by side.
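For instance, assuming pods have been given hypothetical app and tier labels, kubectl can look them up with a label selector:

# list pods whose labels match both selectors
kubectl get pods -l app=demo-app,tier=frontend
# show all pods in every namespace along with their labels
kubectl get pods --all-namespaces --show-labels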

By applying application-focused management, Project Pacific allows application-level control over policies, quotas, and role-based access for developers. Service features provided by vSphere such as High Availability (HA), Distributed Resource Scheduler (DRS), and vMotion can be applied at the application level across Virtual Machines and containers. Unified visibility in vCenter for Kubernetes clusters, containers, and existing Virtual Machines provides a consistent view for developers and Infrastructure Operations alike.

The following resources provide further reading on Project Pacific for enabling Kubernetes on vSphere.


Introduction to VMware Tanzu Mission Control (Manage)

VMware Tanzu Mission Control brings together Kubernetes clusters providing operator consistency for deployment, configuration, security, and policy enforcement across multiple clouds, whilst maintaining developer independence and self-service.

VMware Tanzu Mission Control is a Software as a Service (SaaS) control plane offering that allows administrators to deploy, monitor, and manage all Kubernetes clusters from a single point of control. The beauty of this approach is that lifecycle management, access management, health and diagnostics, security and configuration policies, quota management, and backup or restore capabilities are all consolidated into a single toolset. Kubernetes clusters running on vSphere, VMware Enterprise or Essential PKS, public cloud (AWS, Microsoft Azure, Google Cloud Platform), and managed services or other implementations can all be attached to VMware Tanzu Mission Control. New Kubernetes clusters can also be deployed to all of these platforms from the Tanzu Mission Control interface.

For more information on VMware Tanzu Mission Control see the product page here, and Introducing VMware Tanzu Mission Control to Bring Order to Cluster Chaos. If you are attending VMworld Europe 2019, have a look through the VMware Tanzu sessions in the content catalog and also Explore Kubernetes at VMworld 2019. At the time of writing VMware Tanzu and Project Pacific are in tech preview; this post will be updated when more information is released. Please use the comments section below if you feel there are any key elements missing or not explained clearly. There are also some useful video tutorials available from Project Pacific at Tech Field Day Extra at VMworld 2019.

How to Consolidate vCenter External PSC with the vSphere Converge Tool

This post gives an overview of the vCenter Server converge process using the HTML5 vSphere Client. The converge functionality was added to the GUI with vSphere 6.7 U2 and enables consolidation of an external Platform Services Controller (PSC) deployment into the embedded deployment model. This was previously achieved using a CLI-based converge tool.

Following an upgrade of 4 existing vCenter Servers with external PSC nodes, I log into the vSphere Client. From the Menu drop-down I select Administration, then in the left-hand pane under Deployment I select System Configuration to view the starting topology.


You can view a VMware-produced video tutorial below, or the documentation here.

As the vCenter Server appliances in this environment do not have internet access, I need to mount the ISO I used for the vCenter upgrade; see here for more information. This step is not required if internet connectivity exists.

For each vCenter Server with external PSC I select Converge to Embedded.


Next I confirm the Single Sign-On (SSO) details and click Converge.


If I am logged into the vCenter Server being converged I will be kicked out while services are restarted.


Alternatively if I am logged into another vCenter Server in linked mode I can monitor progress.


Once all 4 vCenter Servers have been converged I check that each one is using the embedded PSC. SSH to each vCenter appliance and, from the shell, run:

/usr/lib/vmware-vmafd/bin/vmafd-cli get-ls-location --server-name localhost

The command should return the vCenter Server itself as the lookup service location, and not the external PSC node. Once you are happy there are no outstanding connections to the external PSC nodes, remove them by selecting each one and clicking Decommission PSC.


With the converge process now complete and the PSC nodes decommissioned, the topology is as desired with all vCenter Servers running embedded PSC.


At this point I needed to re-register any external appliances (such as NSX Manager) or third-party services that are pointing at the lookup service URL, or referencing the old external PSC nodes. I also cleaned up DNS as part of the decommission process.
