
Understand VMware Tanzu, Pacific, and Kubernetes for VMware Administrators

This post was last updated 26/10/2019 and provides an overview of VMware Tanzu and Project Pacific.

Peanut Butter & Jelly VMware and Kubernetes

There will be more apps deployed in the next 5 years than in the last 40 years (source: Introducing Project Pacific: Transforming vSphere into the App Platform of the Future). The VMware strategy of late has seen a significant shift towards cloud-agnostic software and the integration of cloud-native application development. In November 2018 VMware announced the acquisition of Heptio to help accelerate enterprise adoption of Kubernetes on-premise and across multi-cloud environments. In May and August 2019 VMware announced its intent to acquire Bitnami and Pivotal Software, following the successful launch of Pivotal Container Service (PKS), which was later re-branded as VMware Enterprise PKS.

To help better address application support complexities between development and operations teams, VMware have now announced VMware Tanzu:

“In Swahili, ’tanzu’ means the growing branch of a tree. In Japanese, ’tansu’ refers to a modular form of cabinetry. At VMware, Tanzu represents our growing portfolio of solutions to help you build, run and manage modern apps.”

VMware Tanzu is a portfolio of capabilities that empowers cloud-native development by enabling build, run, and manage operations across platforms. Using VMware Tanzu Mission Control, Kubernetes clusters can be built and managed from a single control point.

Another key announcement alongside VMware Tanzu was code-named Project Pacific, enabling IT operators and developers to build and run modern applications with VMware vSphere and native Kubernetes. Project Pacific is focused on re-architecting vSphere so that Kubernetes containers run alongside VMware Virtual Machines (VMs) in ESXi, enabling the development of portable cloud-native applications and micro-services, whilst protecting existing investments in products and skills. You can review the press release of all products in the VMware Tanzu portfolio here, and the split of build, run, manage products here.

Introduction to Kubernetes

Kubernetes is an open-source orchestration and management tool that provides a simple Application Programming Interface (API), exposing a set of capabilities for defining workloads and services. Kubernetes enables containers to run and operate in a production-ready environment at enterprise scale by managing and automating resource utilisation, failure handling, availability, configuration, scale, and desired state. Micro-services can be rapidly published, maintained, and updated.

Kubernetes manages containers, and containers package applications and their dependencies into a distributable image that can run almost anywhere, simplifying an application’s path to live. Kubernetes makes it easier to run applications across multiple cloud platforms, accelerates application development and deployment, and increases agility, flexibility, and the ability to adapt to change.
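As a simple illustration of defining a workload and its desired state through the Kubernetes API, the sketch below creates a Deployment and lets Kubernetes converge on it. The deployment name, labels, image, and replica count are purely illustrative, and it assumes kubectl is already configured against a cluster.

```sh
# Minimal sketch: declare a desired state of three web server replicas.
# Names, labels, and the image tag are illustrative.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
      - name: web
        image: nginx:1.17
        ports:
        - containerPort: 80
EOF

# Kubernetes maintains the desired state; if a pod fails it is replaced automatically.
kubectl get deployment demo-web
kubectl get pods -l app=demo-web
```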

For VMware administrators with little exposure to DevOps, the following high-level resources can help build a foundational understanding of Kubernetes, and of why VMware are making some of these key changes in architecture and strategy. You can try Kubernetes for yourself using the Kubernetes Academy by VMware, or a Kind Way to Learn Kubernetes.

Kubernetes for Executives: “Containers encapsulate an application in a form that’s portable and easy to deploy. Containers can run on any compatible system—in any cloud—without changes. Containers consume resources efficiently, enabling high density and utilization. Kubernetes makes it possible to deploy and run complex applications requiring multiple containers by clustering physical or virtual resources for application hosting. Kubernetes is extensible, self-healing, scales applications automatically, and is inherently multi-cloud.”

Introduction to Project Pacific (Run)

Kubernetes uses a cluster of nodes to distribute container instances. The master node forms the control plane, containing the API server and scheduler. Worker nodes provide the compute resources for running workloads (known as pods). VMware have re-designed vSphere to include a Kubernetes control plane for managing Kubernetes workloads on ESXi hosts, delivered by a supervisor cluster that uses the ESXi hosts as worker nodes, allowing pods to be deployed and run natively in the hypervisor, alongside traditional Virtual Machine workloads. This new functionality is provided by a container runtime built into ESXi called CRX. CRX pairs an optimised Linux kernel with the hypervisor and strips out much of the traditional heavyweight configuration of a Virtual Machine, enabling the binary image and executable code to be loaded and booted quickly. In combination with ESXi’s powerful scheduler, this container runtime is behind some of the performance benchmarks VMware have been publishing, including improvements even over bare metal.
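On any Kubernetes cluster, the split between control plane and worker nodes can be seen with a couple of standard kubectl commands; in a Project Pacific supervisor cluster the ESXi hosts appear as the worker nodes. The node name below is a placeholder for whatever your cluster returns.

```sh
# List the nodes in the cluster, their roles, and versions.
kubectl get nodes -o wide

# Inspect the capacity, allocatable resources, and running pods of a single node
# ('node-01' is a placeholder for one of the names returned above).
kubectl describe node node-01
```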

To ensure containers are running in pods, an agent called the kubelet runs on each Kubernetes cluster node. With the supervisor cluster the role of the kubelet is handled by a new ‘Spherelet’ running on each ESXi host. Pods are created on a network internal to the Kubernetes nodes, and individual pod IP addresses are ephemeral, so rather than addressing pods directly a Service is created. A Service in Kubernetes allows a group of pods to be exposed by a common IP address, helping define network routing and load balancing policies without having to understand the IP addressing of individual pods.
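As a rough sketch of how a Service works, the example below exposes the pods labelled app=demo-web (carrying on from the earlier illustrative Deployment) behind a single cluster IP, so consumers never need to track individual pod addresses.

```sh
# Minimal sketch: expose a group of pods behind one stable IP and DNS name.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: demo-web
spec:
  selector:
    app: demo-web
  ports:
  - port: 80
    targetPort: 80
EOF

# The Service IP stays the same even as the pods behind it come and go.
kubectl get service demo-web
kubectl get endpoints demo-web
```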

Another of the great features of Kubernetes is namespaces. Namespaces are commonly used to provide multi-tenancy across applications or users, and to manage resource quotas (backed in this instance by vSphere Resource Pools). Kubernetes namespaces segment resources for large teams working on a single Kubernetes cluster. Resources can have the same name as long as they belong to different namespaces; think of namespaces as sub-domains and the Kubernetes cluster as the root domain the namespace gets attached to. Multiple namespaces can exist within the supervisor cluster, each with different storage policies assigned to them for persistent storage, etc.
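Below is a hedged sketch of namespaces and quotas in practice; the namespace name and quota values are illustrative, and in a supervisor cluster the quota would be backed by a vSphere Resource Pool as described above.

```sh
# Create a namespace for a team and cap the CPU and memory it can request.
kubectl create namespace team-a

kubectl apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
EOF

# Resources with the same name can exist in other namespaces without clashing.
kubectl get resourcequota -n team-a
kubectl get pods -n team-a
```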

Kubernetes can be accessed through a GUI known as the Kubernetes dashboard, or through a command-line tool called kubectl. Both query the Kubernetes API server to get or manage the state of various resources like pods, deployments, and services. Labels assigned to pods can be used to look up pods belonging to the same application, tier, or service. With Project Pacific, developers use Kubernetes APIs to access the Software Defined Data Centre (SDDC) and ultimately consume Kubernetes clusters as a service, using the same application deployment tools they use currently. This service is delivered by Infrastructure Operations teams using existing vSphere tools, with the flexibility of running Kubernetes workloads and Virtual Machine workloads side by side.
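The following kubectl queries illustrate the label-based lookups described above; both the dashboard and kubectl are simply clients of the same API server. The label keys and values are illustrative.

```sh
# Find pods by label rather than by name or IP address.
kubectl get pods -l app=demo-web -o wide
kubectl get pods -l tier=frontend --all-namespaces

# Labels work across resource types, so an application's deployments and
# services can be listed together.
kubectl get deployments,services -l app=demo-web
```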

By applying application-focused management, Project Pacific allows application-level control over policies, quota, and role-based access for Developers. Service features provided by vSphere such as High Availability (HA), Distributed Resource Scheduler (DRS), and vMotion can be applied at the application level across Virtual Machines and containers. Unified visibility in vCenter for Kubernetes clusters, containers, and existing Virtual Machines provides a consistent view for Developers and Infrastructure Operations alike.

The following resources provide further reading on Project Pacific for enabling Kubernetes on vSphere.

Project_Pacific

Introduction to VMware Tanzu Mission Control (Manage)

VMware Tanzu Mission Control brings together Kubernetes clusters, providing operator consistency for deployment, configuration, security, and policy enforcement across multiple clouds, whilst maintaining developer independence and self-service.

VMware Tanzu Mission Control is a Software as a Service (SaaS) control plane offering, allowing administrators to deploy, monitor, and manage all Kubernetes clusters from a single point of control. The beauty of this approach is that lifecycle management, access management, health and diagnostics, security and configuration policies, quota management, and backup or restore capabilities are all consolidated into a single toolset. Kubernetes clusters running on vSphere, VMware Enterprise or Essential PKS, public cloud (AWS, Microsoft Azure, Google Cloud Platform), and managed services or other implementations can all be attached to VMware Tanzu Mission Control. New Kubernetes clusters can also be deployed to all of these platforms from the Tanzu Mission Control interface.

For more information on VMware Tanzu Mission Control see the product page here, and Introducing VMware Tanzu Mission Control to Bring Order to Cluster Chaos. If you are attending VMworld Europe 2019, have a look through the VMware Tanzu Sessions in the content catalog and also Explore Kubernetes at VMworld 2019. At the time of writing VMware Tanzu and Project Pacific are in tech preview; this post will be updated when more information is released. Please use the comments section below if you feel there are any key elements missing or not explained clearly. There are also some useful video tutorials available from Project Pacific at Tech Field Day Extra at VMworld 2019.

Veeam Backup Error: Out of the Vector Bound

When running a backup job using Veeam Backup & Replication v8 or v9, the job fails with: Error: Out of the vector bound. Record index: [0]. Vector Size: [1] Job finished with error. Running an active full produces the same result. In our case this issue was caused by corruption of the metadata file. This can occur when the metadata file is not properly closed and the chain breaks, potentially due to a file system filling up or a server failure.

To resolve the issue we start a new chain to re-create both the full data and metadata. This is done by cleanly deleting records about the backup job from the Veeam Backup & Replication console and configuration database, and deleting the backup files themselves from the destination storage. The job itself remains, so it does not need recreating.

  • First, disable the job: open the Veeam Backup & Replication client, ensure Backup & Replication is selected in the task pane on the left-hand side, and select Jobs. Right-click the failed job and click Disable.

veeamfix1

  • Next we need to remove the corrupted files. Still in the Backup & Replication task pane, select Backups. Right-click the failed job and click Delete from disk to remove the backup files and records.

veeamfix2

  • Now go back to the Jobs page and enable the job. Run an Active Full to create new data and metadata files.

veeamfix3

Physical to Virtual Machine Conversion Guide

VMware vCenter Converter enables physical and virtual machines to be converted into VMware virtual machines. The source can be a physical server, or a virtual machine on another platform such as Hyper-V. The client can also be used to convert a VMware Workstation or Fusion virtual machine to a vCenter infrastructure VM, or vice-versa. This post will walk through the physical to virtual (P2V) process, using VMware vCenter Converter to migrate the operating system, applications, and data over the network to the virtualisation platform.

converter

Requirements

  • There will be downtime during the switch-over; best practice indicates that any databases or applications, as well as anti-virus software, should be stopped during the conversion.
  • Most Windows and Linux operating systems are supported; you can view a full list of supported operating systems on page 19 of the vCenter Server Converter White Paper.
  • Communication across ports 22, 443, and 902 is required. If you are connecting the Converter client to a remote machine then ports 139, 445, and 9089 are also required (a quick connectivity check is sketched after this list).
  • If the source machine is Windows then ensure Windows Firewall does not block File and Printer Sharing.
  • Make sure the virtual infrastructure has everything required to run the physical machine as a virtual machine, such as the correct VLAN configuration if you intend to keep the same IP address, as well as backup, DR, monitoring, and any other third-party application compatibility.
  • The vCenter Converter itself does not require a license, but you will need a (free) VMware account to download it.
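If you want to confirm the required ports are reachable before installing the client, the rough sketch below uses netcat from a Linux or macOS machine. The hostnames are placeholders for your own vCenter Server, ESXi host, and source machine, and port 22 only applies when the source is a Linux machine.

```sh
# Placeholders: substitute your own vCenter Server, ESXi host, and source machine.
VCENTER=vcenter.example.com
ESXI=esxi01.example.com
SOURCE=physical01.example.com

# Conversion traffic to vCenter and the destination ESXi host.
for port in 443 902; do
  nc -zv -w 3 "$VCENTER" "$port"
  nc -zv -w 3 "$ESXI" "$port"
done

# Additional ports needed when converting a remote source machine.
for port in 139 445 9089; do
  nc -zv -w 3 "$SOURCE" "$port"
done

# SSH, required for Linux source machines only.
nc -zv -w 3 "$SOURCE" 22
```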

Client Install

First we install the vCenter Converter client on the physical server to be virtualised. This is a straightforward installer and can be run at any time; a reboot is not required. Click any of the thumbnails below to enlarge.

  • Download the latest version of the vCenter Converter from VMware Downloads and run the application.
  • Click Next to start the wizard.
  • Click Next to accept the patent agreement.
  • Agree to the license terms and click Next.
  • Accept the default installation directory and click Next.
  • Select Local Installation and click Next.
  • Review the customer experience program option and click Next.
  • Click Install to start the installation.
  • Once complete click Finish.

P2V Process

With the vCenter Converter client installed, the next step is to actually run the physical to virtual conversion. To ensure a clean conversion it is recommended to stop any databases or applications during this process; even if you cannot afford the downtime, be aware there will likely be some performance degradation while the conversion is running. You should also consider disabling any anti-virus program during the conversion.

Open the VMware vCenter Converter Standalone Client. Click Convert machine.

converter1

Change the source type to This local machine and click Next. You can also convert a remote Windows or Linux machine using the appropriate option.

converter2

Change the destination type to VMware Infrastructure virtual machine, which allows us to connect to a vCenter Server. Enter the vCenter name and credentials, click Next.

converter3

Enter a unique name for the virtual machine and the organisational folder where it will be stored, click Next.

converter4

Select the host or cluster for the virtual machine and the storage to use, click Next.

Review the options page; there are a number of useful settings here. For example, you can set the physical machine to shut down once complete and the virtual machine to power on, for an automatic switch-over. You can also set the correct VLAN for the vNIC to use, exclude drives from the P2V task, or change virtual hardware settings, such as increasing resources. Change any desired settings and click Next.

converter5

On the summary page click Finish to begin the P2V conversion. The P2V job will be added to the tasks list with an estimated completion time; this varies depending on the amount of data to migrate and the speed of the network.

Post P2V

Once complete, the virtual machine will be powered off by default, unless you specified otherwise on the options page. If required, manually shut down the physical machine and power on the new virtual machine. If you power on the virtual machine while the physical server is still connected to the network, make sure you have disconnected the vNIC from the virtual machine to avoid an IP address or host name conflict.

There are a number of ‘tidy up’ jobs we need to carry out on the new VM after the P2V process.

  • Install VMware Tools.
  • From the Edit Settings option of the virtual machine remove any unnecessary virtual hardware that may have been added because a hardware equivalent was detected on the physical machine, but is not used.
  • Remove any physical server, or hardware, management software such as RAID management, drivers, etc.
  • Remove any ghosted entries from Device Manager. To see non-present devices, open a command prompt as administrator, run set devmgr_show_nonpresent_devices=1, and launch Device Manager from the same prompt with devmgmt.msc. In Device Manager click View, Show Hidden Devices, then uninstall any greyed out devices.
  • Remove any system restore points from before the conversion.
  • Remove the VMware vCenter Converter client.

Resources – for additional information or troubleshooting steps see the VMware vCenter Converter User Guide. Calculate cost savings for converting physical servers to virtual using the VMware ROI TCO Calculator.