
Oracle Cloud Infrastructure Demo

This opening post will give an overview and demo of Oracle Cloud Infrastructure (OCI). Oracle Cloud offers fast and scalable compute and storage resources, combined with enterprise-grade private virtual cloud networks. Oracle Cloud offers a range of flexible operating models including traditional Virtual Machine (VM) instances, container infrastructure, databases on demand, and dedicated hardware through bare metal servers and Bring Your Own Hypervisor (BYOH).

You can sign up for a free trial account with $300 credit here. When you sign up for an Oracle account you create a tenancy. Resources inside a tenancy can be organised and isolated using compartments; separating projects, billing, and access policies are some example use cases.
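If you later script against your tenancy, compartments are also visible through the OCI SDKs. A minimal sketch with the OCI Python SDK, assuming a configured ~/.oci/config credentials file (nothing here is specific to this demo):

```python
import oci

# Load credentials from the default config file (~/.oci/config)
config = oci.config.from_file()
identity = oci.identity.IdentityClient(config)

# List the compartments directly under the root of the tenancy
compartments = identity.list_compartments(config["tenancy"]).data
for compartment in compartments:
    print(compartment.name, compartment.lifecycle_state)
```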

Oracle Cloud Infrastructure is deployed in regions. Regions are localised geographical areas, each containing at least 3 Availability Domains. An Availability Domain is a fault-independent data centre with power, thermal, and network isolation. A Virtual Cloud Network (VCN) is deployed per region across multiple Availability Domains, thereby allowing us to build high availability and fault tolerance into our cloud design. Virtual Cloud Networks are software-defined versions of traditional on-premises networks running in the cloud, containing subnets, route tables, and internet gateways. VCNs can be connected together using VCN Peering, and connected to a private network using FastConnect or VPN with the use of a Dynamic Routing Gateway (DRG).

Product Page | Getting Started | Documentation | Sizing and Pricing | Architecture

Product Demo

The demo below creates a VCN and VM instances in the second generation of Oracle Cloud for lab purposes. Before deploying your own environment you should review all the above linked documentation and plan your cloud strategy including IP addressing, DNS, authentication, access control, billing, governance, network connectivity and security.

Log into the Oracle Cloud portal here; the home dashboard is displayed.

Oracle_Dashboard

You’ll need a subscription to get into the second generation Oracle Cloud Infrastructure portal. Under Compute select Open Service Console.

Oracle_Cloud_Dashboard

The region can be selected from the drop-down location pin icon in the top right corner; in this example the region is set to eu-frankfurt-1. Select Manage Regions to subscribe to new regions if required. Use the top left Menu button to display the various options. The first step in any deployment is to build the VCN, so select Networking, then Virtual Cloud Networks.

Oracle_Cloud_Dashboard

Make sure you are in the correct compartment in the left hand column and click Create Virtual Cloud Network. Select the compartment and enter a name. In this example I am going to create the Virtual Cloud Network only, which will allow me to manually define resources such as the CIDR block, internet gateway, subnets, and routes. The DNS label is auto-populated.

Oracle_VCN_1
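If you prefer to script this step, the same VCN can be created with the OCI Python SDK; a minimal sketch, assuming a configured credentials file, with the compartment OCID as a placeholder:

```python
import oci

config = oci.config.from_file()
network = oci.core.VirtualNetworkClient(config)

compartment_id = "ocid1.compartment.oc1..example"  # placeholder OCID

vcn = network.create_vcn(
    oci.core.models.CreateVcnDetails(
        cidr_block="10.0.0.0/16",
        compartment_id=compartment_id,
        display_name="lab-vcn",
        dns_label="labvcn",
    )
).data
print(vcn.id, vcn.lifecycle_state)  # PROVISIONING, then AVAILABLE
```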

The newly created VCN is displayed; all objects are orange during provisioning and green when available.

Oracle_VCN_3

Once the VCN is available click the VCN name to display more options.

Oracle_VCN_4

Use the options in the Resources menu to view and create resources assigned to the VCN. In this example we'll first create the Internet Gateway.

Oracle_Cloud_IG_1

Next we can create a subnet; in this example I have created a public subnet that I will later attach a VM instance to.

Oracle_Cloud_Subnet_1

Oracle_Cloud_Subnet_2

We also need to add a route table, or add new routes to the default route table.

Oracle_Cloud_Route
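The internet gateway, subnet, and route we just clicked through can also be created programmatically; a minimal sketch continuing from the VCN example above (all OCIDs are placeholders, and the subnet is created as a regional subnet by omitting an availability domain):

```python
import oci

config = oci.config.from_file()
network = oci.core.VirtualNetworkClient(config)

compartment_id = "ocid1.compartment.oc1..example"   # placeholder
vcn_id = "ocid1.vcn.oc1.eu-frankfurt-1.example"     # placeholder

# Internet gateway for the VCN
ig = network.create_internet_gateway(
    oci.core.models.CreateInternetGatewayDetails(
        compartment_id=compartment_id,
        vcn_id=vcn_id,
        is_enabled=True,
        display_name="lab-ig",
    )
).data

# Public subnet carved out of the VCN CIDR block
subnet = network.create_subnet(
    oci.core.models.CreateSubnetDetails(
        compartment_id=compartment_id,
        vcn_id=vcn_id,
        cidr_block="10.0.1.0/24",
        display_name="public-subnet",
    )
).data

# Default route out through the internet gateway
network.update_route_table(
    subnet.route_table_id,
    oci.core.models.UpdateRouteTableDetails(
        route_rules=[
            oci.core.models.RouteRule(
                destination="0.0.0.0/0",
                destination_type="CIDR_BLOCK",
                network_entity_id=ig.id,
            )
        ]
    ),
)
```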

The final step to allow connectivity in and out of our new subnet(s) is to define ingress and egress rules using security lists. Again, you can either add rules to the default security list or split out environments into additional security lists.

Oracle_Cloud_Security_1

Define the source and destination types and port ranges to allow access. In this example we are allowing inbound port 22 to test SSH connectivity to a VM instance.

Oracle_Cloud_Security_2
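The equivalent ingress rule can be appended with the OCI Python SDK; a minimal sketch, with the security list OCID as a placeholder. Note that update_security_list replaces the entire rule set, so the existing rules are fetched first:

```python
import oci

config = oci.config.from_file()
network = oci.core.VirtualNetworkClient(config)

seclist_id = "ocid1.securitylist.oc1..example"  # placeholder OCID

# Fetch the current rules so we append rather than overwrite
seclist = network.get_security_list(seclist_id).data
ingress = seclist.ingress_security_rules
ingress.append(
    oci.core.models.IngressSecurityRule(
        protocol="6",  # TCP
        source="0.0.0.0/0",
        tcp_options=oci.core.models.TcpOptions(
            destination_port_range=oci.core.models.PortRange(min=22, max=22)
        ),
    )
)

network.update_security_list(
    seclist_id,
    oci.core.models.UpdateSecurityListDetails(ingress_security_rules=ingress),
)
```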

Now that we have a fully functioning software-defined network we can deploy a VM instance. From the left hand Menu drop-down select Compute, then Instances. Use the Create Instance wizard to deploy a virtual machine or bare metal machine.

Oracle_Cloud_Instance

In this example I have deployed a virtual machine using the Oracle Linux 7.5 image and VM.Standard2.1 shape (1 OCPU, 15 GB RAM). The machine is deployed to Availability Domain 1 in the Frankfurt region and has been assigned the public subnet in the VCN we created earlier. I used PuTTYgen to generate a public and private key pair for SSH access.

Oracle_Cloud_Instance_2
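The same instance can be launched from code; a minimal sketch with the OCI Python SDK, assuming the subnet created earlier and an Oracle Linux 7.5 image OCID looked up beforehand (all OCIDs and the availability domain name are placeholders):

```python
import oci

config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

# Public key generated earlier with PuTTYgen (or ssh-keygen)
with open("id_rsa.pub") as f:
    ssh_key = f.read().strip()

instance = compute.launch_instance(
    oci.core.models.LaunchInstanceDetails(
        availability_domain="<availability-domain-name>",         # placeholder
        compartment_id="ocid1.compartment.oc1..example",          # placeholder
        display_name="lab-vm",
        shape="VM.Standard2.1",  # 1 OCPU, 15 GB RAM
        image_id="ocid1.image.oc1.eu-frankfurt-1.example",        # placeholder
        create_vnic_details=oci.core.models.CreateVnicDetails(
            subnet_id="ocid1.subnet.oc1.eu-frankfurt-1.example"   # placeholder
        ),
        metadata={"ssh_authorized_keys": ssh_key},
    )
).data
print(instance.id, instance.lifecycle_state)
```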

Once deployed the instance turns green.

Oracle_Cloud_Instance_3

Click the instance name to view further details or terminate the instance; when removing it you have the option to keep or delete the attached boot volume.

Oracle_Cloud_Instance_4

Additional block volumes can be added to instances. Block volumes can be created under Block Storage, Block Volumes.

Oracle_Cloud_Block_2

For object-based storage we can create buckets under Object Storage, Object Storage.

Oracle_Cloud_Bucket_1

Buckets can be used to store objects with public or private visibility; pre-authenticated requests can also be added for short-term access.

Oracle_Cloud_Bucket_2

Oracle_Cloud_Bucket_3
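Pre-authenticated requests can also be created programmatically; a minimal sketch with the OCI Python SDK, assuming an existing bucket and a 24-hour expiry (the bucket and object names are placeholders):

```python
from datetime import datetime, timedelta

import oci

config = oci.config.from_file()
object_storage = oci.object_storage.ObjectStorageClient(config)

namespace = object_storage.get_namespace().data
bucket = "lab-bucket"  # placeholder bucket name

par = object_storage.create_preauthenticated_request(
    namespace,
    bucket,
    oci.object_storage.models.CreatePreauthenticatedRequestDetails(
        name="short-term-read",
        object_name="example.txt",  # placeholder object
        access_type="ObjectRead",
        time_expires=datetime.utcnow() + timedelta(hours=24),
    ),
).data

# The access URI is relative to the Object Storage endpoint
print(par.access_uri)
```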

VMware Cloud on AWS Demo

This opening post will give an overview and demo of VMware Cloud on AWS. VMware Cloud on AWS provides on-demand, scalable cloud environments based on existing vSphere Software-Defined Data Center (SDDC) products. VMware and AWS have worked together to optimise running vSphere, vSAN and NSX directly on dedicated, elastic, bare-metal AWS infrastructure without the need for nested virtualization. An SDDC cloud can be deployed in a few hours and then capacity scaled up and down within minutes.

Key Benefits

There are a number of benefits and use cases for extending on-premises data centers to the cloud with VMware Cloud on AWS:

  • VMware maintains software updates, emergency software patches, and auto-remediation of hardware failures
  • Increasing capacity in the cloud is generally quicker, easier, and sometimes more cost effective than increasing physical capacity in the data center
  • Scale capacity to protect services when met with temporary or unplanned demand
  • Improve business continuity by using the cloud for Disaster Recovery (DR) with VMware Site Recovery
  • Consistent operating environments allow for simplified cloud migrations with minimal re-training for system administrators
  • Transfer your existing operating system and third party licensing to the cloud and make use of existing support contracts with VMware
  • Expand footprint into additional geographical locations without needing to provision new data centers

Key Details

The following links contain enough reading to plan your VMware Cloud on AWS implementation and cloud migration strategy; the points below should be enough to get you started.

VMware Cloud on AWS: Product Documentation | Technical Overview | VMware Product Page | VMware FAQ | AWS Product Page | AWS FAQ | Roadmap | Case Study

Try first @ VMware Cloud on AWS – Getting Started Hands-on Lab

  • Each SDDC supports 4 to 32 hosts, each with 512 GB of memory and two 2.3 GHz CPUs (custom-built Intel Xeon Processor E5-2686 v4 package) with 18 cores per socket for a total of 36 cores per host
  • Each SDDC cluster uses an all-flash vSAN configuration utilising NVMe storage
  • An initial 4 host cluster provides roughly 21 TB usable capacity
  • Each ESXi host is connected to an Amazon Virtual Private Cloud (VPC) through Elastic Networking Adapter (ENA), which supports throughput up to 25 Gbps
  • Hybrid Cloud Extension allows stretched subnets between on-premises and cloud data centers for live migration of virtual machines
  • Hybrid Linked Mode allows administrators to connect vCenter Server running in VMware Cloud on AWS to an on-premises vCenter server to view both cloud and on-premises resources from a single interface
  • VMware Cloud on AWS complies with ISO 27001, ISO 27017, ISO 27018, SOC 1, SOC 2, SOC 3, HIPAA, and GDPR
  • VMware Cloud on AWS is managed from a web-based console or RESTful API (see the sketch after this list)
  • At the time of writing VMware Cloud on AWS is available in the AWS Europe (Frankfurt and London), AWS US East (N. Virginia) and AWS US West (Oregon) Regions
  • Basic pricing before discount can be calculated here
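As noted in the list above, everything in the console is also exposed over the RESTful API. A hedged sketch listing the SDDCs in an organisation, based on the public VMC API at the time of writing (the refresh token and org ID are placeholders):

```python
import requests

# CSP endpoint exchanges an API refresh token for an access token;
# treat the exact paths as assumptions subject to change
CSP = "https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize"
VMC = "https://vmc.vmware.com/vmc/api"

refresh_token = "<your-api-token>"  # placeholder
org_id = "<your-org-id>"            # placeholder

auth = requests.post(CSP, params={"refresh_token": refresh_token})
auth.raise_for_status()
token = auth.json()["access_token"]

# List SDDCs in the organisation
sddcs = requests.get(
    f"{VMC}/orgs/{org_id}/sddcs",
    headers={"csp-auth-token": token},
)
sddcs.raise_for_status()
for sddc in sddcs.json():
    print(sddc["name"], sddc["sddc_state"])
```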

VMware_AWS

Product Demo

The demo below creates an SDDC in the cloud for lab purposes. Before deploying your own environment you should review all the above linked documentation and do your own research to plan your cloud strategy, as well as the following:

  • Identify or create an AWS account and ensure that all technical personnel have access to the account
  • Identify a VPC and subnet for cross-linking the AWS account to the SDDC
  • Allocate IP ranges for the SDDC, and determine a DNS strategy
  • Identify the authentication model for the SDDC
  • Plan connectivity to the SDDC
  • Develop a network security policy for the SDDC

Browse to the VMware Cloud Services portal (https://console.cloud.vmware.com) and log in using your VMware ID. At the time of writing, to access VMware Cloud on AWS you need to be invited, or you can register for a 30-day single host trial here.

VMware_Cloud

Select VMware Cloud on AWS. If you have not used the service before you will be prompted to create a new organisation. Enter a name for your new organisation, accept the terms of service, and click Continue.

AWS_1

Add a credit card to be billed if you use the service. If you are using one of the free or trial methods outlined above you will not be billed.

AWS_2

After you have created the organisation and added payment information you will be sent to the VMware Cloud on AWS dashboard. The first step is to create our SDDC in the cloud; click Create SDDC.

Billing: annual subscriptions are listed under the Subscriptions tab. You can see other billing information from the drop-down menu next to your organisation name: select Organisation Settings, View Organisation. From here you have services, identity and access management, billing and subscriptions, and support options.

AWS_3

Select a region and deployment model for the SDDC, then enter a name and the number of hosts if you are not using the single host deployment. Click Next.

AWS_4

Follow the instructions to connect an AWS account and assign the relevant capabilities.

AWS_5

Once the connection is successfully established click Next.

AWS_7

Select the VPC and subnet to use then click Next.

AWS_8

Specify a private subnet range for the management subnet or leave blank to use default addressing. As mentioned above ensure you have planned accordingly and are not using any ranges that will conflict with other networks you may connect in the future. Click Deploy SDDC.

AWS_9

The SDDC will now be deployed; it takes around 2 hours to provision the ESXi hosts and all management components.

AWS_10
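If you are scripting the deployment through the RESTful API instead of watching the console, the SDDC state can be polled until provisioning completes; a rough sketch reusing the token from the earlier listing example (the state names are assumptions based on the public VMC API):

```python
import time

import requests

VMC = "https://vmc.vmware.com/vmc/api"
org_id = "<your-org-id>"      # placeholder
sddc_id = "<your-sddc-id>"    # placeholder
token = "<csp-access-token>"  # acquired as in the earlier sketch

# Poll every five minutes until the SDDC leaves the DEPLOYING state
while True:
    r = requests.get(
        f"{VMC}/orgs/{org_id}/sddcs/{sddc_id}",
        headers={"csp-auth-token": token},
    )
    r.raise_for_status()
    state = r.json()["sddc_state"]
    print("SDDC state:", state)
    if state != "DEPLOYING":
        break
    time.sleep(300)
```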

Once the deployment is complete the dashboard will show the new SDDC and assigned resources. Click View Details (you can toggle the web portal theme using the Dark/Light options in the top right hand corner).

AWS_14

From either the SDDC Summary tab or back on the SDDC dashboard you can seamlessly add additional hosts or clusters at any time.

AWS_15

If needed the chat bubble in the bottom right hand corner of the screen will take you through to support.

AWS_Support

The Network tab shows the network topology and is where you can configure firewall rules, NAT rules, VPN, Direct Connect, etc.

AWS_12

To access the vCenter Server through the vSphere client the port needs to be opened; a VPN can also be used. Under Management Gateway select Firewall Rules, then click Add Rule. Configure the rule to allow access to the vCenter Server on port 443 and click Save.

AWS_11

Click Open vCenter from either the Summary or Network tab; if access is in place you are given the cloudadmin@vmc.local credentials to open vCenter. Active Directory can also be configured as an identity source later on.

Once you are logged into the vSphere client you will see the familiar vSphere layout.

vCenter_AWS

It is also possible to see your on-premises vCenter Server(s) in the same pane of glass using Hybrid Linked Mode; click here for more information.

Back in the VMware Cloud on AWS portal the Add Ons tab features Site Recovery and Hybrid Cloud Extension for protecting and migrating workloads to your SDDC in the cloud.

AWS_16

You can delete an SDDC from the Actions drop-down menu in either the SDDC Summary tab or the SDDC dashboard. Once an SDDC is deleted all workloads, data, and interfaces are destroyed and any public IP addresses released.

AWS_17

VMware vRealize Business for Cloud Install

VMware vRealize Business for Cloud provides automated cost analysis and consumption metering, allowing administrators to make workload placement decisions between private and public clouds based on cost and available services. Furthermore, infrastructure stakeholders have full visibility of virtual machine provisioning costs and are able to accurately manage capital expenditure and operating expenditure. For more information see the vRealize Business product page; you can try vRealize Business for Cloud using the Hands on Labs available here.

This post will walk through the installation of vRealize Business for Cloud 7.3; we'll be provisioning to a vSphere environment running vRealize Automation 7.3. Each vRealize Business instance scales up to 20,000 virtual machines and 10 vCenter Servers, and remote data collectors can be deployed to distributed geographical sites. vRealize Business is deployed in OVA format as a virtual appliance; you should ensure this appliance is backed up appropriately. There is no built-in HA or DR functionality within vRealize Business, but you can take advantage of VMware components such as High Availability, Fault Tolerance, or Site Recovery Manager. Logs can be output to a syslog server such as vRealize Log Insight.

vRB_Launchpad

Requirements

  • vRealize Business for Cloud must be deployed to an ESXi host, and can be used to manage vCenter Server, vCloud Director, vCloud Air, vRealize Automation, and vRealize Operations Manager.
  • vRB 7.3 is compatible with vCenter and ESXi versions 5.5 through to 6.5, and vRealize Automation versions 6.2.4 through to 7.3 (latest versions at the time of writing).
  • For compatibility with other VMware products see the VMware Product Interoperability Matrix.
  • The vRB appliance requires 8 GB memory, 4 vCPU and 50 GB disk (thick provisioned).
  • If you use any remote data collectors the memory on these appliances can be reduced to 2 GB.
  • vRealize Business for Cloud is licensed as part of the vRealize suite, per CPU, or in packs of 25 OSI.
  • There are two available editions: standard and advanced. Features such as public cloud costing require the advanced version; for more information see the feature comparison section of the product page.
  • The web UI can be accessed from IE 10 or later, Chrome 36.x or later, and Firefox 31.x or later.
  • Time synchronization and name resolution should be in place across all VMware components.
  • For a full list of pre-requisites including port requirements see here.

Before beginning, review the VMware install documentation and release notes for vRealize Business for Cloud.

Installing vRB

Download the VMware vRealize Business for Cloud 7.3 OVA file here. Log into the vSphere web client and right-click the datastore, cluster, or host where you want to deploy the virtual appliance. Select Deploy OVF Template and browse to the location of the OVA file.

  • Enter a name for the virtual appliance and select the deployment location, click Next.
  • Confirm the compute resource and click Next.
  • Review the details of the OVF template and click Next.
  • Accept the end user license agreement and click Next.
  • Select the storage for the virtual appliance, ensure the virtual disk format is set to Thick provision eager zeroed, and click Next.
  • Select the network to attach to the virtual appliance and click Next.
  • Set the Currency; note that at this time the currency cannot be changed after deployment. Ensure Enable Server is checked, and select or de-select SSH and the customer experience improvement program based on your own preferences. Configure a Root user password for the virtual appliance and enter the network settings for the virtual appliance in the Networking Properties fields.
  • Click Next and review the summary page. Click Finish to deploy the virtual appliance.

Once the virtual appliance has been deployed and powered on open a web browser to https://vRB:5480, where vRB is the IP address or FQDN of the appliance. Log in with the root account configured during setup.

vRB_Mgmt

Verify the settings under Administration: Time Settings and Network. At this stage the appliance is ready to be registered with a cloud solution. In this example I will be using vRealize Automation; for other products or further information see the install guide referenced above. Return to the Registration tab and ensure vRA is selected.

vRB_Register

Enter the host name or IP address of the vRA appliance or load balancer. Enter the name of the vRA default tenant and the default tenant administrator username and password. Select Accept vRealize Automation certificate and click Register.

Accessing vRB

vRealize Business for Cloud can be integrated into vRealize Automation, or you can enable stand-alone access. To access vRB after integrating with vRA log into the vRA portal. First open the Administration tab, select Directory Users and Computers, search for a user or group and assign the relevant business management roles. A user with a business management role has access to the Business Management tab in vRA.

vRB_Roles

Optional: to enable stand-alone access first enable SSH from the Administration tab. Use a client such as PuTTY to open an SSH connection to the virtual appliance and log in with the root account. Enter cd /usr/ITFM-Cloud/va-tools/bin to change directory, then enter sh manage-local-user.sh and select the operation, in this case 5 to enable local authentication.

ssh

If you want to create new local users use option 1 and enter the username and password; when prompted for permissions, VCBM_ALL provides administrator access and VCBM_VIEW read-only. You can also log in to the web UI with the root account, although it would be better practice to create a separate account.

Disable SSH from the Administration tab if required. Wait a few minutes for the services to restart and then browse to https://IP/itfm-cloud/login.html, where IP is the IP address of your appliance. If you try to access this URL without enabling stand-alone access you will receive an HTTP Status 401 – Authentication required error message.

vRB Configuration

We will continue with the configuration in the vRA portal, open the Administration tab and click Business Management.

vRB_Connections

Expand License Information, enter a license key and click Save. Expand Manage Private Cloud Connections and configure the required connections. In this example I have added multiple vCenter Server endpoints. Open the Business Management tab and the Launchpad will load.

vRB_Launchpad

Select Expenses, Private Cloud (vSphere) and click Edit Expenses. At this stage you will need the figures associated with hardware, storage, and licensing for the environment. You can also add costs for maintenance, labour, network, facilities, and any other additional costs.

vRB_Expenses_vSphere

Once vRB is populated with the new infrastructure costs, utilisation and projected pricing will start to be updated. Consumption showback, what-if analysis, and public cloud comparisons can all be accessed from the navigation menu on the left hand side. For further guidance on getting the most out of vRB see the vRealize Business for Cloud User Guide.

vRB_Operational

vRealize Operations 6.4 Install Guide

The vRealize product suite is a complete, enterprise, cloud management and automation platform for private, public, and hybrid clouds. Specifically vRealize Operations Manager provides intelligent operations management across heterogeneous physical, virtual, and cloud environments from a wide range of vendors. vRealize Operations Manager is able to deliver proactive and automated performance improvements by implementing resource reclamation, configuration standardisations, workload placement, planning, and forecasting techniques. By leveraging vRealize Operations Manager users can protect their environment from outages with preventative and predictive analytics and monitoring across the estate; utilising management packs to unify operations management. The image below is taken from the vRealize Operations Manager datasheet.

vro

vRealize Operations Manager can be deployed as a single node cluster, or a multiple node cluster. In single node cluster environments the master node is deployed with adapters installed which collect data and perform analysis. For larger environments additional data nodes can be added to scale out the solution, these are known as multiple node clusters. In a multiple node cluster the master node is responsible for the management of all other nodes. Data nodes handle data collection and analysis. High availability can be achieved by converting a data node into a replica of the master node. For distributed environments remote collector nodes are deployed to gather inventory objects and navigate firewalls in remote locations. These nodes do not store data or perform analytics; you can read more about remote collector nodes here. In this post we will deploy a single node cluster for small environments, proof of concept, test, or lab purposes, and link it to a vCenter Server instance. There will also be references to larger deployments and scaling out the application throughout the guide. If you have already deployed your vRealize cluster and want to add additional nodes or configure High Availability click here.

Licensing is split into three editions: standard, advanced, and enterprise. To view the full feature list of the different editions see the vRealize Operations page. There are a number of VMware product suites bundling vRealize Operations, or it can be purchased standalone. Licensing is allocated in portable license units (vCloud Suite and vRealize Suite only), per processor with unlimited VMs, or in packs of 25 VMs (or OS instances).

Design Considerations

  • Additional data nodes can be added at any time using the Expand an Existing Installation option.
  • When scaling out the cluster by 25% or more the cluster should be restarted to optimise performance.
  • The master node must be online before any other nodes are brought online (except for when adding nodes at first setup of the cluster).
  • When adding additional data nodes keep in mind the following:
    • All nodes must be running the same version
    • All nodes must use the same deployment type, i.e. virtual appliance, Windows, or Linux.
    • All nodes must be sized the same in terms of CPU, memory, and disk.
    • Nodes can be in different vSphere clusters, but must be in the same physical location and subnet.
    • Time must be synchronised across all nodes.
  • These rules also apply to replica nodes. Click here to see a full list of multiple node cluster requirements.
  • Remote collector nodes can be deployed to remote locations to gather objects for monitoring. These nodes do not store data or perform any analytics but connect remote data sources to the analytics cluster whilst reducing bandwidth and providing firewall navigation. Read more about remote collector nodes here.
  • When designing a larger vROps environment check the Environment Complexity guide to determine if you should engage VMware Professional Services. You should also review the related VMware sizing and deployment documentation.

Requirements

  • The vRealize Operations Manager virtual appliance can be deployed to hosts running ESXi 5.1 U3 or later, and requires vCenter Server 5.1 U3 or later (it is recommended that vSphere 5.5 or later is used).
  • The virtual appliance is the preferred deployment method; Windows and Linux installers are also available, however the Windows installer will no longer be offered after v6.4, and end of life for the Linux installer is also imminent.
  • A static IP address must be used for each node (to change the IP after deployment see this KB).
  • Review the list of Network Ports used by vRealize Operations Manager.
  • The following table is from the vRealize Operations Manager Sizing Guide and lists the hardware requirements, latency, and configuration maximums.

sizing

Installation

Download vRealize Operations Manager here, in virtual appliance, Windows, or Linux formats. Try for free with hands-on labs or a 60-day trial here.

In this example we are going to deploy as an appliance. Navigate to the vSphere web client home page, click vRealize Operations Manager and select Deploy vRealize Operations Manager.

vro1

The OVF template wizard will open. Browse to the location of the OVA file we downloaded earlier and click Next.

vro2

Enter a name for the virtual appliance, and select a location. Click Next.

vro3

Select the host or cluster compute resources for the virtual appliance and click Next.

vro4

Review the details of the OVA, click Next.

vro5

Accept the EULA and click Next.

vro6

Select the configuration size based on the considerations listed above, then click Next.

vra7

Select the storage for the virtual appliance, click Next.

vra8

Select the network for the virtual appliance, click Next.

vra9

Configure the virtual appliance network settings, click Next.

vra10

Click Finish on the final screen to begin deploying the virtual appliance.

vra11

Setup

Once the virtual appliance has been deployed and is powered on, open a web browser to the FQDN or IP address configured during deployment. Select New Installation.

install1

Click Next to begin the setup wizard.

install2

Configure a password for the admin account and click Next.

install3

On the certificate page select either the default certificates or custom. For assistance with adding custom certificates click here.

install4

Enter the host name for the master node and an NTP server, click Next.

install5

Click Finish.

install6

If required you can add additional data nodes before starting the cluster, or add them at a later date. See the Design Considerations section of this post before scaling out. To add additional data nodes or configure High Availability follow the steps at vRealize Operations High Availability before starting the cluster. Alternatively, you can start the cluster as a single node cluster and add data nodes or High Availability at a later date.

Since we are deploying a single node cluster we will now click Start vRealize Operations Manager. Depending on the size of the cluster it may take 10-30 minutes to fully start up.

install7

Confirm that the cluster has adequate nodes for the environment and click Yes to start up the application.

install8

After the cluster has started you will be redirected to the user interface. Log in with the admin details configured earlier.

install9

The configuration wizard will automatically start, click Next.

install10

Accept the EULA and click Next.

install11

Enter the license key or use the 60 day product evaluation. Click Next.

install12

Select whether or not to join the VMware Customer Experience Improvement Program and click Next.

install13

Click Finish.

install14

The vRealize Operations Manager dashboard will be loaded. The installation process is now complete. The admin console can be accessed by browsing to https://<IP or FQDN>/admin, where <IP or FQDN> is the IP address or FQDN of your vRealize Operations Manager appliance or server.

install15

To add additional data nodes or configure High Availability see the vRealize Operations High Availability post.

Post Installation

After first setup we need to secure the console by setting a root password. Browse to the vROps appliance in vSphere and open the console. Press ALT + F1 and log in as root. You will be prompted to create a root password. All other work in this post is carried out using the vRealize Operations web interface.

The vRealize Operations web interface can be accessed by browsing to the IP address or FQDN of any node in the vRealize Operations management cluster (master node or replica node). During the installation process the admin interface is presented; after installation the IP address or FQDN resolves to the user interface. To access the admin interface browse to https://<IP or FQDN>/admin, where <IP or FQDN> is the IP address or FQDN of either node in the management cluster. For supported browsers see the vRealize Operations Manager 6.4 Release Notes.

The next step is to configure the vCenter Adapter to collect and analyse data. Select Administration from the left hand navigation pane. From the Solutions menu select VMware vSphere and click the Configure icon.

config1

Enter the vCenter Server details and credentials with administrator access.

config2

Click Test Connection to validate connectivity to the vCenter Server.

config3

Expand Advanced Settings and review the default settings; these can be changed if required. Click Define Monitoring Goals and review the default policy; again this can be changed to suit your environment.

config4

When you're ready click Save Settings and Close. The vCenter adapter will now begin collecting data. Collection cycles begin every 5 minutes; depending on the size of your environment the initial collection may take more than one cycle.

config5

Once data has been collected from the vCenter Server go back to the Home page and browse the different tabs and dashboards.

dashboard
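The same inventory is also exposed through the vRealize Operations Suite API, which can be useful for reporting; a hedged sketch below, assuming the 6.x REST endpoints and a local account (the appliance FQDN and credentials are placeholders):

```python
import requests

VROPS = "https://vrops.lab.local"  # placeholder appliance FQDN

# Acquire a session token using a local account
auth = requests.post(
    f"{VROPS}/suite-api/api/auth/token/acquire",
    json={"username": "admin", "password": "<password>"},
    headers={"Accept": "application/json"},
    verify=False,  # lab only -- the appliance uses a self-signed cert
)
auth.raise_for_status()
token = auth.json()["token"]

# List a page of resources known to the analytics cluster
resources = requests.get(
    f"{VROPS}/suite-api/api/resources",
    headers={
        "Accept": "application/json",
        "Authorization": f"vRealizeOpsToken {token}",
    },
    verify=False,
)
resources.raise_for_status()
for res in resources.json()["resourceList"]:
    print(res["resourceKey"]["name"])
```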

Customise your vRealize Operations Manager instance to suit your environment using the VMware documentation.

Windows 2016 Containers

Containers are portable operating environments which typically utilise the same kernel whilst isolating applications. Software developers use containers to build, ship, and run applications. To the application the container gives the illusion of a totally isolated and independent operating system, in much the same way that a virtual machine doesn’t know it shares compute with other virtual machines; applications within containers are unaware they share a base operating system with other containers.

Using namespace isolation the host projects a virtualised namespace containing all the resources that an application can interact with, such as files, network ports, and running processes. Namespace isolation is extremely efficient since many of the underlying OS files, directories and running services are shared between containers. If and when an application makes changes to these resources then those changes are written to a distinct copy of that file or service using copy-on-write.

Containers house everything an application needs to run, and that gives them greater portability, allowing for exact copies between development and production environments. By using containers software developers and IT professionals can also benefit from improved efficiency in use of existing infrastructure, standardised environments, and simplified administration. This is evident from the Microsoft images below.

Deploying applications using traditional virtual machines:

containers2

Deploying applications using containers:

containers1

The use of containers isn't new technology; it had been around for years in Linux before the toolset was properly utilised by Docker. Docker is a container technology which automates and simplifies the creation and deployment of containers to build, ship, and run distributed applications from any environment. Docker have partnered with Microsoft to develop a Docker engine for Windows 2016 and Windows 10, enabling users to take advantage of container functionality with Windows.

Windows containers run in two different formats: Windows Server containers, which isolate applications using namespace isolation technology, and Hyper-V containers, which run containers inside optimised virtual machines.

Hyper-V containers have identical functionality to their Windows Server counterparts; the only difference is the isolation of the kernel. Whereas Windows Server containers share the same kernel with other containers and the host, Hyper-V containers provide kernel-level isolation by provisioning individually optimised virtual machines for each container. A use case for such isolation could be a secure environment, such as one with PCI compliance requirements. Hyper-V containers need nested virtualisation to be enabled, and this is currently only compatible with Intel processors.

Windows containers require installation of the Containers feature, and installation of the Docker engine. Once these two components are installed you can go ahead and begin building Windows server containers.

containers
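Once the Containers feature and Docker engine are installed, the difference between the two formats comes down to a single isolation setting at run time. A minimal sketch using the Docker SDK for Python; the image name reflects the Windows Server Core base image as published at the time of writing and should be treated as an assumption:

```python
import docker

client = docker.from_env()
image = "microsoft/windowsservercore"  # assumed base image name

# Windows Server container: shares the host kernel (namespace isolation)
out = client.containers.run(image, "cmd /c ver",
                            isolation="process", remove=True)
print("process-isolated:", out.decode().strip())

# Hyper-V container: same image, kernel isolated in a utility VM
out = client.containers.run(image, "cmd /c ver",
                            isolation="hyperv", remove=True)
print("hyperv-isolated:", out.decode().strip())
```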

Microsoft Azure are offering a free trial with £125 credit; to deploy a Windows 2016 virtual machine and try containers out for yourself see Azure Virtual Machine Deployment.

See also VMware Container Projects.