Hornetsecurity recently announced the latest major release of its flagship data protection software, VM Backup v9. VM Backup is a mature and feature-rich product, and in this post we’ll look at the overall solution and the latest functionality.
Data protection is more important now than ever before. Data and its corresponding digital services are at the heart of our modern lives. Organisations are extracting value from data and using it to enhance customer experience, whilst simultaneously ensuring data is available and durable for business continuity.
Cyber security is now on the agenda at executive boards regardless of industry. The potential impact of cyber-attacks, specifically ransomware, is now well publicised.
It’s great to see VM Backup v9 add a ransomware protection feature, storing an immutable copy of the data in the cloud. The immutability of the objects ensures that they cannot be deleted or changed, either accidentally or by a bad actor. Using cloud storage provides flexibility and cost efficiencies; cloud object storage is often cheaper than buying dedicated storage arrays because of the consumption-based pricing model.
VM Backup integrates natively with Azure Blob storage, AWS S3, and Wasabi for secure and seamless data transfer. What’s more, the Offsite Backup Server agent can be used with an IaaS VM to proxy data transfer to other cloud providers, like Google and Oracle.
The extension of hyperscaler support will please many organisations who are starting to utilise the capabilities of public cloud. For a lot of these companies, a hybrid-cloud operating model is the most likely approach. To this end, VM Backup offers full support for the latest releases of Azure Stack HCI (21H2) and Windows Server (2022).
Another popular approach for forward-thinking organisations is multi-cloud. A multi-cloud strategy helps ensure applications are running on the most suited platform, with higher levels of flexibility and availability. VM Backup can support multi-cloud strategies with the introduction of multiple offsite locations for virtual machine copies.
With so many options for running VMware and Hyper-V infrastructure, a potential challenge for dispersed environments is operational consistency. I’m really pleased to see VM Backup tackle this with a REST API, allowing backup administrators to scale environments through a consistent toolset.
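A REST API makes backup operations scriptable. Below is a minimal sketch of assembling a backup-job request; the field names and payload shape are hypothetical, so consult the VM Backup API documentation for the real schema:

```python
import json

def build_backup_job(vm_names, schedule_cron, repository):
    """Assemble a backup-job request body.

    The field names here are hypothetical and for illustration only;
    the real VM Backup REST API schema will differ.
    """
    return {
        "virtualMachines": vm_names,
        "schedule": schedule_cron,
        "repository": repository,
    }

job = build_backup_job(["web-01", "sql-01"], "0 22 * * *", "offsite-azure")
print(json.dumps(job, indent=2))
# The payload could then be POSTed to the API with any HTTP client.
```

Driving jobs from definitions like this, rather than clicking through each site’s console, is what gives dispersed environments that operational consistency.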
Speaking of scaling, VM Backup v9 can now handle much larger environments, with more backup repository and long-term storage options. Improvements have been made to disk space utilisation and backup efficiency, with automated disk space reclamation and the ability to run more concurrent backups and tasks, reducing backup windows.
When it comes to backup storage, capacity requirements for local and offsite locations are reduced significantly with inline deduplication: only unique or changed data is sent to your backup repositories, which also makes backups much faster.
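Inline deduplication works by fingerprinting chunks of data and storing each unique chunk only once. Here is a minimal sketch in Python, assuming fixed-size chunks and SHA-256 fingerprints (real products use more sophisticated, often variable-size, chunking):

```python
import hashlib

def dedupe(stream, chunk_size=4096, seen=None):
    """Split data into fixed-size chunks and keep only chunks whose
    hash has not been stored before (a simplified model of inline
    deduplication)."""
    seen = set() if seen is None else seen
    unique = []
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(chunk)
    return unique, seen

data = b"A" * 8192 + b"B" * 4096  # two identical 4 KiB chunks, then one new
unique, seen = dedupe(data)
print(len(unique))  # only 2 unique chunks are sent to the repository
```

Because the `seen` index persists across runs, subsequent backups of mostly-unchanged data transfer only the handful of chunks that are actually new.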
VM Backup further enhances an organisation’s disaster recovery strategy through WAN-Optimised Replication. This feature reduces the Recovery Time Objective (RTO), providing faster recovery, by creating a continuous copy of a virtual machine to a remote site. The IT team can switch to the remote copy (or copies) immediately should there be an issue at the primary site.
As I mentioned at the start, VM Backup is now a mature product and has built an impressive feature-set over time. This includes things like Continuous Data Protection (CDP) for improving Recovery Point Objectives (RPO) with backups as frequently as every 5 minutes, granular backup scheduling and retention policies, Cluster Shared Volumes support, live backups leveraging Microsoft Volume Shadow Copy Service (VSS), Grandfather-Father-Son (GFS) archiving, and 3-2-1-1 backups. You can view the full list of features here.
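Grandfather-Father-Son retention is easy to reason about with a small example. Below is a minimal sketch in Python, assuming a simplified scheme of daily, weekly (Sunday), and monthly (first-of-month) retention points:

```python
from datetime import date, timedelta

def gfs_keep(backup_dates, daily=7, weekly=4, monthly=12):
    """Select backups to retain under a simplified Grandfather-Father-Son
    scheme: the last `daily` backups (sons), the last `weekly` Sunday
    backups (fathers), and the last `monthly` first-of-month backups
    (grandfathers)."""
    backup_dates = sorted(backup_dates, reverse=True)
    keep = set(backup_dates[:daily])                                    # sons
    keep.update([d for d in backup_dates if d.weekday() == 6][:weekly])   # fathers
    keep.update([d for d in backup_dates if d.day == 1][:monthly])        # grandfathers
    return sorted(keep)

# One backup per day for 60 days
today = date(2024, 3, 1)
dates = [today - timedelta(days=i) for i in range(60)]
retained = gfs_keep(dates)
print(len(dates), "->", len(retained), "backups retained")
```

The appeal of GFS is exactly this trade-off: recent history at fine granularity, older history at coarser granularity, with predictable storage growth.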
The technical capabilities of a solution don’t paint the full picture. We really need to know how this solution will be adopted by teams such as IT Operations, Service Management, and Finance. As good as the feature-set is, and we’ve seen that so far in this article, there are some vital operational elements I’m pleased to say VM Backup excels at. Let’s take a look:
Operations – VM Backup is easy to install and configure out of the box. When it comes to protecting valuable business data, simplicity and reliability are key. VM Backup doesn’t require complex architectures or components, which means there is less to go wrong. The product is easy to use day-to-day with an intuitive User Interface (UI). Backup administrators don’t need extensive training courses to start using the software, and Hornetsecurity run a series of free webinars and demos which are all you need to get up to speed.
Support – Having support on a product is another critical element to maintain that reliability. The support model for VM Backup is exactly what you want: instant access to product experts through a variety of channels, 24 hours a day, 7 days a week. No chatbots or first-line call screeners.
Licensing – VM Backup has flexible license and purchase options, whether you want perpetual licenses or a subscription. You can license environments per physical host or per virtual machine, and there are three paid-for editions plus a free version. You can download a fully functional free 30-day trial of VM Backup to try for yourself here, without any sales walls.
In summary, VM Backup v9 for protecting VMware or Hyper-V virtual machines looks like an essential part of an IT administrator’s toolkit. The software is mature enough to boast an impressive list of features, whilst maintaining a cost-efficient licensing model.
I’m really pleased to see ransomware protection included this time around, along with the improvements to multiple offsite locations with native public cloud integration. Having these options with a consistent operating experience and REST API will help organisations running multiple platforms, or those transitioning to the cloud.
The number of awards and industry recognition VM Backup v9 has picked up speaks volumes. The easiest way to see if VM Backup is a good fit for your infrastructure is to try it out yourself, using the free 30-day trial with full functionality here.
Cloud computing services have grown exponentially in recent years. In many cases they are the driving force behind industry 4.0, or the fourth industrial revolution, enabling Artificial Intelligence (AI) and Machine Learning (ML), or the Internet of Things (IoT) powering smart homes and smart cities.
High-speed networks are enabling secure data sharing over the Internet, resulting in a shift from compute processing in one’s own server rooms or data centres to a central processing plant, where technology can be agile and highly available whilst taking advantage of economies of scale. It is much the same way our ancestors moved on from building their own generators to consume electricity, each factory buying and installing components with specialist staff to keep systems running, to utility-based consumption of electricity as a service.
Data sharing and data analytics are at the heart of digital transformation. Successful companies are using data with services consumed over the Internet to innovate faster and deliver value; enhancing user experience and increasing revenue.
It is important for organisations adopting cloud computing to define a cloud strategy; this helps ensure coordination and connectivity of existing and new applications, whilst providing a sustainable delivery method for future digital services. A cloud strategy can assist with standardising security and governance alongside reducing shadow IT sprawl and spiralling costs.
The first step is to have a clear understanding of what the organisation as a whole expects to gain from the consumption of cloud technologies. This isn’t limited to the IT teams but is predominantly about business outcomes: for example improved innovation and agility, faster deployment of products or features, application performance and security enhancements for a remote workforce, or simply a change in the consumption and charging model.
It may be that a compelling event triggered the cloud focus, such as a security breach, site reliability issue, or major system outage. Reducing carbon emissions is part of the wider corporate strategy for many public sector organisations, and replacing old or inefficient data centre and cooling equipment with hyperscaler facilities powered by renewable energy can certainly help. Whatever the reasons, they should be captured and worked into your strategy. Doing so will help identify deliverables and migration assessments for brownfield environments.
Public Sector Cloud First
The UK Government first introduced the Government Cloud First policy in 2013 for technology decisions when procuring new or existing services. The policy states that public cloud should be considered in the first instance, primarily aimed at Software as a Service (SaaS) models, unless it can be demonstrated that an alternative offers better value for money.
During the COVID-19 outbreak, the UK saw unprecedented demand for digital services. The National Health Service (NHS) in particular responded quickly; scaling out the 111 Online service to handle 30 million users between 26 February and 11 August, with 6 million people completing the dynamic coronavirus assessment. The peak number of users in a single day was over 950,000; up 95 times from the previous 10,000 daily average. NHS Digital and NHSmail rolled out Microsoft Teams to 1.3 million NHS staff in 4 days, which would go on to host over 13 million meetings and 63 million messages between 23 March and 5 October. Both of these achievements were made possible virtually overnight by the speed and agility of cloud services.
Following up on the Government Cloud First policy of 2013, the UK Government released further information in 2017 around the use of cloud first, how to choose cloud computing services for your organisation, how to approach legacy technology, and considerations for vendor lock-in. The guidance reiterates the need to consider cloud computing before other options to meet point 5 of the Technology Code of Practice (use cloud first). The Technology Code of Practice can also feed into your cloud strategy:
Define user needs
Use open source and open standards to ensure interoperability and future compatibility
Make sure systems and data are secured appropriately
More recently, in March 2020, the Government Digital Service published Cloud Guidance for the Public Sector. The guidance is set out in easy-to-consume chunks, with links out to further content for each area. Noteworthy sections include:
People and Skills: the way technical, security, commercial, and financial teams work will change. New processes and skills will be introduced, and people need to be fully informed throughout the process. It is essential that HR are able to recruit and retain the right skillsets, and upskill people through training and development. Roles and responsibilities should be defined, and extended to service providers and teams as the strategy is executed.
Security: the first two words in the above guidance paper are key: “Properly implemented”. The overwhelming majority of security breaches in the cloud are due to incorrect configurations. Links are included to the National Cyber Security Centre (NCSC) guidance on cloud security and zero trust principles. Published by the National Cyber Security Centre in November 2020, the Security Benefits of Good Cloud Service whitepaper also provides some great pointers that should be incorporated into any cloud strategy.
Data Residency and Offshoring: each data controller organisation is responsible for their own decisions about the use of cloud providers and data offshoring. The government say you should take risk-based decisions whilst considering the Information Commissioner’s Office guidance. Data offshoring is not just the physical location of the data but also who has access to it, and whether any elements of the service are managed outside of the UK.
Further documentation from the UK Government on Managing Your Spending in the Cloud identifies procurement models and cost optimisation techniques when working with cloud services. It advises that a central cloud operations team, made up of both technical and commercial specialists, is formed to monitor usage, billing, and resource optimisation to reduce costs.
Tools like CloudHealth by VMware help simplify financial management. CloudHealth makes recommendations on cost savings, works across cloud platforms, and crucially provides financial accountability by cost centre. A charging model where internal departments or lines of business pay for what they consume will typically yield reduced consumption and therefore lower costs.
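The chargeback model described above comes down to rolling spend up by cost centre. Here is a minimal sketch in Python; the record format is hypothetical, and real tools like CloudHealth work from tagged billing feeds:

```python
from collections import defaultdict

def allocate_costs(usage_records):
    """Roll cloud spend up to cost centres so each department pays
    for what it consumes (a simplified showback/chargeback model)."""
    totals = defaultdict(float)
    for record in usage_records:
        totals[record["cost_centre"]] += record["cost"]
    return dict(totals)

records = [
    {"resource": "vm-web-01", "cost_centre": "Marketing", "cost": 120.50},
    {"resource": "db-prod-01", "cost_centre": "Finance", "cost": 310.00},
    {"resource": "vm-web-02", "cost_centre": "Marketing", "cost": 98.25},
]
print(allocate_costs(records))
# {'Marketing': 218.75, 'Finance': 310.0}
```

Even a simple roll-up like this makes consumption visible per department, which is the behavioural lever that drives costs down.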
Build management tooling into your cloud framework and aim for consolidated and cloud agnostic tooling. This blog article with Sarah Lucas, Head of Platforms and Infrastructure at William Hill, discusses some best practices for a successful hybrid and multi-cloud management strategy.
Incorporating hybrid and multi-cloud into your strategy can help protect against vendor lock-in, enhance business continuity, and leverage the full benefit of the cloud by deploying applications to their most suited platform or service. Furthermore, having an exit strategy insures against any future price rises, service issues, data breaches, or political changes. The NHS COVID-19 track and trace app, for example, was moved between hyperscalers overnight during development. All the more impressive considering it needed to scale securely on a national level, whilst incorporating new features and updates as more virus symptoms and medical guidance were added. This blog article with Joe Baguley, CTO at VMware, outlines the lessons learned developing during a pandemic.
The National Data Strategy
In September 2020 the UK Government published the National Data Strategy. The strategy focuses on making better use of data to improve public services and society as a whole. It does this by identifying the following pillars: data foundations, data skills, data availability, and responsible data. Underpinning the National Data Strategy is a modern infrastructure, which should be safe and secure with effective governance, joined-up and interoperable, and resilient and highly available. New technology models like cloud, edge, and secure computing enhance our capabilities of providing shared data in a secure manner. The infrastructure on which data relies is defined by the strategy as the following:
“The infrastructure on which data relies is the virtualised or physical data infrastructure, systems and services that store, process and transfer data. This includes data centres (that provide the physical space to store data), peering and transit infrastructure (that enable the exchange of data), and cloud computing that provides virtualised computing resources (for example servers, software, databases, data analytics) that are accessed remotely.”
Section 4.2.1 of the document notes that “Even the best-quality data cannot be maximised if it is placed on ageing, non-interoperable systems”, and identifies long-running problems of legacy IT as one such technical barrier. The theme of this section is that data, and we can extend this to applications, should be independent of the infrastructure it runs on. Some of the commitments outlined are also relevant to cloud strategy and can be used as part of an internal IT governance framework:
Creating a central team of experts ensuring a consistent interpretation and implementation of policies
Building a community of good practice
Learning and setting best practice and guidance through lighthouse projects
Further demonstrating the importance of data, NHSx launched the Centre for Improving Data Collaboration; a new national resource for data-driven innovation. In a blog announcing the new team Matthew Gould, CEO, NHSx, said “Good quality data is crucial to driving innovation in healthcare. It can unlock new technologies, power the development of AI, accelerate clinical trials and enable better interactions with patients”. NHSx are working on a new UK Data Strategy for Health and Social Care expected late 2020, and have also collaborated with Health Education England on the Digital Readiness Programme to support data as a priority specialism in health and care.
NHS Digital Public Cloud Guidance
In January 2018 NHS Digital, along with the Department of Health and Social Care, NHS England, and NHS Improvement, released guidance for NHS and social care data: off-shoring and the use of public cloud services. This national guidance for health and care organisations can also be applied to the wider public sector dealing with personal information. Andy Callow, CDIO, Kettering General Hospital, also makes a great case for the NHS to embrace the cloud in this Health Tech Newspaper article.
As per the Government Cloud Guidance for the Public Sector, each data controller is responsible for the security of their data. The NHS Digital guidance outlines a 4-step process for making risk-based decisions on cloud migrations.
Steps 1 & 2 are to understand the data and assess the risks:
The National Data Guardian advises that a Senior Information Risk Owner (SIRO) is involved in the decision-making process and is comfortable with the security arrangements in place; where patient data is being hosted, this should also include Caldicott Guardians.
In its review into patient data in the NHS, the Care Quality Commission defines data security as an umbrella for availability, integrity, and confidentiality. With this in mind, systems should always be designed with the expectation of failure: across multiple Availability Zones or regions where offshoring policies permit, and with appropriate Disaster Recovery and backup strategies.
As systems and dependencies become cloud based and potentially distributed across multiple providers, more importance than ever is placed on network architecture, latency, and resilience. Software Defined Wide Area Network (SD-WAN) and Secure Access Service Edge (SASE) solutions like VeloCloud by VMware provide secure, high performance connectivity to enterprise and cloud or SaaS based applications.
Where digital services need to be accessed externally using national private networks, like the Health and Social Care Network (HSCN), organisations may consider moving them to Internet facing. This reduces network complexity and duplication whilst making services more accessible and interoperable. According to NHS Digital’s Internet First Policy “new services should be made available on the internet, secured appropriately using the best available standards-based approaches”.
When writing your cloud strategy document, base it on the goals and objectives of the organisation. The strategy document does not necessarily need to define the cloud provider or type of hosting; instead it should set out how you meet your business needs or solve your problems, creating outcomes that have a direct impact on the experience of patients, users, or service consumers.
The strategy should be kept simple and high-level enough that all areas of the business can understand it. Cloud technology moves fast, and guidance shifts with it; your strategy and policies should be reviewed regularly, but the overarching strategy should not require wholesale changes that create ambiguity. Eventually, leaders will need to define lower-level frameworks that balance visibility, cost, availability, and security with agility, flexibility, choice, and productivity. These frameworks, along with the high-level strategy, should be well documented and easily accessible.
This opening post will give an overview and demo of Oracle Cloud Infrastructure (OCI). Oracle Cloud offers fast and scalable compute and storage resources, combined with enterprise-grade private virtual cloud networks. The platform provides a range of flexible operating models including traditional Virtual Machine (VM) instances, container infrastructure, databases on demand, and dedicated hardware through bare metal servers and Bring Your Own Hypervisor (BYOH).
You can sign up for a free trial account with $300 of credit here. When you sign up for an Oracle account you are creating a tenant. Resources inside a tenant can be organised and isolated using compartments; separate projects, billing, and access policies are some example use cases.
Oracle Cloud Infrastructure is deployed in regions. Regions are localised geographical areas, each containing at least 3 Availability Domains. An Availability Domain is a fault-independent data centre with power, thermal, and network isolation. A Virtual Cloud Network (VCN) is deployed per region across multiple Availability Domains, thereby allowing us to build high availability and fault tolerance into our cloud design. Virtual Cloud Networks are software defined versions of traditional on-premises networks running in the cloud, containing subnets, route tables, and internet gateways. VCNs can be connected together using VCN Peering, and connected to a private network using FastConnect or VPN with the use of a Dynamic Routing Gateway (DRG).
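Planning the VCN address space up front pays off when spreading subnets across Availability Domains. Here is a minimal sketch using Python’s `ipaddress` module, assuming an illustrative 10.0.0.0/16 VCN carved into /24 subnets, one public and one private per Availability Domain:

```python
import ipaddress

# Sketch of VCN subnet planning: carve a /16 VCN CIDR into /24
# subnets, assigning one public and one private subnet to each
# Availability Domain (the CIDR ranges here are illustrative).
vcn = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vcn.subnets(new_prefix=24))

plan = {}
for i, ad in enumerate(["AD-1", "AD-2", "AD-3"]):
    plan[ad] = {"public": subnets[i], "private": subnets[i + 3]}

for ad, nets in plan.items():
    print(ad, "public:", nets["public"], "private:", nets["private"])
```

Reserving contiguous, non-overlapping blocks like this also keeps future VCN Peering and FastConnect/VPN routing simple, since the ranges never clash with each other or with on-premises networks.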
The demo below creates a VCN and VM instances in the second generation of Oracle Cloud for lab purposes. Before deploying your own environment you should review all the above linked documentation and plan your cloud strategy including IP addressing, DNS, authentication, access control, billing, governance, network connectivity and security.
Log in to the Oracle Cloud portal here; the home dashboard is displayed.
You’ll need a subscription to get into the second generation Oracle Cloud Infrastructure portal. Under Compute select Open Service Console.
The region can be selected from the drop-down location pin icon in the top right corner; in this example the region is set to eu-frankfurt-1. Select Manage Regions to subscribe to new regions if required. Use the top left Menu button to display the various options. The first step in any deployment is to build the VCN: select Networking, then Virtual Cloud Networks.
Make sure you are in the correct compartment in the left-hand column and click Create Virtual Cloud Network. Select the compartment and enter a name. In this example I am going to create the Virtual Cloud Network only, which will allow me to manually define resources such as the CIDR block, internet gateway, subnets, and routes. The DNS label is auto-populated.
The newly created VCN is displayed; all objects are orange during provisioning and green when available.
Once the VCN is available click the VCN name to display more options.
Use the options in the Resources menu to view and create resources assigned to the VCN. In this example we’ll create the Internet Gateway first.
Next we can create a subnet. In this example I have created a public subnet that I will later attach a VM instance to.
We also need to add a route table, or add new routes to the default route table.
The final step to allow connectivity in and out of our new subnet(s) is to define ingress and egress rules using security lists. Again, you can either add rules to the default security list or split out environments into additional security lists.
Define the source and destination types and port ranges to allow access. In this example we are allowing inbound port 22 to test SSH connectivity to a VM instance.
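Ingress rule evaluation can be modelled in a few lines. Below is a minimal sketch in Python, assuming simplified “any match allows” semantics similar to OCI security lists (which are permissive rule sets); the rule format is illustrative:

```python
import ipaddress

def ingress_allowed(rules, source_ip, port):
    """Check whether a TCP connection is permitted by a list of
    ingress rules (a simplified model of security list evaluation;
    rules are permissive, so any match allows the traffic)."""
    ip = ipaddress.ip_address(source_ip)
    for rule in rules:
        if ip in ipaddress.ip_network(rule["source"]) and \
           rule["port_min"] <= port <= rule["port_max"]:
            return True
    return False

# An ingress rule equivalent to the SSH rule created above
rules = [{"source": "0.0.0.0/0", "port_min": 22, "port_max": 22}]
print(ingress_allowed(rules, "203.0.113.10", 22))   # True
print(ingress_allowed(rules, "203.0.113.10", 443))  # False
```

Modelling rules this way makes it easy to sanity-check a security list before applying it: any port or source not explicitly matched stays blocked.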
Now that we have a fully functioning software defined network we can deploy a VM instance. From the left-hand Menu drop-down select Compute, Instances. Use the Create Instance wizard to deploy a virtual machine or bare metal machine.
In this example I have deployed a virtual machine using the Oracle Linux 7.5 image and VM.Standard2.1 shape (1 OCPU, 15 GB RAM). The machine is deployed to Availability Domain 1 in the Frankfurt region and has been assigned the public subnet in the VCN we created earlier. I used PuTTYgen to generate public and private key pairs for SSH access.
Once deployed the instance turns green.
Click the instance name to view further details or terminate the instance; when removing it, you have the option to keep or delete the attached boot volume.
Additional block volumes can be added to instances. Block volumes can be created under Block Storage, Block Volumes.
For object-based storage we can create buckets under Object Storage, Object Storage.
Buckets can be used to store objects with public or private visibility; pre-authenticated requests can also be added for short-term access.