AWS FSx File Server Storage for VMware Cloud on AWS

Amazon FSx for Windows File Server is an excellent example of quick and easy native AWS service integration with VMware Cloud on AWS. Hosting a Windows file share is a common setup in on-premises data centres; it might be across Windows Servers or dedicated file-based storage presenting Server Message Block (SMB) / Common Internet File System (CIFS) shares over the network. When migrating Virtual Machines to VMware Cloud on AWS, an alternative solution may be needed if the data is large enough to impact capacity planning of VMware Cloud hosts, or if it indeed resides on a dedicated storage array.


FSx is Amazon’s fully managed file storage offering that comes in two flavours: FSx for Windows File Server and FSx for Lustre (high-performance workloads). This post will focus on FSx for Windows File Server, which provides a managed file share capable of handling thousands of concurrent connections from Windows, Linux, and macOS clients that support the industry-standard SMB protocol.

FSx is built on Windows Server with AWS managing all the underlying file system infrastructure and can be consumed by users and compute services such as VMware Cloud on AWS VMs, and Amazon’s WorkSpaces or Elastic Compute Cloud (EC2). File-based backups are automated and use Simple Storage Services (S3) with configurable lifecycle policies for archiving data. FSx integrates with Microsoft Active Directory enabling standardised user permissions and migration of existing Access Control Lists (ACLs) from on-premises using tools like Robocopy. As you would expect, file systems can be spun up and down on-demand, with a consumption-based pricing model and different performance tiers of disk. You can read more about the FSx service and additional features such as user quotas and data deduplication in the AWS FSx FAQs.

Example Setup


In the example above, FSx is deployed to the same Availability Zones as VMware Cloud on AWS for continuous availability. Disk writes are synchronously replicated across Availability Zones to a standby file server. In the event of a service disruption FSx automatically fails over to the standby server. Data is encrypted in transit and at rest, and uses the 25 Gbps Elastic Network Interface (ENI) between VMware Cloud and the AWS backbone network. There are no data egress charges for using the ENI connection, but there may be cross-AZ charges from AWS in multi-AZ configurations. For more information on the connected VPC and services see AWS Native Services Integration With VMware Cloud on AWS.

A reference architecture for Integrating Amazon FSx for Windows Servers with VMware Cloud on AWS is available from VMware, along with a write-up by Adrian Roberts here. AWS FSx allows single-AZ or multi-AZ deployments, with single-AZ file systems supporting Microsoft Distributed File System Replication (DFSR) compatible with your own namespace servers, which is the model used in the VMware reference architecture. At the time of writing, custom DNS names are still on the roadmap for multi-AZ deployments. You can see the full table of feature support by deployment type in the Amazon FSx for Windows File Server User Guide.

FSx Setup

To provide user-based authentication, access control, and DNS resolution for FSx file shares, you can use your existing Active Directory domain or deploy AWS Managed Microsoft AD using AWS Directory Services. You will need your Active Directory details ready before starting the FSx deployment, along with the Virtual Private Cloud (VPC) and subnet information to use.

Log into the AWS console and locate FSx under Storage from the Services drop-down. In the FSx splash-screen click Create file system. On this occasion, we are creating a Windows file system.


Enter the file system details, starting with the file system name, deployment type, storage type, and capacity.


A throughput capacity value is recommended and can be customised based on the data requirements. Select the VPC, Security Group, and subnets to use. In this example, I have selected the subnets connected to VMware Cloud on AWS as defined in the ENI setup.


Enter the Active Directory details, including service accounts and DNS servers. If desired, you can make changes to the encryption keys, daily backup window, maintenance window, and add any required resource tags. Review the summary page and click Create file system.


The file system is created and will show a status of Available once complete.


If you’re not using the default Security Group with FSx, then the following ports need to be defined in rules for inbound and outbound traffic: TCP/UDP 445 (SMB), TCP 135 (RPC), TCP/UDP 1024-65535 (RPC ephemeral port range). There may be additional Active Directory ports required for the domain the file system is being joined to.

Further to the FSx Security Group, the ENI Security Group also needs the SMB and RPC port ranges added as inbound and outbound rules to allow communication between VMware Cloud on AWS and the FSx service in the connected VPC. In any case, when configuring Security Group or firewall rules, the source or destination should be the clients accessing the file system, or, if applicable, any other file servers participating in DFS Replication. AWS Security Groups are accessible in the console under VPC. You can either create a dedicated Security Group or modify an existing ruleset. The Security Group in use by the VMware Cloud ENI can be found under EC2 > ENI.
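For those scripting this configuration, the ports above can be captured in the EC2 API's IpPermissions structure (the format accepted by calls such as boto3's authorize_security_group_ingress). This is a sketch only; the client CIDR is a hypothetical placeholder for your SDDC subnet.

```python
# Sketch: the FSx port requirements expressed in the EC2 API's
# IpPermissions format, ready to pass to a call such as boto3's
# authorize_security_group_ingress. The CIDR is a placeholder.
CLIENT_CIDR = "192.168.1.0/24"  # hypothetical SDDC client subnet

def fsx_ip_permissions(cidr):
    """Build the rule set FSx needs: SMB, RPC, and the RPC ephemeral range."""
    def rule(proto, from_port, to_port):
        return {
            "IpProtocol": proto,
            "FromPort": from_port,
            "ToPort": to_port,
            "IpRanges": [{"CidrIp": cidr, "Description": "FSx clients"}],
        }
    return [
        rule("tcp", 445, 445),     # SMB
        rule("udp", 445, 445),     # SMB
        rule("tcp", 135, 135),     # RPC endpoint mapper
        rule("tcp", 1024, 65535),  # RPC ephemeral port range
        rule("udp", 1024, 65535),  # RPC ephemeral port range
    ]

for r in fsx_ip_permissions(CLIENT_CIDR):
    print(r["IpProtocol"], r["FromPort"], r["ToPort"])
```

The same rule list can be reused for both the FSx and ENI Security Groups, keeping the two in step.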


With the SMB ports open for the FSx and ENI Security Groups, remember that the traffic will also hit the VMware Cloud on AWS Compute Gateway. In the VMware Cloud Services Portal add the same rules to the Compute Gateway, and to the Distributed Firewall if you’re using micro-segmentation. The Compute Gateway Firewall is accessible from the Networking & Security tab of the SDDC.


Virtual Machines in VMware Cloud on AWS will now be able to access the FSx file shares across the ENI using the DNS name for the share or UNC path.

The FSx service in the AWS console provides some options for managing file systems. Storage capacity, throughput, and IOPS can be viewed quickly and added to a CloudWatch dashboard. CloudWatch Logs can also be ingested by vRealize Log Insight Cloud from the VMware Cloud Services Portal.


Building AWS Environments for VMware Cloud Customers

This post will walk through the example design below, building out the Amazon Web Services (AWS) framework enabling VMware Cloud on AWS customers to start using AWS at scale, alongside VMware workloads. The key focus will be around the control of multiple accounts, using AWS Organizations and Service Control Policies, and cross-account connectivity, with Transit Gateway and the role of the VMware Cloud-connected Virtual Private Cloud (VPC).

Example VMC AWS Setup


VMware Cloud on AWS Focus

This article assumes you already have a working knowledge of VMware Cloud on AWS and have either deployed, or are planning the deployment of, your Software-Defined Data Centre (SDDC). If you are unclear about the requirements for the connected AWS account and VPC, review the VMware Cloud documentation here.

In the example architecture we are working on, a Stretched Cluster has been deployed in the eu-west-2 (London) region. During the SDDC deployment, I connected an existing AWS account, and now have a 25 Gbps cross-VPC link between VMware Cloud and my own VPC using the Elastic Network Interface (ENI). More information on how the connected VPC works can be found in AWS Native Services Integration With VMware Cloud on AWS.


In this setup, I have also configured some Elastic Compute Cloud (EC2) instances to back up my Virtual Machines (VMs) to Simple Storage Services (S3). Great, so how do I start deploying AWS services at scale, and onboard the rest of my business that wants to begin creating their own AWS accounts?

AWS Organizations & Service Control Policies

AWS Organizations is a good starting point for those wanting to implement policies and governance across multiple accounts, for compliance, security, and standardised environments. Organizations can consolidate billing for your accounts, and automate the creation of new accounts as your environments grow. There is no additional charge for using AWS Organizations or Service Control Policies (SCP). An AWS Organization can be deployed manually, or as part of a Landing Zone which is discussed in the next section.

First, log into the AWS console with the account you will assign as the master. This account is used for account and policy management (it is itself exempt from Service Control Policies we will cover shortly), and assumes the role of a payer account for charges accrued by accounts within the organization hierarchy. Once the master account is set, it cannot be changed.

From the Services drop-down, locate AWS Organizations and click Create Organization. Name your organization and select either consolidated billing only features or all features. From the Accounts tab, you can create and manage AWS accounts, and add existing accounts to your organization. A member account (and the master account) can belong to only one organization. When you create an account with AWS Organizations, an Identity and Access Management (IAM) role is created with full administrative permissions in the new account. The master account can assume this IAM role if required to gain access.

The Organize Accounts tab is where you can start creating the hierarchy of Organizational Units (OU). The canvas starts with the top-most container, which is the administrative root. OUs, and nested OUs (up to 5 levels deep including root), are added for separate groupings of departments or business units allowing policies to be applied to groups of accounts. An OU will inherit policies from the parent OU in addition to any policies assigned directly to it.


A Service Control Policy contains statements defining the controls that are applied to an account or group of accounts. SCPs can only be used for organizations created with all features enabled; they are not available with consolidated billing only. Multiple SCPs can be attached to, or inherited by, accounts in the hierarchical OU chain; however, a deny will always override any allow policies. SCPs can be created and managed in the Policies tab.

A default FullAWSAccess policy exists and is attached to the organization root, allowing access to any operation. In this example, I have created a DenyInternet policy to be applied to my DataOps OU, which has a requirement to analyse sensitive data from data sets running in VMware Cloud. The SCP is a JSON policy that specifies the maximum available permissions for the accounts or grouping of accounts (OU) that the policy is attached to. You can write the JSON out yourself or use the statement filter on the left-hand side.
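As a sketch of what such a policy might contain, the JSON below denies Internet Gateway operations; the exact actions listed are an assumption for illustration, not the contents of the actual DenyInternet policy.

```python
import json

# Sketch of a Service Control Policy denying internet gateway operations.
# The actions listed are assumed examples; adjust to your requirements.
deny_internet_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInternet",
            "Effect": "Deny",
            "Action": [
                "ec2:CreateInternetGateway",
                "ec2:AttachInternetGateway",
            ],
            "Resource": "*",
        }
    ],
}

# Print the policy document as it would be pasted into the console.
print(json.dumps(deny_internet_scp, indent=2))
```

Remember that an SCP never grants permissions; it only caps what IAM policies within the affected accounts can allow.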


Once the policy is created, I attach it to the relevant OU, where it is instantly applied to any member accounts residing in that particular OU. I attach the policy either from the Policies tab, or directly on the account, OU, or organization root.


Now, when logging in with the user account, I am unable to create an Internet Gateway as defined in the SCP statement. For more information on Service Control Policies review the Service Control Policies User Guide which details example statements, relationship with IAM permissions, and scenarios where SCPs would not apply, such as resource-based policies, and users or roles outside of the organization.


Outside of the master account, my AWS hierarchy now looks like this, with a repeatable process in place for members of the DataOps team to create new accounts that do not have internet access. Furthermore, I may want to create some root policies to limit tampering with AWS security tools such as CloudTrail, GuardDuty, and Config. You can read more about these services in the next section.


Additional Baseline AWS Services

To protect the AWS Organization, I can look to implement a security baseline across all my accounts, using central management of the services outlined below. These tools can be implemented individually or automated as part of AWS Landing Zone. For VMware Cloud, the connected AWS account that I have full control over can fall into the remit of these services, my organizational hierarchy, and Service Control Policies. However, remember that the SDDC environment is deployed to a shadow AWS account that the customer does not have access to; this means we need to utilise Log Insight Cloud to capture and analyse any syslog output from vCenter, NSX-T, etc. Log Insight Cloud can also pull logs from AWS as a log source, from services like CloudTrail and CloudWatch. You can read more about VMware Cloud security measures in VMware Cloud on AWS Security One Stop Shop.

IAM is the mechanism by which we manage, control, and govern authentication, authorisation, and access to resources within an AWS account. For administrators overseeing multiple accounts, IAM can help with enforcing password policies, Multi-Factor Authentication (MFA), and Identity Federation or Single Sign-On. IAM policies can be applied to users, groups, or roles.

CloudTrail records and tracks all Application Programming Interface (API) requests in an AWS account. Each API request is captured as an event, containing associated metadata such as caller identity, timestamp, and source IP. The event is recorded and stored as a log file in an S3 bucket, with custom retention periods, and optional delivery to CloudWatch Logs for metric monitoring and alerting. CloudTrail logs for multiple accounts can be stored in a central encrypted S3 bucket for effective auditing and security analysis.
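For illustration, here is how the metadata in a single CloudTrail event might be pulled out of a log record; the record itself is a trimmed, hypothetical example (real log files contain a "Records" array of many such events).

```python
import json

# A trimmed, hypothetical CloudTrail event record.
raw = '''{
  "eventTime": "2020-05-01T10:15:00Z",
  "eventName": "CreateBucket",
  "eventSource": "s3.amazonaws.com",
  "sourceIPAddress": "203.0.113.10",
  "userIdentity": {"type": "IAMUser", "userName": "example-user"}
}'''

event = json.loads(raw)

# Pull out the caller identity, timestamp, and source IP metadata
# described above into a one-line audit summary.
summary = (
    f"{event['eventTime']} {event['userIdentity']['userName']} "
    f"called {event['eventName']} from {event['sourceIPAddress']}"
)
print(summary)
```

In practice, the same extraction would be run across the log files CloudTrail delivers to the central S3 bucket, or handled by CloudWatch Logs metric filters.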

GuardDuty is a regional-based intelligent threat detection service that monitors for unusual behaviour from CloudTrail event logs, VPC flow logs, and DNS logs. Logs are assessed against multiple security feeds for anomalies and known malicious sources. GuardDuty can provide continuous security analysis, powered by machine learning, for your entire AWS environment across multiple accounts. GuardDuty findings are presented in a dashboard with priority level and severity score and integrate with other services such as CloudWatch and Lambda for remediation automation.

Config can record and capture resource changes in your environment for Configuration Items, detail resource relationships, store configuration history, provide a snapshot of configurations, act as a resource inventory for AWS resources, and allow you to check the compliance of those resources against pre-defined and custom rules. Config can enable notifications of changes, as well as detailing who made the change and when, by integrating with CloudTrail. When coupled with rules like encryption checks, Config can become a powerful security analysis tool.

The Security Pillar White Paper of the AWS Well-Architected Framework is worth reviewing as a starting point to these services.

AWS Control Tower & Landing Zone

AWS Control Tower is a paid-for option for customers who want to quickly set up and govern new AWS environments based on AWS best practices, for those with multiple and distributed applications that will span many accounts. AWS Control Tower consists of established blueprints allowing for automated setup and configuration of your multi-account AWS environments and Identity Federation. Account Factory automates and standardises account provisioning from a configurable account template, with pre-approved network and region settings. Guardrails prevent resources from being deployed that do not conform to policies and detect and remediate non-compliant accounts and resources. Control Tower dashboards provide visual summaries to monitor security and compliance across the organization.

One of the components included with Control Tower is AWS Landing Zone. A Landing Zone can also be implemented yourself outside of Control Tower; it deploys a multi-account AWS environment based on AWS well-architected, security, and compliance best practices. The Landing Zone deployment is made up of four accounts for AWS Organization & Single Sign-On (SSO), shared infrastructure services, log archive, and security. The good thing about AWS Landing Zone is that it provides a security baseline for several security services and settings; you can see the full list here. Once again, you can create these accounts and services yourself manually if there is a need for greater customisation or granular control; however, doing so is time-consuming.

SDDC Cross-Account AWS Connectivity

Not having a Landing Zone or Organization and account structure in place does not stop or delay the VMware Cloud on AWS deployment. For example, you can still create the connected AWS account, and your own central shared services or network account, if it is appropriate to your design, and retrospectively fit these accounts into the Organization hierarchy.

In the setup below, the connected AWS VPC has been reserved for SDDC operations only, in this case, VM backups. The SDDC router is connected to this VPC / account using the subnets defined in the ENI configuration at deployment, meaning backups will run over the 25 Gbps cross-VPC link with no additional data charges. Further services can be deployed to this account, but as the number of AWS services and environments (prod, dev, test, etc.) start to scale, it is good practice to use separate accounts. This is where the Transit Gateway and centralised shared network account can help.


The Transit Gateway (TGW) allows customers to connect many VPCs (including the SDDC), and on-premises networks to a single gateway. In the example architecture, we have the following, which provides connectivity between VMware Cloud on AWS, multiple VPCs and accounts, and the on-premises data centre, using a central shared network services model:

  • Direct Connect has been attached to the TGW using a Direct Connect Gateway, you can read how here.
  • VMware Cloud on AWS has been connected to the TGW using a VPN attachment. The VPN needs setting up in the Cloud Services Portal, you can read how here. Note that, to my knowledge, using this model in conjunction with HCX L2 extension may not be supported end to end.
  • Additional VPCs are connected to the TGW using VPC attachments, you can read how here.

Outside of VMware Cloud, VPC Peering has traditionally been used to provide one-to-one private network connectivity between VPCs, including VPCs in different accounts. VPC Peering cannot be configured in the SDDC as we do not have access to the underlying AWS account. If it is unlikely that the VMware Cloud customer will be a heavy user of native AWS services, then using a TGW may be overkill, and the SDDC connected VPC may suffice.

For small environments, a VPN connection between additional VPCs can be configured on a one-to-one basis from the VMware Cloud Services Portal. However, as the number of VPCs and accounts begins to scale, the VPN approach becomes harder to manage. VPC endpoints can also be used for targeted, service-level access to resources in other accounts; you can see examples of this at AWS Native Services Integration With VMware Cloud on AWS.

In any case, when connecting VPCs and networks together, it is essential to remember that you should not have overlapping IP address ranges. This is relatively easy to plan for greenfield AWS environments but may need further consideration when connecting your existing on-premises networks.
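Checking a candidate address plan for overlaps is straightforward to script; Python's standard ipaddress module does the comparison directly. The CIDR ranges below are made-up examples, one of which deliberately overlaps.

```python
import ipaddress
from itertools import combinations

# Hypothetical address plan: SDDC, connected VPC, and on-premises ranges.
cidrs = {
    "sddc":       "10.2.0.0/16",
    "vpc-shared": "10.3.0.0/16",
    "on-prem":    "10.3.128.0/20",  # sits inside vpc-shared: a planning mistake
}

def find_overlaps(ranges):
    """Return the pairs of names whose networks overlap."""
    nets = {name: ipaddress.ip_network(cidr) for name, cidr in ranges.items()}
    return [
        (a, b)
        for (a, na), (b, nb) in combinations(nets.items(), 2)
        if na.overlaps(nb)
    ]

print(find_overlaps(cidrs))
```

Running a check like this against every proposed VPC, SDDC, and on-premises range before creating attachments avoids painful re-addressing later.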

Now, when we pull the shared network and account management together, we start to have the basis for the DataOps team to deploy their own AWS services with cross-environment access, governed by organizational policy and control. This post was intended as a high-level view of account and network management for VMware Cloud on AWS design integration with native AWS. Allowing connectivity into your SDDC requires correct firewall configuration; you can view examples at Connecting VMware Cloud on AWS to Amazon EC2.


AWS Native Services Integration With VMware Cloud on AWS

This post lists some of the available options for connecting VMware Cloud on Amazon Web Services (AWS) with native AWS services for hybrid cloud deployments. Read more about multi-account and VPC management at Building AWS Environments for VMware Cloud Customers.

The most common way of consuming AWS services with VMware Cloud on AWS is to use the built-in Elastic Network Interface (ENI) functionality. Each VMware Cloud Software-Defined Data Center (SDDC) can be connected to another AWS Virtual Private Cloud (VPC) during the deployment phase. A VPC is Amazon’s logical separation of virtual networks. At scale, you may choose to have many VPCs and many accounts for different applications and environments. Multiple VPCs can be connected together using an AWS Transit Gateway (TGW). A further option we will look at is VPC Endpoints, enabling you to privately connect to supported AWS services and endpoints.

1. Connected VPC

The AWS bare metal hosts deployed for VMware Cloud on AWS use a redundant 25 Gbps physical interface or Elastic Network Adaptor (ENA). The physical interface uses a trunk port to carry multiple VLANs for services like management, vMotion, NSX, and connectivity to the AWS backbone network.

The cross-linked VPC architecture is provided by a series of ENIs. Each host in the vSphere cluster uses the network adaptor outlined above to provide an individual cross-VPC ENI per physical host, supporting high-bandwidth, low-latency connectivity to native AWS services.


VMware Cloud on AWS uses the AWS VPC as an underlay for NSX-T. The NSX Edge (Tier 0) router is a virtual router acting as the uplink to the connected VPC. The active ENI in use is the physical ESXi host where the virtual router is running. The connected VPC is owned and managed by the customer; any native services deployed are billed separately by AWS. When deploying the SDDC, the connected account and VPC are required along with a private subnet in each applicable Availability Zone (AZ). A static route is created for the defined subnets adding the connected VPC router as the next hop.

Traffic that traverses the ENI is not chargeable; however, cross-AZ charges do need to be taken into consideration if a Stretched Cluster is in use. During provisioning of the SDDC, and connection of the customer-managed AWS account, a CloudFormation template is deployed creating the necessary AWS Identity and Access Management (IAM) roles and ENI configuration.


Following the SDDC deployment, you can view the connected account, VPC, ENI, and subnet details in the Connected VPC menu under the Networking & Security tab of the SDDC, from the VMware Cloud Services Portal.

Access to and from native AWS services is controlled, and needs to be explicitly opened, using the NSX firewalls (gateway and distributed) and AWS Security Groups. For an example configuration, see the Connecting VMware Cloud on AWS to Amazon EC2 post or the Access an EC2 Instance section of the VMware Cloud on AWS Docs page.

2. VPC Endpoints

VPC Endpoints allow private connectivity between your VPC and supported AWS services or custom applications. Network traffic traversing a VPC endpoint does not leave the AWS backbone network and therefore does not require an Internet Gateway, Direct Connect, or VPN.

The Access an S3 Bucket Using an S3 Endpoint section of the VMware Cloud on AWS Docs page details the process for configuring a Gateway VPC Endpoint to access AWS Simple Storage Services (S3) from VMware Cloud on AWS, without having to go out to the Internet. Furthermore, you can use Interface VPC Endpoints to connect to supported AWS services in another VPC, or VPC Endpoint Services (AWS PrivateLink) to connect to custom applications in another VPC. Here are some examples:


The general process for creating an endpoint is the same across these VPC Endpoint types. In the example below, we are connecting to a VPC Endpoint Service for Splunk, fronted by a Network Load Balancer (NLB) in another VPC. The administrator of the VPC Endpoint Service needs to grant IAM service consumer permissions and accept the incoming connection, as detailed in the AWS documentation here.

In the AWS console, I log into the connected account and select the VPC service. I choose Endpoints and Create Endpoint. To create a Gateway VPC Endpoint (e.g. for S3 or DynamoDB) or an Interface VPC Endpoint for other supported AWS services, I would select the appropriate service from the AWS services category. In this instance, I use Find service by name and enter the endpoint private service name. Either way, the key point is that I select the connected VPC from the VPC drop-down and the subnets that match up with those used for the ENI when deploying the VMware Cloud on AWS SDDC.
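Scripted, the same request could be assembled as a parameter set for the EC2 CreateVpcEndpoint API (for example via boto3's create_vpc_endpoint). All identifiers and the service name below are hypothetical placeholders.

```python
# Sketch: parameters for EC2's CreateVpcEndpoint, e.g. passed as
# ec2_client.create_vpc_endpoint(**params) with boto3. Every ID and
# the service name here is a hypothetical placeholder.
def endpoint_params(vpc_id, service_name, subnet_ids, sg_ids):
    """Assemble an Interface VPC Endpoint request for an endpoint service."""
    return {
        "VpcEndpointType": "Interface",
        "VpcId": vpc_id,                # the VMware Cloud-connected VPC
        "ServiceName": service_name,    # the endpoint service's private name
        "SubnetIds": subnet_ids,        # match the ENI-connected subnets
        "SecurityGroupIds": sg_ids,
    }

params = endpoint_params(
    "vpc-0123456789abcdef0",
    "com.amazonaws.vpce.eu-west-2.vpce-svc-0123456789abcdef0",
    ["subnet-0123456789abcdef0"],
    ["sg-0123456789abcdef0"],
)
print(params["VpcEndpointType"], params["ServiceName"])
```

Keeping the subnet IDs aligned with the ENI subnets is the scripted equivalent of the drop-down selection described above.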


By using the cross-VPC linked subnets the Virtual Machines in the SDDC will utilise the static route across the ENI outlined in the Connected VPC section above. AWS Security Groups can be used to limit this to certain source IP addresses from within the SDDC or the wider VPC if required. In this instance, we can successfully test the connection over port 443 following the creation of the VPC Endpoint.

3. Additional VPC Connectivity

Traditionally, VPC Peering has been used to provide one-to-one private network connectivity between VPCs, including VPCs in different accounts. VPC Peering cannot be configured in the SDDC as we do not have access to the underlying AWS account. VPN connections between additional VPCs and the SDDC router (Tier 0) can be configured from the VMware Cloud Services Portal, enabling VMware Cloud on AWS connectivity with other VPC environments. As the number of VPCs and accounts begins to scale, the VPN approach becomes harder to manage.

This predicament is resolved with a relatively new addition to AWS: the Transit Gateway (TGW). The native AWS TGW is available now and acts as a transit network hub allowing you to connect multiple VPCs and on-premises networks (using Direct Connect or VPN attachments). A Managed Transit Gateway is being developed by VMware to assist with multi-SDDC and multi-VPC connectivity. You can review how the native AWS Transit Gateway fits into the VMware Cloud on AWS architecture on the VMware Network Virtualization blog: VMware Cloud on AWS with Transit Gateway Demo.

