Setting Service Dependencies in Windows

It may be necessary to delay the loading of a specific service until another service has started and is available for use, such as in an application stack, or for troubleshooting purposes. This quick post will walk through creating a dependency, or sequence of dependencies, for services on a Windows machine.

Many built-in Windows components and third-party applications include dependencies configured during installation; these are visible from the Services GUI. To add dependencies after installation we can use the Windows Service Control (SC) command or add the entries manually in the registry.


Command Line

Open an elevated command prompt. Be aware that setting dependencies overwrites any existing dependencies, so first list the current dependencies using sc qc. The example below lists the properties, including dependencies, of Service 1.

sc qc "Service 1"

Use sc config to add a dependency. In the example below Service 1 depends on Service 2, which means Service 1 will not start until Service 2 has started successfully. Note that the space after depend= is required.

sc config "Service 1" depend= "Service 2"

To set multiple dependencies, separate the services with a forward slash.

sc config "Service 1" depend= "Service 2"/"Service 3"

To remove all dependencies use the following command.

sc config "Service 1" depend= /
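At any point you can verify the configured dependencies without parsing sc output; a minimal PowerShell sketch, using the same placeholder service name:

# List the services that "Service 1" depends on, with their current status
(Get-Service -Name "Service 1").ServicesDependedOn | Select-Object Name, Status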

Registry

Open regedit and locate the following key.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services

There will be a subkey listed for each installed service; click the subkey for the service you wish to configure.

Click Edit > New > Multi-String Value. Rename the value to DependOnService, then right click it, select Modify, enter the names of the services you want this service to depend on (one per line), and click OK.
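The registry method can also be scripted; a minimal PowerShell sketch, assuming a hypothetical service named Service1 that should depend on Service2 and Service3:

# DependOnService is a REG_MULTI_SZ value; supply one service name per element
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Service1" -Name DependOnService -Type MultiString -Value @("Service2","Service3")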


Windows 2016 Storage Spaces Direct

Storage Spaces Direct for Windows Server 2016 is a software-defined storage solution providing pooled storage resources across industry-standard servers with attached local drives. Storage Spaces Direct (S2D) provides scalability, built-in fault tolerance, resource efficiency, high performance, simplified management, and cost savings.

Storage Spaces Direct is a feature included at no extra cost with the Datacenter edition of Windows Server 2016. S2D can be deployed across Windows clusters comprising between 2 and 16 physical servers and over 400 drives, using the Software Storage Bus to establish a software-defined storage fabric spanning the cluster. Existing clusters can be scaled out by simply adding more drives or more servers; Storage Spaces Direct automatically detects the additional resources and absorbs the drives into the pool, redistributing existing volumes. Resiliency is provided across drives, components, and servers, and can also be configured for chassis, rack, and site fault tolerance by creating fault domains to which the data spread will conform. The video below, provided by Microsoft, goes into more detail about fault domains and how they provide resiliency.

Furthermore, volumes can be configured to use mirror resiliency or parity resiliency to protect data. Mirror resiliency protects against drive and server failures by storing a default of 3 copies of data across different drives in different servers; this is a simple deployment with minimal CPU overhead but a relatively inefficient use of storage. Alternatively, parity resiliency spreads parity symbols across a larger set of data symbols, providing drive and server resiliency with a more efficient use of storage resources, although it requires at least 4 physical servers. You can learn more about both methods in the Volume Resiliency blog by Microsoft.

The main use case for Storage Spaces Direct is a private cloud, either on or off-premises, using one of two deployment models. In a hyper-converged deployment, compute and storage reside on the same servers, and virtual machines sit directly on top of the volumes provided by S2D. In a converged (or private cloud storage) deployment, S2D is disaggregated from the hypervisor, providing a separate storage cluster for larger-scale deployments such as IaaS (Infrastructure as a Service); a Scale-out File Server (SoFS) is built on top of S2D to provide network-attached storage over SMB3 file shares.

Storage Spaces Direct is configured using a number of PowerShell cmdlets, and utilises Failover Clustering and Cluster Shared Volumes. For instructions on enabling and configuring S2D see Configuring Storage Spaces Direct – Step by Step, Robert Keith, Argon Systems. The requirements are as follows, with a minimal PowerShell sketch after the list:

  • Windows Server 2016 Datacenter Edition.
  • Minimum of 2 servers, maximum of 16, with local-attached SATA, SAS, or NVMe drives.
  • Each server must have at least 2 solid-state drives plus at least 4 additional drives; the read/write cache uses the fastest media present by default.
  • SATA and SAS devices should sit behind an HBA and SAS expander.
  • Storage Spaces Direct uses SMB3, including SMB Direct and SMB Multichannel, over Ethernet to communicate between servers. 10 GbE or above is recommended for optimum performance.
  • Network adapters should support RDMA (Remote Direct Memory Access) for SMB Direct, since all inter-server traffic uses SMB (Server Message Block).
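To give a feel for the PowerShell workflow, below is a minimal sketch of standing up S2D on a new cluster; the node and cluster names are hypothetical, and the step-by-step guide linked above remains the authoritative reference:

# Validate the nodes for Storage Spaces Direct
Test-Cluster -Node "Node1","Node2","Node3","Node4" -Include "Storage Spaces Direct","Inventory","Network","System Configuration"

# Create the cluster with no shared storage, then enable S2D (claims all eligible local drives into a pool)
New-Cluster -Name "S2D-Cluster" -Node "Node1","Node2","Node3","Node4" -NoStorage
Enable-ClusterStorageSpacesDirect

# Carve a mirrored, cluster-shared volume out of the auto-created pool
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume01" -FileSystem CSVFS_ReFS -Size 1TB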


Windows 2016 Containers

Containers are portable operating environments which typically utilise the same kernel whilst isolating applications. Software developers use containers to build, ship, and run applications. To the application, the container gives the illusion of a totally isolated and independent operating system; in much the same way that a virtual machine doesn’t know it shares compute with other virtual machines, applications within containers are unaware they share a base operating system with other containers.

Using namespace isolation, the host projects a virtualised namespace containing all the resources an application can interact with, such as files, network ports, and running processes. Namespace isolation is extremely efficient since many of the underlying OS files, directories, and running services are shared between containers. When an application makes changes to these resources, the changes are written to a distinct copy of the file or service using copy-on-write.

Containers house everything an application needs to run, which gives them greater portability, allowing for exact copies between development and production environments. By using containers, software developers and IT professionals can also benefit from more efficient use of existing infrastructure, standardised environments, and simplified administration. This is evident from the Microsoft images below.

[Image: deploying applications using traditional virtual machines]

[Image: deploying applications using containers]

The use of containers isn’t new technology; it had been around in Linux for years before the toolset was popularised by Docker. Docker is a container technology which automates and simplifies the creation and deployment of containers to build, ship, and run distributed applications from any environment. Docker has partnered with Microsoft to develop a Docker engine for Windows Server 2016 and Windows 10, enabling users to take advantage of container functionality with Windows.

Windows containers come in two formats: Windows Server containers, which isolate applications using namespace isolation technology, and Hyper-V containers, which run inside optimised virtual machines.

Hyper-V containers have identical functionality to Windows Server containers; the only difference is the isolation of the kernel. Whereas Windows Server containers share the same kernel with other containers and the host, Hyper-V containers provide kernel-level isolation by provisioning an individually optimised virtual machine for each container. A use case for such isolation could be a secure environment with requirements such as PCI compliance. Hyper-V containers require nested virtualisation, which is currently only compatible with Intel processors.

Windows containers require the Containers feature and the Docker engine to be installed. Once these two components are in place you can begin building Windows Server containers.
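A minimal PowerShell sketch of that process on Windows Server 2016; the OneGet provider shown was the documented route at the time of writing, so treat the exact package names as assumptions:

# Install the Containers feature, then reboot
Install-WindowsFeature -Name Containers
Restart-Computer -Force

# After the reboot, install the Docker engine via the PowerShell Gallery provider, then reboot again
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider -Force
Restart-Computer -Force

# Run a Windows Server container; add --isolation=hyperv to run it as a Hyper-V container instead
docker run -it microsoft/windowsservercore cmd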


Microsoft Azure offers a free trial with £125 of credit; to deploy a Windows Server 2016 virtual machine and try containers for yourself, see Azure Virtual Machine Deployment.

See also VMware Container Projects.

Updating WSUS Group Policy

If you need to update group policy to change an update schedule or make other alterations, you can do so even after patches have been approved on the WSUS server.

Open Group Policy Management and browse to the GPO you want to update, then right click and Edit the GPO. If you’re using Advanced Group Policy Management you’ll need to check out the policy before editing. Expand Computer Configuration > Policies > Administrative Templates > Windows Components > Windows Update.


Double click the setting you want to change and update as appropriate. For the purpose of this post I have updated the scheduled install day from ‘1 – Every Sunday’ to ‘4 – Every Wednesday’.


Click OK to save the change. If you’re using Advanced Group Policy Management you’ll need to right click the GPO, check it in, and then deploy the GPO.

Depending on your environment you may need to wait a short while for replication. You can force a group policy refresh on a server by running gpupdate /force from the command line. Furthermore, if you are running Windows Server 2012 or 2012 R2 you can right click an OU in Group Policy Management and select Group Policy Update.
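On Windows Server 2012 or later the remote refresh can also be triggered with the GroupPolicy PowerShell module; a quick sketch, with a hypothetical computer name:

# Force a group policy refresh of computer settings on a remote server
Invoke-GPUpdate -Computer "WSUS-CLIENT01" -Target Computer -Force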


We can test whether the group policy has updated by opening the registry on one of the servers and browsing to HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU. Cross-check the settings in the registry with those you changed in the group policy.


You’ll notice straight away that the data is a decimal or hexadecimal value; you may have noticed too that the options in the GPO editor have a corresponding number. In this case I changed the scheduled install day from ‘1 – Sunday’ to ‘4 – Wednesday’, and the value of the registry option ScheduledInstallDay has changed from 1 to 4, so I know the change has taken effect.

Another important thing to note is the UseWUServer option; this must be set to 1 to use a WSUS server, or none of the other options apply. You can go up a level to ‘Windows Update’ to check the configured Windows Update server.
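Rather than browsing regedit on each server, both checks can be done in one go with PowerShell; a minimal sketch:

# Read the Automatic Updates policy values pushed by the GPO
Get-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" | Select-Object UseWUServer, ScheduledInstallDay, ScheduledInstallTime

# The configured WSUS server sits one level up
Get-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" | Select-Object WUServer, WUStatusServer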

Finally, here is a really useful list of registry values for Automatic Updates: https://technet.microsoft.com/en-us/library/dd939844%28v=ws.10%29.aspx.

Configuring Nimble Storage

Once your Nimble storage array is set up and zoned with the appropriate servers, you can start presenting volumes. The first step is to create an initiator group; a volume or LUN is mapped to an initiator group, which contains the initiators’ World Wide Port Names (WWPNs).

Open a browser and navigate to the IP address of the Nimble device. Select Manage > Initiator Groups and click Create. Enter a name and add the aliases and WWPNs of the servers you want to present storage to. You can create multiple initiator groups, and a server can be a member of more than one.


When all your initiators are added and grouped you can proceed to create and map a volume. Click Manage > Volumes and select New Volume.

Enter a volume name and choose a performance policy based on the application or platform you are using, or create your own. From the drop-down list select the initiator group, created earlier, to present the new volume to. The LUN number is auto-populated but can be changed. Click Next.


Enter the size of the volume and any reserves or quotas you wish to apply, then click Next. Volumes are thin provisioned by default.

If you want to configure snapshots to protect your volume, or volume caching, you can do so here by selecting the relevant options; otherwise click No volume collection, then Next and Finish.

The volume is now mapped to the initiator group. There are a couple of further steps you should take on your server before using the newly presented storage.


Windows Initiator

If you are presenting Nimble storage directly to a Windows host you should install the Nimble Windows Toolkit. The toolkit includes the Nimble Device Specific Module (DSM), and you will need the Multipath I/O (MPIO) feature installed. For the purpose of this post I will be installing version 2.3.2 of the Nimble Windows Toolkit, which requires Windows 2008 R2 or later and .NET Framework 4.5.2.
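The MPIO prerequisite can be added ahead of time with PowerShell; a one-line sketch (the feature typically requires a reboot, which -Restart performs automatically):

# Install the Multipath I/O feature used by the Nimble DSM
Install-WindowsFeature -Name Multipath-IO -Restart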

Log in to Infosight and select Software Downloads from the Resources drop-down. Click Windows Toolkit and download the appropriate version. Check the release notes for any required Windows hotfixes.


On your Windows server run the setup as an administrator and follow the on-screen prompts. Once the Nimble Windows Toolkit is installed the server will require a reboot. You will then see Nimble Connection Manager listed in Programs, and the Nimble DSM in use in the properties of the disk in Disk Management.

ESXi Initiator

If you are presenting Nimble storage to an ESXi host you can install Nimble Connection Manager which includes the Nimble Path Selection Policy (PSP) for VMware. You will need ESXi 5.x or above.

Log in to Infosight.nimblestorage.com and select Software Downloads from the Resources drop-down. Click Connection Manager (NCM) for VMware and download the appropriate version.

Within the downloaded zip file you will find the Nimble VIBs. You can deploy the zip file using VMware Update Manager, install the VIBs on your hosts manually, or build them into your image if you are using VMware Auto Deploy.
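For the manual route, the zip is an offline bundle that can be installed from the ESXi shell with esxcli; a sketch, with a hypothetical datastore path (the bundle must be referenced by its absolute path):

# Install the Nimble NCM offline bundle on the host
esxcli software vib install -d /vmfs/volumes/datastore1/ncm-bundle.zip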
