Azure Site Recovery (ASR): a great way to create a SharePoint test environment


Any IT pro who has managed SharePoint farms knows that it is difficult to set up a SharePoint test environment on premises.  It takes money, time, and effort to replicate a production SharePoint farm.  Azure Site Recovery (ASR) is a cloud replication service that provides a great way to quickly create a SharePoint test environment.

  1. The Problem we are trying to solve
  2. The Solution: Azure Site Recovery to quickly and reliably create a SharePoint test environment
  3. Steps to Create a SharePoint test environment with ASR
    • 0: (optional) Create a local SP environment if you do not have one yet
    • 1: Check the requirements for ASR replication and prepare your servers
    • 2: Replicate your SP farm servers to Azure using ASR
    • 3: Using the ASR replica, create the SP Test Environment in Azure

The Problem: creating a SharePoint test environment – the traditional way – takes MONTHS

In my last position as an IT Manager, I was responsible for a SharePoint farm.  For a long time, we did not have the luxury of a test environment for SharePoint.  I was finally able to procure an HP Z220 workstation and used it to create a SharePoint test environment.

I increased the memory and hard disk space on the HP Z220 and then used it to build a SharePoint test farm.  However, setting up this SharePoint test environment took two months.  I spent time ordering and receiving the server and the upgrade parts.  And I spent time setting up the server and copying the production virtual machines – DC, SQL Server and SharePoint server – to the sandbox test environment.

The Solution: Azure Site Recovery – a quick and reliable mechanism to create a SharePoint test environment

What is Azure Site Recovery (ASR)? 

Azure Site Recovery is a business continuity and disaster recovery solution in the cloud.  You can protect on-premises servers or virtual machines by replicating them either to Azure or to a secondary site (data center).  If the primary data center location is down, you can fail over to the secondary site (Azure or the secondary data center).  Then when the primary site is back up and running, you can fail back to it.

Can Azure Site Recovery be used to create a SharePoint test environment?

Yes, since ASR can protect whole servers or virtual machines running different workloads, it can certainly replicate those SharePoint servers.  ASR can replicate all the components of a SharePoint farm:

  • Active Directory Domain Services (ADDS) Domain Controller (DC)
  • SQL Server
  • SharePoint server

ASR can take images of your Production servers.  From those images, you can then create application-consistent replica servers – in the cloud or on the secondary site.  The replica servers may then be used as test servers in a SharePoint test environment.

The whole process is fast.  At home, I have a fiber-optic connection, and I was able to replicate a 3-server SharePoint farm to Azure within a few days.  However, the speed of replication depends on your network’s speed and load.

Steps to create a SharePoint Test Environment with ASR

We will cover the creation of the SharePoint test environment in three major steps.

0) Optional: if, like me, you want to create a local SharePoint environment – on your laptop at home – so that you can try out Azure Site Recovery, find out how to create such a SharePoint environment using Windows 10 and Windows Server 2016 Nested Virtualization, step by step.



The following steps will be covered in separate posts.  Stay tuned:

1) Check the requirements for Azure Site Recovery (ASR) replication and prepare the SharePoint Farm’s servers
2) Set up a Recovery Services Vault in Azure, go through the Getting Started wizard, and then start replication of the SharePoint servers
3) Once we have replicated all the SharePoint servers, create a SharePoint test farm environment in the Cloud



Create a local SharePoint farm – on your laptop – using Windows 10 Hyper-V and Windows Server 2016 Nested Virtualization

  1. Overview
  2. The Problem: how to set up a SharePoint farm on your laptop
  3. The Solution: Hyper-V and Nested Virtualization
  4. Steps in Brief
  5. Detailed Steps


The combination of Windows 10 and Windows Server 2016 enables you to create a test lab on your local PC or laptop!  This is because most versions of Windows 10 have the Hyper-V feature available, and Windows Server 2016 has a new feature called Nested Virtualization.  So you can easily create a virtual test lab consisting of one virtual machine hosting several other virtual machines!

Nested Virtualization on a Windows 10 laptop

Recently, I needed to create a SharePoint farm on my laptop.  This SharePoint farm would become my pseudo “Production” environment, which I would then replicate to the cloud using Azure Site Recovery (ASR).  So Windows 10 and Windows Server 2016 were the perfect tools for the job.

Note: The solution presented in this post can be used to create any test lab (sandbox environment) – not just a SharePoint farm.

The Problem – how to set up a SharePoint farm on a laptop?

As mentioned earlier, I will be using Azure Site Recovery (ASR) to replicate my local SharePoint farm to the Cloud.  This means that I need nested virtualization.  I cannot use Windows 10 Hyper-V to create the three SharePoint VM’s because Windows 10 is not supported as an ASR host.  So I have to install a host server VM (Windows Server 2012 R2 or Windows Server 2016) on Windows 10 Hyper-V.  And subsequently, on this host server, I have to install the three SharePoint server VM’s.

Furthermore, having the SharePoint Farm VM’s hosted on Windows 10 does not represent a real production environment.  The chosen architecture – 3 SP Farm VM’s hosted on Windows Server 2016 host – is more realistic and represents the architecture of a real Production SharePoint farm.

The Solution – Hyper-V and Nested Virtualization:

In order to create a SharePoint farm with 3 servers – a DC, a SQL Server, and a SharePoint server – you need either 3 physical servers or 3 virtual machines.  At home, I do not have 3 physical servers, so VM’s are the only way to go.

Windows Server 2016 has a wonderful new feature called Nested Virtualization!  Nested Virtualization is great for test scenarios and allows such a test lab to be created.  In previous Windows Server versions you could not nest a Hyper-V environment inside another Hyper-V environment – or at least, the nesting was not supported by Microsoft.

And certain versions of Windows 10 have the Hyper-V feature readily available.  You just need to turn it ON.  This allows you to create the Windows Server 2016 host Virtual Machine (VM) without installing any new software or applications.

Setting Up the SharePoint Farm – In Brief

  1. Check the software and hardware requirements on your laptop or desktop PC
  2. Turn Hyper-V feature ON in Windows 10
  3. Create a new VM and install Windows Server 2016 on it.  Enable Nested Virtualization for this VM.
  4. Set up networking for your SharePoint farm.  We will use a separate subnet for each SharePoint VM and we will link them up with RRAS.  You do not have to use this many subnets.  You can use one subnet for all your VM’s.  I chose to use 3 subnets because I wanted my environment to mimic a VNet in Azure.
  5. Install the 3 SharePoint VM’s: the DC, SQL Server 2016, and SharePoint 2016

Setting up the SharePoint Farm – Detailed Steps:

1. Check Windows 10 Hyper-V requirements:

Software requirements:

The following Windows 10 versions support Hyper-V virtualization: Enterprise, Professional and Education.

Hardware requirements:

Hyper-V normally requires at least 4 GB of memory.  However, for this SharePoint farm (1 host VM and 3 guest VM’s), at least 10 GB of RAM is needed.  I recommend 16 GB of RAM.  With 16 GB on your laptop or PC, you will have about 10 GB left for the SharePoint farm.  I assigned 1 GB RAM to the DC VM and 2 GB each to the SharePoint VM and the SQL Server VM.

Remember that the Windows 10 OS uses about 2 GB, and there is something called the host reserve, which takes about 2.5 GB (depending on how much physical RAM exists on the machine).

Other hardware requirements:

  • 64-bit Processor with Second Level Address Translation (SLAT).
  • CPU support for VM Monitor Mode Extension (VT-c on Intel CPU’s).

A good way to check all the system requirements for Hyper-V is to run the command-line utility systeminfo.exe.  Open a Command Prompt window and type systeminfo.exe.  The output of the command will contain a section on Hyper-V Requirements.  Make sure all requirements return a “Yes”.
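If you prefer to do this from Powershell, you can filter the output down to just that section (the section title below is what current Windows 10 builds print; older builds may word it slightly differently):

```powershell
# Print the "Hyper-V Requirements" line plus the four requirement lines that follow it
systeminfo.exe | Select-String -Pattern 'Hyper-V Requirements' -Context 0,4
```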

2. Turn Hyper-V Feature ON in Windows 10

In the Control Panel, start Programs and Features and click on “Turn Windows Features On or Off”.  Select the Hyper-V checkbox.  This includes “Hyper-V” platform and “Hyper-V Management Tools”.  Finally, perform a restart.

3. Create a new Windows Server 2016 VM and Enable Nested Virtualization.

1. Install Windows Server 2016 on this VM.  This Windows Server 2016 VM will host your SharePoint farm.

2. With the Windows Server 2016 VM OFF – in Windows 10 – run the following Powershell commands:
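The commands below are a sketch of what is needed; “WS2016Host” is a placeholder for whatever you named your Windows Server 2016 VM:

```powershell
# Expose the physical CPU's virtualization extensions to the guest VM
Set-VMProcessor -VMName "WS2016Host" -ExposeVirtualizationExtensions $true

# Nested virtualization requires static memory, so disable Dynamic Memory
Set-VMMemory -VMName "WS2016Host" -DynamicMemoryEnabled $false

# Allow the nested guests' MAC addresses to pass through the virtual switch
Get-VMNetworkAdapter -VMName "WS2016Host" | Set-VMNetworkAdapter -MacAddressSpoofing On
```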

3. Under the VM settings for the Windows Server 2016 VM, turn OFF Dynamic memory for the VM

4. Give the VM enough memory.  I assigned 10 GB to the VM.

4. Set up networking for your SharePoint farm

I wanted the network of my lab to resemble a real Production environment and to mimic the virtual network (VNet) that is used in Azure.  So I configured Hyper-V for multiple subnets while using only one NIC.

On the Windows Server 2016 host:

      1. Create 4 Virtual Switches: one external and 3 internal.
        I created one internal switch each for the DC VM, the SQL Server VM, and the SharePoint VM.

        Hyper-V Switches
      2. Configure the network adapters on the Hyper-V host

        Network Adapter on Host VM
      3. Configure Routing and Remote Access Service (RRAS) on the host.  RRAS acts as a software router and connects the subnets.
        1. In Server Manager, click on Manage – Add Roles and Features
        2. The first page is informational and may be skipped
        3. On the second page: choose Role-based or Feature-based installation
        4. Choose the server (local server) where the RRAS feature will be added
        5. Select Remote Access on the Roles page
        6. Under the Features page:
          1. select RAS Connection Manager Administration Kit (CMAK)
          2. under Remote Server Admin Tools – Role Admin Tools: select Remote Access Management Tools
        7. On the Role Services page, select “DirectAccess and VPN (RAS)” and select Routing
        8. You will be prompted to add features required by “DirectAccess and VPN (RAS)”; click Yes
        9. Make sure Routing is still selected
        10. Review the information on the Web Server Role (IIS) page
        11. Click Next on the Role Services page
        12. Do a final confirmation and Install

          RRAS confirmation page
        13. When it is done, Close the wizard
        14. Open the Routing and Remote Access application
        15. Right click on your server name and select “Configure and Enable Routing and Remote Access”

          RRAS Configure and Enable your Server
        16. Select Custom Configuration

          Select Custom
        17. Select NAT and LAN Routing.  NAT allows your VM’s to reach the Internet through the host’s external connection

          Select NAT and LAN Routing
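As a rough Powershell equivalent of step 1 above (the switch names and the physical adapter name “Ethernet” are placeholders for your own):

```powershell
# One external switch bound to the physical NIC (shared with the host OS)
New-VMSwitch -Name "External" -NetAdapterName "Ethernet" -AllowManagementOS $true

# Three internal switches, one per SharePoint farm subnet
New-VMSwitch -Name "DC-Subnet"  -SwitchType Internal
New-VMSwitch -Name "SQL-Subnet" -SwitchType Internal
New-VMSwitch -Name "SP-Subnet"  -SwitchType Internal
```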

5. Create 3 new VM’s inside the Hyper-V Host

    1. Install the Domain Controller (DC).  You can use Desired State Configuration (DSC) to set up the ADDS Domain Controller automatically with a script.  You can see the steps in my blog post on the subject.
    2. Install SQL Server 2016.
    3. Install SharePoint Server 2016.

Create a Load Balanced Azure IAAS Virtual Machine (VM) with an RDP connection


A very common scenario in Azure Infrastructure as a Service (IAAS) is to have two or more VM’s behind an external, Internet-facing Azure load balancer.  The load balancer provides high availability and network performance by distributing traffic according to a predetermined algorithm.

Azure VM’s behind a Load Balancer

In this exercise we will create a load-balanced VM in Azure – using Powershell.  And we will enable RDP access to the VM by creating a NAT rule.


In our scenario, we are creating one VM, and the load balancer forwards Internet traffic to the VM using an RDP NAT rule that maps an Azure public IP address to a private IP address.  The private IP address is on the Network Interface Card (NIC) attached to the VM.

Consequently, no Load Balancing rules are created or used – only a NAT rule – to allow RDP into the VM.

Please notice that the script below creates a second data disk on the VM (because the VM is to be a Domain Controller).  You can easily remove the code for the second disk if it is not needed.
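The load-balancer portion of such a script looks roughly like the sketch below (AzureRM module of that era; the resource names, $rg and $location are placeholders, $backendSubnet1 is the subnet object created earlier in the script, and the data-disk and VM-creation code is not shown):

```powershell
# Public IP and front-end configuration for the Internet-facing load balancer
$pip  = New-AzureRmPublicIpAddress -Name "lb-pip" -ResourceGroupName $rg `
          -Location $location -AllocationMethod Dynamic
$feIp = New-AzureRmLoadBalancerFrontendIpConfig -Name "LB-Frontend" -PublicIpAddress $pip

# Backend pool that will later hold the VM's NIC
$bePool = New-AzureRmLoadBalancerBackendAddressPoolConfig -Name "LB-Backend"

# NAT rule only (no load-balancing rule): map public port 3389 to the VM for RDP
$natRdp = New-AzureRmLoadBalancerInboundNatRuleConfig -Name "RDP" `
            -FrontendIpConfiguration $feIp -Protocol Tcp `
            -FrontendPort 3389 -BackendPort 3389

$lb = New-AzureRmLoadBalancer -Name "SP-LB" -ResourceGroupName $rg -Location $location `
        -FrontendIpConfiguration $feIp -BackendAddressPool $bePool -InboundNatRule $natRdp

# The NIC is created against the backend subnet and bound to the NAT rule;
# it is then passed to New-AzureRmVMConfig / New-AzureRmVM (not shown)
$nic = New-AzureRmNetworkInterface -Name "vm-nic" -ResourceGroupName $rg `
         -Location $location -SubnetId $backendSubnet1.Id `
         -LoadBalancerInboundNatRule $natRdp
```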

Lessons Learned

1. If $backendSubnet is used instead of $backendSubnet1, an error is produced: “Parameter set cannot be resolved using the specified named parameters” OR “Cannot parse the request”.

2. The Backend Address Pool contains the objects (IPs) targeted by the Load Balancer.  If you want to redirect via NAT, the VM’s NIC should be part of the backend pool.  Therefore, you need to associate the Backend Address Pool with the IP address of the VM (held in the NIC).  This needs to be done after the VM is created.

Build an on-premises Domain Controller (DC) using Desired State Configuration

In my last post I described how to create a DC in Azure using DSC and ARM Templates.  In this post, we will discuss how to automate the creation of a local – on premises – Active Directory Domain Services (ADDS) Domain Controller (DC) using DSC.


DSC has two modes: push mode and pull mode.

In push mode, you will author the configuration.  You will then stage the configuration by creating MOF files.  And finally, you will manually push the desired configuration onto the target server or node.  The target server can be the local server or a remote server.


On the other hand, in DSC PULL mode, you author the configuration and stage it onto a designated Pull server.  The target nodes contact the central pull server at regular intervals to obtain their desired configuration.

In our scenario, we will be using DSC in push mode.  We will author the configuration and push it onto the local Windows server (not remotely).



On the target Windows Server (Windows Server 2008 R2 SP1, Windows Server 2012, or Windows Server 2012 R2):

  1. Download and install Windows Management Framework (WMF).  WMF 5.0 is currently available and is the recommended version.
  2. Copy the script below to the target server
  3. Open the script below in Powershell ISE as administrator
  4. Install the required Powershell modules using install-module: xActiveDirectory, xComputerManagement, xNetworking and xStorage.

Run the script

Run the script in Powershell ISE.  The first command creates the .mof files which contain the desired configuration.  The second command actually applies the configuration to the local server.  After about half an hour and one reboot, you will have a fully functional Domain Controller with a new user (domain admin).
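The script itself is not reproduced here, but a minimal sketch of the push-mode pattern looks like this (the domain name and output path are placeholders, and the configuration-data hashtable is what allows a credential to be compiled into the MOF in a lab):

```powershell
configuration LocalDC {
    param([Parameter(Mandatory)][PSCredential]$DomainCred)

    Import-DscResource -ModuleName xActiveDirectory

    node 'localhost' {
        WindowsFeature ADDSInstall {
            Name   = 'AD-Domain-Services'
            Ensure = 'Present'
        }
        xADDomain FirstDC {
            # Short domain name, not the FQDN (the FQDN triggered the error
            # described under "Issues and Solutions")
            DomainName                    = 'contoso'
            DomainAdministratorCredential = $DomainCred
            SafemodeAdministratorPassword = $DomainCred
            DependsOn                     = '[WindowsFeature]ADDSInstall'
        }
    }
}

# Allow the plain-text credential in the compiled MOF (lab use only)
$cd = @{ AllNodes = @(@{ NodeName = 'localhost'; PSDscAllowPlainTextPassword = $true }) }

# First command: compile the configuration into .mof files
LocalDC -DomainCred (Get-Credential) -ConfigurationData $cd -OutputPath .\LocalDC

# Second command: push (apply) the configuration to the local server
Start-DscConfiguration -Path .\LocalDC -Wait -Verbose -Force
```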

Lessons Learned

Since the Powershell xActiveDirectory module is updated frequently, a DSC script that worked a year ago needs to be updated to work with WMF 5.0 (as of the last quarter of 2016).

WMF 5.0 is included in the latest version of Windows 10 and included on Windows Server 2016.

Issues and Solutions

With some of the xActiveDirectory resources, the use of the fully qualified domain name (FQDN) produced an error: “Could not find mandatory property DomainName. Add this property and try again.”

Solution: use the first part of the domain name

Building an ADDS DC in Azure IAAS using ARM and DSC

Sometimes we may need to create an Active Directory Domain Services Domain Controller (DC) in an Azure IAAS Virtual Machine (VM).  For example, I recently set up an IAAS SharePoint farm in the cloud.  A DC on an Azure IAAS VM was a natural fit.

This post will discuss the steps required to deploy and provision such a DC in Azure – using Azure Resource Manager (ARM) templates combined with Powershell Desired State Configuration (DSC).

Brief Introduction to ARM templates and to Powershell Desired State Configuration (DSC)

ARM and DSC are tools that allow you to describe infrastructure as code.  Describing infrastructure in code lets you deploy it consistently and makes the build and deployment process repeatable.

ARM templates

ARM templates are JSON files that describe the resources (infrastructure components) that you need to deploy to Azure – for example, a VM, a Web App, a VNet or a NIC.


On the other hand, Powershell DSC allows you to describe the desired end state of your server – on premises or in the Cloud.  Once you have described your configuration with all the needed roles and features, DSC goes ahead and “makes it so”.  In other words, it provisions the server per your specifications.

ARM Templates & DSC

Both ARM templates and Powershell DSC provide the tools needed to deploy applications consistently – a critical factor for DevOps.

Steps to create a DC in Azure

Overview of the Process
  1. Create a new Azure Resource Group project in VS.
  2. Add the Windows Virtual Machine template to the project.
  3. Subsequently, add the DSC Extension resource.
  4. Add a new DSC configuration data file (.psd1, a Powershell data file) to the project.  Alternatively, the data file may be hosted online.
  5. Customize (edit) your JSON template files and your DSC files.
  6. Deploy the solution to Azure.
  7. Check on your Virtual Machine in Azure.  Remote Desktop to the VM and verify that ADDS, DNS and ADDS Recycle Bin roles and features are enabled.
Detailed Steps:

1. I created a New Project in Visual Studio and chose Azure Resource Group for the type:


2. I chose Windows Virtual Machine as the Template:


3. I opened the windowsvirtualmachine.json file.  In the JSON outline (left-hand side), I right-clicked on Resources and added a new resource – PowerShell DSC Extension:


4. I added the Powershell Data file (.psd1).  However, I ended up not using it because the deployment script did not find it (see details below).  Instead, I uploaded the data file on Github and used a link (URL) to it in the JSON template.


5. I modified the WindowsVirtualMachine.json file adding the parameters and variables that will be used in the JSON document:

6. I edited the Virtual Machine resource section to add a Data Disk.  A second disk (with caching off) is required for Domain Controllers in Azure.  ADDS database and SYSVOL must NOT be stored on an Azure OS disk type.

7. Subsequently, I edited the DSC section of the JSON template to include: configurationArguments (parameters passed to the DSC script) and configurationData (the URL to the .psd1 file on Github)

8. The parameters list was updated in WindowsVirtualMachines.parameters.json file:

9. When the DSC Extension is added to your project, the provided DSC script contains an example configuration.  Since we are creating our own configuration, that example was removed and replaced with a configuration that provisions the DC and adds an administrative user account.

10. The DSC configuration data file (.psd1) on Github contains:
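My exact file is not reproduced here, but a typical configuration data file for this scenario is small – something like the sketch below (node name aside, the two PSDsc* flags are what allow the credential to be compiled into the MOF):

```powershell
@{
    AllNodes = @(
        @{
            NodeName                    = 'localhost'
            PSDscAllowPlainTextPassword = $true   # demo only; use a certificate in production
            PSDscAllowDomainUser        = $true
        }
    )
}
```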

11. Once all the JSON and DSC files are customized, the project can be deployed to Azure.  Right-click on the project and select Deploy – New.  Choose your Azure account and subscription.  Verify the parameters entered and click OK to deploy to the Azure Resource Group.

12. After about 30 minutes, the deployment will be completed.  Log in to the Azure portal and verify your DC is up and running.  You can Connect to the DC via Remote Desktop to view your newly created DC.

DC in Azure

DC in Azure 2

Lessons learned

Why use Visual Studio?

Microsoft Visual Studio 2015 Community Edition is a wonderful development IDE and it is quite good for authoring ARM templates and the DSC scripts.  It goes ahead and deploys the whole solution to Azure and allows you to monitor the progress and catch any errors.

The advantage of VS for ARM and DSC

You can author the ARM templates and DSC scripts in any text editor or IDE and then deploy them using Powershell.  However, in my experience, deploying the templates and DSC code from VS provided more feedback about the progress of the deployment.  I do not have to add any parameters for verbose output and I do not have to check any log files.

Furthermore, VS is suitable for Powershell DSC because you can add a VS Powershell extension (from the Visual Studio Marketplace) that allows you to author and edit PS code.

Issues and Solutions

DSC Configuration Data File not being found during deployment

As mentioned above the .psd1 was originally added to the VS project.  Once it is added to VS, you have to right-click on the .psd1 file and select Properties.  For All Configurations select:

  • Build Action: Always
  • Copy to Output Directory: Copy Always

When I tried to change the Build Action, I received an error: An error occurred while saving the edited properties listed below: Build Action.  One or more values are invalid.  Mismatched PageRule with the wrong ItemType.


The workaround is to apply the second setting first (Copy Always).  Then apply the first setting (Build Action).

The above workaround was good and allowed the .psd1 file to be copied to the correct location in the staging area along with the .ps1 file and the Zip archive.  However, the deployment process still complained that the .psd1 file was not found “after 19 attempts”.

Searching online for this error turned up posts suggesting that a duplicate path in the PSModulePath environment variable may be the culprit.  However, none of the workarounds and suggestions allowed the .psd1 file to be found.

Workaround: place the .psd1 file online.  I placed the file on Github in order to get it deployed.

Active Directory (ADDS) and Hyper-V posts on Spiceworks Community

I have written two articles about virtual domain controllers (DC’s) on the Spiceworks IT Pro Community site:

Migrate Active Directory domain controllers and keep the same hostname and IP address

We had 2 ADDS DC’s on 2 HP ProLiant servers.  We purchased new HP servers, and those DC’s needed to be moved to them.  However, we wanted to keep each Domain Controller’s hostname and IP address the same.

Here you can find a step-by-step tutorial on how we did that.  We completed the migration in about 4 hours, with no Active Directory problems after the migration.

How to synchronize a virtual Domain Controller (DC) with a time source

In this article, I discuss the recommended way for an Active Directory Domain Services (ADDS) Domain Controller (DC) on a Hyper-V Virtual Machine (VM) to synchronize its time with a time source.

Normally, a Hyper-V guest VM gets its time from its host, and the host gets its time from the DC holding the PDC emulator role.  However, when that DC is itself a guest VM, the Hyper-V host tries to synchronize its time with its own guest while the guest synchronizes with its host.  This circular dependency can lead to time synchronization problems.

You can find the recommended solution in this article.  And it does not involve turning off the time Synchronization Integration Service on the VM.
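One commonly used building block – shown here for illustration only; see the article for the recommended end-to-end steps – is pointing the PDC-emulator DC at an external NTP source with w32tm (the peer list below is an example, not the article’s exact servers):

```powershell
# On the virtualized DC holding the PDC emulator role:
w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /reliable:yes /update

# Restart the Windows Time service so the new configuration takes effect
Restart-Service w32time

# Verify the time source and sync status
w32tm /query /status
```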

An IOT solution using IOT Hub, Azure Stream Analytics and Power BI Desktop

Goal of the IOT Solution

Get the current temperature for Jeddah City and monitor it for anomalies (ups and downs) over a period of time (a few days).

Overview of the IOT Solution

The IOT solution gets the current weather data for Jeddah (from a weather web site) and sends it to an Azure IOT Hub.  From the IOT Hub, the data goes through Azure Stream Analytics (ASA).  ASA does some filtering on the data and then stores it in table storage in Azure.  I then read the data in the Power BI Desktop application and plotted it on a chart.  Using the plot, I monitored the temperature data for anomalies or variances.

Details of the IOT Solution

Since the sensors for my TI micro-controller are yet to arrive, I developed a solution that simulates those temperature sensors by reading current temperature data from the OpenWeatherMap web site.  The weather data is in JSON format and is consumed using a REST API.  In the future, instead of using a weather site to get temperature data, I can easily get real weather data from temperature sensors on a micro-controller or a computer-on-a-board.  For example, sensors attached to a Raspberry Pi or an Arduino board can easily provide this data.

Reading the Weather Data from OpenWeatherMap web site


A Windows console application gets this temperature data from OpenWeatherMap and sends the telemetry to an Azure IOT Hub.
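For illustration, the same REST call can be sketched in a few lines of Powershell (the API key is a placeholder; the query parameters are those of the OpenWeatherMap current-weather API):

```powershell
$apiKey = 'YOUR_OPENWEATHERMAP_KEY'   # placeholder - sign up on the site for a real key
$uri    = "http://api.openweathermap.org/data/2.5/weather?q=Jeddah&units=metric&appid=$apiKey"

# The JSON response is deserialized into an object automatically
$weather = Invoke-RestMethod -Uri $uri

# Current temperature in degrees Celsius
$weather.main.temp
```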

The data is sent to Azure IOT Hub and picked up by Azure Stream Analytics

From the IOT Hub the data is picked up by an Azure Stream Analytics job.  This is basically a query (similar to SQL) that takes the data from IOT hub, validates it and places it in an Azure Storage table.
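An ASA query of that shape might look like the sketch below (the input, output, and field names are placeholders for the ones defined in the job):

```sql
-- Filter out malformed readings and route the rest to Azure Table storage
SELECT
    deviceId,
    temperature,
    EventEnqueuedUtcTime AS readingTime
INTO
    [TableStorageOutput]
FROM
    [IoTHubInput]
WHERE
    temperature IS NOT NULL
```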


Finally, analysis for anomalies is done in Power BI Desktop

Once the data is in an Azure Storage table, I performed cold-path analytics on it using Power BI Desktop.  I used Power BI Desktop to read the data directly from Azure Storage, and then used it to analyze the data and plot it in different ways.

The image below is a screenshot of a Stacked Column chart of (temperature readings against time).  It shows the variance in Jeddah weather (temperature) over the course of a day on 15 May 2016.

Ghassan Hariz
PowerBI Desktop screenshot showing temperature data plot


Azure Cognitive Services – Text Analysis Demo


The Azure Cognitive Services Text Analysis algorithms used for this demo can track sentiment (happy, sad, etc.) by analyzing text that you enter on web sites you visit every day.  Or the algorithms may be used to analyze the words that you speak on the phone when calling a customer service help desk.


I have created a Text Analysis Demo.  It is an Azure Web Application (URL below) hosting a .NET Web Forms application that demonstrates Azure’s Machine Learning Text Analysis algorithms.  You can enter text, and it returns a value between 1 and 10 which reflects the sentiment of the entered text.

Use Case Example:

A good example of when you may use these Cognitive Services Text Analysis algorithms is monitoring the general sentiment of a customer’s email or chat message.  If the message reflects unhappy sentiments, immediate customer service action can be taken to address the issue.  This might mean that a customer service manager steps in and finds out what is making the customer unhappy.

This kind of analysis can be used in a call center to track customers’ sentiments.

Try it for yourself

Give it a go: try entering sentences – containing happy or sad sentiments – and see what the Cognitive Services text analysis algorithms return:


Azure Web App: an example of using the Bottle Framework to create a survey site

As part of my learning, I am going through sample applications in Visual Studio.  One of those samples is a Bottle Polls application.  Bottle is a fast, simple and lightweight micro web-framework for Python.

Visual Studio is a mature Python IDE.  I used Visual Studio to customize this application to include a survey about Jeddah and successfully published it as an Azure Web App.  The app uses Azure Storage tables to store the polling data.  If you click on the About menu, the application will indicate that it is using Azure Storage Tables to store its data.

You can access this sample Survey Web App here: