Connect your Azure Sphere device to Azure IoT Central – using Visual Studio 2019

Steps Overview

Note: the following exercise assumes you have already set up your Azure Sphere Tenant and configured your device to connect to IoT Hub, as described in my previous Blog Post.

  1. Download the sample code from the Github repository
  2. Set up an Azure IoT Central application
  3. Configure Azure IoT Central to work with your Azure Sphere Tenant
  4. Configure the sample application to work with your Azure IoT Central Application

Detailed Steps

  1. If you have not done so already, download or clone the Azure Sphere Samples from the Github repository
  2. Create an Azure IoT Central trial application. 
    IoT Central Home Page
  3. Under the Build sub-menu pick Custom Apps
    IoT Central Start Page
  4. In the New application page, enter an Application Name and a unique URL.
    IoT Central Create New Application
  5. Pick 7-day free trial (supports 5 devices).
  6. Open your new application
    IoT Central Dashboard
  7. Click on Device Templates and then click “+New” to add a new template
  8. Select IoT Device and click on “Next: Custom”
    IoT Central - select IoT Device template
  9. Click on “Next: Review” button:
    IoT Central Device Template Review
  10. Then click on the Create button
    IoT Central - Device Template - Create
  11. Enter a Name for the new Device Template
    IoT Central - Device Template - enter name
  12. Create a Custom Capability Model
    IoT Central - Create Custom Capability Model
  13. Click on Add Interface and Choose Custom interface
  14. Choose the Interface in the left-hand column.
  15. Click on Add Capability: enter Temperature as the Display Name and temperature as the Name. Choose Telemetry for the Capability Type, Temperature for the Semantic Type, and Degrees Centigrade for the Units. Finally, click the Save button.
    IoT Central - Add Capability
  16. Then click Customize in the left-hand column and click the Down Arrow (right-hand side) to expand the fields.
    IoT Central - Customize Capability
  17. Now you can enter a Min value, a Max value and the number of decimal places. You can also customize the Color used on the chart.
    IoT Central - Enter Capability Custom Details
  18. Click on “+Add Capability” and enter “Button Press” as the name
    IoT Central - Add Button Press Capability
  19. Publish the Device Template to your application
    IoT Central Publish Device Template
  20. Click on Devices (left hand side column), pick your device template from the list, then click +New to create a REAL device (not a simulated device).
    IoT Central - Create New Device
  21. For the device ID, enter the Azure Sphere device ID in lowercase letters (PowerShell can help here)
    > azsphere device show-attached
    > powershell -Command ((azsphere device show-attached)[0] -split ': ')[1].ToLower()

    Azure Sphere Show Attached lower case
  22. Configure Azure IoT Central to work with your Azure Sphere Tenant
    > azsphere tenant download-CA-certificate --output CAcertificate.cer
  23. In IoT Central’s dashboard, go to Administration, click on Device Connection and then X509 Certificates
  24. Under the Primary field – click on the folder icon and choose the .cer file created above (choose *.* in the filter field to see the .cer file)
    IoT Central X509 certificate
  25. Then click the Refresh button to generate a Verification Code
  26. Use the verification code to generate a validation certificate
    > azsphere tenant download-validation-certificate --output ValidationCertification.cer --verificationcode <code>
  27. Go back to IoT Central and click on Verify
  28. When prompted, navigate to the validation certificate that you downloaded in step #26 and select the .cer file.
  29. Click Close. This completes the step and verifies that you own the Azure Sphere Tenant.
  30. Configure the sample application to work with your Azure IoT Central Application
    Start Visual Studio 2019
  31. Choose File > Open > CMake
  32. Choose the CMakeLists.txt file in the sample you downloaded from Github
  33. Open the app_manifest.json file
  34. Get the Tenant ID for your Azure Sphere device and enter it under DeviceAuthentication in the app_manifest.json file (see the app_manifest sketch after these steps):
    > azsphere tenant show-selected
  35. At the Azure Sphere command prompt, type:
    > ShowIoTCentralConfig
    This command will provide the following information needed in app_manifest.json:
    a) The IoT hub URL for your Azure IoT Central application
    b) The Azure DPS global endpoint address
  36. Power up your Azure Sphere device and make sure it has network access.
  37. Build and Run your application
    IoT Central Analytics - View Data 3
    IoT Central Analytics - View Data 1
  38. Press button A a few times and see the events in IoT Central (as diamonds below the chart)
    IoT Central Analytics - view event
  39. You can even view the data points in table format:
    IoT Central Analytics - view table of data
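
For reference, here is a sketch of how the values gathered in steps 34 and 35 fit together in app_manifest.json. It is modeled on the AzureIoT sample; the exact layout may differ between sample versions, and all IDs below are placeholders:

{
  "SchemaVersion": 1,
  "Name": "AzureIoT",
  "ComponentId": "<component id>",
  "EntryPoint": "/bin/app",
  "CmdArgs": [ "<DPS scope ID>" ],
  "Capabilities": {
    "AllowedConnections": [ "global.azure-devices-provisioning.net", "<your IoT hub URL>" ],
    "DeviceAuthentication": "<your Azure Sphere tenant ID>"
  },
  "ApplicationType": "Default"
}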

Lessons Learned

  • Much of the Microsoft documentation gets outdated quickly because IoT Central and Azure Sphere are being updated and changed continuously. With some trial and error, I am sure you will be able to figure out the right steps.
  • I faced an issue where the device would not connect to IoT Central. Redoing the tenant verification step fixed it.

Connect your Azure Sphere device to Azure IoT Hub – using Visual Studio 2019

I recently ordered an Azure Sphere MT3620 Starter Kit from Avnet to try out Azure Sphere first-hand. In this article I will go over how to connect such a device to Azure IoT Hub. The steps assume that this is your first time setting up your device, so some steps only need to be done once.

Azure Sphere MT3620 Starter Kit

Steps Overview

  1. Test connecting your Azure Sphere development kit to the PC
  2. Install the Azure Sphere SDK
  3. Claim your device: add your device to your Azure Sphere tenant.  This can only be done ONCE in the lifetime of the device!
  4. Configure networking
  5. Check and download any updates
  6. Download the sample code from Github
  7. Connect your device by USB and verify that network connectivity is available
  8. Enable development on your device
  9. Configure the cloud services: create an IoT Hub and a Device Provisioning Service (DPS) and link them together
  10. Download the tenant authentication CA certificate
  11. Upload the tenant CA certificate to DPS and generate a verification code
  12. Verify the tenant CA certificate
  13. Use the validation certificate to add your device to an enrollment group
  14. Add the IoT Hub configuration settings to your Visual Studio project in the app_manifest.json file
  15. Build and run the code and watch for device-to-cloud messages coming to your IoT Hub

Detailed Steps

Prerequisites on Windows: a PC running Windows 10 Anniversary Update or later

Install the Azure Sphere SDK

  1. Attach your Azure Sphere dev kit (e.g. the Avnet Starter Kit) to your PC.  The drivers should be installed automatically.  Then, to verify the Dev Kit installed correctly, open Device Manager and look for 3 COM ports (e.g. USB Serial Port (COM10), USB Serial Port (COM11) and USB Serial Port (COM8)).  See this Troubleshooting page if there are any connection errors (a quick PowerShell check is also sketched after these steps).
  2. Install the Azure Sphere SDK in order to use Visual Studio 2019 (Enterprise, Professional or Community edition) version 16.4 or later for Azure Sphere development.
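
If you prefer the command line over Device Manager, this PowerShell one-liner lists the COM ports (the dev kit should contribute three of them):

    [System.IO.Ports.SerialPort]::GetPortNames()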

Claim your device

An Azure Sphere Tenant provides a way to isolate your devices and allows you to manage them.  Hence, if your organization already has a tenant, you may want to ask to join the existing tenant rather than creating a new one.  And note that once a tenant is created it cannot be moved or deleted.
Important note: once a device is claimed into a tenant then it is PERMANENTLY associated with that Azure Sphere tenant.

  1. Connect your device to your PC
  2. Open the Azure Sphere Command Prompt from the Start Menu
  3. Sign in using a Microsoft Account.  To use Azure Sphere you need a Microsoft account.  Depending on how your Azure Sphere tenant is set up, you can use your Microsoft account as the user or you can ask your administrator to add you.
    azsphere login 
  4. If you have never logged in to Azure Sphere before or have just installed the 19.10 SDK, add the --newuser parameter to the login command:
    azsphere login --newuser <email-address>
  5. Subsequently, once logged in:
    1. If you have one Azure Sphere Tenant (already created) it will be selected as default and you can proceed
    2. Otherwise, if no tenants have been created yet, you will need to create a new tenant
      azsphere tenant create --name <my-tenant>
      Once the command succeeds, you will see something like the following message:
      Created a new Azure Sphere Tenant
      --> Tenant Name: new-tenant
      --> Tenant ID: 4c556667-8 …
      Selected Azure Sphere tenant 'new-tenant' as the default.
      You may now wish to claim the attached device into this tenant using 'azsphere device claim'.
      Command completed successfully in 00:00:39.2176539.
    3. If Azure Sphere was previously used with the 19.09 SDK or earlier, then the tenant needs to be migrated
    4. If you have multiple tenants, you will need to select one.
    5. Claim your device:
      azsphere device claim
      Upon success, you will see:
      Claiming device.
      Successfully claimed device ID 'AF0A42 ... 7AF0' into tenant 'new-tenant' with ID '4c556667-8 ...'
      Command completed successfully in 00:00:03.4460543

Configure Networking

Now on to networking: after you claim your device, you need to set up networking so that the device can receive updates from the Azure Sphere Security Service and communicate with Azure services such as IoT Hub.

  1. Connect your device to your PC via the USB cable
  2. Open the Azure Sphere device prompt
  3. Register the device’s MAC address if needed.  The following command will display the device’s MAC address:
    azsphere device wifi show-status


Get MAC ID of device

  1. Join your device to the WiFi network by using the following command:
    Note: Azure Sphere supports WPA and WPA2 protocols only.
    azsphere device wifi add --ssid <WIFI SSID> --psk <network security key>
  2. Verify that the device connected to the wireless network by typing:
    azsphere device wifi show-status


Get the WIFI status of your device

Update the Software on the Avnet device and Enable Application Development on it

  1. Update the software (OS or application) on the device.  The Azure Sphere device checks for updates at boot time and every 24 hours thereafter.  If the device gets updated, the download-and-update process can take up to 20 minutes, and the device wifi show-status command will show configuration unknown while the update is in progress.
    To check on the status of an update, you can use:
    azsphere device show-deployment-status
    Upon successful completion you will see:
    Your device is running Azure Sphere OS version 19.11.
    The Azure Sphere Security Service is targeting this device with Azure Sphere OS version 19.11.
    Your device has the expected version of the Azure Sphere OS: 19.11.
  2. The Azure Sphere Samples can be found on this Github repository.  Download or clone the repository and go to the AzureIoT folder. 
  3. Connect your device via USB cable.  And verify that wireless connectivity is available by using:
    azsphere device wifi show-status
  4. Enable application development on your device:
    azsphere device enable-development


Azure Sphere Enable Development

Configure the cloud services:

  1. Configure the cloud services: create an IoT Hub and a Device Provisioning Service (DPS) and link them together
  2. Download the tenant authentication CA certificate (the azsphere commands are sketched after this list)
  3. Upload the tenant CA certificate to DPS and generate a verification code
  4. Verify the tenant CA certificate
  5. Use the validation certificate to add your device to an enrollment group
  6. Add the IoT Hub configuration settings to your Visual Studio project in the app_manifest.json file
    1. The Tenant ID for your Azure Sphere Device can be obtained from the following command.  Enter it into the DeviceAuthentication field in the app_manifest.json file.
      azsphere tenant show-selected
    2. The Scope ID for your DPS instance can be obtained from the Summary screen (top right section).  Paste it into the CmdArgs section of app_manifest.json
    3. The IoT Hub URL for your IoT Hub goes into the AllowedConnections field in the app_manifest.json file
  7. Build and run the code and watch for device-to-cloud messages coming to your IoT Hub
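
For reference, the certificate steps (2 to 5 above) use the same azsphere commands shown in the IoT Central walkthrough:

    azsphere tenant download-CA-certificate --output CAcertificate.cer
    azsphere tenant download-validation-certificate --output ValidationCertification.cer --verificationcode <code>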

Lessons Learned

  • Development in the C language is not for the faint of heart. However, Visual Studio makes it easy to code using Intellisense and it provides full debug capability.
  • Update to the latest OS version ASAP. There were some issues where the board could not connect to the service before I upgraded to OS version 19.10.
  • One big issue that occurred during development was that the LPS22HH sensor was not found. However, after contacting Avnet through the element14 community, they promptly released a fix on Github.
  • Some of the Azure Sphere videos talk about configuring the IoT Hub settings by selecting the project in Visual Studio, selecting Overview and then choosing Connected Services, then clicking on IoT Hub. That never worked. After researching the issue, I found out that this method has been deprecated. In the new OS versions, the IoT Hub settings are all done in the app_manifest.json file. 
  • As of version 19.10, Azure Sphere applications are built using a cross-platform build system called CMake.  CMake may be used to build high-level applications or real-time applications.  It may also be used for development from the command line.
  • Using CMake paves the way for using the Azure Sphere SDK on Linux and for using the Azure Sphere extension for Visual Studio Code.
  • There are many Azure Sphere sample apps.

How to make sure your connected IoT devices and appliances are secured

What problem is Azure Sphere trying to tackle?

We are connecting more devices to the Internet all the time. In 2008 the number of network connected things exceeded the earth’s population.  And by 2025, we will connect more than 75 billion of these devices – also known as Internet of Things (IoT) devices – to the Internet.

These connected things are not immune to hacking.  In many cases, these devices are not secured from the beginning and the devices are left with many vulnerabilities. Hence, hackers can control IoT devices remotely. Examples of such vulnerabilities and hacks include:

  1. Hackable cardiac devices from St. Jude and the Owlet baby heart rate monitor. 
  2. TRENDNet webcams allowed anyone to see through the cameras or even listen in.
  3. A heater in a casino’s aquarium allowed hackers to access the casino’s customers list.
  4. The Jeep hack where some hackers demonstrated how they can turn the engine off or steer the car remotely.  The vulnerability came from the car’s use of a dashboard system called Uconnect, which provided the ability to re-write the firmware on the chip.  This in turn, enabled access to the rest of the car’s controls via the CANBus interface.
  5. The Mirai Botnet DDoS attack infected many devices (including digital cameras and DVR players). Then, Mirai Botnet used these devices to attack a service provider (Dyn). Subsequently, the Mirai Botnet brought down huge portions of the Internet.
  6. You can find other examples of such vulnerabilities and hacks online.

The definition of a truly secured device

It used to be that only high-end devices had strong security.  Going forward though, it is critical that all network-connected IoT devices are secured.  This includes children’s toys, household appliances and factory equipment.  In the end, an IoT solution is as secure as its weakest link.

To secure such IoT devices, a Microsoft research team came up with 7 criteria which they assert are required in highly secured devices, published as “The Seven Properties of Highly Secure Devices”:

  1. Highly secure devices have a hardware-based root of trust: the device has a unique identity tied to the hardware.
  2. Such secured devices have a small trusted computing base. As a result, the security enforcement features are protected from other hardware or software.
  3. These IoT devices have defense in depth. This means that several countermeasures lessen the effect of a successful attack.
  4. Secured devices provide compartmentalization by using different security layers. Therefore, if one layer is compromised the other layers are not affected.
  5. They use certificate-based authentication: trust brokered using signed certificates
  6. These secured devices have renewable security. Consequently, you can update the device’s software automatically.
  7. And finally, secure devices have failure reporting: the device can report failures to its owner.

How does Microsoft Azure Sphere secure Internet connected devices?

Azure Sphere is a secured, high-level ecosystem with built-in communication and security features for Internet connected devices.  It consists of:

  1. The hardware: a secured microcontroller unit (MCU).  Microsoft is working with several device manufacturers to produce these certified MCU’s.  The first such MCU is the MT3620 from MediaTek, and other MCU’s should be coming from Qualcomm and NXP.  Several existing Azure Sphere hardware partners are developing starter kits (prototyping boards) based on the MT3620 MCU.  These include: Seeed Studio, AI-Link and USI.
  2. The OS: a new Linux-based operating system (OS).  Microsoft will service the OS on the device for the 13 years of its life.
  3. The Service: the Azure Sphere Security Service that provides:
    • Over the air updates infrastructure
    • Application deployment and updates
    • Reliable system software updates
    • Failure reporting at a global scale: the Service reports software bugs and security attacks.

Azure Sphere use cases

  1. Brownfield scenarios. Azure Sphere can protect existing IoT devices that cannot safely connect to the Internet themselves, whether because of security concerns or a lack of networking capability. For such older devices, you can use Azure Sphere Guardian modules to retrofit secure connectivity.
  2. Greenfield scenarios. New IoT devices or appliances that you want to connect to the Internet with end-to-end security.

Learn more about Azure Sphere secured MCU’s and how they may be used to send data securely to the cloud

Azure Sphere has been generally available since February 2020. I recently got my hands on an Azure Sphere MT3620 Starter Kit from Avnet.  You can read about connecting such an Azure Sphere device to the cloud in these articles on my blog:

  1. Connecting the MT3620 to IoT Hub
  2. Connecting the MT3620 to IoT Central Service

Azure Site Recovery (ASR): a great way to create a SharePoint test environment

Overview

All the IT Pros who have managed SharePoint farms know that it is difficult to set up a SharePoint test environment on premises.  It takes money, time and effort to replicate a production SharePoint farm.  Azure Site Recovery (ASR) is a cloud replication service that provides a great way to quickly create a SharePoint test environment.

  1. The Problem we are trying to solve
  2. The Solution: Azure Site Recovery to quickly and reliably create a SharePoint test environment
  3. Steps to Create a SharePoint test environment with ASR
    • 0: (optional) Create a local SP environment if you do not have one yet
    • 1: Check the requirements for ASR replication and prepare your servers
    • 2: Replicate your SP farm servers to Azure using ASR
    • 3: Using the ASR replica, create the SP Test Environment in Azure

The Problem: creating a SharePoint test environment – the traditional way – takes MONTHS

In my last IT Manager role, I was responsible for a SharePoint farm.  For a long time, we did not have the luxury of a test environment for SharePoint.  I was finally able to procure an HP Z220 workstation and used it for creating a SharePoint test environment.

I increased the memory and hard disk space on the HP Z220 and then used it to build a SharePoint test farm.  However, setting up this SharePoint test environment took 2 months.  I spent time ordering and receiving the server and parts (upgrades).  And I spent time setting up the server and copying the production virtual machines – DC, Sql Server and SharePoint server – to the sandbox test environment.

The Solution: Azure Site Recovery – a quick and reliable mechanism to create a SharePoint test environment

What is Azure Site Recovery (ASR)? 

Azure Site Recovery is a business continuity and disaster recovery solution in the cloud.  You can protect on-premises servers or virtual machines by replicating them either to Azure or to a secondary site (data center).  If the primary data center location is down, you can fail over to the secondary site (Azure or the secondary data center).  Then when the primary site is back up and running, you can fail back to it.

Can Azure Site Recovery be used to create a SharePoint test environment?

Yes, since ASR can protect whole servers or virtual machines running different workloads, it can certainly replicate those SharePoint servers.  ASR can replicate all the components of a SharePoint farm:

  • Active Directory Domain Services (ADDS) Domain Controller (DC)
  • Sql Server
  • SharePoint server

ASR can take images of your Production servers.  From those images, you can then create application consistent replica servers – in the cloud or on the secondary site.  The replica servers may then be used as test servers in a SharePoint test environment.

The whole process is fast.  At home, I have a fiber optics connection.  I was able to replicate a 3-server SharePoint farm to Azure within a few days.  However, the speed of replication depends on your network’s speed and your network’s load.

Steps to create a SharePoint Test Environment with ASR

We will cover the creation of the SharePoint test environment in three major steps.

0) Optional: if, like me, you want to create a local SharePoint environment – on your laptop at home – so that you can try out Azure Site Recovery, find out how to create such a SharePoint environment using Windows 10 and Windows Server 2016 Nested Virtualization step-by-step.

The following steps will be covered in separate posts.  Stay tuned:

1) Check the requirements for Azure Site Recovery (ASR) replication and prepare the SharePoint Farm’s servers
2) I will explain how to Set up a Recovery Services Vault in Azure and how to go through the Getting Started wizard.  Subsequently, we will start Replication of the SharePoint servers
3) Once we have replicated all the SharePoint servers, we can go ahead and Create a SharePoint test farm environment in the Cloud

Create a local SharePoint farm – on your laptop – using Windows 10 Hyper-V and Windows Server 2016 Nested Virtualization

  1. Overview
  2. The Problem: how to set up a SharePoint farm on your laptop
  3. The Solution: Hyper-V and Nested Virtualization
  4. Steps in Brief
  5. Detailed Steps

Overview

The combination of Windows 10 and Windows Server 2016 enables you to create a test lab on your local PC or laptop!  This is because most versions of Windows 10 have the Hyper-V feature available.  And Windows Server 2016 has a new feature called Nested Virtualization.  So you can easily create a virtual test lab consisting of one virtual machine hosting several other virtual machines!

Nested Virtualization on a Windows 10 laptop

Recently, I needed to create a SharePoint farm on my laptop.  This SharePoint farm would become my pseudo “Production” environment that I will then replicate to the cloud using Azure Site Recovery (ASR).  So Windows 10 and Windows Server 2016 were the perfect tools for the job.

Note: The solution presented in this post can be used to create any test lab (sandbox environment) – not just a SharePoint farm.

The Problem – how to set up a SharePoint farm on a laptop?

As mentioned earlier, I will be using Azure Site Recovery (ASR) to replicate my local SharePoint farm to the Cloud.  This means that I need nested virtualization.  I cannot use Windows 10 Hyper-V to create the three SharePoint VM’s because Windows 10 is not supported as an ASR host.  So I have to install a host server VM (Windows Server 2012 R2 or Windows Server 2016) on Windows 10 Hyper-V.  And subsequently, on this host server, I have to install the three SharePoint server VM’s.

Furthermore, having the SharePoint Farm VM’s hosted on Windows 10 does not represent a real production environment.  The chosen architecture – 3 SP Farm VM’s hosted on Windows Server 2016 host – is more realistic and represents the architecture of a real Production SharePoint farm.

The Solution – Hyper-V and Nested Virtualization:

In order to create a SharePoint farm with 3 servers: a DC, a Sql Server, and a SharePoint server, you either need 3 physical servers or 3 virtual machines.  At home, I do not have 3 physical servers.  VM’s are the only way to go.

Windows Server 2016 has a wonderful new feature called Nested Virtualization!!   Nested Virtualization is great for test scenarios and allows such a test lab to be created.  In previous Windows Server versions you could not nest a Hyper-V environment inside another Hyper-V environment.  Or at least, the nesting was not supported by Microsoft.

And certain versions of Windows 10 have the Hyper-V feature readily available.  You just need to turn it ON.  This allows you to create the Windows Server 2016 host Virtual Machine (VM) without installing any new software or applications.

Setting Up the SharePoint Farm – In Brief

  1. Check the software and hardware requirements on your laptop or desktop PC
  2. Turn Hyper-V feature ON in Windows 10
  3. Create a new VM and install Windows Server 2016 on it.  Enable Nested Virtualization for this VM.
  4. Set up networking for your SharePoint farm.  We will use a separate subnet for each SharePoint VM and we will link them up with RRAS.  You do not have to use this many subnets.  You can use one subnet for all your VM’s.  I chose to use 3 subnets because I wanted my environment to mimic a VNet in Azure.
  5. Install the 3 SharePoint VM’s: DC, Sql Server 2016, SharePoint 2016

Setting up the SharePoint Farm – Detailed Steps:

1. Check Windows 10 Hyper-V requirements:

Software requirements:

The following Windows 10 versions support Hyper-V virtualization: Enterprise, Professional and Education.

Hardware requirements:

Hyper-V normally requires at least 4GB of memory.  However, for this SharePoint farm (1 host VM and 3 guest VM’s), at least 10 GB of RAM is needed.  I recommend 16 GB of RAM.  With 16GB on your laptop or PC, you will have about 10 GB of RAM left for the SharePoint farm.  I assigned 1GB of RAM to the DC VM and 2GB each to the SharePoint VM and the Sql Server VM.

Remember that Windows 10 OS uses about 2 GB and there is something called the host reserve which takes about 2.5GB (depending on how much physical RAM exists on the machine).

Other hardware requirements:

  • 64-bit Processor with Second Level Address Translation (SLAT).
  • CPU support for VM Monitor Mode Extension (VT-c on Intel CPU’s).

A good way to check on all the system requirements for Hyper-V is to run the command line utility systeminfo.exe.  Open a Command Prompt window and type systeminfo.exe.  The output of the command will contain a section on Hyper-V Requirements.  Make sure all tests return a “Yes”.
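
For example, from PowerShell you can jump straight to that section (this match assumes an English-language OS):

systeminfo.exe | Select-String 'Hyper-V Requirements' -Context 0,4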

2. Turn Hyper-V Feature ON in Windows 10

In the Control Panel, start Programs and Features and click on “Turn Windows Features On or Off”.  Select the Hyper-V checkbox.  This includes “Hyper-V” platform and “Hyper-V Management Tools”.  Finally, perform a restart.
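
Alternatively, the same feature can be enabled from an elevated PowerShell prompt:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All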

3. Create a new Windows Server 2016 VM and Enable Nested Virtualization.

1. Install Windows Server 2016 on this VM.  This Windows Server 2016 VM will host your SharePoint farm.

2. With the Windows Server 2016 VM OFF – in Windows 10 – run the following Powershell commands:

# Expose the host's virtualization extensions to the guest VM so it can run Hyper-V itself
Set-VMProcessor -VMName <VMName> -ExposeVirtualizationExtensions $true

# Enable MAC address spoofing so traffic from the nested VMs can reach the network
Get-VMNetworkAdapter -VMName <VMName> | Set-VMNetworkAdapter -MacAddressSpoofing On

3. Under the VM settings for the Windows Server 2016 VM, turn OFF Dynamic memory for the VM

4. Give the VM enough memory.  I assigned 10 GB to the VM.
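
Steps 3 and 4 can also be done from PowerShell while the VM is off (a sketch; substitute your VM's name):

Set-VMMemory -VMName <VMName> -DynamicMemoryEnabled $false -StartupBytes 10GB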

4. Set up networking for your SharePoint farm

I wanted the network of my lab to resemble a real Production environment and to mimic the virtual network (VNet) that is used in Azure.  So I configured Hyper-V for multiple subnets while using only one NIC.

On the Windows Server 2016 host:

      1. Create 4 Virtual Switches: one external and 3 internal (a PowerShell sketch follows this list).
        I chose: 10.0.0.0/24 for the DC VM, 10.0.1.0/24 for Sql Server and 10.0.2.0/24 for the SharePoint VM.

        Hyper-V Switches
      2. Configure the network adapters on the Hyper-V host
      3. Configure Routing and Remote Access Service (RRAS) on the host.  RRAS acts as a software router and connects the subnets.
        1. In Server Manager, click on Manage – Add Roles and Features
        2. The first page is informational and may be skipped
        3. On the second page: choose Role-based or Feature-based installation
        4. Choose the server (local server) where the RRAS feature will be added
        5. Select Remote Access on the Roles page
        6. Under the Features page:
          1. select RAS Connection Manager Administration Kit (CMAK)
          2. under Remote Server Admin Tools – Role Admin Tools: select Remote Access Management Tools
        7. On the Role Services page, select “DirectAccess and VPN (RAS)” and select Routing
        8. You will be prompted to add features required by DirectAccess and VPN (RAS); click YES.
        9. Make sure Routing is still selected
        10. Review the information on the Web Server Role (IIS) page
        11. Click Next on the Roles Services page
        12. Do a final confirmation and Install
        13. When it is done, Close the wizard
        14. Open the Routing and Remote Access application
        15. Right click on your server name and select “Configure and Enable Routing and Remote Access”
        16. Select Custom Configuration
        17. Select NAT and LAN Routing.  NAT allows your VM’s to reach the Internet through the host’s external connection, while LAN Routing routes traffic between the internal subnets.
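
Here is a PowerShell sketch of step 1 above (switch names are illustrative; replace 'Ethernet' with your physical adapter's name):

# One external switch bound to the physical NIC
New-VMSwitch -Name 'External' -NetAdapterName 'Ethernet' -AllowManagementOS $true
# Three internal switches, one per SharePoint farm subnet
'DC-Switch', 'SQL-Switch', 'SP-Switch' | ForEach-Object { New-VMSwitch -Name $_ -SwitchType Internal }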

5. Create 3 new VM’s inside the Hyper-V Host

    1. Install the Domain Controller (DC).  You can use Desired State Configuration (DSC) to set up the ADDS Domain Controller automatically by using a script.  You can see the steps in my blog post on the subject.
    2. Install Sql Server 2016.
    3. Install SharePoint Server 2016.

Create a Load Balanced Azure IAAS Virtual Machine (VM) with an RDP connection

Overview

A very common scenario in Azure Infrastructure As A Service (IAAS) is to have two or more VM’s behind an Azure external Internet-facing load balancer.  The load balancer provides high availability and network performance by distributing traffic according to a predetermined algorithm.

In this exercise we will create a load-balanced VM in Azure – using Powershell.  And we will enable RDP access to the VM by creating a NAT rule.

Details

In our scenario, we are creating one VM and the load balancer is forwarding Internet traffic to the VM by using an RDP NAT rule that maps an Azure public IP address to a private IP address.  The private IP address is on the Network Interface (NIC) card attached to the VM.

Consequently, no Load Balancing rules are created or used – only a NAT rule – to allow RDP into the VM.

Please notice that the script below creates a second data disk on the VM (because the VM is to be a Domain Controller).  You can easily remove the code for the second disk if it is not needed.

# 'break' prevents accidentally running the entire script with F5; run the sections below selectively
break

Login-AzureRmAccount
Get-AzureRmSubscription
Select-AzureRmSubscription -SubscriptionId <your subscription ID>

#################################################
# Create Resource Group - or use an existing one
#################################################
$loc = 'westeurope'
$rgName = 'RG-ASRConfigServer'
New-AzureRmResourceGroup -Name $rgname -Location $loc -Force

# Create a storage account
$stoName = 'storasrconsrvr4'
$stoType = 'Standard_LRS'
$naResult = Get-AzureRmStorageAccountNameAvailability $stoName

if ($naResult.NameAvailable) {
New-AzureRmStorageAccount -ResourceGroupName $rgName -Name $stoName `
                          -Location $loc -SkuName $stotype -Kind Storage
}
else {
 Write-Host ''
 Write-Host -ForegroundColor Yellow "This storage account name: $stoName is not available"
 break
}
$storageAccount = Get-AzureRmStorageAccount -ResourceGroupName $rgName -Name $stoName

###############################
# Set up networking for the VM
###############################
# Create VNet and Subnet(s) - if needed
$beSubnetName = 'LB-SUBNET-BE'
$backendSubnet = New-AzureRmVirtualNetworkSubnetConfig -Name $beSubnetName `
                                                        -AddressPrefix '10.0.0.0/24'
$vnetName = 'vnetConfserv'
$vnet = New-AzureRmVirtualNetwork -Name $vnetName -ResourceGroupName $rgname `
                                  -Location $loc -AddressPrefix '10.0.0.0/16' `
                                  -Subnet $backendSubnet 

#Create Public IP address 
#########################
$ipName = 'confservPublicIP'
$locName = 'West Europe'
$domainName = 'asrconfig'
$pip = New-AzureRmPublicIpAddress -Name $ipName -ResourceGroupName $rgName `
                                  -Location $locName -AllocationMethod Dynamic `
                                  -DomainNameLabel $domainName

#Create a front-end IP address pool tied to the Public IP address
#################################################################
$frontendIP = New-AzureRmLoadBalancerFrontendIpConfig -Name 'LB-FrontendIP' `
                                                      -PublicIpAddress $pip

#Create a back-end IP address pool that will be tied to the NIC
###############################################################
$backendAP = New-AzureRmLoadBalancerBackendAddressPoolConfig -Name 'LB-BackendAP'

#Create an Inbound NAT rule for Remote Desktop
##############################################
$natRuleRDP = New-AzureRmLoadBalancerInboundNatRuleConfig -Name 'natRuleRDP' `
                            -FrontendIpConfiguration $frontendIP -Protocol TCP `
                            -FrontendPort 3442 -BackendPort 3389 

#Create the external Load Balancer
##################################
$lbName = 'vmLoadBalancer'
$loadBal = New-AzureRmLoadBalancer -ResourceGroupName $rgName -Name $lbName `
                               -Location 'West Europe' -FrontendIpConfiguration $frontendIP `
                               -InboundNatRule $natRuleRDP -BackendAddressPool $backendAP

# Attach the backend address pool to the load balancer
# (only needed if the pool was not already passed to New-AzureRmLoadBalancer above)
Add-AzureRmLoadBalancerBackendAddressPoolConfig -Name $backendAP.Name -LoadBalancer $loadBal | `
                                                Set-AzureRmLoadBalancer
                                                     
#Create the Network Interface
#############################

$vnet1 = Get-AzureRmVirtualNetwork -Name $vnetName -ResourceGroupName $rgName
$backendSubnet1 = Get-AzureRmVirtualNetworkSubnetConfig -Name $beSubnetName -VirtualNetwork $vnet1
$nicName = 'vmNIC'
$vmNIC = New-AzureRmNetworkInterface -ResourceGroupName $rgName -Name $nicName `
                             -Location $locName -PrivateIpAddress '10.0.0.4' `
                             -Subnet $backendSubnet1 `
                             -LoadBalancerBackendAddressPool $loadBal.BackendAddressPools[0] `
                             -LoadBalancerInboundNatRule $loadBal.InboundNatRules[0]

##############################################
#Create the VM
##############################################
$vmName = 'vm-asrcsrvr'
$AvID = (New-AzureRmAvailabilitySet -ResourceGroupName $rgName `
                                    -Name 'adAvailabiltySet' -Location 'West Europe').id

#create a VM configuration
##########################
$vmObj = New-AzureRmVMConfig -VMName $vmName -VMSize 'Standard_A1' -AvailabilitySetId $AvID

$compName = 'vm-asrcsrvr'
$cred = Get-Credential -Message 'Enter name / password of VM administrator account.'
$vmObj = Set-AzureRmVMOperatingSystem -VM $vmObj -Windows -ComputerName $compName `
                                     -Credential $cred -ProvisionVMAgent `
                                     -EnableAutoUpdate

$vmObj = Set-AzureRmVMSourceImage -VM $vmObj -PublisherName MicrosoftWindowsServer `
                                 -Offer WindowsServer -Skus 2012-R2-Datacenter `
                                 -Version 'latest'

# Add the network interface (NIC) to the VM
###########################################
$vmObj = Add-AzureRmVMNetworkInterface -VM $vmObj -Id $vmNIC.Id

# ADD OS Disk and Data Disk
###########################
$blobPath = 'vhds/spfarm-ad-osdisk.vhd'
$osDiskUri = $storageAccount.PrimaryEndpoints.Blob.ToString() + $blobPath

$diskName = 'adVmOSDisk'
$vmObj = Set-AzureRmVMOSDisk -VM $vmObj -Name $diskName -VhdUri $osDiskUri `
                            -CreateOption fromImage -DiskSizeInGB 1000

# Add data disks by using the URLs of the copied data VHDs at the appropriate 
#  Logical Unit Number (Lun).
$dataDiskBlobPath = 'vhds/ADDataDisk-1.vhd'
$dataDiskUri = $storageAccount.PrimaryEndpoints.Blob.ToString() + $dataDiskBlobPath

# AD DCs in Azure need the data disk storing SYSVOL to have NO caching
$vmObj = Add-AzureRmVMDataDisk -VM $vmObj -Name 'ADDataDisk-1' -Caching None `
                            -VhdUri $dataDiskUri -Lun 0 -DiskSizeInGB 1000 `
                            -CreateOption empty

# Create the actual VM
New-AzureRmVM -ResourceGroupName $rgName -Location $locName -VM $vmObj

######################
####Post VM Creation:
######################

# Add the network interface (NIC) to the load balancer
######################################################

$lb = Get-AzureRmLoadBalancer -Name $lbName -ResourceGroupName $rgName
$backend = Get-AzureRmLoadBalancerBackendAddressPoolConfig -Name $backendAP.Name `
                                                           -LoadBalancer $lb
$nic = Get-AzureRmNetworkInterface -Name $nicName -ResourceGroupName $rgName
$nic.IpConfigurations[0].LoadBalancerBackendAddressPools = $backend
Set-AzureRmNetworkInterface -NetworkInterface $nic
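
# With the NAT rule in place (public port 3442 mapped to private port 3389),
# you can test RDP against the load balancer's public IP (a quick sketch):
$pip = Get-AzureRmPublicIpAddress -Name $ipName -ResourceGroupName $rgName
mstsc /v:"$($pip.IpAddress):3442"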



Lessons Learned

1. If $backendSubnet is used instead of $backendSubnet1, an error is produced: “Parameter set cannot be resolved using the specified named parameters” or “Cannot parse the request”.

2. The Backend Address pool contains the objects (IPs) targeted by the Load Balancer. If you want to redirect via NAT, the VM’s NIC should be part of the backend pool.  Therefore, you need to associate the Backend Address Pool with the IP address of the VM (held in the NIC).  This needs to be done after the VM is created.

Build an on-premises Domain Controller (DC) using Desired State Configuration

In my last post I described how to create a DC, in Azure, using DSC and ARM Templates.  In this post, we will discuss how to automate the creation of a local – on premises – Active Directory Domain Services (ADDS) Domain Controller (DC) using DSC.

Overview

DSC has two modes: push mode and pull mode.

In push mode, you will author the configuration.  You will then stage the configuration by creating MOF files.  And finally, you will manually push the desired configuration onto the target server or node.  The target server can be the local server or a remote server.

On the other hand, in DSC PULL mode, you author the configuration and stage it onto a designated Pull server.  The target nodes contact the central pull server at regular intervals to obtain their desired configuration.

In our scenario, we will be using DSC in push mode.  We will author the configuration and push it onto the local Windows server (not remotely).

Details

Prerequisites

On the target Windows Server (Windows Server 2008 R2 SP1, Windows Server 2012 or 2012 R2):

  1. Download and install Windows Management Framework (WMF).  WMF 5.0 is currently available and is the recommended version.
  2. Copy the script below to the target server
  3. Open the script below in Powershell ISE as administrator
  4. Install the required Powershell modules using Install-Module: xActiveDirectory, xComputerManagement, xNetworking and xStorage (see the sketch below).
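
For example (assumes WMF 5.0's PowerShellGet, run from an elevated prompt):

Install-Module -Name xActiveDirectory, xComputerManagement, xNetworking, xStorage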

Run the script

Run the script in Powershell ISE.  The first command creates the .mof files which contain the desired configuration.  The second command actually applies the configuration to the local server.  After about half an hour and one reboot, you will have a fully functional Domain Controller with a new user (domain admin).

# Configure all of the settings we want to apply for this configuration
$ConfigData = @{
    AllNodes = @(
        @{
            NodeName = 'localhost'
            MachineName = 'spfarm-ad'
            IPAddress = '10.0.0.4'
            InterfaceAlias = 'Ethernet'
            DefaultGateway = '10.0.0.1'
            PrefixLength = '24'
            AddressFamily = 'IPv4'
            DNSAddress = '127.0.0.1', '10.0.0.4'
            PSDscAllowPlainTextPassword = $true
            PSDscAllowDomainUser = $true
        }
    )
}

Configuration BuildADDC {

    param (
        [Parameter(Mandatory)]
        [String]$FQDomainName,

        [Parameter(Mandatory)]
        [PSCredential]$DomainAdminstratorCreds,

        [Parameter(Mandatory)]
        [PSCredential]$AdmintratorUserCreds,

        [Int]$RetryCount=5,
        [Int]$RetryIntervalSec=30
    )

    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Import-DscResource -ModuleName xActiveDirectory, xComputerManagement, xNetworking, xStorage
 
    Node $AllNodes.NodeName 
    {
        LocalConfigurationManager 
        {
            ActionAfterReboot = 'ContinueConfiguration'            
            ConfigurationMode = 'ApplyOnly'            
            RebootNodeIfNeeded = $true  
        }

        # Change Server Name
        xComputer SetName { 
          Name = $Node.MachineName 
        }

        # Networking
        xDhcpClient DisabledDhcpClient
        {
            State          = 'Disabled'
            InterfaceAlias = $Node.InterfaceAlias
            AddressFamily  = $Node.AddressFamily
        }

        xIPAddress NewIPAddress
        {
            IPAddress      = $Node.IPAddress
            InterfaceAlias = $Node.InterfaceAlias
            PrefixLength   = $Node.PrefixLength
            AddressFamily  = $Node.AddressFamily
        }

        xDefaultGatewayAddress SetDefaultGateway
        {
            Address        = $Node.DefaultGateway
            InterfaceAlias = $Node.InterfaceAlias
            AddressFamily  = $Node.AddressFamily
            DependsOn = '[xIPAddress]NewIPAddress'
        }
       
        xDNSServerAddress SetDNS {
            Address = $Node.DNSAddress
            InterfaceAlias = $Node.InterfaceAlias
            AddressFamily = $Node.AddressFamily
        }

        # Install the Windows Feature for AD DS
        WindowsFeature ADDSInstall {
            Ensure = 'Present'
            Name = 'AD-Domain-Services'
        }

        # Make sure the Active Directory GUI Management tools are installed
        WindowsFeature ADDSTools            
        {             
            Ensure = 'Present'             
            Name = 'RSAT-ADDS'             
        }           

        # Create the ADDS DC
        xADDomain FirstDC {
            DomainName = $FQDomainName
            DomainAdministratorCredential = $DomainAdminstratorCreds
            SafemodeAdministratorPassword = $DomainAdminstratorCreds
            DependsOn = '[xComputer]SetName','[xDefaultGatewayAddress]SetDefaultGateway','[WindowsFeature]ADDSInstall'
        }   
        
        $domain = $FQDomainName.split('.')[0] 
        xWaitForADDomain DscForestWait
        {
            DomainName = $domain
            DomainUserCredential = $DomainAdminstratorCreds
            RetryCount = $RetryCount
            RetryIntervalSec = $RetryIntervalSec
            DependsOn = '[xADDomain]FirstDC'
        } 

        #
        xADRecycleBin RecycleBin
        {
           EnterpriseAdministratorCredential = $DomainAdminstratorCreds
           ForestFQDN = $domain
           DependsOn = '[xADDomain]FirstDC'
        }
        
        # Create an admin user so that the default Administrator account is not used
        xADUser FirstUser
        {
            DomainAdministratorCredential = $DomainAdminstratorCreds
            DomainName = $domain
            UserName = $AdmintratorUserCreds.UserName
            Password = $AdmintratorUserCreds
            Ensure = 'Present'
            DependsOn = '[xADDomain]FirstDC'
        }
        
        xADGroup AddToDomainAdmins
        {
            GroupName = 'Domain Admins'
            MembersToInclude = $AdmintratorUserCreds.UserName
            Ensure = 'Present'
            DependsOn = '[xADUser]FirstUser'
        }
        
    }
}

# Build MOF (Managed Object Format) files based on the configuration defined above 
# (in folder under current dir) 
# Local Admin is assigned 
BuildADDC -ConfigurationData $ConfigData `
          -FQDomainName 'spdomain.local' `
          -DomainAdminstratorCreds (get-credential -Message "Enter Admin Credentials" -UserName "Administrator" ) `
          -AdmintratorUserCreds (get-credential -Message "Enter New Admin User Credentials" -UserName "admin1" ) 

# We now enforce the configuration using the command syntax below
Start-DscConfiguration -Wait -Force -Path .\BuildADDC -Verbose -Debug
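
# Optional check once the run finishes (WMF 5.0): confirm the node converged
Get-DscConfigurationStatus   # status of the most recent DSC run
Test-DscConfiguration        # returns True when the node matches the desired state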

Lessons Learned

Since the Powershell xActiveDirectory module is being updated all the time, a DSC script that worked a year ago needs to be updated to work with WMF 5.0 (in the last quarter of 2016).

WMF 5.0 is included in the latest version of Windows 10 and included on Windows Server 2016.

Issues and Solutions

With some of the xActiveDirectory resources, the use of the fully qualified domain name (FQDN) produced an error: “Could not find mandatory property DomainName. Add this property and try again.”

Solution: use the first part of the domain name (e.g. spdomain instead of spdomain.local)

Building an ADDS DC in Azure IAAS using ARM and DSC

Sometimes we may need to create an Active Directory Domain Services Domain Controller (DC) in an Azure IAAS Virtual Machine (VM).  For example, I recently set up an IAAS SharePoint farm in the cloud.  A DC on an Azure IAAS VM was a natural fit.

This post will discuss the steps required to deploy and provision such a DC in Azure – using Azure Resource Manager (ARM) templates combined with Powershell Desired State Configuration (DSC).

Brief Introduction to ARM templates and to Powershell Desired State Configuration (DSC)

ARM and DSC are tools that allow the description of infrastructure as code.  Infrastructure described in code allows you to deploy infrastructure consistently.  And it makes the build and deployment process repeatable.

ARM templates

ARM templates are JSON files that describe the resources (infrastructure components) that you need to deploy to Azure.  For example, a VM, a Web App, a VNET or a NIC etc.

DSC

On the other hand, Powershell DSC allows you to describe the end state of how you would like your – on premises or Cloud – server to look.  Once you have described your configuration with all the needed roles and features, DSC goes ahead and “makes it so”.  In other words, it provisions the server per your specifications.

ARM Templates & DSC

Both ARM templates and Powershell DSC provide the necessary tools for consistently deploying applications.  And consistent deployment is a critical factor for DevOps.

Steps to create a DC in Azure

Overview of the Process
  1. Create a new Azure Resource Group project in VS.
  2. Add the Windows Virtual Machine template to the project.
  3. Subsequently, add the DSC Extension resource.
  4. A new DSC configuration data file (.psd1, a Powershell script data file) is added to the project.  Alternatively, the data file may be hosted online.
  5. Customize (edit) your JSON template files and your DSC files.
  6. Deploy the solution to Azure.
  7. Check on your Virtual Machine in Azure.  Remote Desktop to the VM and verify that ADDS, DNS and ADDS Recycle Bin roles and features are enabled.
Detailed Steps:

1. I created a New Project in Visual Studio and chose Azure Resource Group for the type:

2. I chose Windows Virtual Machine as the Template:

choose Azure Virtual Machine

3. I opened the windowsvirtualmachine.json file.  In the JSON outline (left-hand side), I right-clicked on Resources and added a new resource – PowerShell DSC Extension:

Open VM json file

4. I added the Powershell Data file (.psd1).  However, I ended up not using it because the deployment script did not find it (see details below).  Instead, I uploaded the data file to Github and used a link (URL) to it in the JSON template.

add Powershell data file

5. I modified the WindowsVirtualMachine.json file adding the parameters and variables that will be used in the JSON document:

  "parameters": {
    "adminUsername": {
      "type": "string",
      "minLength": 1,
      "metadata": {
        "description": "Username for the Virtual Machine."
      }
    },
    "adminPassword": {
      "type": "securestring",
      "metadata": {
        "description": "Password for the Virtual Machine."
      }
    },
    "dnsNameForPublicIP": {
      "type": "string",
      "minLength": 1,
      "metadata": {
        "description": "Globally unique DNS Name for the Public IP used to access the Virtual Machine."
      }
    },
    "windowsOSVersion": {
      "type": "string",
      "defaultValue": "2012-R2-Datacenter",
      "allowedValues": [
        "2008-R2-SP1",
        "2012-Datacenter",
        "2012-R2-Datacenter"
      ],
      "metadata": {
        "description": "The Windows version for the VM. This will pick a fully patched image of this given Windows version. Allowed values: 2008-R2-SP1, 2012-Datacenter, 2012-R2-Datacenter."
      }
    },
    "_artifactsLocation": {
      "type": "string",
      "metadata": {
        "description": "Auto-generated container in staging storage account to receive post-build staging folder upload"
      }
    },
    "_artifactsLocationSasToken": {
      "type": "securestring",
      "metadata": {
        "description": "Auto-generated token to access _artifactsLocation"
      }
    },
    "DSC-VMDCUpdateTagVersion": {
      "type": "string",
      "defaultValue": "1.0",
      "metadata": {
        "description": "This value must be changed from a previous deployment to ensure the extension will run"
      }
    },
    "sizeOfDataDiskInGB": {
      "type": "int",
      "defaultValue": 100,
      "metadata": {
        "description": "Size of each data disk in GB"
      }
    },
    "FQDomainName": {
      "type": "string",
      "metadata": {
        "description": "Domain name to give to new ADDS Domain"
      }
    },
    "AdminUser1CredsParam": {
      "type": "securestring",
      "metadata": {
        "description": "Admin User password"
      }
    }
  },
  "variables": {
    "imagePublisher": "MicrosoftWindowsServer",
    "imageOffer": "WindowsServer",
    "OSDiskName": "osdiskforwindowssimple",
    "DataDiskName": "datadisk1nocache",
    "nicName": "myVMNic",
    "addressPrefix": "10.0.0.0/16",
    "subnetName": "Subnet",
    "subnetPrefix": "10.0.0.0/24",
    "vhdStorageType": "Standard_LRS",
    "publicIPAddressName": "myPublicIP",
    "publicIPAddressType": "Dynamic",
    "vhdStorageContainerName": "vhds",
    "vmName": "ARMDCVM",
    "vmSize": "Standard_A2",
    "virtualNetworkName": "MyVNET",
    "vnetId": "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]",
    "subnetRef": "[concat(variables('vnetId'), '/subnets/', variables('subnetName'))]",
    "vhdStorageName": "[concat('vhdstorage', uniqueString(resourceGroup().id))]",
    "diagnosticsStorageAccountName": "[variables('vhdStorageName')]",
    "diagnosticsStorageAccountResourceGroup": "[resourcegroup().name]",
    "accountid": "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', variables('diagnosticsStorageAccountResourceGroup'), '/providers/', 'Microsoft.Storage/storageAccounts/', variables('diagnosticsStorageAccountName'))]",
    "wadmetricsresourceid": "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', variables('diagnosticsStorageAccountResourceGroup'), '/providers/', 'Microsoft.Compute/virtualMachines/', variables('vmName'))]",
    "DSC-VMDCArchiveFolder": "DSC",
    "DSC-VMDCArchiveFileName": "armdscvmdc.zip",
    "DSC-ConfigFile": "armdscvmdc.ps1",
    "DSC-ConfigDataFile": "armdscvmdc.psd1"
  },

6. I edited the Virtual Machine resource section to add a Data Disk.  A second disk (with caching off) is required for Domain Controllers in Azure: the ADDS database and SYSVOL must NOT be stored on an Azure OS disk.

          "dataDisks": [
            {
              "name": "datadisk1",
              "diskSizeGB": "[parameters('sizeOfDataDiskInGB')]",
              "lun": 0,
              "vhd": {
                "uri": "[concat('https://', variables('vhdStorageName'), '.blob.core.windows.net/', variables('vhdStorageContainerName'), '/', variables('DataDiskName'), '.vhd')]"
              },
              "createOption": "Empty",
              "caching": "None"
            }
          ]

7. Subsequently, I edited the DSC section of the JSON template to include: configurationArguments (parameters passed to the DSC script) and configurationData (URL to the .psd1 file on Github)

        {
          "name": "Microsoft.Powershell.DSC",
          "type": "extensions",
          "location": "[resourceGroup().location]",
          "apiVersion": "2015-06-15",
          "dependsOn": [
            "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]"
          ],
          "tags": {
            "displayName": "DSC-VMDC"
          },
          "properties": {
            "publisher": "Microsoft.Powershell",
            "type": "DSC",
            "typeHandlerVersion": "2.9",
            "autoUpgradeMinorVersion": true,
            "forceUpdateTag": "[parameters('DSC-VMDCUpdateTagVersion')]",
            "settings": {
              "configuration": {
                "url": "[concat(parameters('_artifactsLocation'), '/', variables('DSC-VMDCArchiveFolder'), '/', variables('DSC-VMDCArchiveFileName'))]",
                "script": "[variables('DSC-ConfigFile')]",
                "function": "Main"
              },
              "configurationArguments": {
                "nodeName": "[variables('vmName')]",
                "FQDomainName": "[parameters('FQDomainName')]"
              },
              "configurationData": {
                "url": "https://raw.githubusercontent.com/GhassanHariz/AzureRMDSC/master/DCVM-DSC.psd1"
              }
            },
            "protectedSettings": {
              "configurationArguments": {
                "DomainAdmin1Creds": {
                  "userName": "[parameters('adminUsername')]",
                  "password": "[parameters('adminPassword')]"
                },
                "AdminUser1Creds": {
                  "userName": "AdminUser1",
                  "password": "[parameters('AdminUser1CredsParam')]"
                }
              },
              "configurationUrlSasToken": "[parameters('_artifactsLocationSasToken')]"
            }
          }
        }

8. The parameters list was updated in WindowsVirtualMachines.parameters.json file:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminUsername": {
      "value": "srvradmin"
    },
    "dnsNameForPublicIP": {
      "value": "dscvmdc"
    },
    "windowsOSVersion": {
      "value": "2012-R2-Datacenter"
    },
    "sizeOfDataDiskInGB": {
      "value": 100
    },
    "DSC-VMDCUpdateTagVersion": {
      "value": "2.0"
    },
    "FQDomainName": {
      "value": "spdomain.local"
    }
  }
}

9. When the DSC extension is added to your project, the DSC script it provides contains an example configuration.  Since we are creating our own configuration, that example was removed and replaced with the configuration below, which provisions the DC and adds an administrative user account.

Configuration Main
{
	Param(
		[string] $NodeName,

		[Parameter(Mandatory)]
		[String]$FQDomainName,

		[Parameter(Mandatory)]
		[PSCredential]$DomainAdmin1Creds,

		[Parameter(Mandatory)]
		[PSCredential]$AdminUser1Creds,

		[Int]$RetryCount=5,
		[Int]$RetryIntervalSec=30
		)

Import-DscResource -ModuleName PSDesiredStateConfiguration
Import-DscResource -ModuleName xActiveDirectory, xComputerManagement, xNetworking, xStorage
 
Node $AllNodes.Where{$_.Role -eq "DC"}.Nodename
    {
        LocalConfigurationManager
        {
            ActionAfterReboot = 'ContinueConfiguration'
            ConfigurationMode = 'ApplyOnly'
            RebootNodeIfNeeded = $true
            AllowModuleOverwrite = $true
        }

        xWaitforDisk Disk2
        {
            DiskNumber = 2
            RetryIntervalSec = $RetryIntervalSec
            RetryCount = $RetryCount
        }

        xDisk ADDataDisk
        {
            DiskNumber = 2
            DriveLetter = 'F'
        }

		# Add DNS
		WindowsFeature DNS
        {
            Ensure = "Present"
            Name = "DNS"
        }

        # Install the Windows Feature for AD DS
        WindowsFeature ADDSInstall {
            Ensure = 'Present'
            Name = 'AD-Domain-Services'
        }

        # Make sure the Active Directory GUI Management tools are installed
        WindowsFeature ADDSTools            
        {             
            Ensure = 'Present'             
            Name = 'RSAT-ADDS'             
        }           

        # Create the ADDS DC
        xADDomain FirstDC {
            DomainName = $FQDomainName
            DomainAdministratorCredential = $DomainAdmin1Creds
            SafemodeAdministratorPassword = $DomainAdmin1Creds
			DatabasePath = 'F:\NTDS'
            LogPath = 'F:\NTDS'
            SysvolPath = 'F:\SYSVOL'
            DependsOn = '[WindowsFeature]ADDSInstall'
        }   
        
        xWaitForADDomain DscForestWait
        {
            DomainName = $FQDomainName
            RetryCount = $RetryCount
            RetryIntervalSec = $RetryIntervalSec
            DependsOn = '[xADDomain]FirstDC'
        } 

        # Enable the Active Directory Recycle Bin
        xADRecycleBin RecycleBin
        {
           EnterpriseAdministratorCredential = $DomainAdmin1Creds
           ForestFQDN = $FQDomainName
           DependsOn = '[xADDomain]FirstDC'
        }
        
        # Create an admin user so that the default Administrator account is not used
        xADUser FirstUser
        {
            DomainAdministratorCredential = $DomainAdmin1Creds
            DomainName = $FQDomainName
            UserName = $AdminUser1Creds.UserName
            Password = $AdminUser1Creds
            Ensure = 'Present'
            DependsOn = '[xADDomain]FirstDC'
        }
        
        xADGroup AddToDomainAdmins
        {
            GroupName = 'Domain Admins'
            MembersToInclude = $AdminUser1Creds.UserName
            Ensure = 'Present'
            DependsOn = '[xADUser]FirstUser'
        }     
    }

}

10. The DSC configuration data file (.psd1) on GitHub contains the following (a local compile check is shown after the listing):

# Configure all of the settings we want to apply for this configuration
@{
    AllNodes = @(
        @{
            NodeName = '*'
            PSDscAllowPlainTextPassword = $true
            PSDscAllowDomainUser = $true
        },
        @{ 
            Nodename = "localhost"
            Role = "DC"
        }
    )
}
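Before deploying, you can sanity-check the configuration and its data file by compiling them locally into MOF documents.  This is a minimal sketch that assumes the xActiveDirectory, xComputerManagement, xNetworking and xStorage modules are installed on your workstation; the credentials are placeholders, since the real values are injected by the DSC extension's protectedSettings at deployment time:

# Load the configuration definition (assumes armdscvmdc.ps1 is in the current folder)
. .\armdscvmdc.ps1

# Placeholder credentials for illustration only
$domainAdminCreds = Get-Credential -Message 'Domain admin credential'
$adminUser1Creds  = Get-Credential -Message 'AdminUser1 credential'

# Compile against the .psd1 configuration data; this produces
# localhost.mof and localhost.meta.mof under .\Main
Main -ConfigurationData .\DCVM-DSC.psd1 `
     -FQDomainName 'spdomain.local' `
     -DomainAdmin1Creds $domainAdminCreds `
     -AdminUser1Creds $adminUser1Creds `
     -OutputPath .\Main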

11. Once all the JSON and DSC files are customized, the project can be deployed to Azure.  Right-click on the project and select Deploy > New.  Choose your Azure account and subscription, verify the parameters you entered, and click OK to deploy to the Azure resource group.  A PowerShell alternative is sketched below.
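If you prefer to deploy outside Visual Studio, the same template and parameters file can be deployed with the AzureRM PowerShell module.  This is a minimal sketch: the resource group name, location and template file name are placeholders, it assumes the DSC artifacts have already been staged (with the _artifactsLocation parameters supplied), and the secure string parameters (adminPassword, AdminUser1CredsParam) are prompted for because they are not stored in the parameters file:

# Create the target resource group if it does not already exist
New-AzureRmResourceGroup -Name 'MyResourceGroup' -Location 'West Europe' -Force

# Deploy the template; -Verbose streams progress similar to the VS output window
New-AzureRmResourceGroupDeployment `
    -ResourceGroupName 'MyResourceGroup' `
    -TemplateFile .\WindowsVirtualMachine.json `
    -TemplateParameterFile .\WindowsVirtualMachines.parameters.json `
    -Verbose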

12. After about 30 minutes, the deployment completes.  Log in to the Azure portal and verify that your DC is up and running.  You can also connect to the DC via Remote Desktop to inspect the newly created DC.

Domain Controller Azure
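Once connected over Remote Desktop, a few PowerShell checks can confirm the configuration did what we expect.  This is a minimal sketch that assumes the ActiveDirectory module (installed by the RSAT-ADDS feature in the configuration) is available:

# Confirm the forest/domain was created with the expected name
Get-ADDomain | Select-Object DNSRoot, DomainMode, PDCEmulator

# Confirm the AD DS database and logs landed on the data disk (F:)
Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Parameters' |
    Select-Object 'DSA Database file', 'DSA Working Directory'

# Confirm AdminUser1 was created and added to Domain Admins
Get-ADGroupMember -Identity 'Domain Admins' | Select-Object Name, SamAccountName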

Lessons learned

Why use Visual Studio?

Microsoft Visual Studio 2015 Community Edition is an excellent development IDE, and it is well suited to authoring ARM templates and DSC scripts.  It can also deploy the whole solution to Azure, letting you monitor the progress and catch any errors.

The advantage of VS for ARM and DSC

You can author ARM templates and DSC scripts in any text editor or IDE and then deploy them using PowerShell.  However, in my experience, deploying the templates and DSC code from VS provides more feedback about the progress of the deployment: I do not have to add any parameters for verbose output, and I do not have to check any log files.

Furthermore, VS is well suited to PowerShell DSC because you can add the PowerShell extension (from the Visual Studio Marketplace), which lets you author and edit PowerShell code.

Issues and Solutions

DSC Configuration Data File not being found during deployment

As mentioned above, the .psd1 file was originally added to the VS project.  Once it is added, you have to right-click on the .psd1 file and select Properties.  For All Configurations, set:

  • Build Action: Always
  • Copy to Output Directory: Copy Always

When I tried to change the Build Action, I received an error: “An error occurred while saving the edited properties listed below: Build Action.  One or more values are invalid.  Mismatched PageRule with the wrong ItemType.”

The workaround is to apply the second setting first (Copy to Output Directory), and then apply the first setting (Build Action).

This workaround allowed the .psd1 file to be copied to the correct location in the staging area, along with the .ps1 file and the Zip archive.  However, the deployment process still complained that the .psd1 file was not found “after 19 attempts”.

Searching online for this error turned up posts suggesting that a duplicate path in the PSModulePath environment variable might be the culprit.  However, none of the suggested workarounds allowed the .psd1 file to be found.

Workaround: host the .psd1 file online.  I placed the file on GitHub (referenced via the configurationData URL shown earlier) in order to get it deployed.

Active Directory (ADDS) and Hyper-V posts on Spiceworks Community

I have written two articles about virtual domain controllers (DCs) on the Spiceworks IT Pro Community site:

Migrate Active Directory domain controllers and keep the same hostname and IP address

We had two ADDS DCs on two HP ProLiant servers.  We purchased new HP servers, and the DCs needed to be moved to them.  However, we wanted to keep each Domain Controller’s hostname and IP address the same.

The article is a step-by-step tutorial on how we did that.  We completed the migration in about four hours, with no Active Directory problems after the migration.

How to synchronize a virtual Domain Controller (DC) with a time source

In this article, I discuss the recommended way for an Active Directory Domain Services (ADDS) Domain Controller (DC) running on a Hyper-V Virtual Machine (VM) to synchronize its time with a time source.

Normally, a Hyper-V guest VM gets its time from its host, and the host gets its time from the DC that holds the PDC emulator role.  However, when that DC is itself a guest VM, the Hyper-V host tries to synchronize its time with its own guest, and the guest in turn synchronizes with the host.  This circular dependency can lead to time synchronization problems.

You can find the recommended solution in the article, and it does not involve turning off the Time Synchronization integration service on the VM.
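For context only (the article walks through the exact recommended steps, which may differ), pointing the DC that holds the PDC emulator role at an external NTP source typically looks something like the following; the NTP peers below are placeholders:

# On the DC holding the PDC emulator role: sync from an external NTP source
w32tm /config /manualpeerlist:"0.pool.ntp.org,0x8 1.pool.ntp.org,0x8" /syncfromflags:manual /reliable:yes /update
Restart-Service w32time
w32tm /resync /rediscover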

An IoT solution using IoT Hub, Azure Stream Analytics and Power BI Desktop

Goal of the IoT Solution

Get the current temperature for Jeddah City and monitor it for anomalies (ups and downs) over a period of time (a few days).

Overview of the IoT Solution

The IoT solution gets the current weather data for Jeddah (from a weather web site) and sends it to Azure IoT Hub.  From the IoT Hub, the data flows through Azure Stream Analytics (ASA), which filters it and stores it in Azure table storage.  I then read the data in the Power BI Desktop application, plotted it on a chart, and monitored the temperature data for anomalies and variances.

Details of the IoT Solution

Since the sensors for my TI microcontroller have yet to arrive, I developed a solution that simulates those temperature sensors by reading current temperature data from http://www.openweathermap.org.  The weather data is in JSON format and is consumed through a REST API.  In the future, instead of using a weather site to get temperature data, I can easily get real temperature data from sensors on a microcontroller or a single-board computer.  For example, sensors attached to a Raspberry Pi or an Arduino board can easily provide this data.

Reading the Weather Data from the OpenWeatherMap web site

// Requires: using Newtonsoft.Json.Linq;
// Get today's temperature for Jeddah from OpenWeatherMap
// ("key" is a placeholder for your OpenWeatherMap API key)
JObject jsonData = JObject.Parse(new System.Net.WebClient().DownloadString(
    "http://api.openweathermap.org/data/2.5/forecast/daily?q=Jeddah&type=accurate&mode=json&units=metric&cnt=3&appid=key"));

// "cod" == "200" indicates a successful response
if (jsonData.SelectToken("cod").ToString() == "200")
{
    todayTemperature = (double)jsonData.SelectToken("list[0].temp.day");
    Console.WriteLine($"Temp: {todayTemperature}");
}

A Windows console application gets this temperature data (from openweathermap.org) and sends the telemetry to an Azure IoT Hub.

The data is sent to Azure IoT Hub and picked up by Azure Stream Analytics

From the IoT Hub, the data is picked up by an Azure Stream Analytics job.  This is essentially a query (similar to SQL) that takes the data from the IoT Hub, validates it, and places it in an Azure Storage table.

SELECT
    DeviceId,
    telemetryTime,
    temperature AS TemperatureReading
INTO
    TemperatureTableStorage
FROM
    TempSensor
WHERE
    DeviceId IS NOT NULL
    AND EventProcessedUtcTime IS NOT NULL
    AND telemetryTime IS NOT NULL

Finally, analysis for anomalies is done in Power BI Desktop

Once the data is in an Azure Storage table, I performed cold-path analytics on it using Power BI Desktop, which reads the data directly from Azure Storage.  I then analyzed the data and plotted it in different ways.

The image below is a screenshot of a stacked column chart of temperature readings against time.  It shows the variance in Jeddah’s temperature over the course of the day on 15 May 2016.

PowerBI Desktop showing temperature data plot