Azure & Cloud Foundry – Setting up a Multi-Cloud Environment

This week I presented at the Cloud Foundry Summit 2016 Europe in Frankfurt, of course about running Cloud Foundry on Azure and Azure Stack. It was great being here, especially because one of the two main global ISV partners I am working with on the engineering side was here as well and is even a Gold sponsor of the event. It was an honor and a great pleasure for me to be part of this summit … and great to finally have a technical session at a non-Microsoft conference again :)

Indeed, one reason for this blog post is that I ran out of time during my session and could show only small parts of the last demo.

Anyway, let’s get to the more technical part of this blog post. My session was all about running CF in public, private, and hybrid clouds with Azure involved in some way. This is highly relevant since most enterprises are driving a multi-cloud strategy of some kind:

  • Either they embrace hybrid cloud and run deployments in the public cloud as well as in their own data centers for various reasons, or
  • they want to distribute and minimize risk by running their solutions across two (or more) public cloud providers.

Although my session focused on running Cloud Foundry on Azure, many of the concepts and architectural insights presented can be reused for deployments with other cloud vendors or private clouds as well.

The basics – Running Cloud Foundry on Azure and Pivotal

Microsoft has developed a BOSH CPI that enables BOSH-based deployments of Cloud Foundry on Azure. The CPI is developed entirely as an open source project and was contributed to the Cloud Foundry Incubator on GitHub.

Based on this CPI, there are two main ways for deploying Cloud Foundry clusters on Microsoft Azure:

There is very detailed guidance available in those GitHub repositories that explains all the details. I would suggest following this one since it is by far the easiest: Deploy Cloud Foundry on Azure – and always follow the "via ARM templates" suggestions of the docs.

Finally, in addition to Azure, to completely follow this post you need a second CF cluster running in another cloud. By far the easiest way is to set up a trial account on Pivotal Cloud, which provides you with a sort of "Cloud-Foundry-as-a-Service". Follow these steps for doing so…

A Multi-Cloud CF Architecture with Azure on one side

There are many reasons for multi-cloud environments. Some include running parts in private clouds for legal and compliance reasons, while others include spreading risk across multiple cloud providers for disaster recovery. The example in this post focuses exactly on the multi-cloud DR case since it covers two public cloud providers:


  • Azure Traffic Manager acts as a DNS-based load balancer. We will configure Traffic Manager with a priority policy, which routes traffic based on priority; if one cloud fails, Traffic Manager routes traffic to the other cloud.
  • The Azure Load Balancer is a component you get "for free" in Azure and don’t really need to take care of. It balances traffic across the front nodes of your CF cluster and is configured automatically for you if you follow the guidance above for deploying CF on Azure.
  • Inside each CF cluster, we need to register the DNS names used by Traffic Manager and configure the CF routers to route requests on those domains to our apps appropriately.

Setting up Traffic Manager

Let’s start with setting up Azure Traffic Manager since we’ll need its domain name for the configuration of the apps in both Cloud Foundry targets. You can add Azure Traffic Manager as a resource to the resource group of your Cloud Foundry deployment or to any other resource group. In my case, I deployed Traffic Manager in a separate resource group as shown in the following screenshot:

Traffic Manager Setup

The important piece for now is the domain name of your Traffic Manager profile. The actual endpoints for Traffic Manager do not need to be configured at this point – we will look at that later.

Deploying the sample app to Pivotal Web Services

As a next step, we deploy the sample application to Pivotal Web Services and take note of the (probably random) domain name it has associated to the application.

cf login -a $pivotalApiEndpoint
cf target -o $pivotalOrg -s $pivotalSpace
cf push -f ./sampleapp/manifest.yml -p ./sampleapp
cf set-env multicloudapp REGION "Pivotal Cloud"
cf restage multicloudapp
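The manifest.yml referenced by the push commands above is not shown in the post. A minimal sketch of what such a manifest might contain could look like this – the app name matches the commands above, everything else (memory, instance count, buildpack) is an assumption:

```yaml
---
applications:
- name: multicloudapp   # must match the name used in cf set-env / cf map-route
  memory: 256M
  instances: 1
  buildpack: ruby_buildpack
```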

To get the domain name and IP, just execute cf app multicloudapp and take note of the domain name as shown in the following figure:

Pivotal App Domain Name

Deploying the App into Cloud Foundry on Azure

The deployment of the sample app into Azure goes exactly the same way, except that we’ll need to use different API end-points, organization names and spaces inside of Cloud Foundry:

cf login -a $azureCfApiEndpoint
cf target -o $azureOrg -s $azureSpace
cf push -f ./sampleapp/manifest.yml -p ./sampleapp
cf set-env multicloudapp REGION "Microsoft Azure"
cf restage multicloudapp

The Cloud Foundry API end-point I used above is the one that is registered by default when using the ARM-based deployment of open source Cloud Foundry with the Azure Quickstart Templates. The DNS-registration mechanism used there is documented here.

Also note the environment variable I am setting in the scripts above using cf set-env multicloudapp REGION "xyz". That is used by our sample application (written in Ruby in this case) to output which region the app is running in. That way, we can see whether we are directed to the app deployed in Azure or the one in Pivotal Web Services.

Finally, if you’re new to Azure, the best way to find the public IP created for your CF cluster is to look up the public IP address resource in the Azure Portal inside the resource group of your Cloud Foundry cluster. Another way – if you are a shell scripter – is to use the following command with the Azure Cross-Platform CLI:

azure network public-ip show --resource-group YOUR-RESOURCE-GROUP YOUR-IP-NAME
info:    Executing command network public-ip show
+ Looking up the public ip "YOUR-IP-NAME"
data:    Id                              : /subscriptions/YOUR-SUBSCRIPTION-ID/resourceGroups/YOUR-RESOURCE-GROUP/providers/Microsoft.Network/publicIPAddresses/mszcfbasics-cf
data:    Name                            : YOUR-IP-NAME
data:    Type                            : Microsoft.Network/publicIPAddresses
data:    Location                        : northeurope
data:    Provisioning state              : Succeeded
data:    Allocation method               : Static
data:    IP version                      : IPv4
data:    Idle timeout in minutes         : 4
data:    IP Address                      :
data:    IP configuration id             : /subscriptions/YOUR-SUBSCRIPTION-ID/resourceGroups/marioszpCfSimple/providers/Microsoft.Network/networkInterfaces/SOME-ID/ipConfigurations/ipconfig1
data:    Domain name label               : marioszpcfsimple
data:    FQDN                            :
info:    network public-ip show command OK

Configuring Traffic Manager Endpoints

Next, we need to tell Azure Traffic Manager the endpoints to which it should direct requests that arrive on the DNS record registered with Traffic Manager.

In our case, we use a simple priority-based policy, which means Traffic Manager always tries to direct requests to the endpoint with the highest priority unless that endpoint is unresponsive. For full documentation about routing policies, please refer to the Azure Traffic Manager docs.

Traffic Manager Endpoints

As you can see from the above, we have two endpoints:

  • Azure Endpoint, which goes against the public IP that the scripts and BOSH deployed for us when we set up Cloud Foundry on Azure at the beginning.
  • External Endpoint, which goes against the domain name that Pivotal Web Services registered for our app.
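The same endpoint configuration can also be expressed declaratively in an ARM template. The following fragment is purely illustrative – the profile name, API version, and the Pivotal target domain are assumptions, not the exact template from my deployment:

```json
{
  "type": "Microsoft.Network/trafficManagerProfiles",
  "apiVersion": "2015-11-01",
  "name": "multicloudapp-tm",
  "location": "global",
  "properties": {
    "trafficRoutingMethod": "Priority",
    "dnsConfig": { "relativeName": "multicloudapp-tm", "ttl": 300 },
    "monitorConfig": { "protocol": "HTTP", "port": 80, "path": "/" },
    "endpoints": [
      {
        "name": "pivotal-endpoint",
        "type": "Microsoft.Network/trafficManagerProfiles/externalEndpoints",
        "properties": { "target": "multicloudapp.cfapps.io", "priority": 1 }
      },
      {
        "name": "azure-endpoint",
        "type": "Microsoft.Network/trafficManagerProfiles/azureEndpoints",
        "properties": {
          "targetResourceId": "[resourceId('Microsoft.Network/publicIPAddresses', 'YOUR-IP-NAME')]",
          "priority": 2
        }
      }
    ]
  }
}
```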

Let’s give it a try…

Now, in the previous configuration for Traffic Manager, we defined that the Pivotal deployment has priority #1 and will therefore be preferred by Traffic Manager for traffic routing. So let’s open up a browser and navigate to the Traffic Manager DNS name of your deployment:

not working

Of course, a Cloud Foundry veteran immediately spots what that means. I am not a veteran in that area, so I fell into the trap…

Configuring Routes in Cloud Foundry

What I originally forgot when setting this up was configuring routes for the Traffic Manager domain in my Cloud Foundry clusters. Without them, Cloud Foundry rejects requests coming in through that domain since it does not know about it.

We need to configure the routes on both ends to make it work. As shown below, we add the Traffic Manager domain to the routes and ensure CF routes traffic from that domain to our multi-cloud sample app:


# First do this for Pivotal
cf login -a $pivotalApiEndpoint
cf target -o $pivotalOrg -s $pivotalSpace

cf create-domain $pivotalOrg $trafficMgrDomain
cf create-route $pivotalSpace $trafficMgrDomain
cf map-route multicloudapp $trafficMgrDomain

# Then do this for the CF Cluster on Azure
cf login -a $azureCfApiEndpoint
cf target -o $azureOrg -s $azureSpace

cf create-domain $azureOrg $trafficMgrDomain
cf create-route $azureSpace $trafficMgrDomain
cf map-route multicloudapp $trafficMgrDomain

Now let’s give it a try again and see what happens. This time we should see our Ruby sample app running and reporting that it runs in Pivotal, since we gave the Pivotal-based deployment the higher priority in Azure Traffic Manager.
it works

Fixing Routes on Azure with Traffic Manager

After I did the route mapping on Azure, Traffic Manager still claimed that the Azure side of the house was Degraded, despite the route being configured. Initially, I didn’t understand why.

I didn’t have this problem when I first tried this setup. But back then, I had not assigned a DNS name to the Cloud Foundry public IP in Azure. I changed that in between while trying something else and assigned a DNS name to the Azure public IP for the CF cluster. This led Traffic Manager to route against that DNS name instead of the IP.

For troubleshooting, I initiated a failover and stopped the app on the Pivotal side (see next section) to make sure Traffic Manager would try to route to Azure. A tracert finally told me what was going on:

C:\code\github\mszcool\cfMultiCloudSample [master ≡]> tracert

Tracing route to []
over a maximum of 30 hops:

  1     5 ms     5 ms     4 ms
  2     2 ms     1 ms     1 ms
  3     2 ms     1 ms     2 ms
  4     5 ms     5 ms     5 ms
  5     8 ms     7 ms     7 ms  f-ed1-i.F.DE.NET.DTAG.DE []

Looking at the selected route, we immediately spot that the Traffic Manager domain gets resolved to the domain of the Azure public IP. So my route on the CF side of the house was simply wrong: the route for Azure should not target the Traffic Manager domain, but rather the custom domain assigned to the Cloud Foundry cluster’s public IP in Azure:

cf map-route multicloudapp

C:\code\github\mszcool\cfMultiCloudSample [master ≡]> cf routes
Getting routes for org default_organization / space dev as admin ...

space   host   domain                                            port   path   type   apps            service
dev                          multicloudapp
dev                                multicloudapp

Testing a failover

Of course, we want to test whether our failover strategy really works. For this purpose, we stop the app in the Pivotal environment by executing the following commands:

cf login -a $pivotalApiEndpoint
cf target -o $pivotalOrg -s $pivotalSpace
cf stop multicloudapp

After that, we need to wait a while until Traffic Manager detects that the application is not healthy. It can then take a few more seconds or minutes until the DNS record updates are propagated and we see the failover working (the smallest DNS TTL you can set is 300 s as of today).

To watch what goes on, the simplest way is to open the Azure Traffic Manager configuration in the Azure Portal. At some point we should see one of the endpoints change its status from Online to Degraded. When opening a browser and navigating to the Traffic Manager URL, we should now get redirected to the Azure-based deployment (which we can see because our app outputs the content of the environment variable we set differently for each deployment):

failover test

Final Words

I hope this gives you a nice start in setting up a multi-cloud Cloud Foundry environment across Azure and a third-party cloud or your own data center. I will certainly try to continue this conversation on my blog. There are tons of other cool things to explore with Cloud Foundry in relation to Azure, and I’ll try to cover at least some of those. Let me know what you think!

As usual – all the code is available on my GitHub in the following repository:

Azure Virtual Machines – A Solution for Instance Metadata in Linux (and Windows) VMs

At SAP Sapphire we announced the availability of SAP HANA on Azure. My small contribution to this was working on a case that was shown as a demo in the keynote at SAP Sapphire 2016: Sports Basement with HANA on Azure. It was meant as a showcase and proof for running HANA One workloads in Azure DS14 VMs, and it was the first case of HANA running productively on Azure outside of the SAP HANA on Azure Large Instances.

While we proved we can run HANA One in DS14 VMs, what’s still missing is the official Marketplace image. We are working on that on-boarding of HANA One into the Azure Marketplace at the time I am writing this post. The post itself is about a very specific challenge that I know many others face as well. While Azure will have a built-in solution, it is not available today (August 2016), so this might be of help for you!

Scenario: A VM reading and modifying data about itself

This is a very common scenario, and HANA One needs it as well. On other cloud platforms, especially AWS, a virtual machine can query information about itself without any hurdles through an instance metadata service. On Azure, as powerful as it is, we don’t have such a service available yet (as of August 2016). To be precise, we do, but it currently delivers information about regular maintenance only; see here for further details. While such a service is in the works, it is not available yet.

Instance metadata is especially interesting for software providers which want to offer their solutions through the marketplace. The metadata can be used for various aspects including association and validation of licenses or protection of software assets inside of the VM.

But what if a VM needs to modify settings through cloud provider management APIs automatically? Even with an instance metadata service available, such requirements need a more advanced approach.

Solution: A possible approach outlined (and code on my GitHub Repo)

Based on that, I started thinking about this challenge, prototyping it, and sharing it with the broader technical community. With Azure having the concept of Service Principals, I tried the following path:

  1. If we could pass in a Service Principal at the creation of the VM, we’d have all we need to call into Azure Resource Manager APIs.
  2. The VM can identify itself through its "Unique VM ID". So we could query Azure Resource Manager APIs and find the VM based on this ID.
  3. For Marketplace use cases it is necessary that the user is FORCED to enter the credentials. So an ARM template with mandatory parameters for passing in the Service Principal credentials is needed.

With this in place we can solve both problems with a single solution: equipped with the right permissions, a Service Principal can query instance metadata through Azure Resource Manager APIs and modify virtual machine settings at the same time. Indeed, the Azure Cloud Foundry BOSH solution uses that approach as well, although it does not need to "identify" virtual machines – it just creates and deletes them…

For most Marketplace vendors, including the case above, the VM needs to change details about itself. So there would need to be a way for the VM to find itself through the VM Unique ID. Since nobody was able to answer the question of whether that’s possible, I prototyped it with the Azure CLI.

Important Note: This is considered a prototype to prove that what is outlined above generally works. For production scenarios you’d need to code this with professional frameworks, protect secrets properly, and build it into your product.

GitHub Repository: I’ve prototyped the entire solution and published it on my GitHub Repository here:


Step #1: Create a Service Principal

The first step is creating a Service Principal. That is not an easy task, especially when you think about offerings in a marketplace where business people want fast and simple on-boarding.

That is exactly why I created this solution prototype on my GitHub repository (with a blog post to follow). The idea of this prototype is to provide a ready-to-use service that creates Service Principals in your own subscription.

I still run this on my Azure subscription, so if you need a Service Principal and you don’t like scripting, just use my tool to create it. Note: please use in-private browsing and sign in with a Global Admin (or get a Global Admin to do an admin consent for my tool in your tenant).

If you love scripting, you can use tools such as Azure PowerShell or the Azure Cross-Platform CLI. In my prototype, I built the entire set of scripts with the Azure CLI and tested them on Ubuntu Linux (14.04 LTS). Even cooler, I actually developed and debugged all the scripts on the new Bash on Ubuntu on Windows:
Bash on Windows

The sample script below creates a Service Principal and assigns it the roles needed to read VM metadata in the subscription (it would be better to scope this to just the resource group in which you want to create the VM… I kept it like this for convenience).

# Each Service Principal in Azure AD is backed by an 'Application-registration'
azure ad app create --name "$servicePrincipalName" \
                    --home-page "$servicePrincipalIdUri" \
                    --identifier-uris "$servicePrincipalIdUri" \
                    --reply-urls "$servicePrincipalIdUri" \
                    --password $servicePrincipalPwd

# I use JQ to extract data out of JSON results such as the AppId
createdAppJson=$(azure ad app show --identifierUri "$servicePrincipalIdUri" --json)
createdAppId=$(echo $createdAppJson | jq --raw-output '.[0].appId')

azure ad sp create --applicationId "$createdAppId"

Note: I created the App and the Service Principal in separate steps since I needed to read both the App ID (which is also used later to log in with the Service Principal through the Azure CLI) and the Service Principal Object ID anyway.

Note: JQ is a really handy command-line tool for extracting data from the neat JSON responses of the Azure CLI. Take a look at further details here.

After the App and the Service Principal are both created, I can assign roles to the Service Principal so that it can query the VM metadata in my subscription:

# If I would create the resource group earlier, I could use the
# --resource-group switch instead of the --subscription switch here to scope
# permissions to the resource group of the VM to-be-created, only.
azure role assignment create --objectId "$createSpObjectId" \
                             --roleName Reader \
                             --subscription "$subId" 

Finally, to complete the work, I needed the tenant ID of the Azure AD tenant for the target subscription, which is also required for logging in with a Service Principal through the Azure CLI. The following code snippet, which retrieves it, sits at the very beginning of the script:

# Get the entry for the target subscription
accountsJson=$(azure account list --json)

# The Subscription ID is needed throughout the script
subId=$(echo $accountsJson | jq --raw-output --arg pSubName $subscriptionName '.[] | select(.name == $pSubName) | .id')

# Finally get the TenantID of the Azure AD tenant which is associated to the Azure Subscription:
tenantId=$(echo $accountsJson | jq --raw-output --arg pSubName $subscriptionName '.[] | select(.name == $pSubName) | .tenantId')

With those pieces of data in place – the tenantId, the appId, and the password selected at app creation – we can log in with the Service Principal using the Azure CLI as follows:

azure telemetry --disable
azure config mode arm
azure login --username "$appId" --service-principal --tenant "$tenantId" --password "$pwd"

Note: Since we want to log in from a script that runs automated in the VM to extract the metadata at provisioning time (in my sample – in the real world this could happen on a regular basis with a cron job or something similar), we need to avoid any user prompts. The latest versions of the Azure CLI prompt for telemetry data collection on the first call after installation. In an automation script you should always turn this off with the first command (azure telemetry --disable) in your script.

Step #2: A Metadata Extraction Script

Okay, now we have a Service Principal that can be used by backend jobs to extract metadata for the VM in an automated way, e.g. with the Azure CLI. Next we need a script that does exactly that. For my prototype, I’ve created a shell script that does exactly that, and I inject it into the VM through the Custom Script Extension for Linux.

Note: Since the SAP HANA One team uses Linux as their primary OS, I just developed the entire prototype with Shell-Scripts for Linux. But fortunately, due to the Bash on Ubuntu on Windows 10, you can also run those from your Windows 10 machine right away (if you have the 2016 Anniversary Update installed).

You can dig into the depths of the entire script if you’re interested. I just extract VM and networking details in there to show how to crack the VM UUID and how to extract related items that are exposed as separate ARM resources attached to the VM.

First things first: the script requires the Azure Cross-Platform CLI to be installed. On a newly provisioned Azure VM, that’s not there, so the script starts by installing the prerequisites:

sudo mkdir /home/metadata
export HOME=/home/metadata

# Install the pre-requisites using apt-get

sudo apt-get -y update
sudo apt-get -y install build-essential
sudo apt-get -y install jq

curl -sL | sudo -E bash -
sudo apt-get -y install nodejs

sudo npm install -g azure-cli

Important Note: Since the script runs as a custom script extension, it does not have things like a user HOME directory set. To make Node.js and npm work, we need a home directory, therefore I set HOME to /home/metadata, to which I also save all the metadata JSON responses during the script.

The next hard thing was cracking the VM Unique ID. This unique ID has been available in Azure for some time and identifies a virtual machine for its entire lifetime in Azure. The ID changes when you take the VM off from Azure or delete and re-create it. But as long as you just provision/de-provision or start/shutdown/start the VM, the ID remains the same.

But the key question is whether you can use that ID to find a VM through ARM REST APIs to read metadata about itself, or even change its settings through Azure Resource Manager REST APIs. Obviously, the answer is yes, otherwise I would not write this post :). But the VM ID presented in responses from Azure Resource Manager REST APIs is different from what you get when reading it inside the VM out of its BIOS asset tags – due to byte-ordering (endianness) differences, as also documented here.
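To see what that conversion means concretely, here is a small standalone sketch. The GUID is made up, and the byte-swap of the first three GUID groups is my reading of the endianness difference described above:

```shell
#!/usr/bin/env bash
# Reverse the byte order of the first three GUID groups, which is how the
# BIOS-reported UUID differs from the ID that ARM reports for the same VM.
swap_vmid() {
  id=$(echo "$1" | tr '[:upper:]' '[:lower:]')
  p1=${id:0:8}; p2=${id:9:4}; p3=${id:14:4}
  echo "${p1:6:2}${p1:4:2}${p1:2:2}${p1:0:2}-${p2:2:2}${p2:0:2}-${p3:2:2}${p3:0:2}-${id:19}"
}

swap_vmid "544A4BC4-C642-7A4E-B4A1-63D54DB2B3FA"
# -> c44b4a54-42c6-4e7a-b4a1-63d54db2b3fa
```

Note that the swap is its own inverse, so the same function converts in both directions.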

So in my Bash-script for reading the metadata, I had to convert the VM ID before trying to use it to find my VM through the ARM REST APIs as follows:

# Read the VMID from the BIOS asset tag (keep only the lower-cased GUID)
vmIdLine=$(sudo dmidecode | grep UUID)
vmId=$(echo "$vmIdLine" | awk '{print tolower($2)}')
echo "---- VMID ----"
echo $vmIdLine
echo "---- VMID ----"
echo $vmId

# Now switch the byte order of the first three GUID groups due to
# encoding differences between the Windows and Linux world
p1=${vmId:0:8}; p2=${vmId:9:4}; p3=${vmId:14:4}
vmId="${p1:6:2}${p1:4:2}${p1:2:2}${p1:0:2}-${p2:2:2}${p2:0:2}-${p3:2:2}${p3:0:2}-${vmId:19}"
echo "---- VMID fixed ----"
echo $vmId

That did the trick to get a VM ID I can use to find my VM through the ARM REST APIs – or through the Azure CLI, since I am using bash scripts here:

# Login, and don't forget to turn off telemetry to avoid user prompts in an automation script.
azure telemetry --disable
azure config mode arm
azure login --username "$appId" --service-principal --tenant "$tenantId" --password "$pwd"

# Get the details for the VM and save it
vmJson=$(azure vm list --json | jq --arg pVmId "$vmId" 'map(select(.vmId == $pVmId))')
echo $vmJson > /home/metadata/vmmetadatalist.json
echo "---- VM JSON ----"
echo $vmJson

What you see above is that there is today (as of August 2016) no way to query the Azure Resource Manager REST APIs by the VM Unique ID; only attributes such as resource group and VM name can be used, and the same applies to the Azure CLI. Therefore I retrieve the list of VMs and filter it down with JQ by the VM ID, which fortunately is delivered as an attribute in the JSON response from the ARM REST APIs.
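The filter pattern used above can be tried standalone without an Azure subscription. The JSON below is a hypothetical stand-in for the output of azure vm list --json:

```shell
# Filter a (made-up) VM list by vmId exactly the way the script above does it
vmId="11111111-2222-3333-4444-555555555555"
vmList='[{"vmId":"00000000-0000-0000-0000-000000000000","name":"othervm"},
         {"vmId":"11111111-2222-3333-4444-555555555555","name":"myvm"}]'
echo "$vmList" | jq -r --arg pVmId "$vmId" 'map(select(.vmId == $pVmId)) | .[0].name'
# -> myvm
```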

Now we have our first metadata asset: a simple list entry with basic attributes for the VM in which we are running. But what if you need more details? The obvious way is to execute azure vm show --json to get the full VM JSON. But even that will not include everything. Let’s say you need the public or the private IP address assigned to the VM. Then you need to navigate the relationships between the Azure Resource Manager resources (specifically the VM and the network interface card). That is where it gets a bit tricky:

# Get the detailed VM JSON with relationship attributes (e.g. the NIC identified through its unique Resource ID)
vmResGroup=$(echo $vmJson | jq -r '.[0].resourceGroupName')
vmName=$(echo $vmJson | jq -r '.[0].name')
vmDetailedJson=$(azure vm show --json -n "$vmName" -g "$vmResGroup")
echo $vmDetailedJson > /home/metadata/vmmetadatadetails.json

# Then get the NIC for the VM through ARM / Azure CLI
vmNetworkResourceName=$(echo $vmJson | jq -r '.[0].networkProfile.networkInterfaces[0].id')
netJson=$(azure network nic list -g $vmResGroup --json | jq --arg pVmNetResName "$vmNetworkResourceName" '.[] | select(.id == $pVmNetResName)')
echo $netJson > /home/metadata/vmnetworkdetails.json

# The private IP is contained in the previously received NIC config (netJson)
netIpConfigsForVm=$(echo $netJson | jq '{ "ipCfgs": .ipConfigurations }')
echo $netIpConfigsForVm > /home/metadata/vmipconfigs.json

# But the public IP is a separate resource in ARM, so you need to navigate and execute a further call
netIpPublicResourceName=$(echo $netJson | jq -r '.ipConfigurations[0].publicIPAddress.id')
netIpPublicJson=$(azure network public-ip list -g $vmResGroup  --json | jq --arg ipid $netIpPublicResourceName '.[] | select(.id == $ipid)')
echo $netIpPublicJson > /home/metadata/vmipconfigspublicip.json

This should give you enough of the needed concepts to retrieve all sorts of VM metadata for your own VM using bash scripting. If you want to translate this to Java, .NET, Node.js or other code, look at the Azure management libraries for the respective runtimes/languages.

Step #3: Putting it all together – the ARM template

Finally, we need to put all of this together. That happens in an ARM template and the parameters this template requests from the user at provisioning time. An ARM template similar to this could be built for a solution-template-based Marketplace offer.

On my GitHub repository for this prototype, the ARM template and its parameters are baked into the files azuredeploy.json and azuredeploy.parameters.json. I won’t go through all the details of these templates. The most important aspects are the parameters section and the VM creation section where I hook up the Service Principal with the script and attach it as a custom script extension. Start with an excerpt of the "parameters" section of the template:

"parameters": {
    "storageAccountName": {
      "type": "string"
    },
    "dnsNameForPublicIP": {
      "type": "string"
    },
    "adminUserName": {
      "type": "string"
    },
    "adminPassword": {
      "type": "securestring"
    },
    "azureAdTenantId": {
      "type": "string"
    },
    "azureAdAppId": {
      "type": "string"
    },
    "azureAdAppSecret": {
      "type": "securestring"
    }
}

The important parameters are the azureAdTenantId, azureAdAppId and azureAdAppSecret parameters. Those together form the sign-in details for the Service Principal as it is used in the script described in the previous section to read out the metadata for the VM on provisioning, automatically.

Reading the metadata is initiated by specifying my script as a custom script extension for the VM in the ARM template, as shown below:

{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('vmName'),'/writemetadatajson')]",
  "apiVersion": "2015-06-15",
  "location": "[parameters('location')]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'))]"
  ],
  "properties": {
    "publisher": "Microsoft.OSTCExtensions",
    "type": "CustomScriptForLinux",
    "typeHandlerVersion": "1.5",
    "settings": {
      "fileUris": [
        "[concat('https://', parameters('storageAccountName'), '')]"
      ]
    },
    "protectedSettings": {
      "commandToExecute": "[concat('bash ', parameters('azureAdTenantId'), ' ', parameters('azureAdAppId'), ' ', parameters('azureAdAppSecret'))]"
    }
  }
}
Since the Azure Linux custom script extension prints a lot of diagnostic details about what it is doing, we need to make sure that our sensitive data, especially the Service Principal’s password, is NOT included in those diagnostics logs (well… protected as well as possible :)). Therefore the commandToExecute setting goes into the protectedSettings section, which is NOT disclosed in any diagnostics logs of the custom script extension.

Important Note: Many templates in the Azure Quickstart Templates gallery use custom script extension version 1.2. To have the commandToExecute setting in the protectedSettings section, you have to use a newer version. For me, version 1.5 – the latest at the time of writing – worked; with older versions it simply didn’t call the script.

Step #4: Trying it out…

Before you can try things out, there’s one thing to prepare: create the storage account and upload the metadata script into that account (argh, next time I’ll just write the scripts to clone my GitHub repository :)). To make it easy, I created a script with 10 parameters that does everything:

  1. Create the Resource group
  2. Create the storage account
  3. Upload the script to the storage account
  4. Update the parameters in azuredeploy.parameters.json to reflect your service principal attributes
  5. Start the deployment with the template and the updated template parameters.

While trying it, I realized that the 10 parameters make it flexible, but it’s still a hard start if you’d like to just quickly try this. So I created another bash script that asks you for all the data interactively and calls the other scripts based on the input you entered. Just like below:

Getting Started

Final Words

With this in place, you have a solution that allows you to do both: read instance metadata of the VM in which your software runs, and (with the right permissions set on the Service Principal) modify aspects of the VM through Azure Resource Manager APIs or command-line interfaces.

Sure, this reads like a complex, long thing. It would be much easier if you could get instance metadata without authentication and Service Principals. All I can say is that this will change and become easier. But for now, this is a working solution, and I hope I’ve provided you with valuable assets that make this goal less complex to achieve!

And even once we have a simpler solution for instance metadata available in Azure, the content above shows some advanced scripting concepts which I hope you can learn from. The coolest thing about it: since the Windows 10 Anniversary Update you can run all of the above on both Windows and Ubuntu Linux, BECAUSE everything is written as Bash scripts.

For me, the nice side-effect of this was experiencing how mature the Windows Subsystem for Linux seems to be. What really surprised me is that I can even run Node Version Manager and build-essential on it (I even tried compiling v5 of Node.js with it, and it ran through and works).

Anyways – if you have any questions, reach out to me on Twitter.

A Deep Dive into Azure AD Multi-Tenant Apps, OAuth/OpenIdConnect Flows, Admin Consent and Azure AD Graph API

I am currently working with one of my main Global Independent Software Vendor (ISV) partners on on-boarding their solution into the Azure Marketplace. The main challenge we face there is that the solution needs to perform some post-provisioning steps in the end-customer’s target subscription as well as in their Azure Active Directory tenant:

  • Creating a Service Principal in the end-customer’s target directory that can be used by the software inside the provisioned VM.
  • Using that service principal to read data from the end-customer’s Azure Subscription.

Note: the end customer in this case is the customer, who purchases the product published by the ISV in the store!

Such cases typically require the creation of "multi-tenant" Azure Active Directory applications, and such an application then needs to access the end-customer’s target directory using the Azure AD Graph API. On top of that, creating Service Principals is not an easy task in itself.

A Multi-Tenant Web App to create Service Principals as Sample

To make this as practical as possible, I decided to create a web app that creates service principals in the target Azure Active Directory of an end-customer that’s using the web app.

This shows how the general multi-tenancy challenge can be solved, and at the same time provides a handy tool for creating Service Principals, which is a hard task on its own.

All the details for using the app and for cloning the source code are available on my GitHub-repository under the link below. In addition, I also run the app on my Azure Subscription as a free-tier Azure Web App.

The documentation shows how to register an application in your Azure AD tenant to make it available as a multi-tenant application, how such an application is reflected in a customer’s target Azure AD tenant, and how to manage access to it.

The sample also demonstrates the various OAuth and OpenIdConnect flows needed in a simple yet practical and useful scenario. All of this should be easy to transfer to your own scenarios, and I found that, even though Microsoft has decent docs for Azure Active Directory out there, such an end-to-end, focused sample is not easy to find. That’s what I tried to create.

The basic/initial OpenIdConnect-Flow for Signing-In

So, let’s start with digging into the OAuth details. First of all, all the theory is well-explained on the official Microsoft Azure and MSDN documentation pages (see last section of the article).

I go down to the protocol-trace level so that it’s easy for developers to understand what’s going on and how simple those protocols indeed are. It should also help with configuring/using other frameworks on all sorts of platforms so that they fit this model.

For all of the below I am using the real deployment of my Service Principal Web App demo mentioned above (note: I might remove that deployment at any point in time, since there is guidance on my GitHub repo for how to deploy it in your own Azure AD tenant as well).

  1. First the user browses to the target application which is secured by Azure AD.

  2. That typically ends up in a redirect to Azure AD as the IdP to get an initial token. A typical redirect request for an OAuth sign-in flow looks as follows (using line-breaks to make it easier to read):

        client_id=---your client id from azure ad app registration---
        &redirect_uri=https%3a%2f%2flocalhost%3a44330%2f HTTP/1.1
    Connection: keep-alive
    Upgrade-Insecure-Requests: 1
    User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
    Accept-Encoding: gzip, deflate, sdch, br
    Accept-Language: en-US,en;q=0.8
    • The client_id-parameter reflects the Client ID that is configured in Azure Active Directory for that application.
    • The scope-parameter contains various additional items used for token validation.
    • The nonce is (typically) used to protect against token replay attacks. Its value provided in the request must match the one in the response and is typically unique per user session.
  3. When the user (assume an Admin) signs in for the first time, a consent dialog is displayed. This is part of the OAuth authorization flow and gives the user a chance to "Accept" or decline the permissions the app needs. Since that is handled by Azure AD as the IdP, we don’t look into the details of the requests issued there.


  4. Once the user accepted this consent, Azure AD posts a token to a target URL which was specified in the earlier request with the redirect_uri parameter. Let’s look at the details (again with newlines for readability):

    POST https://localhost:44330/ HTTP/1.1
    Host: localhost:44330
    Connection: keep-alive
    Content-Length: 2428
    Cache-Control: max-age=0
    Upgrade-Insecure-Requests: 1
    User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36
    Content-Type: application/x-www-form-urlencoded
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
    Accept-Encoding: gzip, deflate, br
    Accept-Language: en-US,en;q=0.8
    Cookie: OpenIdConnect.nonce.Z9f6E8u...

    That post contains an OAuth authorization code in the body. This code can be used to request tokens from Azure AD for downstream calls to other APIs which are also secured by Azure AD. Of course, the code will only work for APIs to which the app has been given permissions in the Azure AD portal.

    Permissions of the App

    For the "Service Principal Demo App" those permissions are highlighted in the screen shot above. The Code therefore would work for requests of tokens for the Azure Active Directory Graph API (identified as and the Azure Service Management and Resource Manager APIs (identified as

  5. When the Service Principal Web App receives that POST, it uses the contained authorization code to request an additional token that permits the app to call into the Azure Active Directory Graph API. This is another token request, executed right after the POST above is received.

    POST HTTP/1.1
    Accept: application/json
    x-client-last-request: a5db36d8-ab46-4dfc-b96e-9dc31cf06a5c
    x-client-last-response-time: 1284
    x-client-last-endpoint: token
    x-client-SKU: PCL.Desktop
    x-client-CPU: x64
    x-client-OS: Microsoft Windows NT 10.0.10586.0
    x-ms-PKeyAuth: 1.0
    client-request-id: 1eb9034c-e02c-4e7b-8c4f-0fe5e2faabfe
    return-client-request-id: true
    Content-Type: application/x-www-form-urlencoded
    Content-Length: 1079
    Expect: 100-continue

    client_id=---your client id from azure ad app registration---&client_secret=---your client secret configured in the azure ad portal---&grant_type=authorization_code&code=---previously received authorization code---&redirect_uri=https%3A%2F%2Flocalhost%3A44330%2F

    Such a request then responds with a new OAuth bearer token that permits us to call into the Azure AD Graph APIs. This token then needs to be added to the HTTP Authorization header on each request. Here’s an example response for the request above:

    HTTP/1.1 200 OK
    Cache-Control: no-cache, no-store
    Pragma: no-cache
    Content-Type: application/json; charset=utf-8
    Expires: -1
    Server: Microsoft-IIS/8.5
    Strict-Transport-Security: max-age=31536000; includeSubDomains
    X-Content-Type-Options: nosniff
    x-ms-request-id: fb2db119-ca9e-421d-8007-6ae7e97d163e
    client-request-id: 1eb9034c-e02c-4e7b-8c4f-0fe5e2faabfe
    x-ms-responsehealth: TargetId=ESTSFE_IN_329;Action=None;Category=None;Health=0;Load=9;
    Set-Cookie: esctx=AAABAA ...;; path=/; secure; HttpOnly
    Set-Cookie: x-ms-gateway-slice=productionb; path=/; secure; HttpOnly
    Set-Cookie: stsservicecookie=ests; path=/; secure; HttpOnly
    X-Powered-By: ASP.NET
    Date: Wed, 29 Jun 2016 21:31:37 GMT
    Content-Length: 3826
      "token_type": "Bearer",
      "scope": "Directory.AccessAsUser.All Directory.ReadWrite.All Group.ReadWrite.All User.Read",
      "expires_in": "3599",
      "ext_expires_in": "3600",
      "expires_on": "1467239498",
      "not_before": "1467235598",
      "resource": "",
      "access_token": "eyJ0eXAiOiJK...",
      "refresh_token": "AAABAAAAiL9Kn2..."

    The response is a JSON document containing some helpful details about the issued token, as well as a refresh token to renew the actual access token. Note: if you need to get a new access token with the refresh token, you still need to include the Client ID and the App Secret in that refresh request.
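To make the two requests above easier to transfer to other frameworks, here is a small sketch that assembles them in code. This is illustrative only: the authority endpoint and the `resource` URI follow the Azure AD v1 endpoints, and all IDs, secrets and codes are placeholders, not values from the demo app.

```python
import urllib.parse
import uuid

AUTHORITY = "https://login.microsoftonline.com/common/oauth2"

def build_authorize_url(client_id, redirect_uri):
    """Step 2: the sign-in redirect to Azure AD (OpenIdConnect hybrid flow)."""
    params = {
        "client_id": client_id,
        "response_type": "code id_token",
        "redirect_uri": redirect_uri,
        "response_mode": "form_post",   # Azure AD POSTs the result back (step 4)
        "scope": "openid profile",
        "nonce": uuid.uuid4().hex,      # must match the nonce echoed back in the token
        "state": uuid.uuid4().hex,
    }
    return AUTHORITY + "/authorize?" + urllib.parse.urlencode(params)

def build_token_request_body(client_id, client_secret, code, redirect_uri):
    """Step 5: form-encoded body exchanging the authorization code for a token."""
    return urllib.parse.urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
        "resource": "https://graph.windows.net/",  # Azure AD Graph API
    })

url = build_authorize_url("11111111-2222-3333-4444-555555555555",
                          "https://localhost:44330/")
body = build_token_request_body("11111111-2222-3333-4444-555555555555",
                                "app-secret", "received-auth-code",
                                "https://localhost:44330/")
```

Any OAuth library on any platform ultimately produces requests of exactly this shape, which is why comparing its traffic against the traces above is such a useful debugging technique.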

OAuth Admin Consent for Multi-Tenant Azure AD Apps

Yikes, the biggest challenge I faced when building the tool was that ordinary Azure AD users (role = ‘User’) were not able to use it. You had to be a ‘Global Admin’ to execute it.

The main reason for that was that my app requires "acting as the signed-in user" against the Azure AD Graph API. And for that, the Azure AD team changed the default behavior, for good reason, a while ago (well, in March 2015):

So, to enable ordinary users to make use of such applications, a Global Admin first needs to "approve" the application for the target directory by running through an OAuth Admin Consent. This is a special type of consent that asks the Global Admin whether he wants to make the permissions the app requires available to ordinary users inside the organization (technically: in the target directory against which the multi-tenant app works, depending on the signed-in user).

The steps are:

  1. The Global Admin needs to sign in to the application.

  2. The application needs to provide an appropriate "on-boarding" function, which essentially initiates the Admin Consent against the target directory of the signed-in user. I did this by simply adding a button to my app that starts the Admin Consent.

    Admin Consent Function

  3. All that button does is composing a URL that goes against the Azure Active Directory OAuth endpoints to walk through the Admin Consent. This leads to the following request that initiates the Admin Consent:

    Connection: keep-alive
    Cache-Control: max-age=0
    Upgrade-Insecure-Requests: 1
    User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
    Accept-Encoding: gzip, deflate, sdch, br
    Accept-Language: en-US,en;q=0.8    

    The really important aspect of that request is the query-string parameter prompt=admin_consent, which does the actual work. I combine the request with issuing an authorization code right away, but I do think that’s optional (I would need to read back in the specs:)).

  4. After that initial Admin Consent is completed, every ordinary user (role = ‘User’) can sign in to the application and make use of it. The Admin Consent essentially lets an administrator approve the application for the whole organization, for security reasons.
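The consent request from step 3 differs from an ordinary sign-in request essentially only by that one parameter. A minimal sketch of composing such a URL (the endpoint follows the common Azure AD v1 authorize endpoint; the client ID and redirect URI are placeholders):

```python
import urllib.parse

def build_admin_consent_url(client_id, redirect_uri):
    """Build the Azure AD authorize URL that triggers the admin consent dialog."""
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "prompt": "admin_consent",  # the parameter that turns this into an admin consent
    }
    return ("https://login.microsoftonline.com/common/oauth2/authorize?"
            + urllib.parse.urlencode(params))

url = build_admin_consent_url("11111111-2222-3333-4444-555555555555",
                              "https://localhost:44330/")
```

In the app this URL is exactly what the "on-boarding" button redirects the Global Admin to.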

The Graph API calls with the issued tokens

Finally, with that Access Token we can make calls into the Azure AD Graph API. The sample-calls for creating a Service Principal are similar to the following types of requests.

  1. First, the app tries to find if the needed "Application" for the Service Principal has been created in Azure AD, already:

    GET$filter=identifierUris/any(iduri:iduri%20eq%20'http%3A%2F%2Fyourappidurienteredinthescreen')&api-version=1.6 HTTP/1.1
    DataServiceVersion: 3.0;NetFx
    MaxDataServiceVersion: 3.0;NetFx
    Accept: application/json;odata=minimalmetadata
    Accept-Charset: UTF-8
    DataServiceUrlConventions: KeyAsSegment
    User-Agent: Microsoft Azure Graph Client Library 2.1.1
    Authorization: Bearer eyJ0eXAiOiJK...
    X-ClientService-ClientTag: Office 365 API Tools 1.1.0612
    Connection: Keep-Alive

    The request above checks whether an Application is registered in the target tenant yourazureadtenantid with the App ID URI http://yourappidurienteredinthescreen. If such an App already exists, the HTTP response will contain an OData-based JSON document with the matching elements.

      "odata.metadata": "$metadata#directoryObjects/Microsoft.DirectoryServices.Application",
  2. If no application exists, it actually creates the application by posting an ApplicationEntity into the Graph API:

    POST HTTP/1.1
    DataServiceVersion: 3.0;NetFx
    MaxDataServiceVersion: 3.0;NetFx
    Content-Type: application/json;odata=minimalmetadata
    Accept: application/json;odata=minimalmetadata
    Accept-Charset: UTF-8
    DataServiceUrlConventions: KeyAsSegment
    User-Agent: Microsoft Azure Graph Client Library 2.1.1
    Authorization: Bearer eyJ0eXAiOiJKV1...
    X-ClientService-ClientTag: Office 365 API Tools 1.1.0612
    Content-Length: 201
    Expect: 100-continue
     "odata.type": "Microsoft.DirectoryServices.Application",
     "displayName": "YourAppDisplayName",
     "identifierUris@odata.type": "Collection(Edm.String)",
     "identifierUris": [

    This POST returns a detailed JSON object which contains all the details about the created App, including its AppId.

  3. Then the application does the same to check whether a Service Principal already exists for the previously created Application.

    GET$filter=appId%20eq%20'b3ccae52-19bc-45a1-a4e4-f572f6963213'&api-version=1.6 HTTP/1.1
    DataServiceVersion: 1.0;NetFx
    MaxDataServiceVersion: 3.0;NetFx
    Accept: application/json;odata=minimalmetadata
    Accept-Charset: UTF-8
    DataServiceUrlConventions: KeyAsSegment
    User-Agent: Microsoft Azure Graph Client Library 2.1.1
    Authorization: Bearer eyJ0eXAiOiJKV1...
    X-ClientService-ClientTag: Office 365 API Tools 1.1.0612

    The response will again contain an OData JSON document with the Service Principal if it already exists. I am skipping the details for now…

  4. Finally, if the Service Principal does not exist, the app creates one with a password credential attached to it. That means this principal can be used by service- and backend-applications.

    POST HTTP/1.1
    DataServiceVersion: 3.0;NetFx
    MaxDataServiceVersion: 3.0;NetFx
    Content-Type: application/json;odata=minimalmetadata
    Accept: application/json;odata=minimalmetadata
    Accept-Charset: UTF-8
    DataServiceUrlConventions: KeyAsSegment
    User-Agent: Microsoft Azure Graph Client Library 2.1.1
    Authorization: Bearer eyJ0eXAiOiJKV1...
    X-ClientService-ClientTag: Office 365 API Tools 1.1.0612
    Content-Length: 627
    Expect: 100-continue
     "odata.type": "Microsoft.DirectoryServices.ServicePrincipal",
     "accountEnabled": true,
     "appId": "b3ccae52-19bc-45a1-a4e4-f572f6963213",
     "displayName": "tttttteeeeeeeessssstttt",
     "passwordCredentials@odata.type": "Collection(Microsoft.DirectoryServices.PasswordCredential)",
     "passwordCredentials": [
             "customKeyIdentifier": null,
             "endDate": "2017-06-29T21:43:15.6654372Z",
             "keyId": "0259571d-a663-4507-94e9-9381629e2116",
             "startDate": "2016-06-29T21:43:15.6639533Z",
             "value": "pass@word1"
     "servicePrincipalNames@odata.type": "Collection(Edm.String)",
     "servicePrincipalNames": [

Note: One piece still missing is assigning appropriate roles through Azure Resource Manager role-based access control, so that the Service Principal can execute the needed Service Management operations against the management APIs.
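The four calls above reduce to two lookup URLs and two JSON payloads. Here is a sketch of how a client could assemble them; the URL shape and property names mirror the traces above (Azure AD Graph API version 1.6), while the tenant name and display names are placeholders:

```python
import json
import urllib.parse

GRAPH = "https://graph.windows.net"

def app_lookup_url(tenant, app_id_uri):
    """Step 1: check whether an Application with the given App ID URI exists."""
    flt = f"identifierUris/any(iduri:iduri eq '{app_id_uri}')"
    return (f"{GRAPH}/{tenant}/applications?"
            + urllib.parse.urlencode({"$filter": flt, "api-version": "1.6"}))

def application_payload(display_name, app_id_uri):
    """Step 2: body for creating the Application entity."""
    return json.dumps({
        "odata.type": "Microsoft.DirectoryServices.Application",
        "displayName": display_name,
        "identifierUris@odata.type": "Collection(Edm.String)",
        "identifierUris": [app_id_uri],
    })

def service_principal_payload(app_id, display_name, password, start, end):
    """Step 4: body for creating the ServicePrincipal with a password credential."""
    return json.dumps({
        "odata.type": "Microsoft.DirectoryServices.ServicePrincipal",
        "accountEnabled": True,
        "appId": app_id,
        "displayName": display_name,
        "passwordCredentials@odata.type": "Collection(Microsoft.DirectoryServices.PasswordCredential)",
        "passwordCredentials": [{
            "customKeyIdentifier": None,
            "startDate": start,
            "endDate": end,
            "value": password,
        }],
        "servicePrincipalNames@odata.type": "Collection(Edm.String)",
        "servicePrincipalNames": [app_id],
    })

url = app_lookup_url("yourtenant.onmicrosoft.com", "http://yourappidurienteredinthescreen")
body = service_principal_payload("b3ccae52-19bc-45a1-a4e4-f572f6963213", "demo-sp",
                                 "pass@word1", "2016-06-29T21:43:15Z", "2017-06-29T21:43:15Z")
```

Each payload would then be POSTed with the bearer token from the earlier token request in the Authorization header.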

Do you really need to know all of these details?

With that, we went through all the protocol details of the OAuth, OpenIdConnect and Graph API calls needed to accomplish an end-to-end task. It’s a very practical look at how all those "sequence diagrams" explaining OAuth play out in the real world.

My intent in showing these details was to help people who work with programming languages and runtimes that don’t have nice SDKs encapsulating those protocol details, so they get at least a high-level overview and starting point without reading the OAuth and OpenIdConnect specs. I know it’s high-level, but it’s practical.

OAuth and OpenId Connect Azure AD Resources

The following links explain all the different query-string parameters of the OAuth/OpenIdConnect flows with Azure AD. They are a great resource for better understanding the HTTP requests I’ve outlined above.

SDKs for languages and Runtimes

Fortunately, if you are a .NET, Java, Node.js, PHP or Python developer, there are numerous examples and resources available. Also for Azure AD’s Graph API there’s a nice tool available to dig into all the JSON and protocol details.

Here are the most important links:

For Graph API there are also good samples and SDKs out there:

I hope that was helpful and gives you good background, or even a handy tool for creating Service Principals. My partner needed to understand how to build such multi-tenant Azure AD applications that access the Azure AD Graph API, and they needed to create Service Principals out of such a multi-tenant web application. So I thought it was worth spending the additional time and getting it documented!

Microsoft Cognitive Services – Shopping Offer Comparisons with Bing Image Search v5

This week I got the chance to make my first attempts with Cognitive Services and image search – based on a request from my Global ISV partner:) While the services are actually easy to use for a developer, the documentation lags behind and Internet search is highly misleading (since it mostly points to a previous Bing Search API that does not exist anymore). That’s why I decided to blog about it and point people in the right direction.

The Case – Price Comparisons

The use case is simple and has been made available with our latest Bing app for iOS: perform price comparisons of products across multiple shops. But instead of an app, this blog post is about doing this from within any application with the right usage of the new Bing Search APIs in their version 5.0.

Cognitive Services

At the annual //build 2016 conference, Microsoft announced its new Cognitive services. These are services exposed through simple-to-use APIs for doing all sorts of intelligent stuff based on extensive data and machine learning algorithms. Face recognition, Speech Recognition and the likes are some of the more advanced APIs.

What many people don’t know is that the Bing APIs are now also part of Cognitive Services. That’s because a lot of the intelligence behind Cognitive Services is powered by Bing services, including their intelligence and machine-learning components. So don’t let yourself be misled by Internet-search results pointing to any kind of previous services.

That means don’t search the Internet, just navigate to Cognitive services right away and dig into the documentation.

Walking through the Use Case

Ok, let’s walk through the Use Case in a schematic way by looking at the APIs and their responses. That will give you a good view on how it actually works.

  1. You need a Microsoft Account, so if you don’t have one, sign up for one first.
  2. If you have a Microsoft Account, the first thing you need is signing up for the Cognitive Services Preview. For that purpose just navigate to the Cognitive Services Subscriptions Page.
  3. Once you have signed up for the subscriptions, you get application keys for each of the different types of APIs as shown in my screen-shot below. You need to “Show” and “Copy” the key for Bing Search to implement the case I’ve described above:
    Subscription Keys
  4. Once you have copied the subscription key for Bing Search, you can use Bing Image Search to look for images products you want to get price comparisons for.
    • I know, that sounds a bit confusing. But let’s assume you want to look for offerings of a sports watch such as Garmin Forerunner 225 (which I am currently interested in:)).
    • To get to shopping offers through Bing or Bing APIs, you’d do an image search for “Garmin Forerunner 225” and then the “new Bing” and “new Bing APIs” will give you that additional details of shopping offers.
    • Let me show you how that works with the API, but note that the same thing works with Bing Search in the browser for end-users.
  5. For testing the APIs I use the available Test User Interfaces that you can use for learning the APIs without writing code right away.
  6. Now, let’s dig into a few requests. If you want to get offers for e.g. a “Garmin Forerunner” watch, first you need to find images for that watch.
    GET forerunner 225&count=10&offset=0&mkt=en-us&safeSearch=Moderate HTTP/1.1
    Ocp-Apim-Subscription-Key: <<your API Key taken from the previous screen shot above>>
  7. Next you need to examine the results of the request above. The interesting pieces of the JSON response are the imageInsightsToken and the insightsSourcesSummary elements as highlighted below:
    Insights Details Highlighted
  8. These two attributes are used for the following purposes:
    • imageInsightsToken is used in the next subsequent request to get further details about the image kept and managed by Bing.
    • insightsSourcesSummary is something you can use to assess whether it’s worth querying further insights for the image. E.g. in my case I wanted to get as many shop-offers for the Garmin as possible. So if I wrote this in a program, I would take the top-most search results (the first page of JSON elements returned by the previous API request) and pick those with the highest shoppingSourcesCount value as a simple strategy.
  9. So, let’s use the imageInsightsToken to get further details about the image. The following shows the next request we’re executing, with the token passed in as an additional parameter. Also note the use of the modulesRequested parameter, which I use to specify which kinds of additional details I’d love to get from Bing for that image. That said, there are different modules providing additional information beyond the shopping sources.

      q=garmin forerunner 225&count=10&offset=0
    Ocp-Apim-Subscription-Key: <<your API Key taken from the previous screen shot above>>
  10. Finally we get the results we want to have from this request. We see a list of sources which are offering the product for a given price and the basic stock-information which Bing extracted from the web pages of that source for my Garmin Forerunner 225 search.
    Shopping Sources Results
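The requests from steps 6 and 9 can be sketched as follows. Note that the exact endpoint and the `insightstoken`/`modulesRequested` parameter names are my reading of the Bing Image Search v5 documentation and may need adjustment; the subscription key and the insights token are placeholders:

```python
import urllib.parse

# Assumed Bing Image Search v5 endpoint (check the current Cognitive Services docs).
BASE = "https://api.cognitive.microsoft.com/bing/v5.0/images/search"

def image_search_request(query, key, count=10, offset=0):
    """Step 6: plain image search; returns (url, headers)."""
    qs = urllib.parse.urlencode({
        "q": query, "count": count, "offset": offset,
        "mkt": "en-us", "safeSearch": "Moderate",
    })
    return BASE + "?" + qs, {"Ocp-Apim-Subscription-Key": key}

def image_insights_request(query, insights_token, key):
    """Step 9: ask for shopping sources for one image via its insights token."""
    qs = urllib.parse.urlencode({
        "q": query,
        "insightstoken": insights_token,       # from imageInsightsToken in step 7
        "modulesRequested": "ShoppingSources",  # module name assumed from the docs
    })
    return BASE + "?" + qs, {"Ocp-Apim-Subscription-Key": key}

url1, hdrs = image_search_request("garmin forerunner 225", "<<your API key>>")
url2, _ = image_insights_request("garmin forerunner 225",
                                 "<<token from step 7>>", "<<your API key>>")
```

The second response then contains the shoppingSources list with merchant, price and stock information shown in step 10.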

I really find this kind of cool. It was really interesting to work through this with my ISV partner, and given that documentation is not always up to date on those new services, I thought it was worth blogging about:).

Further resources for reading: here are a few helpful links with further details and information. Fortunately, between the time I started writing this post and publishing it, the team has added more documentation on MSDN!

Let me know your thoughts, best via!

NServiceBus, Azure Service Bus and Service Bus for Windows Server – A PoC for a Hybrid Cloud / Portable Solution

NServiceBus is a very popular messaging and workflow framework for .NET developers across the globe. This week a few peers of mine and I worked with one of our global ISV partners to evaluate whether NServiceBus can be used for hybrid-cloud and portable solutions that can be moved seamlessly from on-premises to the public cloud and vice versa.

My task was to evaluate whether NServiceBus can be used with both Microsoft Azure Service Bus in the public cloud and Service Bus 1.1 for Windows Server in the private cloud. It was a very interesting collaboration, and I finally got to write some prototype code for one of our partners again. How cool is that – doing interesting stuff and helping a partner at the same time. That’s how it should be!

Part #1: On-Premises Service Bus 1.1 Environment

The journey and prototyping began with setting up an On-Premises Service Bus 1.1 environment in my home lab. Fortunately there are some good instructions out there, but of course nothing goes without any pitfalls. Here’s a good set of instructions to start with – note that I did setup an entire Azure Pack express setup, which is clearly optional. But it makes things more convenient, especially for presentations, since it provides the nice, good old Azure Management Portal experience for your Service Bus on-premises. Here’s where you should look at how-to setup things:

  • Install the Azure Pack Express Setup
    • This shows, how-to setup a basic Azure Pack environment using Web Platform Installer.
    • I did not run into any problems installing it on a Hyper-V Box on my Home Lab. So it should be fairly straight forward.
    • You can install all on a single machine. What you need is SQL Server (Express is sufficient) pre-installed.
  • Install Service Bus for Windows Server
    • Again this happens via Web Platform Installer. With this I had a little challenge: unfortunately, as of writing this article, the link to the required version of the Windows Fabric in the Web Platform Installer was broken. I’ve uploaded it to my public OneDrive for convenience; you find it here. But I’ve had conversations with the product team and they will fix the broken link, so when you try it, it might already work.
  • Configure Service Bus using the Wizard.
    • After installing you need to configure Service Bus for Windows Server. That happens through a Wizard. It essentially allows you to configure endpoints, ports and certificates used for security purposes for Service Bus 1.1 for Windows Server.
    • The configuration failed on my first attempt because something went wrong with installing the Service Bus patch. Uninstalling and re-installing solved the problem.
  • Configure Service Bus for Windows Server for the Azure Pack Portal
    • That’s the final step to get the Azure Pack management portal experience for Service Bus 1.1 for Windows Server.
    • If you are fine with managing Service Bus through PowerShell, you can skip the entire Azure Pack Express stuff, start with Service Bus 1.1 right away and manage it through PowerShell.

At the end of this journey, which took me about half a day overall starting from scratch and figuring out the little gotchas mentioned above, I had a lab environment to test against. I am a fan of using Royal TS, hence the screen shot with my on-premises Service Bus as web pages embedded in Royal TS:

Part #2: NServiceBus and Azure Service Bus

The second part of the challenge was easy – figure out whether NServiceBus already supports Azure Service Bus, because that would give us a good starting point, wouldn’t it!? Here are the docs for the NServiceBus transport extension for Azure Service Bus, along with its source code.

But the point is that the earliest version supporting Azure seems to be NServiceBus v5.0.0, and the code-base started with a Microsoft.ServiceBus.dll above 3.x. That version of the library is not compatible with Service Bus 1.1 for Windows Server, so I had to dig into the source code and back-port the library. Fortunately, Particular open-sources most of the framework’s bits and pieces on GitHub – as it does for the NServiceBus.AzureServiceBus connector here.

Note that I am referring to the version 6.2 of the implementation, directly, since that works with NServiceBus 5.0.0 which our global ISV partner is using at this point in time. I also tried back-porting the current development branch, but that turned
out to be way more complex and risky. And it was not needed for the partner, either:)

Part #3: Back-Porting to Microsoft.ServiceBus.dll v2.1

So the needed step is to back-port to a Service Bus SDK library that also works with Service Bus 1.1. Service Bus for Windows Server recently received a patch to work with .NET 4.6.1, but it has not received any major updates since its original release.
So it is behind with regards to its APIs compared to Service Bus in Azure.

I’ve done all of the steps below on my GitHub repository in a fork of the original implementation. Note that you should only look at my work in the branch ‘support-6.2’ which is the one that works with NServiceBus 5.0.0. The rest is considered to be experiments as we speak right now:)

Here is the link to my GitHub repo and the fork!!

The first step in doing so was to remove the NuGet package and replace it with one that works with Service Bus for Windows Server. Fortunately, Microsoft released a separate NuGet package that contains the version compatible with Service Bus for Windows Server:

The rest was all about finding where the NServiceBus implementation uses features that are not available in version 2.1 of the Service Bus SDK, and testing it against my Service Bus 1.1 for Windows Server lab setup. I think the best place to see what actually changed is the change-logs on my GitHub repository:

  • Initial back-port with most code changes (click here to open details on GitHub)
    • Update the NuGet Package to “ServiceBus.v1_1” instead of “WindowsAzure.ServiceBus”.
    • Remove EnablePartitioning because that’s not supported on SB 1.1.
    • Use MessagingFactory.CreateFromConnectionString() instead of MessagingFactory.Create() because the latter one does not assume different ports on different EndPoints for different APIs Service Bus is exposing. But that’s typically the case on default-setups of Service Bus for Windows Server (see my first ScreenShot).
    • I also added some regular expressions to detect whether a Service Bus connection string is for on-premises or for the public cloud, keeping most of the default behaviors when connecting against the public cloud intact. See the code snippet below. It might not be complete or perfect, but it fulfills the basic needs.
  • Added some samples (click here to open details on GitHub)
    • This contains a basic Sender and Receiver implementation that uses the transport.
    • You need to set the Environment Variable AzureServiceBus.ConnectionString in a command prompt and start Visual Studio from that one to successfully execute. Btw. that’s also needed if you need to run the tests. In that case you
      also need to set AzureServiceBus.ConnectionString.Fallback with an alternate Service Bus connection string.

Here is the little code-snippet that checks if the code is used for on-premises Service Bus services or for Azure Service Bus instances:

class CreatesMessagingFactories : ICreateMessagingFactories
{
    #region mszcool 2016-04-01

    // mszcool - Added Connection String parsing to detect whether a public or private cloud Service Bus is addressed!
    public static readonly string Sample = "Endpoint=sb://[namespace name];SharedAccessKeyName=[shared access key name];SharedAccessKey=[shared access key]";
    private static readonly string Pattern = // ... public-cloud pattern, see the repository for the full expression ...

    public static readonly string OnPremSample = "Endpoint=[namespace name];StsEndpoint=[sts endpoint address];RuntimePort=[port];ManagementPort=[port];SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=[shared access key]";
    private static readonly string OnPremPattern =
        "^Endpoint=sb\\://(?<serverName>[A-Za-z][A-Za-z0-9\\-\\.]+)/(?<namespaceName>[A-Za-z][A-Za-z0-9]{4,48}[A-Za-z0-9])/?;" +
        "StsEndPoint=(?<stsEndpoint>https\\://[A-Za-z][A-Za-z0-9\\-\\.]+\\:[0-9]{2,5}/[A-Za-z][A-Za-z0-9]+)/?;" +
        "RuntimePort=[0-9]{2,5};ManagementPort=[0-9]{2,5};" +
        "SharedAccessKeyName=(?<sharedAccessPolicyName>[\\w\\W]+);" +
        // ... SharedAccessKey part of the pattern, see the repository for the full expression ...

    private bool DetectPrivateCloudConnectionString(string connectionString)
    {
        // Note: Regex.IsMatch expects the input first and the pattern second.
        if (Regex.IsMatch(connectionString, OnPremPattern, RegexOptions.IgnoreCase))
            return true;
        else if (Regex.IsMatch(connectionString, Pattern, RegexOptions.IgnoreCase))
            return false;
        else
        {
            throw new ArgumentException("Invalid Azure Service Bus connection string configured. " +
                                        $"Valid examples: {Environment.NewLine}" +
                                        $"public cloud: {Sample} {Environment.NewLine}" +
                                        $"private cloud (SB 1.1): {OnPremSample}");
        }
    }

    #endregion

    ICreateNamespaceManagers createNamespaceManagers;
    // ... rest of the implementation ...
}
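To make the detection idea tangible outside of the C# transport code, here is a rough Python sketch of the same check. The patterns are heavily simplified assumptions for illustration, not the exact expressions from the repository – the real ones are much stricter:

```python
import re

# NOTE: simplified, assumed patterns for illustration only.
# The key difference: only Service Bus for Windows Server connection
# strings carry an StsEndpoint part.
ONPREM_MARKER = r";StsEndpoint=https://"
CLOUD_PATTERN = r"^Endpoint=sb://[^;]+;"

def is_private_cloud(connection_string: str) -> bool:
    """True for Service Bus 1.1 for Windows Server, False for Azure Service Bus."""
    if re.search(ONPREM_MARKER, connection_string, re.IGNORECASE):
        return True
    if re.match(CLOUD_PATTERN, connection_string, re.IGNORECASE):
        return False
    raise ValueError("Invalid Service Bus connection string")
```

The essential trick is the same as in the C# code above: the presence of an STS endpoint identifies an on-premises connection string.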

The part where I needed this detection most was deciding how to instantiate the MessagingFactory. This is the relevant piece of code – note that MessagingFactory.Create() with the NamespaceManager address passed in only works in the public cloud, not with Service Bus 1.1 on Windows Server:

class CreatesMessagingFactories : ICreateMessagingFactories
{
    // ... earlier parts of the class, including 'DetectPrivateCloudConnectionString' ...

    ICreateNamespaceManagers createNamespaceManagers;

    public CreatesMessagingFactories(ICreateNamespaceManagers createNamespaceManagers)
    {
        this.createNamespaceManagers = createNamespaceManagers;
    }

    public MessagingFactory Create(Address address)
    {
        var potentialConnectionString = address.Machine;
        var namespaceManager = createNamespaceManagers.Create(potentialConnectionString);

        // mszcool - Updated to detect if Service Bus 1.1 for Windows Server is used
        if (DetectPrivateCloudConnectionString(potentialConnectionString))
        {
            // mszcool - Need to use this approach because different ports for control and transport endpoints are used
            return MessagingFactory.CreateFromConnectionString(potentialConnectionString);
        }
        else
        {
            var settings = new MessagingFactorySettings
            {
                TokenProvider = namespaceManager.Settings.TokenProvider,
                NetMessagingTransportSettings =
                {
                    BatchFlushInterval = TimeSpan.FromSeconds(0.1)
                }
            };
            return MessagingFactory.Create(namespaceManager.Address, settings);
        }
    }
}

Finally, with those fixes incorporated, I was able to get almost everything working, with all except two tests passing for now. For a proof-of-concept that is a success, since it proves that the partner could achieve what they need to achieve.

Part #4: See it in Action

Now comes the cool part – testing it out and seeing it in action. The samples I've added to the git-repository are simple messaging examples which I've modified from the NServiceBus samples repository as well. Note that I've taken the non-durable MSMQ sample as the basis, since the starting-point for the partner was MSMQ and I wanted something super-simple to start with. That is just how far I got; eventually I'll try other samples (but no promise at this time:)). Below is the code-snippet of the sender – the receiver looks nearly identical and you can look it up in my repository on GitHub:

static void Main()
{
    string connStr = System.Environment.GetEnvironmentVariable("AzureServiceBus.ConnectionString");

    Console.Title = "Samples.MessageDurability.Sender";

    #region non-transactional
    BusConfiguration busConfiguration = new BusConfiguration();
    // ... configure the bus to use the back-ported transport with 'connStr' (see the repository for the full sample) ...
    #endregion

    using (IBus bus = Bus.Create(busConfiguration).Start())
    {
        bus.Send("Samples.MessageDurability.Receiver", new MyMessage());
        Console.WriteLine("Press any key to exit");
        Console.ReadKey();
    }
}
Here’s the code actually in action and working. You see what it produced on my on-premises Service Bus as well as the log output from the console windows. Note that the message handler part of the receiver outputs that it received a message.

One thing I played around with was having two receivers, that’s why you see two output-lines for one single message in my receiver. To clarify, here’s the code of MyHandler.cs from the Receiver project which outputs those lines:

public class MyHandler : IHandleMessages<MyMessage>
{
    static ILog logger = LogManager.GetLogger<MyHandler>();

    public void Handle(MyMessage message)
    {
        logger.Info("Hello from MyHandler");
    }
}

public class MyHandler2 : IHandleMessages<MyMessage>
{
    static ILog logger = LogManager.GetLogger<MyHandler2>();

    public void Handle(MyMessage message)
    {
        logger.Info("Hello from MyHandler2!");
    }
}

Final Words

I think this Proof-Of-Concept we’ve built alongside with other aspects we covered for that Global Software Vendor partner in UK demonstrates several aspects:

  • That it is possible to have a solution that works (nearly) seamlessly on-premises and in the public cloud with largely the same code base.
    • The situation should DRAMATICALLY improve once Microsoft has released Azure Stack, which is the successor of what I've used here (the Azure Pack).
    • We can expect that Azure Stack will deliver a much more up-to-date and consistent experience with Azure in the public cloud once it is fully available, incl. Service Bus.
  • That NServiceBus, one of the most important 3rd-party middleware frameworks, plays very well together with Azure and that it can also be used with Service Bus on-premises, with some caveats (like back-porting the transport-library).
    • An alternative, which I also tried to demonstrate with the simple sample, would be to use the MSMQ NServiceBus transport on-premises and the AzureServiceBus transport from NServiceBus for public cloud deployments. As long as only features
      supported on both sides are used, that might be the preferred way, since with it you can fully rely on code delivered for you by NServiceBus without any changes.

Note that my attempts are meant to be a Proof-of-Concept, only. You can look at them, try them and even apply them for your solutions fully at your own risk:)

I think it was a great experience working with the team in UK on this part of a larger Proof-of-Concept (which also included e.g. Azure Service Fabric for software that
needs to be portable between on-premises and the public cloud but wants to make use of a true Platform-as-a-Service foundation).

I hope you enjoyed reading this and found it interesting and useful.

How-to remove secrets from the entire history of a Git-repository!!

I really do like Git a lot and even for my private projects I use it as the default. But some aspects of it are quite tricky. A well-known practice is that you should never check in secrets or things you don't want to share with others into a Git-repository. That is especially important with public repositories hosted on e.g. GitHub.

Well, saying you should not and actually not forgetting about it are two different things. Sometimes it just happens. And even if you are careful with secrets, it can also be other stuff you checked in but didn’t want to share with others. So it happened to me when I wrote the last blog-post about automating my developer machine setup and published my Machine Setup Script.

The secrets in the history on GitHub!?

As explained in my previous blog-post, in the setup automation script I use for setting up a fresh developer machine, I also clone a handful of repositories which are of relevance to me and/or to which I contributed some code. The majority of those repositories are public on GitHub. But some of them are from real-world projects with our customers and partners which are hosted in a private VSTS environment we run for our global team. I accidentally published that list of git clone commands as well.

No passwords, no secrets – but the repository names sometimes contained the names of the partners/customers and some of that work is not done or public, yet. So even though these were not secrets, I am not supposed to share them, yet.

Unfortunately, I realized that only after a few check-ins. So the entire history in my public GitHub repository contained those repository-names from an internal VSTS environment which I didn’t want to share. Damn… the post is out, the link points to
the repository… what to do?

How-to remove secrets/content from the entire history with Git?

Of course the “easy” way for this specific case would have been to delete the repository and re-create a new one with the fixed file published. That works for cases where the history is not really important and where you have a truly small repository. In other words, it works for samples and the likes. But I had even received a pull-request for that file which I didn't want to lose, either. By all means, deleting and re-creating is not something that should be considered a solution for this problem.

So, I did a little Internet-search and came across something that can save many GitHub repositories from otherwise mandatory deletion to remove things from the entire history, I guess:

The BFG Repo Cleaner

This is an awesome tool if you ran into the problem I’ve had. Let’s say you have published something into a Git-repository across multiple commits and pushes that you want to get rid of from the entire history. All you need to do are the following steps:

  1. Download the BFG Repo Cleaner into a local directory of your choice.
    1. The app is written in Scala
    2. It requires a Java-runtime on your machine.
    3. It is distributed as a JAR-package that contains all dependencies.
  2. Open up a command prompt and switch to a temporary directory.
    1. I did this in a temp-directory because it requires a new git clone --mirror of your repository which is a 1:1 mirror of the remote repository.
    2. After that you need to push that mirror back to the remote repository again. And then you can delete the mirror and return to your ordinary repository clone.
  3. Perform a clone of your repository with the option --mirror (I am using my devmachinesetup-repo here since I had to do it with this one, so just replace devmachinesetup with any of your repository-names in the commands below).
    1. git clone --mirror
    2. This clones a mirror of your remote repository with the entire history into a sub-folder of the current folder called devmachinesetup.git.
  4. Stay in the folder that contains the devmachinesetup.git folder with the mirrored repository in it.
  5. Create a text file that contains the text you want to purge from the history of all files in your git repository.
    1. Each line contains a string (incl. spaces, special characters etc.) that you want to remove. In my case these strings were the complete git clone <<repositoryname>> commands which I wanted to remove from the history
      of commits of the script in the repository. Each line in this text-file contained one of those entire commands.
    2. BFG searches every file in your git-mirror folder and replaces each instance of each line from the text-file with the text *** REMOVED *** in the target files of the repository.
    3. A little sample excerpt of that text file's content shows how simple it is – in my case it was just one git clone command per line which I wanted to remove from the history:
      git clone
      git clone
      git clone thirdrepo
  6. Execute the BFG command. Note that BFG is based on the Java-runtime, so either add the folder with BFG JAR-package to your CLASSPATH environment variable or specify the full path to the JAR-package when executing Java. This looks similar to:
    1. java -jar C:\Temp\bfg-1.12.8.jar --replace-text myunwantedtext.txt devmachinesetup.git
    2. Note that when you download the BFG JAR package, the version in the name of the .jar-file might be different.
    3. The file myunwantedtext.txt contained the full git clone commands to be removed.
  7. Now BFG has replaced the unwanted content in the local clone. Last-but-not-least you need to push that one back to the remote repository.
    1. Again, in your command prompt window remain in the directory which contains the devmachinesetup.git sub-directory with your git-mirror.
    2. Execute git push to push the mirror back.
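In essence, what BFG does with the --replace-text option can be pictured with a tiny sketch (a simplified illustration, not BFG's actual implementation – BFG rewrites every blob in every commit, not just the latest files; the repository URL below is made up):

```python
def bfg_replace(file_text: str, unwanted: list[str]) -> str:
    """Replace every occurrence of each unwanted string with the
    marker text, mimicking what BFG does to each file revision."""
    for secret in unwanted:
        file_text = file_text.replace(secret, "*** REMOVED ***")
    return file_text

# Example with a made-up internal repository URL:
blob = "# setup\ngit clone https://vsts.example.com/secret-repo\necho done\n"
cleaned = bfg_replace(blob, ["git clone https://vsts.example.com/secret-repo"])
```

Because BFG applies this to every revision of every file, the unwanted strings disappear from the entire history, not just from the tip of the branch.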

That's it, you're done. After I executed the steps above on my repository, I checked several commits online to verify it worked. In your case, the result should now look similar to what I've achieved once you've completed the steps above:

Results of Removing unwanted content

Final words

Removing unwanted content from the entire history of a Git-repository is needed sometimes – whether it's about accidental commits of secrets or other sensitive content, or e.g. large files you want to clean out of your repository.

The BFG Repo Cleaner is a handy tool for such cases. It can indeed be used for cases such as the one I described. But it also contains options for other cases such as removing large files from the history of your repository which are not needed there, anymore.

BFG is cool and handy, but if you need more advanced scenarios, you might need to fall back to the way more powerful, yet much more complex git-filter-branch tool (here). I guess that for 80% of the cases BFG is good enough, and given that it is super-easy to use, I'd give it a chance first before digging through the docs of git-filter-branch.

Kudos to the folks who built BFG… great job and thank you very much for saving my day (I will donate;))…

My “Developer Machine Setup Automation Script” / Chocolatey & PowerShell published

I know, I wanted to blog more often… but then a happy event came in between – the birth of my little baby daughter, Reinhard's 2-year-younger sister Linda. Now I am sitting here at the registry office for her documents. What could be better than writing a blog-post while waiting for the next steps!?

Since my brand-new Surface Book arrived, yesterday, one thing I have to do is setting it up for my daily use. That means installing all the handy tools that I use on a daily basis such as PDF readers, KeePass and the likes. It also means setting up the
many tools and development environments I use (some of them more often, others less often).

Well, now the cool thing is that I’ve automated most of these setup procedures using Chocolatey and PowerShell. And given the situation I am
right now in (new daughter, waiting at the registry office, Surface Book arrived) I thought why not share my script with the rest of the world and explain it a bit…

Chocolatey and PowerShell Script

A while ago, when one of my peers, Kristofer Liljeblad, pointed me to Chocolatey, I started building out a fairly comprehensive script that automates most of the steps for setting up my machines. I've now published that script on my GitHub repository:

mszCool’s Dev Machine Setup Script

Since I don't want to install all tools on every machine, I added some switches to the script that allow me to install only a group of tools. E.g. when I set up a machine for one of my relatives, which typically is not used for development, I only install a bunch of end-user tools such as a PDF reader. On my developer machines I typically run all of the switches.

But in that case it turned out, that sometimes PowerShell needs to be restarted after certain install procedures. Therefore I added a few other switches to the script so that I can re-start PowerShell in between of those steps.

Finally, I still do install certain parts manually. E.g. for SQL Server Management Studio on Windows 10 I need to add the .NET Framework 3.5 Feature Set which I have not coded into PowerShell, yet (I know it does work).

All of this resulted into a workflow of several phases for which most of the times the script just does its work but sometimes I need to intervene and re-start PowerShell or do some manual installation steps. I know, there might be more elegant solutions
to this. But remember, it’s just for setting up my developer machines, so it’s just quick and pragmatic.

Using the Script

To use the script, as I said there are a few steps that need to be executed manually upfront.

  • Install the .NET Framework 3.5 Feature through the Windows Settings Panel
  • Start an elevated PowerShell window as Administrator
  • Set the PowerShell Execution Policy to Unrestricted by executing Set-ExecutionPolicy Unrestricted

After that you can start the script the first time. For that purpose I typically download it into a temp-directory on my local machine since it downloads things (which it typically deletes afterwards, but not always:)). From there, just execute the following
commands in the previously opened PowerShell Window:

.\Install-WindowsMachine.ps1 -installChoco -tools -ittools -dev -data

This installs the following items:

  • -installChoco installs Chocolatey as per their homepage.
  • -tools installs tools I commonly use such as Adobe Reader or KeePass.
  • -ittools installs tools I categorized as OS tools such as SysInternals.
  • -dev installs a set of independent tools I use for development such as Fiddler.
  • -data installs tools to manage databases, but not the DB-engines themselves. Examples are SQL Server Management Studio or MySQL Workbench.

Since this phase installs many tools which are added to the environment path, it is now necessary to re-start PowerShell before moving on.

Installing IDEs

The next step is installing Integrated Development Environments such as Visual Studio. If you are fine with the free Community Edition of Visual Studio, the script includes a switch for that.

As my Eclipse-based IDE I decided to stick with Spring Tool Suite. The main reasons are, one, that Spring is one of the most popular Java frameworks and, two, that it comes with a whole lot of add-ins (e.g. Maven and many more) pre-packaged. That makes it convenient! Sometimes for Java I also play around with IntelliJ IDEA, so I included that as well.

For all the rest I am using Visual Studio Code and Sublime, although I haven't included VS Code in the script, yet (didn't have the time to update:)), and still install it manually.

If you don't like those choices, now is the time to manually install your favorite versions of Visual Studio, Eclipse and co. If you like them, the command below does the job and installs all the aforementioned environments – for Visual Studio I support both versions, 2013 and 2015:

.\Install-WindowsMachine.ps1 -installVs -vsVersion 2015 -installOtherIDE

After that phase it's necessary to re-start PowerShell again, because the IDEs, and especially Visual Studio, add a few things to the environment path.

Visual Studio Extensions, Web PI and Database Engines

Yes, I also got bored of installing Visual Studio extensions over and over again. So what I did is write a little PowerShell function which downloads Visual Studio extensions and installs them using the Visual Studio Extension Installer. There are also a few more tools which I typically need in Visual Studio that are available in the Web Platform Installer (WebPI) – things such as the Azure SDKs and tools or Azure PowerShell and the likes.

Simply call the script with the following switches after you've installed Visual Studio (note that -dev2 and -vsext are not dependent on each other):

.\Install-WindowsMachine.ps1 -dev2 -vsext -vsVersion 2015

To install database engines such as SQL Server Express or Cassandra (based on Datastax's distribution), I've included an additional switch:

.\Install-WindowsMachine.ps1 -dataSrv

Cloning Repositories

Finally, on every developer machine I need to clone many repositories from GitHub or private TFS and Bitbucket environments. I've also added that to the script, although I removed the ones from private environments for this publication. For this I've added the following switch, which can be combined with ANY of the above AFTER -dev has run (because that installs git):

.\Install-WindowsMachine.ps1 -cloneRepos

What’s missing & Caveats

This script really saves me a lot of time whenever setting up machines on a regular basis (i.e. development VMs for trying out something or re-setting up my machines because of Windows Insider Build stuff etc.).

A few things are missing which I typically install. I plan to add those pieces to the script at some point in time in the future as soon as I have some spare time. But for now I do install them manually:

For those I need to update the script to be able to download and run MSIs in silent mode. Also, Git-Posh requires cloning a git-repo and executing a few statements. All of that is simple, but I haven't had the time to do it yet. I do accept pull-requests if you find the time to work on it and think it's a useful extension to this work.

Final Words

This script is a simple and pragmatic thing. It installs most of the things in an automated way, and typically with it and a good Internet connection I have a fully working developer machine after a few hours without really doing a lot. The things I install manually (e.g. Visual Studio 2015 Enterprise instead of Community) also run a bit longer. So they don't distract me while I do some other work on one of my other machines. And that's the point – I can set up new images nearly end-to-end while working in parallel, and it won't hold me off from other work (as opposed to continuously clicking through some installer wizards:)).

I hope you like the work and, as mentioned, I am happy about any feedback and pull-requests for it. Just go, look, download and use it from my GitHub repository.

Changes and reflections…

It's been a while since I wrote my last blog-post. When I re-launched my blog on Azure Web Sites, I had just moved into a new team and a newly formed organization inside of Microsoft Corp.

That organization – DX TED (Developer Experiences, Technical Evangelism and Development) – was meant to bring technical depth back to Microsoft's evangelism unit. Indeed, we were all moved to engineering roles (Program Managers, Software Development Engineers). At least speaking for DX Corp., I have the feeling this mission is progressing in huge steps. I've spent more time in Visual Studio, and even in other IDEs such as Eclipse, in the past 3 years compared to the time before.

When TED was created I thought it was a good opportunity to re-start blogging after I had been absent as a blogger for a while. And given we were supposed to be the spearhead for Azure as a platform with Global Software Vendors, I thought 'how could it be better than running it on Azure' … which I did and which is still the case. You're reading a page served from a WordPress blog running on Azure Web Apps with a ClearDB MySQL database.

why so quiet for so long?

I think the strategy worked well in the first 1.5 years after joining TED. But then I reduced the frequency of blogging again. It was mainly due to the high workload with our partners, combined with the principles I had applied to my new blog. These were:

  • Only publish after major achievements with a partner.
  • Embrace a white-paper style of writing.
  • Back the post with some sort of a “reference implementation” as source code.
  • Provide that code with a script that automates the setup as far as possible.
  • Don’t talk about anything unfinished.

Based on those five principles, writing a blog-post always resulted in a larger piece of work. In addition, I did not blog or express anything about the “journey” to achieve a goal – which is often more interesting than the reached goal itself. And my posts usually became very long.

In addition, with those principles posts were generally around how-to get things done with less focus on why we did things, how we got there and what my personal opinions are about those learnings and achievements.

a career-change…

Then last summer a change came along – I moved from working with medium-sized Global Software Vendors, which we generally call “enablers”, to the Global Commercial Independent Software Vendors Group (GISV Commercial) inside of TED.

Organizationally this was a side-step. But a really good one, since I consider this move a huge next challenge! The reason is that I have the pleasure of working on a dedicated basis with one of the largest software vendors of the world – SAP! That's a huge ground for learning a lot of new things at a totally different scale.

SAP might sound very traditional for a personality like me – but actually, it is not. That company is changing rapidly and has already become something different compared to the time right after I left my former employer, who was an SAP partner. They are anything but old-school and traditional. They do a lot with new technologies, including modern web technologies, app technologies (Cordova), cloud etc. … it is just so much that I am confronted with these days. Given that, I was careful with broadly spreading that message too early. But after spending some time in the new role I feel more settled and therefore more comfortable with spreading the word.

…that made me re-think blogging as well

Due to the work with SAP I also thought about changing my attitude towards blogging. With the principles I've established above, I realized that I would definitely not write many blog-posts, either. There's just so much going on with SAP… The job means a lot of coordination between engineering teams. It means facilitating and guiding engineering teams on both sides, SAP and Microsoft. But it also means learning a lot from them and driving decisions based on those learnings.

Finally, it also means digging into a lot of new technologies which I haven't touched before, or at least not for a long time. That includes SAP technologies such as HANA, HANA Cloud Platform, SAP Gateway and Gateway for Microsoft, or SAP Fiori.

But it also means broadening my technical capabilities on the Microsoft-side into Office 365 Add-In and App development, Windows 10 UWP app development, development of Cross-Platform Apps with Visual Studio Tools for Cordova and much more. Not to speak about all the Azure stuff which is the most important area for us. Sure, I need to specialize in a few things (identity is still one of my favorites which I can make use of a lot in this alliance), but at least I need to get to a 200-level across all of those technologies above.

I thought about this, and for me it also meant I needed to change my approach towards blogging. While the principles I established are good, they're not good enough on their own. Therefore, I'll extend them with the following principles to increase my cadence of blogging and provide more value through my blog:

  • Share more personal thoughts about both, Microsoft and SAP tech.
  • Develop ideas and get feedback through this blog.
  • Evolve and grow technical information through multiple posts instead of one long post.

To be honest – I don't know whether and how it will work out. But I'll definitely try it, increase my cadence of blogging with those new principles, and hence share much more (hopefully) valuable information for you as far as possible…

Detecting if a Virtual Machine Runs in Azure – Part 2 – Updates for Linux VMs

A few months ago I wrote a blog-post about how to detect whether a virtual machine runs in Azure or not. This is vital for many independent software vendors who are planning to offer their own software through the Azure Marketplace for Virtual Machines.

The main detection strategy (Windows, Ubuntu)

In that post I explained a few tricks for detecting whether the VM runs in Azure or not, for both Windows and Linux. Still, the most reliable check known as of today is to check whether the DHCP option “unknown-245” is set in the DHCP-lease options of a virtual machine.

  • Ubuntu Linux: I've posted a bash script in my previous blog. I generally stated that this works for Linux all up, without considering that other Linux distributions might have different configuration files for storing DHCP lease details. Hence the following script works on Ubuntu-based flavors, only:
      if grep -q unknown-245 /var/lib/dhcp/dhclient.eth0.leases; then
          echo "Running in an Azure VM"
      fi
Detecting if a CentOS VM runs on Azure

My peer and colleague Arsen Vladimirskiy pointed out that on CentOS the file for DHCP leases is stored on a different location. Hence the detection strategy for the DHCP-lease option I’ve explained in my original post does not work in CentOS-based virtual machines.

For CentOS based virtual machines, the DHCP lease options are indeed stored under the path /var/lib/dhclient/dhclient.leases (or, in the case of multiple network interfaces, dhclient-eth0.leases, whereas the part eth0 needs to be replaced with the network interface device you're checking against).

Therefore in a default configuration with just one ethernet adapter the script needs to be updated as follows to work inside of a CentOS virtual machine:

# manually start dhclient first (seems to be a workaround)
sudo dhclient

# then check against the lease files
if grep -q unknown-245 /var/lib/dhclient/dhclient.leases; then
   echo "Running in an Azure VM"
fi

Note: There was one weird issue I ran into when trying the approach above; hence the script starts with launching dhclient. On a freshly deployed CentOS 7 VM in Azure from the marketplace stock image, dhclient is not started by default. Therefore files such as dhclient.leases or dhclient-*.leases do not exist by default under /var/lib/dhclient/.

Only after manually executing the command sudo dhclient to start the DHCP client were the files created successfully, and the check works. Now someone could think that this might be related to static IP addresses – but in Azure that's not correct, since IP addresses are always assigned by the Azure DHCP server. If you want static IPs, you configure them through the Azure Portal or Management APIs so that the Azure DHCP server always assigns the same, static IP address to the VM in the private, virtual network. So that cannot be the reason.

A more Complete Story for detecting DHCP unknown-245 in Linux

Well, the distributions above are very common ones, but they are by far not all of the ones supported on Azure. If you really want to be on the safe side across multiple Linux distributions, the source code for the Azure Linux Agent contains all the secrets currently valid. A few hints in the Python-based source code:

  • Line 99-100 do show the directories you should consider for your detection strategy
      VarLibDhcpDirectories = ["/var/lib/dhclient", "/var/lib/dhcpcd", "/var/lib/dhcp"]
      EtcDhcpClientConfFiles = ["/etc/dhcp/dhclient.conf", "/etc/dhcp3/dhclient.conf"]
  • Further down in the code starting at line 5107 there is a section that makes use of option 245 as well:
      # ... other code before
      elif option == 3 or option == 245:
          # ...
          # ...
      # ... more code goes here

This code was updated to version 2.0.15 just 24 days before writing/publishing this post. So it should still be safe to leverage option 245 for your detection strategy. As soon as there's something better available, I'll definitely post another update to this blog-post!
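Putting those hints together, a detection sketch that scans all of the lease directories known to the Azure Linux Agent could look like this (a simplified illustration of the approach, not code from the agent itself):

```python
import glob
import os

# Lease-file directories used across distributions, as listed in the
# Azure Linux Agent source (VarLibDhcpDirectories above).
LEASE_DIRS = ["/var/lib/dhclient", "/var/lib/dhcpcd", "/var/lib/dhcp"]

def looks_like_azure(lease_dirs=LEASE_DIRS):
    """Return True if any DHCP lease file in the known
    directories contains option unknown-245."""
    for directory in lease_dirs:
        # Matches dhclient.leases, dhclient-eth0.leases etc.
        for lease_file in glob.glob(os.path.join(directory, "*lease*")):
            try:
                with open(lease_file) as f:
                    if "unknown-245" in f.read():
                        return True
            except OSError:
                continue  # unreadable file - skip it
    return False
```

Remember the CentOS caveat from above: if dhclient has never run, the lease files may simply not exist yet, so a False result does not prove you are outside Azure.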

Final Disclaimer

The approaches outlined above worked in my tests on both Ubuntu and CentOS 7 based VMs in Resource Manager based deployments (using the new ARM-template approach introduced by the Azure teams earlier this year) at the time of publishing this post (2015-09). When I published the original post I had tested them with classic Service Management based VMs, of course.

Therefore, and since no better way has been introduced at the time of publishing this post, the options outlined in this and my original post are still valid and probably the best you can get so far for detecting whether your VM runs inside Microsoft Azure or not…

If you find better options, don’t hesitate to contact me via my Twitter feed.

Azure VMs – SQL Server AlwaysOn Setup across multiple Data Centers fully automated (Classic Service Management)

Last December I started working with two of my peers, Max Knor and Igor Pagliai, and a partner in Madrid on implementing a cross-data-center SQL Server AlwaysOn availability group setup for a financial services solution which is supposed to be provided to thousands of banks across the world, running in Azure. Igor posted about our setup experience, which we partially automated with Azure PowerShell and Windows PowerShell – see here.

At the moment the partner’s software still requires SQL Server in VMs as opposed to Azure SQL Database, because of some legacy functionality they use from full SQL Server – hence this decision.

One of the bold goals was to fully enable the partner and their customers to embrace DevOps and continuous delivery across multiple environments. For this purpose we wanted to FULLY AUTOMATE the setup of their application together with an entire cross-data-center SQL Server AlwaysOn environment as outlined in the following picture:

In December we did a one-week hackfest to start these efforts. We successfully did setup the environment, but partially automated, only. Over the past weeks we went through the final effort to fully automate the process. I’ve published the result on my github repository here:

Deployment Scripts Sample Published on my GitHub Repository

Note: Not Azure Resource Groups, yet

Since Azure Resource Manager v2, which would allow us to dramatically improve the performance and reduce the complexity of the basic Azure VM environment setup, was still in preview, we were forced to use traditional Azure Service Management.

But about 50%-60% of the work we did is re-usable given the way we built up the scripts. E.g. all the database setup and custom service account setup, which is primarily built on top of Azure Custom Script VM Extensions, can be re-used after the basic VM setup is completed. We are planning to create a next version of the scripts that does the fundamental setup using Azure Resource Groups, because we clearly see the advantages.

Basic Architecture of the Scripts

Essentially the scripts are structured into the following main parts, which you would need to touch if you want to leverage them or understand them for learning purposes:

  • Prep-ProvisionMachine.ps1 (prepare deployment machine)
    A basic script you should execute on a machine before starting first automated deployments. It installs certificates for encrypting passwords used as parameters to Custom Script VM Extensions as well as copying the basic PowerShell modules into the local PowerShell module directories so they can be found.
  • Main-ProvisionConfig.psd1 (primary configuration)
    A nice little trick by Max to provide at least some sort of declarative configuration was to build a separate script file that creates an object tree with all the configuration data typically used for building up the cluster. It contains cluster configuration settings, node configuration settings and default subscription selection data.
  • Main-ProvisionCrossRegionAlwaysOn.ps1 (main script for automation)
    This is the main deployment script. It performs all the actions to setup the entire cross-region cluster including the following setups:
    • Setup your subscription if requested
    • Setup storage accounts if they do not exist yet
    • Upload scripts required for setup inside of the VMs to storage
    • Setup cloud services if requested
    • Create Virtual Networks in both regions (Primary/Secondary)
    • Connect the Virtual Networks by creating VPN Gateways
    • Set up the primary AD forest VM and the forest inside of the VM
    • Setup secondary AD DC VMs including installing AD
    • Provision SQL Server VMs
    • Setup the Internal Load Balancer for the AlwaysOn Listener
    • Configure all SQL VMs to have AlwaysOn enabled
    • Configure the Primary AlwaysOn node with the initial database setup
    • Join secondary AlwaysOn nodes and restore databases for sync
    • Configure a file-share based witness in the cluster
  • VmSetupScripts Folder
    This is essentially a folder with a series of PowerShell scripts that perform individual installation/configuration steps inside of the Virtual Machines. They are downloaded into the Virtual Machines and executed through Custom Script VM Extensions, as well.

Executing the Script and Looking at the Results

Before executing the main command, make sure to execute .\Prep-ProvisionMachine.ps1 to set up certificates, or import the default certificate which I provide as part of the sample. If you plan to seriously use those scripts, please create your own certificate. Prep-ProvisionMachine.ps1 provides you with that capability, assuming you have makecert.exe installed somewhere on your machine (please check Util-CertsPasswords for the paths in which I look for makecert.exe).

# To install a new certificate (overwriting existing ones with same Subject Names)
.\Prep-ProvisionMachine.ps1 -overwriteExistingCerts

# Or to install the sample certificate I deliver as part of the sample:
.\Prep-ProvisionMachine.ps1 -importDefaultCertificate

Then everything should be fine to execute the main script. If you don’t specify the certificate-related parameters, the script assumes you are using the sample default certificate I include in the repository to encrypt secrets pushed into VM Custom Script Extensions.

# Enter the Domain Admin Credentials
$domainCreds = Get-Credential

# Perform the main provisioning

.\Main-ProvisionCrossRegionAlwaysOn.ps1 -SetupNetwork -SetupADDCForest -SetupSecondaryADDCs -SetupSQLVMs -SetupSQLAG -UploadSetupScripts -ServiceName "mszsqlagustest" -StorageAccountNamePrimaryRegion "mszsqlagusprim" -StorageAccountNameSecondaryRegion "mszsqlagussec" -RegionPrimary "East US" -RegionSecondary "East US 2" -DomainAdminCreds $domainCreds -DomainName "msztest.local" -DomainNameShort "msztest" -Verbose

After executing a main script command such as the one above, you will get 5 VMs in the primary region and 2 VMs in the secondary region acting as a manual failover target.

The following image shows several aspects in action, such as the failover cluster resources which are part of the AlwaysOn availability group, as well as SQL Server Management Studio accessing the AlwaysOn Availability Group Listener and the SQL nodes directly. Click on the image to enlarge it and see all details.

Please note that the failover in the secondary region needs to happen MANUALLY by executing either a planned manual failover or a forced manual failover as documented on MSDN. Failover in the primary region (from the first to the second SQL Server) is configured to happen automatically.

In addition, on Azure this means taking the IP cluster resource for the secondary region online, which by default is offline in the cluster setup, as you can see in the previous image.

Customizing the Parts you Should Customize

As you can see in the image above, the script creates sample databases which it sets up for the AlwaysOn Availability Group to be synchronized across the two nodes in the primary region. This happens based on *.sql scripts you can add to your configuration. To customize the SQL scripts and the databases affected, you need to perform the following steps:

  • Create *.sql scripts with T-SQL code that creates the databases you want to create as part of your AlwaysOn Availability Group.
  • Copy the *.sql files into the VmSetupScripts directory BEFORE starting the execution of the main script. This ensures they are included in the package that gets pushed to the SQL Server VMs.
  • Open up the main configuration file and customize the database list based on the databases created with your SQL scripts as well as the list of SQL Scripts that should be pushed into osql.exe/sqlcmd.exe as part of the setup process for creating the databases.
  • Also don’t forget to customize the subscription name if you don’t plan to override it through the script parameters (as happens in the example above).

The following image shows those configuration settings highlighted (in our newly released Visual Studio Code editor which also has basic support for PowerShell):

Fundamental Challenges

The main script can primarily be seen as a PowerShell workflow (we didn’t have the time to really implement it as a Workflow, but that would be a logical next step after applying Azure Resource Groups).

It creates one set of Azure VMs after another and joins them to the virtual networks it has created before. It then executes scripts locally on the Virtual Machines, using Azure VM Custom Script Extensions, to perform the setup. Although Custom Script Extensions are cool, you face two main challenges with them, for which the overall package I published provides re-usable solutions:

  • Passing “Secrets” as Parameters to VM Custom Script Extensions such as passwords or storage account keys in a more secure way as opposed to clear-text.
  • Running Scripts under a Domain User Account as part of Custom Script Extensions that require full process level access to the target VMs and Domains (which means PowerShell Remoting does not work in most cases even with CredSSP enabled … such as for Cluster setups).

For these two purposes the overall script package ships with some additional PowerShell Modules I have written, e.g. based on a blog-post from my colleague Haishi Bai here.

Running Azure VM Custom Script Extensions under a different User

Util-PowerShellRunAs.psm1 includes a function called Invoke-PoSHRunAs which allows you to run a target script with its parameters under a different user account as part of a custom script VM Extension. A basic invocation of that script looks as follows:

$scriptName = [System.IO.Path]::Combine($scriptsBaseDirectory, "Sql-Basic01-SqlBasic.ps1") 
Write-Host "Calling into $scriptName"
try {
    $arguments = "-domainNameShort $domainNameShort " + `
                 "-domainNameLong $domainNameLong " +  `
                 "-domainAdminUser $usrDom " +  `
                 "-dataDriveLetter $dataDriveLetter " +  `
                 "-dataDirectoryName $dataDirectoryName " +  `
                 "-logDirectoryName $logDirectoryName " +  `
                 "-backupDirectoryName $backupDirectoryName " 
    Invoke-PoSHRunAs -FileName $scriptName -Arguments $arguments -Credential $credsLocal -Verbose:($IsVerbosePresent) -LogPath ".\LogFiles" -NeedsToRunAsProcess
} catch {
    Write-Error $_.Exception.Message
    Write-Error $_.Exception.ItemName
    Write-Error ("Failed executing script " + $scriptName + "! Stopping Execution!")
}

This function allows you to run the target script either through PowerShell remoting or in a separate process. Many setup steps of the environment we set up do not actually work through PowerShell remoting, because they rely on impersonation/delegation or do PowerShell remoting on their own, which imposes several limitations.

Therefore the second option this script provides is executing the script as a full-blown process. Since Custom Script Extensions run as Local System, this is nevertheless not as simple as just doing a Start-Process with credentials being passed in (or a System.Diagnostics.Process.Start() with different credentials). Local System does not have those permissions, unfortunately. So the work-around is to use the Windows Task Scheduler. For such cases the function performs the following actions:

  • Schedule a task in the Windows Task Scheduler with the credentials under which the process needs to run.
  • Manually start the task using PowerShell cmdlets (Start-ScheduledTask -TaskName $taskName)
  • Wait for the task to finish running
  • Look at the exit code
  • Throw an Exception if the exit code is non-zero, otherwise assume success
  • Delete the task again from the task scheduler

This “work-around” allowed us to execute all of the setup steps successfully. We also discussed this with the engineers building the SQL AlwaysOn single-data-center Azure Resource Group template that is available for single-data-center deployments in the new Azure Portal today. They are indeed doing the same thing; the details are just a bit different.

Encrypting Secrets Passed to Custom Script VM Extensions

Sometimes we simply had to pass secret information such as storage account keys to Custom Script Extensions. Since Azure VM Custom Script Extensions are logged very verbosely, it would be a piece of cake to get to that secret information by doing a Get-AzureVM and looking at the ResourceExtensionStatusList member, which contains the status and detailed call information for all VM Extensions.

Therefore we wanted to encrypt secrets as they are passed to Azure VM Extensions. The basic (yet not perfect) approach works based on some guidance from a blog post from Haishi Bai as mentioned earlier.

I’ve essentially written another PowerShell module (Util-CertsPasswords) which can perform the following actions:

  • Create a self-signed certificate as per guidance on MSDN for Azure.
  • Encrypt Passwords using such a certificate and return a base64-encoded, encrypted version.
  • Decrypt Passwords using such a certificate and return the clear-text password.

In our overall workflow all secrets including passwords and storage account keys which are passed to VM Custom Script Extensions as parameters are passed as encrypted values using this module.

Using Azure cmdlets, we make sure that the certificates are published with the VM as part of our main provisioning script, as per Michael Washam’s guidance from the Azure product group.

Every script that gets executed as part of a custom VM Script Extension receives an encrypted password and uses the module I’ve written to decrypt it and use it for the remaining script such as follows:

# Import the module that allows running PowerShell scripts easily as different user
Import-Module .\Util-PowerShellRunAs.psm1 -Force
Import-Module .\Util-CertsPasswords.psm1 -Force

# Decrypt encrypted passwords using the passed certificate
Write-Verbose "Decrypting Password with Password Utility Module..."
$localAdminPwd = Get-DecryptedPassword -certName $certNamePwdEnc -encryptedBase64Password $localAdminPwdEnc 
$domainAdminPwd = Get-DecryptedPassword -certName $certNamePwdEnc -encryptedBase64Password $domainAdminPwdEnc 
Write-Verbose "Successfully decrypted VM Extension passed password"

The main provisioning script encrypts the passwords and secrets using that very same module before being passed into VM Custom Script Extensions as follows:

$vmExtParamStorageAccountKeyEnc = `
    Get-EncryptedPassword -certName $certNameForPwdEncryption `
                          -passwordToEncrypt ($StorageAccountPrimaryRegionKey.Primary)

That way we at least make sure that no un-encrypted secret is visible in the Azure VM Custom Script Extension logs, which can easily be retrieved with the Azure Service Management PowerShell cmdlets.
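Why base64? The certificate-based encryption produces raw binary, which cannot travel as a plain string parameter into a VM extension. The base64 wrapping step alone can be illustrated with a few lines (this is only the encoding half; the certificate encryption/decryption itself is omitted):

```python
import base64


def to_parameter(encrypted_bytes):
    """Encode encrypted bytes as a base64 string so they can be passed
    safely as a string parameter to a Custom Script Extension."""
    return base64.b64encode(encrypted_bytes).decode("ascii")


def from_parameter(b64_string):
    """Decode the base64 parameter back to the encrypted bytes before
    decrypting them with the certificate's private key inside the VM."""
    return base64.b64decode(b64_string)


# Stand-in for certificate-encrypted output: arbitrary, non-printable bytes.
blob = bytes([0x00, 0xFF, 0x10, 0x20])
assert from_parameter(to_parameter(blob)) == blob  # lossless round trip
```

Even the base64 value still ends up in the extension logs, of course; the point is that only the encrypted blob is visible there, not the clear-text secret.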

Final Words and More…

As I said, there are lots of other re-usable parts in the package I’ve just published on my GitHub repository, which can even be used to apply further setup and configuration steps on VM environments that have been entirely provisioned with Azure Resource Groups and Azure Resource Manager. A few examples:

  • Execute additional Custom Script VM Extensions on running VMs.
  • Wait for Custom Script VM Extensions to complete on running VMs.
  • A ready-to-use PowerShell function that makes it easier to Remote PowerShell into provisioned VMs.

We also make use of an AzureNetworking PowerShell module published on the Technet Gallery. But note that we also made some bug-fixes in that module (such as dealing with “totally empty VNET configuration XML files”).

Generally, building these ~2500 lines of PowerShell code was super-hard, but a great learning experience. I am really keen to publish the follow-up post that demonstrates how much easier Azure Resource Group templates make such a complex setup.

Also, I do hope that we will have such a multi-data-center template in the default gallery soon, since it is highly valuable for all partners and customers that need to provide high availability across multiple data centers using SQL Server Virtual Machines. In the meantime we will try to provide a sample based on the work above as soon as we have time/resources for the implementation.

Finally – thanks to Max Knor and Igor Pagliai – without their help we would not have achieved these goals at this level of completeness!