<h1 id="instance-metadata-and-managed-service-identity">Azure Instance Metadata Service and Managed Service Identity</h1>
<p><em>Mario Szpuszta, mszcool’s thoughts and cents revealed, 2017-09-26</em></p>
<p>A lot changed since my last blog post… we had a great and beautiful summer with an awesome vacation, and I am now part of the <a href="https://blogs.msdn.microsoft.com/azurecat/">Azure Customer Advisory Team</a>, which is the customer-facing part of Azure Engineering. So, I finally ended up in <a href="https://azure.microsoft.com/en-us/blog/author/jasonz/">Jason Zander’s</a> part of Microsoft, the person who’s responsible for Azure itself. That means I am now involved in the most complex Azure projects we run with customers and am no longer dedicated to SAP only, although I still work with SAP a lot.</p>
<p>In the meantime, a lot of Azure tech expanded as well. In this post I want to focus on two specific features: the <a href="https://docs.microsoft.com/en-us/azure/virtual-machines/linux/instance-metadata-service">In-VM Instance Metadata Service</a> and the <a href="https://docs.microsoft.com/en-us/azure/active-directory/msi-overview">Managed Service Identity</a> (MSI, for short), which we recently started using in a customer project even before MSI became publicly available and was announced.</p>
<!--more-->
<p>I’ve already posted about the need for in-VM instance metadata, as well as an approach for allowing Virtual Machines to perform automated management operations, in <a href="http://blog.mszcool.com/index.php/2016/08/azure-virtual-machine-a-solution-for-instance-metadata-in-linux-and-windows-vms/">a previous blog post</a>. While what I wrote back then is still technically possible, MSI and the in-VM Instance Metadata service are now the recommendation for such scenarios. So, you can consider this the long-awaited follow-up to <a href="http://blog.mszcool.com/index.php/2016/08/azure-virtual-machine-a-solution-for-instance-metadata-in-linux-and-windows-vms/">that post</a>!</p>
<h2 id="recap-the-scenario">Recap the scenario</h2>
<p>The scenario <a href="http://blog.mszcool.com/index.php/2016/08/azure-virtual-machine-a-solution-for-instance-metadata-in-linux-and-windows-vms/">I posted about back then</a> concerned virtual machines that need to read data about themselves and also modify their own configuration settings through Azure Resource Manager REST API calls. In the meantime, that very same customer came to us with a new scenario that requires a similar capability.</p>
<p>Essentially, in that scenario a VM needed to capture its own IP addresses and determine the IP addresses of its peers to perform automated configuration of network routes and <a href="http://www.keepalived.org/">keepalived</a> settings for an HA setup (more details to follow in a separate blog post).</p>
<p>All of this is possible through a combined use of the new Azure in-VM instance metadata service and the Managed Service Identity!</p>
<h2 id="in-vm-instance-metadata-in-a-nutshell"><a href="https://docs.microsoft.com/en-us/azure/virtual-machines/linux/instance-metadata-service">In-VM Instance Metadata</a> in a Nutshell</h2>
<p>This is really nothing special; AWS and other cloud providers have had it for ages. It essentially gives applications and scripts running inside of the VM an HTTP endpoint that is available from within the VM only. This endpoint returns fundamental details about the Virtual Machine such as its name, network configuration, unique identifiers, etc. For Azure Virtual Machines, this endpoint is available on <code class="language-plaintext highlighter-rouge">http://169.254.169.254/metadata/instance?api-version=2017-04-02</code> and returns JSON-formatted data about the virtual machine that looks similar to the following:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>myuser@mylinuxvm:~<span class="nv">$ </span>curl <span class="nt">-H</span> Metadata:true <span class="s2">"http://169.254.169.254/metadata/instance?api-version=2017-04-02"</span> | jq
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 515 100 515 0 0 115k 0 <span class="nt">--</span>:--:-- <span class="nt">--</span>:--:-- <span class="nt">--</span>:--:-- 125k
<span class="o">{</span>
<span class="s2">"compute"</span>: <span class="o">{</span>
<span class="s2">"location"</span>: <span class="s2">"westeurope"</span>,
<span class="s2">"name"</span>: <span class="s2">"mylinuxvm"</span>,
<span class="s2">"offer"</span>: <span class="s2">"UbuntuServer"</span>,
<span class="s2">"osType"</span>: <span class="s2">"Linux"</span>,
<span class="s2">"platformFaultDomain"</span>: <span class="s2">"0"</span>,
<span class="s2">"platformUpdateDomain"</span>: <span class="s2">"0"</span>,
<span class="s2">"publisher"</span>: <span class="s2">"Canonical"</span>,
<span class="s2">"sku"</span>: <span class="s2">"16.04-LTS"</span>,
<span class="s2">"version"</span>: <span class="s2">"16.04.201708151"</span>,
<span class="s2">"vmId"</span>: <span class="s2">"d7......-9...-4..4-b..b-2..........4"</span>,
<span class="s2">"vmSize"</span>: <span class="s2">"Standard_D2s_v3"</span>
<span class="o">}</span>,
<span class="s2">"network"</span>: <span class="o">{</span>
<span class="s2">"interface"</span>: <span class="o">[</span>
<span class="o">{</span>
<span class="s2">"ipv4"</span>: <span class="o">{</span>
<span class="s2">"ipAddress"</span>: <span class="o">[</span>
<span class="o">{</span>
<span class="s2">"privateIpAddress"</span>: <span class="s2">"10.1.0.5"</span>,
<span class="s2">"publicIpAddress"</span>: <span class="s2">"xx.xx.xx.xx"</span>
<span class="o">}</span>
<span class="o">]</span>,
<span class="s2">"subnet"</span>: <span class="o">[</span>
<span class="o">{</span>
<span class="s2">"address"</span>: <span class="s2">"10.1.0.0"</span>,
<span class="s2">"prefix"</span>: <span class="s2">"24"</span>
<span class="o">}</span>
<span class="o">]</span>
<span class="o">}</span>,
<span class="s2">"ipv6"</span>: <span class="o">{</span>
<span class="s2">"ipAddress"</span>: <span class="o">[]</span>
<span class="o">}</span>,
<span class="s2">"macAddress"</span>: <span class="s2">"00........B3"</span>
<span class="o">}</span>
<span class="o">]</span>
<span class="o">}</span>
<span class="o">}</span>
myuser@mylinuxvm:~<span class="err">$</span>
</code></pre></div></div>
<p>It’s a simple REST service accessible only to code that runs inside of the VM. All you need to take care of is ensuring that you pass the <code class="language-plaintext highlighter-rouge">Metadata: true</code> HTTP header when calling the service. The call above shows only the fundamental basics; there’s much more the service provides. For a complete look, review <a href="https://docs.microsoft.com/en-us/azure/virtual-machines/linux/instance-metadata-service">the documentation</a>.</p>
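<p>To make that call concrete in code, here is a minimal Python sketch. The endpoint, api-version, and <code class="language-plaintext highlighter-rouge">Metadata: true</code> header are exactly what the service expects, as shown above; the helper names (<code class="language-plaintext highlighter-rouge">imds_request</code>, <code class="language-plaintext highlighter-rouge">parse_private_ip</code>) and the trimmed sample payload are illustrative, not part of any SDK:</p>

```python
# Minimal sketch: building IMDS requests and parsing the JSON they return.
import json
import urllib.request

IMDS_BASE = "http://169.254.169.254/metadata/instance"

def imds_request(path="", api_version="2017-04-02"):
    """Build a request for the instance metadata endpoint.
    The Metadata: true header is mandatory; without it the service rejects the call."""
    url = f"{IMDS_BASE}{path}?api-version={api_version}"
    return urllib.request.Request(url, headers={"Metadata": "true"})

def parse_private_ip(metadata_json):
    """Pull the first private IPv4 address out of an IMDS response body."""
    doc = json.loads(metadata_json)
    return doc["network"]["interface"][0]["ipv4"]["ipAddress"][0]["privateIpAddress"]

# Inside a VM you would do:
#   with urllib.request.urlopen(imds_request()) as resp:
#       print(parse_private_ip(resp.read()))
# Outside a VM, exercise the parser against a trimmed version of the sample above:
sample = '{"network": {"interface": [{"ipv4": {"ipAddress": [{"privateIpAddress": "10.1.0.5", "publicIpAddress": "x.x.x.x"}]}}]}}'
print(parse_private_ip(sample))  # 10.1.0.5
```

<p>Splitting the request builder from the parser keeps the parsing logic testable outside of an Azure VM, which is handy for unit tests of automation scripts.</p>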
<h2 id="managed-service-identities-msi"><a href="https://docs.microsoft.com/en-us/azure/active-directory/msi-overview">Managed Service Identities</a> (MSI)</h2>
<p>The in-VM instance metadata service is great if you need to query details about the VM itself. But what if you need to query more? For example, which other servers are available in the same resource group, so that you can configure <a href="http://www.keepalived.org/">keepalived</a> for an automated HA setup with <a href="http://www.linux-admins.net/2015/02/keepalived-using-unicast-track-and.html">unicast instead of multicast</a> for the availability pings? That’s especially important on Azure, since multicast is blocked by the VNET infrastructure. Finding out which other servers are available in the same resource group is not possible through the in-VM instance metadata service!</p>
<p>In my previous <a href="http://blog.mszcool.com/index.php/2016/08/azure-virtual-machine-a-solution-for-instance-metadata-in-linux-and-windows-vms/">blog post</a> about this topic, when Instance Metadata and MSI were not yet available, the scenario was for a Marketplace image to open up ports on <a href="https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-nsg">Azure NSGs</a> as part of an automated process, after the user entered additional details into a post-provisioning registration application running inside of the VM. Again, such actions require access to the Azure Resource Manager REST APIs… and that, in turn, requires authenticating against Azure Active Directory with a valid principal.</p>
<p>In the past, you had to manually <a href="https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli?toc=%2Fazure%2Fazure-resource-manager%2Ftoc.json&view=azure-cli-latest">create a Service Principal</a> for such actions and assign it permissions in the Azure subscription. Then, from within the VM, your script or application had to sign in against Azure AD with this Service Principal to gain access to the Azure Resource Manager REST APIs. This introduced a very delicate challenge: where would you store the credentials for signing in with the Service Principal from within the VM!?</p>
<p>With Managed Service Identities, these kinds of scenarios become much easier to implement, and you no longer have to manage secrets for Service Principals inside Virtual Machines. With MSI activated, all sorts of Azure service instances can get identities assigned which are fully managed by Azure through its <code class="language-plaintext highlighter-rouge">Microsoft.ManagedIdentity</code> resource provider.</p>
<p>MSIs can be enabled on Virtual Machines, but also other types of Services as you can read in the documentation. You can enable it through the <a href="https://docs.microsoft.com/en-us/azure/active-directory/msi-qs-configure-portal-windows-vm">portal</a>, via <a href="https://docs.microsoft.com/en-us/azure/active-directory/msi-qs-configure-template-windows-vm">an ARM template</a> or with <a href="https://docs.microsoft.com/en-us/azure/active-directory/msi-qs-configure-powershell-windows-vm">PowerShell</a> or the <a href="https://docs.microsoft.com/en-us/azure/active-directory/msi-qs-configure-cli-windows-vm">Azure CLI</a>!</p>
<p><img src="https://raw.githubusercontent.com/mszcool/azureMsiAndInstanceMetadata/master/images/Figure01.jpg" alt="Enabling Managed Service Identities" /></p>
<p>There are two pieces to it, which become more visible when you enable MSIs:</p>
<ul>
<li>
<p><strong>Assigning an MSI to a resource</strong> which essentially results in the creation of a “managed service principal” for an Azure Resource such as a Virtual Machine that is made available to this Azure Resource, only!</p>
</li>
<li>
<p><strong>Making tokens available</strong> to the respective resource for which the Managed Service Identity has been created. For VMs, this happens through a <strong>Virtual Machine Extension</strong> called <code class="language-plaintext highlighter-rouge">ManagedIdentityExtensionForWindows</code> or <code class="language-plaintext highlighter-rouge">ManagedIdentityExtensionForLinux</code>, respectively. When the extension is enabled for a virtual machine, any software running inside of the VM can request a token, which is created as a result of an authentication against Azure AD with the MSI credentials. You don’t have to take care of those credentials since they are managed by the MSI infrastructure for you.</p>
</li>
</ul>
<p>Once you have an MSI attached to a Virtual Machine (or another Azure resource), you can assign permissions to this identity for performing management operations against resources in your Azure subscriptions. The following screenshot shows this in the portal:</p>
<p><img src="https://raw.githubusercontent.com/mszcool/azureMsiAndInstanceMetadata/master/images/Figure02.jpg" alt="Assigning Permissions to a Managed Service Identity" /></p>
<p>If you need to assign the permissions via the CLI, you need the object IDs and app IDs of the service principals that are managed for you behind the scenes. Below is an excerpt of Azure CLI commands and results showing what you need to do:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mszcool@dev:~<span class="nv">$ </span>az vm show <span class="nt">--resource-group</span> LinuxHaWithUdrs <span class="nt">--name</span> lxHaServerVm0 <span class="nt">--out</span> json
<span class="o">{</span>
...
<span class="s2">"id"</span>: <span class="s2">"/subscriptions/a...fe/resourceGroups/LinuxHaWithUdrs/providers/Microsoft.Compute/virtualMachines/lxHaServerVm0"</span>,
<span class="s2">"identity"</span>: <span class="o">{</span>
<span class="s2">"principalId"</span>: <span class="s2">"f3....26d"</span>,
<span class="s2">"tenantId"</span>: <span class="s2">"72....47"</span>,
<span class="s2">"type"</span>: <span class="s2">"SystemAssigned"</span>
<span class="o">}</span>,
<span class="s2">"instanceView"</span>: null,
<span class="s2">"licenseType"</span>: null,
<span class="s2">"location"</span>: <span class="s2">"westeurope"</span>,
<span class="s2">"name"</span>: <span class="s2">"lxHaServerVm0"</span>,
<span class="s2">"networkProfile"</span>: <span class="o">{</span>
...
<span class="o">}</span>,
<span class="s2">"osProfile"</span>: <span class="o">{</span>
...
<span class="o">}</span>,
<span class="s2">"plan"</span>: null,
<span class="s2">"provisioningState"</span>: <span class="s2">"Succeeded"</span>,
<span class="s2">"resourceGroup"</span>: <span class="s2">"LinuxHaWithUdrs"</span>,
<span class="s2">"resources"</span>: <span class="o">[</span>
...
<span class="o">]</span>,
<span class="s2">"storageProfile"</span>: <span class="o">{</span>
...
<span class="o">}</span>
<span class="o">}</span>,
<span class="s2">"tags"</span>: <span class="o">{}</span>,
<span class="s2">"type"</span>: <span class="s2">"Microsoft.Compute/virtualMachines"</span>,
<span class="s2">"vmId"</span>: <span class="s2">"52.....6bf"</span>
<span class="o">}</span>
mszcool@dev:~<span class="nv">$ </span>az ad sp show <span class="nt">--id</span> f3....26d
AppId DisplayName ObjectId ObjectType
<span class="nt">----------------</span> <span class="nt">----------------</span> <span class="nt">----------------</span> <span class="nt">----------------</span>
8b............f1 RN_lxHaServerVm0 f3............6d ServicePrincipal
</code></pre></div></div>
<p>As you can see, when you get the VM object through ARM, it contains a new section called <code class="language-plaintext highlighter-rouge">identity</code> which holds all the details about the managed service identity that you need for retrieving further information from Azure AD (above done with the CLI as well).</p>
<p>That information can be used for things such as <a href="https://docs.microsoft.com/en-us/azure/active-directory/role-based-access-control-custom-roles">creating custom roles with permissions</a> and then assigning the MSI to this custom role instead of assigning explicit permissions.</p>
<h1 id="and-end-2-end-example">An end-to-end example</h1>
<p>As I’ve mentioned before, one of the main use cases (also for my customer) for combining these assets is VMs that need to retrieve (and modify) details about themselves and their peers in a joint deployment. With a simplified example I want to demonstrate the basic mechanics of the Instance Metadata Service and the Managed Service Identity so that you understand how you can make use of them in your own scripts and applications.</p>
<p>The sample builds the foundation for the scenarios I’ve explained earlier (VMs getting infos about themselves and their peers). Rather than trying to hit it all with a single post, you can expect more complex scenario posts later on that make use of the mechanics explained in this post.</p>
<p>Essentially, the sample creates an infrastructure with a jump-box and a set of servers as shown in the following <a href="https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-create">Azure Network Watcher topology</a> diagram.</p>
<p><strong>All of the code is available on my GitHub repository for review:</strong></p>
<p><a href="https://github.com/mszcool/azureMsiAndInstanceMetadata">https://github.com/mszcool/azureMsiAndInstanceMetadata</a></p>
<p><img src="https://raw.githubusercontent.com/mszcool/azureMsiAndInstanceMetadata/master/images/Figure03.jpg" alt="Network Watcher Topology" /></p>
<p>On each of the servers runs a simple Go-based REST API which shows the instance metadata of the server itself and retrieves all the other servers in the same resource group. The servers are exposed through an Azure Load Balancer using NAT so that every server can be accessed individually on its own port. Note that I’ve set this up this way for <strong>demo purposes only</strong>, so that you can easily access each server and examine its instance metadata and the details it returns about its peers.</p>
<p>In a real-world environment I can hardly think of scenarios in which you would expose instance metadata or data about peers to the public directly. So, to reiterate: <strong>this is for demo purposes only</strong>.</p>
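<p>The peer discovery the servers perform boils down to a single ARM REST call: listing the Virtual Machines of the resource group with a bearer token. Here is a hedged Python sketch of that call; the URL shape follows the public Azure Resource Manager REST API for <code class="language-plaintext highlighter-rouge">Microsoft.Compute</code>, while the helper names and the placeholder subscription/token values are mine for illustration:</p>

```python
# Sketch: discover peer VMs in the same resource group via the ARM REST API.
import json
import urllib.request

ARM = "https://management.azure.com"

def list_vms_request(subscription_id, resource_group, token, api_version="2017-03-30"):
    """Build the GET request that lists all VMs of a resource group."""
    url = (f"{ARM}/subscriptions/{subscription_id}"
           f"/resourceGroups/{resource_group}"
           f"/providers/Microsoft.Compute/virtualMachines"
           f"?api-version={api_version}")
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})

def peer_names(list_response_json, own_name):
    """Filter the ARM list response down to the names of all peers except ourselves."""
    vms = json.loads(list_response_json).get("value", [])
    return [vm["name"] for vm in vms if vm["name"] != own_name]

# Placeholder values; inside the VM the token comes from the MSI endpoint.
req = list_vms_request("00000000-0000-0000-0000-000000000000", "LinuxHaWithUdrs", "<token>")
print(req.full_url)
print(peer_names('{"value": [{"name": "lxHaServerVm0"}, {"name": "lxHaServerVm1"}]}',
                 "lxHaServerVm0"))  # ['lxHaServerVm1']
```

<p>A VM would combine this with its own name from the instance metadata service to exclude itself from the peer list, e.g. when generating a keepalived unicast peer configuration.</p>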
<h2 id="assigning-msis-to-the-servers-and-giving-them-permissions">Assigning MSIs to the Servers and giving them permissions</h2>
<p>For the sample, I used ARM templates to assign MSIs to the individual Server VMs and enable the respective MSI VM extension so that an application running inside of the respective VM can get a token for accessing resources under the identity of the VM it’s running in - the excerpt is from the <a href="https://github.com/mszcool/azureMsiAndInstanceMetadata/blob/master/azuredeploy.json">azuredeploy.json</a> template on my GitHub repository.</p>
<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">...</span>
<span class="p">{</span>
<span class="dl">"</span><span class="s2">apiVersion</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">[variables('computeAPIVersion')]</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">type</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">Microsoft.Compute/virtualMachines</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">copy</span><span class="dl">"</span><span class="p">:</span> <span class="p">{</span>
<span class="dl">"</span><span class="s2">name</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">serverVmCopy</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">count</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">[parameters('serverCount')]</span><span class="dl">"</span>
<span class="p">},</span>
<span class="dl">"</span><span class="s2">name</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">[concat(variables('serverVmNamePrefix'), copyIndex())]</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">location</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">[parameters('location')]</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">identity</span><span class="dl">"</span><span class="p">:</span> <span class="p">{</span>
<span class="dl">"</span><span class="s2">type</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">systemAssigned</span><span class="dl">"</span>
<span class="p">},</span>
<span class="dl">"</span><span class="s2">dependsOn</span><span class="dl">"</span><span class="p">:</span> <span class="p">[</span>
<span class="dl">"</span><span class="s2">[resourceId('Microsoft.Network/networkInterfaces',concat(variables('serverNicNamePrefix'),copyIndex()))]</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">[variables('serversAvSetId')]</span><span class="dl">"</span>
<span class="p">],</span>
<span class="dl">"</span><span class="s2">properties</span><span class="dl">"</span><span class="p">:</span> <span class="p">{</span>
<span class="p">...</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="p">...</span>
<span class="p">{</span>
<span class="dl">"</span><span class="s2">apiVersion</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">[variables('computeAPIVersion')]</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">type</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">Microsoft.Compute/virtualMachines/extensions</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">name</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">[concat(variables('serverVmNamePrefix'),copyIndex(),'/IdentityExtension')]</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">location</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">[parameters('location')]</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">copy</span><span class="dl">"</span><span class="p">:</span> <span class="p">{</span>
<span class="dl">"</span><span class="s2">name</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">serverVmMsiExtensionCopy</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">count</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">[parameters('serverCount')]</span><span class="dl">"</span>
<span class="p">},</span>
<span class="dl">"</span><span class="s2">dependsOn</span><span class="dl">"</span><span class="p">:</span> <span class="p">[</span>
<span class="dl">"</span><span class="s2">[resourceId('Microsoft.Compute/virtualMachines', concat(variables('serverVmNamePrefix'), copyIndex()))]</span><span class="dl">"</span>
<span class="p">],</span>
<span class="dl">"</span><span class="s2">properties</span><span class="dl">"</span><span class="p">:</span> <span class="p">{</span>
<span class="dl">"</span><span class="s2">publisher</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">Microsoft.ManagedIdentity</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">type</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">ManagedIdentityExtensionForLinux</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">typeHandlerVersion</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">1.0</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">autoUpgradeMinorVersion</span><span class="dl">"</span><span class="p">:</span> <span class="kc">true</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">settings</span><span class="dl">"</span><span class="p">:</span> <span class="p">{</span>
<span class="dl">"</span><span class="s2">port</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">[variables('msiExtensionPort')]</span><span class="dl">"</span>
<span class="p">},</span>
<span class="dl">"</span><span class="s2">protectedSettings</span><span class="dl">"</span><span class="p">:</span> <span class="p">{}</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="p">...</span>
</code></pre></div></div>
<p>As you can see above, the server-VM gets a system assigned identity in the ARM template. Further down in the template, the Managed Identity Extension is activated for each server VM instance. The variable <code class="language-plaintext highlighter-rouge">msiExtensionPort</code> is set to <code class="language-plaintext highlighter-rouge">50342</code> in my example, which means that an application or script running inside of the VM can retrieve a token for management operations from within the VM on that port (<code class="language-plaintext highlighter-rouge">http://localhost:50342/oauth2/token</code>).</p>
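<p>In code, obtaining such a token is a plain HTTP POST against that local endpoint. The following Python sketch shows the request shape; the port, the <code class="language-plaintext highlighter-rouge">Metadata: true</code> header, and the <code class="language-plaintext highlighter-rouge">resource</code> parameter follow the MSI extension’s token endpoint as described above, while the helper names are mine:</p>

```python
# Sketch: request an ARM token from the local Managed Identity extension endpoint.
import json
import urllib.parse
import urllib.request

def msi_token_request(port=50342, resource="https://management.azure.com/"):
    """POST to the local MSI endpoint; the JSON response carries access_token."""
    body = urllib.parse.urlencode({"resource": resource}).encode()
    return urllib.request.Request(
        f"http://localhost:{port}/oauth2/token",
        data=body,  # presence of a body makes this a POST
        headers={"Metadata": "true"},
    )

def extract_token(response_json):
    return json.loads(response_json)["access_token"]

req = msi_token_request()
print(req.full_url)  # http://localhost:50342/oauth2/token
# Inside a VM with the extension enabled:
#   with urllib.request.urlopen(req) as resp:
#       token = extract_token(resp.read())
```

<p>Note that no credentials appear anywhere in this snippet; that is exactly the point of MSI.</p>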
<h2 id="taking-care-of-rbac">Taking care of RBAC</h2>
<p>Now we have an MSI and the ability for applications to get tokens when running inside of the VM. But so far, the possibilities of using that identity are limited since it does not have any permissions yet. These are assigned through the ARM template as well:</p>
<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
<span class="p">...</span>
<span class="p">{</span>
<span class="dl">"</span><span class="s2">apiVersion</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">[variables('authAPIVersion')]</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">type</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">Microsoft.Authorization/roleAssignments</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">name</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">[parameters('rbacGuids')[add(mul(copyIndex(),2),1)]]</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">copy</span><span class="dl">"</span><span class="p">:</span> <span class="p">{</span>
<span class="dl">"</span><span class="s2">name</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">serverVmRbacDeployment</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">count</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">[parameters('serverCount')]</span><span class="dl">"</span>
<span class="p">},</span>
<span class="dl">"</span><span class="s2">dependsOn</span><span class="dl">"</span><span class="p">:</span> <span class="p">[</span>
<span class="dl">"</span><span class="s2">[resourceId('Microsoft.Compute/virtualMachines', concat(variables('serverVmNamePrefix'), copyIndex()))]</span><span class="dl">"</span>
<span class="p">],</span>
<span class="dl">"</span><span class="s2">properties</span><span class="dl">"</span><span class="p">:</span> <span class="p">{</span>
<span class="dl">"</span><span class="s2">roleDefinitionId</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">[variables('rbacContributorRole')]</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">principalId</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">[reference(concat(resourceId('Microsoft.Compute/virtualMachines',concat(variables('serverVmNamePrefix'),copyIndex())),'/providers/Microsoft.ManagedIdentity/Identities/default'),variables('managedIdentityAPIVersion')).principalId]</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">scope</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">[resourceGroup().id]</span><span class="dl">"</span>
<span class="p">}</span>
<span class="p">},</span>
<span class="p">..</span>
</code></pre></div></div>
<p>This assigns the created MSIs permissions to read resources of the resource group the VMs are deployed in. To get the role definition ID, which is stored in <code class="language-plaintext highlighter-rouge">[variables('rbacContributorRole')]</code> in my template, I executed an Azure CLI statement along the lines of the following:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>az role definition list <span class="nt">--query</span> <span class="s2">"[?properties.roleName == 'Contributor']"</span> <span class="nt">--out</span> json
</code></pre></div></div>
<p>The next tricky bit is the name of the RBAC role assignment. Unfortunately, that needs to be a unique GUID. In my very simplified example, I pass the GUIDs for the role assignments into the template as parameters:</p>
<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">...</span>
<span class="dl">"</span><span class="s2">rbacGuids</span><span class="dl">"</span><span class="p">:</span> <span class="p">{</span>
<span class="dl">"</span><span class="s2">type</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">array</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">metadata</span><span class="dl">"</span><span class="p">:</span> <span class="p">{</span>
<span class="dl">"</span><span class="s2">description</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">Exactly ONE UNIQUE GUID for each server VM is needed in this array for the RBAC assignments (sorry for that)! WARNING: if you want to keep this template deployment repeatable, you must generate new GUIDs for every run or delete RBAC assignments before running it, again!</span><span class="dl">"</span>
<span class="p">},</span>
<span class="dl">"</span><span class="s2">defaultValue</span><span class="dl">"</span><span class="p">:</span> <span class="p">[</span>
<span class="dl">"</span><span class="s2">12f66315-2fdf-460a-9c53-8654ae72c390</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">12f66315-2fdf-460a-9c53-8654ae72c391</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">12f66315-2fdf-460a-9c53-8654ae72c392</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">12f66315-2fdf-460a-9c53-8654ae72c393</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">12f66315-2fdf-460a-9c53-8654ae72c394</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">12f66315-2fdf-460a-9c53-8654ae72c395</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">12f66315-2fdf-460a-9c53-8654ae72c396</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">12f66315-2fdf-460a-9c53-8654ae72c397</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">12f66315-2fdf-460a-9c53-8654ae72c398</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">12f66315-2fdf-460a-9c53-8654ae72c399</span><span class="dl">"</span>
<span class="p">],</span>
<span class="dl">"</span><span class="s2">minLength</span><span class="dl">"</span><span class="p">:</span> <span class="mi">4</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">maxLength</span><span class="dl">"</span><span class="p">:</span> <span class="mi">18</span>
<span class="p">}</span>
<span class="p">...</span>
</code></pre></div></div>
<p>The reason for this is to make it simple to replace those values as part of an integrated CI/CD pipeline with every continuous build that might involve such an ARM-template deployment. I might write a separate, short post about that topic. For now, I just grab a GUID for each server-RBAC-assignment I want to make as part of my template to generate a unique name for the assignment by using <code class="language-plaintext highlighter-rouge">"name": "[parameters('rbacGuids')[add(mul(copyIndex(),2),1)]]"</code>.</p>
<p>The next tricky part of this section in the template is getting the ID of the principal created as the managed service identity of the respective server VM. This part of the template gets really hard to read, so I broke it up into multiple lines here, although you cannot do that in a real template:</p>
<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code> <span class="dl">"</span><span class="s2">properties</span><span class="dl">"</span><span class="p">:</span> <span class="p">{</span>
<span class="dl">"</span><span class="s2">roleDefinitionId</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">[variables('rbacContributorRole')]</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">principalId</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">[reference
(
concat(
resourceId(
'Microsoft.Compute/virtualMachines',
concat(
variables('serverVmNamePrefix'),copyIndex()
)
),'/providers/Microsoft.ManagedIdentity/Identities/default'
),
variables('managedIdentityAPIVersion')
).principalId]</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">scope</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">[resourceGroup().id]</span><span class="dl">"</span>
<span class="p">}</span>
</code></pre></div></div>
<p>The code uses the <code class="language-plaintext highlighter-rouge">reference()</code>-template-function to get the principal ID of the service principal created as the managed identity. That principal is a child-object of the virtual machine, so we need to start with the <code class="language-plaintext highlighter-rouge">resourceId()</code> of the virtual machine and attach the identities section to it. Finally, the <code class="language-plaintext highlighter-rouge">reference()</code>-function requires an API version, for which we use the version of the managed identity provider from a variable <code class="language-plaintext highlighter-rouge">"managedIdentityAPIVersion": "2015-08-31-PREVIEW"</code> in the code.</p>
<h2 id="getting-a-token-for-your-msi">Getting a Token for your MSI</h2>
<p>Based on the requirements of that specific customer project where we needed this functionality, I decided to use Go as the programming language. I am still not a GoLang-expert, so I took the opportunity to learn. Using MSIs always follows two major steps:</p>
<ul>
<li>
<p>Acquire a token through the locally installed VM Extension.</p>
<p>This happens by calling the <code class="language-plaintext highlighter-rouge">http://localhost:<port-selected-in-MSI-extension-settings>/oauth2/token</code> endpoint which is offered by the MSI VM Extension.</p>
</li>
<li>
<p>Use that token in REST API calls to the Azure Resource Manager</p>
<p>These are regular REST-calls with the HTTP Authorization header containing the bearer token retrieved earlier.</p>
</li>
</ul>
<p>In my GoLang-based example, I have one module contained in the file <a href="https://github.com/mszcool/azureMsiAndInstanceMetadata/blob/master/app/msitoken.go">msitoken.go</a> which performs a REST-call against the local OAuth2 server offered by the VM Extension (note that this is an incomplete excerpt, for the full code look at the file msitoken.go on my GitHub repo):</p>
<div class="language-golang highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">// etc. ...</span>
<span class="k">const</span> <span class="n">msiTokenURL</span> <span class="kt">string</span> <span class="o">=</span> <span class="s">"http://localhost:%d/oauth2/token"</span>
<span class="k">const</span> <span class="n">resourceURL</span> <span class="kt">string</span> <span class="o">=</span> <span class="s">"https://management.azure.com/"</span>
<span class="c">// etc. ...</span>
<span class="k">var</span> <span class="n">myToken</span> <span class="n">MsiToken</span>
<span class="c">// Build a request to call the MSI Extension OAuth2 Service</span>
<span class="c">// The request must contain the resource for which we request the token</span>
<span class="n">finalRequestURL</span> <span class="o">:=</span> <span class="n">fmt</span><span class="o">.</span><span class="n">Sprintf</span><span class="p">(</span><span class="s">"%s?resource=%s"</span><span class="p">,</span> <span class="n">fmt</span><span class="o">.</span><span class="n">Sprintf</span><span class="p">(</span><span class="n">msiTokenURL</span><span class="p">,</span> <span class="n">msiPort</span><span class="p">),</span> <span class="n">url</span><span class="o">.</span><span class="n">QueryEscape</span><span class="p">(</span><span class="n">resourceURL</span><span class="p">))</span>
<span class="n">req</span><span class="p">,</span> <span class="n">err</span> <span class="o">:=</span> <span class="n">http</span><span class="o">.</span><span class="n">NewRequest</span><span class="p">(</span><span class="s">"GET"</span><span class="p">,</span> <span class="n">finalRequestURL</span><span class="p">,</span> <span class="no">nil</span><span class="p">)</span>
<span class="k">if</span> <span class="n">err</span> <span class="o">!=</span> <span class="no">nil</span> <span class="p">{</span>
<span class="n">log</span><span class="o">.</span><span class="n">Printf</span><span class="p">(</span><span class="s">"--- %s --- Failed creating http request --- %s"</span><span class="p">,</span> <span class="n">t</span><span class="o">.</span><span class="n">Format</span><span class="p">(</span><span class="n">time</span><span class="o">.</span><span class="n">RFC3339Nano</span><span class="p">),</span> <span class="n">err</span><span class="p">)</span>
<span class="k">return</span> <span class="n">myToken</span><span class="p">,</span> <span class="s">"{ </span><span class="se">\"</span><span class="s">error</span><span class="se">\"</span><span class="s">: </span><span class="se">\"</span><span class="s">failed creating http request object to request MSI token!</span><span class="se">\"</span><span class="s"> }"</span>
<span class="p">}</span>
<span class="c">// Set the required header for the HTTP request</span>
<span class="n">req</span><span class="o">.</span><span class="n">Header</span><span class="o">.</span><span class="n">Add</span><span class="p">(</span><span class="s">"Metadata"</span><span class="p">,</span> <span class="s">"true"</span><span class="p">)</span>
<span class="c">// Create the HTTP client and call the instance metadata service</span>
<span class="n">client</span> <span class="o">:=</span> <span class="o">&</span><span class="n">http</span><span class="o">.</span><span class="n">Client</span><span class="p">{}</span>
<span class="n">resp</span><span class="p">,</span> <span class="n">err</span> <span class="o">:=</span> <span class="n">client</span><span class="o">.</span><span class="n">Do</span><span class="p">(</span><span class="n">req</span><span class="p">);</span>
<span class="k">if</span> <span class="n">err</span> <span class="o">!=</span> <span class="no">nil</span> <span class="p">{</span>
<span class="n">t</span> <span class="o">=</span> <span class="n">time</span><span class="o">.</span><span class="n">Now</span><span class="p">()</span>
<span class="n">log</span><span class="o">.</span><span class="n">Printf</span><span class="p">(</span><span class="s">"--- %s --- Failed calling MSI token service --- %s"</span><span class="p">,</span> <span class="n">t</span><span class="o">.</span><span class="n">Format</span><span class="p">(</span><span class="n">time</span><span class="o">.</span><span class="n">RFC3339Nano</span><span class="p">),</span> <span class="n">err</span><span class="p">)</span>
<span class="k">return</span> <span class="n">myToken</span><span class="p">,</span> <span class="s">"{ </span><span class="se">\"</span><span class="s">error</span><span class="se">\"</span><span class="s">: </span><span class="se">\"</span><span class="s">failed calling MSI token service!</span><span class="se">\"</span><span class="s"> }"</span>
<span class="p">}</span>
<span class="c">// Complete reading the body</span>
<span class="k">defer</span> <span class="n">resp</span><span class="o">.</span><span class="n">Body</span><span class="o">.</span><span class="n">Close</span><span class="p">()</span>
<span class="c">// Now return the instance metadata JSON or another error if the status code is not in 2xx range</span>
<span class="k">if</span> <span class="p">(</span><span class="n">resp</span><span class="o">.</span><span class="n">StatusCode</span> <span class="o">>=</span> <span class="m">200</span><span class="p">)</span> <span class="o">&&</span> <span class="p">(</span><span class="n">resp</span><span class="o">.</span><span class="n">StatusCode</span> <span class="o"><=</span> <span class="m">299</span><span class="p">)</span> <span class="p">{</span>
<span class="n">dec</span> <span class="o">:=</span> <span class="n">json</span><span class="o">.</span><span class="n">NewDecoder</span><span class="p">(</span><span class="n">resp</span><span class="o">.</span><span class="n">Body</span><span class="p">)</span>
<span class="n">err</span> <span class="o">:=</span> <span class="n">dec</span><span class="o">.</span><span class="n">Decode</span><span class="p">(</span><span class="o">&</span><span class="n">myToken</span><span class="p">)</span>
<span class="c">// etc. ...</span>
<span class="p">}</span>
<span class="c">// etc. ...</span>
</code></pre></div></div>
<p>Two aspects are important:</p>
<ul>
<li>
<p>First, you always need to add the “Metadata: true” header to the call; requests without it will be rejected!</p>
</li>
<li>
<p>Second, you need to add a query-string parameter to the request called <code class="language-plaintext highlighter-rouge">resource=uri://to-your-resource-you-want-to-do-calls-to</code>. In our case, this is always the Azure Resource Manager REST APIs resource <code class="language-plaintext highlighter-rouge">https://management.azure.com/</code>.</p>
</li>
</ul>
<p>Once we have executed the call, we have a valid token available. Note that we didn’t have to fiddle around with any kind of secrets, which is super-convenient. The Azure MSI infrastructure takes care of all the required details, and there is not even a way to access any secrets for Managed Identities.</p>
<h2 id="using-the-msi-token">Using the MSI Token</h2>
<p>This is the rather simple part of the story because it’s no different from any other Azure REST API call performed with any other kind of Azure AD user/principal. Once you have the token, you just use it in the HTTP Authorization header to call into the Azure Resource Manager REST APIs, and if permissions are set up as outlined earlier when I wrote about RBAC, all should go well.</p>
<p>The following snippets are parts of the GoLang source file <a href="https://github.com/mszcool/azureMsiAndInstanceMetadata/blob/master/app/mypeers.go">mypeers.go</a>:</p>
<div class="language-golang highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
<span class="k">const</span> <span class="p">(</span>
<span class="n">environmentNameSubscription</span> <span class="kt">string</span> <span class="o">=</span> <span class="s">"SUBSCRIPTION_ID"</span>
<span class="n">environmentNameResourceGroup</span> <span class="kt">string</span> <span class="o">=</span> <span class="s">"RESOURCE_GROUP"</span>
<span class="n">restAPIEndpoint</span> <span class="kt">string</span> <span class="o">=</span>
<span class="s">"https://management.azure.com/subscriptions/%s/resourceGroups/%s/%s"</span>
<span class="n">vmRelativeEndpoint</span> <span class="kt">string</span> <span class="o">=</span>
<span class="s">"providers/Microsoft.Compute/virtualmachines?api-version=2016-04-30-preview"</span>
<span class="n">authorizationHeader</span> <span class="kt">string</span> <span class="o">=</span> <span class="s">"%s %s"</span>
<span class="p">)</span>
<span class="k">func</span> <span class="n">GetMyPeerVirtualMachines</span><span class="p">(</span><span class="n">msiToken</span> <span class="n">MsiToken</span><span class="p">)</span> <span class="p">(</span><span class="n">vms</span> <span class="kt">string</span><span class="p">,</span> <span class="n">errOut</span> <span class="kt">string</span><span class="p">)</span> <span class="p">{</span>
<span class="c">// etc. ...</span>
<span class="n">subID</span> <span class="o">:=</span> <span class="n">os</span><span class="o">.</span><span class="n">Getenv</span><span class="p">(</span><span class="n">environmentNameSubscription</span><span class="p">)</span>
<span class="n">resGroup</span> <span class="o">:=</span> <span class="n">os</span><span class="o">.</span><span class="n">Getenv</span><span class="p">(</span><span class="n">environmentNameResourceGroup</span><span class="p">)</span>
<span class="c">// etc. ...</span>
<span class="c">// Create the final endpoint URLs to call into the Azure Resource Manager VM REST API</span>
<span class="n">finalURL</span> <span class="o">:=</span> <span class="n">fmt</span><span class="o">.</span><span class="n">Sprintf</span><span class="p">(</span><span class="n">restAPIEndpoint</span><span class="p">,</span>
<span class="n">subID</span><span class="p">,</span> <span class="n">resGroup</span><span class="p">,</span> <span class="n">vmRelativeEndpoint</span><span class="p">)</span>
<span class="n">finalAuthHeader</span> <span class="o">:=</span> <span class="n">fmt</span><span class="o">.</span><span class="n">Sprintf</span><span class="p">(</span><span class="n">authorizationHeader</span><span class="p">,</span>
<span class="n">msiToken</span><span class="o">.</span><span class="n">TokenType</span><span class="p">,</span> <span class="n">msiToken</span><span class="o">.</span><span class="n">AccessToken</span><span class="p">)</span>
<span class="c">// Build a request to call the instance Azure in-VM metadata service</span>
<span class="n">req</span><span class="p">,</span> <span class="n">err</span> <span class="o">:=</span> <span class="n">http</span><span class="o">.</span><span class="n">NewRequest</span><span class="p">(</span><span class="s">"GET"</span><span class="p">,</span> <span class="n">finalURL</span><span class="p">,</span> <span class="no">nil</span><span class="p">)</span>
<span class="k">if</span> <span class="n">err</span> <span class="o">!=</span> <span class="no">nil</span> <span class="p">{</span>
<span class="c">// etc. ...</span>
<span class="p">}</span>
<span class="n">req</span><span class="o">.</span><span class="n">Header</span><span class="o">.</span><span class="n">Add</span><span class="p">(</span><span class="s">"Authorization"</span><span class="p">,</span> <span class="n">finalAuthHeader</span><span class="p">)</span>
<span class="c">// Create the HTTP client and call the instance metadata service</span>
<span class="n">client</span> <span class="o">:=</span> <span class="o">&</span><span class="n">http</span><span class="o">.</span><span class="n">Client</span><span class="p">{}</span>
<span class="n">resp</span><span class="p">,</span> <span class="n">err</span> <span class="o">:=</span> <span class="n">client</span><span class="o">.</span><span class="n">Do</span><span class="p">(</span><span class="n">req</span><span class="p">);</span>
<span class="k">if</span> <span class="n">err</span> <span class="o">!=</span> <span class="no">nil</span> <span class="p">{</span>
<span class="c">// etc. ...</span>
<span class="p">}</span>
<span class="c">// Complete reading the body</span>
<span class="k">defer</span> <span class="n">resp</span><span class="o">.</span><span class="n">Body</span><span class="o">.</span><span class="n">Close</span><span class="p">()</span>
<span class="c">// Now return the raw VM JSON or another error if the status code is not in 2xx range</span>
<span class="k">if</span> <span class="p">(</span><span class="n">resp</span><span class="o">.</span><span class="n">StatusCode</span> <span class="o">>=</span> <span class="m">200</span><span class="p">)</span> <span class="o">&&</span> <span class="p">(</span><span class="n">resp</span><span class="o">.</span><span class="n">StatusCode</span> <span class="o"><=</span> <span class="m">299</span><span class="p">)</span> <span class="p">{</span>
<span class="n">bodyContent</span><span class="p">,</span> <span class="n">err</span> <span class="o">:=</span> <span class="n">ioutil</span><span class="o">.</span><span class="n">ReadAll</span><span class="p">(</span><span class="n">resp</span><span class="o">.</span><span class="n">Body</span><span class="p">)</span>
<span class="k">if</span> <span class="n">err</span> <span class="o">!=</span> <span class="no">nil</span> <span class="p">{</span>
<span class="c">// etc. ...</span>
<span class="p">}</span>
<span class="c">// etc. ...</span>
<span class="k">return</span> <span class="kt">string</span><span class="p">(</span><span class="n">bodyContent</span><span class="p">),</span> <span class="s">""</span>
<span class="p">}</span>
<span class="c">// etc. ...</span>
<span class="k">return</span> <span class="s">""</span><span class="p">,</span> <span class="n">fmt</span><span class="o">.</span><span class="n">Sprintf</span><span class="p">(</span><span class="s">"{ </span><span class="se">\"</span><span class="s">error</span><span class="se">\"</span><span class="s">: </span><span class="se">\"</span><span class="s">Azure Resource Manager REST API call returned non-OK status code: %d </span><span class="se">\"</span><span class="s"> }"</span><span class="p">,</span> <span class="n">resp</span><span class="o">.</span><span class="n">StatusCode</span><span class="p">)</span>
<span class="p">}</span>
</code></pre></div></div>
<p>This code is super-simple and just retrieves all other servers in the same resource group. It assumes that the resource group and the subscription ID are both set as environment variables before the Go application is started. This should give you an idea of how a server in a resource group could find other servers and get their private IP addresses to automatically configure components such as <strong>keepalived</strong> during an automated post-provisioning step or something similar.</p>
<h2 id="the-instance-metadata-service">The Instance Metadata Service</h2>
<p>The MSI and Azure ARM REST API calls can help retrieve details about peers or perform more complex management operations, including creating or updating resources, depending on the permissions given to a particular MSI. But a VM does not need to go through MSI and the ARM REST APIs just to retrieve details about itself; there is a much simpler approach for that.</p>
<p>For a few months now, Azure has made an in-VM instance metadata service available which can be called from within the VM only, without any additional authentication requirements. The documentation about the instance metadata service shows how to retrieve the data with simple tools such as <code class="language-plaintext highlighter-rouge">curl</code>. Again, the important thing is to include the Metadata header, just as with the MSI token service before.</p>
<p>In this end-to-end sample, I show how to call the in-VM instance metadata service from a GoLang application. Again, I just show the mechanics, not a concrete scenario, but it should equip you to implement scenarios such as the ones I’ve mentioned several times throughout the post. I also plan subsequent blog posts that make use of these mechanics for a real scenario implementation. Below again is an excerpt of the GoLang code that retrieves instance metadata; for the full code please review <a href="https://github.com/mszcool/azureMsiAndInstanceMetadata/blob/master/app/metadata.go">metadata.go</a>:</p>
<div class="language-golang highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">const</span> <span class="n">instanceMetaDataURL</span> <span class="kt">string</span> <span class="o">=</span>
<span class="s">"http://169.254.169.254/metadata/instance?api-version=2017-04-02"</span>
<span class="c">/*GetInstanceMetadata ()
*Calls the Azure in-VM Instance Metadata service and returns the results to the caller*/</span>
<span class="k">func</span> <span class="n">GetInstanceMetadata</span><span class="p">()</span> <span class="kt">string</span> <span class="p">{</span>
<span class="c">// etc. ...</span>
<span class="c">// Build a request to call the instance Azure in-VM metadata service</span>
<span class="n">req</span><span class="p">,</span> <span class="n">err</span> <span class="o">:=</span> <span class="n">http</span><span class="o">.</span><span class="n">NewRequest</span><span class="p">(</span><span class="s">"GET"</span><span class="p">,</span> <span class="n">instanceMetaDataURL</span><span class="p">,</span> <span class="no">nil</span><span class="p">)</span>
<span class="k">if</span> <span class="n">err</span> <span class="o">!=</span> <span class="no">nil</span> <span class="p">{</span>
<span class="c">// etc. ...</span>
<span class="p">}</span>
<span class="c">// Set the required header for the HTTP request</span>
<span class="n">req</span><span class="o">.</span><span class="n">Header</span><span class="o">.</span><span class="n">Add</span><span class="p">(</span><span class="s">"Metadata"</span><span class="p">,</span> <span class="s">"true"</span><span class="p">)</span>
<span class="c">// Create the HTTP client and call the instance metadata service</span>
<span class="n">client</span> <span class="o">:=</span> <span class="o">&</span><span class="n">http</span><span class="o">.</span><span class="n">Client</span><span class="p">{}</span>
<span class="n">resp</span><span class="p">,</span> <span class="n">err</span> <span class="o">:=</span> <span class="n">client</span><span class="o">.</span><span class="n">Do</span><span class="p">(</span><span class="n">req</span><span class="p">);</span>
<span class="k">if</span> <span class="n">err</span> <span class="o">!=</span> <span class="no">nil</span> <span class="p">{</span>
<span class="c">// etc. ...</span>
<span class="p">}</span>
<span class="c">// Complete reading the body</span>
<span class="k">defer</span> <span class="n">resp</span><span class="o">.</span><span class="n">Body</span><span class="o">.</span><span class="n">Close</span><span class="p">()</span>
<span class="k">if</span> <span class="p">(</span><span class="n">resp</span><span class="o">.</span><span class="n">StatusCode</span> <span class="o">>=</span> <span class="m">200</span><span class="p">)</span> <span class="o">&&</span> <span class="p">(</span><span class="n">resp</span><span class="o">.</span><span class="n">StatusCode</span> <span class="o"><=</span> <span class="m">299</span><span class="p">)</span> <span class="p">{</span>
<span class="n">bodyContent</span><span class="p">,</span> <span class="n">err</span> <span class="o">:=</span> <span class="n">ioutil</span><span class="o">.</span><span class="n">ReadAll</span><span class="p">(</span><span class="n">resp</span><span class="o">.</span><span class="n">Body</span><span class="p">)</span>
<span class="c">// etc. ...</span>
<span class="k">return</span> <span class="kt">string</span><span class="p">(</span><span class="n">bodyContent</span><span class="p">)</span>
<span class="p">}</span>
<span class="c">// etc. ...</span>
<span class="k">return</span> <span class="n">fmt</span><span class="o">.</span><span class="n">Sprintf</span><span class="p">(</span><span class="s">"{ </span><span class="se">\"</span><span class="s">error</span><span class="se">\"</span><span class="s">: </span><span class="se">\"</span><span class="s">instance meta data service returned non-OK status code: %q </span><span class="se">\"</span><span class="s"> }"</span><span class="p">,</span> <span class="n">resp</span><span class="o">.</span><span class="n">StatusCode</span><span class="p">)</span>
<span class="p">}</span>
</code></pre></div></div>
<h2 id="the-main-go-application">The Main Go-Application</h2>
<p>Before putting it all together, let’s have a quick look at the main GoLang application so that you get a sense of where those previous pieces of code are called from. The main application is fairly simple: it bootstraps a GoLang HTTP server and configures some routes for the HTTP handlers (full source in <a href="https://github.com/mszcool/azureMsiAndInstanceMetadata/blob/master/app/main.go">main.go</a>).</p>
<div class="language-golang highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">package</span> <span class="n">main</span>
<span class="k">import</span> <span class="p">(</span>
<span class="s">"log"</span>
<span class="s">"net/http"</span>
<span class="s">"github.com/gorilla/mux"</span>
<span class="p">)</span>
<span class="k">var</span> <span class="n">myRoutes</span> <span class="o">=</span> <span class="k">map</span><span class="p">[</span><span class="kt">string</span><span class="p">]</span><span class="k">func</span><span class="p">(</span><span class="n">http</span><span class="o">.</span><span class="n">ResponseWriter</span><span class="p">,</span> <span class="o">*</span><span class="n">http</span><span class="o">.</span><span class="n">Request</span><span class="p">){</span>
<span class="s">"/"</span><span class="o">:</span> <span class="n">Index</span><span class="p">,</span>
<span class="s">"/meta"</span><span class="o">:</span> <span class="n">MyMeta</span><span class="p">,</span>
<span class="s">"/servers"</span><span class="o">:</span> <span class="n">MyPeers</span><span class="p">}</span>
<span class="k">func</span> <span class="n">main</span><span class="p">()</span> <span class="p">{</span>
<span class="n">router</span> <span class="o">:=</span> <span class="n">mux</span><span class="o">.</span><span class="n">NewRouter</span><span class="p">()</span><span class="o">.</span><span class="n">StrictSlash</span><span class="p">(</span><span class="no">true</span><span class="p">);</span>
<span class="k">for</span> <span class="n">key</span><span class="p">,</span> <span class="n">value</span> <span class="o">:=</span> <span class="k">range</span> <span class="n">myRoutes</span> <span class="p">{</span>
<span class="n">router</span><span class="o">.</span><span class="n">HandleFunc</span><span class="p">(</span><span class="n">key</span><span class="p">,</span> <span class="n">value</span><span class="p">);</span>
<span class="p">}</span>
<span class="n">log</span><span class="o">.</span><span class="n">Fatal</span><span class="p">(</span><span class="n">http</span><span class="o">.</span><span class="n">ListenAndServe</span><span class="p">(</span><span class="s">":8080"</span><span class="p">,</span> <span class="n">router</span><span class="p">))</span>
<span class="p">}</span>
</code></pre></div></div>
<p>The <a href="https://github.com/mszcool/azureMsiAndInstanceMetadata/blob/master/app/handlers.go">handlers.go</a> file then contains the functions referred to in the map <code class="language-plaintext highlighter-rouge">myRoutes</code> defined in the source code above. These are the actual functions invoked when the respective route URLs are called:</p>
<div class="language-golang highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">/*Index (w, r)
*Returns with a list of available functions for this simple API*/</span>
<span class="k">func</span> <span class="n">Index</span><span class="p">(</span><span class="n">w</span> <span class="n">http</span><span class="o">.</span><span class="n">ResponseWriter</span><span class="p">,</span> <span class="n">r</span> <span class="o">*</span><span class="n">http</span><span class="o">.</span><span class="n">Request</span><span class="p">)</span> <span class="p">{</span>
<span class="n">fmt</span><span class="o">.</span><span class="n">Fprintln</span><span class="p">(</span><span class="n">w</span><span class="p">,</span> <span class="s">"Welcome!"</span><span class="p">);</span>
<span class="p">}</span>
<span class="c">/*MyMeta (w, r)
*Returns instance metadata retrieved through the in-VM instance metadata service of the VM*/</span>
<span class="k">func</span> <span class="n">MyMeta</span><span class="p">(</span><span class="n">w</span> <span class="n">http</span><span class="o">.</span><span class="n">ResponseWriter</span><span class="p">,</span> <span class="n">r</span> <span class="o">*</span><span class="n">http</span><span class="o">.</span><span class="n">Request</span><span class="p">)</span> <span class="p">{</span>
<span class="n">metaDataJSON</span> <span class="o">:=</span> <span class="n">GetInstanceMetadata</span><span class="p">()</span>
<span class="n">fmt</span><span class="o">.</span><span class="n">Fprintf</span><span class="p">(</span><span class="n">w</span><span class="p">,</span> <span class="n">metaDataJSON</span><span class="p">)</span>
<span class="p">}</span>
<span class="c">/*MyPeers (w, r)
*Uses the MSI to get a token and list all the other servers available in the resource group*/</span>
<span class="k">func</span> <span class="n">MyPeers</span><span class="p">(</span><span class="n">w</span> <span class="n">http</span><span class="o">.</span><span class="n">ResponseWriter</span><span class="p">,</span> <span class="n">r</span> <span class="o">*</span><span class="n">http</span><span class="o">.</span><span class="n">Request</span><span class="p">)</span> <span class="p">{</span>
<span class="n">token</span><span class="p">,</span> <span class="n">err</span> <span class="o">:=</span> <span class="n">GetMsiToken</span><span class="p">(</span><span class="m">50342</span><span class="p">)</span>
<span class="k">if</span> <span class="n">err</span> <span class="o">!=</span> <span class="s">""</span> <span class="p">{</span>
<span class="n">fmt</span><span class="o">.</span><span class="n">Fprint</span><span class="p">(</span><span class="n">w</span><span class="p">,</span> <span class="n">err</span><span class="p">)</span>
<span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
<span class="n">peerVms</span><span class="p">,</span> <span class="n">err</span> <span class="o">:=</span> <span class="n">GetMyPeerVirtualMachines</span><span class="p">(</span><span class="n">token</span><span class="p">)</span>
<span class="k">if</span> <span class="n">err</span> <span class="o">!=</span> <span class="s">""</span> <span class="p">{</span>
<span class="n">fmt</span><span class="o">.</span><span class="n">Fprint</span><span class="p">(</span><span class="n">w</span><span class="p">,</span> <span class="n">err</span><span class="p">)</span>
<span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
<span class="n">fmt</span><span class="o">.</span><span class="n">Fprint</span><span class="p">(</span><span class="n">w</span><span class="p">,</span> <span class="n">peerVms</span><span class="p">)</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>
<h2 id="putting-it-all-together">Putting it all together</h2>
<p>To make exploring this as easy as possible for you, the ARM templates and scripts I provide as part of this solution set up the entire environment automatically. As a recap, here’s the screenshot of the entire environment from Azure Network Watcher again:</p>
<p><img src="https://raw.githubusercontent.com/mszcool/azureMsiAndInstanceMetadata/master/images/Figure03.jpg" alt="Network Watcher Topology" /></p>
<p>The ARM template sets up the network, virtual machines, network security groups etc. To make it simple to explore the responses of the different servers without SSHing into the VMs, I also added a load balancer that exposes the GoLang application on each of the servers via port mappings on its public endpoint. That means you can just perform an HTTP request against the public load balancer with the port that maps to the server whose responses you would like to see. A few examples:</p>
<ul>
<li>http://yourloadbalancerip:10000/meta retrieves the instance metadata of the first server VM through the GoLang REST proxy I’ve explained in this post.</li>
<li>http://yourloadbalancerip:10002/servers uses the Managed Service Identity of the third server in the deployment to list the other servers in the resource group.</li>
<li>http://yourloadbalancerip:10001/ just prints a welcome message from the second server… very useful :)</li>
</ul>
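<p>In case you want to script these checks, the NAT convention is simple: server <em>n</em> (zero-based) is exposed on public port 10000 + <em>n</em> of the load balancer. The following is a minimal Go sketch that builds such probe URLs; the host name is a placeholder for your own load balancer’s public IP or DNS name, and the helper itself is my own illustration, not part of the sample app:</p>

```go
package main

import "fmt"

// serverEndpoint builds the probe URL for a given server and route,
// following the NAT convention of this sample deployment: server n
// (zero-based) is exposed on public port 10000+n of the load balancer.
// The host parameter is a placeholder for your load balancer's public
// IP address or DNS name.
func serverEndpoint(host string, serverIndex int, route string) string {
	return fmt.Sprintf("http://%s:%d/%s", host, 10000+serverIndex, route)
}

func main() {
	// For example: metadata proxy of the first server,
	// peer listing through the MSI of the third server.
	fmt.Println(serverEndpoint("yourloadbalancerip", 0, "meta"))
	fmt.Println(serverEndpoint("yourloadbalancerip", 2, "servers"))
}
```

<p>You can then feed these URLs to <code class="language-plaintext highlighter-rouge">curl</code> or <code class="language-plaintext highlighter-rouge">http.Get</code> to compare the responses of the individual servers.</p>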
<p>Of course, you can also SSH into the jump-box set up as part of this deployment and explore everything from the inside. Essentially, the ARM template deployment automates the setup of the GoLang application as follows:</p>
<ul>
<li>
<p>The ARM template contains a custom script extension that runs on each of the servers to build the GoLang application and generate a shell script that registers the GoLang REST API I’ve explained above as a service daemon.</p>
</li>
<li>
<p>The service daemon script, which is generated as part of the server setup and copied to <code class="language-plaintext highlighter-rouge">/etc/init.d/msiandmeta.sh</code>, sets the subscription ID and the target resource group as environment variables before launching the GoLang application.</p>
</li>
</ul>
<p>To keep the process simple and easy to follow, I use a template for the <code class="language-plaintext highlighter-rouge">init.d</code>-script that gets generated by the custom script extension. This template is also in my GitHub repository, called <a href="https://github.com/mszcool/azureMsiAndInstanceMetadata/blob/master/scripts/template.msiandmeta.sh">template.msiandmeta.sh</a>.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#!/bin/bash</span>
<span class="c">### BEGIN INIT INFO</span>
<span class="c"># Provides: msiandmeta</span>
<span class="c"># Required-Start: $local_fs $network $named $time $syslog</span>
<span class="c"># Required-Stop: $local_fs $network $named $time $syslog</span>
<span class="c"># Default-Start: 2 3 4 5</span>
<span class="c"># Default-Stop: 0 1 6</span>
<span class="c"># Short-Description: GoLang App using Azure MSI and Metadata</span>
<span class="c"># Description: Runs a Go Application which is a web server that demonstrates usage of Managed Service Identities and in-VM Instance Metadata</span>
<span class="c">### END INIT INFO</span>
<span class="nv">appUserName</span><span class="o">=</span>__USER__
<span class="nv">appPath</span><span class="o">=</span>__APP_PATH__
<span class="nv">appName</span><span class="o">=</span>__APP_NAME__
<span class="nv">processIDFilename</span><span class="o">=</span><span class="nv">$appPath</span>/<span class="nv">$appName</span>.pid
<span class="nv">logFilename</span><span class="o">=</span><span class="nv">$appPath</span>/<span class="nv">$appName</span>.log
<span class="c">#</span>
<span class="c"># Starts the simple GO REST service</span>
<span class="c"># </span>
start<span class="o">()</span> <span class="o">{</span>
<span class="c"># Needed by the GO App to access subscription and resource group, correctly</span>
<span class="nb">export </span><span class="nv">SUBSCRIPTION_ID</span><span class="o">=</span><span class="s2">"__SUBSCRIPTION_ID__"</span>
<span class="nb">export </span><span class="nv">RESOURCE_GROUP</span><span class="o">=</span><span class="s2">"__RESOURCE_GROUP__"</span>
<span class="c"># Check if the service runs by looking at it's Process ID and Log Files</span>
<span class="k">if</span> <span class="o">[</span> <span class="nt">-f</span> <span class="nv">$processIDFilename</span> <span class="o">]</span> <span class="o">&&</span> <span class="o">[</span> <span class="s2">"</span><span class="sb">`</span>ps <span class="nt">-p</span> <span class="si">$(</span><span class="nb">cat</span> <span class="nv">$processIDFilename</span><span class="si">)</span> <span class="nt">-o</span> pid<span class="o">=</span><span class="sb">`</span><span class="s2">"</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
</span><span class="nb">echo</span> <span class="s1">'Service already running'</span> <span class="o">></span>&2
<span class="k">return </span>1
<span class="k">fi
</span><span class="nb">echo</span> <span class="s1">'Starting service...'</span> <span class="o">></span>&2
su <span class="nt">-c</span> <span class="s2">"start-stop-daemon -SbmCv -x /usr/bin/nohup -p </span><span class="se">\"</span><span class="nv">$processIDFilename</span><span class="se">\"</span><span class="s2"> -d </span><span class="se">\"</span><span class="nv">$appPath</span><span class="se">\"</span><span class="s2"> -- </span><span class="se">\"</span><span class="s2">./</span><span class="nv">$appName</span><span class="se">\"</span><span class="s2"> > </span><span class="se">\"</span><span class="nv">$logFilename</span><span class="se">\"</span><span class="s2">"</span> <span class="nv">$appUserName</span>
<span class="nb">echo</span> <span class="s1">'Service started'</span> <span class="o">></span>&2
<span class="o">}</span>
<span class="c">#</span>
<span class="c"># Stops the simple GO REST service</span>
<span class="c">#</span>
stop<span class="o">()</span> <span class="o">{</span>
<span class="k">if</span> <span class="o">[</span> <span class="o">!</span> <span class="nt">-f</span> <span class="nv">$processIDFilename</span> <span class="o">]</span> <span class="o">||</span> <span class="o">[</span> <span class="o">!</span> <span class="s2">"</span><span class="sb">`</span>ps <span class="nt">-p</span> <span class="si">$(</span><span class="nb">cat</span> <span class="nv">$processIDFilename</span><span class="si">)</span> <span class="nt">-o</span> pid<span class="o">=</span><span class="sb">`</span><span class="s2">"</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
</span><span class="nb">echo</span> <span class="s2">"Service not running"</span> <span class="o">></span>&2
<span class="k">return </span>1
<span class="k">fi
</span><span class="nb">echo</span> <span class="s2">"Stopping Service..."</span> <span class="o">></span>&2
start-stop-daemon <span class="nt">-K</span> <span class="nt">-p</span> <span class="s2">"</span><span class="nv">$processIDFilename</span><span class="s2">"</span>
<span class="nb">rm</span> <span class="nt">-f</span> <span class="s2">"</span><span class="nv">$processIDFilename</span><span class="s2">"</span>
<span class="nb">echo</span> <span class="s2">"Service stopped!"</span> <span class="o">></span>&2
<span class="o">}</span>
<span class="c">#</span>
<span class="c"># Main script execution</span>
<span class="c">#</span>
<span class="k">case</span> <span class="nv">$1</span> <span class="k">in
</span>start<span class="p">)</span>
start
<span class="p">;;</span>
stop<span class="p">)</span>
stop
<span class="p">;;</span>
restart<span class="p">)</span>
stop
start
<span class="p">;;</span>
<span class="o">*</span><span class="p">)</span>
<span class="nb">echo</span> <span class="s2">"Usage: </span><span class="nv">$0</span><span class="s2"> start|stop|restart"</span>
<span class="k">esac</span>
</code></pre></div></div>
<p>In this script, you can see placeholder tokens such as <code class="language-plaintext highlighter-rouge">__SUBSCRIPTION_ID__</code>. These tokens are replaced at provisioning time by the script that the custom script extension executes on each of the servers. Here is the extension definition from the main ARM template for the entire solution:</p>
<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span>
<span class="dl">"</span><span class="s2">apiVersion</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">[variables('computeAPIVersion')]</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">type</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">Microsoft.Compute/virtualMachines/extensions</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">name</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">[concat(variables('serverVmNamePrefix'),copyIndex(),'/SetupScriptExtension')]</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">location</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">[parameters('location')]</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">copy</span><span class="dl">"</span><span class="p">:</span> <span class="p">{</span>
<span class="dl">"</span><span class="s2">name</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">serverVmSetupExtensionCopy</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">count</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">[parameters('serverCount')]</span><span class="dl">"</span>
<span class="p">},</span>
<span class="dl">"</span><span class="s2">dependsOn</span><span class="dl">"</span><span class="p">:</span> <span class="p">[</span>
<span class="dl">"</span><span class="s2">[resourceId('Microsoft.Compute/virtualMachines',concat(variables('serverVmNamePrefix'), copyIndex()))]</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">[concat('Microsoft.Compute/virtualMachines/', concat(variables('serverVmNamePrefix'),copyIndex()),'/extensions/IdentityExtension')]</span><span class="dl">"</span>
<span class="p">],</span>
<span class="dl">"</span><span class="s2">properties</span><span class="dl">"</span><span class="p">:</span> <span class="p">{</span>
<span class="dl">"</span><span class="s2">publisher</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">Microsoft.Azure.Extensions</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">type</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">CustomScript</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">typeHandlerVersion</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">2.0</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">autoUpgradeMinorVersion</span><span class="dl">"</span><span class="p">:</span> <span class="kc">true</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">settings</span><span class="dl">"</span><span class="p">:</span> <span class="p">{</span>
<span class="dl">"</span><span class="s2">fileUris</span><span class="dl">"</span><span class="p">:</span> <span class="p">[</span>
<span class="dl">"</span><span class="s2">[concat(parameters('_artifactsLocation'),'/scripts/setup_server_node.sh',parameters('_artifactsStorageSasToken'))]</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">[concat(parameters('_artifactsLocation'),'/scripts/template.msiandmeta.sh',parameters('_artifactsStorageSasToken'))]</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">[concat(parameters('_artifactsLocation'),'/app/main.go',parameters('_artifactsStorageSasToken'))]</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">[concat(parameters('_artifactsLocation'),'/app/handlers.go',parameters('_artifactsStorageSasToken'))]</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">[concat(parameters('_artifactsLocation'),'/app/metadata.go',parameters('_artifactsStorageSasToken'))]</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">[concat(parameters('_artifactsLocation'),'/app/msitoken.go',parameters('_artifactsStorageSasToken'))]</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">[concat(parameters('_artifactsLocation'),'/app/mypeers.go',parameters('_artifactsStorageSasToken'))]</span><span class="dl">"</span>
<span class="p">]</span>
<span class="p">},</span>
<span class="dl">"</span><span class="s2">protectedSettings</span><span class="dl">"</span><span class="p">:</span> <span class="p">{</span>
<span class="dl">"</span><span class="s2">commandToExecute</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">[concat('./setup_server_node.sh -a ', parameters('adminUsername'), ' -s ', subscription().subscriptionId, ' -r ', resourceGroup().name)]</span><span class="dl">"</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>
<p>The script that’s invoked through the custom script extension above is also in my GitHub repository and generates the final <code class="language-plaintext highlighter-rouge">init.d</code>-script for the service registration based on its input parameters. These input parameters are the subscription ID, the resource group name and the user under which the daemon should run. Here’s an excerpt of <a href="https://github.com/mszcool/azureMsiAndInstanceMetadata/blob/master/scripts/setup_server_node.sh">setup_server_node.sh</a> that builds the GoLang app and generates the target <code class="language-plaintext highlighter-rouge">init.d</code>-script:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#</span>
<span class="c"># Next compile the Go Application</span>
<span class="c">#</span>
<span class="nb">mkdir</span> ./app
<span class="nb">mv</span> <span class="k">*</span>.go ./app
<span class="nb">export </span><span class="nv">PATH</span><span class="o">=</span><span class="s2">"</span><span class="nv">$PATH</span><span class="s2">:/usr/local/go/bin"</span>
<span class="nb">export </span><span class="nv">GOPATH</span><span class="o">=</span><span class="s2">"</span><span class="sb">`</span><span class="nb">realpath</span> ./<span class="sb">`</span><span class="s2">/app"</span>
<span class="nb">export </span><span class="nv">GOBIN</span><span class="o">=</span><span class="s2">"</span><span class="nv">$GOPATH</span><span class="s2">/bin"</span>
go get ./app
go build <span class="nt">-o</span> msitests ./app
<span class="nb">sudo mkdir</span> /usr/local/msiandmeta
<span class="nb">sudo cp</span> ./msitests /usr/local/msiandmeta
<span class="nb">sudo chown</span> <span class="nt">-R</span> <span class="nv">$adminName</span>:<span class="nv">$adminName</span> /usr/local/msiandmeta
<span class="c">#</span>
<span class="c"># Generate the init.d script from the template by replacing the placeholder tokens</span>
<span class="c">#</span>
<span class="nb">cat</span> ./template.msiandmeta.sh <span class="se">\</span>
| <span class="nb">awk</span> <span class="nt">-v</span> <span class="nv">USER</span><span class="o">=</span><span class="s2">"</span><span class="nv">$adminName</span><span class="s2">"</span> <span class="s1">'{gsub("__USER__", USER)}1'</span> <span class="se">\</span>
| <span class="nb">awk</span> <span class="nt">-v</span> <span class="nv">APP_NAME</span><span class="o">=</span><span class="s2">"msitests"</span> <span class="s1">'{gsub("__APP_NAME__", APP_NAME)}1'</span> <span class="se">\</span>
| <span class="nb">awk</span> <span class="nt">-v</span> <span class="nv">APP_PATH</span><span class="o">=</span><span class="s2">"/usr/local/msiandmeta"</span> <span class="s1">'{gsub("__APP_PATH__", APP_PATH)}1'</span> <span class="se">\</span>
| <span class="nb">awk</span> <span class="nt">-v</span> <span class="nv">SUBS</span><span class="o">=</span><span class="s2">"</span><span class="nv">$subscriptionId</span><span class="s2">"</span> <span class="s1">'{gsub("__SUBSCRIPTION_ID__", SUBS)}1'</span> <span class="se">\</span>
| <span class="nb">awk</span> <span class="nt">-v</span> <span class="nv">RGROUP</span><span class="o">=</span><span class="s2">"</span><span class="nv">$resGroup</span><span class="s2">"</span> <span class="s1">'{gsub("__RESOURCE_GROUP__", RGROUP)}1'</span> <span class="se">\</span>
<span class="o">>></span> msiandmeta.sh
<span class="c">#</span>
<span class="c"># Now make sure the script is handled by the system for starting/stopping the service</span>
<span class="c">#</span>
<span class="nb">sudo cp</span> ./msiandmeta.sh /etc/init.d
<span class="nb">sudo chmod</span> +x /etc/init.d/msiandmeta.sh
<span class="nb">sudo </span>update-rc.d msiandmeta.sh defaults
</code></pre></div></div>
<p>With that, the GoLang application that accesses the ARM REST APIs through MSI and the instance metadata service runs automatically and always finds the correct subscription ID and resource group name in its environment variables, since both are set by the <code class="language-plaintext highlighter-rouge">init.d</code>-script generated from the template!</p>
<h2 id="testing-the-environment">Testing the environment</h2>
<p>Once you have deployed the ARM template into your subscription, you should be able to call the GoLang application I’ve explained above, which demonstrates the mechanics of the instance metadata service and the Managed Service Identity in action, through the load balancer using the NAT ports for each server. I mapped each server through a port to the outside world purely for demo purposes, to make it as easy as possible for you to examine the different responses of the servers without SSHing into any machine. The following screenshot shows this in action by comparing different responses from different servers.</p>
<p><img src="https://raw.githubusercontent.com/mszcool/azureMsiAndInstanceMetadata/master/images/Figure04.jpg" alt="Running the app in action" /></p>
<p>Of course, in the real world you would not expose these endpoints directly, but rather use them from within your applications! For this sample, though, it hopefully helps you ramp up on the details quickly.</p>
<h2 id="final-words">Final Words</h2>
<p>Managed Service Identities and the in-VM Instance Metadata Service are extremely helpful, and these capabilities were long overdue. Both services allow you to implement complex scenarios such as:</p>
<ul>
<li>
<p>Implementing licensing and IP-protection strategies based on the in-VM instance metadata service.</p>
</li>
<li>
<p>Scripting automated configuration of clustered environments by calling Azure Resource Manager REST APIs from within virtual machines without having to manage secrets for service principals.</p>
</li>
<li>
<p>Many, many more similar scenarios.</p>
</li>
</ul>
<p>With both services available on Azure, my <a href="http://blog.mszcool.com/index.php/2016/08/azure-virtual-machine-a-solution-for-instance-metadata-in-linux-and-windows-vms/">previous blog-post</a> becomes obsolete for this specific scenario, although there are of course still many reasons for leveraging service principals in other scenarios (so it might still be a good source for learning details about service principals in Azure AD in general). The specific scenario outlined in both that previous post and this one, however, can be implemented way better with Managed Service Identities and the in-VM Instance Metadata Service combined!</p>
<p>I hope you enjoyed reading this and that it was valuable for you. We leveraged these mechanics in a very similar way for a concrete scenario with one of my customers… my plan is to write about that scenario in one of my next blog posts.</p>
<p>Stay Tuned!</p>Mario SzpusztaSAP HANA and Azure Active Directory Authentication2017-06-29T11:00:00+00:002017-06-29T11:00:00+00:00http://blog.mszcool.com/wordpressarchive/2017/06/29/sap-hana-and-azure-active-directory<p>At last SAP Sapphire (May 2017) we announced several improvements and also new offerings for SAP on Azure, as you can read <a href="https://azure.microsoft.com/en-us/blog/the-best-public-cloud-for-sap-workloads-gets-more-powerful/">here</a>. The most prominent ones are more HANA Certifications as well as SAP Cloud Platform on Azure (as you can read from <a href="http://blog.mszcool.com/index.php/2017/05/cloud-foundry-sap-cloud-platform-on-azure/">my last blog post specifically focused on SAP CP</a>).</p>
<p>One of the less discussed and visible announcements, despite being mentioned, is the broad support of Enterprise-Grade Single-Sign-On across many SAP technologies with <a href="https://azuremarketplace.microsoft.com/en-us/marketplace/apps/category/azure-active-directory-apps?page=1&search=sap">Azure Active Directory</a>. This post is solely about one of these offerings - <a href="https://azuremarketplace.microsoft.com/en-us/marketplace/apps/aad.saphanadb?tab=Overview">HANA integration with Azure AD</a>.</p>
<!--more-->
<h2 id="pre-requisites-for-hana--aad-single-sign-on">Pre-Requisites for HANA / AAD Single-Sign-On</h2>
<p>An integration of HANA with Azure AD (AAD) as the primary identity provider works for HANA instances running anywhere (on-premises, any public IaaS, <a href="https://azure.microsoft.com/en-us/services/virtual-machines/sap-hana/">Azure VMs or SAP Large Instances in Azure</a>). The only requirement is that the end user accessing apps (Web Administration, XSA, Fiori) running inside of the HANA instance has access to the Internet to be able to sign in against Azure AD.</p>
<p>For this post, I start with an SAP HANA Instance that runs inside of an Azure Virtual Machine. You can deploy such HANA instances <a href="https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-get-started#manual-installation-of-sap-hana">either manually</a> or through the <a href="https://blogs.sap.com/2017/05/29/sap-cloud-appliance-library-now-supports-azure-resource-manager/">SAP Cloud Appliance Library</a>.</p>
<p>In addition to just running HANA, I’ve also <a href="https://docs.microsoft.com/en-us/azure/virtual-machines/linux/use-remote-desktop">installed XRDP on the Linux VM in Azure</a> as well as <a href="https://help.sap.com/viewer/a2a49126a5c546a9864aae22c05c3d0e/2.0.01/en-US">SAP HANA Studio</a> inside of the virtual machine, to be able to perform the necessary configurations in both the XSA Administration Web Interface and HANA Studio as needed.</p>
<p>Finally, you need to have access to an Azure Active Directory tenant for which you are the Global Administrator or have the appropriate permissions to add configurations for <a href="https://docs.microsoft.com/en-us/azure/active-directory/active-directory-enterprise-apps-manage-sso">Enterprise Applications to that Azure AD Tenant</a>!</p>
<p>The following figure gives an overview of the HANA VM environment I used for this blog-post. The important part is the <a href="https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-nsg">Azure Network Security Group</a>, which opens up the HTTP and HTTPS ports for HANA; these follow the patterns 80xx and 43xx for regular HTTP and HTTPS, respectively.</p>
<p><img src="https://raw.githubusercontent.com/mszcool/saphanasso/master/images/figure01.png" alt="HANA VM in Azure Overview" /></p>
<h2 id="azure-active-directory-marketplace-instead-of-manual-configuration">Azure Active Directory Marketplace instead of manual configuration</h2>
<p>SAP HANA is configured through the <a href="https://azuremarketplace.microsoft.com/en-us/marketplace/apps/aad.saphanadb?tab=Overview">Azure Active Directory Marketplace</a> rather than the <a href="https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-integrating-applications">regular App Registration model</a> followed for custom-developed apps in Azure AD. There are several reasons for this; here are the most important ones:</p>
<ul>
<li>
<p><strong>SAML-P is required.</strong></p>
<p>Most SAP assets follow SAML-P for web-based single-sign-on. While setting this up manually with advanced options is possible in Azure AD, it requires the Azure AD Premium Edition. For offerings from the Azure AD Marketplace (Gallery), the Standard Edition is sufficient. While that’s not the primary reason, it’s a neat one!</p>
</li>
<li>
<p><strong>Entity Identifier Formats for SAP Assets.</strong></p>
<p>When registering an application in Azure AD through the regular App Registration model, Application IDs (Entity IDs in federation metadata documents) are required to be URNs with a protocol prefix (xyz://…). SAP applications use Entity IDs with arbitrary strings not following any specific format, hence a regular app registration does not work. Again, this challenge can be solved through the Enterprise App Integration in AAD Premium, but when taking the pre-configured offering from the marketplace, you don’t need to take care of such things!</p>
</li>
<li>
<p><strong>Name ID formats in issued SAML Tokens.</strong></p>
<p>Users are typically identified using Name ID assertions (claims). In requests, Azure AD accepts <code class="language-plaintext highlighter-rouge">nameid-format:persistent</code>, <code class="language-plaintext highlighter-rouge">nameid-format:emailAddress</code>, <code class="language-plaintext highlighter-rouge">nameid-format:unspecified</code> and <code class="language-plaintext highlighter-rouge">nameid-format:transient</code>. All of these are documented <a href="https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-single-sign-on-protocol-reference">here</a> in detail. Now, the challenge here is:</p>
<ul>
<li>HANA sends requests with <code class="language-plaintext highlighter-rouge">nameid-format:unspecified</code>.</li>
<li>This leads to Azure AD selecting the format for uniquely identifying a user.</li>
<li>But HANA expects the Name ID claim to contain the plain user name (johndoe instead of domain\johndoe or johndoe@domain.com).</li>
<li>This leads to a mismatch and HANA not detecting the user as a valid user even if the user exists inside of the HANA system!</li>
</ul>
<p>The Azure AD Marketplace item is configured and on-boarded in a way that resolves this technical challenge.</p>
</li>
<li>
<p><strong>Pre-configured claims</strong>.</p>
<p>While that’s not a need for HANA specifically, for most of the other SAP-related offerings the marketplace-based integration pre-configures the SSO configuration with the claims/assertions typically required by the respective SAP technology.</p>
</li>
</ul>
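<p>To make the Name ID mismatch concrete: HANA expects the bare user name, while Azure AD would by default send the full user principal name. The <code class="language-plaintext highlighter-rouge">ExtractMailPrefix()</code> claim transformation that the marketplace setup relies on effectively behaves like the following Go helper. This code is purely illustrative and my own; the real transformation happens inside Azure AD when it issues the SAML token:</p>

```go
package main

import (
	"fmt"
	"strings"
)

// plainUserName mimics what the ExtractMailPrefix() transformation in
// Azure AD does to a Name ID value: it strips the domain part so that
// HANA receives the bare user name it expects.
func plainUserName(nameID string) string {
	// johndoe@domain.com -> johndoe
	if i := strings.Index(nameID, "@"); i >= 0 {
		return nameID[:i]
	}
	// domain\johndoe -> johndoe
	if i := strings.LastIndex(nameID, `\`); i >= 0 {
		return nameID[i+1:]
	}
	return nameID
}

func main() {
	fmt.Println(plainUserName("johndoe@domain.com"))
	fmt.Println(plainUserName(`domain\johndoe`))
}
```

<p>Both calls print <code class="language-plaintext highlighter-rouge">johndoe</code>, which is exactly the value HANA matches against its internal user store.</p>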
<h2 id="step-1---register-hana-in-azure-active-directory">Step #1 - Register HANA in Azure Active Directory</h2>
<p>Assuming you have HANA running in a VM as I explained earlier in this post, the first step to configure Azure AD as an Identity Provider for HANA is to add HANA as an <a href="https://docs.microsoft.com/en-us/azure/active-directory/active-directory-enterprise-apps-manage-sso">Enterprise Application to your Azure AD Tenant</a>. You need to select the offer as shown in the screen shot below:</p>
<p><img src="https://raw.githubusercontent.com/mszcool/saphanasso/master/images/figure02.png" alt="Selecting the HANA AAD Gallery Offering" /></p>
<p>Within the first step, you just need to specify a display name for the app as shown in the Azure AD management portal; the details are configured in the subsequent steps. Indeed, you can get more detailed instructions directly from within the Azure AD management portal: just open up the <code class="language-plaintext highlighter-rouge">Single Sign-On</code>-section, select <code class="language-plaintext highlighter-rouge">SAML-based Sign-On</code> in the very top dropdown box, then scroll to the bottom and click the button for detailed demo instructions.</p>
<p><img src="https://raw.githubusercontent.com/mszcool/saphanasso/master/images/figure03.png" alt="Detailed Demo instructions for SAML-P" /></p>
<p>If you fill out the SAML-P sign-in settings according to these instructions, you’re definitely on a good path. So, let’s just walk through the settings so you get an example of what you need to enter there:</p>
<ul>
<li>
<p><strong>Identifier</strong>: should be the Entity ID which HANA uses in its federation metadata. It needs to be unique across all enterprise apps you have configured. I’ll show you later in this post where to find it; essentially you need to navigate to HANA’s federation metadata in the XSA Administration Web Interface.</p>
</li>
<li>
<p><strong>Reply URL</strong>: use the XSA SAML login endpoint of your HANA system for this setting. For my Azure VM, it had a public IP address bound to the Azure DNS name <code class="language-plaintext highlighter-rouge">marioszpsaphanaaaddemo.westeurope.cloudapp.azure.com</code>, therefore I had to configure <code class="language-plaintext highlighter-rouge">https://marioszpsaphanaaaddemo.westeurope.cloudapp.azure.com:4300/sap/hana/xs/saml/login.xscfunc</code> for it.</p>
</li>
<li>
<p><strong>User Identifier</strong>: this is one of the most important settings and one you must not forget. The default, <code class="language-plaintext highlighter-rouge">user.userprincipalname</code>, will <strong>NOT</strong> work with HANA. You need to select the function <code class="language-plaintext highlighter-rouge">ExtractMailPrefix()</code> in the dropdown and pass <code class="language-plaintext highlighter-rouge">user.userprincipalname</code> as the <code class="language-plaintext highlighter-rouge">Mail</code> parameter of this function.</p>
</li>
</ul>
<p><img src="https://raw.githubusercontent.com/mszcool/saphanasso/master/images/figure04.png" alt="Detailed Settings Visualized" /></p>
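<p>To make the effect of that setting tangible: conceptually, <code class="language-plaintext highlighter-rouge">ExtractMailPrefix()</code> takes the User Principal Name and strips everything from the <code class="language-plaintext highlighter-rouge">@</code> onwards. A little shell sketch with a made-up UPN - the real transformation happens inside Azure AD when the token is issued, this just illustrates the resulting NameID format:</p>

```shell
# Illustration only: mimic what ExtractMailPrefix() does to a UPN.
# 'jane.doe@contoso.com' is a made-up example user.
upn="jane.doe@contoso.com"
nameid="${upn%%@*}"   # drop the '@' and everything after it
echo "$nameid"        # prints: jane.doe
```

<p>So the NameID that HANA receives is the plain mail prefix, which is exactly the format HANA expects when mapping the token to a database user later on.</p>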
<p><strong>Super-Important:</strong> Don’t ignore the <em>information</em>-message shown right below the certificate list and the link for getting the Federation Metadata. You need to check the box <code class="language-plaintext highlighter-rouge">Make new certificate active</code> so that the signatures will be correctly applied as part of the sign-in process. Otherwise, HANA won’t be able to verify the signature.</p>
<h2 id="step-2---download-the-federation-metadata-from-azure-ad">Step #2 - Download the Federation Metadata from Azure AD</h2>
<p>After you have configured all settings, you need to save the SAML configuration before moving on. Once saved, you need to download the Federation Metadata for configuring SSO with Azure AD within the HANA administration interfaces. The previous screen-shot highlights the download-button in the lower-right corner.</p>
<p>Downloading the federation metadata document is the easiest way to get the required certificate and the name / entity identifier configured in your target HANA system.</p>
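<p>If you want to double-check the downloaded document from a shell, the value HANA cares about is the <code class="language-plaintext highlighter-rouge">entityID</code> attribute of the root element. A quick sketch - the XML below is a minimal, made-up stand-in for a real federation metadata document:</p>

```shell
# Create a minimal, made-up stand-in for a downloaded federation metadata file.
cat > federationmetadata.xml <<'EOF'
<EntityDescriptor entityID="https://sts.windows.net/your-tenant-id/" xmlns="urn:oasis:names:tc:SAML:2.0:metadata"></EntityDescriptor>
EOF
# Pull out the entityID attribute value.
grep -o 'entityID="[^"]*"' federationmetadata.xml | cut -d'"' -f2
```

<p>Against a real Azure AD metadata document, this would print your tenant's entity identifier; the file content above is purely illustrative.</p>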
<h2 id="step-3---login-to-your-hana-xsa-web-console-and-configure-a-saml-idp">Step #3 - Login to your HANA XSA Web Console and Configure a SAML IdP</h2>
<p>We have done all required configurations on the Azure AD side for now. As a next step, we need to enable SAML-P Authentication within HANA and configure Azure AD as a valid identity provider for your HANA System. For this purpose, open up the XSA web console of your HANA System by browsing to the respective HTTPS-endpoint. For my Azure VM, that was:</p>
<p><code class="language-plaintext highlighter-rouge">https://marioszpsaphanaaaddemo.westeurope.cloudapp.azure.com:4300/sap/hana/xs/admin</code></p>
<p>Of course, HANA will still redirect you to a Forms-based login-page because we have not configured SAML-P yet. So, sign in with your current XSA Administrator Account in the system to start the configuration.</p>
<p><strong>Tip:</strong> take note of the Forms-Authentication URL. If you break something in your SAML-P configuration later down the road, you can always use it to sign back in via Forms Authentication to fix the configuration! The respective URL to take note of, is: <code class="language-plaintext highlighter-rouge">https://marioszpsaphanaaaddemo.westeurope.cloudapp.azure.com:4300/sap/hana/xs/formLogin/login.html?x-sap-origin-location=%2Fsap%2Fhana%2Fxs%2Fadmin%2F</code>.</p>
<p>Now the previously downloaded federation metadata document from Step #2 above becomes relevant. In the XSA Web Interface, you need to navigate to SAML Identity Providers and from there, click the “+”-button at the bottom of the screen. In the form that opens, just paste the previously downloaded federation metadata document into the large text box at the top of the screen. This does most of the remaining work for you! But you still need to fix a few fields.</p>
<ul>
<li>The name in the General Data must not contain any special characters - and no spaces, either.</li>
<li>The SSO URL is not filled by default since we don’t have it in the AAD metadata, yet. So you need to manually fill it as per the guidance from within the Azure AD portal shown above in this post.</li>
</ul>
<p><img src="https://raw.githubusercontent.com/mszcool/saphanasso/master/images/figure05.png" alt="HANA SAML IdP Data Filled" /></p>
<p>Since we are in the HANA XSA tool, it’s the right point in time to show you where I retrieved the information required earlier in the Azure AD portal when registering HANA as an App there - the <strong>Identifier</strong> as shown in the last screen shot from the Azure AD console above.</p>
<p>Indeed, these details are retrieved from the <code class="language-plaintext highlighter-rouge">SAML Service Provider</code> configuration section as highlighted in the screen shot below. A quick side-note: this is one of the rare cases where I constantly needed to switch to Microsoft Edge as a browser instead of Google Chrome. For some reason, I was unable to open the metadata tab in Chrome, while in Edge it typically opens fine and shows the entire Federation Metadata document for this HANA instance. From there, you can also grab the identifier required for Azure AD since this is the Entity ID inside of the Federation Metadata document.</p>
<p><img src="https://raw.githubusercontent.com/mszcool/saphanasso/master/images/figure06.png" alt="HANA SAML Federation Metadata" /></p>
<p>Ok, we have configured Azure AD as a valid IdP for this HANA system. But we did not really enable SAML-based authentication for anything, yet. This happens at the level of applications managed by the XS-environment inside of HANA (that’s how I understand it with my limited HANA knowledge:)). You can enable SAML-P on a per-package basis inside of XSA, meaning it’s fully up to you to decide for which components you enable SAML-P and for which you stay with other authentication methods. Below is a screen shot that enables SAML-P for an SAP-provided package. But a word of warning: if you enable SAML-P for those packages, this might also have an impact on other systems interacting with them - they should probably also support SAML-P as a means of authentication, especially if you disable other options entirely!</p>
<p><img src="https://raw.githubusercontent.com/mszcool/saphanasso/master/images/figure07.png" alt="HANA SAML Federation Metadata" /></p>
<p>By enabling the <code class="language-plaintext highlighter-rouge">sap</code>-package for SAML-P, we get SSO based on Azure AD for a range of built-in functions including the XSA web interface, but also Fiori-interfaces hosted inside of the HANA instance for which you configured the setting.</p>
<h2 id="step-4---troubleshooting">Step #4 - Troubleshooting</h2>
<p>So far so good - seems we could try it out, right? So, let’s log out, open an <code class="language-plaintext highlighter-rouge">In-Private</code>-Browsing session with your browser of choice and navigate to your HANA XSA Administration application again. You will see that this time, by default, you get redirected to Azure AD for signing into the HANA System. Let’s see what happens when trying to login with a valid user from the Azure AD tenant.</p>
<p><img src="https://raw.githubusercontent.com/mszcool/saphanasso/master/images/figure08.png" alt="HANA SAML Federation Metadata" /></p>
<p>Seems the login was not so successful. The big question is why. This is where we need access to the HANA system with HANA Studio and access to the system’s trace log. For my configuration, I installed XRDP on the Linux machine and ran HANA Studio directly on that machine. So, the best way to start is connecting to the machine, starting HANA Studio and navigating to the system configuration settings.</p>
<p><img src="https://raw.githubusercontent.com/mszcool/saphanasso/master/images/figure09.png" alt="HANA Diagnosis for Sign-In Failing" /></p>
<p>The error-message is kind of confusing and misleading, though. We spent some time when onboarding HANA into the AAD Marketplace to figure out what was going wrong. So much ahead - Fiddler traces and issues with certificates were not the problem! The resolution is to be found in an entirely different section. Nevertheless, I wanted to show this here, because it really is extremely valuable to understand how to troubleshoot when things are not going well.</p>
<p>The main reason for this failure is a mismatch in timeout configurations. The signatures are created based on some time stamps. One of those timestamps is used for ensuring that authentication messages are valid only for a given amount of time. That time is set to a very low limit in HANA by default, resulting in this quite misleading error message.</p>
<p>Anyways, to fix it, you need to stay in the HANA System Level Properties within HANA Studio and make some adjustments. Within the system properties of the Configuration Tab, just filter the settings by SAML and adjust the <code class="language-plaintext highlighter-rouge">assertion_timeout</code> setting. It’s impossible to complete an entire, user-driven sign-in process within 10 seconds: think about it - the user navigates to a HANA App, gets redirected to Azure AD, needs to enter her/his username/password, then eventually there’s Multi-Factor Authentication involved, and finally, upon success, the user gets redirected back to the respective HANA application. So, in my case, I adjusted it to two minutes.</p>
<p><img src="https://raw.githubusercontent.com/mszcool/saphanasso/master/images/figure10.png" alt="HANA Diagnosis for Sign-In Failing" /></p>
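<p>If you prefer scripting over clicking through HANA Studio, the equivalent change can be expressed as a SQL statement (runnable e.g. through <code class="language-plaintext highlighter-rouge">hdbsql</code> or any SQL console). The snippet below only builds and prints the statement; the ini-file name (<code class="language-plaintext highlighter-rouge">xsengine.ini</code>), the section and the layer are assumptions you should verify against your own system’s configuration view first:</p>

```shell
# Sketch only: the ini-file ('xsengine.ini'), section ('saml') and layer
# ('SYSTEM') are assumptions -- verify them in your system's configuration view.
TIMEOUT_SECONDS=120
SQL="ALTER SYSTEM ALTER CONFIGURATION ('xsengine.ini', 'SYSTEM') SET ('saml', 'assertion_timeout') = '${TIMEOUT_SECONDS}' WITH RECONFIGURE"
# Print the statement; you would then run it via hdbsql or a SQL console.
echo "$SQL"
```
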
<p>Ok, time for the next attempt. Now, if you still get the same error message about not being able to validate the signature, you probably forgot something earlier in the game. Make sure that when configuring HANA in Azure AD, you made the certificate active by checking the <code class="language-plaintext highlighter-rouge">Make new certificate active</code> checkbox I’ve mentioned earlier… below is the same screen shot with the important informational message, again!</p>
<p><img src="https://raw.githubusercontent.com/mszcool/saphanasso/master/images/figure04.png" alt="Don't forget Make new certificate active" /></p>
<h2 id="step-4---configuring-a-hana-database-user">Step #5 - Configuring a HANA Database User</h2>
<p>If you’ve followed all the steps so far, the Sign-In with a User from Azure AD will still not succeed. Again, the trace logs from HANA are giving more insights on what’s going on and why the sign-in is failing this time.</p>
<p><img src="https://raw.githubusercontent.com/mszcool/saphanasso/master/images/figure11.png" alt="Trace about User does not exist in HANA" /></p>
<p>HANA is complaining that it does not know the user. This is a fair complaint since Azure AD (or any other SAML Identity Provider) takes care of authentication only. Authorization needs to happen in the actual target system (the service provider, also often called the relying party application). To be able to authorize, the user needs to be known to the service provider. That means at least some sort of user entity needs to be configured.</p>
<ul>
<li>
<p>With HANA, that means you essentially create a database user and enable Single Sign-On for this database user.</p>
</li>
<li>
<p>HANA then uses the NameID-assertion from the resulting SAML token to map the user authenticated by the IdP - Azure AD in this case - to a HANA database user. This is why the format of the NameID in the issued token is so important and why we had to configure the <code class="language-plaintext highlighter-rouge">ExtractMailPrefix()</code>-strategy in the Azure AD portal as part of Step #1.</p>
</li>
</ul>
<p>So, to make all of this happen and finally get to a successful login, we need to create a user in HANA, enable SSO and make sure that user has the appropriate permissions in HANA to e.g. access Fiori Apps or the XSA Administration Web Interface. This happens in HANA Studio, again.</p>
<p><img src="https://raw.githubusercontent.com/mszcool/saphanasso/master/images/figure12.png" alt="Detailed Settings Visualized" /></p>
<p><strong>Super-Important:</strong> The left-most part of the figure above visualizes the mapping from the SAML-Token’s perspective. It defines the IdP as per the previous configurations and the user as it will end up being set in the NameID-assertion of the resulting SAML-token. With Azure AD users, these will mostly be lower-case - and case matters here! Make sure you enter the value lower-case, otherwise you’ll get a weird message about dynamic user creation failing!</p>
<p>The next step is to make sure that the user has the appropriate permissions. As a non-HANA-expert, I just gave the user all permissions to make sure I can show success as part of this demo. Of course, that’s not a best practice - you should grant only those permissions appropriate for your use cases.</p>
<p><img src="https://raw.githubusercontent.com/mszcool/saphanasso/master/images/figure13.png" alt="Detailed Settings Visualized" /></p>
<h2 id="step-5---a-successful-login">Step #6 - A Successful Login</h2>
<p>Finally, we made it! If you have completed all the steps above, you can start using HANA with full Single-Sign-On across applications also integrated with your Azure AD tenant. For example, the screen shot below shows my <code class="language-plaintext highlighter-rouge">globaladmin</code>-user account signing into the HANA Test VM I used, navigating to the HANA XSA Administration web console and then navigating from there to Office 365 Outlook… It all works like a charm without me being required to enter credentials again!</p>
<p><img src="https://raw.githubusercontent.com/mszcool/saphanasso/master/images/figure14.png" alt="Detailed Settings Visualized" /></p>
<p>That is kind of cool, isn’t it? It even works when navigating back and forth between those environments. And this scenario works for any application that runs inside of the XS-environment.</p>
<p>But for now, at least for enterprise administrators, it means they can secure very important parts of their HANA systems with a proven identity platform - Azure AD. They can even configure Multi-Factor Authentication in Azure AD and thus protect HANA environments even further, alongside other applications using the same Azure AD tenant as an Identity Provider.</p>
<h2 id="final-words">Final Words</h2>
<p>Finally, this is the simplest possible way of integrating Single-Sign-On with SAP applications using Azure AD only. SAP Netweaver is similarly simple, as documented <a href="https://docs.microsoft.com/en-us/azure/active-directory/active-directory-saas-sap-netweaver-tutorial">here</a>. There’s an even more detailed tutorial available for the Fiori Launch Pad on Netweaver, based on these efforts, on the <a href="https://blogs.sap.com/2017/02/20/your-s4hana-environment-part-7-fiori-launchpad-saml-single-sing-on-with-azure-ad/">SAP blogs here</a>.</p>
<p>The tip of the iceberg is the most advanced SSO we’ve implemented with <a href="https://docs.microsoft.com/en-us/azure/active-directory/active-directory-saas-sap-hana-cloud-platform-identity-authentication-tutorial">SAP Cloud Platform Identity Authentication Services</a>. This gives you centralized SSO-management through both companies’ Identity-as-a-Service offerings (Azure AD, SAP Cloud Platform Identity Services). As part of that offering, SAP even includes automated identity provisioning, which removes the need for manually creating users as we did above.</p>
<p>I think, over the past year, we achieved a lot with the partnership between SAP and Microsoft. But if you ask for my personal opinion, I think the most significant achievements are HANA on Azure (of course, right:)), SAP Cloud Platform on Azure and … the Single-Sign-On Offerings across all sorts of SAP technologies and services!</p>
<p>I hope you found this super-interesting. It is most probably my last blog post as a member of the SAP Global Alliance Team from the technical side since I am moving forward to the customer-facing part of Azure Engineering (Azure Customer Advisory Team) as an engineer. Still, I am part of the family and will engage as needed out of my new role with SAP, that’s for sure!</p>
<p>Mario Szpuszta</p>
<h1 id="sap-cloudplatform-on-azure---beta">SAP Cloudplatform on Azure - Beta (2017-05-18, <a href="http://blog.mszcool.com/wordpressarchive/2017/05/18/sap-cloudplatform-on-azure-beta">permalink</a>)</h1>
<p><strong>The main project that kept me busy for the past 7 months</strong> was the work with SAP Labs Israel to get a beta version of <a href="https://azure.microsoft.com/en-us/blog/the-best-public-cloud-for-sap-workloads-gets-more-powerful/">SAP Cloud Platform running on Azure</a>. During SAP Sapphire, SCP on Azure was announced together with other public cloud vendors as part of SAP’s multi-cloud strategy (see <a href="http://news.sap.com/sapphire-now-sap-cloud-platform-positive-sum-game/">here</a> and <a href="https://blogs.sap.com/2017/05/16/a-new-seamless-sap-cloud-platform-experience/">here</a>). It is currently available as public beta for anyone to try on Azure. Here I dig into some first basics…</p>
<!--more-->
<p><strong><em>Note:</em></strong> as outlined in my previous blog post about <a href="http://blog.mszcool.com/index.php/2017/05/developed-an-sap-hana-express-azure-quick-start-template/">HANA Express</a>, please note that this is my personal blog which reflects my personal thoughts rather than Microsoft’s official opinion. For official announcements and statements, please refer to the <a href="https://azure.microsoft.com/en-us/blog/the-best-public-cloud-for-sap-workloads-gets-more-powerful/">Microsoft Azure Blog</a>.</p>
<h2 id="cloud-foundry-on-azure-and-scp-on-azure">Cloud Foundry on Azure and SCP on Azure</h2>
<p>Before digging into the details of SAP Cloud Platform (short: SCP), just a quick reminder about how we support Cloud Foundry as a platform on Azure in general.</p>
<ul>
<li>First, so far we did support <a href="https://azure.microsoft.com/en-us/blog/cloud-foundry-on-azure-support-for-diego-and-open-source-service-brokers/">Open Source Cloud Foundry</a> and <a href="https://pivotal.io/microsoft">Pivotal Cloud Foundry</a> on Azure.</li>
<li>For these efforts, we’ve developed a <a href="https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/">Bosh CPI for Azure</a> - all full open source.</li>
<li>In addition we have <a href="https://github.com/Azure/azure-quickstart-templates/tree/master/bosh-setup">quick start templates for getting OSS Cloud Foundry</a> set-up on Azure.</li>
</ul>
<p>With SCP, we now support the second commercially available Cloud Foundry flavor on Azure, next to Pivotal Cloud Foundry. But what <strong>makes SCP on Azure really different</strong> from the offers we had so far is that it is <strong>a fully managed PaaS</strong>… managed by SAP. So using SCP on Azure is exactly the same as on any other public cloud platform, which gives you full portability in a multi-cloud world. We do have a few differentiators in my opinion, though:</p>
<ul>
<li>Full <a href="https://docs.microsoft.com/de-de/azure/active-directory/active-directory-saas-sap-hana-cloud-platform-tutorial">Azure Active Directory integration for Single-Sign-On for SAP Cloud Platform</a> which enables you to have SSO across SAP, Azure and Office 365 assets.</li>
<li>The <a href="https://github.com/Azure/meta-azure-service-broker">Azure Meta Service Broker</a> which allows you to integrate Azure-native Services into applications you run in SCP if you want to.</li>
<li>Microsoft’s broad <a href="https://azure.microsoft.com/en-us/services/virtual-machines/sap-hana/">HANA offerings</a> reaching from bare-metal certified machines with SAP Large Instances to VM-based certifications.</li>
</ul>
<p>Finally, I strongly believe that from the work we’ve done with SAP, Cloud Foundry on Azure benefits across the board, in general. We’ve made numerous improvements to the Bosh CPI implementation for Azure as part of the efforts and we fixed several bugs. Examples of that are e.g. the <a href="https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/releases/tag/v24">Keep-Alive of unhealthy virtual machines for debugging</a> purposes or support for <a href="https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/releases/tag/v21">managed disks</a> (which was on the plans, already, but we needed it earlier:)).</p>
<p>Until General Availability of SAP Cloud Platform on Azure, you’ll see many more improvements in our CPI, which I cannot elaborate on in detail at the moment.</p>
<h2 id="sap-cloud-platform-on-azure---getting-started">SAP Cloud Platform on Azure - Getting Started</h2>
<p>To get SCP on Azure, you work through the regular SCP administration cockpit. So your starting point is the same as for any other work you do with SCP. You register or log in as usual - nothing Azure-specific about that. For activating Azure, which at the time of writing this post is available as Beta+Trial, you click the “Start Cloud Foundry Trial”-button within the cockpit!</p>
<p><a href="https://account.hanatrial.ondemand.com">https://account.hanatrial.ondemand.com</a></p>
<p><img src="https://raw.githubusercontent.com/mszcool/cf-scp-on-azure-simple/master/Images/Figure01.jpg" alt="SCP Cockpit Login and highlight activation button" /></p>
<p>When clicking the button, the SCP cockpit will ask you to select a region in which you want to activate the trial. Within SCP, activated Azure regions just appear as SCP-regions within the cockpit. That means you would pick an SCP-region for the activation that runs on top of Azure, as shown below:</p>
<p><img src="https://raw.githubusercontent.com/mszcool/cf-scp-on-azure-simple/master/Images/Figure02.jpg" alt="Pick the SCP region" /></p>
<p>Until now, I’ve done several trial activations. It usually takes a minute until the tenant is activated and available for you. For Cloud-Foundry-knowledgeable folks: what happens is the creation of an organization and a space for you within the SCP Cloud Foundry environment.</p>
<h2 id="exploring-whats-available">Exploring what’s available</h2>
<p>After that process is completed, you can navigate to your organization and space within the SCP cockpit. Through the cockpit, you have convenient ways to browse and monitor deployed applications or browse the marketplace to review which backing services SAP has enabled for the Beta+Trial on Azure. Cloud Foundry veterans will find it quite easy to navigate through the portal.</p>
<p><img src="https://raw.githubusercontent.com/mszcool/cf-scp-on-azure-simple/master/Images/Figure03.jpg" alt="SCP Marketplace" /></p>
<p>Now, since this is a beta as well as a trial, the backing services are all running as single instances in containers behind the scenes. I think that’s more than fair because you’re not paying for the service as long as it is in beta. And… this is also the general approach followed by SAP for using trial accounts on generally available regions, as well. For GA and production (non-trial) environments, these backing services will run in Virtual Machine clusters, of course.</p>
<p>The exception even for today in that list is… guess what… SAP HANA of course. The HANA backing service offered as part of the Azure SCP beta is a multi-tenant, shared HANA environment that runs in Virtual Machines behind the scenes.</p>
<p><em>Note:</em> When using any of the backing services from the SCP market place, you indeed use SAP-deployed backing services. That gives you a higher level of portability since SAP and not Azure controls the versions and configurations used for the technologies that are the foundation for those backing services. Now, if you still would prefer to use native Azure services, you can use the <a href="https://github.com/Azure/meta-azure-service-broker">Azure Meta Service Broker</a> to hook them up with applications you run in SCP the Cloud Foundry way (you can use them, directly, as well, of course). But in that case, you need to be aware that this has impact on how portable your applications are across multiple cloud vendors if that’s relevant for you.</p>
<h2 id="using-the-cloud-foundry-cli">Using the Cloud Foundry CLI</h2>
<p>Now, the cool thing about SCP is that it is Cloud Foundry. It’s a Cloud Foundry PaaS enriched by many SAP services (ok, it will take a while until we have them all active on Azure, as that requires further engineering work). That ultimately means you manage and work with it as with other Cloud Foundry environments, including the CLI and the APIs. For SAP-specific services such as HANA, SAP provides plug-ins for the CLI to make it easier to work with those services, such as the <a href="https://tools.hana.ondemand.com/#cloud">HANA MTA plug-in</a>, which I think SAP announced during this year’s Sapphire as well.</p>
<p>For CLI/API interaction purposes, SCP uses different API endpoints for each region. When using the Azure Beta region which we’ve deployed right now, you’d use the following API endpoint:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cf api api.cf.us20.hana.ondemand.com
cf login
</code></pre></div></div>
<p>From that moment forward, everything looks and feels like Cloud Foundry. You can push apps, you can explore your marketplace etc. The learning curve is quite easy for Cloud Foundry experienced developers, I’d say:</p>
<p><img src="https://raw.githubusercontent.com/mszcool/cf-scp-on-azure-simple/master/Images/Figure04.jpg" alt="SCP CF CLI in Action" /></p>
<p>Of course, I was curious what development runtimes SAP has enabled and as a .NET developer I wanted to know about .NET. Looking at the screen shot above, I found it great that .NET Core was enabled. But be aware: SAP typically offers some sort of enterprise support for languages and runtimes in their environment, but from the build packs listed, only a few enjoy that status. .NET Core does not. So, check back with SAP on that… .NET Core only gets community-level support through the Cloud Foundry Foundation.</p>
<h2 id="using-the-azure-meta-service-broker-with-sap-cloud-platform">Using the Azure Meta Service Broker with SAP Cloud Platform</h2>
<p>When you’re running on Azure, it makes sense to use Azure-native services if you want to. At the end of the day, that’s when you can unleash the full power of running on a specific cloud platform, right:)? And… you could still implement your “portability” through IoC/DI at the application level to have a more effective/efficient integration with native cloud services while staying portable to a certain extent.</p>
<p>Fortunately, SAP allows you to enable the Azure Meta Service Broker at the space-level when using SCP. I wanted to try that out again now that it is in beta (we prototyped it in early phases). Essentially, you only need to follow the standard procedure to enable the Azure Meta Service Broker as documented <a href="https://github.com/Azure/meta-azure-service-broker/blob/master/docs/how-admin-deploy-the-broker.md">here</a>.</p>
<p>The first step, though, is to <a href="https://docs.microsoft.com/en-us/azure/sql-database/sql-database-get-started-portal">create an Azure SQL Database instance</a> and a Service Principal in your Azure Active Directory, as shown below. The Service Principal is needed because the service broker dynamically creates/deletes resources as they are requested through the Cloud Foundry API (or the CLI).</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>az ad app create <span class="nt">--display-name</span><span class="o">=</span>yourazureadappdisplayname <span class="nt">--homepage</span><span class="o">=</span>http://yourhomepage.com <span class="nt">--identifier-uris</span><span class="o">=</span><span class="s2">"https://yourazureadserviceprincipalname"</span> <span class="nt">--key-type</span><span class="o">=</span>Password <span class="nt">--reply-urls</span><span class="o">=</span><span class="s2">"https://notreallyimportantwhatyouputhere"</span>
AppId DisplayName Homepage ObjectId ObjectType
<span class="nt">------------------------------------</span> <span class="nt">---------------------------</span> <span class="nt">-----------------------</span> <span class="nt">------------------------------------</span> <span class="nt">------------</span>
yourazureadserviceprincipalappid yourazureadappdisplayname http:/yourhomepage.com thecreatedappobjectid Application
<span class="err">$</span>
<span class="nv">$ </span>az ad sp create <span class="nt">--id</span> https://yourazureadserviceprincipalname
AppId DisplayName ObjectId ObjectType
<span class="nt">------------------------------------</span> <span class="nt">---------------------------</span> <span class="nt">------------------------------------</span> <span class="nt">----------------</span>
yourazureadappdisplayname yourazureadappdisplayname thecreatedappobjectid ServicePrincipal
<span class="err">$</span>
<span class="nv">$ </span>az ad app update <span class="nt">--id</span><span class="o">=</span>yourazureadserviceprincipalappid <span class="nt">--password</span><span class="o">=</span><span class="s2">"yourazureadserviceprincipalpassword"</span>
<span class="err">$</span>
</code></pre></div></div>
<p><strong>Don’t forget to give the created service principal contributor-rights to the resource group you want to use for the resources of the Meta Service Broker.</strong></p>
<p>After you’ve done the basic creation of the assets mentioned above, the next step is to clone the Azure Meta Service Broker repository and adjust the Cloud Foundry Application Manifest for it. The Meta Service Broker essentially is a CF-application that represents the Service Broker for several Azure Services as documented. You can run it wherever you want, but it’s built for being pushed as an application into CF with configuration data about the Azure Subscription to use for the resources created by the Meta Service Broker. Here’s a sample configuration showing how a filled <code class="language-plaintext highlighter-rouge">manifest.yml</code> for the Azure Meta Service Broker looks:</p>
<p><em>Note</em> that the parameters for <code class="language-plaintext highlighter-rouge">SUBSCRIPTION_ID</code>, <code class="language-plaintext highlighter-rouge">TENANT_ID</code>, <code class="language-plaintext highlighter-rouge">CLIENT_ID</code> and <code class="language-plaintext highlighter-rouge">CLIENT_SECRET</code> need to match those you’ve used or retrieved through the Azure CLI commands above, as they represent your subscription as well as the service principal you’ve created above! Ok, the commands above don’t give you the subscription ID - you can retrieve that by issuing <code class="language-plaintext highlighter-rouge">az account list --output=json</code> or by looking it up in the Azure Portal.</p>
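<p>If you want to fish the subscription ID out of that JSON from a shell, a quick sketch - the JSON below is a made-up, shortened stand-in for the real CLI output:</p>

```shell
# Made-up, shortened sample of what `az account list --output=json` returns.
accounts='[{"id":"00000000-0000-0000-0000-000000000000","isDefault":true,"name":"My Subscription"}]'
# Pull the first "id" value out of the JSON.
echo "$accounts" | grep -o '"id":"[^"]*"' | head -n 1 | cut -d'"' -f4
```

<p>Alternatively, <code class="language-plaintext highlighter-rouge">az account show --query id --output tsv</code> prints the ID of your currently selected subscription directly.</p>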
<div class="language-text highlighter-rouge"><div class="highlight"><pre class="highlight"><code>---
applications:
- name: meta-azure-service-broker
buildpack: https://github.com/cloudfoundry/nodejs-buildpack
instances: 1
env:
ENVIRONMENT: AzureCloud
SUBSCRIPTION_ID: yoursubscriptionid
TENANT_ID: yourazureadtenantidfromtheserviceprincipal
CLIENT_ID: yourazureadserviceprincipalappid
CLIENT_SECRET: yourazureadserviceprincipalpassword
SECURITY_USER_NAME: usertoauthenticateagainstmetaservicebroker
SECURITY_USER_PASSWORD: passwordtoauthenticateagainstmetaservicebroker
AZURE_BROKER_DATABASE_PROVIDER: sqlserver
AZURE_BROKER_DATABASE_SERVER: yourazuresqldbserver.database.windows.net
AZURE_BROKER_DATABASE_USER: yourazuresqldbuser@yourazuresqldbserver
AZURE_BROKER_DATABASE_PASSWORD: yourazuresqldbpassword
AZURE_BROKER_DATABASE_NAME: yourazuresqldbdatabasename
AZURE_BROKER_DATABASE_ENCRYPTION_KEY: 32charactersofyourchoicegohereok
AZURE_SQLDB_ALLOW_TO_CREATE_SQL_SERVER: false
AZURE_SQLDB_ENABLE_TRANSPARENT_DATA_ENCRYPTION: true
</code></pre></div></div>
<p>Assuming you have logged in with <code class="language-plaintext highlighter-rouge">cf login</code> as mentioned above, you can just push the meta service broker application into your SCP tenant using <code class="language-plaintext highlighter-rouge">cf push</code>. <strong>Important tip:</strong> since the settings specified above in the manifest are set as environment variables, be careful with special characters used by bash when defining the passwords - either avoid or escape them! A successful deployment of the meta service broker app should end up with the following output on your console:</p>
<p><img src="https://raw.githubusercontent.com/mszcool/cf-scp-on-azure-simple/master/Images/Figure05.jpg" alt="Service Broker App Pushed" /></p>
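<p>On the escaping tip above: here is a generic bash illustration (not tied to the manifest itself) of why special characters in passwords bite. Single quotes keep the string literal, while double quotes let bash expand anything that looks like a variable reference.</p>

```shell
#!/usr/bin/env bash
# Sketch: what happens to a password with special characters under different
# bash quoting rules.
ss=""  # simulate the typically-unset variable explicitly (keeps this safe under `set -u`)

password_plain='p$ss!word'        # single quotes: taken literally
printf '%s\n' "$password_plain"   # -> p$ss!word

unsafe="p$ss!word"                # double quotes: bash expands $ss to an empty string
printf '%s\n' "$unsafe"           # -> p!word
```

<p>So if a password must contain such characters, single-quote it in your shell, or escape the offending characters.</p>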
<p>When you’ve pushed the application, you have only created the meta service broker; you have not yet told Cloud Foundry that this is indeed a service broker and not any other type of general application. So, the next step after the meta service broker has been deployed successfully is to register it as a service broker. The steps for that are:</p>
<ul>
<li>Review the deployed application and take note of its entry point URL.</li>
<li>Create the service broker using that URL. You also need to authenticate against the meta service broker. For that purpose, use the previously specified <code class="language-plaintext highlighter-rouge">SECURITY_USER_NAME</code> and <code class="language-plaintext highlighter-rouge">SECURITY_USER_PASSWORD</code> parameters from the manifest file above.</li>
</ul>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>cf apps
Getting apps <span class="k">in </span>org yourorgname_trial / space dev as your@username.com...
OK
name requested state instances memory disk urls
meta-azure-service-broker started 1/1 1G 1G urlfromyourapppush.cfapps.us20.hana.ondemand.com
<span class="err">$</span>
<span class="err">$</span>
<span class="nv">$ </span>cf create-service-broker nameofyourbroker usertoauthenticateagainstmetaservicebroker passwordtoauthenticateagainstmetaservicebroker https://urlfromyourapppush.cfapps.us20.hana.ondemand.com <span class="nt">--space-scoped</span>
</code></pre></div></div>
<p><strong><em>Important Note:</em></strong> since SCP is a multi-tenant Cloud Foundry environment, you need to know that you are only the administrator of the spaces you’re creating. Permissions to organizations are limited and you don’t have any permissions beyond organizations. That means when activating the service broker, you need to use the <code class="language-plaintext highlighter-rouge">--space-scoped</code> switch to activate the broker on your space. When you do that, you are using a so-called <em>private broker</em> with <em>private plans</em>, which become active automatically. That means, although the documentation of the Azure Meta Service Broker suggests it, you <strong>do not need to call</strong> <code class="language-plaintext highlighter-rouge">cf enable-service-access</code>, since all services are enabled for your space by default in that case. A simple <code class="language-plaintext highlighter-rouge">cf marketplace</code> confirms that:</p>
<p><img src="https://raw.githubusercontent.com/mszcool/cf-scp-on-azure-simple/master/Images/Figure06.jpg" alt="cf marketplace results" /></p>
<p>As a next step, you can start using the services exposed by the Azure Meta Service Broker. At the time of writing this post, those services included Azure Storage, Azure SQL DB, Azure DocumentDB, Azure Service Bus, Azure Redis Cache and Azure Key Vault. That means you can start using those services right away. If you have requests for additional services Microsoft should enable, you’re best off posting requests/issues in the <a href="https://github.com/Azure/meta-azure-service-broker">GitHub Repository for the Azure Meta Service Broker</a> or filing a pull request.</p>
<p>The way you make use of the services follows standard Cloud Foundry patterns with service brokers: you create a service and then bind it to one or many applications using the Cloud Foundry service broker APIs or CLI commands. Let’s create an Azure storage account that way so you get a sense of it. The approach is the same for all services offered through the Azure Meta Service Broker.</p>
<p>First, you create a parameters file specifying some of the provisioning parameters for creating the respective Azure service. An example of such a parameters JSON file for creating an Azure storage account via the service broker looks as follows:</p>
<pre><code class="language-JSON">{
"resource_group_name": "marioszpScpMetaBroker",
"storage_account_name": "marioszpscpteststrg",
"location": "westus",
"account_type": "Standard_LRS"
}
</code></pre>
<p>With this JSON-file in-place, you can leverage the service broker CLI commands or Cloud Foundry APIs to create a new service and make Cloud Foundry aware of it:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>cf create-service azure-storage standard marioszpscpstoragetest <span class="nt">-c</span> .<span class="se">\p</span>arams.json
<span class="nv">$ </span>cf service marioszpscpstoragetest
Service instance: marioszpscpstoragetest
Service: azure-storage
Bound apps:
Tags:
Plan: standard
Description: Azure Storage Service
Documentation url:
Dashboard:
Last Operation
Status: create succeeded
Message: Created the storage account, state: Succeeded
Started: 2017-05-19T11:02:21Z
Updated: 2017-05-19T11:02:40Z
<span class="err">$</span>
</code></pre></div></div>
<p><img src="https://raw.githubusercontent.com/mszcool/cf-scp-on-azure-simple/master/Images/Figure07.jpg" alt="Results of creating a service with the broker" /></p>
<p>Now, once this is done, you can check the provisioning status and availability of that service using the <code class="language-plaintext highlighter-rouge">cf service marioszpscpstoragetest</code> command, which gives you all the details about the created service. Finally, by using the <code class="language-plaintext highlighter-rouge">cf bind-service</code> command, you can bind the service to any application you’ve deployed. Configuration parameters will then be exposed through environment variables as per the documentation of the respective service in the Azure Meta Service Broker GitHub repository. For an Azure storage account, an environment variable containing a JSON document will be added to your application’s environment variables, holding e.g. the storage account name and the storage account keys you can use to authenticate and execute calls against the created storage account.</p>
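<p>To illustrate that binding contract, here is a minimal sketch that pulls the account name and key out of a <code class="language-plaintext highlighter-rouge">VCAP_SERVICES</code>-style JSON document. The field names used below (<code class="language-plaintext highlighter-rouge">storage_account_name</code>, <code class="language-plaintext highlighter-rouge">primary_access_key</code>) are illustrative assumptions; check the azure-storage service documentation in the Meta Service Broker repository for the authoritative names.</p>

```shell
#!/usr/bin/env bash
# Sketch: read bound azure-storage credentials from a VCAP_SERVICES-like JSON
# document. Field names are illustrative, not taken from the broker's docs.
read_storage_credentials() {
  python3 -c '
import json, sys
creds = json.loads(sys.argv[1])["azure-storage"][0]["credentials"]
print(creds["storage_account_name"])
print(creds["primary_access_key"])
' "$1"
}

# Sample payload standing in for the real VCAP_SERVICES environment variable.
sample_vcap='{"azure-storage": [{"credentials": {"storage_account_name": "marioszpscpteststrg", "primary_access_key": "base64keygoeshere=="}}]}'
read_storage_credentials "$sample_vcap"
```

<p>Inside a deployed app you would pass <code class="language-plaintext highlighter-rouge">"$VCAP_SERVICES"</code> instead of the sample payload.</p>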
<p>What’s really cool is that all of what I’ve done above with the Cloud Foundry CLI is visible in the SAP Cloud Platform Cockpit as well. For example, since I’ve activated the Azure Meta Service Broker, you’ll see all its services in the marketplace in the portal, too, including the Azure storage account, which is surfaced as a service as shown below. That means the Azure Meta Service Broker really brings SCP and Azure close together and allows you to look at Azure resources through the SCP as well as the Azure management perspectives.</p>
<p><img src="https://raw.githubusercontent.com/mszcool/cf-scp-on-azure-simple/master/Images/Figure08.jpg" alt="SCP Cockpit with Azure Services" /></p>
<h2 id="final-words">Final Words</h2>
<p>This is just the beginning of SAP Cloud Platform on Azure. Note that we’ve announced a beta. On our path toward general availability, we’ll see several improvements in Azure’s Bosh CPI and optimized backing services (currently, all except HANA run in single-instance containers in the Beta/Trial).</p>
<p>There’s one question a lot of people ask when looking at something like this, though: how does that relate to native Azure PaaS services such as Azure App Services or Service Fabric? The answer is simple: Azure is an open cloud platform. Microsoft’s strategy is clearly to enable as many relevant platforms as possible in addition to Microsoft’s own technology. This gives customers and partners choice and allows them to leverage the skills and knowledge they already have in their portfolio. Cloud Foundry and its flavors fit very well into this strategy. If you prefer Cloud Foundry-based PaaS platforms, then Azure is a place where you can go… and if you want, you can even integrate with all the native goodness of Azure, especially Azure Active Directory for Single Sign-On, or all of the other services in addition to those offered with the Azure Meta Service Broker.</p>Mario SzpusztaThe main project that kept me busy for the past 7 months was the work with SAP Labs Israel to get a beta version of SAP Cloud Platform running on Azure. During SAP Sapphire, SCP on Azure got now announced together with other public cloud vendors as part of SAP’s multi-cloud strategy (see here and here). It is currently available as public beta for anyone to try on Azure. Here I dig into some first basics…SAP HANA Express Quickstart Template (working-draft)2017-05-14T11:00:00+00:002017-05-14T11:00:00+00:00http://blog.mszcool.com/wordpressarchive/2017/05/14/sap-hana-express-quickstart-template<p>This is an exciting week for me… although I am usually not that much into attending Business-oriented conferences, over the past two years I did so by attending SAP Sapphire.</p>
<p>Up until now, that was mainly caused by the fact that I’ve contributed some aspects to what was announced with regards to the partnership between Microsoft and SAP at the conference. Last year, my part was mainly about <a href="https://www.youtube.com/watch?v=fpCdD6e0sjM">Office 365 with the work I’ve supported for Concur, Ariba, Fieldglass and SuccessFactors as well as the Sports Basement Demo</a>, which was shown in the Key Note to highlight the HANA One Certification for Azure in our DS14 VM-Series at that time.</p>
<p>While I cannot write about the major project I supported for this year, yet, here’s a little nugget to get started with - <strong><a href="https://www.sap.com/developer/topics/sap-hana-express.html">a Quick Start Template for SAP HANA Express</a></strong>!</p>
<!--more-->
<p><em>Important Note:</em> While I am working for Microsoft, this blog summarizes my personal opinions and my personal understanding of topics. This means that all I am writing here is not related to Microsoft’s official opinion, at all! If you want to get that view, look at the Azure Blog or Jason Zander’s Blog for official announcements!</p>
<p><em>Important Note:</em> The Pull-Request into the official Microsoft Azure Quick Start Templates GitHub repository is not completed, yet. Therefore, the link still redirects to the working branch in my GitHub repository. I’ll update the blog-post as soon as the pull request is completed (there’s currently a general issue with the Travis CI pipeline used for validation on the Azure Quick Start Templates GitHub repository that caused an unexpected delay for the pull request to go through in time for Sapphire).</p>
<h2 id="sap-hana-express">SAP HANA Express</h2>
<p>As you might know, already, SAP HANA is SAP’s in-memory Database Engine which is supposed to back all of the major future releases of SAP’s core business suites centered around S/4HANA. But, HANA can be used as a stand-alone database system for developing custom solutions, as well. The Sports Basement Demo on Azure DS14 Instances from last year’s Sapphire Key Note is an example of that. It was a plain HANA Database fronted by a Java Web Application.</p>
<p>Now, within last year’s Sapphire and now, SAP released a version of SAP HANA that is free for development and testing purposes up to 32GBs of RAM called HANA Express.</p>
<p>For more details, you should navigate to the SAP HANA Express Homepage to get the full picture and the official view:</p>
<p><a href="https://www.sap.com/developer/topics/sap-hana-express.html">https://www.sap.com/developer/topics/sap-hana-express.html</a></p>
<h2 id="azure-quick-start-templates">Azure Quick-Start-Templates</h2>
<p>Now, shortly before Sapphire, some folks from Microsoft and SAP approached me about creating an Azure Marketplace image for HANA Express. That’s something we’re working on, but it is not something that can be done in just a few days - too short a timeframe… but since HANA Express clearly addresses developers, I thought a good solution that can be implemented within a few days is an <strong>Azure Quick-Start-Template</strong>.</p>
<p>For those of you who are new to Azure: Azure Quick-Start-Templates are open source based Azure Resource Manager Templates and Deployment Scripts which can be used to quickly spin-up Solutions on Azure. Many of those are used as a learning resource, but some of them can definitely be used for dev/test scenarios or even as a starting point for production scenarios.</p>
<p>All of these are available under the following two links:</p>
<ul>
<li>Browsing/Search:
<a href="https://azure.microsoft.com/en-us/resources/templates/">https://azure.microsoft.com/en-us/resources/templates/</a></li>
<li>Source Code:
<a href="https://github.com/Azure/azure-quickstart-templates/">https://github.com/Azure/azure-quickstart-templates/</a></li>
</ul>
<p>Now, the main point with these quick start templates is, that they’re automating most of the setup/provisioning procedure by using Scripts and Templates so you can get started, quickly.</p>
<h2 id="sap-hana-express-quick-start-template">SAP HANA Express Quick Start Template</h2>
<p>I decided to build such a template for HANA Express and make it available as part of the Quick-Start-Templates. This Quick-Start works with SAP HANA Express 2.0 SPS1 and should also work with other versions, but I’ve only tested it with this one.</p>
<p><a href="http://aka.ms/sap-hana-express-quickstart">http://aka.ms/sap-hana-express-quickstart</a></p>
<p>There’s just one caveat: since SAP requires you to accept the EULA for SAP HANA Express, you first need to go to SAP’s HANA Express home page, register, and download the SAP HANA Express setup images manually before you can start using the Quick Start Template I’ve created. From there on, you can use the template. The basic workings of the template are:</p>
<ol>
<li>First you register with SAP and download the HANA Express Setup Package.</li>
<li>Then you use the quick start template to upload the setup package into your private Azure Storage Account.</li>
<li>From that moment forward you can use the Azure Resource Manager Template included in the template to deploy as many HANA Express Instances into your own subscription as you want.</li>
</ol>
<p><em>Starting with Step 2, everything is automated with Scripts and Templates. That means only the first step - downloading the setup packages and accepting the EULA at the SAP HANA Express Setup Homepage - is something you need to do manually</em>. Sure, a Marketplace Image would be more convenient, but we’ll work with SAP on that…</p>
<p>All the details are explained in the sections below in this blog post including how you can validate, that the installation really went well at the end of the entire process.</p>
<p><em>Important Note</em>: Please don’t forget that quick start templates are not backed by Microsoft Support by any means. They are here to help you get started on your own; they are fully open source and maintained on a best-effort basis!</p>
<h2 id="requirements">Requirements</h2>
<p>Before moving on, all you need on your local machine are the following assets/tools:</p>
<ul>
<li>A Linux or Mac with Bash, or Windows with <a href="https://msdn.microsoft.com/en-us/commandline/wsl/install_guide">Bash on Ubuntu on Windows</a></li>
<li>The <a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli">Azure CLI 2.0</a> installed in your Bash environment</li>
<li>An <a href="https://azure.microsoft.com/en-us/free/">Azure Subscription</a></li>
<li>Optionally HANA Tools or HANA Developer Studio</li>
</ul>
<h2 id="register-download-hana-express-setup-from-sap">Register and Download the HANA Express Setup from SAP</h2>
<p>Yes, that’s the first step. It needs to be done for two reasons: first, you need to accept SAP’s EULA, and second, SAP will inform you about important updates and service releases for HANA when you register at the registration page.</p>
<p>That said, the first thing you do is <a href="https://www.sap.com/developer/topics/sap-hana-express.html">navigating to the SAP HANA Express Home Page</a> to register with SAP and Accept the EULA:</p>
<p><img src="https://raw.githubusercontent.com/mszcool/azure-quickstart-templates/original/mszcool-hanaexpress/sap-hana-express/images/Figure01.png" alt="HANA Express" /></p>
<p><img src="https://raw.githubusercontent.com/mszcool/azure-quickstart-templates/original/mszcool-hanaexpress/sap-hana-express/images/Figure02.png" alt="HANA Express Download Manager Option" /></p>
<p>Next, you’ll need to use the SAP Download Manager to download the SAP HANA Express setup packages. The Download Manager exists as a native version for Linux or Windows, or as a cross-platform Java version (distributed as a JAR package). I’ve used the JAR-package version, but it should not matter at all.</p>
<p>What matters is selecting the right type of setup packages. The quick-start-template I’ve built is tested for the server-only version, without XS Advanced services. So you should select the following for download when using the SAP HANA Express Download Manager:</p>
<p><img src="https://raw.githubusercontent.com/mszcool/azure-quickstart-templates/original/mszcool-hanaexpress/sap-hana-express/images/Figure03.png" alt="HANA Express Server Only Download Option" /></p>
<p>The downloaded setup package will appear as a TAR archive in your local Downloads folder (or wherever you downloaded it to). It should be called something along the lines of <code class="language-plaintext highlighter-rouge">hxe.tgz</code>.</p>
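<p>Before uploading, it can be worth sanity-checking the downloaded archive. A small sketch (the default path below is just an example, adjust it to your download location):</p>

```shell
#!/usr/bin/env bash
# Sketch: verify the downloaded HANA Express archive is a readable tar archive
# before spending time uploading ~1.6GB. The default path is only an example.
check_hxe_archive() {
  local archive="${1:-$HOME/Downloads/hxe.tgz}"
  if [ -f "$archive" ]; then
    # Listing the first entries confirms the archive is not truncated/corrupt.
    tar -tzf "$archive" | head -n 5
  else
    echo "Archive not found at $archive"
  fi
}
check_hxe_archive
```

<p>If <code class="language-plaintext highlighter-rouge">tar</code> can list the entries, the download completed intact and the upload step below will not fail halfway through.</p>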
<h2 id="upload-the-hana-express-to-your-azure-storage-account">Upload the HANA Express to your Azure Storage Account</h2>
<p>For automating the setup procedure of SAP HANA Express inside of an Azure Virtual Machine, the HANA Express setup packages need to be available for download through an automation script! To avoid uploading the setup package (~1.6GB of data) with each new VM, the best approach is to upload it once to an <a href="https://docs.microsoft.com/en-us/azure/storage/storage-introduction">Azure storage account</a> and use a <a href="https://docs.microsoft.com/en-us/azure/storage/storage-dotnet-shared-access-signature-part-1">shared access signature</a> to feed the setup files into the provisioning script for Virtual Machines.</p>
<p>Now, performing those steps can be done manually by using a tool such as the Azure Storage Explorer. But I decided to automate the procedure using the <a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli">Azure CLI 2.0</a>.</p>
<p>Assuming that you have downloaded the SAP HANA Express setup package to your machine (e.g. /mnt/c/temp/hxe.tgz), you can execute the following commands (adjust the archive path in the last command to your download location):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Login into your Azure Subscription using the Azure CLI 2.0</span>
az login
<span class="c"># Create a resource group for the storage account</span>
az group create <span class="nt">--name</span> <span class="s2">"sampleresourcegroupname"</span> <span class="nt">--location</span> <span class="s2">"westeurope"</span>
<span class="c"># Upload the HANA Express Setup Files to your Azure Storage Account</span>
./prepare-hxe-setup-files.sh sampleresourcegroupname samplestorageaccountname samplecontainer westeurope /home/mydirectory/hxe.tgz
</code></pre></div></div>
<p>Now, as I mentioned before, the automated setup of HANA Express happens in a script that runs in the post-provisioning phase of the Azure VM. That means this script needs to have access to those setup files for an automated download without user interaction. To enable that scenario, the <code class="language-plaintext highlighter-rouge">prepare-hxe-setup-files.sh</code>-Script of my Quick Start uploads the setup packages for HANA Express to an Azure Storage Account and generates a Shared Access Signature URL, which allows you to simply download the packages, using the signature as a means of authentication, with wget or any similar shell tool.</p>
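<p>Conceptually, a Shared Access Signature URL is just the blob URL with a signed query string appended, so any plain HTTP client can fetch the package without stored credentials. A sketch with placeholder values (not a real signature, and not the exact token the script emits):</p>

```shell
#!/usr/bin/env bash
# Sketch: a SAS download URL is simply "<blob-url>?<sas-token>". All values
# below are placeholders; the real token comes out of prepare-hxe-setup-files.sh.
BLOB_URL="https://samplestorageaccountname.blob.core.windows.net/samplecontainer/hxe.tgz"
SAS_TOKEN="sv=2017-04-17&sr=b&sp=r&se=2018-09-26T00%3A00%3A00Z&sig=placeholdersignature"
SAS_URL="${BLOB_URL}?${SAS_TOKEN}"
echo "$SAS_URL"
# The provisioning script can then download without any interactive login:
#   wget -O /tmp/hxe.tgz "$SAS_URL"
```

<p>This is exactly why the signature alone is enough for the VM’s custom-script extension to pull the setup files during provisioning.</p>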
<p>The following screenshot shows the output of the <code class="language-plaintext highlighter-rouge">prepare-hxe-setup-files.sh</code> script. You <strong>should especially take note of the Shared Access Signature URL the script outputs</strong> at the end!</p>
<p><img src="https://raw.githubusercontent.com/mszcool/azure-quickstart-templates/original/mszcool-hanaexpress/sap-hana-express/images/Figure04.png" alt="Output of prepare-hxe-setup-files.sh" /></p>
<h2 id="deploy-an-sap-hana-express-using-the-templates">Deploy an SAP HANA Express using the templates</h2>
<p>With the Setup packages for SAP HANA Express uploaded to an Azure Storage Account and the Storage Shared Access Signature generated as mentioned above, you can deploy as many SAP HANA Express Virtual Machines as you need to.</p>
<p><em>Important Note</em>: the script <code class="language-plaintext highlighter-rouge">prepare-hxe-setup-files.sh</code> above generates Shared Access Signatures that are valid for one year. That means after a year you need to run the script again to generate a new Shared Access Signature. Note that the script is smart enough to detect whether the files have already been uploaded, and if so, it generates the signature for the existing blob instead of uploading again!</p>
<p>When using the quick-start-template, you can either use the “Deploy-To-Azure”-Button presented on the <a href="http://aka.ms/sap-hana-express-quickstart">landing page of the Quick Start</a> or you fill out the parameters in the <code class="language-plaintext highlighter-rouge">azuredeploy.parameters.json</code>-file as shown below and deploy the template via PowerShell or the Azure CLI:</p>
<p><img src="https://raw.githubusercontent.com/mszcool/azure-quickstart-templates/original/mszcool-hanaexpress/sap-hana-express/images/Figure05.png" alt="Parameters filled in the azuredeploy.parameters.json" /></p>
<p>After you’ve filled out the parameters - the screen shot above shows the minimum ones you need to fill out - you can move ahead and deploy the template using code similar to the following:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Create a resource group for your HANA Express VM Resources</span>
az group create <span class="nt">--name</span> <span class="s2">"samplehanaexpressgroup"</span> <span class="nt">--location</span> <span class="s2">"westeurope"</span>
<span class="c"># Deploy the template with the filled parameters file</span>
az group deployment create <span class="nt">--resource-group</span><span class="o">=</span><span class="s2">"samplehanaexpressgroup"</span> <span class="se">\</span>
<span class="nt">--template-file</span><span class="o">=</span><span class="s2">"azuredeploy.json"</span> <span class="se">\</span>
<span class="nt">--parameters</span><span class="o">=</span><span class="s2">"@azuredeploy.sample.parameters.json"</span> <span class="se">\</span>
<span class="nt">--name</span><span class="o">=</span><span class="s2">"samplehanaexpress"</span>
</code></pre></div></div>
<p>The output of that script should look similar to the following screen shot:
<img src="https://raw.githubusercontent.com/mszcool/azure-quickstart-templates/original/mszcool-hanaexpress/sap-hana-express/images/Figure06.png" alt="Output of the template deployment via the Azure CLI" /></p>
<h2 id="validating-the-installation">Validating the Installation</h2>
<p>So far so good - if the output looks similar to the screenshot above, then you should be all set! But you can of course validate your installation in two ways: using regular HANA tools to see if your instance is responsive, or looking at the installation logs from the provisioning process.</p>
<p>For that, you need to understand some background. I am using <a href="https://docs.microsoft.com/en-us/azure/virtual-machines/linux/extensions-customscript">Azure Custom Script Extensions for Linux</a> to automatically execute the HANA installation with all required prerequisites during the post-provisioning phase of the Azure Virtual Machine. That is expressed in the Azure Resource Manager Template with the following code:</p>
<pre><code class="language-JSON">... REST OF THE TEMPLATE ...
{
"type": "extensions",
"name": "hxeinstallextension",
"apiVersion": "2016-04-30-preview",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', parameters('vmNamePrefix'))]"
],
"properties": {
"publisher": "Microsoft.Azure.Extensions",
"type": "CustomScript",
"typeHandlerVersion": "2.0",
"autoUpgradeMinorVersion": true,
"settings": {
"fileUris": [
"[parameters('hxeInstallScriptUrl')]"
]
},
"protectedSettings": {
"commandToExecute": "[concat('sudo ./', parameters('hxeInstallScriptName'), ' \"', parameters('hxeSetupFileUrl'), '\" \"', parameters('hxeMasterPwd'), '\" && exit 0')]"
}
}
}
... REST OF THE TEMPLATE ...
</code></pre>
<p>This part of the template shows that after the Virtual Machine resource has been provisioned, it uses the Azure Virtual Machine Agent to run the script specified in the template. This script is downloaded directly from the Quick-Start-Templates GitHub repository, so no further steps are needed to enable this.</p>
<p>If you now want to validate whether the installation script for SAP HANA Express ran successfully, you should first review the deployment logs within the Azure Portal, similar to what’s shown in the following screenshot:</p>
<p><img src="https://raw.githubusercontent.com/mszcool/azure-quickstart-templates/original/mszcool-hanaexpress/sap-hana-express/images/Figure07.png" alt="Azure Portal Deployment Log" /></p>
<p>If you still need to see more, you just need to SSH into the created virtual machine (refer to the DNS name specified in the <code class="language-plaintext highlighter-rouge">azuredeploy.parameters.json</code> as per the screenshots above) and output the content of the <code class="language-plaintext highlighter-rouge">stdout</code> and <code class="language-plaintext highlighter-rouge">stderr</code> files within the <code class="language-plaintext highlighter-rouge">/var/lib/waagent/custom-script/download/0</code> directory, similar to what’s shown in the following screenshot:</p>
<p><img src="https://raw.githubusercontent.com/mszcool/azure-quickstart-templates/original/mszcool-hanaexpress/sap-hana-express/images/Figure08.png" alt="Virtual Machine Deployment Log" /></p>
<p>When you look at the output, you’ll quickly realize how much this really accelerates you. The setup script automatically performs the following steps for you:</p>
<ul>
<li>Install the needed Oracle JDK on the VM</li>
<li>Install required library packages using zypper</li>
<li>Download and Extract the HANA Express Setup Packages to the VM</li>
<li>Install HANA Express on the VM</li>
</ul>
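<p>The log inspection described above can be wrapped into a tiny helper once you are on the VM. A sketch, assuming the waagent download path shown above (which may differ across agent versions):</p>

```shell
#!/usr/bin/env bash
# Sketch: dump the custom-script extension logs on the VM. The default path is
# the one the Linux VM agent used at the time of writing; it may vary by version.
show_extension_logs() {
  local log_dir="${1:-/var/lib/waagent/custom-script/download/0}"
  if [ -d "$log_dir" ]; then
    echo "--- stdout ---"; cat "$log_dir/stdout"
    echo "--- stderr ---"; cat "$log_dir/stderr"
  else
    echo "No custom-script logs found at $log_dir"
  fi
}
show_extension_logs
```

<p>Running it with <code class="language-plaintext highlighter-rouge">sudo</code> on the provisioned VM prints both log streams in one go, which is usually all you need to confirm the HANA Express installation finished cleanly.</p>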
<h2 id="final-words">Final Words</h2>
<p>The post above went quite into the details of every step of how the quick-start works. But the essence can literally be done within about 30 minutes, depending on how fast your Internet connection is:</p>
<ul>
<li>Register for an Azure Subscription if you don’t have one, yet</li>
<li>Setup the Azure CLI 2.0 on your machine if not done so, yet</li>
<li>Register with SAP</li>
<li>Download the SAP HANA Express Setup Packages</li>
<li>Execute the script <code class="language-plaintext highlighter-rouge">prepare-hxe-setup-files.sh</code> and note the generated Shared Access Signature</li>
<li>Click the “Deploy-to-Azure Button” or update the parameters file and execute <code class="language-plaintext highlighter-rouge">az group deployment create</code></li>
</ul>
<p>All of the needed prerequisites, and of course SAP HANA Express itself, get set up for you, and within about 30 minutes you have an instance of it running in Microsoft Azure. I hope you find this valuable and that it helps you accelerate the setup of dev/test environments with SAP HANA Express, since you don’t need to walk through all of the needed setup steps manually.</p>Mario SzpusztaThis is an exciting week for me… although I am usually not that much into attending Business-oriented conferences, over the past two years I did so by attending SAP Sapphire. Up until now, that was mainly caused by the fact that I’ve contributed some aspects to what was announced with regards to the partnership between Microsoft and SAP at the conference. Last year, my part was mainly about Office 365 with the work I’ve supported for Concur, Ariba, Fieldglass and SuccessFactors as well as the Sports Basement Demo which was shown in the Key Note to highlight the HANA One Certification for Azure in our DS14 VM-Series at that time. While I cannot write about the major project I supported for this year, yet, here’s a little nugget to get started with - a Quick Start Template for SAP HANA Express!CloudFoundry on Azure in a Hybrid Multi-Cloud World2016-09-28T11:00:00+00:002016-09-28T11:00:00+00:00http://blog.mszcool.com/wordpressarchive/2016/09/28/cloudfoundry-on-azure-in-a-hybrid-multi-cloud-world<p>This week I was presenting at the <a href="https://www.cloudfoundry.org/community/summits/cfsummit/?summitId=11993">CloudFoundry Summit 2016 Europe</a> in Frankfurt, of course about running <a href="https://cfsummiteu2016.sched.org/event/7rVC/private-hybrid-public-cloud-cf-environments-on-microsoft-azure-mario-szpuszta-microsoft?iframe=yes&w=i:100;&sidebar=yes&bg=no#?iframe=yes&w=i:100;&sidebar=yes&bg=no">CloudFoundry on Azure and Azure Stack</a>. 
It was great being here, especially because one of the two main Global ISV partners I am working with on the engineering side has been here as well and is even a Gold sponsor of the event. It was indeed an honor and a great pleasure for me to be part of this summit… and great to finally have a technical session at a non-Microsoft conference again. :)</p>
<p>Indeed, one reason for this blog post is that I ran out of time during my session and was only able to show small parts of the last demo.</p>
<p>Anyways, let’s get to the more technical part of this blog post. My session was all about running CF in public, private, and hybrid clouds with Azure being involved in some way. This is <strong>highly relevant</strong> since most enterprises are driving a multi-cloud strategy of some sort:</p>
<ul>
<li>Either they are embracing Hybrid cloud and run deployments in the public cloud as well as in their own data centers for various reasons or</li>
<li>they want to distribute and minimize risk by running their solutions across two (or more) public cloud providers.</li>
</ul>
<p>Despite the fact that my session was focused on running Cloud Foundry on Azure, a lot of the concepts and architectural insights presented can be re-used for other kinds of deployments with other cloud vendors or private clouds as well.</p>
<!--more-->
<h4 id="the-basics---running-cloud-foundry-on-azure-and-pivotal">The basics - Running Cloud Foundry on Azure and Pivotal</h4>
<p>Microsoft has developed a <a href="https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release">Bosh CPI</a> that enables bosh-based deployments of Cloud Foundry on Azure. The CPI is entirely developed as an Open Source Project and contributed to the <a href="https://github.com/cloudfoundry-incubator">Cloud Foundry Incubator project</a> on GitHub.</p>
<p>Based on this CPI, there are two main ways of deploying Cloud Foundry clusters on Microsoft Azure:</p>
<ul>
<li>Pivotal Cloud Foundry via the <a href="https://azure.microsoft.com/en-us/marketplace/partners/pivotal/pivotal-cloud-foundryazure-pcf/">Azure Marketplace</a> (please read the Pivotal docs for updates on what is supported and what is not).</li>
<li>Open Source Cloud Foundry either manually as per official <a href="https://bosh.io/docs/azure-resources.html#client">Bosh Documentation</a> or via Azure Resource Manager Templates provided through the <a href="https://github.com/Azure/azure-quickstart-templates/tree/master/bosh-setup">Azure Quickstart Templates Gallery</a>.</li>
</ul>
<p>There is very detailed guidance available in all of those GitHub repositories explaining the details. I would suggest following this one since it is by far the easiest: <a href="https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/blob/master/docs/guidance.md">Deploy Cloud Foundry on Azure</a> - and always follow the <strong>via ARM templates</strong> suggestions of the docs.</p>
<p>Finally, in addition to Azure, to completely follow this post you need a second CF cluster running in another cloud. The by far easiest way is to set up a trial account on Pivotal Web Services, which provides you with some sort of “CloudFoundry-as-a-Service”. Follow <a href="https://run.pivotal.io/">these steps here</a> for doing so…</p>
<h4 id="a-multi-cloud-cf-architecture-with-azure-on-one-side">A Multi-Cloud CF Architecture with Azure on one side</h4>
<p>There are many reasons for multi-cloud environments. Some include running parts in private clouds for legal and compliance reasons, while others include spreading risk across multiple cloud providers for disaster recovery. The example in this post is focused exactly on that multi-cloud DR case since it covers two public cloud providers:</p>
<p><img src="https://raw.githubusercontent.com/mszcool/cfMultiCloudSample/master/images/Figure01-Architecture.png" alt="architecture" /></p>
<ul>
<li><a href="https://azure.microsoft.com/en-us/documentation/articles/traffic-manager-overview/">Azure Traffic Manager</a> acts as a DNS-based load balancer. We will configure Traffic Manager with a priority policy, which routes traffic based on priority; if one cloud has a failure, Traffic Manager will route traffic to the other cloud.</li>
<li>The <a href="https://azure.microsoft.com/en-us/documentation/articles/load-balancer-overview/">Azure Load Balancer</a> is a component you get “for free” in Azure and don’t really need to take care of. It balances traffic across the front-nodes of your CF cluster and is automatically configured for you if you follow the guidance above for deploying CF on Azure.</li>
<li>Inside of each CF cluster, we need to make sure to register the DNS names used by Traffic Manager and configure the CF routers to route traffic coming in through those domains to our apps in the CF cluster appropriately.</li>
</ul>
<h4 id="setting-up-traffic-manager">Setting up traffic manager</h4>
<p>Let’s start with setting up the Azure Traffic Manager since we’ll need its domain name for the configuration of the apps in both Cloud Foundry targets. You can just add Azure Traffic Manager as a resource to the resource group of your Cloud Foundry deployment or any other resource group. In my case, I deployed the Traffic Manager in another resource group as shown in the following screenshot:</p>
<p><img src="https://raw.githubusercontent.com/mszcool/cfMultiCloudSample/master/images/Figure02-TrafficManager.png" alt="Traffic Manager Setup" /></p>
<p>The important piece to note for now is the <strong>Domain Name</strong> of your Traffic Manager endpoint. The actual endpoints for Traffic Manager do not need to be configured at this point in time - we will look at that later.</p>
<h4 id="deploying-the-sample-app-to-pivotal-web-services">Deploying the sample app to Pivotal Web Services</h4>
<p>As a next step, we need to deploy the sample application to Pivotal Web Services and take note of the (probably random) domain name it has associated with the application.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$pivotalApiEndpoint</span><span class="o">=</span><span class="s2">"api.run.pivotal.io"</span>
cf login <span class="nt">-a</span> <span class="nv">$pivotalApiEndpoint</span>
cf target <span class="nt">-o</span> <span class="nv">$pivotalOrg</span> <span class="nt">-s</span> <span class="nv">$pivotalSpace</span>
cf push <span class="nt">-f</span> ./sampleapp/manifest.yml <span class="nt">-p</span> ./sampleapp
cf set-env multicloudapp REGION <span class="s2">"Pivotal Cloud"</span>
cf restage multicloudapp
</code></pre></div></div>
<p>To get the domain name and IP, just execute a <code class="language-plaintext highlighter-rouge">cf app multicloudapp</code> and take note of the domain name as shown in the following figure:</p>
<p><img src="https://raw.githubusercontent.com/mszcool/cfMultiCloudSample/master/images/Figure03-PivotalDomainName.png" alt="Pivotal App Domain Name" /></p>
<h4 id="deploying-the-app-into-cloud-foundry-on-azure">Deploying the App into Cloud Foundry on Azure</h4>
<p>The deployment of the sample app into Azure works exactly the same way, except that we’ll need to use different API endpoints, organization names and spaces inside of Cloud Foundry:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$azureCfApiEndpoint</span><span class="o">=</span><span class="s2">"api.</span><span class="nv">$azureCfPublicIp</span><span class="s2">.xip.io"</span>
cf login <span class="nt">-a</span> <span class="nv">$azureCfApiEndpoint</span>
cf target <span class="nt">-o</span> <span class="nv">$azureOrg</span> <span class="nt">-s</span> <span class="nv">$azureSpace</span>
cf push <span class="nt">-f</span> ./sampleapp/manifest.yml <span class="nt">-p</span> ./sampleapp
cf set-env multicloudapp REGION <span class="s2">"Microsoft Azure"</span>
cf restage multicloudapp
</code></pre></div></div>
<p>The <strong>Cloud Foundry API end-point</strong> I used above is the one that is registered by default when using the ARM-based deployment of open source Cloud Foundry with the Azure Quickstart Templates. The DNS-registration mechanism used there is documented <a href="https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/tree/master/docs/advanced/deploy-azuredns">here</a>.</p>
<p>Also <strong>note the environment variables</strong> I am setting in the scripts above using <code class="language-plaintext highlighter-rouge">cf set-env multicloudapp REGION "xyz"</code>. That value is used by our sample application (which is written in Ruby in this case) to output in which region the app is running. That way, we can see whether we are directed to the app deployed in Azure or in Pivotal Web Services.</p>
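<p>What <code class="language-plaintext highlighter-rouge">cf set-env</code> does is surface the value as a plain environment variable inside the app container. The real sample app is Ruby; the following shell snippet is just a minimal illustration of the same lookup (not the actual app code):</p>

```shell
# Minimal illustration (not the actual Ruby sample app): after
# "cf set-env multicloudapp REGION ...", the app simply reads the
# REGION environment variable at runtime and prints it.
REGION="Microsoft Azure"
echo "Running in ${REGION:-unknown region}"
```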
<p>Finally, if you’re new to Azure, the best way to find the public IP which has been created for your CF cluster is looking up the public IP address in the Azure Portal inside of the resource group of your Cloud Foundry cluster. Another way, if you are a shell scripter, would be to use the following command with the Azure Cross-Platform CLI:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>azure network public-ip show <span class="nt">--resource-group</span> YOUR-RESOURCE-GROUP YOUR-IP-NAME
info: Executing <span class="nb">command </span>network public-ip show
+ Looking up the public ip <span class="s2">"YOUR-IP-NAME"</span>
data: Id : /subscriptions/YOUR-SUBSCRIPTION-ID/resourceGroups/YOUR-RESOURCE-GROUP/providers/Microsoft.Network/publicIPAddresses/mszcfbasics-cf
data: Name : YOUR-IP-NAME
data: Type : Microsoft.Network/publicIPAddresses
data: Location : northeurope
data: Provisioning state : Succeeded
data: Allocation method : Static
data: IP version : IPv4
data: Idle <span class="nb">timeout </span><span class="k">in </span>minutes : 4
data: IP Address : 52.169.87.212
data: IP configuration <span class="nb">id</span> : /subscriptions/YOUR-SUBSCRIPTION-ID/resourceGroups/marioszpCfSimple/providers/Microsoft.Network/networkInterfaces/SOME-ID/ipConfigurations/ipconfig1
data: Domain name label : marioszpcfsimple
data: FQDN : marioszpcfsimple.northeurope.cloudapp.azure.com
info: network public-ip show <span class="nb">command </span>OK
</code></pre></div></div>
<h4 id="configuring-traffic-manager-endpoints">Configuring Traffic Manager Endpoints</h4>
<p>Next, we need to tell Azure Traffic Manager the endpoints to which it should direct requests that arrive at the DNS record registered with Traffic Manager.</p>
<p>In our case, we use a simple priority-based policy, which means Traffic Manager always tries to direct requests to the endpoint with the highest priority unless that endpoint is unresponsive. For a full documentation of routing methods, please refer to the <a href="https://azure.microsoft.com/en-us/documentation/articles/traffic-manager-routing-methods/">Azure Traffic Manager docs</a>.</p>
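<p>Conceptually, the priority policy behaves like the following sketch. Note that this is an illustration only: the endpoint names and health states are made up, and Traffic Manager of course implements this at the DNS level rather than in a script:</p>

```shell
# Sketch of a priority-based policy: answer with the endpoint that has the
# lowest priority number (highest importance) among those currently Online.
# Each line: "<priority> <endpoint-name> <health-state>" (mock data).
endpoints="1 pivotal Online
2 azure Online"

pick_endpoint() {
  echo "$endpoints" | awk '$3 == "Online"' | sort -n | head -n 1 | awk '{ print $2 }'
}

pick_endpoint   # healthy primary wins -> pivotal

endpoints="1 pivotal Degraded
2 azure Online"
pick_endpoint   # primary degraded, fail over -> azure
```

<p>The same idea explains the failover test later in this post: as soon as the priority-1 endpoint is marked Degraded, clients resolving the Traffic Manager name are sent to the next endpoint in line.</p>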
<p><img src="https://raw.githubusercontent.com/mszcool/cfMultiCloudSample/master/images/Figure04-TrafficManagerConfig.png" alt="Traffic Manager Endpoints" /></p>
<p>As you can see from the above, we have two endpoints:</p>
<ul>
<li><strong>Azure Endpoint</strong> which goes against the Public IP that the scripts and Bosh deployed for us when we deployed Cloud Foundry on Azure at the beginning.</li>
<li><strong>External Endpoint</strong> which goes against the domain name for the app that Pivotal Web Services has registered for us (something like <code class="language-plaintext highlighter-rouge">multicloudapp-xyz-abc.cfapps.io</code>).</li>
</ul>
<h4 id="lets-give-it-a-try">Let’s give it a try…</h4>
<p>Now, in the previous configuration for Traffic Manager, we defined that the Pivotal deployment has priority #1 and will therefore be preferred by Traffic Manager for traffic routing. So, let’s open up a browser and navigate to the Traffic Manager DNS name for your deployment (in my screenshots and at my CF session that is <code class="language-plaintext highlighter-rouge">marioszpcfsummithybrid.trafficmanager.net</code>):</p>
<p><img src="https://raw.githubusercontent.com/mszcool/cfMultiCloudSample/master/images/Figure05-ItDoesNotWork.png" alt="not working" /></p>
<p>Of course, a Cloud Foundry veteran spots immediately what that means. I am not a veteran in that area, so I fell into the trap…</p>
<h4 id="configuring-routes-in-cloud-foundry">Configuring Routes in Cloud Foundry</h4>
<p>What I forgot when setting this up originally was configuring routes for the Traffic Manager domain in my Cloud Foundry clusters. Without those routes, Cloud Foundry rejects requests coming in through that domain as it does not know about it.</p>
<p>We need to configure the routes on both ends to make it work. As shown below, we’re adding the Traffic Manager domain to the routes and ensuring CF routes traffic from that domain to our multi-cloud sample app:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$trafficMgrDomain</span><span class="o">=</span>marioszpcfsummithybrid.trafficmanager.net
<span class="c">#</span>
<span class="c"># First do this for Pivotal</span>
<span class="c">#</span>
cf login <span class="nt">-a</span> <span class="nv">$pivotalApiEndpoint</span>
cf target <span class="nt">-o</span> <span class="nv">$pivotalOrg</span> <span class="nt">-s</span> <span class="nv">$pivotalSpace</span>
cf create-domain <span class="nv">$pivotalOrg</span> <span class="nv">$trafficMgrDomain</span>
cf create-route <span class="nv">$pivotalSpace</span> <span class="nv">$trafficMgrDomain</span>
cf map-route multicloudapp <span class="nv">$trafficMgrDomain</span>
<span class="c">#</span>
<span class="c"># Then do this for the CF Cluster on Azure</span>
<span class="c">#</span>
<span class="nv">$azureCfApiEndpoint</span><span class="o">=</span><span class="s2">"api.</span><span class="nv">$azureCfPublicIp</span><span class="s2">.xip.io"</span>
cf login <span class="nt">-a</span> <span class="nv">$azureCfApiEndpoint</span>
cf target <span class="nt">-o</span> <span class="nv">$azureOrg</span> <span class="nt">-s</span> <span class="nv">$azureSpace</span>
cf create-domain <span class="nv">$azureOrg</span> <span class="nv">$trafficMgrDomain</span>
cf create-route <span class="nv">$azureSpace</span> <span class="nv">$trafficMgrDomain</span>
cf map-route multicloudapp <span class="nv">$trafficMgrDomain</span>
</code></pre></div></div>
<p>Now let’s give it a try again and see what happens. This time we should see our Ruby sample app running and showing that it runs in Pivotal, since we gave the Pivotal-based deployment the higher priority within Azure Traffic Manager.
<img src="https://raw.githubusercontent.com/mszcool/cfMultiCloudSample/master/images/Figure06-ItWorks.png" alt="it works" /></p>
<h4 id="fixing-routes-on-azure-with-traffic-manager">Fixing Routes on Azure with Traffic Manager</h4>
<p>After I did the route mapping on Azure, Traffic Manager still claimed that the Azure side of the house was <strong>Degraded</strong>, despite having the route configured. Initially, I didn’t understand why.</p>
<p>I didn’t have this problem when I first tried this setup. But back then, I had <strong>not assigned a DNS name to the Cloud Foundry Public IP</strong> in Azure. I changed that because I tried something else in between and assigned a DNS name to the Azure public IP for the CF cluster. This led Traffic Manager to route against that DNS name instead of the IP.</p>
<p>For troubleshooting, I initiated a fail-over and stopped the app on the Pivotal side (see next section) to make sure Traffic Manager would try to route to Azure. A <code class="language-plaintext highlighter-rouge">tracert</code> finally told me what was going on:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">C:\code\github\mszcool\cfMultiCloudSample</span><span class="w"> </span><span class="p">[</span><span class="n">master</span><span class="w"> </span><span class="err">≡</span><span class="p">]</span><span class="err">></span><span class="w"> </span><span class="nx">tracert</span><span class="w"> </span><span class="nx">marioszpcfsummithybrid.trafficmanager.net</span><span class="w">
</span><span class="n">Tracing</span><span class="w"> </span><span class="nx">route</span><span class="w"> </span><span class="nx">to</span><span class="w"> </span><span class="nx">marioszpcfsimple.northeurope.cloudapp.azure.com</span><span class="w"> </span><span class="p">[</span><span class="mf">52.169</span><span class="o">.</span><span class="nf">87</span><span class="o">.</span><span class="nf">212</span><span class="p">]</span><span class="w">
</span><span class="n">over</span><span class="w"> </span><span class="nx">a</span><span class="w"> </span><span class="nx">maximum</span><span class="w"> </span><span class="nx">of</span><span class="w"> </span><span class="nx">30</span><span class="w"> </span><span class="nx">hops:</span><span class="w">
</span><span class="mi">1</span><span class="w"> </span><span class="mi">5</span><span class="w"> </span><span class="n">ms</span><span class="w"> </span><span class="nx">5</span><span class="w"> </span><span class="nx">ms</span><span class="w"> </span><span class="nx">4</span><span class="w"> </span><span class="nx">ms</span><span class="w"> </span><span class="nx">10.10.16.4</span><span class="w">
</span><span class="mi">2</span><span class="w"> </span><span class="mi">2</span><span class="w"> </span><span class="n">ms</span><span class="w"> </span><span class="nx">1</span><span class="w"> </span><span class="nx">ms</span><span class="w"> </span><span class="nx">1</span><span class="w"> </span><span class="nx">ms</span><span class="w"> </span><span class="nx">80.146.218.2</span><span class="w">
</span><span class="mi">3</span><span class="w"> </span><span class="mi">2</span><span class="w"> </span><span class="n">ms</span><span class="w"> </span><span class="nx">1</span><span class="w"> </span><span class="nx">ms</span><span class="w"> </span><span class="nx">2</span><span class="w"> </span><span class="nx">ms</span><span class="w"> </span><span class="nx">62.156.233.185</span><span class="w">
</span><span class="mi">4</span><span class="w"> </span><span class="mi">5</span><span class="w"> </span><span class="n">ms</span><span class="w"> </span><span class="nx">5</span><span class="w"> </span><span class="nx">ms</span><span class="w"> </span><span class="nx">5</span><span class="w"> </span><span class="nx">ms</span><span class="w"> </span><span class="nx">87.190.232.17</span><span class="w">
</span><span class="mi">5</span><span class="w"> </span><span class="mi">8</span><span class="w"> </span><span class="n">ms</span><span class="w"> </span><span class="nx">7</span><span class="w"> </span><span class="nx">ms</span><span class="w"> </span><span class="nx">7</span><span class="w"> </span><span class="nx">ms</span><span class="w"> </span><span class="nx">f-ed1-i.F.DE.NET.DTAG.DE</span><span class="w"> </span><span class="p">[</span><span class="mf">62.154</span><span class="o">.</span><span class="nf">14</span><span class="o">.</span><span class="nf">118</span><span class="p">]</span><span class="w">
</span></code></pre></div></div>
<p>When looking at the resolved route, we immediately spot that the Traffic Manager domain gets resolved to the <em>cloudapp.azure.com</em> domain of the Azure public IP. So my route on the CF side of the house was just wrong. The route for Azure should not go against the Traffic Manager domain, but rather against the custom domain assigned to the Cloud Foundry cluster’s public IP in Azure:</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">cf</span><span class="w"> </span><span class="nx">map-route</span><span class="w"> </span><span class="nx">multicloudapp</span><span class="w"> </span><span class="nx">marioszpcfsimple.northeurope.cloudapp.azure.com</span><span class="w">
</span><span class="n">C:\code\github\mszcool\cfMultiCloudSample</span><span class="w"> </span><span class="p">[</span><span class="n">master</span><span class="w"> </span><span class="err">≡</span><span class="p">]</span><span class="err">></span><span class="w"> </span><span class="nx">cf</span><span class="w"> </span><span class="nx">routes</span><span class="w">
</span><span class="n">Getting</span><span class="w"> </span><span class="nx">routes</span><span class="w"> </span><span class="nx">for</span><span class="w"> </span><span class="nx">org</span><span class="w"> </span><span class="nx">default_organization</span><span class="w"> </span><span class="nx">/</span><span class="w"> </span><span class="nx">space</span><span class="w"> </span><span class="nx">dev</span><span class="w"> </span><span class="nx">as</span><span class="w"> </span><span class="nx">admin</span><span class="w"> </span><span class="o">...</span><span class="w">
</span><span class="n">space</span><span class="w"> </span><span class="nx">host</span><span class="w"> </span><span class="nx">domain</span><span class="w"> </span><span class="nx">port</span><span class="w"> </span><span class="nx">path</span><span class="w"> </span><span class="nx">type</span><span class="w"> </span><span class="nx">apps</span><span class="w"> </span><span class="nx">service</span><span class="w">
</span><span class="n">dev</span><span class="w"> </span><span class="nx">52.169.87.212</span><span class="w">
</span><span class="n">dev</span><span class="w"> </span><span class="nx">marioszpcfsimple.northeurope.cloudapp.azure.com</span><span class="w"> </span><span class="nx">multicloudapp</span><span class="w">
</span><span class="n">dev</span><span class="w"> </span><span class="nx">marioszpcfsummithybrid.trafficmanager.net</span><span class="w"> </span><span class="nx">multicloudapp</span><span class="w">
</span></code></pre></div></div>
<p>This indeed fixed the situation, and finally my Azure deployment was recognized as <strong>Online</strong> on the Traffic Manager side of the house as well.</p>
<p><strong>Important Note:</strong> this fix is needed only if you have a public DNS name assigned to your public IP address for the Cloud Foundry cluster in Microsoft Azure. If you just map the public IP address itself (only do this for static IPs, if at all), then this step is not needed.</p>
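<p>The underlying behavior can be sketched as plain host matching: the CF router only forwards requests whose <code class="language-plaintext highlighter-rouge">Host</code> header matches a registered route, and Traffic Manager’s probes arrive with the resolved <em>cloudapp.azure.com</em> name. The following is a mocked sketch with no real Cloud Foundry calls; the route names mirror my setup but are hard-coded here:</p>

```shell
# Mocked sketch of CF router host matching. "routes" imitates the output of
# "cf routes" after the fix above; no real Cloud Foundry is involved.
routes="marioszpcfsummithybrid.trafficmanager.net
marioszpcfsimple.northeurope.cloudapp.azure.com"

route_exists() {
  # exact, full-line match of the incoming Host header against known routes
  echo "$routes" | grep -q -x "$1"
}

if route_exists "marioszpcfsimple.northeurope.cloudapp.azure.com"; then
  echo "routed to multicloudapp"
else
  echo "404 - unknown route"
fi
```

<p>Before the fix, only the trafficmanager.net route existed, so the probe’s host name fell into the 404 branch and the endpoint was reported as Degraded.</p>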
<h4 id="testing-a-failover">Testing a failover</h4>
<p>Of course, we want to test whether our failover strategy really works. For this purpose, we kill the app in the Pivotal environment by executing the following commands:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cf login <span class="nt">-a</span> <span class="nv">$pivotalApiEndpoint</span>
cf target <span class="nt">-o</span> <span class="nv">$pivotalOrg</span> <span class="nt">-s</span> <span class="nv">$pivotalSpace</span>
cf stop multicloudapp
</code></pre></div></div>
<p>After that, we need to <strong>wait a while</strong> until Traffic Manager detects that the application is not healthy. It then also might take a few seconds or minutes until the DNS record updates are propagated before we see the failover working (the smallest DNS TTL you can set is 300s as of today).</p>
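<p>As a rough back-of-the-envelope calculation of that wait: detection time depends on the probe interval and how many failed probes are tolerated, and on top of that clients keep the old DNS answer cached for up to one TTL. The numbers below are assumed defaults for illustration; check your actual Traffic Manager profile settings:</p>

```shell
# Rough worst-case failover delay. The probe interval and tolerated failure
# count are assumptions for illustration, not values read from a real profile.
probe_interval=30       # seconds between health probes (assumed)
tolerated_failures=3    # failed probes before an endpoint is marked Degraded (assumed)
dns_ttl=300             # smallest DNS TTL configurable as of this writing

detection=$(( probe_interval * (tolerated_failures + 1) ))
worst_case=$(( detection + dns_ttl ))
echo "detection=${detection}s worst_case=${worst_case}s"
```

<p>So even in this optimistic sketch, clients may need several minutes before they follow the failover, which matches what we observe in the portal.</p>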
<p>To watch what is going on, the simplest way is looking at the Azure Portal and opening up the Azure Traffic Manager configuration. At some point in time we should see that one of the endpoints changes its status from <strong>Online</strong> to <strong>Degraded</strong>. When opening up a browser and trying to navigate to the Traffic Manager URL, we should now get redirected to the Azure-based deployment (which we can see since our app outputs the content of the environment variable we set differently for each of the deployments before):</p>
<p><img src="https://raw.githubusercontent.com/mszcool/cfMultiCloudSample/master/images/Figure07-FailoverInAction.png" alt="failover test" /></p>
<h4 id="final-words">Final Words</h4>
<p>I hope this gives you a nice start in setting up a Multi-Cloud Cloud Foundry environment across Azure and a 3rd-party cloud or your own data center. I will try to continue this conversation on my blog, for sure. There are tons of other cool things to explore with Cloud Foundry in relationship to Azure, and I’ll at least try to cover some of those. Let me know what you think by contacting me through <a href="http://twitter.com/mszcool">twitter.com/mszcool</a>!</p>Mario SzpusztaThis week I was presenting at the CloudFoundry Summit 2016 Europe in Frankfurt, of course about running CloudFoundry on Azure and Azure Stack. It was great being here, especially because one of my two main Global ISV partners I am working with on the engineering side has been here as well and is even a Gold-sponsor of the event. It was indeed an honor and great pleasure for me to be part of this summit here … and great to finally have a technical session at a non-Microsoft conference, again:) Indeed, one reason for that blog-post is because I ran out of time during my session and was able to show only small parts of the last demo. Anyways, let’s get to the more technical part of this blog-post. My session was all about running CF in Public, Private as well as Hybrid Clouds with Azure being involved in some way. This is highly relevant since most enterprises are driving a multi-cloud strategy of some sort: Either they are embracing Hybrid cloud and run deployments in the public cloud as well as in their own data centers for various reasons or they want to distribute and minimize risk by running their solutions across two (or more) public cloud providers. 
Despite the fact that my session was focused on running Cloud Foundry on Azure, a lot of the concepts and architectural insights presented can be re-used for other kinds of deployments with other cloud vendors or private clouds, as well.Instance Metadata for Azure Virtual Machines (by 2016)2016-08-11T11:00:00+00:002016-08-11T11:00:00+00:00http://blog.mszcool.com/wordpressarchive/2016/08/11/instance-metadata-and-azure-vms-in-2016<p>At SAP Sapphire we <a href="https://azure.microsoft.com/en-us/blog/azure-offers-market-leading-support-for-sap-hana-workloads/">announced the availability of SAP HANA on Azure</a>. My little contribution to this was working on a case that was <a href="https://www.youtube.com/watch?v=-Qh1XMn5cHk">shown as a demo in the keynote at SAP Sapphire 2016</a>: Sports Basement with HANA on Azure. It was meant as a show-case and proof for running HANA One workloads in Azure DS14 VMs, and it was the first case of HANA on Azure running productively outside of the SAP HANA on Azure Large Instances.</p>
<p>While we proved we can run HANA One in DS14, what’s still missing is the official Marketplace image. We are working on the on-boarding of HANA One into the Azure Marketplace at the time I am writing this post. This post is about a very specific challenge which I know many others face as well. While Azure will have a built-in solution, it is not available today (August 2016), so this might be of help for you!</p>
<!--more-->
<h2 id="scenario-a-vm-reading-and-modifying-data-about-itself">Scenario: A VM reading and modifying data about itself</h2>
<p>This is a very common scenario. HANA One needs it as well. On other cloud platforms, especially AWS, a Virtual Machine can query information about itself without any hurdles through an <a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html">instance metadata service</a>. On Azure, as powerful as it is, we don’t have such a service available yet (as of August 2016). To be precise, we do, but it currently delivers information about regular maintenance only. See <a href="https://azure.microsoft.com/en-us/blog/what-just-happened-to-my-vm-in-vm-metadata-service/">here</a> for further details. While such a service is in the works, it is not available yet.</p>
<p>Instance metadata is especially interesting for software providers which want to offer their solutions through the marketplace. The metadata can be used for various aspects including association and validation of licenses or protection of software assets inside of the VM.</p>
<p>But what if a VM needs to modify settings through cloud provider management APIs automatically? Even with an instance metadata service available, such requirements need a more advanced approach.</p>
<h2 id="solution-a-possible-approach-outlined">Solution: A possible approach outlined</h2>
<p>Based on that, I started thinking about this challenge, prototyping it and sharing it with the broader technical community. With Azure having the concept of <strong><em>Service Principals</em></strong> available, I tried the following path:</p>
<ol>
<li>If we could pass in a <strong>Service Principal</strong> at the creation of the VM, we’d have all we need to call into Azure Resource Manager APIs.</li>
<li>The VM can identify itself through its “Unique VM ID”. So we could query Azure Resource Manager APIs and find the VM based on this ID.</li>
<li>For Marketplace use cases it is necessary that the user is FORCED to enter the credentials. So an ARM template with mandatory parameters for passing in the details of the Service Principal credentials is needed.</li>
</ol>
<p>With this in place we can solve both problems with a single solution: equipped with the right permissions, a Service Principal can query instance metadata through Azure Resource Manager APIs and modify virtual machine settings at the same time. Indeed, the Azure Cloud Foundry Bosh solution uses that approach as well, although it does not need to “identify” virtual machines; it just creates and deletes them…</p>
<p>For most Marketplace vendors, incl. the case above, the VM needs to change details about itself. So there would need to be a way for the VM to find itself through the VM Unique ID. Since nobody was able to answer the question whether that’s possible, I prototyped it with the Azure CLI.</p>
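<p>The core of that lookup can be sketched with mock data. Everything below is invented for illustration: on a real Azure VM the Unique VM ID would come from SMBIOS/DMI inside the guest, and the list of VMs from an ARM query via the CLI; here both are hard-coded so the sketch runs anywhere:</p>

```shell
# Mocked sketch: a VM finds "itself" by matching its Unique VM ID against a
# management-API listing. IDs and names are invented; on a real VM the ID
# would be read from SMBIOS/DMI and the list fetched from Azure Resource Manager.
myVmId="9f5a0a2e-1234-5678-9abc-def012345678"

# Mocked "vm list" result: one "name vmId" pair per line.
vmList="vm-a 11111111-2222-3333-4444-555555555555
vm-b 9f5a0a2e-1234-5678-9abc-def012345678"

# Select the VM whose vmId equals our own ID.
myVmName=$(echo "$vmList" | awk -v id="$myVmId" '$2 == id { print $1 }')
echo "$myVmName"
```

<p>In the real prototype this matching happens against JSON output of the Azure CLI, but the principle is the same: the ID seen inside the guest is the join key to the management plane.</p>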
<p><strong>Important Note:</strong> This is considered to be a prototype to proof if what is outlined above generally works. For production scenarios you’d need to code this in professional frameworks, better protect secrets by using those and build this into your product.</p>
<p><strong>GitHub Repository:</strong> I’ve prototyped the entire solution and published it on my GitHub Repository here:</p>
<p>–» <a href="https://github.com/mszcool/azureSpBasedInstanceMetadata">https://github.com/mszcool/azureSpBasedInstanceMetadata</a></p>
<h2 id="step-1-create-a-service-principal">Step #1: Create a Service Principal</h2>
<p>The first step is creating a <strong>Service Principal</strong>. That is not an easy task, especially when you think about offerings in a Marketplace where business people want to have fast and simple on-boarding.</p>
<p>Guess why I created this <a href="https://github.com/mszcool/azureAdMultiTenantServicePrincipal">solution-prototype on my GitHub repository</a> (with a <a href="http://blog.mszcool.com/index.php/2016/06/a-deep-dive-into-azure-ad-multi-tenant-apps-oauthopenidconnect-flows-admin-consent-and-azure-ad-graph-api/">follow-up blog-post</a>). The idea of this prototype is to provide a ready-to-use service that creates Service Principals in your own subscription.</p>
<p>I still run this on my Azure Subscription, so if you need a Service Principal and you don’t like scripting, just <a href="https://mszcoolserviceprincipal.azurewebsites.net/">use my tool for creating it</a>. <strong>Note:</strong> please use in-private browsing and sign-in with a Global Admin (or get a Global Admin who does an Admin-Consent for my tool in your tenant).</p>
<p>If you love scripting, then you can use tools such as the <a href="https://azure.microsoft.com/en-us/documentation/articles/powershell-install-configure/">Azure PowerShell</a> or the <a href="https://azure.microsoft.com/en-us/documentation/articles/xplat-cli-install/">Azure Cross-Platform CLI</a>. In my prototype, I built the entire set of scripts with the Azure CLI and tested them on Ubuntu Linux (14.04 LTS). Even cooler, I actually developed and debugged all the scripts on the new <a href="http://www.hanselman.com/blog/DevelopersCanRunBashShellAndUsermodeUbuntuLinuxBinariesOnWindows10.aspx">Bash on Ubuntu on Windows</a>:
<img src="https://raw.githubusercontent.com/mszcool/azureSpBasedInstanceMetadata/master/blogimages/Figure01.png" alt="Bash on Windows" /></p>
<p>The script <a href="https://github.com/mszcool/azureSpBasedInstanceMetadata/blob/master/createsp.sh">createsp.sh</a> shows a sample script which creates a Service Principal and assigns it the roles needed to read VM metadata in the subscription (it would be better to target just the resource group in which you want to create the VM… I kept it like that for convenience).</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Each Service Principal in Azure AD is backed by an 'Application-registration'</span>
azure ad app create <span class="nt">--name</span> <span class="s2">"</span><span class="nv">$servicePrincipalName</span><span class="s2">"</span> <span class="se">\</span>
<span class="nt">--home-page</span> <span class="s2">"</span><span class="nv">$servicePrincipalIdUri</span><span class="s2">"</span> <span class="se">\</span>
<span class="nt">--identifier-uris</span> <span class="s2">"</span><span class="nv">$servicePrincipalIdUri</span><span class="s2">"</span> <span class="se">\</span>
<span class="nt">--reply-urls</span> <span class="s2">"</span><span class="nv">$servicePrincipalIdUri</span><span class="s2">"</span> <span class="se">\</span>
<span class="nt">--password</span> <span class="nv">$servicePrincipalPwd</span>
<span class="c"># I use JQ to extract data out of JSON results such as the AppId</span>
<span class="nv">createdAppJson</span><span class="o">=</span><span class="si">$(</span>azure ad app show <span class="nt">--identifierUri</span> <span class="s2">"</span><span class="nv">$servicePrincipalIdUri</span><span class="s2">"</span> <span class="nt">--json</span><span class="si">)</span>
<span class="nv">createdAppId</span><span class="o">=</span><span class="si">$(</span><span class="nb">echo</span> <span class="nv">$createdAppJson</span> | jq <span class="nt">--raw-output</span> <span class="s1">'.[0].appId'</span><span class="si">)</span>
azure ad sp create <span class="nt">--applicationId</span> <span class="s2">"</span><span class="nv">$createdAppId</span><span class="s2">"</span>
</code></pre></div></div>
<p><strong>Note:</strong> I created the App and the Service Principal in two separate steps because I needed to read both the App ID (which is required to log in with the Service Principal through the Azure CLI) and the Service Principal Object ID along the way.</p>
<p><strong>Note:</strong> JQ is a really handy command-line tool for extracting data from the JSON responses of the Azure CLI. Take a look at further details <a href="https://stedolan.github.io/jq/">here</a>.</p>
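<p>To make the JQ pattern used throughout this post concrete, here is a self-contained run against canned JSON shaped like the Azure CLI output (all names and IDs below are made up):</p>

```shell
# Canned JSON, shaped like the output of `azure account list --json` (made-up data).
accountsJson='[{"name":"Dev","id":"1111-aaaa","tenantId":"tttt-dev"},
               {"name":"Prod","id":"2222-bbbb","tenantId":"tttt-prod"}]'
# Select-by-attribute pattern: pass the lookup value in via --arg and
# filter the array down to the matching entry.
subId=$(echo "$accountsJson" | jq --raw-output --arg pSubName "Prod" '.[] | select(.name == $pSubName) | .id')
echo "$subId"   # 2222-bbbb
```

<p>This is the same select-by-name pattern used later in the post to look up the subscription and tenant IDs.</p>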
<p>After the Service Principal and the App are both created, I can assign the roles to the Service Principal so that it can query the VM metadata in my subscription:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># If I would create the resource group earlier, I could use the</span>
<span class="c"># --resource-group switch instead of the --subscription switch here to scope</span>
<span class="c"># permissions to the resource group of the VM to-be-created, only.</span>
azure role assignment create <span class="nt">--objectId</span> <span class="s2">"</span><span class="nv">$createSpObjectId</span><span class="s2">"</span> <span class="se">\</span>
<span class="nt">--roleName</span> Reader <span class="se">\</span>
<span class="nt">--subscription</span> <span class="s2">"</span><span class="nv">$subId</span><span class="s2">"</span>
</code></pre></div></div>
<p>Finally, to complete the setup, I needed the Tenant ID of the Azure AD tenant associated with the target subscription, which is also required for logging in with a Service Principal through the Azure CLI. The following code snippet sits at the very beginning of the <a href="https://github.com/mszcool/azureSpBasedInstanceMetadata/blob/master/createsp.sh">createsp.sh</a> script:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Get the entry for the target subscription</span>
<span class="nv">accountsJson</span><span class="o">=</span><span class="si">$(</span>azure account list <span class="nt">--json</span><span class="si">)</span>
<span class="c"># The Subscription ID is needed throughout the script</span>
<span class="nv">subId</span><span class="o">=</span><span class="si">$(</span><span class="nb">echo</span> <span class="nv">$accountsJson</span> | jq <span class="nt">--raw-output</span> <span class="nt">--arg</span> pSubName <span class="nv">$subscriptionName</span> <span class="s1">'.[] | select(.name == $pSubName) | .id'</span><span class="si">)</span>
<span class="c"># Finally get the TenantID of the Azure AD tenant which is associated to the Azure Subscription:</span>
<span class="nv">tenantId</span><span class="o">=</span><span class="si">$(</span><span class="nb">echo</span> <span class="nv">$accountsJson</span> | jq <span class="nt">--raw-output</span> <span class="nt">--arg</span> pSubName <span class="nv">$subscriptionName</span> <span class="s1">'.[] | select(.name == $pSubName) | .tenantId'</span><span class="si">)</span>
</code></pre></div></div>
<p>With the <strong>tenantId</strong>, the <strong>appId</strong> and the password chosen during app creation in place, we can log in with the service principal using the Azure CLI as follows:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>azure telemetry <span class="nt">--disable</span>
azure config mode arm
azure login <span class="nt">--username</span> <span class="s2">"</span><span class="nv">$appId</span><span class="s2">"</span> <span class="nt">--service-principal</span> <span class="nt">--tenant</span> <span class="s2">"</span><span class="nv">$tenantId</span><span class="s2">"</span> <span class="nt">--password</span> <span class="s2">"</span><span class="nv">$pwd</span><span class="s2">"</span>
</code></pre></div></div>
<p><strong>Note:</strong> Since we want to log in from a script that runs automated in the VM to extract the metadata for an application at provisioning time (in my sample; in the real world this could also happen on a regular basis with a cron job or similar), we need to make sure to avoid any user prompts. The latest versions of the Azure CLI prompt for telemetry data collection on the first call after installation, so in an automation script you should always turn this off with the first command (<code class="language-plaintext highlighter-rouge">azure telemetry --disable</code>) in your script.</p>
<h2 id="step-2-a-metadata-extraction-script">Step #2: A Metadata Extraction Script</h2>
<p>Okay, now we have a <strong>Service Principal</strong> that backend jobs can use to extract metadata for the VM in an automated way, e.g. with the Azure CLI. Next we need a script that does exactly that. For my prototype, I’ve created a shell script (<a href="https://github.com/mszcool/azureSpBasedInstanceMetadata/blob/master/readmeta.sh">readmeta.sh</a>) which I inject into the VM through the Custom Script Extension for Linux.</p>
<p><strong>Note:</strong> Since the SAP HANA One team uses Linux as their primary OS, I just developed the entire prototype with Shell-Scripts for Linux. But fortunately, due to the <a href="http://www.hanselman.com/blog/DevelopersCanRunBashShellAndUsermodeUbuntuLinuxBinariesOnWindows10.aspx">Bash on Ubuntu on Windows 10</a>, you can also run those from your Windows 10 machine right away (if you have the 2016 Anniversary Update installed).</p>
<p>You can dig into the depths of the entire <a href="https://github.com/mszcool/azureSpBasedInstanceMetadata/blob/master/readmeta.sh">readmeta.sh</a> script if you’re interested. In it, I extract VM and networking details only, to show how to crack the VM UUID and how to extract related items which are exposed in ARM as separate resources attached to the VM.</p>
<p>First things first: the script requires the Azure Cross-Platform CLI, which is not installed on a newly provisioned Azure VM. So the script starts with installing the prerequisites:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo mkdir</span> /home/metadata
<span class="nb">export </span><span class="nv">HOME</span><span class="o">=</span>/home/metadata
<span class="c">#</span>
<span class="c"># Install the pre-requisites using apt-get</span>
<span class="c">#</span>
<span class="nb">sudo </span>apt-get <span class="nt">-y</span> update
<span class="nb">sudo </span>apt-get <span class="nt">-y</span> <span class="nb">install </span>build-essential
<span class="nb">sudo </span>apt-get <span class="nt">-y</span> <span class="nb">install </span>jq
curl <span class="nt">-sL</span> https://deb.nodesource.com/setup_4.x | <span class="nb">sudo</span> <span class="nt">-E</span> bash -
<span class="nb">sudo </span>apt-get <span class="nt">-y</span> <span class="nb">install </span>nodejs
<span class="nb">sudo </span>npm <span class="nb">install</span> <span class="nt">-g</span> azure-cli
</code></pre></div></div>
<p><strong>Important Note:</strong> Since the script runs as a Custom Script Extension, it does not have things like a user HOME directory set. NodeJS and NPM need a home directory to work, so I set HOME to <code class="language-plaintext highlighter-rouge">/home/metadata</code>, which is also where I save all the metadata JSON responses during the script.</p>
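<p>A minimal sketch of that HOME fix, simulating the extension environment (the fallback path is the same one the script uses):</p>

```shell
# The Custom Script Extension runs without a login session, so HOME is
# typically not set; npm and the Azure CLI need one.
unset HOME                    # simulate the extension environment
: "${HOME:=/home/metadata}"   # assign the fallback only if HOME is unset or empty
export HOME
echo "$HOME"   # /home/metadata
```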
<p>The next hard thing was cracking the <a href="https://azure.microsoft.com/en-us/blog/accessing-and-using-azure-vm-unique-id/">VM Unique ID</a>. This Unique ID has been available in Azure for some time and identifies a Virtual Machine for its entire lifetime. It only changes when you delete the VM and re-create it; as long as you just provision/de-provision or start/shutdown/start the VM, the ID remains the same.</p>
<p>But the key question is whether you can use that ID to find a VM through the Azure Resource Manager REST APIs, read metadata about itself, or even change its settings. Obviously, the answer is <strong>yes</strong>, otherwise I would not write this post:). But the VM ID presented in responses from the ARM REST APIs is different from what you read inside the VM out of its BIOS asset tags, due to byte-ordering differences between the two representations, also documented <a href="https://azure.microsoft.com/en-us/blog/accessing-and-using-azure-vm-unique-id/">here</a>.</p>
<p>So in my Bash-script for reading the metadata, I had to convert the VM ID before trying to use it to find my VM through the ARM REST APIs as follows:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#</span>
<span class="c"># Read the VMID from the BIOS asset tag (skip the prefix, i.e. the first 6 characters)</span>
<span class="c">#</span>
<span class="nv">vmIdLine</span><span class="o">=</span><span class="si">$(</span><span class="nb">sudo </span>dmidecode | <span class="nb">grep </span>UUID<span class="si">)</span>
<span class="nb">echo</span> <span class="s2">"---- VMID ----"</span>
<span class="nb">echo</span> <span class="nv">$vmIdLine</span>
<span class="nv">vmId</span><span class="o">=</span><span class="k">${</span><span class="nv">vmIdLine</span>:6:37<span class="k">}</span>
<span class="nb">echo</span> <span class="s2">"---- VMID ----"</span>
<span class="nb">echo</span> <span class="nv">$vmId</span>
<span class="c">#</span>
<span class="c"># Now switch the order due to encoding differences between the Windows and Linux World</span>
<span class="c">#</span>
<span class="nv">vmIdCorrectParts</span><span class="o">=</span><span class="k">${</span><span class="nv">vmId</span>:20<span class="k">}</span>
<span class="nv">vmIdPart1</span><span class="o">=</span><span class="k">${</span><span class="nv">vmId</span>:0:9<span class="k">}</span>
<span class="nv">vmIdPart2</span><span class="o">=</span><span class="k">${</span><span class="nv">vmId</span>:10:4<span class="k">}</span>
<span class="nv">vmIdPart3</span><span class="o">=</span><span class="k">${</span><span class="nv">vmId</span>:15:4<span class="k">}</span>
<span class="nv">vmId</span><span class="o">=</span><span class="k">${</span><span class="nv">vmIdPart1</span>:7:2<span class="k">}${</span><span class="nv">vmIdPart1</span>:5:2<span class="k">}${</span><span class="nv">vmIdPart1</span>:3:2<span class="k">}${</span><span class="nv">vmIdPart1</span>:1:2<span class="k">}</span>-<span class="k">${</span><span class="nv">vmIdPart2</span>:2:2<span class="k">}${</span><span class="nv">vmIdPart2</span>:0:2<span class="k">}</span>-<span class="k">${</span><span class="nv">vmIdPart3</span>:2:2<span class="k">}${</span><span class="nv">vmIdPart3</span>:0:2<span class="k">}</span>-<span class="nv">$vmIdCorrectParts</span>
<span class="nv">vmId</span><span class="o">=</span><span class="k">${</span><span class="nv">vmId</span><span class="p">,,</span><span class="k">}</span>
<span class="nb">echo</span> <span class="s2">"---- VMID fixed ----"</span>
<span class="nb">echo</span> <span class="nv">$vmId</span>
</code></pre></div></div>
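<p>The conversion can also be wrapped into a small reusable function. The sketch below implements the same byte-swap on a clean UUID string (without the leading space that the <code class="language-plaintext highlighter-rouge">dmidecode</code> output carries); the sample UUID is made up:</p>

```shell
# Swap the byte order of the first three UUID groups (as reported by the
# BIOS asset tag) into the form the ARM REST APIs return; the last two
# groups are not byte-swapped.
swap_vmid() {
  u="$1"
  p1="${u:0:8}"; p2="${u:9:4}"; p3="${u:14:4}"; rest="${u:19}"
  s="${p1:6:2}${p1:4:2}${p1:2:2}${p1:0:2}-${p2:2:2}${p2:0:2}-${p3:2:2}${p3:0:2}-${rest}"
  echo "$s" | tr '[:upper:]' '[:lower:]'
}

swap_vmid "04030201-0605-0807-090A-0B0C0D0E0F10"   # 01020304-0506-0708-090a-0b0c0d0e0f10
```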
<p>That did the trick to get a VM ID which I can use to find my VM through ARM REST APIs, or through the Azure CLI since I am using bash-scripts here:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#</span>
<span class="c"># Login, and don't forget to turn off telemetry to avoid user prompts in an automation script.</span>
<span class="c">#</span>
azure telemetry <span class="nt">--disable</span>
azure config mode arm
azure login <span class="nt">--username</span> <span class="s2">"</span><span class="nv">$appId</span><span class="s2">"</span> <span class="nt">--service-principal</span> <span class="nt">--tenant</span> <span class="s2">"</span><span class="nv">$tenantId</span><span class="s2">"</span> <span class="nt">--password</span> <span class="s2">"</span><span class="nv">$pwd</span><span class="s2">"</span>
<span class="c">#</span>
<span class="c"># Get the details for the VM and save it</span>
<span class="c">#</span>
<span class="nv">vmJson</span><span class="o">=</span><span class="si">$(</span>azure vm list <span class="nt">--json</span> | jq <span class="nt">--arg</span> pVmId <span class="s2">"</span><span class="nv">$vmId</span><span class="s2">"</span> <span class="s1">'map(select(.vmId == $pVmId))'</span><span class="si">)</span>
<span class="nb">echo</span> <span class="nv">$vmJson</span> <span class="o">></span> /home/metadata/vmmetadatalist.json
<span class="nb">echo</span> <span class="s2">"---- VM JSON ----"</span>
<span class="nb">echo</span> <span class="nv">$vmJson</span>
</code></pre></div></div>
<p>What you see above is that there is currently (as of August 2016) no way to query the Azure Resource Manager REST APIs by VM Unique ID; only attributes such as resource group and VM name can be used, and the same applies to the Azure CLI. Therefore I retrieve a list of VMs and filter it down with JQ by the VM ID, which fortunately is delivered as an attribute in the JSON response of the ARM REST APIs.</p>
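<p>The filter-by-ID step can be tried in isolation on canned data (the VM list below is a made-up, trimmed-down version of the <code class="language-plaintext highlighter-rouge">azure vm list --json</code> shape):</p>

```shell
# Hypothetical, trimmed VM list as the Azure CLI would return it.
vmListJson='[{"vmId":"aaa-111","name":"vm-one","resourceGroupName":"rg-one"},
             {"vmId":"bbb-222","name":"vm-two","resourceGroupName":"rg-two"}]'
# map(select(...)) keeps the result an array, so later .[0] lookups still work.
vmJson=$(echo "$vmListJson" | jq --arg pVmId "bbb-222" 'map(select(.vmId == $pVmId))')
echo "$vmJson" | jq -r '.[0].name'   # vm-two
```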
<p>Now we have our first metadata asset: a simple list entry with basic attributes for the VM in which we are running. But what if you need more details? The obvious way is to execute an <code class="language-plaintext highlighter-rouge">azure vm show --json</code> command to get the full VM JSON. But even that will not include all details. Let’s say you need the public or the private IP address assigned to the VM. You then need to navigate the relationships between the Azure Resource Manager assets (specifically, the VM and the Network Interface Card resource). That is where it gets a bit tricky:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#</span>
<span class="c"># Get the detailed VM JSON with relationship attributes (e.g. the NIC identified through its unique Resource ID)</span>
<span class="c">#</span>
<span class="nv">vmResGroup</span><span class="o">=</span><span class="si">$(</span><span class="nb">echo</span> <span class="nv">$vmJson</span> | jq <span class="nt">-r</span> <span class="s1">'.[0].resourceGroupName'</span><span class="si">)</span>
<span class="nv">vmName</span><span class="o">=</span><span class="si">$(</span><span class="nb">echo</span> <span class="nv">$vmJson</span> | jq <span class="nt">-r</span> <span class="s1">'.[0].name'</span><span class="si">)</span>
<span class="nv">vmDetailedJson</span><span class="o">=</span><span class="si">$(</span>azure vm show <span class="nt">--json</span> <span class="nt">-n</span> <span class="s2">"</span><span class="nv">$vmName</span><span class="s2">"</span> <span class="nt">-g</span> <span class="s2">"</span><span class="nv">$vmResGroup</span><span class="s2">"</span><span class="si">)</span>
<span class="nb">echo</span> <span class="nv">$vmDetailedJson</span> <span class="o">></span> /home/metadata/vmmetadatadetails.json
<span class="c">#</span>
<span class="c"># Then get the NIC for the VM through ARM / Azure CLI</span>
<span class="c">#</span>
<span class="nv">vmNetworkResourceName</span><span class="o">=</span><span class="si">$(</span><span class="nb">echo</span> <span class="nv">$vmJson</span> | jq <span class="nt">-r</span> <span class="s1">'.[0].networkProfile.networkInterfaces[0].id'</span><span class="si">)</span>
<span class="nv">netJson</span><span class="o">=</span><span class="si">$(</span>azure network nic list <span class="nt">-g</span> <span class="nv">$vmResGroup</span> <span class="nt">--json</span> | jq <span class="nt">--arg</span> pVmNetResName <span class="s2">"</span><span class="nv">$vmNetworkResourceName</span><span class="s2">"</span> <span class="s1">'.[] | select(.id == $pVmNetResName)'</span><span class="si">)</span>
<span class="nb">echo</span> <span class="nv">$netJson</span> <span class="o">></span> /home/metadata/vmnetworkdetails.json
<span class="c">#</span>
<span class="c"># The private IP is contained in the previously received NIC config (netJson)</span>
<span class="c">#</span>
<span class="nv">netIpConfigsForVm</span><span class="o">=</span><span class="si">$(</span><span class="nb">echo</span> <span class="nv">$netJson</span> | jq <span class="s1">'{ "ipCfgs": .ipConfigurations }'</span><span class="si">)</span>
<span class="nb">echo</span> <span class="nv">$netIpConfigsForVm</span> <span class="o">></span> /home/metadata/vmipconfigs.json
<span class="c">#</span>
<span class="c"># But the public IP is a separate resource in ARM, so you need to navigate and execute a further call</span>
<span class="c">#</span>
<span class="nv">netIpPublicResourceName</span><span class="o">=</span><span class="si">$(</span><span class="nb">echo</span> <span class="nv">$netJson</span> | jq <span class="nt">-r</span> <span class="s1">'.ipConfigurations[0].publicIPAddress.id'</span><span class="si">)</span>
<span class="nv">netIpPublicJson</span><span class="o">=</span><span class="si">$(</span>azure network public-ip list <span class="nt">-g</span> <span class="nv">$vmResGroup</span> <span class="nt">--json</span> | jq <span class="nt">--arg</span> ipid <span class="nv">$netIpPublicResourceName</span> <span class="s1">'.[] | select(.id == $ipid)'</span><span class="si">)</span>
<span class="nb">echo</span> <span class="nv">$netIpPublicJson</span> <span class="o">></span> /home/metadata/vmipconfigspublicip.json
</code></pre></div></div>
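<p>The navigation from the NIC to the public IP boils down to following a nested resource ID. On canned data (all resource IDs below are invented), that extraction looks like this:</p>

```shell
# Trimmed-down NIC JSON; real responses carry many more attributes.
netJson='{"ipConfigurations":[{"privateIPAddress":"10.0.0.4",
          "publicIPAddress":{"id":"/subscriptions/sub1/resourceGroups/rg1/providers/Microsoft.Network/publicIPAddresses/ip1"}}]}'
# The public IP is only referenced by its resource ID here;
# a second API call is needed to resolve the actual address.
pubIpId=$(echo "$netJson" | jq -r '.ipConfigurations[0].publicIPAddress.id')
echo "$pubIpId"
```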
<p>This should give you enough of the needed concepts to read all sorts of VM metadata for your own VM using Bash scripting. If you want to translate this to Java, .NET, NodeJS or whatever language you use, look at the management libraries for the respective runtimes/languages.</p>
<h2 id="step-3-putting-it-all-together---the-arm-template">Step #3: Putting it all together - the ARM template</h2>
<p>Finally, we need to put all of this together! That happens in an ARM template and the parameters this template asks the user to enter at provisioning time. A similar ARM template could be built for a <a href="https://azure.microsoft.com/en-us/documentation/articles/marketplace-publishing-solution-template-creation/">solution-template-based Marketplace offer</a>.</p>
<p>On my GitHub repository for this prototype, the ARM template and its parameters are baked into the files <a href="https://raw.githubusercontent.com/mszcool/azureSpBasedInstanceMetadata/master/azuredeploy.json">azuredeploy.json</a> and <a href="https://raw.githubusercontent.com/mszcool/azureSpBasedInstanceMetadata/master/azuredeploy.parameters.json">azuredeploy.parameters.json</a>. I won’t go through all details of these templates. The most important aspects are in the parameters-section and in the VM creation section where I hook up the Service Principal with the Script and attach it as a Custom Script Extension. Start with an excerpt of the “parameters”-section of the template:</p>
<pre><code class="language-JSON">"parameters": {
"storageAccountName": {
"type": "string"
},
"dnsNameForPublicIP": {
"type": "string"
},
"adminUserName": {
"type": "string"
},
"adminPassword": {
"type": "securestring"
},
"azureAdTenantId": {
"type": "string"
},
"azureAdAppId": {
"type": "string"
},
"azureAdAppSecret": {
"type": "securestring"
},
...
},
...
</code></pre>
<p>The important parameters are <code class="language-plaintext highlighter-rouge">azureAdTenantId</code>, <code class="language-plaintext highlighter-rouge">azureAdAppId</code> and <code class="language-plaintext highlighter-rouge">azureAdAppSecret</code>. Together they form the sign-in credentials for the Service Principal, which the script described in the previous section uses to automatically read the metadata for the VM at provisioning time.</p>
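<p>For illustration, the corresponding section of a parameters file could look as follows (all GUIDs and values are made-up placeholders):</p>

```JSON
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "azureAdTenantId": { "value": "11111111-2222-3333-4444-555555555555" },
    "azureAdAppId": { "value": "66666666-7777-8888-9999-aaaaaaaaaaaa" },
    "azureAdAppSecret": { "value": "<the password chosen when creating the app>" }
  }
}
```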
<p>Reading the metadata is initiated through specifying my <a href="https://raw.githubusercontent.com/mszcool/azureSpBasedInstanceMetadata/master/readmeta.sh">readmeta.sh</a>-script as a custom script extension for the VM in the ARM template as below:</p>
<pre><code class="language-JSON">...
{
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "[concat(parameters('vmName'),'/writemetadatajson')]",
"apiVersion": "2015-06-15",
"location": "[parameters('location')]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'))]"
],
"properties": {
"publisher": "Microsoft.OSTCExtensions",
"type": "CustomScriptForLinux",
"typeHandlerVersion": "1.5",
"settings": {
"fileUris": [
"[concat('https://', parameters('storageAccountName'), '.blob.core.windows.net/customscript/readmeta.sh')]"
]
},
"protectedSettings": {
"commandToExecute": "[concat('bash readmeta.sh ', parameters('azureAdTenantId'), ' ', parameters('azureAdAppId'), ' ', parameters('azureAdAppSecret'))]"
}
}
}
...
</code></pre>
<p>Since the Azure Linux Custom Script Extension prints a lot of diagnostics details about what it is doing, we need to make sure that our sensitive data, especially the Service Principal’s password, is NOT included in those diagnostics logs, to keep it protected (well… as well as possible:)). Therefore the <code class="language-plaintext highlighter-rouge">commandToExecute</code>-setting is put into the <code class="language-plaintext highlighter-rouge">protectedSettings</code>-section, which is NOT disclosed in any diagnostics logs of the Custom Script Extension.</p>
<p><strong>Important Note:</strong> Many templates on the <a href="https://github.com/Azure/azure-quickstart-templates">Azure Quickstart Templates gallery</a> use custom script extension version <code class="language-plaintext highlighter-rouge">1.2</code>. To have the <code class="language-plaintext highlighter-rouge">commandToExecute</code>-setting in the <code class="language-plaintext highlighter-rouge">protectedSettings</code>-section, you have to use a newer version; for me, version <code class="language-plaintext highlighter-rouge">1.5</code>, the latest at the time of writing, worked. With the previous versions it just didn’t call the script.</p>
<h2 id="step-4-trying-it-out">Step #4: Trying it out…</h2>
<p>Before you can try things out, there’s one thing you need to prepare: create the storage account and upload the <code class="language-plaintext highlighter-rouge">readmeta.sh</code>-script into that account (argh, next time I’ll just have the scripts clone my GitHub repository:)). To make it easy, I created a script called <a href="https://raw.githubusercontent.com/mszcool/azureSpBasedInstanceMetadata/master/deploy.sh">deploy.sh</a> with 10 parameters that does everything:</p>
<ol>
<li>Create the Resource group</li>
<li>Create the storage account</li>
<li>Upload the script to the storage account</li>
<li>Update the parameters in azuredeploy.parameters.json to reflect your service principal attributes</li>
<li>Start the deployment with the template and the updated template parameters.</li>
</ol>
<p>While trying it out, I realized that although the 10 parameters make it flexible, it is still a hard start if you just want to quickly try this. So I created another bash script called <a href="https://raw.githubusercontent.com/mszcool/azureSpBasedInstanceMetadata/master/getstarted.sh">getstarted.sh</a>, which asks you for all the data interactively and then calls the <a href="https://raw.githubusercontent.com/mszcool/azureSpBasedInstanceMetadata/master/createsp.sh">createsp.sh</a> and <a href="https://raw.githubusercontent.com/mszcool/azureSpBasedInstanceMetadata/master/deploy.sh">deploy.sh</a> scripts based on the input you entered. Just like below:</p>
<p><img src="https://raw.githubusercontent.com/mszcool/azureSpBasedInstanceMetadata/master/blogimages/Figure02.png" alt="Getting Started" /></p>
<h2 id="final-words">Final Words</h2>
<p>With this in place, you have a solution that allows you to do both, reading instance metadata of the VM in which your software runs and also (with the right permissions set on the Service Principal) modify aspects of the VM through Azure Resource Manager APIs or Command Line Interfaces.</p>
<p>Sure, this reads like a long, complex procedure. It would be much easier if you could read instance metadata without authentication and Service Principals. All I can say is that this will change and become easier. But for now, this is a working solution, and I hope the assets provided here make it less complex for you to achieve this goal!</p>
<p>And even once a simpler solution for instance metadata is available in Azure, the content above shows you some advanced scripting concepts which I hope you can learn from. The coolest thing about it: since the Windows 10 Anniversary Update you can run all of the above on both Windows and Ubuntu Linux, <em>because</em> everything is written as Bash scripts.</p>
<p>For me, a nice side-effect of this was experiencing how mature the Windows Subsystem for Linux already seems to be. What really surprised me is that I can even run <a href="https://www.digitalocean.com/community/tutorials/how-to-install-node-js-with-nvm-node-version-manager-on-a-vps">Node Version Manager</a> and <a href="https://help.ubuntu.com/community/CompilingEasyHowTo">build-essential</a> on it (I even tried compiling Node.js v5 with it, and it ran through and works).</p>
<p>Anyways - if you have any questions, reach out to me on <a href="http://twitter.com/mszcool">Twitter</a>.</p>
<p><em>Mario Szpuszta</em></p>
<p><em>Summary:</em> At SAP Sapphire we announced the availability of SAP HANA on Azure. My little contribution to this was working on a case that was shown as a demo in the keynote at SAP Sapphire 2016: Sports Basement with HANA on Azure. It was meant as a showcase and proof for running HANA One workloads in Azure DS14 VMs, and it was the first case of HANA running productively on Azure outside of the SAP HANA on Azure Large Instances. While we proved we can run HANA One in DS14, the official Marketplace image is still missing; we are working on that on-boarding of HANA One into the Azure Marketplace at the time I am writing this post. This post is about a very specific challenge which I know many others face as well. While Azure will get a built-in solution, it is not available today (August 2016), so this might be of help for you!</p>
<h1>AzureAD - OAuth Flows for Multi-Tenant Web Applications that need to create Service Principals</h1>
<p><em>2016-06-29, http://blog.mszcool.com/wordpressarchive/2016/06/29/azuread-oauth-flow-of-multi-tenant-web-app-for-creating-service-principals</em></p>
<p>I am currently working with one of my main Global Independent Software Vendor (ISV) partners on on-boarding their solution into the Azure Marketplace. The main challenge we face there is that the solution needs to do some post-provisioning steps in the end-customer’s target subscription as well as in their Azure Active Directory tenant:</p>
<ul>
<li>Creating a Service Principal that can be used by the Software inside of the provisioned VM in the end-customer’s target directory.</li>
<li>Using that service principal to read data from the end-customer’s Azure Subscription.</li>
</ul>
<p>Note: the end customer in this case is the customer who purchases the product published by the ISV in the store!</p>
<p>Such cases typically require the creation of “multi-tenant” Azure Active Directory applications, and such an application then needs to access the end-customer’s target directory using the Azure AD Graph API. At the same time, creating service principals is not an easy task on its own.</p>
<!--more-->
<h3 id="a-multi-tenant-web-app-to-create-service-principals-as-sample">A Multi-Tenant Web App to create Service Principals as Sample</h3>
<p>To make this as practical as possible, I decided to create a web app that creates service principals in the target Azure Active Directory of an end-customer that’s using the web app.</p>
<p>This shows, how the general multi-tenancy challenge can be solved and at the same time provides a handy tool for creating Service Principals, which is a harder task on its own.</p>
<p>All the details for using the app and for cloning the source code are available on my GitHub-repository under the link below. In addition, I also run the app on my Azure Subscription as a free-tier Azure Web App.</p>
<ul>
<li>The app running: <a href="https://mszcoolserviceprincipal.azurewebsites.net">https://mszcoolserviceprincipal.azurewebsites.net</a>
<ul>
<li>Note: when using <strong>Microsoft Accounts (MSA)</strong>, you need to open the site <strong>“in-private”/”incognito”</strong> mode!</li>
</ul>
</li>
<li>
<p>Source code: <a href="https://github.com/mszcool/azureAdMultiTenantServicePrincipal">https://github.com/mszcool/azureAdMultiTenantServicePrincipal</a></p>
</li>
<li>Documentation: <a href="https://github.com/mszcool/azureAdMultiTenantServicePrincipal/blob/master/README.md">https://github.com/mszcool/azureAdMultiTenantServicePrincipal/blob/master/README.md</a></li>
</ul>
<p>The documentation shows how to register a multi-tenant application in your Azure AD tenant, how such an application is reflected in a customer’s target Azure AD tenant, and how to manage access to it.</p>
<p>The sample also demonstrates the various OAuth and OpenIdConnect flows needed in a simple yet practical and useful scenario. All of this should be easy to map to your own scenarios. Although Microsoft has decent documentation for Azure Active Directory out there, I found that such an end-to-end, focused sample is not easy to find; that is what I tried to create.</p>
<h3 id="the-basicinitial-openidconnect-flow-for-signing-in">The basic/initial OpenIdConnect-Flow for Signing-In</h3>
<p>So, let’s start with digging into the OAuth details. First of all, all the theory is well-explained on the official Microsoft Azure and MSDN documentation pages (see last section of the article).</p>
<p>I go down to the protocol-trace level so that it’s easy for developers to understand what’s going on and how simple those protocols indeed are. It should also help with configuring/using other frameworks on all sorts of platforms so that they fit into this model.</p>
<p>For all of the below I am using the <a href="https://mszcoolserviceprincipal.azurewebsites.net">real deployment</a> of my Service Principal Web App Demo mentioned above (note: I might remove that deployment at any point in time, since my GitHub repo contains guidance for deploying it in your own Azure AD tenant, as well).</p>
<ol>
<li>
<p>First the user browses to the target application which is secured by Azure AD.</p>
</li>
<li>
<p>That typically ends up in a redirect to Azure AD as an IDP to get an initial token. A typical Redirect Request for an OAuth Sign-In flow looks as follows (using line-breaks to make it easier to read):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> GET https://login.microsoftonline.com/common/oauth2/authorize?
client_id=---your client id from azure ad app registration---
&response_mode=form_post
&response_type=code+id_token
&scope=openid+profile
&state=OpenIdConnect.AuthenticationProperties%3dW1HmJdRTdYw...
&redirect_uri=https%3a%2f%2flocalhost%3a44330%2f HTTP/1.1
Host: login.microsoftonline.com
Connection: keep-alive
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8
</code></pre></div> </div>
<ul>
<li>The <strong>client_id</strong>-parameter reflects the Client ID that is configured in Azure Active Directory for that application.</li>
<li>The <strong>scope</strong>-parameter specifies the permissions requested from the identity provider - here <strong>openid</strong> and <strong>profile</strong>, the standard OpenIdConnect scopes for sign-in.</li>
<li>The <strong>nonce</strong> is used to protect against token replay attacks. Its value provided in the request must match the one in the response and is typically unique per user session.</li>
</ul>
</li>
<li>
<p>When the user (assume Admin) signs in for the first time, a consent dialog is displayed. This is part of the OAuth Authorization flow and gives the user a chance to “Accept” or decline the permissions the app needs. Since that is handled by Azure AD as an IdP, we don’t look into the details of the requests issued there.</p>
<p><img src="https://raw.githubusercontent.com/mszcool/azureAdMultiTenantServicePrincipal/master/Docs/Usage-Figure01-Sign-In.png" alt="Consent" /></p>
</li>
<li>
<p>Once the user has accepted this consent, Azure AD posts the result (an authorization code, together with the requested id_token) to the target URL which was specified in the earlier request with the <strong>redirect_uri</strong> parameter. Let’s look at the details (again with newlines for readability):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> POST https://localhost:44330/ HTTP/1.1
Host: localhost:44330
Connection: keep-alive
Content-Length: 2428
Cache-Control: max-age=0
Origin: https://login.microsoftonline.com
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36
Content-Type: application/x-www-form-urlencoded
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Referer: https://login.microsoftonline.com/common/Consent/Grant
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.8
Cookie: OpenIdConnect.nonce.Z9f6E8u...
code=AAABA...
</code></pre></div> </div>
<p>That post contains an OAuth authorization code in the body. This code can be used to request tokens from Azure AD for downstream calls to APIs which are also secured by Azure AD. Of course, the code will only work for APIs to which the app has been given permissions in the Azure AD portal.</p>
<p><img src="https://raw.githubusercontent.com/mszcool/azureAdMultiTenantServicePrincipal/master/Docs/Figure02-App-Permissions.png" alt="Permissions of the App" /></p>
<p>For the “Service Principal Demo App” those permissions are highlighted in the screen shot above. The code therefore works for requesting tokens for the Azure Active Directory Graph API (identified as https://graph.windows.net) and the Azure Service Management and Resource Manager APIs (identified as https://management.core.windows.net).</p>
</li>
<li>
<p>When the Service Principal Web App receives this POST, it uses the contained authorization code to request an additional token that permits the app to call into the Azure Active Directory Graph API. This is the token-request the app executes after receiving the post above.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> POST https://login.microsoftonline.com/common/oauth2/token HTTP/1.1
Accept: application/json
x-client-last-request: a5db36d8-ab46-4dfc-b96e-9dc31cf06a5c
x-client-last-response-time: 1284
x-client-last-endpoint: token
x-client-SKU: PCL.Desktop
x-client-Ver: 3.10.0.0
x-client-CPU: x64
x-client-OS: Microsoft Windows NT 10.0.10586.0
x-ms-PKeyAuth: 1.0
client-request-id: 1eb9034c-e02c-4e7b-8c4f-0fe5e2faabfe
return-client-request-id: true
Content-Type: application/x-www-form-urlencoded
Host: login.microsoftonline.com
Content-Length: 1079
Expect: 100-continue
resource=https%3A%2F%2Fgraph.windows.net&client_id=---your client id from azure ad app registration---&client_secret=---your client secret configured in the azure ad portal---&grant_type=authorization_code&code=---previously received authorization code---&redirect_uri=https%3A%2F%2Flocalhost%3A44330%2F
</code></pre></div> </div>
<p>Azure AD then responds with a new OAuth bearer token that permits us to call into the Azure AD Graph API. This token needs to be added to the HTTP Authorization header on each subsequent request. Here’s an example response for the request above:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> HTTP/1.1 200 OK
Cache-Control: no-cache, no-store
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Expires: -1
Server: Microsoft-IIS/8.5
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Content-Type-Options: nosniff
x-ms-request-id: fb2db119-ca9e-421d-8007-6ae7e97d163e
client-request-id: 1eb9034c-e02c-4e7b-8c4f-0fe5e2faabfe
x-ms-responsehealth: TargetId=ESTSFE_IN_329;Action=None;Category=None;Health=0;Load=9;
P3P: CP="DSP CUR OTPi IND OTRi ONL FIN"
Set-Cookie: esctx=AAABAA ...; domain=.login.microsoftonline.com; path=/; secure; HttpOnly
Set-Cookie: x-ms-gateway-slice=productionb; path=/; secure; HttpOnly
Set-Cookie: stsservicecookie=ests; path=/; secure; HttpOnly
X-Powered-By: ASP.NET
Date: Wed, 29 Jun 2016 21:31:37 GMT
Content-Length: 3826
{
"token_type": "Bearer",
"scope": "Directory.AccessAsUser.All Directory.ReadWrite.All Group.ReadWrite.All User.Read",
"expires_in": "3599",
"ext_expires_in": "3600",
"expires_on": "1467239498",
"not_before": "1467235598",
"resource": "https://graph.windows.net",
"access_token": "eyJ0eXAiOiJK...",
"refresh_token": "AAABAAAAiL9Kn2..."
}
</code></pre></div> </div>
<p>The response is a JSON-response containing some helpful details about the issued token as well as a refresh token for renewing the actual access token. Note: when you request a new access token with the refresh token, you still need to pass the Client ID and the App Secret in that refresh request.</p>
</li>
</ol>
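<p>The authorization-code redemption shown above is plain HTTP, so it can be scripted in any runtime. Below is a minimal Python sketch of that token request using only the standard library; the function names (<code>build_token_request</code>, <code>redeem_code</code>) and the placeholder credentials are my own illustration, not part of the sample app:</p>

```python
import json
import urllib.parse
import urllib.request

def build_token_request(tenant, client_id, client_secret, auth_code, redirect_uri,
                        resource="https://graph.windows.net"):
    """Build URL and form body for the authorization-code grant shown above."""
    url = f"https://login.microsoftonline.com/{tenant}/oauth2/token"
    body = urllib.parse.urlencode({
        "grant_type": "authorization_code",
        "client_id": client_id,
        "client_secret": client_secret,
        "code": auth_code,
        "redirect_uri": redirect_uri,
        "resource": resource,  # the API the issued token should be valid for
    })
    return url, body

def redeem_code(tenant, client_id, client_secret, auth_code, redirect_uri):
    """POST the request; returns the JSON with access_token, refresh_token, ..."""
    url, body = build_token_request(tenant, client_id, client_secret,
                                    auth_code, redirect_uri)
    req = urllib.request.Request(
        url, data=body.encode(),
        headers={"Content-Type": "application/x-www-form-urlencoded"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

<p>The same shape works for the refresh-token grant as well: swap <code>grant_type</code> for <code>refresh_token</code> and pass the refresh token instead of the code (Client ID and secret are still required).</p>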
<h3 id="oauth-admin-consent-for-multi-tenant-azure-ad-apps">OAuth Admin Consent for Multi-Tenant Azure AD Apps</h3>
<p>Yikes, the biggest challenge I faced when building the tool was that ordinary Azure AD users (role = ‘User’) were not able to use it. You had to be a ‘Global Admin’ to execute it.</p>
<p>The main reason was that my app requires “acting as the Signed-in User” against the Azure AD Graph API. And for that, the Azure AD team changed the default behavior for a good reason a while ago (well, in March 2015): <a href="https://blogs.msdn.microsoft.com/aadgraphteam/2015/03/18/update-to-graph-api-consent-permissions/">https://blogs.msdn.microsoft.com/aadgraphteam/2015/03/18/update-to-graph-api-consent-permissions/</a>.</p>
<p>So, to enable ordinary users to make use of such applications, a Global Admin first needs to “approve” the application for the target directory by running through an OAuth Admin Consent. This is a special type of consent that asks the Global Admin whether he wants to make the permissions the app requires available to ordinary users inside of the organization (technically: in the target directory against which the multi-tenant app works, depending on the signed-in user).</p>
<p>The steps are:</p>
<ol>
<li>
<p>The Global Admin needs to sign in to the application.</p>
</li>
<li>
<p>The application needs to provide the appropriate “on-boarding”-function, which essentially initiates the Admin-Consent against the target directory of the signed-in user. I did this by just adding a button to my app that starts the Admin Consent.</p>
<p><img src="https://raw.githubusercontent.com/mszcool/azureAdMultiTenantServicePrincipal/master/Docs/Usage-Figure05-AdminConsent1.png" alt="Admin Consent Function" /></p>
</li>
<li>
<p>All that button does is composing a URL that goes against the Azure Active Directory OAuth endpoints to walk through the Admin Consent. This leads to the following request that initiates the Admin Consent:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> GET https://login.windows.net/yourazureadtenantid/oauth2/authorize?
api-version=1.0
&response_type=code
&client_id=yourazureadappid
&resource=https://management.core.windows.net/
&redirect_uri%20=https://mszcoolserviceprincipal.azurewebsites.net/Home/CatchConsentResult
&prompt=admin_consent
HTTP/1.1
Host: login.windows.net
Connection: keep-alive
Cache-Control: max-age=0
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Referer: https://mszcoolserviceprincipal.azurewebsites.net/
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8
</code></pre></div> </div>
<p>The <strong>really important</strong> aspect of that request is the query-string parameter <strong>prompt=admin_consent</strong>, which does the actual work. I combine the request with requesting an authorization code right away, but I do think that’s optional (I would need to read back in the specs:)).</p>
</li>
<li>
<p>After that initial admin consent is completed, every other ordinary user (role = ‘user’) can sign in to the application and make use of it. The admin consent effectively lets an administrator approve the application for the whole organization, for security reasons.</p>
</li>
</ol>
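<p>Since the admin consent is triggered purely by that query-string parameter, the URL the button composes can be built in a few lines. Here is a sketch in Python (standard library only; the function name is my own invention):</p>

```python
import urllib.parse

def build_admin_consent_url(tenant_id, client_id, redirect_uri,
                            resource="https://management.core.windows.net/"):
    """Compose the authorize-URL that walks a Global Admin through the admin consent."""
    params = {
        "api-version": "1.0",
        "response_type": "code",
        "client_id": client_id,
        "resource": resource,
        "redirect_uri": redirect_uri,
        "prompt": "admin_consent",  # the parameter that actually triggers the consent
    }
    return ("https://login.windows.net/" + tenant_id
            + "/oauth2/authorize?" + urllib.parse.urlencode(params))
```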
<h3 id="the-graph-api-calls-with-the-issued-tokens">The Graph API calls with the issued tokens</h3>
<p>Finally, with that Access Token we can make calls into the Azure AD Graph API. The sample-calls for creating a Service Principal are similar to the following types of requests.</p>
<ol>
<li>
<p>First, the app checks whether the “Application” needed for the Service Principal has already been created in Azure AD:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> GET https://graph.windows.net/yourazureadtenantid/applications()?$filter=identifierUris/any(iduri:iduri%20eq%20'http%3A%2F%2Fyourappidurienteredinthescreen')&api-version=1.6 HTTP/1.1
DataServiceVersion: 3.0;NetFx
MaxDataServiceVersion: 3.0;NetFx
Accept: application/json;odata=minimalmetadata
Accept-Charset: UTF-8
DataServiceUrlConventions: KeyAsSegment
User-Agent: Microsoft Azure Graph Client Library 2.1.1
Authorization: Bearer eyJ0eXAiOiJK...
X-ClientService-ClientTag: Office 365 API Tools 1.1.0612
Host: graph.windows.net
Connection: Keep-Alive
</code></pre></div> </div>
<p>The request above checks whether an Application with the App ID URI <strong>http://yourappidurienteredinthescreen</strong> is registered in the target tenant <strong>yourazureadtenantid</strong>. If an App already exists, the HTTP response contains an OData-based JSON document with the resulting elements in it.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> {
"odata.metadata": "https://graph.windows.net/yourazureadtenantid/$metadata#directoryObjects/Microsoft.DirectoryServices.Application",
"value":[
...
]
}
</code></pre></div> </div>
</li>
<li>
<p>If no application exists, it actually creates the application by posting an ApplicationEntity into the Graph API:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>POST https://graph.windows.net/yourazureadtenantid/applications?api-version=1.6 HTTP/1.1
DataServiceVersion: 3.0;NetFx
MaxDataServiceVersion: 3.0;NetFx
Content-Type: application/json;odata=minimalmetadata
Accept: application/json;odata=minimalmetadata
Accept-Charset: UTF-8
DataServiceUrlConventions: KeyAsSegment
User-Agent: Microsoft Azure Graph Client Library 2.1.1
Authorization: Bearer eyJ0eXAiOiJKV1...
X-ClientService-ClientTag: Office 365 API Tools 1.1.0612
Host: graph.windows.net
Content-Length: 201
Expect: 100-continue
{
"odata.type": "Microsoft.DirectoryServices.Application",
"displayName": "YourAppDisplayName",
"identifierUris@odata.type": "Collection(Edm.String)",
"identifierUris": [
"http://YourAppIdUri"
]
}
</code></pre></div> </div>
<p>This post will return a detailed JSON object which contains all the details about the created App, including its AppId.</p>
</li>
<li>
<p>Then the application performs a similar check to find out whether a Service Principal already exists for the previously created Application.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>GET https://graph.windows.net/yourazureadtenantid/servicePrincipals()?$filter=appId%20eq%20'b3ccae52-19bc-45a1-a4e4-f572f6963213'&api-version=1.6 HTTP/1.1
DataServiceVersion: 1.0;NetFx
MaxDataServiceVersion: 3.0;NetFx
Accept: application/json;odata=minimalmetadata
Accept-Charset: UTF-8
DataServiceUrlConventions: KeyAsSegment
User-Agent: Microsoft Azure Graph Client Library 2.1.1
Authorization: Bearer eyJ0eXAiOiJKV1...
X-ClientService-ClientTag: Office 365 API Tools 1.1.0612
Host: graph.windows.net
</code></pre></div> </div>
<p>The response will again contain an OData JSON document with the service principal if it already exists. I am skipping the details here…</p>
</li>
<li>
<p>Finally, if the Service Principal does not exist, the app creates one with a password credential attached to it. That means this principal can be used by service- and backend-applications.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>POST https://graph.windows.net/yourazureadtenantid/servicePrincipals?api-version=1.6 HTTP/1.1
DataServiceVersion: 3.0;NetFx
MaxDataServiceVersion: 3.0;NetFx
Content-Type: application/json;odata=minimalmetadata
Accept: application/json;odata=minimalmetadata
Accept-Charset: UTF-8
DataServiceUrlConventions: KeyAsSegment
User-Agent: Microsoft Azure Graph Client Library 2.1.1
Authorization: Bearer eyJ0eXAiOiJKV1...
X-ClientService-ClientTag: Office 365 API Tools 1.1.0612
Host: graph.windows.net
Content-Length: 627
Expect: 100-continue
{
"odata.type": "Microsoft.DirectoryServices.ServicePrincipal",
"accountEnabled": true,
"appId": "b3ccae52-19bc-45a1-a4e4-f572f6963213",
"displayName": "tttttteeeeeeeessssstttt",
"passwordCredentials@odata.type": "Collection(Microsoft.DirectoryServices.PasswordCredential)",
"passwordCredentials": [
{
"customKeyIdentifier": null,
"endDate": "2017-06-29T21:43:15.6654372Z",
"keyId": "0259571d-a663-4507-94e9-9381629e2116",
"startDate": "2016-06-29T21:43:15.6639533Z",
"value": "pass@word1"
}
],
"servicePrincipalNames@odata.type": "Collection(Edm.String)",
"servicePrincipalNames": [
"b3ccae52-19bc-45a1-a4e4-f572f6963213",
"http://tttttteeeeeeeessssstttt"
]
}
</code></pre></div> </div>
</li>
</ol>
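<p>To tie the requests above together: they are all plain OData calls against graph.windows.net with the bearer token in the Authorization header. The Python sketch below shows the “check, then create” pattern for the Application (steps 1 and 2 above), using only the standard library. The helper names <code>graph_request</code> and <code>ensure_application</code> are my own, and api-version 1.6 matches the traces above:</p>

```python
import json
import urllib.parse
import urllib.request

def build_graph_url(tenant, path, query=None):
    """Graph API URL with the api-version used in the traces above."""
    qs = {"api-version": "1.6"}
    if query:
        qs.update(query)
    return f"https://graph.windows.net/{tenant}/{path}?{urllib.parse.urlencode(qs)}"

def graph_request(tenant, path, token, payload=None, query=None):
    """GET when payload is None, otherwise POST the JSON payload."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(
        build_graph_url(tenant, path, query), data=data,
        headers={"Authorization": "Bearer " + token,
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def ensure_application(tenant, token, display_name, app_id_uri):
    """Return the existing Application for app_id_uri, or create it."""
    found = graph_request(tenant, "applications", token,
                          query={"$filter": f"identifierUris/any(u:u eq '{app_id_uri}')"})
    if found["value"]:
        return found["value"][0]
    return graph_request(tenant, "applications", token, payload={
        "odata.type": "Microsoft.DirectoryServices.Application",
        "displayName": display_name,
        "identifierUris@odata.type": "Collection(Edm.String)",
        "identifierUris": [app_id_uri],
    })
```

<p>The Service Principal creation in steps 3 and 4 follows the same pattern against the <code>servicePrincipals</code> path.</p>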
<p><strong>Note:</strong> One piece still missing is assigning appropriate roles through Azure Resource Manager Role-Based Access Control so that the Service Principal can execute the needed Service Management operations against the management APIs.</p>
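<p>For completeness, that missing role assignment is a single PUT against the Azure Resource Manager API. Here is a sketch; note that the api-version, the well-known “Reader” role definition ID and the function names are my assumptions - verify them against the current RBAC REST documentation before relying on them:</p>

```python
import json
import urllib.request
import uuid

# Well-known ID of the built-in "Reader" role (assumed; check the RBAC docs).
READER_ROLE_ID = "acdd72a7-3385-48ef-bd42-f606fba81ae7"

def build_role_assignment(subscription_id, principal_object_id,
                          role_definition_id=READER_ROLE_ID):
    """URL and body for a subscription-scoped role assignment."""
    scope = f"/subscriptions/{subscription_id}"
    url = (f"https://management.azure.com{scope}"
           f"/providers/Microsoft.Authorization/roleAssignments/{uuid.uuid4()}"
           f"?api-version=2015-07-01")
    body = {"properties": {
        "roleDefinitionId": f"{scope}/providers/Microsoft.Authorization"
                            f"/roleDefinitions/{role_definition_id}",
        # objectId of the Service Principal, not the appId
        "principalId": principal_object_id,
    }}
    return url, body

def assign_role(arm_token, subscription_id, principal_object_id):
    """PUT the assignment; arm_token must be issued for the management API resource."""
    url, body = build_role_assignment(subscription_id, principal_object_id)
    req = urllib.request.Request(url, data=json.dumps(body).encode(), method="PUT",
                                 headers={"Authorization": "Bearer " + arm_token,
                                          "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```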
<h3 id="do-you-really-need-to-know-all-of-these-details">Do you really need to know all of these details?</h3>
<p>With that we went through all the protocol details for the OAuth, OpenIdConnect and Graph API calls that are needed to accomplish an end-2-end task. It’s actually a very practical look at what all these “sequence diagrams” talking about OAuth look like in the real world.</p>
<p>My intent in showing these details was to help people who are working with programming languages and runtimes that do not have nice SDKs for encapsulating those protocol details, so they at least have a high-level overview and starting point without reading the OAuth and OpenIdConnect specs. I know it’s high-level, but it’s practical.</p>
<h3 id="oauth-and-openid-connect-azure-ad-resources">OAuth and OpenId Connect Azure AD Resources</h3>
<p>The following links explain all the different query-string parameters of the OAuth/OpenIdConnect flows with Azure AD. They are a great resource to better understand the HTTP requests I’ve outlined above.</p>
<ul>
<li><a href="https://msdn.microsoft.com/en-us/library/azure/dn645542.aspx">https://msdn.microsoft.com/en-us/library/azure/dn645542.aspx</a></li>
<li><a href="https://msdn.microsoft.com/en-us/library/azure/dn645541.aspx">https://msdn.microsoft.com/en-us/library/azure/dn645541.aspx</a></li>
<li><a href="http://openid.net/specs/openid-connect-core-1_0.html">http://openid.net/specs/openid-connect-core-1_0.html</a></li>
<li><a href="http://oauth.net/2/">http://oauth.net/2/</a></li>
<li><a href="http://tools.ietf.org/html/rfc6749">http://tools.ietf.org/html/rfc6749</a></li>
</ul>
<h3 id="sdks-for-languages-and-runtimes">SDKs for languages and Runtimes</h3>
<p>Fortunately, if you are a .NET, Java, Node.js, PHP or Python developer, there are numerous examples and resources available. Also for Azure AD’s Graph API there’s a nice tool available to dig into all the JSON and protocol details.</p>
<p>Here are the most important links:</p>
<ul>
<li>Azure AD Authentication Library for Java
<ul>
<li><a href="https://azure.microsoft.com/en-us/documentation/articles/active-directory-devquickstarts-webapp-java/">https://azure.microsoft.com/en-us/documentation/articles/active-directory-devquickstarts-webapp-java/</a></li>
<li><a href="https://github.com/AzureAD/azure-activedirectory-library-for-java">https://github.com/AzureAD/azure-activedirectory-library-for-java</a></li>
<li><a href="https://github.com/Azure-Samples/active-directory-java-webapp-openidconnect">https://github.com/Azure-Samples/active-directory-java-webapp-openidconnect</a></li>
</ul>
</li>
<li>Azure AD Authentication Library for .NET
<ul>
<li><a href="https://azure.microsoft.com/en-us/documentation/articles/active-directory-devquickstarts-dotnet/">https://azure.microsoft.com/en-us/documentation/articles/active-directory-devquickstarts-dotnet/</a></li>
<li><a href="https://github.com/AzureAD/azure-activedirectory-library-for-dotnet">https://github.com/AzureAD/azure-activedirectory-library-for-dotnet</a></li>
<li><a href="https://blogs.technet.microsoft.com/enterprisemobility/2016/05/18/adal-net-v3-reaches-ga/">https://blogs.technet.microsoft.com/enterprisemobility/2016/05/18/adal-net-v3-reaches-ga/</a></li>
</ul>
</li>
<li>Azure AD Authentication Library for JavaScript
<ul>
<li><a href="https://azure.microsoft.com/en-us/documentation/articles/active-directory-devquickstarts-angular/">https://azure.microsoft.com/en-us/documentation/articles/active-directory-devquickstarts-angular/</a></li>
<li><a href="http://www.cloudidentity.com/blog/2014/10/28/adal-javascript-and-angularjs-deep-dive/">http://www.cloudidentity.com/blog/2014/10/28/adal-javascript-and-angularjs-deep-dive/</a></li>
<li><a href="https://github.com/AzureAD/azure-activedirectory-library-for-js">https://github.com/AzureAD/azure-activedirectory-library-for-js</a></li>
</ul>
</li>
<li>Azure AD Node.js Integration Sample
<ul>
<li><a href="https://azure.microsoft.com/en-us/documentation/articles/active-directory-devquickstarts-webapi-nodejs/">https://azure.microsoft.com/en-us/documentation/articles/active-directory-devquickstarts-webapi-nodejs/</a></li>
<li><a href="https://azure.microsoft.com/en-us/documentation/articles/active-directory-devquickstarts-openidconnect-nodejs/">https://azure.microsoft.com/en-us/documentation/articles/active-directory-devquickstarts-openidconnect-nodejs/</a></li>
</ul>
</li>
</ul>
<p>For Graph API there are also good samples and SDKs out there:</p>
<ul>
<li>Graph API and Java:
<ul>
<li><a href="https://github.com/Azure-Samples/active-directory-java-graphapi-web">https://github.com/Azure-Samples/active-directory-java-graphapi-web</a></li>
<li><a href="https://github.com/AzureAD/azure-activedirectory-library-for-java">https://github.com/AzureAD/azure-activedirectory-library-for-java</a></li>
<li><a href="https://azure.microsoft.com/en-us/documentation/samples/active-directory-java-graphapi-web/">https://azure.microsoft.com/en-us/documentation/samples/active-directory-java-graphapi-web/</a></li>
</ul>
</li>
<li>Graph API and .NET:
<ul>
<li><a href="https://raw.githubusercontent.com/mszcool/azureAdMultiTenantServicePrincipal">https://raw.githubusercontent.com/mszcool/azureAdMultiTenantServicePrincipal</a></li>
<li><a href="https://github.com/Azure-Samples/active-directory-dotnet-graphapi-web">https://github.com/Azure-Samples/active-directory-dotnet-graphapi-web</a></li>
<li><a href="https://www.nuget.org/packages/Microsoft.Azure.ActiveDirectory.GraphClient/">https://www.nuget.org/packages/Microsoft.Azure.ActiveDirectory.GraphClient/</a></li>
</ul>
</li>
<li>Graph API for JavaScript and Node.js
<ul>
<li><a href="https://www.npmjs.com/package/azure-graphapi">https://www.npmjs.com/package/azure-graphapi</a></li>
</ul>
</li>
<li>Azure AD Graph API Explorer (good for learning and manually crafting requests)
<ul>
<li><a href="http://graphexplorer.cloudapp.net/">http://graphexplorer.cloudapp.net/</a></li>
</ul>
</li>
</ul>
<p>I hope that was helpful and gives you a good background or even a handy tool to create Service Principals. My partner needed to understand how to build such multi-tenant Azure AD applications that access the Azure AD Graph API, and they needed to create Service Principals out of such a multi-tenant web application. So I thought it’s worth spending the additional time and getting it documented!</p>Mario SzpusztaI am currently working with one of my main Global Independent Software Vendor (ISV) partners for on-boarding their solution into the Azure Marketplace. The main challenge that we face there is that the solution needs to do some post-provisioning steps in the end-customer’s target subscription as well as Azure Active Directory tenant: Creating a Service Principal that can be used by the Software inside of the provisioned VM in the end-customer’s target directory. Using that service principal to read data from the end-customer’s Azure Subscription. Note: the end customer in this case is the customer who purchases the product published by the ISV in the store! Such cases typically require the creation of “multi-tenant” Azure Active Directory applications. And this application then needs to access the end-customer’s target directory using the Azure AD Graph API. At the same time, creating service principals is not an easy task.Azure Cognitive Services with Image Search and Shop Pricing Comparisons2016-04-21T11:00:00+00:002016-04-21T11:00:00+00:00http://blog.mszcool.com/wordpressarchive/2016/04/21/cognitive-services-image-search-and-shop-comparison<p>This week I got the chance for my first attempts with Cognitive Services and image search - based on a request from my Global ISV partner:) While the services are actually easy to use for a developer, documentation is behind and Internet search is highly misleading (since it mostly points to a previous Bing Search API that does not exist anymore).
That’s why I decided to blog about it and point people in the right direction.</p>
<!--more-->
<h2 id="the-case---price-comparisons">The Case - Price Comparisons</h2>
<p>The use case is simple and has been made available <a href="http://appadvice.com/appnn/2015/12/microsoft-updates-bing-for-iphone-with-in-store-price-comparison-and-more">with our latest Bing app for iOS</a>: perform price comparisons of products across multiple shops. But instead of an app, this blog post is about doing this from within any application with the right usage of the new Bing Search APIs in their version 5.0.</p>
<h2 id="cognitive-services">Cognitive Services</h2>
<p>At the annual <a href="https://channel9.msdn.com/Events/Build/2016?wt.mc_id=build_hp">//build 2016 conference</a>, Microsoft announced its new <a href="https://www.microsoft.com/cognitive-services">Cognitive services</a>. These are services exposed through simple-to-use APIs for doing all sorts of intelligent stuff based on extensive data and machine learning algorithms. Face recognition, Speech Recognition and the likes are some of the more advanced APIs.</p>
<p>What many people don’t know is that the Bing APIs are now also part of the Cognitive Services. That’s because a lot of the intelligence behind Cognitive Services is powered by Bing services incl. their intelligence and machine learning components. So don’t let yourself be misled by Internet-search results pointing to any kind of previous services.</p>
<p>That means don’t search the Internet, just navigate to <a href="https://www.microsoft.com/cognitive-services">Cognitive services</a> right away and dig into the documentation.</p>
<h2 id="walking-through-the-use-case">Walking through the Use Case</h2>
<p>Ok, let’s walk through the Use Case in a schematic way by looking at the APIs and their responses. That will give you a good view on how it actually works.</p>
<ol>
<li>
<p>You need a <a href="https://account.microsoft.com/about">Microsoft Account</a>, so if you don’t have one, sign up for one first.</p>
</li>
<li>
<p>If you have a <a href="https://account.microsoft.com/about">Microsoft Account</a>, the first thing you need is signing up for the Cognitive Services Preview. For that purpose just navigate to the <a href="https://www.microsoft.com/cognitive-services/en-us/subscriptions">Cognitive Services Subscriptions Page</a>.</p>
</li>
<li>
<p>Once you have signed up for the subscriptions, you get application keys for each of the different types of APIs as shown in my screen-shot below. You need to “Show” and “Copy” the key for Bing Search to implement the case I’ve described above:
<img src="/images/posts2016/20160421-figure01.png" alt="Subscription Keys" /></p>
</li>
<li>Once you have copied the subscription key for Bing Search, you can use Bing Image Search to <strong>look for images of products you want to get price comparisons for</strong>.
<ul>
<li>I know, that sounds a bit confusing. But let’s assume you want to look for offerings of a sports watch such as Garmin Forerunner 225 (which I am currently interested in:)).</li>
<li>To get to shopping offers through Bing or the Bing APIs, you’d do an image search for “Garmin Forerunner 225”, and then the “new Bing” and “new Bing APIs” will give you those additional details of shopping offers.</li>
<li>Let me show you how that works with the API, but note that the same thing works with Bing Search in the browser for end-users.</li>
</ul>
</li>
<li>For testing the APIs I use the available test user interfaces, which let you learn the APIs without writing code right away.
<ul>
<li><a href="https://bingapis.portal.azure-api.net/docs/services/56b43f0ccf5ff8098cef3808/operations/56b4433fcf5ff8098cef380c">API Overview Page</a></li>
<li><a href="https://bingapis.portal.azure-api.net/docs/services/56b43f0ccf5ff8098cef3808/operations/56b4433fcf5ff8098cef380c/console">API Testing Console Page</a>
<img src="/images/posts2016/20160421-figure02.png" alt="The testing Console in Action" /></li>
</ul>
</li>
<li>
<p>Now, let’s dig into a few requests. If you want to get offers for e.g. a “Garmin Forerunner” watch, first you need to find images for that watch.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>GET https://bingapis.azure-api.net/api/v5/images/search?q=garmin forerunner 225&count=10&offset=0&mkt=en-us&safeSearch=Moderate HTTP/1.1
Host: bingapis.azure-api.net
Ocp-Apim-Subscription-Key: <<your API Key taken from the previous screen shot above>>
</code></pre></div> </div>
</li>
<li>
<p>Next you need to examine the results of the request above. The interesting pieces of the JSON response are the <code>imageInsightsToken</code> and the <code>insightsSourcesSummary</code> elements as highlighted below:
<img src="/images/posts2016/20160421-figure03.png" alt="Insights Details Highlighted" /></p>
</li>
<li>These two attributes are used for the following purposes:
<ul>
<li><code>imageInsightsToken</code> is used in the next subsequent request to get further details about the image kept and managed by Bing.</li>
<li><code>insightsSourcesSummary</code> is something you can use to assess if it’s worth querying further insights for the image. E.g. in my case I wanted to get as many shop-offers for the Garmin as possible. So if I’d write this in a program, I would search the top-most search results (the first page of JSON elements I got from the previous API request) and pick those with the highest <code>shoppingSourcesCount</code> value as a simple strategy.</li>
</ul>
</li>
<li>
<p>So, let’s use the <code>imageInsightsToken</code> to get further details about the image. The following code shows the next request we’re executing, with the token passed in as an additional parameter. Also note the use of the <code>modulesRequested</code> parameter, which I use to specify which kinds of additional details I’d love to get from Bing for that image. That said, there are different modules providing additional information beyond the shopping sources.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>GET https://bingapis.azure-api.net/api/v5/images/search?
q=garmin forerunner 225&count=10&offset=0
&mkt=en-us
&safeSearch=Moderate
&modulesRequested=shoppingSources
&insightstoken=ccid_eDfbozgF*mid_7F932B775F08CAD51E0BC3609B3ABA1B7AB73856*simid_608041772346312873
HTTP/1.1
Host: bingapis.azure-api.net
Ocp-Apim-Subscription-Key: <<your API Key taken from the previous screen shot above>>
</code></pre></div> </div>
</li>
<li>Finally we get the results we want to have from this request. We see a list of sources which are offering the product for a given price and the basic stock-information which Bing extracted from the web pages of that source for my Garmin Forerunner 225 search.
<img src="/images/posts2016/20160421-figure04.png" alt="Shopping Sources Results" /></li>
</ol>
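<p>The whole walkthrough - image search, picking the candidate with the most shopping sources, then the insights request - fits into a few lines of Python. Here is a sketch using only the standard library; the function names are mine, while the endpoint and query parameters match the traces above:</p>

```python
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://bingapis.azure-api.net/api/v5/images/search"

def bing_image_search(api_key, query, insights_token=None, count=10):
    """Plain image search (step 6) or insights request (step 9) if insights_token is set."""
    params = {"q": query, "count": count, "offset": 0,
              "mkt": "en-us", "safeSearch": "Moderate"}
    if insights_token:
        params["modulesRequested"] = "shoppingSources"
        params["insightstoken"] = insights_token
    req = urllib.request.Request(
        ENDPOINT + "?" + urllib.parse.urlencode(params),
        headers={"Ocp-Apim-Subscription-Key": api_key})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def best_shopping_candidate(results):
    """Simple strategy from step 8: pick the hit with the largest shoppingSourcesCount."""
    def shop_count(item):
        return item.get("insightsSourcesSummary", {}).get("shoppingSourcesCount", 0)
    return max(results.get("value", []), key=shop_count, default=None)
```

<p>A full price-comparison run would then be: call <code>bing_image_search</code> with the product name, pick the best candidate, and issue a second <code>bing_image_search</code> call with that hit’s <code>imageInsightsToken</code>.</p>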
<p>I really find this kind of cool. It was really interesting to work through this with my ISV partner, and given that documentation is not always up to date on those new services, I thought it’s worth blogging about it:).</p>
<p>Further resources for reading:
Here are a few helpful links with further details and information. Fortunately from the time I started writing this post until I got it published, the team has updated additional documentation on MSDN!</p>
<ul>
<li><a href="https://www.microsoft.com/cognitive-services">Cognitive Services</a></li>
<li><a href="https://msdn.microsoft.com/en-us/library/mt604056.aspx">Bing Search API v5 Docs</a></li>
<li><a href="https://msdn.microsoft.com/en-us/library/mt712790.aspx">Bing Image Insights Docs on MSDN</a></li>
<li><a href="https://bingapis.portal.azure-api.net/docs/services/56b43f0ccf5ff8098cef3808/operations/56b4433fcf5ff8098cef380c/console">Bing Image Search Test Console</a></li>
</ul>
<p>Let me know your thoughts, best via <a href="http://twitter.com/mszcool">http://twitter.com/mszcool</a>!</p>Mario SzpusztaThis week I got the chance for my first attempts with Cognitive services and image search - based on a request from my Global ISV partner:) While the services are actually easy to use for a developer, documentation is behind and Internet Search is highly miss-leading (since it mostly points to a previous Bing-search API that does not exist, anymore). That’s why I decided to blog about it and point people into the right direction.NServiceBus for Hybrid and Portable Cloud Solutions using Messaging2016-04-01T11:00:00+00:002016-04-01T11:00:00+00:00http://blog.mszcool.com/wordpressarchive/2016/04/01/nservicebus-for-hybrid-and-portable-cloud-solutions<p><strong><em>[restored from my Wordpress blog]</em></strong></p>
<p><a href="http://docs.particular.net/nservicebus/">NServiceBus</a> is a very popular messaging and workflow framework for .NET developers across the globe. This week a few peers of mine and I were working for one of our global ISV partners to evaluate, if NServiceBus can be used for Hybrid Cloud and portable solutions, that can be moved seamless from On-Premises to the Public Cloud and vice versa.</p>
<p>My task was to evaluate whether NServiceBus can be used with both Microsoft Azure Service Bus in the public cloud and Service Bus 1.1 for Windows Server in the private cloud. It was a very interesting collaboration, and I finally got to write some prototype code for one of our partners again. How cool is that - doing interesting stuff that at the same time helps a partner. That’s how it should be!</p>
<!--more-->
<h2 id="part-1-on-premises-service-bus-11-environment">Part #1: On-Premises Service Bus 1.1 Environment</h2>
<p>The journey and prototyping began with setting up an on-premises Service Bus 1.1 environment in my home lab. Fortunately there are some good instructions out there, but of course nothing goes entirely without pitfalls. Here’s a good set of instructions to start with - note that I set up an entire Azure Pack express environment, which is clearly optional. But it makes things more convenient, especially for presentations, since it provides the nice, good old Azure Management Portal experience for your Service Bus on-premises. Here’s where you should look to set things up:</p>
<ul>
<li><a href="https://technet.microsoft.com/en-us/library/dn296439.aspx">Install the Azure Pack Express Setup</a>
<ul>
<li>This shows, how-to setup a basic Azure Pack environment using Web Platform Installer.</li>
<li>I did not run into any problems installing it on a Hyper-V box in my home lab, so it should be fairly straightforward.</li>
<li>You can install everything on a single machine. All you need is SQL Server (Express is sufficient) pre-installed.</li>
</ul>
</li>
<li><a href="https://msdn.microsoft.com/en-us/library/dn441412.aspx">Install Service Bus for Windows Server</a>
<ul>
<li>Again this happens via the Web Platform Installer. With this I had a little challenge: unfortunately, as of writing this article, the link to the required version of the Windows Fabric in the Web Platform Installer was broken. I’ve uploaded it to my public OneDrive for convenience; you find it <a href="https://onedrive.live.com/redir?resid=D37C9D7BFBCE8418!449&authkey=!ABMFFxRSyhq0XAc&ithint=folder%2cmsi">here</a>. I’ve had conversations with the product team and they will fix the broken link, so when you try it, it might work already.</li>
</ul>
</li>
<li><a href="https://msdn.microsoft.com/en-us/library/dn441425.aspx">Configure Service Bus using the Wizard</a>.
<ul>
<li>After installing you need to configure Service Bus for Windows Server. That happens through a Wizard. It essentially allows you to configure endpoints, ports and certificates used for security purposes for Service Bus 1.1 for Windows Server.</li>
<li>The configuration failed on my first attempt because something went wrong with installing the Service Bus patch. An uninstall and re-install solved the problem.</li>
</ul>
</li>
<li><a href="https://msdn.microsoft.com/en-us/library/dn440945.aspx">Configure Service Bus for Windows Server for the Azure Pack Portal</a>
<ul>
<li>That’s the final step to get the Azure Pack management portal experience for Service Bus 1.1 for Windows Server.</li>
<li>If you are fine with managing Service Bus through PowerShell, you can skip the entire Azure Pack Express stuff, start with Service Bus 1.1 right away and <a href="https://msdn.microsoft.com/en-us/library/dn441434.aspx">manage it through PowerShell</a>.</li>
</ul>
</li>
</ul>
<p>At the end of this journey, which took me about half a day overall starting from scratch and figuring out the little gotchas mentioned above, I had a lab environment to test against. I am a fan of <a href="https://www.royalapplications.com/ts/win/features">Royal TS</a>, hence the screenshot with my on-premises Service Bus web pages embedded in Royal TS:</p>
<p><img src="/images/posts2016/20160401-figure01.png" alt="RoyalTsAzureOnPremSetup" /></p>
<h2 id="part-2-nservicebus-and-azure-service-bus">Part #2: NServiceBus and Azure Service Bus</h2>
<p>The second part of the challenge was easy - figuring out whether <a href="http://docs.particular.net/nservicebus/">NServiceBus</a> already supports Azure Service Bus, because that would give us a good starting point, wouldn’t it!? Here are the docs for the NServiceBus transport extension for Azure Service Bus: <a href="http://docs.particular.net/nservicebus/azure/azure-servicebus-transport">Azure Service Bus transport</a>.</p>
<p>But here’s the catch: the earliest version that supports Azure seems to be NServiceBus v5.0.0, and the code base started out with a Microsoft.ServiceBus.dll above 3.x. That version of the library is not compatible with Service Bus 1.1 for Windows Server, so I had to dig into the source code and back-port the library. Fortunately, Particular open-sources most of the framework’s bits and pieces on GitHub, as it does for the <a href="https://github.com/Particular/NServiceBus.AzureServiceBus/tree/support-6.2">NServiceBus.AzureServiceBus connector here</a>. Note that I am referring directly to version 6.2 of the implementation, since that works with NServiceBus 5.0.0, which our global ISV partner is using at this point in time. I also tried back-porting the current development branch, but that turned out to be way more complex and risky - and it was not needed for the partner, either:)</p>
<h2 id="part-3-back-porting-to-microsoftservicebusdll-v21">Part #3: Back-Porting to Microsoft.ServiceBus.dll v2.1</h2>
<p>So the needed step was to back-port to a Service Bus SDK library that also works with Service Bus 1.1. Service Bus for Windows Server recently received a patch to work with .NET 4.6.1, but it has not received any major updates since its original release, so API-wise it is behind Service Bus in Azure.</p>
<p>I’ve done all of the steps below in a fork of the original implementation on my GitHub repository. Note that you should only look at my work in the branch ‘support-6.2’, which is the one that works with NServiceBus 5.0.0. The rest is considered experimental as we speak right now:)</p>
<p><a href="https://github.com/mszcool/NServiceBus.AzureServiceBus-SB1.1-WinSrv/tree/support-6.2">Here is the link to my GitHub repo and the fork!!</a></p>
<p>The first step was to remove the NuGet package and replace it with one that works with Service Bus for Windows Server. Fortunately, Microsoft released a separate NuGet package that contains the version compatible with Service Bus for Windows Server:</p>
<p><img src="/images/posts2016/20160401-figure02.png" alt="NugetPackageSelectionTrick" /></p>
<p>The rest was all about looking at where the NServiceBus implementation uses features that are not available in version 2.1 of the Service Bus SDK, and testing it against my Service Bus 1.1 for Windows Server lab setup. I think the best way to see what actually changed is to look at the change logs in my GitHub repository:</p>
<ul>
<li><a href="https://github.com/mszcool/NServiceBus.AzureServiceBus-SB1.1-WinSrv/commit/81a84866cb12898d95a2954fbf42793b23136346">Initial back-port with most code changes (click here to open details on GitHub)</a>
<ul>
<li>Update the NuGet Package to “ServiceBus.v1_1” instead of “WindowsAzure.ServiceBus”.</li>
<li>Remove <code class="language-plaintext highlighter-rouge">EnablePartitioning</code> because that’s not supported on SB 1.1.</li>
<li>Use <code class="language-plaintext highlighter-rouge">MessagingFactory.CreateFromConnectionString()</code> instead of <code class="language-plaintext highlighter-rouge">MessagingFactory.Create()</code>, because the latter does not assume different ports on different endpoints for the different APIs Service Bus exposes. But that’s typically the case on default setups of Service Bus for Windows Server (see my first screenshot).</li>
<li>I also added some regular expressions to detect whether a Service Bus connection string is one for on-premises or for the public cloud, to keep most of the default behaviors intact when connecting against the public cloud. See the code snippet below. It might not be complete or perfect, but it fulfills the basic needs.</li>
</ul>
</li>
<li><a href="https://github.com/mszcool/NServiceBus.AzureServiceBus-SB1.1-WinSrv/commit/5d12b00cac4471881c4e529be36eaed1567349b3">Added some samples (click here to open details on GitHub)</a>
<ul>
<li>This contains a basic Sender and Receiver implementation that uses the transport.</li>
<li>You need to set the Environment Variable <code class="language-plaintext highlighter-rouge">AzureServiceBus.ConnectionString</code> in a command prompt and start Visual Studio from that one to successfully execute. Btw. that’s also needed if you need to run the tests. In that case you also need to set <code class="language-plaintext highlighter-rouge">AzureServiceBus.ConnectionString.Fallback</code> with an alternate Service Bus connection string.</li>
</ul>
</li>
</ul>
<p>Here is the little code snippet that checks whether the code is used against on-premises Service Bus services or Azure Service Bus instances (with the <code class="language-plaintext highlighter-rouge">Regex.IsMatch</code> argument order and the exception message fixed compared to my first attempt):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>class CreatesMessagingFactories : ICreateMessagingFactories
{
    #region mszcool 2016-04-01
    // mszcool - Added connection string parsing to detect whether a public or private cloud Service Bus is addressed!
    public static readonly string Sample = "Endpoint=sb://[namespace name].servicebus.windows.net;SharedAccessKeyName=[shared access key name];SharedAccessKey=[shared access key]";
    private static readonly string Pattern =
        "^Endpoint=sb://(?<namespaceName>[A-Za-z][A-Za-z0-9-]{4,48}[A-Za-z0-9])\\.servicebus\\.windows\\.net/?;" +
        "SharedAccessKeyName=(?<sharedAccessPolicyName>[\\w\\W]+);SharedAccessKey=(?<sharedAccessPolicyValue>[\\w\\W]+)$";
    public static readonly string OnPremSample = "Endpoint=[namespace name];StsEndpoint=[sts endpoint address];RuntimePort=[port];ManagementPort=[port];SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=[shared access key]";
    private static readonly string OnPremPattern =
        "^Endpoint=sb\\://(?<serverName>[A-Za-z][A-Za-z0-9\\-\\.]+)/(?<namespaceName>[A-Za-z][A-Za-z0-9]{4,48}[A-Za-z0-9])/?;" +
        "StsEndPoint=(?<stsEndpoint>https\\://[A-Za-z][A-Za-z0-9\\-\\.]+\\:[0-9]{2,5}/[A-Za-z][A-Za-z0-9]+)/?;" +
        "RuntimePort=[0-9]{2,5};ManagementPort=[0-9]{2,5};" +
        "SharedAccessKeyName=(?<sharedAccessPolicyName>[\\w\\W]+);" +
        "SharedAccessKey=(?<sharedAccessPolicyValue>[\\w\\W]+)$";

    private bool DetectPrivateCloudConnectionString(string connectionString)
    {
        // Note: Regex.IsMatch expects the input first and the pattern second.
        if (Regex.IsMatch(connectionString, OnPremPattern, RegexOptions.IgnoreCase))
            return true;
        if (Regex.IsMatch(connectionString, Pattern, RegexOptions.IgnoreCase))
            return false;
        throw new ArgumentException(
            "Invalid Azure Service Bus connection string configured. " +
            $"Valid examples: {Environment.NewLine}" +
            $"public cloud: {Sample} {Environment.NewLine}" +
            $"private cloud (SB 1.1): {OnPremSample}");
    }
    #endregion

    ICreateNamespaceManagers createNamespaceManagers;
    // ... rest of the implementation ...
}
</code></pre></div></div>
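<p>To sanity-check the two detection patterns without spinning up a .NET project, the same idea can be sketched in a few lines of shell. Note that these are simplified POSIX-ERE approximations of the C# patterns above (named groups dropped, character classes loosened), the <code class="language-plaintext highlighter-rouge">detect_sb</code> helper is hypothetical, and the connection strings are made-up examples - the distinguishing feature is simply that the on-premises string carries the StsEndpoint/RuntimePort/ManagementPort parts:</p>

```shell
#!/bin/sh
# Hypothetical helper: rough shell approximation of DetectPrivateCloudConnectionString.
detect_sb() {
  # On-premises (Service Bus 1.1) strings carry StsEndpoint/RuntimePort/ManagementPort.
  if echo "$1" | grep -Eiq '^Endpoint=sb://[^;]+;StsEndpoint=https://[^;]+;RuntimePort=[0-9]+;ManagementPort=[0-9]+;SharedAccessKeyName=[^;]+;SharedAccessKey=.+$'; then
    echo "onprem"
  # Public cloud strings point at *.servicebus.windows.net and carry no port settings.
  elif echo "$1" | grep -Eiq '^Endpoint=sb://[A-Za-z][A-Za-z0-9-]+\.servicebus\.windows\.net/?;SharedAccessKeyName=[^;]+;SharedAccessKey=.+$'; then
    echo "publiccloud"
  else
    echo "invalid" && return 1
  fi
}

# Made-up example connection strings:
detect_sb 'Endpoint=sb://mysbserver.contoso.local/MyNamespace;StsEndpoint=https://mysbserver.contoso.local:9355/MyNamespace;RuntimePort=9354;ManagementPort=9355;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=abc123='  # prints: onprem
detect_sb 'Endpoint=sb://mynamespace.servicebus.windows.net;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=abc123='  # prints: publiccloud
```

<p>The real transport of course keeps this detection in C# as shown above; this is merely a quick way to eyeball which shape a given connection string has.</p>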
<p>The part where I needed this detection most was deciding how to instantiate the MessagingFactory. This is the relevant piece of code - note that <code class="language-plaintext highlighter-rouge">MessagingFactory.Create()</code> with the NamespaceManager address passed in only works in the public cloud, not with Service Bus 1.1 on Windows Server:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>class CreatesMessagingFactories : ICreateMessagingFactories
{
    // ... earlier stuff in that class including 'DetectPrivateCloudConnectionString' ...
    ICreateNamespaceManagers createNamespaceManagers;

    public CreatesMessagingFactories(ICreateNamespaceManagers createNamespaceManagers)
    {
        this.createNamespaceManagers = createNamespaceManagers;
    }

    public MessagingFactory Create(Address address)
    {
        var potentialConnectionString = address.Machine;
        var namespaceManager = createNamespaceManagers.Create(potentialConnectionString);
        // mszcool - Updated to detect if Service Bus 1.1 for Windows Server is used
        if (DetectPrivateCloudConnectionString(potentialConnectionString))
        {
            // mszcool - Need this approach because different ports are used for control and transport endpoints
            return MessagingFactory.CreateFromConnectionString(potentialConnectionString);
        }
        else
        {
            var settings = new MessagingFactorySettings
            {
                TokenProvider = namespaceManager.Settings.TokenProvider,
                NetMessagingTransportSettings =
                {
                    BatchFlushInterval = TimeSpan.FromSeconds(0.1)
                }
            };
            return MessagingFactory.Create(namespaceManager.Address, settings);
        }
    }
}
</code></pre></div></div>
<p>Finally, with those fixes incorporated, I was able to get almost everything working, with all except two tests passing for now. For the proof-of-concept that is sufficient, since it proves that the partner can achieve what they need to achieve.</p>
<h2 id="part-4-see-it-in-action">Part #4: See it in Action</h2>
<p>Now comes the cool part - testing it and seeing it in action. The samples I’ve added to the git repository are simple messaging examples which I modified from the <a href="http://docs.particular.net/samples/non-durable-messaging/">NServiceBus samples repository</a>. Note that I took the non-durable MSMQ sample as the proof, since the starting point for the partner was MSMQ and I wanted something super simple to start with. That is just how far I got; eventually I’ll try other samples (but no promise at this time:)). Below is the code snippet of the sender - the receiver looks nearly identical and you can look it up in my repository on GitHub:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>static void Main()
{
    string connStr = System.Environment.GetEnvironmentVariable("AzureServiceBus.ConnectionString");
    Console.Title = "Samples.MessageDurability.Sender";
    #region non-transactional
    BusConfiguration busConfiguration = new BusConfiguration();
    busConfiguration.Transactions()
                    .Disable();
    #endregion
    busConfiguration.EndpointName("Samples.MessageDurability.Sender");
    busConfiguration.ScaleOut().UseSingleBrokerQueue();
    busConfiguration.UseTransport<AzureServiceBusTransport>()
                    .ConnectionString(connStr);
    busConfiguration.UseSerialization<JsonSerializer>();
    busConfiguration.EnableInstallers();
    busConfiguration.UsePersistence<InMemoryPersistence>();
    using (IBus bus = Bus.Create(busConfiguration).Start())
    {
        bus.Send("Samples.MessageDurability.Receiver", new MyMessage());
        Console.WriteLine("Press any key to exit");
        Console.ReadKey();
    }
}
</code></pre></div></div>
<p>Here’s the code in action and working. You see what it produced on my on-premises Service Bus as well as the log output of the console windows. Note that the message-handler part of the receiver outputs that it received a message.</p>
<p><img src="/images/posts2016/20160401-figure03.png" alt="RoyalTsAzureOnPremSetupTestSuccessful" /></p>
<p>One thing I played around with was having two receivers; that’s why you see two output lines for one single message in my receiver. To clarify, here’s the code of MyHandler.cs from the Receiver project which outputs those lines:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public class MyHandler : IHandleMessages<MyMessage>
{
    static ILog logger = LogManager.GetLogger<MyHandler>();

    public void Handle(MyMessage message)
    {
        logger.Info("Hello from MyHandler");
    }
}

public class MyHandler2 : IHandleMessages<MyMessage>
{
    static ILog logger = LogManager.GetLogger<MyHandler2>();

    public void Handle(MyMessage message)
    {
        logger.Info("Hello from MyHandler2!");
    }
}
</code></pre></div></div>
<h2 id="final-words">Final Words</h2>
<p>I think this proof-of-concept, which we built alongside other aspects we covered for that global software vendor partner in the UK, demonstrates several things:</p>
<ul>
<li>That it is possible to have a solution that works (nearly) seamlessly on-premises and in the public cloud with largely the same code base.
<ul>
<li>The situation should DRAMATICALLY improve once Microsoft has released the <a href="https://azure.microsoft.com/en-us/overview/azure-stack/">Azure Stack</a>, which is the successor of what I’ve used here (which was the <a href="https://www.microsoft.com/en-us/server-cloud/products/windows-azure-pack/">Azure Pack</a>).</li>
<li>We can expect that the <a href="https://azure.microsoft.com/en-us/overview/azure-stack/">Azure Stack</a> will deliver a much more up-to-date and consistent experience with Azure in the public cloud once it is fully available, incl. Service Bus.</li>
</ul>
</li>
<li>That <a href="http://docs.particular.net/nservicebus/azure/azure-servicebus-transport">NServiceBus</a>, one of the most important 3rd-party middleware frameworks, plays very well with Azure and can also be used with Service Bus on-premises with some caveats (like back-porting the transport library).
<ul>
<li>An alternative, which I also tried to demonstrate with the simple sample, would be to use the MSMQ NServiceBus transport on-premises and the AzureServiceBus transport from NServiceBus for public cloud deployments. As long as only features supported on both sides are used, that might be the preferred way, since then you fully rely on code delivered by NServiceBus without any changes.</li>
</ul>
</li>
</ul>
<p>Note that my attempts are meant to be a proof-of-concept only. You can look at them, try them and even apply them to your solutions, fully at your own risk:)</p>
<p>I think it was a great experience working with the team in UK on this part of a larger Proof-of-Concept (which also included e.g. <a href="https://azure.microsoft.com/en-us/documentation/services/service-fabric/">Azure Service Fabric</a> for software that needs to be portable between on-premises and the public cloud but wants to make use of a true Platform-as-a-Service foundation).</p>
<p>I hope you enjoyed reading this and found it interesting and useful.</p>Mario SzpusztaGit - Removing (accidentally added) secrets from the history2016-03-01T11:00:00+00:002016-03-01T11:00:00+00:00http://blog.mszcool.com/wordpressarchive/2016/03/01/git-removing-secrets-from-history<p><strong><em>[restored from my Wordpress blog]</em></strong></p>
<p>I really do like Git a lot, and even for my private projects I use it as the default. But some aspects of it are quite tricky. A well-known practice is that you should never check in secrets or things you don’t want to share with others into a Git repository. That is especially important with public repositories hosted on e.g. GitHub.</p>
<p>Well, saying you should not and actually not forgetting about it are two different things. Sometimes it just happens. And even if you are careful with secrets, it can also be other stuff you checked in but didn’t want to share with others. So it happened to me when I wrote the last blog-post about <a href="http://blog.mszcool.com/index.php/2016/02/my-developer-machine-setup-automation-script-chocolatey-powershell-published/">automating my developer machine setup</a> and published my <a href="https://github.com/mszcool/devmachinesetup/">Machine Setup Script</a>.</p>
<!--more-->
<h2 id="the-secrets-in-the-history-on-github">The secrets in the history on GitHub!?</h2>
<p>As explained in <a href="http://blog.mszcool.com/index.php/2016/02/my-developer-machine-setup-automation-script-chocolatey-powershell-published/">my previous blog-post</a>, in the setup automation script I use for re-setting up a fresh developer machine I also clone a handful of repositories which are of relevance to me and/or to which I contributed some code. The majority of those repositories are public on GitHub. But some of them are from real-world projects with our customers and partners which are hosted in a private VSTS environment we run for our global team. I accidentally published that list of <code class="language-plaintext highlighter-rouge">git clone</code> commands as well.</p>
<p>No passwords, no secrets - but the repository names sometimes contained the names of the partners/customers and some of that work is not done or public, yet. So even though these were not secrets, I am not supposed to share them, yet.</p>
<p>Unfortunately, I realized that only after a few check-ins. So the entire history in my public GitHub repository contained those repository-names from an internal VSTS environment which I didn’t want to share. Damn… the post is out, the link points to the repository… what to do?</p>
<h2 id="how-to-remove-secretscontent-from-the-entire-history-with-git">How-to remove secrets/content from the entire history with Git?</h2>
<p>Of course the “easy” way for this specific case would have been to delete the repository and re-create a new one with the fixed file published. That works for cases where the history is not really important and where you have a truly small repository. In other words, it works for samples and the likes. But I had even received a pull request for that file which I didn’t want to lose, either. By all means, deleting and re-creating is not something that should be considered a solution for this problem.</p>
<p>So, I did a little Internet search and came across something that can save many GitHub repositories from mandatory deletion when things need to be removed from the entire history:</p>
<p><a href="https://rtyley.github.io/bfg-repo-cleaner/">The BFG Repo Cleaner</a></p>
<p>This is an awesome tool if you ran into the problem I’ve had. Let’s say you have published something into a Git-repository across multiple commits and pushes that you want to get rid of from the entire history. All you need to do are the following steps:</p>
<ol>
<li>Download the <a href="https://rtyley.github.io/bfg-repo-cleaner/">BFG Repo Cleaner</a> into a local directory of your choice.
<ol>
<li>The app is written in Scala.</li>
<li>It requires a Java runtime on your machine.</li>
<li>It is distributed as a JAR package that contains all dependencies.</li>
</ol>
</li>
<li>Open up a command prompt and switch to a temporary directory.
<ol>
<li>I did this in a temp-directory because it requires a new <code class="language-plaintext highlighter-rouge">git clone --mirror</code> of your repository which is a 1:1 mirror of the remote repository.</li>
<li>After that you need to push that mirror back to the remote repository again. And then you can delete the mirror and return to your ordinary repository clone.</li>
</ol>
</li>
<li>Perform a clone of your repository with the option <code class="language-plaintext highlighter-rouge">--mirror</code> (I am using my devmachinesetup-repo here since I had to do it with this one, so just replace <strong>devmachinesetup</strong> with any of your repository-names in the commands below).
<ol>
<li><code class="language-plaintext highlighter-rouge">git clone --mirror https://github.com/mszcool/devmachinesetup.git</code></li>
<li>This clones a mirror of your remote repository with the entire history into a sub-folder of the current folder called <code class="language-plaintext highlighter-rouge">devmachinesetup.git</code>.</li>
</ol>
</li>
<li>Stay in the folder that contains the <code class="language-plaintext highlighter-rouge">devmachinesetup.git</code> folder with the mirrored repository in it.</li>
<li>Create a text file that contains the text you want to purge from the history of all files in your git repository.
<ol>
<li>Each line contains a string (incl. spaces, special characters etc.) that you want to remove. In my case these strings were the complete <code class="language-plaintext highlighter-rouge">git clone <<repositoryname>></code> commands which I wanted to remove from the history of commits of the script in the repository. Each line in this text-file contained one of those entire commands.</li>
<li>BFG searches every file in your git-mirror folder and replaces each instance of each lines from the text-file with the text <code class="language-plaintext highlighter-rouge">*** REMOVED ***</code> in the target files of the repository.</li>
<li>
<p>A little sample excerpt for how the content of that text file shows, how simple it is - in my case it was just one git-clone per line which I wanted to remove from the history:</p>
<p><code class="language-plaintext highlighter-rouge">git clone https://xyz.visualstudio.com/_DefaultCollection/first.git</code> <br />
<code class="language-plaintext highlighter-rouge">git clone https://xyz.visualstudio.com/_DefaultCollection/second.git</code><br />
<code class="language-plaintext highlighter-rouge">git clone https://xyz.visualstudio.com/_DefaultCollection/third%20complex%20name.git thirdrepo</code></p>
</li>
</ol>
</li>
<li>Execute the BFG command. Note that BFG is based on the Java-runtime, so either add the folder with BFG JAR-package to your CLASSPATH environment variable or specify the full path to the JAR-package when executing Java. This looks similar to:
<ol>
<li><code class="language-plaintext highlighter-rouge">java -jar C:\Temp\bfg-1.12.8.jar --replace-text myunwantedtext.txt devmachinesetup.git</code></li>
<li>Note that when you download the BFG JAR package, the version in the name of the .jar-file might be different.</li>
<li>The file <strong>myunwantedtext.txt</strong> contains the full list of unwanted-text lines created in the previous step.</li>
</ol>
</li>
<li>Now BFG has replaced the unwanted content in the local clone. Last-but-not-least you need to push that one back to the remote repository.
<ol>
<li>Again, in your command prompt window remain in the directory which contains the <code class="language-plaintext highlighter-rouge">devmachinesetup.git</code> sub-directory with your git-mirror.</li>
<li>Execute <code class="language-plaintext highlighter-rouge">git push</code> to push the mirror back.</li>
</ol>
</li>
</ol>
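<p>Putting the steps above together, here is a self-contained sketch of the whole mirror-rewrite-push workflow. Since BFG is a separate download (and needs a Java runtime), the sketch substitutes git’s built-in <code class="language-plaintext highlighter-rouge">git filter-branch</code> - mentioned at the end of this post - for the BFG invocation; the surrounding mirror workflow is identical. All repository names and the unwanted string are made up for this demo, and a throw-away local bare repository stands in for the GitHub remote:</p>

```shell
# Demo only: local bare repo stands in for GitHub; filter-branch stands in for BFG.
set -e
export FILTER_BRANCH_SQUELCH_WARNING=1   # newer git versions warn about filter-branch
cd "$(mktemp -d)"

# Stand-in for the remote repository, with an unwanted "git clone" line in its history.
git init -q --bare remote.git
git clone -q remote.git work
(
  cd work
  git config user.email "me@example.com" && git config user.name "me"
  echo 'git clone https://xyz.visualstudio.com/_DefaultCollection/first.git' > Install.ps1
  git add Install.ps1 && git commit -q -m "machine setup script"
  git push -q origin HEAD
)

# Step 3: mirror-clone the repository.
git clone -q --mirror remote.git devmachinesetup.git
cd devmachinesetup.git

# Steps 5+6 stand-in: rewrite every commit, replacing the unwanted text
# (BFG does the same job much faster: java -jar bfg.jar --replace-text myunwantedtext.txt).
git filter-branch --tree-filter \
  "sed -i 's|git clone https://xyz.visualstudio.com[^ ]*|*** REMOVED ***|' Install.ps1" \
  -- --all

# filter-branch keeps backup refs under refs/original/ - drop them, or the
# mirror push would carry the unwanted history right back to the remote.
git for-each-ref --format='%(refname)' refs/original/ | xargs -r -n 1 git update-ref -d

# Step 7: push the rewritten mirror back.
git push -q
cd ..
```

<p>A fresh clone of the remote afterwards shows only the replaced text in the history. With the real BFG, the only difference is that the filter-branch line becomes the <code class="language-plaintext highlighter-rouge">java -jar bfg.jar --replace-text</code> call from step 6.</p>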
<p>That’s it, you’re done. After I executed the steps above on my repository, I checked several commits online to verify it worked. In your case, the result should look similar to what I achieved once you’ve completed the steps above:</p>
<p><img src="https://txfzva.dm2303.livefilestore.com/y3pwjYBEhhqiChyKL65x0Lxtz6uWAYYezMAYtEOSXRrizZd3-NtBXQ7zHCMdUWKGoT9cD5ts4KmWCiUbDCHOTCnvAeBHXOrskdViZCdSE2NCXTat1yLG_flOTUhb67ctxlLTOJKFwEXClYSWypmFFPLQGa0OsHMbCYSeUeBAYpyDzE/20160301-Git-Removing-Secrets-Figure-01.png?psid=1" alt="Results of Removing unwanted content" /></p>
<h2 id="final-words">Final words</h2>
<p>Removing unwanted content from the entire history of a Git-repository is needed sometimes. Whether it’s about accidental commits of secrets or other (sensitive) content or e.g. large files you want to clean up from your repository.</p>
<p>The <a href="https://rtyley.github.io/bfg-repo-cleaner/">BFG Repo Cleaner</a> is a handy tool for such cases. It can indeed be used for cases such as the one I described. But it also contains options for other cases such as removing large files from the history of your repository which are not needed there, anymore.</p>
<p>BFG is cool and handy, but if you need more advanced scenarios, you might need to fall back to the way more powerful, yet much more complex <code class="language-plaintext highlighter-rouge">git-filter-branch</code> tool (<a href="http://git-scm.com/docs/git-filter-branch">here</a>). I guess that for 80% of the cases, BFG might be good enough, and given that it is super-easy to use, I’d give it a chance first before digging through the docs of <code class="language-plaintext highlighter-rouge">git-filter-branch</code>. Kudos to the folks who built BFG… great job and thank you very much for saving my day (I will donate;))…</p>Mario Szpuszta