Single-Sign-On with SAP HANA, Azure Active Directory and Office 365

Single-Sign-On with Azure Active Directory for HANA

At last SAP Sapphire (May 2017) we announced several improvements and new offerings for SAP on Azure, as you can read here. The most prominent ones are additional HANA certifications as well as SAP Cloud Platform on Azure (as you can read in my last blog post, which focused specifically on SAP CP).

One of the less visible announcements, despite being mentioned, is the broad support for enterprise-grade Single-Sign-On across many SAP technologies with Azure Active Directory. This post is solely about one of these offerings – HANA integration with Azure AD.

Pre-Requisites for HANA / AAD Single-Sign-On

Integrating HANA with Azure AD (AAD) as the primary Identity Provider works for HANA instances running anywhere (on-premises, any public IaaS, Azure VMs or SAP HANA Large Instances in Azure). The only requirement is that the end user accessing the apps running inside the HANA instance (Web Administration, XSA, Fiori) has Internet access to be able to sign in against Azure AD.

For this post, I start with an SAP HANA Instance that runs inside of an Azure Virtual Machine. You can deploy such HANA instances either manually or through the SAP Cloud Appliance Library.

In addition to running HANA itself, I installed XRDP on the Linux VM in Azure and SAP HANA Studio inside the virtual machine, to be able to perform the necessary configuration in both the XSA Administration Web Interface and HANA Studio as needed.

Finally, you need to have access to an Azure Active Directory tenant for which you are the Global Administrator or have the appropriate permissions to add configurations for Enterprise Applications to that Azure AD Tenant!

The following figure gives an overview of the HANA VM environment I used for this blog post. The important part is the Azure Network Security Group, which opens up the HTTP and HTTPS ports for HANA; these follow the patterns 80xx for HTTP and 43xx for HTTPS, where xx is the two-digit instance number.

HANA VM in Azure Overview
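
If you build the VM yourself rather than through the SAP Cloud Appliance Library, you also need to create matching inbound rules on that Network Security Group. Here is a hedged sketch with the Azure cross-platform CLI: the resource group, NSG and rule names are placeholders, port 4300 assumes HANA instance number 00, and the exact option names may vary slightly between CLI versions.

azure network nsg rule create --resource-group my-hana-rg --nsg-name my-hana-nsg \
  --name allow-hana-xs-https --protocol Tcp --destination-port-range 4300 \
  --access Allow --priority 200 --direction Inbound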

Azure Active Directory Marketplace instead of manual configuration

SAP HANA is configured through the Azure Active Directory Marketplace rather than the regular App Registration model used for custom-developed apps in Azure AD. There are several reasons for this; the most important ones are outlined here:

  • SAML-P is required. Most SAP assets use SAML-P for web-based Single-Sign-On. While this is possible in Azure AD when setting it up manually with advanced options, that requires Azure AD Premium Edition. For offerings from the Azure AD Marketplace (Gallery), the standard edition is sufficient. That's not the primary reason, but it's a neat one!
  • Entity Identifier formats for SAP assets. When registering an application in Azure AD through the regular App Registration model, Application IDs (Entity IDs in federation metadata documents) are required to be URNs with a protocol prefix (xyz://…). SAP applications use Entity IDs that are arbitrary strings not following any specific format, hence a regular app registration does not work. Again, this challenge could be solved through the Enterprise App Integration in AAD Premium, but when taking the pre-configured offering from the marketplace, you don't need to take care of such things!
  • Name ID formats in issued SAML tokens. Users are typically identified using Name ID assertions (claims). In requests, Azure AD accepts nameid-format:persistent, nameid-format:emailAddress, nameid-format:unspecified and nameid-format:transient, all of which are documented here in detail. The challenge here is:
    • HANA sends requests with nameid-format:unspecified.
    • This leads to Azure AD selecting the format for uniquely identifying a user.
    • But HANA expects the Name ID claim to contain the plain user name (johndoe instead of domain\johndoe or johndoe@domain.com).
    • This leads to a mismatch and HANA not detecting the user as a valid user even if the user exists inside of the HANA system!

    The Azure AD Marketplace item is configured and on-boarded in a way that resolves this technical challenge.

  • Pre-configured claims. While this is not needed for HANA specifically, for most of the other SAP-related offerings the marketplace-based integration pre-configures the SSO setup with the claims/assertions typically required by the respective SAP technology.

Step #1 – Register HANA in Azure Active Directory

Assuming you have HANA running in a VM as I explained earlier in this post, the first step to configure Azure AD as an Identity Provider for HANA is to add HANA as an Enterprise Application to your Azure AD Tenant. You need to select the offer as shown in the screen shot below:

Selecting the HANA AAD Gallery Offering

In the first step, you just need to specify a display name for the app as it will be shown in the Azure AD management portal. The details are configured later as next steps. You can even get more detailed instructions directly from within the Azure AD management portal: just open the Single Sign-On section, select SAML-based Sign-On in the dropdown at the very top, then scroll to the bottom and click the button for detailed demo instructions.

Detailed Demo instructions for SAML-P

If you’re filling out the SAML-P Sign-In settings according to these instructions, you’re definitely on a good path. So, let’s just walk through the settings so you get an example of what you need to enter there:

  • Identifier: should be the Entity ID which HANA uses in its federation metadata. It needs to be unique across all enterprise apps you have configured. I'll show you later in this post where you can find it; essentially you need to navigate to HANA's federation metadata in the XSA Administration Web Interface.
  • Reply URL: use the XSA SAML login endpoint of your HANA system for this setting. My Azure VM had a public IP address bound to the Azure DNS name marioszpsaphanaaaddemo.westeurope.cloudapp.azure.com, therefore I configured https://marioszpsaphanaaaddemo.westeurope.cloudapp.azure.com:4300/sap/hana/xs/saml/login.xscfunc for it.
  • User Identifier: this is one of the most important settings you must not forget. The default, user.userprincipalname, will NOT work with HANA. You need to select the function ExtractMailPrefix() in the dropdown and pass user.userprincipalname as the Mail parameter of that function. For a user such as johndoe@domain.com, the issued Name ID then becomes just johndoe, which is what HANA expects.

Detailed Settings Visualized

Super-Important: Don’t ignore the information-message shown right below the certificate list and the link for getting the Federation Metadata. You need to check the box Make new certificate active so that the signatures will be correctly applied as part of the sign-in process. Otherwise, HANA won’t be able to verify the signature.

Step #2 – Download the Federation Metadata from Azure AD

After you have configured all settings, you need to save the SAML configuration before moving on. Once saved, download the federation metadata for configuring SSO with Azure AD within the HANA administration interfaces. The previous screen shot highlights the download button in the lower right corner.

Downloading the federation metadata document is the easiest way to get the required certificate and the name / entity identifier configured in your target HANA system.

Step #3 – Login to your HANA XSA Web Console and Configure a SAML IdP

We have done all required configurations on the Azure AD side for now. As a next step, we need to enable SAML-P Authentication within HANA and configure Azure AD as a valid identity provider for your HANA System. For this purpose, open up the XSA web console of your HANA System by browsing to the respective HTTPS-endpoint. For my Azure VM, that was:

https://marioszpsaphanaaaddemo.westeurope.cloudapp.azure.com:4300/sap/hana/xs/admin

Of course, HANA will still redirect you to a forms-based login page because we have not configured SAML-P yet. So, sign in with your current XSA administrator account to start the configuration.

Tip: take note of the forms-authentication URL. If you break something in your SAML-P configuration later down the road, you can always use it to sign back in via forms authentication and fix the configuration! The respective URL to take note of is: https://marioszpsaphanaaaddemo.westeurope.cloudapp.azure.com:4300/sap/hana/xs/formLogin/login.html?x-sap-origin-location=%2Fsap%2Fhana%2Fxs%2Fadmin%2F.

Now the federation metadata document downloaded in Step #2 above becomes relevant. In the XSA web interface, navigate to SAML Identity Providers and click the "+" button at the bottom of the screen. In the form that opens, just paste the downloaded federation metadata document into the large text box at the top of the screen. This does most of the remaining work for you, but you still need to fix a few fields:

  • The name in the General Data section must not contain any special characters or spaces.
  • The SSO URL is not filled by default since we don't have it in the AAD metadata yet. You need to fill it in manually as per the guidance from within the Azure AD portal shown above in this post.

HANA SAML IdP Data Filled

Since we are in the HANA XSA tool anyway, this is the right point to show you where I retrieved the information required earlier in the Azure AD portal when registering HANA as an app there – the Identifier shown in the last screen shot from the Azure AD console above.

These details are retrieved from the SAML Service Provider configuration section, as highlighted in the screen shot below. A quick side note: this is one of the rare cases where I constantly needed to switch from Google Chrome to Microsoft Edge. For some reason, I was unable to open the metadata tab in Chrome, while Edge reliably shows the metadata tab with the entire federation metadata document for this HANA instance. From there, you can also grab the Identifier required for Azure AD, since it is the Entity ID inside that federation metadata document.

HANA SAML Federation Metadata

Ok, we have configured Azure AD as a valid IdP for this HANA system, but we have not actually enabled SAML-based authentication for anything yet. This happens at the level of the applications managed by the XS environment inside of HANA (that's how I understand it with my limited HANA knowledge:)). You can enable SAML-P on a per-package basis inside of XSA, which means it is fully up to you to decide for which components you enable SAML-P and for which you stay with other authentication methods. Below is a screen shot that enables SAML-P for an SAP-provided package – but a word of warning: enabling SAML-P there might also have an impact on other systems interacting with those packages. They should probably also support SAML-P as a means of authentication, especially if you disable the other options entirely!

HANA SAML Federation Metadata

By enabling the sap-package for SAML-P, we get SSO based on Azure AD for a range of built-in functions including the XSA web interface, but also Fiori-interfaces hosted inside of the HANA instance for which you configured the setting.

Step #4 – Troubleshooting

So far so good – seems we could try it out, right? Let's log out, open an in-private browsing session with the browser of your choice and navigate to the HANA XSA Administration application again. You will see that this time you get redirected to Azure AD by default for signing into the HANA system. Let's see what happens when trying to log in with a valid user from the Azure AD tenant.

HANA SAML Federation Metadata

Seems the login was not so successful. The big question is why. This is where we need access to the HANA system with HANA Studio and to the system's trace log. For my configuration, I installed XRDP on the Linux machine and run HANA Studio directly on that machine. So, the best way to start is connecting to the machine, starting HANA Studio and navigating to the system configuration settings.
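
If you prefer a shell over HANA Studio for reading the traces, the same information ends up in the trace files on the Linux VM itself. A hedged sketch: the SID HDB, instance number 00 and the xsengine process are assumptions from my setup; on systems with an embedded XS engine the relevant file would be the indexserver trace instead.

# Follow the XS engine trace file on the HANA VM (standard HANA trace path pattern)
tail -f /usr/sap/HDB/HDB00/$(hostname)/trace/xsengine_*.trc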

HANA Diagnosis for Sign-In Failing

The error message is kind of confusing and misleading, though. When onboarding HANA into the AAD marketplace we spent some time figuring out what was going wrong. So much up front: Fiddler traces and certificate issues were not the problem! The resolution is found in an entirely different place. Nevertheless, I wanted to show this here because it is extremely valuable to understand how to troubleshoot things when they are not going well.

The main reason for this failure is a mismatch in timeout configurations. The signatures are created based on timestamps, and one of those timestamps ensures that authentication messages are valid only for a given amount of time. That time is set to a very low limit in HANA by default, resulting in this quite misleading error message.

Anyway, to fix it, you stay in the system-level properties within HANA Studio and make some adjustments. In the Configuration tab of the system properties, just filter the settings by SAML and adjust the assertion_timeout setting. It is impossible to complete an entire user-driven sign-in process within 10 seconds: the user navigates to a HANA app, gets redirected to Azure AD, needs to enter her/his username and password, then eventually there is multi-factor authentication involved, and finally upon success the user gets redirected back to the respective HANA application. Impossible within 10 seconds. So, in my case, I adjusted it to two minutes.

HANA Diagnosis for Sign-In Failing
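
If you would rather make this change from the command line than in HANA Studio, here is a hedged sketch using hdbsql. The instance number 00 and the SYSTEM user are assumptions from my setup, and the ini file may be xsengine.ini or indexserver.ini depending on whether XS runs as a separate process or embedded; the parameter is the assertion_timeout setting shown above, set to two minutes.

hdbsql -i 00 -u SYSTEM -p '<password>' \
  "ALTER SYSTEM ALTER CONFIGURATION ('xsengine.ini', 'SYSTEM') SET ('saml', 'assertion_timeout') = '120' WITH RECONFIGURE"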

By the way, this behavior and the required configuration are also documented in an official SAP Support Note, as an outcome of the collaboration between us and the SAP HANA team while enabling SSO with Azure AD (thanks for the great collaboration, again:)):

SAP Support Note 2476310 – SAML SSO to HANA System Using Microsoft AAD – Your Browser Shows the Error “Assertion did not contain a valid Message ID”

Ok, time for the next attempt. If you still get the same error message about not being able to validate the signature, you probably forgot something earlier in the game. Make sure that, when configuring HANA in Azure AD, you activated the certificate by checking the Make new certificate active checkbox I mentioned earlier… below is the same screen shot with the important informational message, again!

Don't forget Make new certificate active

Step #5 – Configuring a HANA Database User

If you've followed all the steps so far, the sign-in with a user from Azure AD will still not succeed. Again, the trace logs from HANA give more insight into what's going on and why the sign-in is failing this time.

Trace about User does not exist in HANA

HANA is complaining that it does not know the user. This is a fair complaint, since Azure AD (or any other SAML identity provider) takes care of authentication only. Authorization needs to happen in the actual target system (the service provider, also often called the relying party application). To be able to authorize, the user needs to be known to the service provider, which means at least some sort of user entity needs to be configured.

  • With HANA, that means you essentially create a database user and enable Single Sign-On for this database user.
  • HANA then uses the NameID assertion from the resulting SAML token to map the user authenticated by the IdP – Azure AD in this case – to a HANA database user. This is why the format of the NameID in the issued token is so important, and why we had to configure the ExtractMailPrefix() strategy in the Azure AD portal as part of Step #1.

So, to make all of this happen and finally get to a successful login, we need to create a user in HANA, enable SSO and make sure that user has the appropriate permissions in HANA to e.g. access Fiori Apps or the XSA Administration Web Interface. This happens in HANA Studio, again.

Detailed Settings Visualized

Super-Important: The left-most part of the figure above visualizes the mapping from the SAML token's perspective: it defines the IdP as per the previous configuration and the user name as it will be set in the NameID assertion of the resulting SAML token. With Azure AD users, these will mostly be lower-case – and case matters here! Make sure you enter the value in lower-case, otherwise you'll get a weird message about dynamic user creation failing!

The next step is to make sure the user has the appropriate permissions. As a non-HANA expert, I just gave the user all permissions to make sure I could show success as part of this demo. Of course, that's not a best practice – you should only grant the permissions appropriate for your use cases.

Detailed Settings Visualized
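
For those who prefer SQL over clicking through HANA Studio, the same user setup can be sketched with hdbsql as follows. This is a hedged sketch only: AZURE_AD is assumed to be the name you gave the SAML identity provider in Step #3, johndoe is the lower-case mail prefix of the Azure AD user, and CONTENT_ADMIN is just a broad built-in role standing in for the "all permissions" demo approach described above.

# Create a database user mapped to the lower-case NameID issued by Azure AD
hdbsql -i 00 -u SYSTEM -p '<password>' \
  "CREATE USER johndoe WITH IDENTITY 'johndoe' FOR SAML PROVIDER AZURE_AD"
# Grant a broad role purely for the demo -- scope this down for real systems
hdbsql -i 00 -u SYSTEM -p '<password>' \
  "GRANT CONTENT_ADMIN TO johndoe"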

Step #6 – A Successful Login

Finally, we made it! If you have completed all the steps above, you can start using HANA with full Single-Sign-On across applications that are also integrated with your Azure AD tenant. For example, the screen shot below shows my globaladmin user account signing into the HANA test VM I used, navigating to the HANA XSA Administration web console and then navigating from there to Office 365 Outlook… It all works like a charm without being required to enter credentials again!

Detailed Settings Visualized

That is kind of cool, isn't it? It even works when navigating back and forth between those environments, and the scenario applies to any application that runs inside the XS environment.

But for now, at least for enterprise administrators, it means they can secure very important parts of their HANA systems with a proven identity platform using Azure AD. They can even configure Multi-Factor Authentication in Azure AD and thus protect HANA environments even further, alongside other applications using the same Azure AD tenant as an Identity Provider.

Final Words

What I showed here is the simplest possible way of integrating Single-Sign-On with SAP applications using Azure AD. SAP NetWeaver would be similarly simple, as documented here. There is even a more detailed tutorial available on the SAP blogs for the Fiori Launchpad on NetWeaver, based on these efforts, here.

The tip of the iceberg is the most advanced SSO integration we've implemented, with SAP Cloud Platform Identity Authentication Services. It gives you centralized SSO management through both companies' Identity-as-a-Service offerings (Azure AD, SAP Cloud Platform Identity Services). As part of that offering, SAP even includes automated identity provisioning, which removes the need for manually creating users as we did above.

I think, over the past year, we achieved a lot with the partnership between SAP and Microsoft. But if you ask for my personal opinion, I think the most significant achievements are HANA on Azure (of course, right:)), SAP Cloud Platform on Azure and … the Single-Sign-On Offerings across all sorts of SAP technologies and services!

I hope you found this super-interesting. It is most probably my last blog post as a member of the SAP Global Alliance Team from the technical side, since I am moving on to the customer-facing part of Azure Engineering (Azure Customer Advisory Team) as an engineer. Still, I am part of the family and will engage as needed with SAP out of my new role, that's for sure!

Azure & Cloud Foundry – Setting up a Multi-Cloud Environment

This week I was presenting at the Cloud Foundry Summit 2016 Europe in Frankfurt, of course about running Cloud Foundry on Azure and Azure Stack. It was great being here, especially because one of the two main global ISV partners I am working with on the engineering side was here as well and is even a Gold sponsor of the event. It was an honor and a great pleasure for me to be part of this summit … and great to finally have a technical session at a non-Microsoft conference again:)

Indeed, one reason for this blog post is that I ran out of time during my session and was able to show only small parts of the last demo.

Anyway, let's get to the more technical part of this blog post. My session was all about running CF in public, private and hybrid clouds with Azure being involved in some way. This is highly relevant since most enterprises are driving a multi-cloud strategy of some sort:

  • Either they are embracing Hybrid cloud and run deployments in the public cloud as well as in their own data centers for various reasons or
  • they want to distribute and minimize risk by running their solutions across two (or more) public cloud providers.

Even though my session was focused on running Cloud Foundry on Azure, many of the concepts and architectural insights presented can be re-used for other kinds of deployments with other cloud vendors or private clouds as well.

The basics – Running Cloud Foundry on Azure and Pivotal

Microsoft has developed a Bosh CPI that enables bosh-based deployments of Cloud Foundry on Azure. The CPI is entirely developed as an Open Source Project and contributed to the Cloud Foundry Incubator project on GitHub.

Based on this CPI, there are two main ways of deploying Cloud Foundry clusters on Microsoft Azure:

There is very detailed guidance available in those GitHub repositories explaining all the details. I would suggest following this one, since it is by far the easiest: Deploy Cloud Foundry on Azure – and always follow the "via ARM templates" suggestions of the docs.
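
If you go the ARM template route, the actual deployment boils down to something along the following lines with the Azure cross-platform CLI. Treat this as a hedged sketch only: the resource group name and location are placeholders, and azuredeploy.json / azuredeploy.parameters.json stand in for the template and parameter files you get from that guidance.

azure config mode arm
azure group create cf-on-azure westeurope
azure group deployment create --resource-group cf-on-azure --name cf-deployment \
  --template-file azuredeploy.json --parameters-file azuredeploy.parameters.json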

Finally, in addition to Azure, to completely follow this post you need a second CF cluster running in another cloud. By far the easiest way is to set up a trial account on Pivotal Cloud, which provides you with a kind of "Cloud-Foundry-as-a-Service". Follow these steps here to do so…

A Multi-Cloud CF Architecture with Azure on one side

There are many reasons for multi-cloud environments. Some include running parts in private clouds for legal and compliance reasons, while others include spreading risk across multiple cloud providers for disaster recovery. The example in this post is focused exactly on the multi-cloud DR case, since it covers two public cloud providers:

architecture

  • Azure Traffic Manager acts as a DNS-based load balancer. We will configure Traffic Manager with a priority policy, which essentially directs traffic based on priority; if one cloud has a failure, Traffic Manager routes traffic to the other cloud.
  • The Azure Load Balancer is a component you get "for free" in Azure and don't really need to take care of. It balances traffic across the front nodes of your CF cluster and is automatically configured for you if you follow the guidance above for deploying CF on Azure.
  • Inside each CF cluster, we need to make sure to register the DNS names used by Traffic Manager and configure the CF routers appropriately to route requests for those names to our apps in the CF cluster.

Setting up traffic manager

Let's start with setting up the Azure Traffic Manager, since we'll need its domain name for the configuration of the apps in both Cloud Foundry targets. You can just add Azure Traffic Manager as a resource to the resource group of your Cloud Foundry deployment or to any other resource group. In my case, I deployed the Traffic Manager in a separate resource group, as shown in the following screen shot:

Traffic Manager Setup

The important piece to take away for now is the domain name of your Traffic Manager profile. The actual endpoints for Traffic Manager do not need to be configured at this point in time – we will look at that later.

Deploying the sample app to Pivotal Web Services

As a next step, we need to deploy the sample application to Pivotal Web Services and take note of the (probably random) domain name it has associated with the application.

$pivotalApiEndpoint="api.run.pivotal.io"
# $pivotalOrg and $pivotalSpace hold the names of your own Pivotal org and space
cf login -a $pivotalApiEndpoint
cf target -o $pivotalOrg -s $pivotalSpace
cf push -f ./sampleapp/manifest.yml -p ./sampleapp
cf set-env multicloudapp REGION "Pivotal Cloud"
cf restage multicloudapp

To get the domain name and IP, just execute a cf app multicloudapp and take note of the domain name as shown in the following figure:

Pivotal App Domain Name

Deploying the App into Cloud Foundry on Azure

The deployment of the sample app into Azure goes exactly the same way, except that we’ll need to use different API end-points, organization names and spaces inside of Cloud Foundry:

# $azureCfPublicIp is the public IP of the CF deployment on Azure (see below),
# $azureOrg and $azureSpace are the org and space names in that cluster
$azureCfApiEndpoint="api.$azureCfPublicIp.xip.io"
cf login -a $azureCfApiEndpoint
cf target -o $azureOrg -s $azureSpace
cf push -f ./sampleapp/manifest.yml -p ./sampleapp
cf set-env multicloudapp REGION "Microsoft Azure"
cf restage multicloudapp

The Cloud Foundry API end-point I used above is the one that is registered by default when using the ARM-based deployment of open source Cloud Foundry with the Azure Quickstart Templates. The DNS-registration mechanism used there is documented here.
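
In case you have not come across xip.io before: it is a public wildcard DNS service that resolves any name of the form something.<ip>.xip.io to exactly that IP address, which is why no explicit DNS registration is needed for the CF API endpoint. A quick way to convince yourself, using the public IP from my deployment shown further below:

nslookup api.52.169.87.212.xip.io
# resolves to 52.169.87.212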

Also note the environment variable I am setting in the scripts above using cf set-env multicloudapp REGION "xyz". It is used by our sample application (written in Ruby in this case) to output the region the app is running in. That way, we can see whether we are directed to the app deployed in Azure or to the one in Pivotal Web Services.

Finally, if you're new to Azure, the easiest way to find the public IP created for your CF cluster is to look up the public IP address resource inside the resource group of your Cloud Foundry cluster in the Azure Portal. Another way – if you are a shell scripter – is to use the following command with the Azure cross-platform CLI:

azure network public-ip show --resource-group YOUR-RESOURCE-GROUP YOUR-IP-NAME
info:    Executing command network public-ip show
+ Looking up the public ip "YOUR-IP-NAME"
data:    Id                              : /subscriptions/YOUR-SUBSCRIPTION-ID/resourceGroups/YOUR-RESOURCE-GROUP/providers/Microsoft.Network/publicIPAddresses/mszcfbasics-cf
data:    Name                            : YOUR-IP-NAME
data:    Type                            : Microsoft.Network/publicIPAddresses
data:    Location                        : northeurope
data:    Provisioning state              : Succeeded
data:    Allocation method               : Static
data:    IP version                      : IPv4
data:    Idle timeout in minutes         : 4
data:    IP Address                      : 52.169.87.212
data:    IP configuration id             : /subscriptions/YOUR-SUBSCRIPTION-ID/resourceGroups/marioszpCfSimple/providers/Microsoft.Network/networkInterfaces/SOME-ID/ipConfigurations/ipconfig1
data:    Domain name label               : marioszpcfsimple
data:    FQDN                            : marioszpcfsimple.northeurope.cloudapp.azure.com
info:    network public-ip show command OK

Configuring Traffic Manager Endpoints

Next, we need to tell Azure Traffic Manager the endpoints to which it should direct requests arriving at the DNS record registered with Traffic Manager.

In our case, we use a simple priority-based policy, which means Traffic Manager always tries to direct requests to the endpoint with the highest priority unless that endpoint is unresponsive. For full documentation on routing policies, please refer to the Azure Traffic Manager docs.

Traffic Manager Endpoints

As you can see from the above, we have two endpoints:

  • Azure Endpoint which goes against the Public IP that the scripts and Bosh deployed for us when we deployed Cloud Foundry on Azure at the beginning.
  • External Endpoint which goes against the domain name for the app that Pivotal Web Services has registered for us (something like multicloudapp-xyz-abc.cfapps.io).

Let’s give it a try…

Now, in the previous configuration for Traffic Manager, we defined that the Pivotal Deployment has priority #1 and therefore will be preferred by Traffic Manager for Traffic routing. So, let’s open up a browser and navigate to the Traffic Manager DNS name for your deployment (in my screen shots and at my CF session that is marioszpcfsummithybrid.trafficmanager.net):

not working

Of course, a Cloud Foundry veteran immediately spots what that means. I am not a veteran in that area, so I fell into the trap…

Configuring Routes in Cloud Foundry

What I originally forgot when setting this up was configuring routes for the Traffic Manager domain in my Cloud Foundry clusters. Without those routes, Cloud Foundry rejects requests coming in through that domain because it does not know about it.

We need to configure the routes on both ends to make it work. As shown below, we add the Traffic Manager domain to the routes and ensure CF routes traffic from that domain to our multi-cloud sample app:

$trafficMgrDomain="marioszpcfsummithybrid.trafficmanager.net"

#
# First do this for Pivotal
#
cf login -a $pivotalApiEndpoint
cf target -o $pivotalOrg -s $pivotalSpace

cf create-domain $pivotalOrg $trafficMgrDomain
cf create-route $pivotalSpace $trafficMgrDomain
cf map-route multicloudapp $trafficMgrDomain

#
# Then do this for the CF Cluster on Azure
#
$azureCfApiEndpoint="api.$azureCfPublicIp.xip.io"
cf login -a $azureCfApiEndpoint
cf target -o $azureOrg -s $azureSpace

cf create-domain $azureOrg $trafficMgrDomain
cf create-route $azureSpace $trafficMgrDomain
cf map-route multicloudapp $trafficMgrDomain

Now let's give it another try and see what happens. This time we should see our Ruby sample app running and showing that it runs in Pivotal, since we gave the Pivotal-based deployment the highest priority within Azure Traffic Manager.
it works

Fixing Routes on Azure with Traffic Manager

Even after I did the route mapping on Azure, Traffic Manager still claimed that the Azure side of the house was Degraded, despite the route being configured. Initially, I didn't understand why.

I didn't have this problem when I first tried this setup. But back then, I had not assigned a DNS name to the Cloud Foundry public IP in Azure. I changed that in between, because I tried something else, and assigned a DNS name to the Azure public IP of the CF cluster. This led Traffic Manager to route against that DNS name instead of the IP.

To troubleshoot that, I initiated a failover and stopped the app on the Pivotal side (see next section) to make sure Traffic Manager would try to route to Azure. A tracert finally told me what was going on:

C:\code\github\mszcool\cfMultiCloudSample [master ≡]> tracert marioszpcfsummithybrid.trafficmanager.net

Tracing route to marioszpcfsimple.northeurope.cloudapp.azure.com [52.169.87.212]
over a maximum of 30 hops:

  1     5 ms     5 ms     4 ms  10.10.16.4
  2     2 ms     1 ms     1 ms  80.146.218.2
  3     2 ms     1 ms     2 ms  62.156.233.185
  4     5 ms     5 ms     5 ms  87.190.232.17
  5     8 ms     7 ms     7 ms  f-ed1-i.F.DE.NET.DTAG.DE [62.154.14.118]

Looking at the trace, we immediately spot that the Traffic Manager domain gets resolved to the cloudapp.azure.com domain of the Azure public IP. So my route on the CF side of the house was just wrong: the route for Azure should not point to the Traffic Manager domain, but rather to the custom domain assigned to the Cloud Foundry cluster's public IP in Azure:

cf map-route multicloudapp marioszpcfsimple.northeurope.cloudapp.azure.com

C:\code\github\mszcool\cfMultiCloudSample [master ≡]> cf routes
Getting routes for org default_organization / space dev as admin ...

space   host   domain                                            port   path   type   apps            service
dev            52.169.87.212
dev            marioszpcfsimple.northeurope.cloudapp.azure.com                        multicloudapp
dev            marioszpcfsummithybrid.trafficmanager.net                              multicloudapp

Testing a failover

Of course, we want to test whether our failover strategy really works. For this purpose, we stop the app in the Pivotal environment by executing the following commands:

cf login -a $pivotalApiEndpoint
cf target -o $pivotalOrg -s $pivotalSpace
cf stop multicloudapp

After that, we need to wait a while until Traffic Manager detects that the application is not healthy. It might then take a few more seconds or minutes until the DNS record updates have propagated and we see the failover working (the smallest DNS TTL you can set is 300s as of today).
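
While waiting, a simple way to observe the switch from the command line is to poll the Traffic Manager name and look for the REGION value the app echoes. A hedged sketch: the domain is the one from my setup, and the two strings are the values we set via cf set-env earlier.

while true; do
  curl -s http://marioszpcfsummithybrid.trafficmanager.net/ | grep -oE "Pivotal Cloud|Microsoft Azure"
  sleep 60
done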

To watch what is going on, the simplest way is to open the Azure Traffic Manager configuration in the Azure Portal. At some point we should see one of the endpoints change its status from Online to Degraded. When opening a browser and navigating to the Traffic Manager URL, we should now get redirected to the Azure-based deployment (which we can tell because our app outputs the content of the environment variable we set differently for each of the deployments before):

failover test

Final Words

I hope this gives you a nice start in setting up a Multi-Cloud Cloud Foundry environment across Azure and a 3rd-party cloud or your own data center. I will try to continue this conversation on my blog, for sure. There are tons of other cool things to explore with Cloud Foundry in relationship to Azure, and I’ll at least try to cover some of those. Let me know what you think by contacting me through twitter.com/mszcool!

As usual – all the code is available on my GitHub in the following repository:

https://github.com/mszcool/cfMultiCloudSample

AllJoyn IoT Peer-To-Peer Protocol & Framework – Making it Work with Visual Studio 2013

Our global team is running several industry subject matter working groups with different focus areas. One of them is targeted at the Internet of Things (IoT).

The charter of this working group is the exploration of IoT standards, protocols and frameworks across various industries with the goal of developing recommendations, reference architectures and integration points to and on the Microsoft platform as well as feeding IoT-related product groups of Microsoft with input for technologies, services and features we’re building as a company.

Together with peers from this working group including @joshholmes, @tomconte, @mszcool, @reichenseer, @jmspring, @timpark, @irjudson, @ankoduizer, @rachelyehe and daniele-colonna we explored AllJoyn as a technology.

Background on AllJoyn and our activities

I had the chance to work with this working group on exploring AllJoyn as a technology – especially because Microsoft joined the AllSeen alliance (www.allseenalliance.org). AllSeen has the sole target of making AllJoyn a de-facto standard for device-to-device communication in a mash-up-oriented way across platforms and operating systems.

In a nutshell, AllJoyn is a peer-to-peer protocol framework that allows all sorts of device-to-device communication in local networks. It is built on top of D-Bus and TCP/UDP and includes advanced services such as on-boarding of devices onto a network, device/service discovery, communication and notifications.

If you want to learn more about the fundamentals, Rachel has published a great summary on AllJoyn on her blog, which explains the capabilities and services provided by AllJoyn and the architectural principles, in general!

Why is this relevant?

Well, I think AllJoyn has the potential to revolutionize how devices find and talk to each other in a standardized, cross-platform way in local networks and across wide networks (through a gateway). If done right, and assuming that a broad ecosystem of devices adopts AllJoyn, this can lead to seamless detection and usage of nearby devices through other (smart) devices such as phones.

Think about the following scenario: you enter a hotel room as part of a business trip and your phone, regardless of which platform it is running, detects the TV, the coffee machine and the wake-up radio in your room, and you can "configure" and "use" those devices right away through your phone without needing other remote controls or getting up from the bed to start brewing your coffee. Media sharing could also become much easier than it is today across devices from different vendors running different operating systems.

The potential is huge, the ecosystem nevertheless needs to be developed. And since Microsoft joined the alliance around this protocol and services framework I think we could and want to drive some of this development actively. Let’s see what the future brings based on our early efforts here right now;)

Setup of an AllJoyn Dev-Environment with Visual Studio 2013 on Windows

This blog post is solely focused on what you need to do to get the AllJoyn SDK working with Visual Studio 2013. This sounds like a simple thing, but so far the AllJoyn SDK is available for VS 2012 only. Since AllJoyn uses a specific combination of build tools from the OSS world, tuning the setup to work with VS 2013 requires a few further steps, which I'll dig into as part of this post.

Once you have completed your setup you can start developing AllJoyn enabled services for a variety of devices on Windows machines with Visual Studio 2013 including Windows Services, Desktop Applications and backend-services (e.g. ASP.NET) making use of AllJoyn as a technology and protocol framework.

To set up a full development environment that works with Visual Studio 2013 (and 2012 in parallel), follow the steps below. You need to install exactly the tool versions given below, as opposed to those in the official docs from the AllJoyn home page, since these versions of the dependent tools also work with Visual Studio 2013.

  1. Download and extract the AllJoyn Source Code Suite.
    1. Downloading the Windows SDK will give you libraries compiled with VS2012. You will definitely run into issues using them with VS2013, since there were some changes relevant for AllJoyn in the VS2013 C++ compiler.
    2. In my tests I did use the version 14.06 from the SDK.
    3. For details on how to work with the Thin SDK, look at Thomas' blog post, where he writes about how to get the Thin SDK to compile with VS 2013 and use it with Intel Galileo boards running Windows.
    4. Note: for extracting the SDK, I suggest installing a ZIP-tool such as 7-zip which is capable of dealing with *.tar and *.gz archive formats.
  2. Download & install Python 2.7.8.
  3. Install SCONS 2.3.3 (use the Windows Installer Download) or higher (don’t use earlier versions of SCONS).
  4. Install the following tools in addition to Python and SCONS. These are optional, but I installed all of them to make sure I would not run into other distracting issues:
    1. DoxyGen 1.8.8
    2. Basic MikTex 2.9.5105
    3. GraphViz 2.3.8
    4. Uncrustify 0.57 for Win32
  5. Make sure Python, SCONS and the other tools are in your PATH environment variable (a quick check follows right after this list).
  6. Fine-tune some of the source files for the AllJoyn SDK before compilation due to changes made from VS2012 to VS2013 in the C++ Libraries and Compiler.
  7. Compile the AllJoyn SDK using SCONS.
  8. Create your VS2013 C++ project to test your compiled AllJoyn Library.
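
Before kicking off the build, here is a quick sanity check that the tools from the steps above are actually on your PATH (run it from the same command prompt you will later build from):

python --version
scons --version
doxygen --version
uncrustify --version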

For more details on how to set up and verify the development environment in general, also look at Rachel's blog. She will create a post explaining how to install the tools above, make sure the environment variables are set up correctly and verify that the tools are all available in your PATH. This post, in contrast, just explains how to set things up for VS2013 based on the learnings with the official AllJoyn releases available at the time of writing this article.

Update the AllJoyn Source to work with VS2013

As mentioned above, Microsoft made some changes in the Visual Studio 2013 compiler (making it more compliant with certain C/C++ standards and de-facto standards). This resulted in a refactoring of some of the standard template and base libraries the AllJoyn SDK makes use of. Also, some workarounds AllJoyn used for its cross-platform build with SCONS are not needed in VS2013 anymore, so we need to get rid of those.

Fortunately, the changes you have to make are not that many (although they were a bit challenging to track down:)).

  1. For the following steps, replace <alljoynsdkroot> with the root folder in which you have installed the AllJoyn SDK. To ensure we’re talking about the same directory structure, this is what I assume the AllJoyn root directory looks like:
  2. Helpful background info: the core SDK for AllJoyn is built in C++. For other platforms including C, Java, Node etc. the AllJoyn group has built language bindings which are all based on the core C++ libraries. Therefore some headers are available multiple times in the source code control structure for the different language bindings.
  3. The first file we need to touch is one of the platform mapping headers AllJoyn uses in the core SDK. These header files provide macros that cover differences/workarounds for core functionality of the C/C++ compilers on different platforms.
    1. Open the file <alljoynsdkroot>\alljoyn-suite-14.06.00-src\core\alljoyn\common\inc\qcc\windows\mapping.h
    2. Add the following pre-compiler macro at the beginning of the source file:
    3. Comment out the following lines at the end of the source file to avoid duplicate definitions.
  4. The very same mapping file needs to be updated for the C bindings. For this purpose, make exactly the same changes as outlined in step 3 for the mapping.h file in the C language binding folder.
    1. The mapping file to update is called <alljoynsdkroot>\alljoyn-suite-14.06.00-src\core\alljoyn\alljoyn_c\inc\qcc\windows\mapping.h
    2. Perform the same changes as outlined in step 3.
  5. The final change that needs to happen is an update in the SCONs build script files for AllJoyn so that it supports the VS2013 option from SCONS 2.3.3 in addition to the existing VS2010 and VS2012 options.
    1. Open the file <alljoynsdkroot>\alljoyn-suite-14.06.00-src\core\alljoyn\build_core\SConscript
    2. Search for the section in the script that defines the MSVC_VERSION enumeration with allowed values. This will support values up to VC 11.0 incl. VC 11.0Exp for the express edition.
    3. Add an additional value “12.0” to this variable definition as shown below (this assumes you have VS2013 Professional or higher – I didn’t test express but assume 12.0Exp would make it all work with the express edition, as well):

Build with Visual Studio 2013

Now that we have made all changes to the platform headers and SCONS scripts, we can build the libraries for Visual Studio 2013. These can then be used in any VS2013 C/C++ project, enabling you to develop with the latest and greatest (released) development tools from Microsoft.

  1. Open up a Visual Studio 2013 command prompt.
  2. Change to the directory <alljoynsdkroot>\alljoyn-suite-14.06.00-src\core\alljoyn
  3. Execute the following command:
    scons BINDINGS="C,C++" MSVC_VERSION="12.0" OS="win7" CPU=x86_64 WS="off"
    1. Note that you might see some warnings since a few verifications got stricter in VS2013 C/C++.
    2. The command builds the AllJoyn SDK with the following options:
      1. Language bindings for C and C++. For the other bindings I’d suggest to just use the existing SDKs;)
      2. Visual Studio 2013 using the 12.0 version of the Visual C compiler.
      3. Target operating system Windows 7 (which works perfectly on Windows 8 as well – it has no impact on the compiler options or Windows SDK references, since only standard libraries are used; the SCONS scripts just use this to validate other options, e.g. the CPU options available for the platform of choice).
      4. White space fixing of source files turned off (WS="off"). To turn this on, make sure that Uncrustify is set in your PATH appropriately.
    3. Here is a screen shot of how the start of the build should look:
  4. Since the build runs a while, wait until it is finished and check whether any errors occurred. If not, you will find the ready-to-use libraries built with the VS2013 C/C++ compiler under the following folders (a quick way to check them follows after this list):
    1. C++ binding: <alljoynsdkroot>\alljoyn-suite-14.06.00-src\core\alljoyn\build\win7\x86_64\debug\dist\cpp
    2. C binding: <alljoynsdkroot>\alljoyn-suite-14.06.00-src\core\alljoyn\build\win7\x86_64\debug\dist\c
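
To quickly verify that the build produced the libraries and headers the Visual Studio project in the next section will reference, you can list the output folders from the same command prompt. The lib and inc sub-folder names are an assumption based on the default layout of the AllJoyn dist output:

dir build\win7\x86_64\debug\dist\cpp\lib
dir build\win7\x86_64\debug\dist\cpp\inc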

Creating a VS2013 Console project for testing the libraries:

Finally, to verify that things are working, you can create test apps and see if they can join other devices on an AllJoyn bus (refer to Rachel's blog for details on what an AllJoyn bus is). In this project you need to reference the libraries just built.

  1. Create a new Visual C++ project. For our testing purposes I’ve created a Console application (Win32/64).
  2. Update your build configurations as you need them.
    1. E.g. by default the project template will create a Win32 32-bit application.
    2. If you want to have 64-bit, just add the 64-bit configuration as usual for any other C/C++ project.
    3. Note that the subsequent changes in this list need to happen for both, 32-bit and 64-bit build configurations.
  3. In the Project Properties dialog, add the following “VC++ Directories” to your “Include Directories” and “Library Directories” settings so that the compiler finds the dependent header files and compiled libraries. These should now point to the directories used earlier to build AllJoyn with VS2013/VC12.0.
    1. Note: in the screen-shot below I have used a relative path from my solution so that whenever I get the solution to a different machine it will still compile without issues whenever the AllJoyn SDK is put into the same directory on that new machine. I’d suggest doing something similar for your projects, as well, so that re-creating a dev-machine from scratch is predictable and easy-to-do.
    2. Include Directories
    3. Library Directories
  4. If you want to avoid IntelliSense complaining that it cannot find the required headers, also add the include directories you specified under the more general "VC++ Directories" settings to "C/C++ \ General \ Additional Include Directories". These are exactly the same paths as those specified in step 3 for "Include Directories".
  5. Next you need to define a preprocessor symbol that is used by some of the header files from the language bindings to detect the platform and define the appropriate macros and platform types (remember the customizations we made earlier to make things build on VS2013 – these are some of those). This preprocessor symbol is called QCC_OS_GROUP_WINDOWS, as shown below:
  6. Finally you need to tell the VC Linker which libraries to use for the linking process. This ultimately includes some dependencies from AllJoyn itself as well as the built AllJoyn libraries. See below for a list of items to include there:

With all these steps in place, you can start writing some code for AllJoyn. E.g. you can discover devices and services or register yourself as a service in an AllJoyn Bus network – all done with Visual Studio 2013.

For example, the following code attaches itself to an existing bus service in a local network and queries for devices that offer services with a specific service name prefix on this bus:

#include <qcc/platform.h>

#include <assert.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <alljoyn_c/DBusStdDefines.h>
#include <alljoyn_c/BusAttachment.h>
#include <alljoyn_c/BusObject.h>
#include <alljoyn_c/MsgArg.h>
#include <alljoyn_c/InterfaceDescription.h>
#include <alljoyn_c/version.h>
#include <alljoyn_c/Status.h>

#include <alljoyn_c/BusListener.h>
#include <alljoyn_c/Session.h>
#include <alljoyn_c/PasswordManager.h>

#include <qcc/String.h>
#include <qcc/StringUtil.h>
#include <qcc/Debug.h>

#include <vector>

/* Note: the globals (g_msgBus, g_busListener, gJoinSessionMutex, g_interrupt,
   i_joinedSessions, s_sessionNames), the constants (OBJECT_DAEMON_BUSNAME,
   OBJECT_NAME) and the callback functions referenced below are declared
   elsewhere in the sample and omitted here for brevity. */
int main(int argc, char** argv, char** envArg)
{
    QStatus status = ER_OK;
    char* connectArgs = "null:";
    alljoyn_interfacedescription testIntf = NULL;
    /* Create a bus listener */
    alljoyn_buslistener_callbacks callbacks = {
        &buslistener_registered,
        NULL,
        &found_advertised_name,
        NULL,
        &name_owner_changed,
        NULL,
        NULL,
        NULL
    };
    /* Session port variables */
    alljoyn_sessionportlistener_callbacks spl_cbs = {
        accept_session_joiner,
        NULL
    };
    alljoyn_sessionopts opts;

    printf("AllJoyn Library version: %s\n", alljoyn_getversion());
    printf("AllJoyn Library build info: %s\n", alljoyn_getbuildinfo());

    /* Install SIGINT handler */
    signal(SIGINT, SigIntHandler);

    /* Create a password */
    alljoyn_passwordmanager_setcredentials("ALLJOYN_PIN_KEYX", "ABCDEFGH");

    /* Create message bus and start it */
    g_msgBus = alljoyn_busattachment_create(OBJECT_DAEMON_BUSNAME, QCC_TRUE);
    if (ER_OK == status) {
        status = alljoyn_busattachment_start(g_msgBus);
        if (ER_OK != status) {
            printf("alljoyn_busattachment_start failed\n");
        }
        else {
            printf("alljoyn_busattachment started.\n");
        }
    }

    /* Register a bus listener in order to get discovery indications */
    g_busListener = alljoyn_buslistener_create(&callbacks, NULL);
    if (ER_OK == status) {
        alljoyn_busattachment_registerbuslistener(g_msgBus, g_busListener);
        printf("alljoyn_buslistener registered.\n");
    }

    /* Connect to the bus */
    if (ER_OK == status) {
        status = alljoyn_busattachment_connect(g_msgBus, connectArgs);
        if (ER_OK != status) {
            printf("alljoyn_busattachment_connect(\"%s\") failed\n", connectArgs);
        }
        else {
            printf("alljoyn_busattachment connected to \"%s\"\n",
                   alljoyn_busattachment_getconnectspec(g_msgBus));
        }
    }

    /* Create the mutex to avoid multiple parallel executions of found_advertised_name */
    gJoinSessionMutex = CreateMutex(NULL, FALSE, NULL);
    if (gJoinSessionMutex == NULL) {
        printf("Error creating mutex, stopping!");
        return -1000;
    }

    /* Find the LED controller advertised name */
    status = alljoyn_busattachment_findadvertisedname(g_msgBus, OBJECT_NAME);
    if (ER_OK == status) {
        printf("ok alljoyn_busattachment_findadvertisedname %s!\n", OBJECT_NAME);
    }
    else {
        printf("failed alljoyn_busattachment_findadvertisedname %s!\n", OBJECT_NAME);
    }

    /* Get the number of expected services.
       In our test setup we expect 2 services (Arduino and Galileo). */
    int nExpectedServices = 2;

    /* Wait for join session to complete */
    while ((g_interrupt == QCC_FALSE) && (i_joinedSessions < nExpectedServices)) {
        Sleep(10);
    }

    /*
       Devices found, do whatever needs to be done now
       ...
    */
}

The code above uses a callback that is invoked by the AllJoyn core libraries whenever a device is found. More specifically, it looks for two devices we expected to be available in our "lab environment" for testing purposes: one was an Arduino and the other an Intel Galileo with an LED connected. Both were using the AllJoyn Thin Client Library to connect to the bus.

The only relevant callback here is the following one, since it is called by the AllJoyn libraries whenever a new device connected to the same AllJoyn bus is found (we used the other callbacks for other tests):

void found_advertised_name(const void* context,
                           const char* name,
                           alljoyn_transportmask transport,
                           const char* namePrefix)
{
    printf("\nfound_advertised_name(name=%s, prefix=%s)\n", name, namePrefix);
    DWORD dwWaitResult = WaitForSingleObject(gJoinSessionMutex, INFINITE);
    s_sessionNames[i_joinedSessions] = (char*)malloc(sizeof(char) * 1024);
    strcpy_s(s_sessionNames[i_joinedSessions], 1024, name);
    i_joinedSessions++;
    /* Enable concurrent callbacks so joinsession can be called */
    alljoyn_busattachment_enableconcurrentcallbacks(g_msgBus);
    ReleaseMutex(gJoinSessionMutex);
    printf("found advertisements %d\n", i_joinedSessions);
}

Note that the code above is just meant to give you an impression. For full end-to-end scenarios, wait for further blog posts from us and look at the official AllJoyn documentation.

Final Thoughts…

Okay, the journey above is a lot of effort, isn't it? Well, at this point it needs to be said that AllJoyn is still at a very early stage, and therefore some development steps are still a bit hard to get done – especially setting up an environment that works end-to-end with the tool chain of your choice (VS2013 in my case at the time of writing this post).

But I am excited to be involved in this journey. I see various things happening in the industry that are all geared towards some sort of device mash-up. Think of what Microsoft tried to start a few years ago with Live Mesh, of what Apple is doing with their seamless device-to-device interaction, which really works great, and of what Google is attempting with Android "L". All of these efforts are really cool and enable great scenarios – but they are all locked down to one vendor and their specific operating system.

When thinking beyond the ecosystems (Microsoft, Apple, Google) I mentioned above and involving all sorts of devices from our daily lives – (smart) TVs, coffee machines, hi-fi stereo systems, cars and car information systems, anything related to home automation, or even industrial facilities leveraging device mash-ups to solve more complex problems – there MUST be something that AVOIDS vendor lock-in.

AllJoyn can become “this something”. It has the potential. The technology is cool. All it needs now is some sort of dramatic simplification as well as an ecosystem of service interfaces and devices supporting those for all sorts of scenarios across different industries.

We’ll see and experience, how this will work out and where we will end up with this. Microsoft definitely has a huge interest in participating in this journey and you’ll definitely see more around the Microsoft platform and AllJoyn over the course of the upcoming months.

Also look at the Twitter and Blog accounts of my peers since we’re already planning a subsequent hackathon around this topic together with some product groups to dig even deeper into the world of peer-2-peer device mash-ups based on AllJoyn… so that said, expect more to come!!!

Cloud – Windows Azure – Combining PaaS & IaaS to get best of both worlds in your Architecture

Over the past 2 years I have been working with many ISVs (Independent Software Vendors) to get their products and platforms to the public cloud on Windows Azure. In almost all cases the requirements and motivations of those ISVs included one or a combination of the following reasons and/or expectations:

  • Expand beyond the own country, get global / international.
  • Be able to scale faster and easier with less amount of effort.
  • Reduce effort and costs for operations management.

Of course there are many more reasons and motivations why (or why not) an ISV or a company would consider (or not) cloud computing. But these are very common ones.

Looking at those requirements, there is one piece they have in common: the ISVs need to spend less time managing their infrastructure, network configuration and operating systems (e.g. patching) to be successful. With such requirements in mind, I would definitely look into automatically managed service offerings from cloud platforms such as Azure (in other words: Platform-as-a-Service and Software-as-a-Service), because with those requirements you will want as much automatic management and setup as possible to achieve your goals.

But in practice things are often more difficult…

How far the goals above can be achieved requires a detailed look at the initial situation of the ISV and their application. Specifically, the application architecture and the identification of which technologies are used in detail are of major relevance. Not all techniques, technologies and approaches work well in Platform-as-a-Service runtimes such as Windows Azure Web Sites, Mobile Services or Cloud Services (often for a good reason, sometimes because some features are not available yet). Let's look at a typical example architecture we see most often with software vendors nowadays:

As you can see, we have an ASP.NET MVC web front-end, some services performing more complex computational or IO-intensive tasks in the background, a database cluster (for high availability) and a storage system for documents, videos and other binary data. The naive mapping to Azure could work as follows, with pure Platform-as-a-Service and ready-to-use services (such as Azure Storage). That way we would not have to deal with any kind of traditional operations management at all – a truly nice vision and, in my opinion, something that should always be on a long-term roadmap:

Component                          Windows Azure Service
---------------------------------  --------------------------------
ASP.NET MVC Application            Web Sites or Cloud Services
Computational background process   Cloud Services with Worker Roles
SQL Server Cluster                 Azure SQL Database
Storage Cluster                    Azure BLOB Storage

Looks pretty simple, and it would be great if it were always that easy. In practice we need to look at each component to see if it does or makes use of something that is not built for Platform-as-a-Service environments. If there is nothing like that, definitely go for it, because that is where you benefit most from the cloud and Azure. If there are challenges, we need to consider alternatives: either adapt your product/code base or select another option.

And in case of Windows Azure that other alternative to PaaS definitely can be Windows Azure Virtual Machines, which is IaaS (Infrastructure-as-a-Service) on Azure. Let’s look a little bit deeper into the sample architecture above, look at some of the most important questions I typically ask and pick some assumptions for this post.
Component: Storage Cluster
Questions: How well is access to storage encapsulated? Is it spread across all source files, or implemented centrally, e.g. with a repository pattern?
Assumption: Access to the file system is centrally encapsulated in a repository class in the code base, which can easily be exchanged with a BLOB-storage-based implementation.
Conclusion: Leverage BLOB storage as a ready-to-use service from Azure.

Component: ASP.NET MVC Application
Questions: Stateless? Persistent local file storage? Installation of 3rd-party components needed?
Assumption: The app uses 3rd-party components and local file storage, and is stateless (load-balancer ready with a round-robin algorithm).
Conclusion: Web Sites will not work because of the 3rd-party components to be installed, but Cloud Services is a fit since the app is stateless and file storage can be outsourced to Azure BLOB storage.

Component: Computational background process
Questions: Windows or Linux? Asynchronous? Persistent local file storage? Installation of 3rd-party components?
Assumption: The background job runs on Windows, can work asynchronously in the background and needs no 3rd-party components.
Conclusion: Cloud Services worker roles are a perfect match since the processing can happen asynchronously and file storage can easily be replaced by BLOB storage.

Component: SQL Server Cluster
Questions: Which SQL Server features are used? What are the performance requirements?
Assumption: Our SQL Server database uses .NET CLR procedures and encryption functions.
Conclusion: This is the only case where we cannot use the Platform-as-a-Service offering from Azure; we need to fall back to Infrastructure-as-a-Service and run SQL Server in a Virtual Machine.

The final architecture – Mixing Virtual Machines and Cloud Services…

Since we would like to be as effective and efficient as possible, I definitely recommend using Platform-as-a-Service and Software-as-a-Service where possible. Given the sample analysis above, that is the case for all components except SQL Server, which leads to the following architecture in Windows Azure:


Setting-up the infrastructure in Azure (basic steps)…

To set up the architecture above in Windows Azure, you need to follow the steps below in this order. Note that this is just a quick overview; in the next post I'll give you a detailed step-by-step guide based on an example I'll publish on my CodePlex workspace.

  1. Create an affinity group.
    All networks, virtual machines and cloud services you want to combine through a virtual network MUST be placed into the SAME affinity group.
  2. Set up a "Virtual Network" in Windows Azure.
    This gives you a private network with subnets in Azure that allows your Cloud Services and Virtual Machines to interact with each other. The nice thing is that as long as you don't use VPN, this service is free of charge. Also note that the VMs (IaaS only, not PaaS) keep the same IP addresses assigned inside the Virtual Network as long as you don't DELETE the VMs. (A CLI sketch of steps 1–3 follows after this list.)
  3. Create a new Virtual Machine in the network and configure SQL Server.
    After the network is created, create a VM and make sure you add it to the virtual network. After the VM has been created, perform the following steps:

    1. Open up port 1433 in the VM. That enables 1433 communication ONLY INSIDE the Virtual Network. If you also want it available externally, you need to open the port in the endpoint configuration in the Windows Azure management portal.
    2. Configure SQL Server using SQL Authentication (except you also have an AD deployed in a VM in Azure, then you can also use Windows Authentication).
    3. Import your database, create a login with SQL Authentication and make sure to provide it access to the database.
    4. Finally, open up a command prompt, type ipconfig and write down the IP address. Note that the address remains constant as long as you don't delete the VM. Please DO NOT assign a static address inside the guest OS, since this is not supported in Azure VMs!!
  4. Create & deploy a Cloud Service package for your web site.
    Finally, for your ASP.NET web application (mentioned in the sample above), create a cloud service package and add the network configuration to your "ServiceConfiguration.Cloud.cscfg" XML configuration file. Before publishing, make sure that your database connection string points to the IP address you noted for your VM in step 3.
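
For those who prefer scripting over the management portal, the skeleton of steps 1–3 looked roughly like the following with the cross-platform Azure CLI in service-management mode. Treat this purely as a hedged sketch: all names, the image and the credentials are placeholders, and the exact commands and option names depend on the CLI version you are using.

azure account affinity-group create myaffinitygroup --location "West Europe"
azure network vnet create myvnet --affinity-group myaffinitygroup
azure vm create mysqlvm <sql-server-image-name> <admin-user> <admin-password> \
  --affinity-group myaffinitygroup --virtual-network-name myvnet
azure vm endpoint create mysqlvm 1433 1433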

Final Words and more scenarios!!

Windows Azure supports "mixed deployments" that include Virtual Machines (IaaS), Cloud Services (PaaS) as well as other platform services (e.g. storage, media services etc.). That enables you to get the best of both worlds: the full efficiency, automatic scale and automatic management of PaaS where possible, while gaining full control through VMs where needed.

Typical scenarios enabled by combining Virtual Machines and Cloud Services on Azure – running most of your workloads in automatically managed Platform-as-a-Service while running other pieces on VMs where you need full control – include:

  • Combining your app with Linux-based workloads, since Linux runs in Azure Virtual Machines.
  • Special SQL Server requirements that lead to situations where you cannot leverage Azure SQL Database.
  • You need to run legacy components in your app that just don’t work inside of PaaS runtimes such as Cloud Services, Web Sites & Co.

With such principles and thoughts you can definitely move to the public cloud and Windows Azure much faster when you need to! You don't need to re-write your whole app: use VMs where applicable and move to PaaS where you think you can benefit most from it!!