AllJoyn IoT Peer-To-Peer Protocol & Framework – Making it Work with Visual Studio 2013

Our global team is running several industry subject matter working groups with different focus areas. One of them is targeted at the Internet of Things (IoT).

The charter of this working group is the exploration of IoT standards, protocols and frameworks across various industries, with the goal of developing recommendations, reference architectures and integration points to and on the Microsoft platform, as well as feeding Microsoft's IoT-related product groups with input for technologies, services and features we're building as a company.

Together with peers from this working group including @joshholmes, @tomconte, @mszcool, @reichenseer, @jmspring, @timpark, @irjudson, @ankoduizer, @rachelyehe and daniele-colonna we explored AllJoyn as a technology.

Background on AllJoyn and our activities

I had the chance to work with this working group on exploring AllJoyn as a technology – especially because Microsoft joined the AllSeen Alliance. AllSeen has the sole target of making AllJoyn a de-facto standard for device-2-device communications in a mash-up-oriented way across platforms and operating systems.

In a nutshell, AllJoyn is a peer-2-peer protocol framework that allows all sorts of device-2-device communication in local networks. It is built on top of DBUS and TCP/UDP and includes advanced services such as on-boarding of devices to a network, device/service discovery, communication and notifications.

If you want to learn more about the fundamentals, Rachel has published a great summary on AllJoyn on her blog, which explains the capabilities and services provided by AllJoyn and the architectural principles, in general!

Why is this relevant?

Well, I think AllJoyn has the potential to revolutionize how devices find and talk to each other in a standardized, cross-platform way in local networks and across wide networks (through a gateway). If done right, and assuming that a broad ecosystem of devices adopts AllJoyn, this can lead to seamless detection and usage of nearby devices through other (smart) devices such as phones.

Think about the following scenario: you enter a hotel room on a business trip and your phone, regardless of which platform it runs, detects the TV, the coffee machine and the wake-up radio in your room, and you can "configure" and "use" those devices right away through your phone – without needing other remote controls or getting up from the bed to start brewing your coffee. Media sharing could also become much easier than it is today across devices from different vendors running different operating systems.

The potential is huge; the ecosystem nevertheless still needs to be developed. And since Microsoft joined the alliance around this protocol and services framework, I think we can and want to drive some of this development actively. Let's see what the future brings based on our early efforts here;)

Setup of an AllJoyn Dev-Environment with Visual Studio 2013 on Windows

This blog post is solely focused on what you need to do to get the AllJoyn SDK working with Visual Studio 2013. This sounds like a simple thing, but so far the AllJoyn SDK is only available for VS 2012. Since AllJoyn uses a specific combination of build tools from the OSS world, tuning the setup to work with VS 2013 requires a few further steps, which I'll dig into as part of this blog post.

Once you have completed your setup, you can start developing AllJoyn-enabled services for a variety of devices on Windows machines with Visual Studio 2013, including Windows services, desktop applications and backend services (e.g. ASP.NET) that make use of AllJoyn as a technology and protocol framework.

To set up a full development environment that works with Visual Studio 2013 (and 2012 in parallel), follow the steps below. You need to install exactly the versions of the tools listed here, as opposed to those in the official docs from the AllJoyn home page, since these versions of the dependent tools also work with Visual Studio 2013.

  1. Download and extract the AllJoyn Source Code Suite.
    1. Downloading the Windows SDK will give you libraries compiled with VS2012. You will definitely run into issues using them with VS2013, since there were some changes relevant for AllJoyn in the VS2013 C++ compiler.
    2. In my tests I used version 14.06 of the SDK.
    3. For details on how-to work with the Thin SDK, look at Thomas' blog post, which describes how-to get the Thin SDK to compile with VS 2013 and use it with Intel Galileo boards running Windows.
    4. Note: for extracting the SDK, I suggest installing a ZIP-tool such as 7-zip which is capable of dealing with *.tar and *.gz archive formats.
  2. Download & install Python 2.7.8.
  3. Install SCONS 2.3.3 (use the Windows Installer Download) or higher (don’t use earlier versions of SCONS).
  4. Install the following tools in addition to Python and SCONS. These are optional, but I've installed all of them to make sure I don't run into other distracting issues:
    1. DoxyGen 1.8.8
    2. Basic MikTex 2.9.5105
    3. GraphViz 2.3.8
    4. Uncrustify 0.57 for Win32
  5. Make sure Python, SCONS and the other tools are in your PATH environment variable.
  6. Fine-tune some of the source files for the AllJoyn SDK before compilation due to changes made from VS2012 to VS2013 in the C++ Libraries and Compiler.
  7. Compile the AllJoyn SDK using SCONS.
  8. Create your VS2013 C++ project to test your compiled AllJoyn Library.
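Once the tools from the steps above are installed, a quick sanity check from a freshly opened command prompt confirms that step 5 worked; each tool should print its version banner if it is on the PATH (commands shown as a sketch for a Windows command prompt):

```
:: Each of these should print a version banner if the PATH is set correctly.
python --version
scons --version
doxygen --version
uncrustify --version
```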

For more details on how-to set up and verify the development environment in general, also look at Rachel's blog. She will create a post explaining how-to install the tools above, make sure the environment variables are set up correctly and verify that the tools are all available in your PATH. This post nevertheless just explains the setup for VS2013, based on the learnings with the official releases from AllJoyn available at the time of writing this article.

Update the AllJoyn Source to work with VS2013

As mentioned above, Microsoft made some changes in the Visual Studio 2013 compiler (making it more compliant with certain C/C++ standards and de-facto standards). This resulted in a refactoring of some of the standard template and base libraries the AllJoyn SDK makes use of. Also, some workarounds AllJoyn used for its cross-platform build with SCONS are not needed in VS2013 anymore, so we need to get rid of those.

Fortunately the changes you have to make are not that many (although they were a bit challenging to track down:)).

  1. For the following steps, replace <alljoynsdkroot> with the root folder in which you have installed the AllJoyn SDK. To ensure we’re talking about the same directory structure, this is what I assume the AllJoyn root directory looks like:
  2. Helpful background info: the core SDK for AllJoyn is built in C++. For other platforms including C, Java, Node etc. the AllJoyn group has built language bindings which are all based on the core C++ libraries. Therefore some headers are available multiple times in the source code control structure for the different language bindings.
  3. The first file we need to touch is one of the platform mapping headers AllJoyn uses in the core SDK. These header files provide some macros that cover differences/workarounds for core functionality of C/C++ compilers on different platforms.
    1. Open the file <alljoynsdkroot>\alljoyn-suite-14.06.00-src\core\alljoyn\common\inc\qcc\windows\mapping.h
    2. Add the following pre-compiler macro at the beginning of the source file:
    3. Comment out the following lines at the end of the source file to avoid duplicate definitions.
  4. The very same mappings file needs to be updated for the C bindings. For this purpose, make exactly the same changes as outlined in step 3 for the mapping.h file in the C language binding folder.
    1. The mapping file to update is called <alljoynsdkroot>\alljoyn-suite-14.06.00-src\core\alljoyn\alljoyn_c\inc\qcc\windows\mapping.h
    2. Perform the same changes as outlined in step 3.
  5. The final change is an update to the SCONS build script files for AllJoyn so that they support the VS2013 option from SCONS 2.3.3 in addition to the existing VS2010 and VS2012 options.
    1. Open the file <alljoynsdkroot>\alljoyn-suite-14.06.00-src\core\alljoyn\build_core\SConscript
    2. Search for the section in the script that defines the MSVC_VERSION enumeration with its allowed values. Out of the box it supports values up to VC 11.0, incl. VC 11.0Exp for the Express edition.
    3. Add an additional value "12.0" to this variable definition (this assumes you have VS2013 Professional or higher – I didn't test Express but assume "12.0Exp" would make it all work with the Express edition as well).
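To illustrate the kind of edits steps 3 and 4 refer to: the mappings clash with functions the VS2013 C runtime now provides itself, so the typical pattern is to guard them by compiler version. The following is a sketch only – strtoll/strtoull are examples, not the complete list; the authoritative lines are the ones your compiler flags as redefinitions:

```cpp
/* mapping.h (sketch): only provide these mappings for compilers older than
   VS2013 (_MSC_VER 1800) - the VS2013 C runtime already ships them. */
#if (_MSC_VER < 1800)
#define strtoll  _strtoi64
#define strtoull _strtoui64
#endif
```

The SConscript change from step 5 is plain Python (SCons build scripts are Python code) and boils down to adding "12.0" to the allowed values of the MSVC_VERSION enumeration. Variable names and defaults below are assumptions based on typical SCons usage, not a verbatim copy of the AllJoyn script:

```python
# SConscript (sketch): accept the VS2013 tool chain ('12.0') in addition
# to the versions the script already knows.
vars.Add(EnumVariable('MSVC_VERSION', 'MSVC compiler version - Windows',
                      os.environ.get('MSVC_VERSION'),
                      allowed_values=('8.0', '9.0', '10.0', '11.0', '11.0Exp', '12.0')))
```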

Build with Visual Studio 2013

Now that we have made all the changes to platform headers and SCONS scripts, we can build the libraries for Visual Studio 2013. These can then be used in any VS2013 C/C++ project, enabling you to develop with the latest and greatest (released) development tools from Microsoft.

  1. Open up a Visual Studio 2013 command prompt.
  2. Change to the directory <alljoynsdkroot>\alljoyn-suite-14.06.00-src\core\alljoyn
  3. Execute the following command:
    scons BINDINGS="C,C++" MSVC_VERSION="12.0" OS="win7" CPU=x86_64 WS="off"
    1. Note that you might see some warnings since a few verifications got stricter in VS2013 C/C++.
    2. The command builds the AllJoyn SDK with the following options:
      1. Language bindings for C and C++. For the other bindings I'd suggest just using the existing SDKs;)
      2. Visual Studio 2013 using the 12.0 version of the Visual C compiler.
      3. Target operating system Windows 7 (which works perfectly on Windows 8 as well – it does not have any impact on the compiler options or Windows SDK references, since only standard libraries are used; the SCONS scripts just use this to validate other options, e.g. the CPU options available for the platform of choice).
      4. White space fixing of source files turned off (WS=”off”). To turn this on, make sure that Uncrustify is set in your path appropriately.
    3. Here is a screen-shot of how the start of the build should look:
  4. Since the build runs a while, wait until it is finished and check whether any errors occurred. If not, you will find the ready-to-use libraries built with the VS2013 C/C++ compiler in the following folders:
    1. C++: <alljoynsdkroot>\alljoyn-suite-14.06.00-src\core\alljoyn\build\win7\x86_64\debug\dist\cpp
    2. C: <alljoynsdkroot>\alljoyn-suite-14.06.00-src\core\alljoyn\build\win7\x86_64\debug\dist\c

Creating a VS2013 Console project for testing the libraries:

Finally, to verify that things are working, you can create test apps and see if they can join other devices on an AllJoyn bus (refer to Rachel's blog for details on what an AllJoyn bus is). In this project you need to reference the libraries just built.

  1. Create a new Visual C++ project. For our testing purposes I’ve created a Console application (Win32/64).
  2. Update your build configurations as you need them.
    1. E.g. by default the project template will create a Win32 32-bit application.
    2. If you want to have 64-bit, just add the 64-bit configuration as usual for any other C/C++ project.
    3. Note that the subsequent changes in this list need to happen for both the 32-bit and the 64-bit build configurations.
  3. In the Project Properties dialog, add the following “VC++ Directories” to your “Include Directories” and “Library Directories” settings so that the compiler finds the dependent header files and compiled libraries. These should now point to the directories used earlier to build AllJoyn with VS2013/VC12.0.
    1. Note: in the screen-shot below I have used a relative path from my solution so that whenever I get the solution to a different machine it will still compile without issues whenever the AllJoyn SDK is put into the same directory on that new machine. I’d suggest doing something similar for your projects, as well, so that re-creating a dev-machine from scratch is predictable and easy-to-do.
    2. Include Directories
    3. Library Directories
  4. If you want to avoid IntelliSense complaining that it cannot find required headers, also add the directories from the "Include Directories" setting of the more general "VC++ Directories" option to "C/C++ \ General \ Additional Include Directories". These are exactly the same directories as those specified in step 3 for "Include Directories".
  5. Next you need to define a pre-processor symbol that is used by some of the header files from the language bindings to detect the platform and define different macros and platform types (remember the customizations we made earlier to make things build with VS2013 – this is one of those). This pre-processor symbol is called QCC_OS_GROUP_WINDOWS:
  6. Finally you need to tell the VC linker which libraries to use for the linking process. This ultimately includes some Windows system dependencies as well as the AllJoyn libraries built earlier.
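As a rough orientation for the linker input list: the exact set depends on the SDK version and the bindings you built, but for the C binding it usually comes down to the AllJoyn libraries produced by the SCONS build plus a handful of Windows system libraries the stack depends on. Treat the following as a hedged sketch, not an authoritative list:

```
alljoyn.lib
alljoyn_c.lib
ws2_32.lib
secur32.lib
crypt32.lib
bcrypt.lib
ncrypt.lib
iphlpapi.lib
```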

With all these steps in place, you can start writing some code for AllJoyn. E.g. you can discover devices and services or register yourself as a service in an AllJoyn Bus network – all done with Visual Studio 2013.

For example the following code attaches itself to an existing bus service in a local network and queries for devices that do offer services with a specific service prefix name on this bus:

#include <qcc/platform.h>

#include <assert.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <alljoyn_c/DBusStdDefines.h>
#include <alljoyn_c/BusAttachment.h>
#include <alljoyn_c/BusObject.h>
#include <alljoyn_c/MsgArg.h>
#include <alljoyn_c/InterfaceDescription.h>
#include <alljoyn_c/version.h>
#include <alljoyn_c/Status.h>

#include <alljoyn_c/BusListener.h>
#include <alljoyn_c/Session.h>
#include <alljoyn_c/PasswordManager.h>

#include <qcc/String.h>
#include <qcc/StringUtil.h>
#include <qcc/Debug.h>


int main(int argc, char** argv, char** envArg)
{
    QStatus status = ER_OK;
    char* connectArgs = "null:";
    alljoyn_interfacedescription testIntf = NULL;

    /* Create a bus listener */
    alljoyn_buslistener_callbacks callbacks = {
        /* handler assignments elided - found_advertised_name (shown further
           below) is the relevant one for this sample */
    };

    /* Session port variables */
    alljoyn_sessionportlistener_callbacks spl_cbs = {
        /* handler assignments elided */
    };
    alljoyn_sessionopts opts;

    printf("AllJoyn Library version: %s\n", alljoyn_getversion());
    printf("AllJoyn Library build info: %s\n", alljoyn_getbuildinfo());

    /* Install SIGINT handler */
    signal(SIGINT, SigIntHandler);

    /* Create a password */
    alljoyn_passwordmanager_setcredentials("ALLJOYN_PIN_KEYX", "ABCDEFGH");

    /* Create message bus and start it */
    g_msgBus = alljoyn_busattachment_create(OBJECT_DAEMON_BUSNAME, QCC_TRUE);
    if (ER_OK == status) {
        status = alljoyn_busattachment_start(g_msgBus);
        if (ER_OK != status) {
            printf("alljoyn_busattachment_start failed\n");
        } else {
            printf("alljoyn_busattachment started.\n");
        }
    }

    /* Register a bus listener in order to get discovery indications */
    g_busListener = alljoyn_buslistener_create(&callbacks, NULL);
    if (ER_OK == status) {
        alljoyn_busattachment_registerbuslistener(g_msgBus, g_busListener);
        printf("alljoyn_buslistener registered.\n");
    }

    /* Connect to the bus */
    if (ER_OK == status) {
        status = alljoyn_busattachment_connect(g_msgBus, connectArgs);
        if (ER_OK != status) {
            printf("alljoyn_busattachment_connect (\"%s\") failed\n", connectArgs);
        } else {
            printf("alljoyn_busattachment connected to \"%s\"\n",
                   alljoyn_busattachment_getconnectspec(g_msgBus));
        }
    }

    /* Create the mutex to avoid multiple parallel executions of
       found_advertised_name */
    gJoinSessionMutex = CreateMutex(NULL, FALSE, NULL);
    if (gJoinSessionMutex == NULL) {
        printf("Error creating mutex, stopping!");
        return -1000;
    }

    /* Find the LED controller advertised name */
    status = alljoyn_busattachment_findadvertisedname(g_msgBus, OBJECT_NAME);
    if (ER_OK == status) {
        printf("ok alljoyn_busattachment_findadvertisedname %s!\n", OBJECT_NAME);
    } else {
        printf("failed alljoyn_busattachment_findadvertisedname %s!\n", OBJECT_NAME);
    }

    /* Get the number of expected services.
       In our test setup we expect 2 services (Arduino and Galileo) */
    int nExpectedServices = 2;

    /* Wait for join session to complete */
    while ((g_interrupt == QCC_FALSE) && (i_joinedSessions < nExpectedServices)) {
        Sleep(100);
    }

    /* Devices found, do whatever needs to be done now */

The code above uses a callback that is invoked by the AllJoyn core libraries whenever a device is found. More specifically, it looks for two devices we expected to be available in our "lab environment" for testing purposes. One was an Arduino and the other an Intel Galileo with an LED connected. Both were using the AllJoyn Thin Client Library to connect to the bus.

The only relevant callback was the following one, since it is called by the AllJoyn libraries when a new device connected to the same AllJoyn bus is found (we've used the other callbacks for other tests):

void found_advertised_name(const void* context,
                           const char* name,
                           alljoyn_transportmask transport,
                           const char* namePrefix)
{
    printf("\nfound_advertised_name(name=%s, prefix=%s)\n", name, namePrefix);

    /* Serialize concurrent invocations of this callback */
    DWORD dwWaitResult = WaitForSingleObject(gJoinSessionMutex, INFINITE);

    /* Remember the advertised name of the found device */
    s_sessionNames[i_joinedSessions] = (char*)malloc(sizeof(char) * 1024);
    strcpy_s(s_sessionNames[i_joinedSessions], 1024, name);

    /* enable concurrent callbacks so joinsession can be called */

    printf("found advertisements %d\n", i_joinedSessions);

    /* Balance the wait above */
    ReleaseMutex(gJoinSessionMutex);
}

Note that the code above is just meant to give you some sort of impression. For full end-2-end scenarios, wait for further blog posts from us and look at the official AllJoyn documentation.

Final Thoughts…

Okay, the journey above is a lot of effort, isn't it? Well, at this point it needs to be said that AllJoyn is still at a very early stage, and therefore some development steps are still a bit hard to get done – especially setting up an environment that works end-2-end with the tool chain of your choice (VS2013 in my case at the time of writing this post).

But I am excited to be involved in this journey. I see various things happening in the industry that are all geared towards some sort of device mash-up. Think of what Microsoft tried to start a few years ago with Live Mesh. Think of what Apple is doing with their seamless device-2-device interaction, which really works great. And consider what Google is attempting to do with Android "L". All of these efforts are really cool and enable great scenarios – but they're all locked down to one vendor and their specific operating system.

When thinking beyond the ecosystems (Microsoft, Apple, Google) mentioned above and involving all sorts of devices of our daily life such as (smart) TVs, coffee machines, hi-fi stereo systems, cars and car information systems, anything related to home automation, or even industrial facilities leveraging device mash-ups to solve more complex problems, there MUST be something that AVOIDS vendor lock-in.

AllJoyn can become “this something”. It has the potential. The technology is cool. All it needs now is some sort of dramatic simplification as well as an ecosystem of service interfaces and devices supporting those for all sorts of scenarios across different industries.

We'll see and experience how this works out and where we end up with it. Microsoft definitely has a huge interest in participating in this journey, and you'll definitely see more around the Microsoft platform and AllJoyn over the course of the upcoming months.

Also keep an eye on the Twitter and blog accounts of my peers, since we're already planning a subsequent hackathon around this topic together with some product groups to dig even deeper into the world of peer-2-peer device mash-ups based on AllJoyn… so, that said, expect more to come!

GeRes2 – An OSS Framework for Reliable Execution of Background Jobs on Azure

A few weeks ago we published an open source framework that covers a scenario required by many partners I've been working with on Azure over the course of the past 4 years: asynchronous execution of jobs on compute instances in the background – or simply batch processing. Now I finally found the time (on a plane) to blog a bit about it and point you to the framework.

Being one of the product owners of this OSS framework, I am very proud of what we achieved. Note that something way more sophisticated and complete may come as a platform service in Microsoft Azure at some point in the future. Until then, our OSS framework fills this "batch processing" requirement in a more simplified way.

Generic Resource Scheduler v2.0 (GeRes2) for Microsoft Azure

In a nutshell, GeRes2 is a framework for scalable, reliable and asynchronous execution of a large number of jobs on many worker machines in Windows Azure. It is built with Azure Cloud Services, provides a Web API for submitting jobs and querying job data, and executes jobs in parallel across multiple worker roles.

As such, GeRes2 implements a ready-to-use framework which you can use in your own solutions. It is all available as open source and you can use the code or the deployment packages to get GeRes2 running in your own subscriptions.

GeRes2 essentially implements an all-so well-known pattern from the Azure-world as shown in the sketch below:

Classic scenarios partners are implementing through a pattern like this are:

– Generating documents in the background (e.g. massive amount of documents)
– Image or Video post processing (although for Video you should consider WAMS)
– Complex, distributed calculations
– Asynchronous provisioning jobs (e.g. when provisioning new tenants for your system)
– anything that needs to run multiple jobs in parallel reliably and asynchronously…

Of course this outlines just the generic pattern and it looks fairly simple. But when digging into details there are a whole lot of challenges that need to be solved when implementing such an architecture such as:

– Execution of long-running and short-running jobs across multiple instances
– Status tracking & status-querying of jobs
– Notifications (status & progress) to consumers and clients
– Prioritization of jobs
– Reliability features (e.g. retry logic, dead-letter handling etc.)
– AutoScaling that is aware of which instances are not executing long-running jobs

Exactly these are the features GeRes2 implements for you – so you can take care of just the implementation and scheduling of your jobs. For the time being, GeRes2 fills a gap that might later be filled by a native Azure platform service (as of today no details are available).

Side Note: we built GeRes2 because we have partners who needed the functionality outlined above right away. Note that some features that may seem obvious to you might be missing, since the partners we've worked with did not immediately require them.

Side Note: if you have an HPC background, you might think of a separation between jobs and tasks (jobs as compositions of more fine-granular tasks). In GeRes2 we don't make that separation, so for GeRes2 jobs are equal to tasks and you can use the two terms interchangeably.

Backend Work: Implementing a Job for GeRes2

Creating jobs is fairly easy (we focused on .NET developers first): you need to create a library implementing an interface that contains the logic of your job. This IJobImplementation interface is part of the server SDK from GeRes2.

public class FinancialYearEnd : IJobImplementation
{
    // make the field volatile as it may be set by more than one thread
    private volatile bool _cancellationToken = false;

    // default constructor (required by MEF)
    public FinancialYearEnd()
    {
    }

    public JobProcessResult DoWork(Job job,
        string jobPackagePath, string jobWorkingPath,
        Action<string> progressCallback)
    {
        // Your job logic goes here
    }

    public string JobType
    {
        get { return "FinancialYearEnd"; }
    }

    public void CancelProcessCallback()
    {
        _cancellationToken = true;
    }
}

Every job gets executed on a single worker role instance reserved by GeRes2 for job execution. The number of jobs executed in parallel is defined by the number of workers you reserve for job execution. On each worker, every job gets its own, isolated directory where the job files (executables, anything required by a job) are temporarily stored for execution, as well as a temporary folder available for local processing.
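To make the DoWork contract a bit more tangible, here is a hedged sketch of what a job body could do with the two paths and the progress callback. The file names are illustrative, and how a JobProcessResult is constructed depends on the GeRes2 server SDK – check the project's docs for the real members:

```csharp
// Sketch of a possible DoWork body (assumptions marked in comments).
public JobProcessResult DoWork(Job job,
    string jobPackagePath, string jobWorkingPath,
    Action<string> progressCallback)
{
    // jobPackagePath: folder with the extracted job package (input files)
    // jobWorkingPath: per-job scratch folder for temporary output
    var inputFile = System.IO.Path.Combine(jobPackagePath, "input.csv");   // illustrative
    var tempFile = System.IO.Path.Combine(jobWorkingPath, "result.csv");   // illustrative

    progressCallback("processing started");
    // ... do the actual work here, checking _cancellationToken regularly
    //     so that CancelProcessCallback() can interrupt long-running loops ...
    progressCallback("processing finished");

    // build and return the JobProcessResult expected by the server SDK
    return new JobProcessResult();
}
```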

Job files are deployed automatically by GeRes2 from Azure blob storage to the worker VMs on job execution. So all you need to do is implement your job, package it in a ZIP archive and push it into a blob storage container called "jobprocessorfiles" (in a sub-folder matching your TenantName). For more details, look at the documentation wiki of the GeRes2 OSS project workspace.
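Assuming the Azure storage client library for .NET of that time (WindowsAzure.Storage), pushing a packaged job into the expected container could look like the following sketch; the connection string, tenant name and file names are placeholders:

```csharp
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Sketch: upload a job package so GeRes2 can deploy it to the workers.
var account = CloudStorageAccount.Parse("<your-storage-connection-string>");
CloudBlobClient blobClient = account.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("jobprocessorfiles");
container.CreateIfNotExists();

// Blob name = "<TenantName>/<package>.zip" so GeRes2 finds it per tenant.
CloudBlockBlob packageBlob = container.GetBlockBlobReference("YourTenant/FinancialYearEnd.zip");
packageBlob.UploadFromFile(@".\FinancialYearEnd.zip", FileMode.Open);
```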

Front-End Work: Submission of Jobs for Execution and Notifications

When GeRes2 is deployed in your subscription, it provides a Web API secured by Azure Active Directory. More details are available on the GeRes2 documentation wiki. We also provide a client/consumer SDK for applications and services that want to use GeRes2 for submitting jobs.

With that SDK the submission of a Job is as simple as shown in the following lines of code:

var geresServiceClient = new GeresAzureAdServiceClient( /* base URL and Azure AD parameters */ );

geresServiceClient.Authenticate(clientId, clientSecret).Wait();

var newJob = new Job
{
    JobType = "FinancialYearEnd",
    JobName = "Job Display Name",
    TenantName = "YourTenant",
    JobProcessorPackageName = "",
    Parameters = "any parameters you need to pass in"
};

string jobId = geresServiceClient.Management.SubmitJob(newJob, batchId).Result;

This leads to the execution of a job in the GeRes2 system on a worker. The status of a job can be tracked in two ways:

– Through “near-real-time” notifications via SignalR and/or Service Bus.
– By polling the monitoring API of GeRes2.

Setting up notifications with SignalR can happen directly through the client SDK, as shown in the following code snippet:

// Setup notification handlers
geresServiceClient.Notifications.JobStarted += (sender, args) =>
{
    // ...
};
geresServiceClient.Notifications.JobProgressed += (sender, args) =>
{
    // ...
};
geresServiceClient.Notifications.JobCompleted += (sender, args) =>
{
    // ...
};

// Connect to the SignalR Hub

// Schedule jobs and subscribe to job notifications
string jobId = geresServiceClient.Management.SubmitJob(newJob, batchId).Result;

As mentioned, there's also the option to query the status of jobs manually. The status of jobs is kept in an Azure table. Of course you can query that table directly, since you are in full control of your GeRes2 deployment. But we've created a Web API and a corresponding client API that should make it a bit easier for you, as shown below:

var batchesTask = geresServiceClient.Monitoring.GetBatches();

geresServiceClient.Monitoring.PageSize = pageSize;
var jobsList = geresServiceClient.Monitoring.GetJobsFromBatch(batchIdToGetJobs);

var jobDetails = geresServiceClient.Monitoring.GetJobDetails(j.JobId);

Final remarks:

We've created GeRes2 based on concrete partner projects we've been involved in across the world. They all started implementing the pattern outlined above manually, and it always turned out to be a whole lot of work.

GeRes2 implements a simple set of batch-processing features we've seen a lot with partners across the world. Of course it is not a full-blown HPC or batch-processing/job-scheduling platform; it covers simple yet often-required scenarios. Nevertheless, I can confirm that we have partners using GeRes2 with more than 100 job execution instances, and that we have tested it way above 100 instances (to be precise: with 800 instances; note, though, that the built-in AutoScaler works up to 100 instances only – but you are not required to use it, either).

If you're interested, go to our project workspace, download and play with the source code, dig into our docs and share your feedback. The project workspace is also where you can look for further details.

Have much fun and thanks for your interest!

Java & Windows Azure – Using vFabric TC Server with the Azure Plug-In for Eclipse

Quick summary: if you plan to use vFabric TC Server as a container for your Java applications in Windows Azure PaaS Cloud Services worker roles with the help of the Azure Plug-In for Eclipse, you need to customize the Azure Plug-In for Eclipse and add a component configuration. This blog post shows how you can do it and also gives some background information and context on the scenario.

Background Information / Context

This week I've been working with some larger software vendors in Tel Aviv, Israel, on moving their products & services to Windows Azure. One of the partners I spent most of my time with while being here uses a pretty large-scale Java-based solution with some NoSQL and BigData solutions in the backend.

Their case is a perfect example of a mixed cloud app involving PaaS Cloud Services that are automatically managed as well as virtual machines running Linux with the NoSQL and BigData solutions they're using. Therefore, as part of our efforts, we decided to try to move their Java stack onto Web/Worker Role Cloud Services in Azure while running the rest in virtual machines. That way, the application and services tier of their architecture is automatically managed by Windows Azure. At the same time, with the help of our Java/Eclipse Azure plug-in, we can solve some of their core challenges in Azure, especially SSL offloading. SSL offloading, by the way, is in the meantime a default option when using the Azure Plug-In for Eclipse.

Concretely, that partner is using vFabric TC Server, which is a special and extended version of Apache Tomcat. If you want to try it out yourself, it's the app server that ships by default with the Spring Tool Suite (STS) for debugging and development purposes. For production purposes, as far as I understood, it requires a commercial license.

The challenge, nevertheless, is that vFabric TC Server is not one of the default options in the Azure Plug-In for Eclipse we as Microsoft have published. And of course vFabric TC Server works differently from Tomcat, which ultimately means that the deployment and startup procedures for the worker role deployments created with the Azure Plug-In for Eclipse need to be customized.

Enable vFabric TC Server in the Azure Plug-In for Eclipse

First of all, you should get familiar with the Azure Plug-In for Eclipse if you're not already. Install the plug-in and make yourself familiar with it through the docs on MSDN.

After installing, you can create deployment projects in Eclipse with that plug-in to get your Java application running in Windows Azure Platform-as-a-Service Cloud Services. Such a project essentially packages your JDK of choice, your container/app server of choice and your application packages together. It also generates deployment procedures to enable automatic deployment, initialization and running of your app server and applications in worker roles whenever instances are added or recycled in your deployments. That way, your environment can be managed automatically by Windows Azure without you having to intervene on temporary failures, update cycles and the like.

All of this is done through a variety of configurations filled through several dialogs of the plug-in. The most important dialogs are those where you configure the server-configs as well as environment variables as shown below:

Unfortunately, vFabric TC Server is not in the list of server types the plug-in knows. Fortunately, the mechanism is extensible. So to get vFabric TC Server integrated with the Azure Plug-In for Eclipse, just follow these steps:

#1: Add a new component configuration for the plug-ins server-types

For this purpose, in the server-configuration dialog shown in the previous screen shot, next to the "Type" combo box where you select the application server of choice (e.g. Tomcat), select "Customize…". This opens a plug-in-wide configuration file into which you need to enter the following additional component-configuration section. Make sure you do this BEFORE you create your first project that uses vFabric TC Server so that the build files get generated appropriately:

<componentset name="vFabric TC Server 2.9.3" type="server" detectpath="lib\tc-runtime-instance-2.9.3.RELEASE.jar">
    <startupenv name="TC_INSTANCE_NAME" value="yourinstancename" />
    <startupenv name="CATALINA_HOME" value="${placeholder}\%TC_INSTANCE_NAME%" type="server.home"/>
    <startupenv name="SERVER_APPS_LOCATION" value="%CATALINA_HOME%\webapps" type=""/>
    <component importsrc="${placeholder}" importmethod="copy" type="server.deploy" deploydir="%DEPLOYROOT%" deploymethod="copy"/>
    <component deploydir="%SERVER_APPS_LOCATION%" deploymethod="copy" importsrc="${placeholder}" importmethod="${placeholder}" importas="${placeholder}" type=""/>
    <component deploymethod="exec" deploydir="%CATALINA_HOME%\bin" importas="tcruntime-ctl.bat run" type="server.start"/>
</componentset>

This tells the Azure Plug-In for Eclipse how the app server actually works. Since TC Server is based on Tomcat, you'll see some familiar settings, but some additional ones as well.

First of all, I've added an environment variable called "TC_INSTANCE_NAME". vFabric TC Server always works with app server instances, which need to be created upfront. These are the actual containers to which your application packages (WARs) get deployed. Before deploying vFabric TC Server to a Worker Role, you should prepare the instance (that's at least the easiest way; alternatively we could script the creation of an instance as part of the plug-in's auto-deployment process).

Second, the home directory is no longer the root directory of the app server; it's the directory of the specific instance executed by vFabric TC Server.

Third, the startup of that instance happens through the "tcruntime-ctl.bat" batch file instead of the well-known startup.bat from Tomcat.

Fourth, there is a nice attribute called "detectpath" which lets you identify a file or path inside your app server directory; at project-creation time the plug-in uses it to automatically detect the appropriate componentset-configuration just by picking the root path of your application server when creating the Azure project (see step 3). This is optional; if it is missing, you need to manually select the server configuration from the drop-down shown in the screen shot above with Tomcat selected.

#2: Prepare your vFabric TC Server Directory and Instance:

As mentioned above, TC Server works with instances in which it runs applications. For the approach described in this blog post, I assume the instances get prepared locally before deployment. To prepare your TC Server, just open a command prompt, change to the path of your local TC Server installation files and create a new instance by issuing the following command:

tcruntime-instance.bat create yourinstancename

This will configure and create a new TC server instance as shown in the following screen shot. Remember that instance name since you’ll need it later when creating the Azure deployment projects in Eclipse.

Now before moving forward, there's a tweak you need to make in the vFabric TC Server config files. When creating the instance, a config file is created under the following path:

<vFabric TC Server root>\<yourinstancename>\conf\wrapper.conf

This file sets the JAVA_HOME-directory to a hard-coded path pointing to the JVM it finds locally on your machine:

set.JAVA_HOME=C:\Program Files\Java\jdk1.7.54 (example)

You need to change that to use the %JAVA_HOME% environment variable, which is set as part of the automated deployment process of the Worker Role by the packages created through the Azure Plug-In for Eclipse.
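The corrected line in wrapper.conf would then look similar to the following (a sketch; verify the exact property syntax against your TC Server/wrapper version):

```
set.JAVA_HOME=%JAVA_HOME%
```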


Next, it creates another file in the instance directory where the Java home directory is also set based on your local Java installation:

<vFabric TC Server root>\<yourinstancename>\bin\setenv.bat

In this file, make sure you comment out the line at the beginning that sets the JAVA_HOME environment variable, since it will be set by the Azure Plug-In for Eclipse deployment process to the correct path to which it automatically deployed the JDK earlier in the process. The file should look similar to the following:

rem Edit this file to set custom options
rem Tomcat accepts two parameters JAVA_OPTS and CATALINA_OPTS
rem JAVA_OPTS are used during START/STOP/RUN
rem CATALINA_OPTS are used during START/RUN

rem set JAVA_HOME=c:\hadoop\java
set JVM_OPTS=-Xmx512M -Xss256K

#3: Create your Windows Azure Deployment Project with Eclipse

Finally, after all these steps are completed, you can use the normal process for creating the Windows Azure Deployment Project through the Azure Plug-In for Eclipse. When creating your project, in the server configuration make sure to point to the directory of the vFabric TC Server installation in which you previously created the instance, as shown in the last screen shot.

Now there's one last step you need to accomplish. Remember the instance you created earlier for TC Server? In the <componentset /> defined earlier for the Azure Plug-In for Eclipse, I referred to the instance name through an environment variable. You need to set this variable in the Environment Variables settings of the Worker Role you want to use with TC Server, as shown below.

Since I named the instance for this blog post "blogpost" when I created it earlier with tcruntime-instance.bat as per the screen shot, I also used this name here in the environment variable definition.

#4: Done – Now we can deploy to Azure

That's it; after that you can deploy the project to Windows Azure. The neat thing is that the component configuration from the first step needs to happen only once per dev machine. Once done, you can re-use that configuration for each new project you create. An optimization would be to also script the creation of an instance in TC Server, but deploying a prepared instance simplifies the deployment process and reduces what could potentially go wrong with it. So I am in favor of this simpler way of doing things where possible.

That said, have much fun running vFabric TC Server in automatically managed Windows Azure PaaS Worker Roles. Feel free to get back to me through Twitter if you have any comments or questions.

ASP.NET 4.5.1 WebAPI, “general” integration with OAuth2 and OAuth Authentication Servers

Over the past weeks I worked with several software vendors all having the same question: what does the integration of OAuth2 really look like in ASP.NET 4.5.1?

Well, although that sounds like a simple question to answer with a pointer to the official ASP.NET documentation (which is btw. a good starting point to dig deeper and better understand what I am writing in this post), it is not, indeed!

What you'll quickly see is that the walk-through samples on the official ASP.NET homepage show all sorts of code and how-tos for integrating either with social identity providers (e.g. Facebook, Microsoft Account, Google, Yahoo) or with Windows Azure Active Directory (short: WAAD).

That is cool, but what if you're really interested in the native OAuth integration in ASP.NET? What if you want to understand how to integrate your WebAPI with any OAuth authorization server, e.g. your own or a 3rd-party authorization server?

Nearly all of the partners and customers I talked to over the past 3 weeks had these questions, and the official docs did not really provide an easy-to-understand answer. With this blog post I try to give that answer and a how-to for you.

First – Understanding OAuth Implicit Grant-Flow

First of all, you need to understand approximately how the OAuth implicit grant flow works. In essence it is just a bunch of browser redirects that take place. The following picture, borrowed from MSDN, explains what's going on:

Whenever a user browses to a secured site (= resource provider, relying party) anonymously, that secured site redirects the user to the authentication service. The user then authenticates against this service and, assuming that was successful, the authentication service responds with a browser page that contains an access token (typically signed, possibly encrypted). That browser page then posts the token to a target URL on the resource provider that is registered with the authentication service. The resource provider decrypts the token, validates its signature, and the user is authenticated against the resource/site. If the resource/site is a web page, it can then issue its own authentication cookie as usual; if it is a WebAPI, the client (mostly an app on a device or a JavaScript client) can post the token to the WebAPI with every request as long as the token is valid (simplified!).

If that process takes place in an app, a very typical approach is to “host” a browser control that performs this process of browser redirects, but instead of posting the token back to the resource provider, the client/app intercepts that post and grabs the token to include it in the Authorization header of HTTP requests to the WebAPI. The following picture shows that:

For example, for Windows 8 store apps, the WebAuthenticationBroker-class encapsulates the basic process outlined in the previous image for you. It hosts a browser control to do the following:

1. Open an authentication URL on the authentication server
2. Perform all browser requests required for the authentication
3. When the authentication server responds with a redirect URL the broker knows, intercept this redirect URL and hand it to you so you can grab the token
4. Finally, post the token in the HTTP Authorization-header when calling the WebAPI (resource provider)

For this process to work, the client-app must be registered as a valid client at the authentication server and of course the WebAPI must also be registered as a valid resource provider in the authentication server. Typically auth-servers such as Windows Azure AD or Thinktecture Identity Server do provide these kind of configurations. If you build your own authentication/authorization server, you should follow the same approach for security reasons.

Second – Get up2speed with ASP.NET 4.5.1 and OWIN/Katana

It helps to understand what OWIN and Katana are all about in order to fully understand the code that configures the OAuth2 providers for ASP.NET WebAPIs. But to be honest, if you're not interested in these details and just want to get it working, you can skip this as well and accept the code as it is :)

So I'll keep it short as well: if you want to understand OWIN and Katana, watch this video on Channel9; it really explains the strategy very well:

In essence, one of the important aspects of OWIN/Katana is to decouple the integrated ASP.NET components a bit so that the whole framework gets less “monolithic” and can be easily composed of re-usable, small and slim modules as needed, without always loading a big System.Web.dll into memory.

NOW Rock – Connect your WebAPI with any OAuth2 Provider

All OAuth2 functionality starting with ASP.NET 4.5.1 and Visual Studio 2013 is implemented in OWIN/Katana modules. That said, when you want to integrate your WebAPI with an OAuth authentication/authorization server, you need the following NuGet packages, which contain the generic OWIN implementation:

– Microsoft.Owin.Security.OAuth
– JSON Web Token Handler For the Microsoft .NET Framework

Note that when installing Microsoft.Owin.Security.OAuth you get all required packages as dependencies except the JSON Web Token Handler, which you then need to add manually. The JSON Web Token Handler is required to implement the token-validation functionality, which is not part of the OWIN OAuth module.
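For reference, installing both via the NuGet Package Manager Console could look like this (package IDs as of the 2013 package feeds; verify the current IDs in the NuGet gallery, the JWT handler ID in particular is my assumption):

```
Install-Package Microsoft.Owin.Security.OAuth
Install-Package System.IdentityModel.Tokens.Jwt
```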

After you have added that, you can register the OAuth provider as shown in the code below in your OWIN configuration class. With the default ASP.NET templates in Visual Studio 2013 that is App_Start\Startup.Auth.cs, whereas if you do things by yourself you could add that code directly in your OWIN Startup class:

    app.UseOAuthBearerAuthentication(
        new OAuthBearerAuthenticationOptions()
        {
            Realm = "http://mysimpletest",
            Provider = new OAuthBearerAuthenticationProvider(),
            AccessTokenFormat = new JwtFormat(
                allowedAudience: "http://mysimpletest/",
                issuerCredentialProvider: new SymmetricKeyIssuerSecurityTokenProvider(
                    issuer: "",
                    base64Key: "iGROpg/Q+8oFSihQKJ6+D6Vsu+GOyfEC4qcGKzY4qNQ="))
        });

The code above registers the OAuth Bearer Authentication OWIN module on the IAppBuilder passed into the startup-class for ASP.NET OWIN. It specifies the following parameters:

– the realm for which it expects the token to be issued by the auth server
– the authentication provider, which is OAuthBearer since we expect a bearer token
– the access token format with a token provider used for validating the token

In the sample above we use the SymmetricKeyIssuerSecurityTokenProvider, which means the authentication server signs the token with the symmetric key specified in the code above and our WebAPI uses the same key to validate that signature.

To understand these values better, it is now time to look at the configuration of the authentication server. For implementing my sample I used the Thinktecture Identity Server, which (still) supports OAuth implicit grant (note that this functionality is expected to move to the Thinktecture Authorization Server – see their homepage for further details). Since in my sample I am using a Windows 8 Store App as a client to call my WebAPI, I also need to register that store app as a valid client with the authentication server, as shown below:

As you can see in this picture, the WebAPI symmetric key and the realm match what we have used in the code above. The configured redirect URL is not used since we're accessing the WebAPI from a store app. The store app is registered as a client, as you can see in the second screen.

In the Windows 8 Store App I am using the WebAuthenticationBroker from WinRT to kick off the authentication flow. The WebAuthenticationBroker encapsulates a browser control that performs the required process, as shown in the first image of this blog post, including all browser redirects. The process stops when that hosted browser control tries to post to the redirect URI entered in the client configuration as shown in the Thinktecture Identity Server screen shot above.

In case of a Windows 8 app, that “redirect URL” is not a physical URL; it is something the browser control hosted in the WebAuthenticationBroker will intercept. Many authentication servers append the access token to that URL as a fragment/query-string parameter:
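A purely illustrative example of such an intercepted callback URL (the ms-app:// callback URI and the token value are made up; only the `/#access_token=` shape matters for the parsing code later on):

```
ms-app://s-1-15-2-1234567890-…/#access_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9…&token_type=urn:oauth:bearer&expires_in=3600
```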


The WebAuthenticationBroker just waits for the hosted browser control to try to post to this URL, intercepts that request and then returns to your code so that you can extract the access token out of it:

string thinktectureUrl = "https://mszspro8/idsrv/issue/oauth2/authorize";
string webApiUriForOauth = "http://mysimpletest/";
string thinktectureClientId = "testclient";

var callbackUri = WebAuthenticationBroker.GetCurrentApplicationCallbackUri();
// Note: the exact query string is specific to your auth server; this is a
// typical implicit-grant authorize request (response_type=token)
var startUri = new Uri(
    string.Format(
        "{0}?client_id={1}&redirect_uri={2}&response_type=token&scope={3}",
        thinktectureUrl,
        Uri.EscapeDataString(thinktectureClientId),
        Uri.EscapeDataString(callbackUri.AbsoluteUri),
        Uri.EscapeDataString(webApiUriForOauth)));

var success = await WebAuthenticationBroker.AuthenticateAsync(
                        WebAuthenticationOptions.None, startUri, callbackUri);

// Pick the token out of the response
if (success.ResponseStatus == WebAuthenticationStatus.Success)
{
    var parametersString = success.ResponseData.Substring(success.ResponseData.IndexOf("/#") + 2);
    var parameters = parametersString.Split('&');
    var token = parameters[0].Substring("access_token=".Length);

    // Now call the web API with the bearer header
    httpClient.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", token);

    var response = await httpClient.GetAsync("https://localhost:44304/api/values");

    await PrintWebAPISuccess(response);
}
else
{
    throw new Exception(success.ResponseData);
}

The code above is part of a button click handler in a Windows Store App that calls the WebAPI. As you can see, the first thing it does is compose a URL to start the authentication process, as well as a redirect URL to intercept for the token post (which is the ms-app:// URI retrieved with the utility function WebAuthenticationBroker.GetCurrentApplicationCallbackUri()).

The start-URL is specified by the authentication server, and the one used above is specific to the Thinktecture Identity Server. Windows Azure AD or other authentication servers will have a different start-URL.

But the process is always the same: the WebAuthenticationBroker hosts a browser and kicks off the process with this start URL, and it stops processing by intercepting a post to the redirect/callback URI.

After that process is completed, we take the response from the web authentication broker and parse the JWT-token out of the intercepted URL. Then we add the token to the HTTP header and call the web API. The following screen-shot shows that in action using the Thinktecture Identity Server from within our Windows Store App:

The “.UseOAuthBearerAuthentication” call we made in the WebAPI ensures that a token must be posted to the API so that it can be called, and it also sets System.Threading.Thread.CurrentPrincipal so that we can query properties of the user contained in the token as follows:

[Authorize]
public class ValuesController : ApiController
{
    // GET api/values
    public IEnumerable<string> Get()
    {
        // Claims APIs are abstract APIs that provide user information etc.
        // independent of the Identity Provider used
        var claimsId = (ClaimsIdentity)System.Threading.Thread.CurrentPrincipal.Identity;
        foreach (var claim in claimsId.Claims)
        {
            // A claim is an attribute of a user, e.g. email, age/birthdate
            System.Diagnostics.Debug.WriteLine(
                "-) {0} = {1}", claim.Type, claim.Value);
        }

        return new string[] { "value1", "value2" };
    }
}
Please note the [Authorize]-attribute which ensures that the API cannot be called without a token.

The APIs used to query the attributes of a user are the “new Windows Identity Foundation” APIs which are now part of the .NET Framework.

Other OAuth OWIN Providers in ASP.NET 4.5.1

Now you might ask why there are other providers in the form of NuGet packages if the Microsoft.Owin.Security.OAuth provider solves the problem in a general way. The answer is: simplicity!!! You have to admit that e.g. the following code for Facebook, Windows Azure AD or Google is much simpler than the code above:

app.UseWindowsAzureActiveDirectoryBearerAuthentication(
    new WindowsAzureActiveDirectoryBearerAuthenticationOptions
    {
        Audience = ConfigurationManager.AppSettings["ida:Audience"],
        Tenant = ConfigurationManager.AppSettings["ida:Tenant"]
    });

app.UseFacebookAuthentication(appId: "yourappid", appSecret: "yourappsecret");

All of these providers do nothing else but encapsulate what we've done manually using Microsoft.Owin.Security.OAuth in our WebAPI above. With Windows Azure AD and Windows Server AD / ADFS you also have a few other neat libraries on the client (Active Directory Authentication Library (ADAL)) which make things much simpler, as outlined on Vittorio's blog.

Sample Code Download:

I'll continue with a 2nd post on Windows Azure AD and OAuth with ASP.NET WebAPIs after Christmas and will then also post the sample code from which I've taken the fragments above.

So stay tuned for more details. I think the details above are exactly those which are not very well documented, and I hope this post fills that gap appropriately.

Merry Christmas!

Windows Azure – Console Apps in Platform-as-a-Service Worker Roles the right way!

I am currently working with a software vendor in the media space who has some really valuable software assets implemented as console applications today. These command-line applications are used for high-performance image transcoding/encoding jobs and are implemented in C/C++ (originally built for Unix/Linux environments).

Now the partner wants to run this application as part of a Platform-as-a-Service (PaaS) deployment on Windows Azure using Web/Worker Roles for a new online service they're currently building. But a requirement is to enable that scenario without rewriting the application, re-using it as it is.

In this blog post I'll show how you can run console applications correctly in Windows Azure Worker Roles without a single modification of the original console application itself. This is a scenario I've been challenged with in several other partner engagements, and the requirements and solutions were always similar.

I'll start with a bit of background information and then dig into the solution based on an anonymized sample use case that pretty much captures all the requirements I've been confronted with in such cases with several partners. The complete sample code is published on my workspace on CodePlex and is available as a download or by cloning the git repository I am using there.

Download the release sample code as ZIP archive here

Background #1 – Why not re-write the application?

The console application that performs the transcoding/encoding tasks is not fully owned by the partner. Furthermore, the application runs perfectly fine on the machines in their own data center, and that should remain exactly the same, as not all customers will be deployed in the cloud. Their on-premises data center runs on Linux VMs, therefore the console application needs to keep running on both Linux and Windows (as they're moving to Windows Azure PaaS and not IaaS).

Background #2 – Why do they want to do Windows Azure PaaS instead of IaaS?

Next you might think it would be easier to just take Linux VMs on Windows Azure and run the console application there. Indeed, that was our first approach for the overall design of the solution. But using Windows Azure Virtual Machines (and IaaS in general) means that you have to maintain the virtual machines by yourself while running the solution. Maintenance here means things such as taking care of OS updates, updating the application code appropriately without downtime and the like.

That's okay if you're just running a handful of virtual machines. But in this case the partner needs to run > 100 virtual machines in a scale-out scenario for a massive amount of transcoding jobs. At such a scale, maintenance really becomes an annoying companion over time and results in increased effort, time and money required to operate the environment.

In PaaS with Windows Azure Cloud Services (Web/Worker Roles), on the other hand, the operating system and all application deployments are managed automatically by the Windows Azure platform. That means the software vendor does not need to apply updates manually or maintain the operating system at all. For ongoing operations management a huge amount of effort goes away, freeing up time for more important and valuable things. Therefore we jointly decided to move towards PaaS instead of continuing the IaaS/virtual-machine approach.

The Solution / Part #1 – High-Level architecture and “Conditions”

The sample implementation I am providing as an add-on to this post outlines the final architecture in a simplified way. Essentially we decided to run the console application in Windows Azure Worker Roles as part of an encoding/transcoding cloud service. This service gets its jobs from an Azure queue, which is filled by a web service accessed by permitted applications of the software vendor's customers. Basically the flow looks as follows:

  1. The originator of a job uploads assets into Azure BLOB storage.
  2. The originator then calls a web service to submit a new encoding/transcoding job.
  3. This web service validates the submitted job and if okay it adds an entry to an Azure TABLE with the details on the job.
  4. After that the web service adds a message to queue to initiate the processing of the job by the worker.
  5. The worker picks-up the job, reads the details (e.g. the asset in BLOB storage to be processed).
  6. The worker downloads the asset from the BLOB storage onto the local machine.
  7. Then the worker executes the console application against the downloaded asset and stores the result locally, as well.
  8. After that the worker uploads the resulting asset (the output from the console application) to BLOB storage again (in a separate container).
  9. When that’s done, the worker updates the table with the Job information and marks the job as “done”.
  10. Finally the worker deletes all assets from the local file system as they’re not needed on the executing compute instance, anymore.
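Steps 3 and 4 of this flow can be sketched on the web-service side roughly as follows. This is a sketch against the Windows Azure Storage client library of that time; the table/queue names and the JobEntity shape are my assumptions, not the exact code from the sample:

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;
using Microsoft.WindowsAzure.Storage.Table;

// Hypothetical job entity stored in the jobs table (names are illustrative)
public class JobEntity : TableEntity
{
    public string BlobName { get; set; }
    public string Status { get; set; }
}

public static class JobSubmitter
{
    public static void SubmitJob(CloudStorageAccount account, string jobId, string blobName)
    {
        // Step 3: record the job details in an Azure TABLE
        var table = account.CreateCloudTableClient().GetTableReference("jobs");
        table.CreateIfNotExists();
        table.Execute(TableOperation.Insert(new JobEntity
        {
            PartitionKey = "jobs",
            RowKey = jobId,
            BlobName = blobName,
            Status = "submitted"
        }));

        // Step 4: enqueue a message so one of the workers picks the job up
        var queue = account.CreateCloudQueueClient().GetQueueReference("jobqueue");
        queue.CreateIfNotExists();
        queue.AddMessage(new CloudQueueMessage(jobId));
    }
}
```

Because the queue message only carries the job id and all details live in the table, messages stay small and the worker re-reads the authoritative job state before processing.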

As you can see from the solution above, the primary aspect to consider is that the local storage on Web/Worker Roles in Windows Azure is transient. All compute instances need to be treated as “stateless” and if there’s some state on these compute instances, it needs to be “temporary”. Therefore all assets that are an output from the processing need to be uploaded to Azure BLOB (or another permanent storage service such as Azure TABLEs, Azure SQL DB etc.) after they’ve been processed on a single compute instance, successfully.

On the other hand, the console application does not understand anything about Azure BLOB storage and the like. It typically understands command-line parameters, local file-system input and local file-system output for storing results. Therefore the worker needs to download the input to the local file system so that the console application is able to process the assets, and after processing it needs to upload the results (typically written to the local file system by such console applications) to Azure BLOB storage. Given that the console application must not be changed in our scenario (which is often the case), the worker needs to take care of that processing.

Of course that is not the only scenario related to command-line applications, but it is the one I've been challenged with most often across several of the software vendors we've been working with. Therefore it is common enough to outline in greater detail.

Below is a sketch of that architecture in a simplified way; more correct would be a complete implementation of the queue-centric workflow pattern as outlined in Jason Short's blog.

Note that you can easily scale out this solution by adding any number of worker role instances. E.g. you could set the number of instances to 100 so that you can process 100 transcoding jobs in parallel. In addition you could optimize by running multiple instances of your console app at the same time on a single compute instance. But still, the easy scale-out happens just by adding compute instances, and every instance picks up jobs from the queue as available and processes them in parallel.

Matching the assets of the architecture sketch above to the sample-download I’ve published looks as follows:

Component in Sketch               | Asset in Visual Studio Sample
Client                            | n/a – since I have built a web app
Web Service                       | Web Role ThumbnailFrontend (ASP.NET MVC)
Proc. Worker (processing Worker)  | ThumbnailBackend (Azure Worker Role)
cmd.exe                           | ThumbnailProducerApp (Commandline App)
access to Queue, Jobs table, Blob | ThumbnailShared (class library)
Azure Cloud Service               | ThumbnailCloudService

The Solution / Part #2 – Some Implementation Details

Now that we know how the solution is structured in general, we can take a look at some more details. The sample I've posted on my workspace is based on a fictive (yet realistic) use case of generating thumbnails from images and implements the architecture outlined earlier. For the concrete case mentioned above, we just need to replace the use case (and therefore the command-line tool in question) with their use case and tool and we're all set. Of course in the real solution we've implemented more details (such as error handling, a dead-letter queue, etc.) which I've left out of my sample implementation for the sake of focus and simplicity, but the overall approach is the same.

In this section, instead of going into all details, let's focus on the heart of the Worker Role, which runs the legacy console application and does the work around executing it. There are a few things to keep in mind to do this correctly. These are:

  • First, the console application does not understand anything about Azure BLOB storage and never will (since it should not be changed and should run as-is on-premises and in the cloud). So content needs to be downloaded from and uploaded to BLOB storage by the worker role host process.
    • The logic for this is implemented as part of the RoleEntryPoint-class you can implement for every Web/Worker Role in Windows Azure.
  • Since the console application relies on the local file system (for reading and writing content), we need to make sure to do this in the right way. What does that mean?
    • Well, the file system and drive structure for Web/Worker Roles is defined by Azure and might change over time.
    • Therefore hard-coding drive- and directory-access is definitely not the best solution.
    • For this purpose, Local Resources exist in Windows Azure Web/Worker Roles that give you access to the local file system in the right way.
  • Deploying the console application needs to happen alongside the Worker Role project so that we can call it. Calling the console application also should not happen with hard-coded paths.
    • Deployment of the console app happens by adding it to the Worker Role project and making sure the “Copy to Output Directory”-property is set to “Copy Always” in Solution Explorer. That way the console application executable will be included in the *.cspkg Azure deployment package by Visual Studio.
    • Second, to call the console application, the correct environment variables should be used. The one we are using in our sample is ROLEROOT, which points to the root directory of the extracted content from our *.cspkg Azure deployment package. From there we can easily find the console application using relative path specifications and are therefore “resistant” against possible future changes of the deployment file structures of Azure Web/Worker Roles.

Before looking at further coding details, let’s have a quick look at how the console application of my sample works so you get a better understanding of the scenario at a greater level of detail. Essentially the sample application creates thumbnails from existing images on the local file system and creates an output file for the generated thumbnail:

Think of this as the legacy application you want to run in Windows Azure Worker Roles without changing it. Okay, now let's look at some code from the worker role implementation. Looking at the solution structure, the RoleEntryPoint-implementation does all the plumbing (querying the queue, downloading and uploading content from Azure BLOB storage and calling the console application). For that purpose the legacy console app needs to be deployed with the worker role through the Azure deployment package. That is done by adding the console app to the project and making sure the “Copy to Output Directory”-property is set to “Copy Always” as shown here:

As a next step you need a file-system directory from which your console application can read content and to which it can write new content. In our case, that's where the console application reads the actual source images, downloaded by the worker role to the local file system of the instance processing the job, and where it writes the resulting thumbnail. From there the worker host implementation picks it up and uploads it to BLOB storage.

For that purpose you specify a local resource, which is essentially a local, temporary directory handed to you by the Windows Azure Role Environment APIs for Web/Worker Roles. Since you can specify a size for that resource, the environment can guarantee that this amount of disk space is available locally (depending on the instance size you have chosen for your Web/Worker Role). Local resources are configured on the properties dialog for the Worker Role in the Windows Azure project in Visual Studio as shown below:
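Under the covers, that dialog writes the local resource definition into the ServiceDefinition.csdef file of the cloud project. A definition roughly looks like the following fragment (the role name, resource name and size are placeholders, not the values from the sample):

```xml
<WorkerRole name="ThumbnailWorker" vmsize="Small">
  <LocalResources>
    <!-- name: what you pass to RoleEnvironment.GetLocalResource() -->
    <!-- sizeInMB: the guaranteed temporary disk space on the instance -->
    <LocalStorage name="thumbnailstemp" sizeInMB="1024" cleanOnRoleRecycle="true" />
  </LocalResources>
</WorkerRole>
```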

Now that we know the “setup” for our project, we can look at some of the interesting pieces of code for the overall solution. The first thing I’d like to take a look at is the overall structure of our Worker Role and the initial steps before the actual processing starts as they are super-important for anything afterwards:

public class WorkerRole : RoleEntryPoint
{
    // ... some private variables are defined here ...

    public override bool OnStart()
    {
        // ... default implementation ...
        return base.OnStart();
    }

    public override void Run()
    {
        // ... some other code here not so important right now ...

        // Now reserve a local temp path where images will be saved by the console app
        Trace.WriteLine("Getting a local directory for temporary work with image thumbnail generation...");
        var localResource = RoleEnvironment.GetLocalResource(
                                "thumbnailstemp" /* name of the local resource configured in the project properties */);

        // Next retrieve the path of the executable deployed with this worker role
        // based on the environment 'RoleRoot' variable
        var legacyAppExecutable = System.IO.Path.Combine(
                                      Environment.GetEnvironmentVariable("RoleRoot"),
                                      "approot",
                                      "LegacyApp.exe" /* name of the console app deployed with the role */);

        // Receive messages, download the image, process it with the console app and upload the result to blob
        string jobId;
        bool hasBeenDequeued;
        while (true)
        {
            var message = queueRep.GetMessageForJob(out jobId, out hasBeenDequeued);
            if (!string.IsNullOrEmpty(jobId))
            {
                // Process the message ...
            }
        }
    }

    // ... private implementation methods go here ...
}

There are a few super-important aspects in this code. First we get access to the local resource defined earlier using RoleEnvironment.GetLocalResource(). This gives us access to the local path we requested through our project properties (see picture above) with the amount of disk space required. We don’t need to take care of disk or file system structures; we simply get a directory that works. Into this directory the worker downloads content from BLOB storage, and the same directory is passed to the console app as its output location.

Next we get the right path to the console application we’ve deployed alongside our worker role implementation. Instead of hard-coding any path in our solution, we use environment variables which are pre-defined by the Windows Azure environment in our Web/Worker Role. Specifically, the “RoleRoot” environment variable points to the root directory into which our *.cspkg content has been extracted. There, under approot, we find the files from our project, and from there we can point to the legacy console app deployed with our package.

These two are the most important prerequisites for calling the console application later on. After we’ve set them up, we start the processing loop for our worker, where we try to get messages from the queue and, if there are any, start processing them. The method ThumbnailQueueRepository.GetMessageForJob() also makes sure that poison messages do not keep our workers busy forever, as shown below:

public CloudQueueMessage GetMessageForJob(out string jobId, out bool dequeued)
{
    var message = _jobsQueue.GetMessage(TimeSpan.FromMinutes(2));
    if (message == null)
    {
        jobId = string.Empty;
        dequeued = false;
        return null;
    }

    if (message.DequeueCount > 3)
    {
        // Remove the poison message from the queue
        _jobsQueue.DeleteMessage(message);
        jobId = message.AsString;
        dequeued = true;
    }
    else
    {
        // Return the job ID
        jobId = message.AsString;
        dequeued = false;
    }

    return message;
}


Note that I’ve decided to encapsulate queue, table and blob processing into repository classes in my sample implementation so that, in theory, they could easily be replaced with other implementations at a later point in time (such as using Azure Service Bus Queues instead of Azure Storage Queues for different scenarios). To be completely correct I should have moved poison messages to a dead-letter queue, but I didn’t do this for the sake of keeping the sample simpler.
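For completeness, a minimal sketch of what such dead-lettering could look like with Azure Storage Queues follows; the dead-letter queue field is an assumption and not part of the sample:

```csharp
// Sketch only: move a poison message to a separate "dead-letter" queue.
// _deadLetterQueue is a hypothetical CloudQueue created alongside _jobsQueue.
private void MoveToDeadLetterQueue(CloudQueueMessage message)
{
    // Re-enqueue the message content on the dead-letter queue for later inspection
    _deadLetterQueue.AddMessage(new CloudQueueMessage(message.AsString));

    // Then remove the poison message from the jobs queue
    _jobsQueue.DeleteMessage(message);
}
```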

Now let’s come to the heart of the worker, where the console application ultimately gets called. This happens in the Worker Role project in a method called ProcessJob(). This method gets the path to the console application, which we resolved using the RoleRoot environment variable, as well as the path to the local temporary directory.

It then downloads the source image, which is referenced through an HTTP URL in the Azure table I’ve created for maintaining the details of the job received through the queue. The job details are accessed using JobsRepository.GetJob(). After the download it calls the console application simply using Process.Start(), based on the path we resolved earlier.

The console application saves the resulting file into our temporary directory (which we pass in through command line arguments), from where the worker implementation below picks it up and uploads it to BLOB storage (encapsulated in the BlobRepository class in my sample).

Finally, in any case, the worker implementation shown below tries to clean up all files from the temporary directory (which is just deleting the files using System.IO classes from the .NET Framework). This clean-up is important: if we don’t do it, the temporary space fills up over time and we run into out-of-space exceptions once the quota we requested in the project properties earlier is exhausted.

And that’s it; all of that is encapsulated in the ProcessJob() method below, which is part of my Worker Role project from the sample implementation.


private void ProcessJob(string jobId, string appPath, string localTempPath)
{
    // First get the job data
    var job = JobsRepository.GetJob(jobId);
    job.Status = ThumbnailJobEntity.JOB_STATUS_RUNNING;

    // Temporary files used for processing by the legacy app
    var sourceFileName = Path.Combine(localTempPath, "source_" + jobId);
    var targetFileName = Path.Combine(localTempPath, "result_" + jobId);

    try
    {
        // Next download the source image to the local directory
        var httpClient = new HttpClient();
        var downloadTask = httpClient.GetAsync(job.SourceImageUrl);
        using (var sourceFile = new FileStream(sourceFileName, FileMode.Create))
        {
            downloadTask.Result.Content.CopyToAsync(sourceFile).Wait();
        }

        // Then execute the legacy application for creating the thumbnail
        var app = Process.Start(
                      appPath,
                      string.Format("\"{0}\" \"{1}\" Custom 100 100", sourceFileName, targetFileName));
        app.WaitForExit();
        // You should set a timeout to wait for the external process and kill it if the timeout is exceeded

        // Evaluate the result of the execution and throw an exception on failure
        if (app.ExitCode != 0)
        {
            var errorMessage = string.Format("Legacy app did exit with code {0}, processing failed!", app.ExitCode);
            Trace.WriteLine(errorMessage, "Warning");
            throw new Exception(errorMessage);
        }

        // Processing succeeded, now upload the result file to blob storage and update the jobs table
        using (FileStream resultFile = new FileStream(targetFileName, FileMode.Open))
        {
            var resultingUrl = BlobRepository.SaveImageToContainer(resultFile, job.TargetImageName);

            job.Status = ThumbnailJobEntity.JOB_STATUS_COMPLETED;
            job.TargetImageUrl = resultingUrl;
        }
    }
    finally
    {
        // Deletes all temporary files that have been created as part of processing
        // It would also be good to run this time-controlled at regular intervals in case this fails
        TryCleanUpTempFiles(sourceFileName, targetFileName);
    }
}
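TryCleanUpTempFiles() is a small private helper in the sample; a minimal sketch of what such a helper could look like follows (the exact implementation in the sample may differ):

```csharp
private void TryCleanUpTempFiles(params string[] tempFiles)
{
    foreach (var tempFile in tempFiles)
    {
        try
        {
            // Delete the temporary file if the processing run left it behind
            if (System.IO.File.Exists(tempFile))
                System.IO.File.Delete(tempFile);
        }
        catch (Exception ex)
        {
            // Never fail the job because of cleanup; log the problem and move on
            Trace.WriteLine("Could not delete temp file " + tempFile + ": " + ex.Message, "Warning");
        }
    }
}
```

Swallowing the exception here is a deliberate choice: a leftover temp file should not fail an otherwise successful job, and a periodic, time-controlled cleanup can catch any leftovers later.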

In Summary

The scenario outlined in this blog article is a very common one partners have challenged me with often: run a console application in Windows Azure PaaS without modifying it. With the following considerations, that can be accomplished easily:

  • Use LocalResource through project properties and RoleEnvironment.GetLocalResource() instead of hard-coding paths.
  • Let the console application access all files required through that local resource path.
  • Download input files from BLOB storage to that local resource path and upload the output files back, so that the console application only works with local files and your web/worker role instances stay “stateless” and can be scaled out (this one is a MUST).
  • Instead of hard-coding the path to your console application, use the environment variable “RoleRoot” to get the correct path to your console application.
  • For a list of environment variables issued by Windows Azure look at the following blog post:
  • Make sure you clean up the files you store in the local temporary file system to avoid running into disk space problems and the like.

Ultimately, to me, solutions like these also show that having legacy apps in place does not mean PaaS through Web/Worker Roles in Azure is not an option. It is, even if it might not be obvious at first.

I do hope this post, the sample and the details are helpful to other developers. As mentioned, software development partners I’ve been working with have challenged me with this scenario quite often, and that is why I decided to post a more detailed sample implementation.