Azure & Cloud Foundry – Setting up a Multi-Cloud Environment

This week I was presenting at the Cloud Foundry Summit 2016 Europe in Frankfurt, of course about running Cloud Foundry on Azure and Azure Stack. It was great being here, especially because one of the two main global ISV partners I am working with on the engineering side has been here as well and is even a Gold sponsor of the event. It was indeed an honor and a great pleasure for me to be part of this summit … and great to finally have a technical session at a non-Microsoft conference again :)

Indeed, one reason for this blog post is that I ran out of time during my session and was only able to show small parts of the last demo.

Anyways, let’s get to the more technical part of this blog post. My session was all about running CF in public, private as well as hybrid clouds with Azure being involved in some way. This is highly relevant since most enterprises are driving a multi-cloud strategy of some sort:

  • Either they are embracing hybrid cloud and run deployments in the public cloud as well as in their own data centers for various reasons, or
  • they want to distribute and minimize risk by running their solutions across two (or more) public cloud providers.

Despite the fact that my session was focused on running Cloud Foundry on Azure, a lot of the concepts and architectural insights presented can be re-used for other kinds of deployments with other cloud vendors or private clouds as well.

The basics – Running Cloud Foundry on Azure and Pivotal

Microsoft has developed a BOSH CPI that enables BOSH-based deployments of Cloud Foundry on Azure. The CPI is entirely developed as an Open Source project and contributed to the Cloud Foundry Incubator project on GitHub.

Based on this CPI, there are two main ways for deploying Cloud Foundry clusters on Microsoft Azure: manually with BOSH or through the Azure Quickstart ARM templates.

There is very detailed guidance available in all of those GitHub repositories explaining the details. I would suggest following this one since it is by far the easiest: Deploy Cloud Foundry on Azure – and always follow the “via ARM templates” suggestions of the docs.
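
If you prefer scripting the ARM-template-based deployment over clicking through the portal, a rough sketch with the Azure Cross-Platform CLI could look like the following. Note that the resource group name and the template file names are just examples – check the Quickstart template’s documentation for the actual parameter files:

# Sketch: deploy the Cloud Foundry Quickstart ARM template with the Azure xplat CLI
azure config mode arm
azure group create "my-cf-group" "northeurope"
azure group deployment create -g "my-cf-group" -n "cf-deployment" -f ./azuredeploy.json -e ./azuredeploy.parameters.json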

Finally, in addition to Azure, to completely follow this post you need a second CF cluster running in another cloud. By far the easiest way is to set up a trial account on Pivotal Web Services, which provides you with some sort of "Cloud Foundry-as-a-Service". Follow these steps here for doing so…

A Multi-Cloud CF Architecture with Azure on one side

There are many reasons for multi-cloud environments. Some include running parts in private clouds for legal and compliance reasons, while others include spreading risk across multiple cloud providers for disaster recovery. The example in this post is focused exactly on the multi-cloud DR case since it covers two public cloud providers:

architecture

  • Azure Traffic Manager acts as a DNS-based load balancer. We will configure Traffic Manager with a priority policy, which essentially routes traffic based on priority; if one cloud has a failure, Traffic Manager will route traffic to the other cloud.
  • The Azure Load Balancer is a component you get "for free" in Azure and don’t really need to take care of. It balances traffic across the front-end nodes of your CF cluster and is automatically configured for you if you follow the guidance above for deploying CF on Azure.
  • Inside of each CF cluster, we need to make sure to register the DNS names used by Traffic Manager and configure the CF routers to route requests to our apps in the CF cluster appropriately.

Setting up Traffic Manager

Let’s start with setting up Azure Traffic Manager since we’ll need its domain name for the configuration of the apps in both Cloud Foundry targets. You can add Azure Traffic Manager as a resource to the resource group of your Cloud Foundry deployment or to any other resource group. In my case, I deployed the Traffic Manager in a separate resource group as shown in the following screenshot:

Traffic Manager Setup

The important piece to note for now is the domain name of your Traffic Manager endpoint. The actual endpoints for Traffic Manager do not need to be configured at this point in time – we will look at that later.
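
By the way, creating such a profile can be scripted as well. The following is only a sketch with the Azure Cross-Platform CLI and the option names are assumptions from memory – they may differ between CLI versions, so double-check with azure network traffic-manager profile create --help:

# Sketch: create a priority-routed Traffic Manager profile (option names are assumptions)
azure network traffic-manager profile create "my-cf-group" "cfsummithybrid" \
    --traffic-routing-method Priority \
    --relative-dns-name "marioszpcfsummithybrid" \
    --ttl 300 \
    --monitor-protocol http --monitor-port 80 --monitor-path "/"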

Deploying the sample app to Pivotal Web Services

As a next step, we need to deploy the sample application to Pivotal Web Services and take note of the (probably random) domain name it has associated to the application.

# Log in to Pivotal Web Services and target your org and space
# ($pivotalOrg and $pivotalSpace are variables you set for your account)
$pivotalApiEndpoint="api.run.pivotal.io"
cf login -a $pivotalApiEndpoint
cf target -o $pivotalOrg -s $pivotalSpace
# Push the sample app, set the REGION marker variable and restage so it takes effect
cf push -f ./sampleapp/manifest.yml -p ./sampleapp
cf set-env multicloudapp REGION "Pivotal Cloud"
cf restage multicloudapp

To get the domain name and IP, just execute cf app multicloudapp and take note of the domain name as shown in the following figure:

Pivotal App Domain Name
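
If you prefer the command line over the screenshot, the relevant part of the cf app output looks roughly like the following (illustrative and shortened – the random route will differ for your deployment):

cf app multicloudapp
# Showing health and status for app multicloudapp in org myorg / space dev as user...
#
# requested state: started
# instances: 1/1
# urls: multicloudapp-xyz-abc.cfapps.io   <-- this is the domain name to note down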

Deploying the App into Cloud Foundry on Azure

The deployment of the sample app into Azure works exactly the same way, except that we’ll need to use a different API endpoint, organization name and space inside of Cloud Foundry:

$azureCfApiEndpoint="api.$azureCfPublicIp.xip.io"
cf login -a $azureCfApiEndpoint
cf target -o $azureOrg -s $azureSpace
cf push -f ./sampleapp/manifest.yml -p ./sampleapp
cf set-env multicloudapp REGION "Microsoft Azure"
cf restage multicloudapp

The Cloud Foundry API endpoint I used above is the one that is registered by default when using the ARM-based deployment of open source Cloud Foundry with the Azure Quickstart Templates. The DNS registration mechanism used there is documented here.
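
In case you are wondering about the xip.io part of that endpoint: xip.io is a public wildcard DNS service, so any host name under <ip>.xip.io simply resolves to <ip>. That is why no explicit DNS registration is needed for the CF API endpoint:

# Any host under <ip>.xip.io resolves to <ip>; for the public IP of my deployment:
nslookup api.52.169.87.212.xip.io
# ...answers with 52.169.87.212, the public IP of the CF cluster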

Also note the environment variable I am setting in the scripts above using cf set-env multicloudapp REGION "xyz". That variable is used by our sample application (which is written in Ruby in this case) to output the region in which the app is running. That way we can see whether we are directed to the app deployed in Azure or the one in Pivotal Web Services.
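
To double-check that the variable is really set on an app, cf env comes in handy. The output below is shortened and just illustrative:

# Show the environment of the app; REGION should show up in the "User-Provided" section
cf env multicloudapp
# ...
# User-Provided:
# REGION: Microsoft Azure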

Finally, if you’re new to Azure, the best way to find out the public IP that has been created for your CF cluster is to look for a public IP address resource inside the resource group of your Cloud Foundry cluster in the Azure Portal. Another way – if you are a shell scripter – is to use the following command with the Azure Cross-Platform CLI:

azure network public-ip show --resource-group YOUR-RESOURCE-GROUP YOUR-IP-NAME
info:    Executing command network public-ip show
+ Looking up the public ip "YOUR-IP-NAME"
data:    Id                              : /subscriptions/YOUR-SUBSCRIPTION-ID/resourceGroups/YOUR-RESOURCE-GROUP/providers/Microsoft.Network/publicIPAddresses/mszcfbasics-cf
data:    Name                            : YOUR-IP-NAME
data:    Type                            : Microsoft.Network/publicIPAddresses
data:    Location                        : northeurope
data:    Provisioning state              : Succeeded
data:    Allocation method               : Static
data:    IP version                      : IPv4
data:    Idle timeout in minutes         : 4
data:    IP Address                      : 52.169.87.212
data:    IP configuration id             : /subscriptions/YOUR-SUBSCRIPTION-ID/resourceGroups/marioszpCfSimple/providers/Microsoft.Network/networkInterfaces/SOME-ID/ipConfigurations/ipconfig1
data:    Domain name label               : marioszpcfsimple
data:    FQDN                            : marioszpcfsimple.northeurope.cloudapp.azure.com
info:    network public-ip show command OK

Configuring Traffic Manager Endpoints

Next, we need to tell Azure Traffic Manager the endpoints to which it should direct the requests arriving at the DNS record registered with Traffic Manager.

In our case, we use a simple priority-based policy, which means Traffic Manager always tries to direct requests to the endpoint with the highest priority unless that endpoint is unresponsive. For full documentation on routing policies, please refer to the Azure Traffic Manager docs.

Traffic Manager Endpoints

As you can see from the above, we have two endpoints:

  • Azure Endpoint, which points to the public IP that the scripts and BOSH deployed for us when we deployed Cloud Foundry on Azure at the beginning.
  • External Endpoint, which points to the domain name for the app that Pivotal Web Services has registered for us (something like multicloudapp-xyz-abc.cfapps.io).

Let’s give it a try…

Now, in the previous configuration for Traffic Manager, we defined that the Pivotal deployment has priority #1 and therefore will be preferred by Traffic Manager for traffic routing. So let’s open up a browser and navigate to the Traffic Manager DNS name for your deployment (in my screenshots and at my CF session that is marioszpcfsummithybrid.trafficmanager.net):

not working

Of course, a Cloud Foundry veteran spots immediately what that means. I am not a veteran in that area, so I fell right into the trap…

Configuring Routes in Cloud Foundry

What I forgot when setting this up originally was configuring routes for the Traffic Manager domain in my Cloud Foundry clusters. Without those routes, Cloud Foundry rejects requests coming in through that domain since it does not know about it.

We need to configure the routes on both ends to make it work. As shown below, we add the Traffic Manager domain to the routes and ensure CF routes traffic from that domain to our multi-cloud sample app:

$trafficMgrDomain="marioszpcfsummithybrid.trafficmanager.net"

#
# First do this for Pivotal
#
cf login -a $pivotalApiEndpoint
cf target -o $pivotalOrg -s $pivotalSpace

cf create-domain $pivotalOrg $trafficMgrDomain
cf create-route $pivotalSpace $trafficMgrDomain
cf map-route multicloudapp $trafficMgrDomain

#
# Then do this for the CF Cluster on Azure
#
$azureCfApiEndpoint="api.$azureCfPublicIp.xip.io"
cf login -a $azureCfApiEndpoint
cf target -o $azureOrg -s $azureSpace

cf create-domain $azureOrg $trafficMgrDomain
cf create-route $azureSpace $trafficMgrDomain
cf map-route multicloudapp $trafficMgrDomain

Now let’s give it a try again and see what happens. This time we should see our Ruby sample app running and showing that it runs in Pivotal, since we gave the Pivotal-based deployment the higher priority within Azure Traffic Manager.
it works

Fixing Routes on Azure with Traffic Manager

After I did the route mapping on Azure, Traffic Manager still claimed that the Azure side of the house was Degraded, despite the route being configured. Initially, I didn’t understand why.

I didn’t have this problem when I first tried this setup. But back then, I had not assigned a DNS name to the Cloud Foundry public IP in Azure. I changed that in between, because I tried something else, and assigned a DNS name to the Azure public IP of the CF cluster. This led Traffic Manager to route against that DNS name instead of the IP.

For troubleshooting, I initiated a fail-over and stopped the app on the Pivotal side (see next section) to make sure Traffic Manager would try to route to Azure. A tracert finally told me what was going on:

C:\code\github\mszcool\cfMultiCloudSample [master ≡]> tracert marioszpcfsummithybrid.trafficmanager.net

Tracing route to marioszpcfsimple.northeurope.cloudapp.azure.com [52.169.87.212]
over a maximum of 30 hops:

  1     5 ms     5 ms     4 ms  10.10.16.4
  2     2 ms     1 ms     1 ms  80.146.218.2
  3     2 ms     1 ms     2 ms  62.156.233.185
  4     5 ms     5 ms     5 ms  87.190.232.17
  5     8 ms     7 ms     7 ms  f-ed1-i.F.DE.NET.DTAG.DE [62.154.14.118]

Looking at the selected route, we immediately spot that the Traffic Manager domain gets resolved to the .cloudapp.azure.com domain of the Azure public IP. So my route on the CF side of the house was just wrong. The route for Azure should not go against the Traffic Manager domain, but rather against the custom domain assigned to the Cloud Foundry cluster’s public IP in Azure:

cf map-route multicloudapp marioszpcfsimple.northeurope.cloudapp.azure.com

C:\code\github\mszcool\cfMultiCloudSample [master ≡]> cf routes
Getting routes for org default_organization / space dev as admin ...

space   host   domain                                            port   path   type   apps            service
dev            52.169.87.212
dev            marioszpcfsimple.northeurope.cloudapp.azure.com                        multicloudapp
dev            marioszpcfsummithybrid.trafficmanager.net                              multicloudapp

Testing a failover

Of course, we want to test whether our failover strategy really works. For this purpose, we kill the app in the Pivotal environment by executing the following commands:

cf login -a $pivotalApiEndpoint
cf target -o $pivotalOrg -s $pivotalSpace
cf stop multicloudapp

After that, we need to wait a while until Traffic Manager detects that the application is not healthy. It may then take a few more seconds or minutes until the DNS record updates are propagated and we see the failover working (the smallest DNS TTL you can set is 300s as of today).
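
A simple way to follow the failover from the client side is to watch the DNS answer for the Traffic Manager name. Once the endpoint is marked Degraded and the TTL has expired, the answer flips from the Pivotal route to the Azure endpoint:

# Repeat before and after the failover (using my Traffic Manager domain as an example):
nslookup marioszpcfsummithybrid.trafficmanager.net
# Before: resolves towards multicloudapp-xyz-abc.cfapps.io (Pivotal)
# After:  resolves towards the Azure endpoint of the CF cluster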

To watch what is going on, the simplest way is to open the Azure Traffic Manager configuration in the Azure Portal. At some point we should see one of the endpoints change its status from Online to Degraded. When opening up a browser and navigating to the Traffic Manager URL, we should now get directed to the Azure-based deployment (which we can see since our app outputs the content of the environment variable we set differently for each of the deployments before):

failover test

Final Words

I hope this gives you a nice start in setting up a multi-cloud Cloud Foundry environment across Azure and a 3rd-party cloud or your own data center. I will try to continue this conversation on my blog, for sure. There are tons of other cool things to explore with Cloud Foundry in relation to Azure, and I’ll at least try to cover some of those. Let me know what you think by contacting me through twitter.com/mszcool!

As usual – all the code is available on my GitHub in the following repository:

https://github.com/mszcool/cfMultiCloudSample

AllJoyn IoT Peer-To-Peer Protocol & Framework – Making it Work with Visual Studio 2013

Our global team is running several industry subject-matter working groups with different focus areas. One of them is targeted at the Internet of Things (IoT).

The charter of this working group is the exploration of IoT standards, protocols and frameworks across various industries, with the goal of developing recommendations, reference architectures and integration points to and on the Microsoft platform, as well as providing the IoT-related product groups at Microsoft with input for technologies, services and features we’re building as a company.

Together with peers from this working group, including @joshholmes, @tomconte, @mszcool, @reichenseer, @jmspring, @timpark, @irjudson, @ankoduizer, @rachelyehe and daniele-colonna, we explored AllJoyn as a technology.

Background on AllJoyn and our activities

I had the chance to work with this working group on exploring AllJoyn as a technology – especially because Microsoft joined the AllSeen Alliance (www.allseenalliance.org). AllSeen has the sole target of making AllJoyn a de-facto standard for device-2-device communications in a mash-up oriented way across platforms and operating systems.

In a nutshell, AllJoyn is a peer-2-peer protocol framework that allows all sorts of device-2-device communication in local networks. It is built on top of D-Bus and TCP/UDP and includes advanced services such as on-boarding of devices to a network, device/service discovery, communication and notifications.

If you want to learn more about the fundamentals, Rachel has published a great summary of AllJoyn on her blog, which explains the capabilities and services provided by AllJoyn as well as the general architectural principles!

Why is this relevant?

Well, I think AllJoyn has the potential to revolutionize how devices find and talk to each other in a standardized, cross-platform way in local networks and across wide networks (through a gateway). If done right, and assuming that a broad ecosystem of devices adopts AllJoyn, this can lead to seamless detection and usage of nearby devices through other (smart) devices such as phones.

Think about the following scenario: you enter a hotel room on a business trip, and your phone, regardless of which platform it runs, detects the TV, the coffee machine and the wake-up radio in your room; you can “configure” and “use” those devices right away through your phone without needing other remote controls or getting up from the bed to start brewing your coffee. Media sharing could also become much easier than it is today across devices from different vendors running different operating systems.

The potential is huge; the ecosystem nevertheless needs to be developed. And since Microsoft joined the alliance around this protocol and services framework, I think we can and want to drive some of this development actively. Let’s see what the future brings based on our early efforts here right now ;)

Setup of an AllJoyn Dev-Environment with Visual Studio 2013 on Windows

This blog post is solely focused on what you need to do to get the AllJoyn SDK working with Visual Studio 2013. This sounds like a simple thing, but so far the AllJoyn SDK is available for VS2012 only. Since AllJoyn is using a specific combination of build tools from the OSS world, tuning the setup to work with VS2013 requires a few further steps, which I’ll dig into as part of this blog post.

Once you have completed your setup, you can start developing AllJoyn-enabled services for a variety of devices on Windows machines with Visual Studio 2013, including Windows services, desktop applications and backend services (e.g. ASP.NET) that make use of AllJoyn as a technology and protocol framework.

To set up a full development environment that works with Visual Studio 2013 (and 2012 in parallel), follow the steps below. You need to install exactly the versions of the tools listed below, as opposed to those in the official docs from the AllJoyn home page, since these versions of the dependent tools also work with Visual Studio 2013.

  1. Download and extract the AllJoyn Source Code Suite.
    1. Downloading the Windows SDK would give you libraries compiled with VS2012. You will definitely run into issues using them with VS2013, since there were some changes relevant to AllJoyn in the VS2013 C++ compiler.
    2. In my tests I used version 14.06 of the SDK.
    3. For details on how to work with the Thin SDK, look at Thomas’ blog post, where he writes about how to get the Thin SDK to compile with VS2013 and use it with Intel Galileo boards running Windows.
    4. Note: for extracting the SDK, I suggest installing a ZIP tool such as 7-zip which is capable of dealing with *.tar and *.gz archive formats.
  2. Download & install Python 2.7.8.
  3. Install SCONS 2.3.3 (use the Windows Installer download) or higher (don’t use earlier versions of SCONS).
  4. Install the following tools in addition to Python and SCONS. These are optional, but I’ve installed all of them to make sure I don’t run into other distracting issues:
    1. DoxyGen 1.8.8
    2. Basic MikTex 2.9.5105
    3. GraphViz 2.3.8
    4. Uncrustify 0.57 for Win32
  5. Make sure to have Python as well as SCONS and the other tools in your PATH environment variable (see the sketch right after this list).
  6. Fine-tune some of the source files for the AllJoyn SDK before compilation due to changes made from VS2012 to VS2013 in the C++ Libraries and Compiler.
  7. Compile the AllJoyn SDK using SCONS.
  8. Create your VS2013 C++ project to test your compiled AllJoyn Library.
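
As a sketch for step 5, this is roughly how I put the tools on the PATH and verify them from a PowerShell prompt before kicking off a build (the install paths are examples from my machine):

# Add the tools to the PATH for the current session (paths are examples) and verify them
$env:PATH += ";C:\Python27;C:\Python27\Scripts;C:\Program Files (x86)\doxygen\bin"
python --version
scons --version
doxygen --version
uncrustify --version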

For more details on how to set up and verify the development environment in general, also look at Rachel’s blog. She will create a post explaining how to install the tools above, make sure you have the environment variables set up correctly and verify that the tools are all available in your PATH environment variable. This post nevertheless just explains the setup for VS2013, based on the learnings with the official releases from AllJoyn available at the time of writing this article.

Update the AllJoyn Source to work with VS2013

As mentioned above, Microsoft made some changes in the Visual Studio 2013 compiler (making it more compliant with certain C/C++ standards and de-facto standards). This resulted in a refactoring of some of the standard template and base libraries the AllJoyn SDK makes use of. Also, some workarounds AllJoyn used for its cross-platform build with SCONS are not needed in VS2013 anymore, so we need to get rid of those.

Fortunately, the changes you have to make are not that many (although they were a bit challenging to track down :)).

  1. For the following steps, replace <alljoynsdkroot> with the root folder in which you have installed the AllJoyn SDK. To ensure we’re talking about the same directory structure, this is what I assume the AllJoyn root directory looks like:
  2. Helpful background info: the core SDK for AllJoyn is built in C++. For other platforms including C, Java, Node etc. the AllJoyn group has built language bindings which are all based on the core C++ libraries. Therefore some headers are available multiple times in the source code control structure for the different language bindings.
  3. The first file we need to touch is one of the platform mapping headers AllJoyn is using in the core SDK. These header files provide some macros that cover differences/workarounds for core functionality of C/C++ compilers on different platforms.
    1. Open the file <alljoynsdkroot>\alljoyn-suite-14.06.00-src\core\alljoyn\common\inc\qcc\windows\mapping.h
    2. Add the following pre-compiler macro at the beginning of the source file:
    3. Un-comment the following lines at the end of the source file to avoid duplication.
  4. The very same mapping file needs to be updated for the C bindings. For this purpose, conduct exactly the same changes as outlined in step 3 for the mapping.h file in the C language binding folder.
    1. The mapping file to update is called <alljoynsdkroot>\alljoyn-suite-14.06.00-src\core\alljoyn\alljoyn_c\inc\qcc\windows\mapping.h
    2. Perform the same changes as outlined in step 3.
  5. The final change that needs to happen is an update to the SCONS build script files for AllJoyn so that they support the VS2013 option from SCONS 2.3.3 in addition to the existing VS2010 and VS2012 options.
    1. Open the file <alljoynsdkroot>\alljoyn-suite-14.06.00-src\core\alljoyn\build_core\SConscript
    2. Search for the section in the script that defines the MSVC_VERSION enumeration with its allowed values. By default this supports values only up to VC 11.0, incl. VC 11.0Exp for the Express edition.
    3. Add an additional value “12.0” to this variable definition as shown in the sketch below (this assumes you have VS2013 Professional or higher – I didn’t test Express but assume 12.0Exp would make it all work with the Express edition as well):
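
For illustration, the change has the following shape. I am quoting the EnumVariable definition from memory of the 14.06 build scripts, so treat the exact variable names and defaults as assumptions and locate the actual definition in your copy of the SConscript:

# Sketch: extend the allowed MSVC versions in build_core/SConscript with '12.0' (VS2013)
vars.Add(EnumVariable('MSVC_VERSION',
                      'MSVC compiler version - Windows',
                      '11.0',
                      allowed_values=('8.0', '9.0', '10.0', '11.0', '11.0Exp', '12.0')))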

Build with Visual Studio 2013

Now that we have made all changes to the platform headers and SCONS scripts, we can build the libraries for Visual Studio 2013. These can then be used in any VS2013 C/C++ project, enabling you to develop with the latest and greatest (released) development tools from Microsoft.

  1. Open up a Visual Studio 2013 command prompt.
  2. Change to the directory <alljoynsdkroot>\alljoyn-suite-14.06.00-src\core\alljoyn
  3. Execute the following command:
    scons BINDINGS="C,C++" MSVC_VERSION="12.0" OS="win7" CPU=x86_64 WS="off"
    1. Note that you might see some warnings since a few verifications got stricter in VS2013 C/C++.
    2. The command builds the AllJoyn SDK with the following options:
      1. Language bindings for C and C++. For the other bindings I’d suggest just using the existing SDKs ;)
      2. Visual Studio 2013 using the 12.0 version of the Visual C compiler.
      3. Target operating system Windows 7 (which works perfectly on Windows 8 as well – it has no impact on the compiler options or Windows SDK references since only standard libraries are used – the SCONS scripts just use this for validating other options, e.g. the CPU options available for the platform of choice).
      4. White-space fixing of source files turned off (WS="off"). To turn this on, make sure Uncrustify is set in your path appropriately.
    3. Here is a screenshot of how the start of the build should look:
  4. Since the build runs a while, wait until it is finished and check whether any errors occurred. If not, you will find the ready-to-use libraries built with the VS2013 C/C++ compiler under the following folders:
    1. C++: <alljoynsdkroot>\alljoyn-suite-14.06.00-src\core\alljoyn\build\win7\x86_64\debug\dist\cpp
    2. C: <alljoynsdkroot>\alljoyn-suite-14.06.00-src\core\alljoyn\build\win7\x86_64\debug\dist\c

Creating a VS2013 Console project for testing the libraries

Finally, to verify that things are working, you can create test apps and see if they can join other devices on an AllJoyn bus (refer to Rachel’s blog for details on what an AllJoyn bus is). In this project you need to reference the libraries just built.

  1. Create a new Visual C++ project. For our testing purposes I’ve created a Console application (Win32/64).
  2. Update your build configurations as you need them.
    1. E.g. by default the project template will create a Win32 32-bit application.
    2. If you want to have 64-bit, just add the 64-bit configuration as usual for any other C/C++ project.
    3. Note that the subsequent changes in this list need to happen for both, 32-bit and 64-bit build configurations.
  3. In the Project Properties dialog, add the following “VC++ Directories” to your “Include Directories” and “Library Directories” settings so that the compiler finds the dependent header files and compiled libraries. These should now point to the directories used earlier to build AllJoyn with VS2013/VC12.0.
    1. Note: in the screenshot below I have used a relative path from my solution, so that whenever I move the solution to a different machine it will still compile without issues as long as the AllJoyn SDK is put into the same directory on that new machine. I’d suggest doing something similar for your projects as well, so that re-creating a dev machine from scratch is predictable and easy to do.
    2. Include Directories
    3. Library Directories
  4. If you want to avoid IntelliSense complaining that it cannot find required headers, also add the include directories you configured earlier under the more general “VC++ Directories” option to the option “C/C++ \ General \ Additional Include Directories”. These are exactly the same as those specified in step 3 for “Include Directories”.
  5. Next you need to define a pre-processor symbol that is used by some of the header files from the language bindings to detect the platform and define different types of macros and platform types (remember the customizations we made earlier to make things build on VS2013 – these are some of those). This pre-processor symbol is called QCC_OS_GROUP_WINDOWS, as shown below:
  6. Finally you need to tell the VC++ linker which libraries to use for the linking process. This ultimately includes some dependencies of AllJoyn itself as well as the built AllJoyn libraries. See the sketch below for the kind of items to include there:
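
Since the corresponding screenshots are hard to reproduce here, the following .vcxproj fragment sketches what steps 3 to 6 boil down to. The relative paths and especially the library list are assumptions from my setup – adjust them to your SDK location and to whatever the linker still complains about:

<!-- Sketch: compiler and linker settings for an AllJoyn VS2013 test project
     (paths and library names are examples/assumptions) -->
<ItemDefinitionGroup>
  <ClCompile>
    <AdditionalIncludeDirectories>..\alljoyn-suite-14.06.00-src\core\alljoyn\build\win7\x86_64\debug\dist\c\inc;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    <PreprocessorDefinitions>QCC_OS_GROUP_WINDOWS;%(PreprocessorDefinitions)</PreprocessorDefinitions>
  </ClCompile>
  <Link>
    <AdditionalLibraryDirectories>..\alljoyn-suite-14.06.00-src\core\alljoyn\build\win7\x86_64\debug\dist\c\lib;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
    <AdditionalDependencies>alljoyn_c.lib;ws2_32.lib;secur32.lib;crypt32.lib;%(AdditionalDependencies)</AdditionalDependencies>
  </Link>
</ItemDefinitionGroup>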

With all these steps in place, you can start writing some code for AllJoyn. E.g. you can discover devices and services or register yourself as a service in an AllJoyn Bus network – all done with Visual Studio 2013.

For example, the following code attaches itself to an existing bus service in a local network and queries for devices that offer services with a specific service prefix name on this bus:

#include <qcc/platform.h>

#include <assert.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <alljoyn_c/DBusStdDefines.h>
#include <alljoyn_c/BusAttachment.h>
#include <alljoyn_c/BusObject.h>
#include <alljoyn_c/MsgArg.h>
#include <alljoyn_c/InterfaceDescription.h>
#include <alljoyn_c/version.h>
#include <alljoyn_c/Status.h>

#include <alljoyn_c/BusListener.h>
#include <alljoyn_c/Session.h>
#include <alljoyn_c/PasswordManager.h>

#include <qcc/String.h>
#include <qcc/StringUtil.h>
#include <qcc/Debug.h>

#include <vector>
#include <windows.h> /* CreateMutex, WaitForSingleObject, ReleaseMutex, Sleep */

/* Note: the globals (g_msgBus, g_busListener, gJoinSessionMutex, g_interrupt,
   i_joinedSessions, s_sessionNames), the OBJECT_DAEMON_BUSNAME/OBJECT_NAME macros
   and the callbacks referenced below are defined elsewhere in the sample. */
int main(int argc, char** argv, char** envArg)
{
    QStatus status = ER_OK;
    char* connectArgs = "null:";
    alljoyn_interfacedescription testIntf = NULL;

    /* Create a bus listener */
    alljoyn_buslistener_callbacks callbacks = {
        &buslistener_registered,
        NULL,
        &found_advertised_name,
        NULL,
        &name_owner_changed,
        NULL,
        NULL,
        NULL
    };

    /* Session port variables */
    alljoyn_sessionportlistener_callbacks spl_cbs = {
        accept_session_joiner,
        NULL
    };
    alljoyn_sessionopts opts;

    printf("AllJoyn Library version: %s\n", alljoyn_getversion());
    printf("AllJoyn Library build info: %s\n", alljoyn_getbuildinfo());

    /* Install SIGINT handler */
    signal(SIGINT, SigIntHandler);

    /* Create a password */
    alljoyn_passwordmanager_setcredentials("ALLJOYN_PIN_KEYX", "ABCDEFGH");

    /* Create message bus and start it */
    g_msgBus = alljoyn_busattachment_create(OBJECT_DAEMON_BUSNAME, QCC_TRUE);
    if (ER_OK == status) {
        status = alljoyn_busattachment_start(g_msgBus);
        if (ER_OK != status) {
            printf("alljoyn_busattachment_start failed\n");
        } else {
            printf("alljoyn_busattachment started.\n");
        }
    }

    /* Register a bus listener in order to get discovery indications */
    g_busListener = alljoyn_buslistener_create(&callbacks, NULL);
    if (ER_OK == status) {
        alljoyn_busattachment_registerbuslistener(g_msgBus, g_busListener);
        printf("alljoyn_buslistener registered.\n");
    }

    /* Connect to the bus */
    if (ER_OK == status) {
        status = alljoyn_busattachment_connect(g_msgBus, connectArgs);
        if (ER_OK != status) {
            printf("alljoyn_busattachment_connect(\"%s\") failed\n", connectArgs);
        } else {
            printf("alljoyn_busattachment connected to \"%s\"\n",
                   alljoyn_busattachment_getconnectspec(g_msgBus));
        }
    }

    /* Create the mutex to avoid multiple parallel executions of found_advertised_name */
    gJoinSessionMutex = CreateMutex(NULL, FALSE, NULL);
    if (gJoinSessionMutex == NULL) {
        printf("Error creating mutex, stopping!");
        return -1000;
    }

    /* Find the LED controller advertised name */
    status = alljoyn_busattachment_findadvertisedname(g_msgBus, OBJECT_NAME);
    if (ER_OK == status) {
        printf("ok alljoyn_busattachment_findadvertisedname %s!\n", OBJECT_NAME);
    } else {
        printf("failed alljoyn_busattachment_findadvertisedname %s!\n", OBJECT_NAME);
    }

    /* Get the number of expected services.
       In our test setup we expect 2 services (Arduino and Galileo). */
    int nExpectedServices = 2;

    /* Wait for join session to complete */
    while ((g_interrupt == QCC_FALSE) && (i_joinedSessions < nExpectedServices)) {
        Sleep(10);
    }

    /*
       Devices found, do whatever needs to be done now
       ...
    */
}

The code above uses a callback that is invoked by the AllJoyn core libraries whenever a device is found. More specifically, it looks for two devices we expected to be available in our “lab environment” for testing purposes. One was an Arduino and the other one an Intel Galileo with an LED connected. Both were using the AllJoyn Thin Client Library to connect to the bus.

The only relevant callback was the following one, since it is called by the AllJoyn libraries when a new device connected to the same AllJoyn bus is found (we’ve used the other callbacks for other tests):

void found_advertised_name(const void* context,
                           const char* name,
                           alljoyn_transportmask transport,
                           const char* namePrefix)
{
    printf("\nfound_advertised_name(name=%s, prefix=%s)\n", name, namePrefix);
    DWORD dwWaitResult = WaitForSingleObject(gJoinSessionMutex, INFINITE);
    s_sessionNames[i_joinedSessions] = (char*)malloc(sizeof(char) * 1024);
    strcpy_s(s_sessionNames[i_joinedSessions], 1024, name);
    i_joinedSessions++;
    /* Enable concurrent callbacks so joinsession can be called */
    alljoyn_busattachment_enableconcurrentcallbacks(g_msgBus);
    ReleaseMutex(gJoinSessionMutex);
    printf("found advertisements %d\n", i_joinedSessions);
}

Note that the code above is just meant to give you an impression. For full end-2-end scenarios, wait for further blog posts from us and look at the official AllJoyn documentation.

Final Thoughts…

Okay, the journey above is a lot of effort, isn’t it? Well, at this point it needs to be said that AllJoyn is still at a very early stage, and therefore some development steps are still a bit hard to get done – especially setting up an environment that works end-2-end with the tool chain of your choice (VS2013 in my case at the time of writing this post).

But I am excited to be involved in this journey. I see various things happening in the industry that are all geared towards some sort of device mash-up. Think of what Microsoft tried to start a few years ago with Live Mesh. Think of what Apple is doing with their seamless device-2-device interaction, which really works great. And consider what Google is attempting with Android “L”. All of these efforts are really cool and enable great scenarios – but they’re all locked down to one vendor and their specific operating system.

When thinking beyond the ecosystems (Microsoft, Apple, Google) mentioned above and involving all sorts of devices of our daily life such as (smart) TVs, coffee machines, HiFi stereo systems, cars and car information systems, anything related to home automation, or even industrial facilities leveraging device mash-ups to solve more complex problems, there MUST be something that AVOIDS vendor lock-in.

AllJoyn can become “this something”. It has the potential, and the technology is cool. All it needs now is some dramatic simplification as well as an ecosystem of service interfaces, and devices supporting those, for all sorts of scenarios across different industries.

We’ll see and experience how this works out and where we end up with it. Microsoft definitely has a huge interest in participating in this journey, and you’ll definitely see more around the Microsoft platform and AllJoyn over the course of the upcoming months.

Also look at the Twitter and blog accounts of my peers, since we’re already planning a subsequent hackathon around this topic together with some product groups to dig even deeper into the world of peer-2-peer device mash-ups based on AllJoyn… so, that said, expect more to come!

Cloud – Windows Azure – Combining PaaS & IaaS to get best of both worlds in your Architecture

Over the past two years I have been working with many ISVs (Independent Software Vendors) to get their products and platforms to the public cloud on Windows Azure. In almost all cases, the requirements and motivations of those ISVs included one or a combination of the following reasons and/or expectations:

  • Expand beyond their own country and go global/international.
  • Be able to scale faster and more easily with less effort.
  • Reduce effort and costs for operations management.

Of course there are many more reasons and motivations why (or why not) an ISV or a company would consider cloud computing. But these are very common ones.

Looking at the requirements above, there is one thing they have in common: the ISVs need to spend less time on managing infrastructure, networking configurations and operating systems (e.g. patching) to be successful. With such requirements in mind, I’d definitely look first into automatically managed service offerings from cloud platforms such as Azure (in other words: Platform-as-a-Service and Software-as-a-Service), because with those requirements you will want as much automatic management & setup as possible to achieve your goals.

But in practice things are often more difficult…

How far the goals above can be achieved requires a detailed look at the initial situation of the ISV and its application. Specifically, the application architecture and the identification of which technologies are used in detail are of major relevance. Not all techniques, technologies and approaches work well in Platform-as-a-Service runtimes such as Windows Azure Web Sites, Mobile Services or Cloud Services (often for a good reason, sometimes because some features are not available yet). Let’s look at a typical example architecture we most often see with software vendors nowadays:

As you can see, we have an ASP.NET MVC web front-end, some services performing more complex computational or IO-intensive tasks in the background, a database cluster (for high availability) and a storage system for documents, videos and other binary data. Looking at it, a naive mapping to Azure could work as follows with pure Platform-as-a-Service and ready-to-use services (such as Azure Storage). That way we would not have to deal with any kind of traditional operations management at all – a truly nice vision and, in my opinion, something that should always be on a long-term roadmap:

  • ASP.NET MVC Application: Web Sites or Cloud Services
  • Computational background process: Cloud Services with Worker Roles
  • SQL Server Cluster: Azure SQL Database
  • Storage Cluster: Azure BLOB Storage

Looks pretty simple, and it would be great if it were always that easy. In practice we need to look at each component to see if it does or makes use of something that is not built for working in Platform-as-a-Service environments. If there is nothing like that, definitely go for it, because you will then benefit most from the cloud and Azure. If there are challenges, we need to consider alternatives: either adapt your product/code base or select another alternative.

And in the case of Windows Azure, that other alternative to PaaS can definitely be Windows Azure Virtual Machines, which is IaaS (Infrastructure-as-a-Service) on Azure. Let’s look a little deeper into the sample architecture above, look at some of the most important questions I typically ask, and pick some assumptions for this post:
  • Storage Cluster
    Questions: How well is access to storage encapsulated? Is it spread across all source files or centralized, e.g. with a repository pattern?
    Assumption: Access to the file system is centrally encapsulated in a repository class in the code base. This can easily be exchanged with a BLOB-storage-based implementation.
    Conclusion: Leverage BLOB storage as a ready-to-use service from Azure.
  • ASP.NET MVC Application
    Questions: Stateless? Persistent local file storage? Installation of 3rd-party components needed?
    Assumption: The app uses 3rd-party components and local file storage, and it is stateless (load-balancer ready with a round-robin algorithm).
    Conclusion: Web Sites will not work because of the 3rd-party components to be installed, but Cloud Services is a fit since the app is stateless and file storage can be outsourced to Azure BLOB storage.
  • Computational background process
    Questions: Windows or Linux? Asynchronous? Persistent local file storage? Installation of 3rd-party components?
    Assumption: The background job runs on Windows, can work asynchronously in the background and needs no 3rd-party components.
    Conclusion: Cloud Services workers are a perfect match since asynchronous processing is possible and file storage can easily be replaced by BLOB storage.
  • SQL Server Cluster
    Questions: Which SQL features are used? What are the performance requirements?
    Assumption: Our SQL Server database uses .NET CLR procedures and encryption functions.
    Conclusion: This is the only case where we cannot use the Platform-as-a-Service offering from Azure. We need to fall back to Infrastructure-as-a-Service and run SQL Server in a Virtual Machine.

The final architecture – Mixing Virtual Machines and Cloud Services…

Since we would like to be as effective and efficient as possible, I definitely recommend using Platform-as-a-Service and Software-as-a-Service where possible. Given the sample analysis above, that is the case for all components except SQL Server, which finally leads to the following architecture in Windows Azure:


Setting up the infrastructure in Azure (basic steps)…

To set up the architecture above in Windows Azure, you need to follow the subsequent steps in this order. Note that this is just a quick overview; in the next post I’ll give you a detailed step-by-step guide based on an example I’ll publish on my CodePlex workspace.

  1. Create an affinity group.
    All networks, virtual machines and cloud services you want to combine through a virtual network MUST be placed into the SAME affinity group.
  2. Setup a “Virtual Network” in Windows Azure.
    This network is used for having a private network with subnets in Azure that allows your Cloud Services and Virtual Machines to interact with each other. The nice thing is that as long as you don’t use VPN, this service is free of charge. Also note that the VMs (IaaS only, not PaaS) will keep the same IP addresses assigned inside of the virtual network as long as you don’t DELETE the VMs.
  3. Create a new Virtual Machine in the network and configure SQL Server.
    After the network is created, create a VM and make sure you add it to the virtual network. After the VM has been created, perform the following steps:

    1. Open up port 1433 in the VM. That enables 1433 communication ONLY INSIDE the virtual network. If you also want it available externally, you need to open up the port in the endpoint configuration on the management portal of Windows Azure.
    2. Configure SQL Server using SQL Authentication (unless you also have an AD deployed in a VM in Azure, in which case you can also use Windows Authentication).
    3. Import your database, create a login with SQL Authentication and make sure to provide it access to the database.
    4. Finally, open up a command prompt, type ipconfig and write down the IP address. Note that the address will stay constant as long as you don’t delete the VM. Please DO NOT assign a static address inside the guest OS, since this is not supported in Azure VMs!!
  4. Create & deploy a Cloud Service package for your web site.
    Finally, for your ASP.NET web application (mentioned in the sample above), create a cloud service package and add the network configuration to your “ServiceConfiguration.Cloud.cscfg” XML configuration file (see the sketch right after this list). Before publishing, make sure that your database connection string points to the IP address you noted for your VM in step 3.
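
To make step 4 a bit more concrete, here is a sketch of how the relevant parts of the “ServiceConfiguration.Cloud.cscfg” could look. The network, subnet, role and setting names are examples from my setup, not fixed values:

<!-- Sketch of ServiceConfiguration.Cloud.cscfg (names are examples) -->
<ServiceConfiguration serviceName="MyCloudService"
                      xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <Instances count="2" />
    <ConfigurationSettings>
      <!-- Connection string pointing to the (stable) IP of the SQL Server VM from step 3 -->
      <Setting name="DbConnectionString"
               value="Server=10.0.1.4;Database=MyDb;User Id=mySqlUser;Password=..." />
    </ConfigurationSettings>
  </Role>
  <NetworkConfiguration>
    <VirtualNetworkSite name="MyVNet" />
    <AddressAssignments>
      <InstanceAddress roleName="WebRole1">
        <Subnets>
          <Subnet name="FrontEndSubnet" />
        </Subnets>
      </InstanceAddress>
    </AddressAssignments>
  </NetworkConfiguration>
</ServiceConfiguration>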

Final Words and more scenarios!!

Windows Azure supports “mixed deployments” that include Virtual Machines (IaaS), Cloud Services (PaaS) as well as other platform services (e.g. storage, media services etc.). That enables you to get the best of both worlds: the full efficiency, automatic scale and automatic management of PaaS where possible, while gaining full control through VMs where needed.

Typical scenarios enabled by combining Virtual Machines and Cloud Services on Azure – where you run most of your workloads in automatically managed Platform-as-a-Service while running other pieces in VMs where you need full control – include:

  • Combining your app with Linux-based workloads, since Linux runs in Azure Virtual Machines.
  • Special SQL Server requirements that lead to situations where you cannot leverage Azure SQL Database.
  • You need to run legacy components in your app that just don’t work inside of PaaS runtimes such as Cloud Services, Web Sites & Co.

With such principles and thoughts you can definitely move much faster to the public cloud and Windows Azure when you need to! You don’t need to re-write your whole app: use VMs where applicable while moving to PaaS where you think you can benefit most from it!!