WPD203: Choosing an X-Platform Dev Strategy (in review)

Yesterday at TechEd 2014 (Melbourne) I presented a session on “Choosing an X-Platform Dev Strategy”. As the name suggests, the topic was cross-platform development from a Microsoft developer's perspective. It also focused on mobile development, which is where cross-platform comes up for most folks.

Unlike previous TechEd events in Australia, this year they are running it twice: once in Melbourne, and once in Sydney a few weeks later. Each event is smaller, with fewer concurrent sessions, which means that the audiences for each session are pretty large (my session had folks standing at the back and outside the presentation area).

I’m always keen to look at my session evaluations (yes, we do read them) as it gives us a chance to take the constructive feedback and improve generally as presenters. This year I’ve got the added benefit of being able to present the session twice to two TechEd-scale audiences, and hopefully take it as an opportunity to improve my delivery. So I figured I would do a debrief with “the Internet”, or at least those three people that read my blog.

To get started, let’s look at the aggregated scores for my session:

WPD203: Aggregated Session Evaluation

So overall I scored 4.23 out of 5.00, or about 84%. My worst-scoring area was the following question:

The knowledge/skills I gained are relevant to my role

As you can see, I scored a 3.92 in that particular area. It would be really interesting to know what the expectations of the topic were going in, just in case there is a misalignment between that and the title of the talk. One of the arguments that I made in my talk was that cross-platform development is going to become increasingly relevant to developers who build business applications (it already is for consumer applications).

It’s hard to gain insights from numbers alone, but given the number of responses (49 to date) and the scores, I was getting a fair few fives, fours and maybe a few threes in everything except that question. Since the other questions really relate to my delivery of the content, I want to focus on them for my next delivery. The detailed responses might provide some more insight:

WPD203: Eval Comments

Only five comments. I like it when people take the time to provide even a sentence, because it really helps me understand what I can change. Let’s tackle the first one.

Good presentation. Good high level of information with some deep dives into specific tech. The presentation felt a tiny bit unstructured.

Pretty good feedback that summarizes the session well, and I take the point about structure. When I first put my slide deck together I had it broken down into well-defined sections, but what I found as I kept going through the presentation was that they kind of led to repetition. So I’ll need to figure out a way to segue more explicitly so that folks know where we are in the presentation.

Talk didn’t seem rehearsed and relied too much on code demos that had the potential to stall or go wrong.

Fair cop. I probably went through my presentation 20+ times in terms of walking the slides and demos, but in terms of verbal rehearsals I tend to rely on the combination of the slides and code to guide what I say when I present. I’d be a terrible actor because I could never remember lines. On the bright side I’m pretty good at learning APIs, so while I could never recite Hamlet, I’m pretty good at getting my code to compile on the first try.

One challenge that I had with this particular presentation was that much of the time was spent inside a virtual machine running in Parallels. That meant that some of the compilation and deployment steps took longer than I would like. Indeed, one of the challenges with cross-platform development, particularly when you are targeting iOS and Windows Phone, is that virtualization or cross-network calls are going to be involved somewhere. So I got it all running on one machine so that I could flick backwards and forwards between Visual Studio and the iOS simulator. I also got caught out a few times starting to demo something but forgetting to flick back to my demo rig from the slides (you wouldn’t believe how often I do that).

Too much detail on one specific product, misleading title.

Not sure about this one. I think I could improve the title by adding “mobile”; in fact I had considered that. In terms of focusing on one specific product, there were two tools that I spent time looking at: Cordova and Xamarin. They are quite different approaches to cross-platform development, which is why I chose to compare and contrast them. There are other cross-platform tools out there, but it would have been hard to add even one more to what I presented (in fact I pretty much ran out of time). I’ll have to think about this one.

Perhaps I could trim some of the Cordova stuff since I did kind of demo it three ways (native, with Ionic, and via Visual Studio MDA). I could potentially drop Ionic since it has limited Windows Phone support anyway and go Cordova (as a point of reference), Visual Studio MDA, Xamarin and then try to briefly cover something else. I did consider AppBuilder by Telerik. It is an entire stack which leverages Cordova, but adds some UI frameworks like Kendo UI. Another alternative would be adding in the Azure Mobile Services angle. What do you think?

Overall, it was good feedback with some specific pointers on how to improve. Now I just need to figure out what I can do to improve it for the NSW folks.

Check Yammer before your e-mail.

I am a huge fan of Yammer, but this guidance goes for any enterprise collaboration tool that you might be using. I now receive too many e-mails to keep up with. If I spent my whole day processing e-mail I wouldn’t be able to get through it. So I don’t try anymore.

For this reason I’ve actively been trying to shift more and more of my collaboration to Yammer to try and leverage my network as a way of coping with the volume.

Quite a bit of my e-mail volume is notification messages from Yammer which I will soon be turning off (except for direct e-mails or replies to threads I’ve engaged in) since the Yammer app on my phone uses notifications to let me know about things that I’ll be interested in.

What this means is that I am starting to move to a model where e-mail is more for external organisations who don’t use Yammer (or some other tool). Is Yammer a perfect replacement for e-mail? Nope! Both tools have a place and one of the tricks of being an effective communicator is to know when to use e-mail, phone, Yammer or any other mechanism.

Personally I’m looking forward to caring less about my Inbox.


Timing Data Imports into Azure SQL Databases

We often help customers migrate their applications to the cloud. For small applications with small databases this can be pretty straightforward; however, as the complexity and volume of data that the application deals with grows, you need to do a little bit more planning.

With Azure SQL Database the principal way of importing data is via a *.bacpac file, which contains both the database definition and the data. You can upload this file into an Azure Storage account and then trigger the import via the web-based UI when creating the database.

The size and shape of the data you are importing, combined with the service tier that you choose, can have a dramatic effect on import time. If you are planning a time-critical cut-over then you are going to want to test how long this migration will take in advance, and if necessary step up to a higher performance tier during the import and then move down to an appropriate tier for your usual transactional workloads.
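If you do import at a higher tier, remember to scale the database back down once the import has finished. A minimal sketch of that step, using the classic (service management) Azure PowerShell cmdlets that were current at the time, might look like the following; the server, database and service objective names are placeholders, and the exact parameter names are an assumption rather than a tested recipe.

```powershell
# Hedged sketch: scale a freshly imported database down to its day-to-day tier.
$cred = Get-Credential                               # SQL server admin login
$ctx  = New-AzureSqlDatabaseServerContext -ServerName "myserver" -Credential $cred

# Look up the cheaper service objective (e.g. S1) to use for normal workloads
$slo = Get-AzureSqlDatabaseServiceObjective -Context $ctx -ServiceObjectiveName "S1"

# Move the database down from its import-time tier (e.g. Premium) to Standard/S1
Set-AzureSqlDatabase -ConnectionContext $ctx -DatabaseName "MyDatabase" `
    -Edition "Standard" -ServiceObjective $slo -Force
```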

As you can see from the following chart, a given database will import in significantly less time if you do it on a higher performance tier.

ImportTimeVsDTUPerformance

The exact timings that you achieve will of course depend on your specific database. To help establish those timings you can use a script that I created to collect the data points that went into the above graph – I’ve posted it up on GitHub under the Readify account.

AzureSqlPerformance is a PowerShell module that takes a bunch of arguments and uses them to create an Azure SQL server and database at a specified performance tier, then kicks off an import and times how long (to about 10 seconds' accuracy) it takes for the import to complete.
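I won't reproduce the module here, but the core idea is small enough to sketch. The snippet below uses the classic (service management) Azure PowerShell cmdlets of the time to kick off a *.bacpac import and poll it on a 10-second interval; the server, storage account and database names are placeholders, and the exact cmdlet parameters are an assumption, so treat it as an illustration of the approach rather than the actual AzureSqlPerformance code.

```powershell
# Hedged sketch: time a *.bacpac import into Azure SQL Database.
$cred      = Get-Credential
$sqlCtx    = New-AzureSqlDatabaseServerContext -ServerName "myserver" -Credential $cred
$storKey   = (Get-AzureStorageKey -StorageAccountName "mystorage").Primary
$storCtx   = New-AzureStorageContext -StorageAccountName "mystorage" -StorageAccountKey $storKey
$container = Get-AzureStorageContainer -Name "bacpacs" -Context $storCtx

$stopwatch = [System.Diagnostics.Stopwatch]::StartNew()

# Kick off the import of the previously uploaded *.bacpac
$request = Start-AzureSqlDatabaseImport -SqlConnectionContext $sqlCtx `
    -StorageContainer $container -DatabaseName "ImportTest" -BlobName "ImportTest.bacpac"

# Poll every 10 seconds, which is where the ~10 second accuracy comes from
do {
    Start-Sleep -Seconds 10
    $status = Get-AzureSqlDatabaseImportExportStatus -Request $request
} while ($status.Status -notlike "*Completed*" -and $status.Status -notlike "*Failed*")

$stopwatch.Stop()
"Import finished in $($stopwatch.Elapsed) with status: $($status.Status)"
```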

Feel free to check the script out, and if you can make some improvements to the implementation submit a pull request. I kind of stopped working on it as soon as it did what I wanted :)

By the way, you can find some of the other pet projects others at Readify have contributed to over at labs.readify.net where we are trying to collect some of these kinds of resources.

 

Cloud Transition Strategies for ISVs

Whilst the media heaps plenty of hype on the latest developments from major software and platform vendors, it is important to remember that many businesses run on software packages tailor-made to run a particular kind of industry operation. Often this software has been evolving over an extended period of time.

As the industry shifts to an increasingly cloud-enabled and mobile-focused future many of the independent software vendors (ISVs) who create these applications need to figure out how to remain relevant to customers who are now placing very different demands on the way that their software is deployed.

The kinds of challenges that ISVs are facing are (at least) threefold:

  1. The software is designed to run on a desktop and the UI would need to be completely re-written for a web or mobile experience.
  2. The software evolved organically and much of the business logic is scattered throughout the UI code within the previously mentioned desktop application.
  3. The software deployment topology uses traditional client/server database connectivity and may not be suitable for Internet deployments.

Attempts by ISVs to re-work their application architectures, or even entirely re-write their solutions, have met with mixed results. Part of the problem is that these applications are literally the work of team-decades of effort, and even if the code is less than ideal, it is still quite detailed in its implementation. In any code base you will find the tell-tale signs of inexperience as the founder of the business cut their teeth with programming for the first time, and at other times you will find that the code has been through so many hands that the foundational elements of the architecture have been worn away.

So how to move forward? Well, there is no silver bullet for the code itself. At some point that crusty VB6/.NET hybrid monster is going to need to be put out to pasture, but perhaps you can get a little bit more mileage out of it yet with some shrewd use of cloud technology.

Priming for Cloud Enablement

One of the mistakes that many ISVs make when re-platforming is cutting off the upgrade path from their current customers to the new solution. Nothing encourages a customer to look at other offerings more than their vendor telling them there is no upgrade path. The problem with the cloud is that not only do you have to provide an upgrade path, but you also need to consolidate all customer data into one location.

It would be a mistake to leave this step until the end of your cloud enablement project. Instead I recommend that ISVs begin shipping an opt-in “Cloud Disaster Recovery” feature in their own software which replicates their customer databases to the cloud. Many ISV solutions are poorly managed by customers and so an offering like this would not only provide a benefit to the customer but also set things up for future offerings.
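To make the idea a little more concrete, the client-side piece of such a feature can start out as little more than a scheduled export and upload. The sketch below is hypothetical (the SqlPackage.exe path, connection string, storage account and container names are all placeholders, and a real implementation would need scheduling, retries and encryption), but it shows the basic shape:

```powershell
# Hypothetical sketch of an opt-in "Cloud Disaster Recovery" task:
# export the customer's local database and push the *.bacpac to the ISV's storage account.

$bacpac = "C:\Temp\CustomerDb_{0:yyyyMMddHHmm}.bacpac" -f (Get-Date)

# Export the on-premises database using SqlPackage.exe (ships with SQL Server Data Tools)
& "C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe" `
    /Action:Export `
    /SourceConnectionString:"Server=.\SQLEXPRESS;Database=CustomerDb;Integrated Security=true" `
    /TargetFile:$bacpac

# Upload the export into the customer's replica container in the cloud
$storKey = "<per-customer storage key issued by the ISV>"
$ctx     = New-AzureStorageContext -StorageAccountName "isvdrstore" -StorageAccountKey $storKey
Set-AzureStorageBlobContent -File $bacpac -Container "customer-dr" -Context $ctx
```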

Another benefit of this approach is that for minimal investment you’ll clearly identify the customers that “have problems with them there cloud stuff”. If they won’t avail themselves of your application’s cloud DR service then you’ve got some work to do ahead of your cloud cut-over.

Read-Only Mobility

Once the customer is backing up their data to the cloud, it is possible for the ISV to use that admittedly stale data to provide a read-only view accessible on mobile devices. Once again this would be an additional feature that the ISV could charge for. Behind the scenes, the Cloud Disaster Recovery component of the desktop software would become more of a “connector”, allowing for more frequent updates to the cloud replica.

This step requires that you begin exposing your customer data via a RESTful API and appropriately securing it (probably via an OAuth 2.0 mechanism). This API would be deployed exclusively in the cloud, which means your mobile applications would require minimal configuration after being downloaded from the major mobile vendor app marketplaces. You might even enable cloud identity providers such as Azure Active Directory or Google Apps.
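To give a sense of the shape of that API from a client's point of view, a mobile app (or any other consumer) would attach an OAuth 2.0 bearer token to every call. The endpoint and resource below are invented placeholders, and how you acquire the token depends on your identity provider; this is just a sketch of the pattern:

```powershell
# Hypothetical example: calling the ISV's cloud-hosted REST API with an OAuth 2.0 bearer token.
$accessToken = "<token acquired from your identity provider, e.g. Azure Active Directory>"

$orders = Invoke-RestMethod `
    -Uri "https://api.example-isv.com/v1/customers/42/orders" `
    -Headers @{ Authorization = "Bearer $accessToken" } `
    -Method Get

# The read-only mobile experience is then just a presentation layer over responses like this
$orders | Format-Table OrderNumber, Total, Status
```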

Cloud Cutover

Having satisfied some of the customer demands by achieving a read-only mobile solution the next step is to remove the need for the customer to manage infrastructure just for your application by uplifting the application and shifting it to the cloud. At this point the primary read-write usage of the application is still through a desktop application and so you would need to leverage technology such as Azure RemoteApp or Amazon WorkSpaces to provide a remotely hosted version of your application.

The critical thing to remember here is that you can leverage your existing DR/connector feature to make this transition as smooth as possible. In fact you can embed the components necessary in your desktop software to detect that a cloud migration has taken place and automatically cut over to the remoted experience.

This step still has its challenges. Before you can take on this responsibility (or indeed the DR responsibility) you need to move your team beyond just writing software and shipping an installer to a team capable of managing a sophisticated cloud infrastructure, complete with the tooling and processes to manage upgrades on behalf of clients. This is actually the hardest part.

Another challenge at this phase is pricing the service. Most successful ISVs transitioned to a subscription model long ago, but the price of that subscription only covered maintenance on the software, not a full-blown hosting infrastructure. So you’ll probably need to add a hosted subscription option which includes maintenance and hosting fees, and perhaps bundles in the DR fee (which is now redundant anyway) and the mobile access.

Read-Write Mobility

At this point, step back and consider what you have achieved. First, you have all of your customer data securely stored in the cloud (probably with way better redundancy than they could ever afford). Second, the customers are using that data live via a remote desktop experience. You now have everything you need to start enabling specific read-write mobility scenarios against your application.

As part of the read-only mobility implementation you exposed a RESTful API. In this phase that API is extended and more features are built into the mobile applications. As the number of features increases, certain classes of users will cut across to using the mobile applications exclusively, and you can start spinning down maintenance on the desktop application screens.

I would strongly recommend instrumenting your code at this point with something like Application Insights to get a good idea of how your software is being used to help prioritise this work.

Cost Optimisation

Congratulations! You now have a SaaS application, but you aren’t done yet. One of the things that you probably didn’t do when you migrated your customers over was optimise your application to run more efficiently in a centralised computing model. For example, can you rework your schema so that multiple logical tenants share a single database to reduce your database hosting costs? Are there any background processes that the desktop software used to perform that you now need to shift off to dedicated compute instances, and can multiple tenants share that single instance? And finally, when can you shut off the remote desktop access entirely?

Final Thoughts

If you have read this far then it probably means that you have seen first-hand the challenges that ISVs face in responding to customer demands for cloud/mobile capabilities in their software. I hope that the above gives you some creative ideas for taking your first steps or continuing on your journey. If you have any thoughts, feedback or want to share some of your challenges, feel free to share below.

Federated Identity in Visual Studio Online

Earlier this week Sean McBreen from the Visual Studio Online team posted an announcement detailing the work that he and his team have been doing to streamline the process of creating new Visual Studio Online accounts where users can authenticate using their corporate username and password.

This scenario is enabled through Azure Active Directory (AAD) which allows for synchronisation of corporate identity information to the cloud which can then be used by SaaS applications such as Office 365, Dynamics CRM and now Visual Studio Online.

If you want to get up and running with Visual Studio Online quickly, and link it to your corporate directory, I recommend that you read Sean’s post. The purpose of this post is to show how enabling authentication with AAD in Visual Studio Online is not only great for organisations who want to use corporately controlled usernames and passwords, but also for projects that span organisational boundaries. Really good examples are consulting companies and system integrators, and of course their customers.

Before I get into the detail of how to get federated identity working with Visual Studio Online I wanted to provide a little bit of the background on the journey Microsoft has taken so far with AAD support.

Background

The release this week is actually a refinement (from a user’s perspective) of a capability that was released at the BUILD conference in April 2014. In April, Microsoft released a preview of the new Azure portal. In that preview it was possible to create a Visual Studio Online Team Project, and if one didn’t already exist, a Visual Studio Online account. Team Projects created through the preview portal were special, however, because they made use of AAD to authenticate users.

What’s Changed in This Update

Whilst it was great that we could get Visual Studio Online using AAD via the preview Azure portal, it didn’t really give us much control over which directory the Visual Studio Online account was associated with. This latest update allows us to specify which AAD instance to associate the Visual Studio Online account with at the point of creation.

Create VSO Account

One of the quirks that I have noticed is that whilst you can have multiple Visual Studio Online accounts associated with a single Azure subscription, once you create a Visual Studio Online account and associate it with an AAD instance, all Visual Studio Online accounts within that subscription must be associated with that same directory.

The workaround, which would enable you to freely associate a different AAD instance with each Visual Studio Online account, is to create multiple Azure subscriptions. Whilst this might sound complex, creating a new subscription is easy, and it does provide some benefits which we will get to shortly.

So now, with all that background out of the way, we finally get to the whole reason for this post: enabling users from different organisations (different AAD instances) to use the same Visual Studio Online account.

Reasons for Federation

Given that Visual Studio Online now supports AAD and Microsoft Accounts, you might be wondering why federated identity is that important; surely anyone who doesn’t have an Organizational ID can just use an MSA? The following diagram shows how that topology would work.

MSA via AAD

While this is true, it isn’t really optimal from a security perspective. Let’s say that you are the IT manager at your company. You have engaged the services of a system integrator to build a new line-of-business application and you have decided to manage the project using Visual Studio Online. You create the Visual Studio Online account and associate it with your corporate directory. You then grant access to the individual MSA accounts from the system integrator. You’ve now optimised access to Visual Studio Online for your own staff, but in reality the majority of the users of the Visual Studio Online account are going to be users from the external system integrator. You could have instead opted to associate the Visual Studio Online account with the system integrator’s AAD instance, but in the end you are going to want to bring maintenance of the solution in-house.

There is another, potentially more serious security concern. Imagine that one of the staff members of the system integrator is terminated on poor terms. Whilst the system integrator has probably disabled their corporate account, they generally have no control over the MSA that staff member was using to access your systems. If the system integrator fails to advise you to shut down access to the MSA, then the former staff member still has access to your Visual Studio Online account.

Instead, it would be better if, as the IT manager, you could link your AAD instance to the system integrator’s so that whenever one of their staff logs into your Visual Studio Online account they use their own corporate credentials. If that staff member leaves the organisation (for whatever reason) then their account would be disabled, and access to your Visual Studio Online account with it.

This doesn’t negate your obligation to regularly check who has access to your Visual Studio Online account, but it helps leverage the user management processes and workflows at your partner system integrator. It is also one less password for individuals to have to manage.

Getting Federation Working

Fortunately AAD supports the ability to add users from other AAD instances. When adding a user you have the option of creating a new user in the directory, adding a Microsoft Account, or adding a user from an external directory.

In order to add a user from an external directory, the user performing the administrative task must have read access to that same external directory; otherwise it won’t be possible to validate the user that is being added. In practice, what this means is that you can generally add users from other AAD instances within the same subscription, but not from an instance in another subscription controlled by a different organisation.

Here is the federation chicken and egg problem. You can’t add a user from a foreign directory because you can’t get read access to it, and the administrator of the foreign directory can’t grant access to you to read their directory for exactly the same reason.

Federation Problem

So how do we move forward? Well, the trick that was explained to me was to use an MSA and add it to both directories. You would then log into the user management portal using that MSA and add users from the foreign directory, taking advantage of your user administrator status in the local directory and your read access to the foreign directory.

FedHelper

When adding users you will now get a green tick next to the username, indicating that AAD was able to successfully validate that the user exists. Note that when you add a user from a foreign directory (or an MSA) you still need to provide some user details. This information is not kept in sync between the directories. Whilst this might be considered a problem, it does allow you to suffix user descriptions with their organisation name, which makes it easier to identify them in your systems as a foreign user (other than by the username of course).

Security Concerns

If the process above left you feeling slightly uncomfortable, you are not alone. Let me explain why you might like to take a slightly different (and more complex) approach in the interests of maintaining the integrity of each organisation’s AAD instance.

The problem with the scenario above is that someone is simultaneously given read access to the system integrator directory and user administrative rights on the customer directory. I don’t know of many IT managers in either organisation who would be too happy with that approach. In most circumstances it would be unacceptable to allow an external organisation to have control over your user directory. On the flip side, system integrators work with a variety of customers and often have internal distribution lists which detail client names and projects – also information that you wouldn’t want to share. So how do we enable federation without raising security concerns?

Sandboxing Projects and Relationships

The approach that I recommend is to create a separate Azure subscription. This subscription will contain a separate AAD instance which is associated with a Visual Studio Online account (also in the same subscription). Both organisations would then create their own MSAs with user-level access to their own AAD instances. Both MSAs would be granted user admin rights to the new AAD instance and could log in and grant rights to other users from within their organisation. As an added security measure you might initially add project leaders from both organisations, grant them user admin rights, and then disable the MSAs, ensuring that all users from that point forward make use of their Organizational IDs and the benefits associated with them.

AAD Fed Solution
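If you would rather script that setup than click through the portal, something along these lines should work with the Azure AD (MSOnline) PowerShell module. The account names are placeholders, it assumes the two MSAs have already been added as users of the sandboxed directory, and you would run it signed in as a global administrator of that directory; treat it as a sketch rather than a verified runbook.

```powershell
# Hedged sketch: grant the two bridging MSAs user administration rights
# in the new, sandboxed AAD instance using the MSOnline module.
Import-Module MSOnline
Connect-MsolService    # sign in as a global administrator of the sandbox directory

# Placeholder MSAs created by the customer and the system integrator
$bridgeAccounts = @("customer.bridge@outlook.com", "si.bridge@outlook.com")

foreach ($msa in $bridgeAccounts) {
    Add-MsolRoleMember -RoleName "User Account Administrator" -RoleMemberEmailAddress $msa
}

# Once project leaders have been added with their Organizational IDs, the MSAs can be
# stripped of the role again with Remove-MsolRoleMember and then disabled, as described above.
```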

Another nice side effect of this approach is that casual users who may not have an Azure Active Directory instance can be added to this new AAD instance via an MSA. Having a separate Azure subscription for the project or vendor relationship is also handy from a development infrastructure point of view. The development team can use the Azure subscription to deploy their solution to the cloud and have all of the costs associated with the project contained within one sandbox.

Summary

Support for Azure Active Directory was right at the top of my wish list for Visual Studio Online, and I’m excited to now have this capability. The ability to enable federated identity scenarios is also going to be very useful for consulting organizations and system integrators.

I do wish that establishing relationships between AAD instances was easier. Perhaps Azure could be extended to support sending “federation invitations”, where a user with sufficient rights in a remote directory could grant an AAD instance the ability to check username validity without having to go through the whole MSA workaround.

The content in this post was heavily influenced by some work I did with the Visual Studio Online team around guidance for consulting organisations and system integrators. Now that AAD support has shipped for Visual Studio Online, some of that guidance might need to be updated. Specifically, I think that the more favourable pattern for organisations wishing to support federated identity is the Account per Customer / Account per Vendor pattern. Like everything, it’s a matter of trade-offs though.

If you take this post, and the patterns outlined in that document you should have a good foundation for getting Visual Studio Online up and running for you and your organisation regardless of whether you are a system integrator needing to service multiple customers, or a company that engages multiple external vendors.

Finally, if you have any questions about getting federated identity working for AAD and Visual Studio Online specifically feel free to leave a comment and I’ll help you out if I can.

 

Government Identity on the Web

How should governments respond to identity on the web?

Today Troy Hunt (a fellow MVP) was quoted in the Sydney Morning Herald in relation to an article about myGov (http://my.gov.au) which is a portal for Australian citizens to access Medicare, eHealth, Centrelink, Child Support and NDIS records. The basic premise of the criticism is that myGov doesn’t support two-factor authentication (2FA) and that this represents a security concern.

Later, a conversation between technology professionals on Twitter speculated about how passwords are stored within the myGov database. Personally I would be extremely surprised if the cornerstone of the Australian Government’s online strategy stored user passwords in plain text instead of a hash and salt combination, but this might be foolish optimism on my behalf. It would be great if someone in the know could actually confirm this, and then perhaps explore the 2FA topic that Troy and others have raised.
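For readers who haven’t come across the term, “hash and salt” means never storing the password itself: you store a random salt alongside the output of a deliberately slow key-derivation function. A minimal illustration of the idea (using the .NET PBKDF2 implementation; the iteration count and sizes are just example values, not a recommendation):

```powershell
# Illustrative only: derive a salted PBKDF2 hash to store instead of the plain-text password.
$password   = "correct horse battery staple"
$iterations = 10000

# Generate a random 16-byte salt
$salt = New-Object byte[] 16
$rng  = New-Object System.Security.Cryptography.RNGCryptoServiceProvider
$rng.GetBytes($salt)

# Derive a 32-byte hash using PBKDF2 (Rfc2898DeriveBytes)
$pbkdf2 = New-Object System.Security.Cryptography.Rfc2898DeriveBytes($password, $salt, $iterations)
$hash   = $pbkdf2.GetBytes(32)

# Store the iteration count, salt and hash; never the password itself
"{0}:{1}:{2}" -f $iterations, [Convert]::ToBase64String($salt), [Convert]::ToBase64String($hash)
```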

I believe that any 2FA implementation would need to be “recommended but optional”. 2FA requires a device capable of generating a token which is supplied along with your username and password. For some users having this additional device might pose a challenge; there are still people in Australian society who do not have access to mobile phones, let alone smartphones. Making 2FA optional allows those citizens to scale their security to something within their means. Alternatively, the government could provide special token generators upon request if they wanted to make 2FA mandatory (I’d personally still want to use my phone).

Stepping back a bit, I think there is a much more interesting question about identity on the web and the government’s response to it. On one hand I really want myGov to be secure; on the other I would like it to be somewhat open, so that as a developer I can create an application that can acquire a verified identity from its users. Imagine a local council being able to significantly automate its processes by allowing local residents to file paperwork, signing forms with an OAuth-based flow that jumps out to myGov to gather scoped personal information and perform non-repudiation tasks.
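Purely as a thought experiment (myGov exposes no such API today, and every endpoint, parameter and scope below is invented for illustration), the council application’s side of that flow might look like a standard OAuth 2.0 authorisation code exchange: the resident consents at myGov, and the application swaps the returned code for a narrowly scoped token.

```powershell
# Entirely hypothetical: exchanging an authorisation code for a scoped token
# from an imagined myGov OAuth 2.0 endpoint.
$tokenResponse = Invoke-RestMethod -Method Post `
    -Uri "https://auth.my.gov.au/oauth2/token" `
    -Body @{
        grant_type    = "authorization_code"
        code          = $authorisationCode       # returned after the resident consents
        client_id     = "local-council-planning-portal"
        client_secret = $clientSecret
        redirect_uri  = "https://planning.examplecouncil.vic.gov.au/oauth/callback"
    }

# Use the scoped access token to fetch only the verified details the resident agreed to share
$resident = Invoke-RestMethod `
    -Uri "https://api.my.gov.au/identity/v1/me" `
    -Headers @{ Authorization = "Bearer $($tokenResponse.access_token)" }
```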

Such a capability would necessarily require a very stringent security audit of the myGov platform prior to being opened up along with the creation of a community of developers who know how to work with the various APIs provided by the myGov platform.

Ubiquitous Connectivity & Time-based Internet Billing

I am currently sitting on board the Dawn Princess with anchors down in Akaroa. I am half way through a two week cruise around New Zealand with my family. It’s a good break from our busy lifestyle back in Melbourne.

The timing of our family holiday is a little awkward because I am missing the excitement around the announcements made at BUILD 2014 last week. Or I would be, if I wasn’t able to connect to the Internet from the ship as the details unfold.

Aboard the Dawn Princess is a ubiquitous Wi-Fi network which is available in all the staterooms and most of the common areas. The Wi-Fi network routes through a satellite connection backed by MTN Satellite Communications. When using the Internet I pay at a rate of approximately $0.30 to $0.80 per minute depending on the package.

Having cruised before, I was aware of these pretty steep charges, so we organised a local SIM card for mobile data access when ashore. The mobile data plan is of course tied to data volume rather than time. Time-based billing for Internet access seems strange in this era of mobile computing. The thing that I miss most about not being connected all the time is the absence of casual interaction with my information sources (e-mail, social media, news headlines, browsing etc).

I certainly can’t afford to be online all the time to receive these notifications, so for BUILD 2014 content I logged in to get a quick synopsis of what was being discussed and then disconnected. When I was ashore I set my various mobile devices to download content (such as the keynote videos).

The whole experience has left me pondering why satellite access (at least in this instance) is charged on a time basis. Surely the ship maintains a constant satellite connection, so why not allow passengers to be constantly connected and then bill at a lower rate for actual usage? The administrator of the ship-based network could apply QoS to the traffic to ensure essential traffic is given preference.

Another curious aspect of the setup here is that I can’t access OneDrive or OneNote (which is backed by OneDrive). I can understand that the ship might not want passengers syncing their local content across the link (especially with all the high resolution photos captured) but it sure is a pain when I want to access one of my Notebooks from OneNote. This makes me think that the critical resource is indeed bandwidth, but by charging for time the ship ensures that passengers don’t accidentally set their devices syncing constantly.

Overall I have been pretty impressed with the strength of Wi-Fi connectivity on the ship (kudos to Princess Cruises for that). I just wish that the route to the Internet didn’t have such a big toll gate in front of it. Is there a cruise ship out there that caters to geeks wanting to be constantly connected?

Maybe someone from MTN Satellite Communications could explain why satellite communications might be billed on a time basis.