Cloud Transition Strategies for ISVs

Whilst the media heaps plenty of hype on the latest developments from major software and platform vendors, it is important to remember that many businesses run on software packages tailor-made for a particular kind of industry operation. Often this software has been evolving over an extended period of time.

As the industry shifts to an increasingly cloud-enabled and mobile-focused future, many of the independent software vendors (ISVs) who create these applications need to figure out how to remain relevant to customers who are now placing very different demands on the way their software is deployed.

The kinds of challenges that ISVs are facing are (at least) threefold:

  1. The software is designed to run on a desktop and the UI would need to be completely re-written for a web or mobile experience.
  2. The software evolved organically and much of the business logic is scattered throughout the UI code within the previously mentioned desktop application.
  3. The software deployment topology uses traditional client/server database connectivity and may not be suitable for Internet deployments.

Attempts by ISVs to re-work their application architectures, or even entirely re-write their solutions, have met with mixed success. Part of the problem is that these applications are literally the work of decades of team effort, and even if the code is less than ideal, it is still quite detailed in its implementation. In any code base you will find the tell-tale signs of inexperience as the founder of the business cut their teeth with programming for the first time, and at other times you will find that the code has been through so many hands that the foundational elements of the architecture have been worn away.

So how to move forward? Well, there is no silver bullet for the code itself. At some point that crusty VB6/.NET hybrid monster is going to need to be put out to pasture, but perhaps you can get a little more mileage out of it yet with some shrewd use of cloud technology.

Priming for Cloud Enablement

One of the mistakes that many ISVs make when re-platforming is cutting off an upgrade path from their current customers to the new solution. Nothing encourages a customer to look at other offerings more than their vendor telling them there is no upgrade path. The problem with the cloud is that not only do you have to provide an upgrade path, but you also need to consolidate all customer data into one location.

It would be a mistake to leave this step until the end of your cloud enablement project. Instead I recommend that ISVs begin shipping an opt-in “Cloud Disaster Recovery” feature in their own software which replicates their customer databases to the cloud. Many ISV solutions are poorly managed by customers and so an offering like this would not only provide a benefit to the customer but also set things up for future offerings.
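To make the idea concrete, the heart of such a Cloud Disaster Recovery feature is incremental replication: hash the database file in chunks and only upload the chunks that changed since the last backup. Here is a minimal sketch of the change-detection side; the function names, chunk size, and the assumption of an object store that keeps per-chunk digests are illustrative, not any particular product's API:

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # replicate in 4 MB chunks

def chunk_digests(db_path):
    """Return an ordered list of SHA-256 digests, one per chunk of the file."""
    digests = []
    with open(db_path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            digests.append(hashlib.sha256(chunk).hexdigest())
    return digests

def changed_chunks(local_digests, remote_digests):
    """Indices of chunks that differ from (or don't yet exist in) the cloud copy."""
    return [i for i, d in enumerate(local_digests)
            if i >= len(remote_digests) or remote_digests[i] != d]
```

The upload step would then push only the returned chunk indices to blob storage, keeping the bandwidth cost of frequent replication low enough for customers on modest connections.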

Another benefit of this approach is that for minimal investment you’ll clearly identify the customers that “have problems with thems there cloud stuff”. If they won’t avail themselves of your application’s cloud DR service then you’ve got some work to do ahead of your cloud cut-over.

Read-Only Mobility

Once the customer is backing up their data to the cloud it is possible for the ISV to use that admittedly stale data to provide a read-only view accessible on mobile devices. Once again this would be an additional feature that the ISV could charge for. Behind the scenes the Cloud Disaster Recovery component of the desktop software would become more of a “connector”, allowing for more frequent updates to the cloud replica.

This step requires that you begin exposing your customer data via a RESTful API and appropriately securing it (probably via an OAuth 2.0 mechanism). This API would be deployed exclusively in the cloud, which means your mobile applications would require minimal configuration after being downloaded from the major mobile vendor app marketplaces. You might even enable cloud identity providers such as Azure Active Directory or Google Apps.
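As a rough illustration of what “appropriately securing it” involves, here is a toy signed, expiring bearer token. This is deliberately simplified: a real deployment would delegate all of this to an OAuth 2.0 library and an identity provider such as Azure Active Directory, and the `SECRET` and token format below are illustrative assumptions only:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"shared-signing-key"  # illustrative; in practice, keys come from your IdP

def issue_token(user_id, ttl_seconds=3600):
    """Create a signed, expiring bearer token (a simplified stand-in for OAuth 2.0)."""
    payload = base64.urlsafe_b64encode(
        json.dumps({"sub": user_id, "exp": int(time.time()) + ttl_seconds}).encode())
    signature = base64.urlsafe_b64encode(
        hmac.new(SECRET, payload, hashlib.sha256).digest())
    return (payload + b"." + signature).decode()

def validate_token(token):
    """Return the user id if the token is authentic and unexpired, else None."""
    try:
        payload, signature = token.encode().rsplit(b".", 1)
    except ValueError:
        return None
    expected = base64.urlsafe_b64encode(
        hmac.new(SECRET, payload, hashlib.sha256).digest())
    if not hmac.compare_digest(signature, expected):
        return None  # tampered or forged token
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if claims["exp"] < time.time():
        return None  # expired
    return claims["sub"]
```

Each read-only API request would carry such a token in its Authorization header, and the cloud-hosted API would reject anything that fails validation.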

Cloud Cutover

Having satisfied some of the customer demands by achieving a read-only mobile solution, the next step is to remove the need for the customer to manage infrastructure just for your application by uplifting the application and shifting it to the cloud. At this point the primary read-write usage of the application is still through a desktop application, so you would need to leverage technology such as Azure RemoteApp or Amazon WorkSpaces to provide a remotely hosted version of your application.

The critical thing to remember here is that you can leverage your existing DR/connector feature to make this transition as smooth as possible. In fact you can embed the components necessary in your desktop software to detect that a cloud migration has taken place and automatically cut over to the remoted experience.
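A sketch of what that embedded cut-over check might look like: on start-up the desktop client asks a cloud status endpoint whether its tenant has been migrated and picks its connection accordingly. The endpoint URL and response shape here are hypothetical, invented for illustration:

```python
import json
import urllib.request

# Hypothetical status endpoint the ISV publishes once a tenant is migrated.
STATUS_URL = "https://isv-cloud.example.com/api/v1/tenants/{tenant_id}/status"

def choose_connection(status, local_conn):
    """Pure decision: prefer the cloud connection once migration is flagged."""
    if status and status.get("migrated"):
        return status.get("cloud_connection_string", local_conn)
    return local_conn

def resolve_connection(tenant_id, local_conn):
    """Ask the cloud whether this tenant has cut over; fall back to local on any failure."""
    try:
        with urllib.request.urlopen(
                STATUS_URL.format(tenant_id=tenant_id), timeout=5) as resp:
            status = json.load(resp)
    except OSError:
        status = None  # no connectivity: keep using the on-premises database
    return choose_connection(status, local_conn)
```

Keeping the decision logic pure (as in `choose_connection`) makes the cut-over behaviour easy to test without a network.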

This step still has its challenges. Before you can take on this responsibility (or indeed the DR responsibility) you need to move your team beyond just writing software and shipping an installer to a team capable of managing a sophisticated cloud infrastructure, complete with the tooling and processes to manage upgrades on behalf of clients. This is actually the hardest part.

Another challenge at this phase is pricing the service. Most successful ISVs transitioned to a subscription model long ago, but the price of that subscription only covered the maintenance on the software, not a full-blown hosting infrastructure. So you’ll probably need to add a hosted subscription option which includes the maintenance and hosting fees, and perhaps bundles the DR fee (which is now redundant anyway) and the mobile access.

Read-Write Mobility

At this point, step back and consider what you will have achieved. First, you have all of your customer data securely stored in the cloud (probably with far better redundancy than your customers could ever afford). Secondly, the customers are using that data live via a remote desktop experience. You now have everything you need to start enabling specific read-write mobility scenarios against your application.

As part of the read-only mobility implementation you exposed a RESTful API. In this phase that API is extended and more features are built into the mobile applications. As the number of features increases, certain classes of users cut across to using the mobile applications exclusively and you can start spinning down maintenance on the desktop application screens.

I would strongly recommend instrumenting your code at this point with something like Application Insights to get a good idea of how your software is being used to help prioritise this work.
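To show the shape of what such instrumentation gives you, here is a tiny in-process stand-in (not the Application Insights SDK, whose client you would use in practice) that counts feature usage so you can rank screens by real-world use:

```python
from collections import Counter
from datetime import datetime, timezone

class UsageTracker:
    """In-process stand-in for a telemetry client such as Application Insights."""

    def __init__(self):
        self.events = []

    def track_event(self, name, properties=None):
        """Record one usage event, e.g. a screen being opened."""
        self.events.append({
            "name": name,
            "time": datetime.now(timezone.utc).isoformat(),
            "properties": properties or {},
        })

    def feature_counts(self):
        """Which screens/features are actually used, to guide what to port first."""
        return Counter(e["name"] for e in self.events)
```

With even this much data you can answer the key prioritisation question: which desktop screens are used often enough to deserve a mobile equivalent, and which can quietly retire.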

Cost Optimisation

Congratulations! You now have a SaaS application, but you aren’t done yet. One of the things that you probably didn’t do when you migrated your customers over was optimise your application to run more efficiently in a centralised computing model. For example can you work your schema so that multiple logical tenants can share a single database to reduce your database hosting costs? Are there any background processes that the desktop software used to perform that you now need to shift off to dedicated compute instances, and can multiple tenants share that single instance – and finally, when can you shut off the remote desktop access entirely?
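The shared-database approach for multiple logical tenants can be sketched in a few lines: every table carries a tenant_id, and every query is scoped by it. The table and tenant names below are invented for illustration:

```python
import sqlite3

# One physical database shared by many logical tenants: every row carries a
# tenant_id and every query is scoped by it.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE invoices (
    tenant_id  TEXT    NOT NULL,
    invoice_no INTEGER NOT NULL,
    amount     REAL    NOT NULL,
    PRIMARY KEY (tenant_id, invoice_no)
)""")
conn.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?)",
    [("acme", 1, 120.0), ("acme", 2, 80.0), ("globex", 1, 300.0)],
)

def invoices_for(tenant_id):
    """All data access goes through tenant-scoped queries, never raw tables."""
    return conn.execute(
        "SELECT invoice_no, amount FROM invoices "
        "WHERE tenant_id = ? ORDER BY invoice_no",
        (tenant_id,),
    ).fetchall()
```

The trade-off is isolation: a missing tenant_id filter leaks data across customers, so in practice you would enforce the scoping in one data-access layer rather than trusting every query.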

Final Thoughts

If you have read this far then it probably means that you have seen first hand the challenges that ISVs face responding to customer demands for cloud/mobile capabilities in your software. I hope that the above gives you some creative ideas for taking your first steps or continuing on your journey. If you have any thoughts, feedback or want to share some of your challenges feel free to share below.

Federated Identity in Visual Studio Online

Earlier this week Sean McBreen from the Visual Studio Online team posted an announcement detailing the work that he and his team have been doing to streamline the process of creating a new Visual Studio Online account where users can authenticate using their corporate username and password.

This scenario is enabled through Azure Active Directory (AAD) which allows for synchronisation of corporate identity information to the cloud which can then be used by SaaS applications such as Office 365, Dynamics CRM and now Visual Studio Online.

If you want to get up and running with Visual Studio Online quickly and link it to your corporate directory, I recommend that you read Sean’s post. The purpose of this post is to show how enabling authentication with AAD in Visual Studio Online is not only great for organisations who want to use corporately controlled usernames and passwords, but also for projects that span organisational boundaries. Really good examples are consulting companies and system integrators, and of course their customers.

Before I get into the detail of how to get federated identity working with Visual Studio Online I wanted to provide a little bit of the background on the journey Microsoft has taken so far with AAD support.


The release this week is actually a refinement (from a user’s perspective) on a capability that was released at the BUILD conference in April 2014. In April, Microsoft released a preview of the new Azure portal. In that preview it was possible to create a Visual Studio Online Team Project and, if one didn’t already exist, a Visual Studio Online account. Team Projects created through the preview portal were special, however, because they made use of AAD to authenticate users.

What’s Changed in This Update

Whilst it was great that we could get Visual Studio Online using AAD via the preview Azure portal, it didn’t really give us much control over which directory the Visual Studio Online account was associated with. This latest update allows us to specify which AAD instance to associate the Visual Studio Online account with at the point of creation.

Create VSO Account

One of the quirks that I have noticed is that whilst you can have multiple Visual Studio Online accounts associated with a single Azure subscription, once you create your Visual Studio Online account and associate it with an AAD instance, all Visual Studio Online accounts within that subscription must be associated with that same directory.

The workaround, which would enable you to freely associate a different AAD instance with each Visual Studio Online account, is to create multiple Azure subscriptions. Whilst this might sound complex, creating a new subscription is easy and it does provide some benefits which we will get to shortly.

So now, with all that background out of the way, we finally get to the whole reason for this post: enabling users from different organisations (different AAD instances) to use the same Visual Studio Online account.

Reasons for Federation

Given that Visual Studio Online now supports AAD and Microsoft Accounts (MSAs), you might be wondering why federated identity is that important; surely anyone who doesn’t have an Organizational ID can just use an MSA? The following diagram shows how that topology would work.


While this is true, it isn’t really optimal from a security perspective. Let’s say that you are the IT manager at your company. You have engaged the services of a system integrator to build a new line-of-business application and you decided to manage the project using Visual Studio Online. You create the Visual Studio Online account and associate it with your corporate directory. You then grant access to the individual MSA accounts from the system integrator. You’ve now optimised access to Visual Studio Online for your own staff, but in reality the majority of the users of the Visual Studio Online account are going to be users from the external system integrator. You could have instead opted to associate the Visual Studio Online subscription with the system integrator’s AAD instance, but in the end you are going to want to bring maintenance of the solution in house.

There is another potentially more serious security concern. Imagine that one of the staff members of the system integrator is terminated on poor terms. Whilst the system integrator has probably disabled their corporate account, they generally have no control over the MSA that staff member was using to access your systems. If the system integrator fails to advise you to shut down access to the MSA, then the former staff member still has access to your Visual Studio Online account.

Instead it would be better if, as IT manager, you could link your AAD instance to the system integrator’s so that whenever one of their staff logs into your Visual Studio Online account they use their own corporate credentials. If that staff member leaves the organisation (for whatever reason) then their account would be disabled, and access to your Visual Studio Online account with it.

This doesn’t negate your obligation to regularly check who has access to your Visual Studio Online account, but it helps leverage the user management processes and workflows at your partner system integrator. It is also one less password for individuals to have to manage.

Getting Federation Working

Fortunately AAD supports the ability to add users from other AAD instances. When adding a user you have the option of creating a new user in the directory, adding a Microsoft Account, or adding a user from an external directory.

In order to add a user from an external directory, the logged-in user performing the administrative task must have read access to that same external directory; otherwise it won’t be possible to validate the user being added. In practice what this means is that you can generally add users from other AAD instances within the same subscription, but not from another instance in another subscription controlled by a different organisation.

Here is the federation chicken-and-egg problem: you can’t add a user from a foreign directory because you can’t get read access to it, and the administrator of the foreign directory can’t grant you access to read their directory for exactly the same reason.

Federation Problem

So how do we move forward? Well, the trick that was explained to me was to use an MSA and add it to both directories. You would then log into the user management portal using that MSA and add users from the foreign directory, taking advantage of your user administrator status in the local directory and read access to the foreign directory.


When adding users you will now get a green tick next to the username, indicating that AAD was able to successfully validate that the user exists. Note that when you add a user from a foreign directory (or an MSA) you still need to provide some user details. This information is not kept in sync between the directories. Whilst this might be considered a problem, it does allow you to suffix user descriptions with their organisation name, which makes it easier to identify them in your systems as a foreign user (other than by the username, of course).

Security Concerns

If the process above left you feeling slightly uncomfortable, you are not alone. Let me explain why you might like to take a slightly different (and more complex) approach in the interests of maintaining the integrity of each organisation’s AAD instance.

The problem with the scenario above is that someone is simultaneously given read access to the system integrator directory and user administrative rights on the customer directory. I don’t know of many IT managers in either organisation who would be too happy with that approach. In most circumstances it would be unacceptable to allow an external organisation to have control over your user directory. On the flip side, system integrators work with a variety of customers and often have internal distribution lists which detail client names and projects, also information that you wouldn’t want to share. So how do we enable federation without raising security concerns?

Sandboxing Projects and Relationships

The approach that I recommend is to create a separate Azure subscription. This subscription will contain a separate AAD instance which is associated with a Visual Studio Online account (also in the same subscription). Both organisations would then create their own MSAs with user-level access to their own AAD instances. Both MSAs would be granted user admin rights to the new AAD instance and could log in and grant rights to other users from within their organisation. As an added security measure you might initially add project leaders from both organisations, grant them user admin rights, and then disable the MSAs, ensuring that all users from that point forward make use of their Organizational IDs and the benefits associated with them.

AAD Fed Solution

Another nice side effect of this approach is that casual users who may not have an Azure Active Directory instance can be added to this new AAD instance via an MSA. Having a separate Azure subscription for the project or vendor relationship is also handy from a development infrastructure point of view. The development team can use the Azure subscription to deploy their solution to the cloud and have all of the costs associated with the project contained within one sandbox.


Support for Azure Active Directory was right at the top of my wish list for Visual Studio Online and I’m excited to now have this capability. The ability to enable federated identity scenarios is also going to be very useful for consulting organisations and system integrators.

I do wish that establishing relationships between AAD instances was easier. Perhaps Azure could be extended to support sending “federation invitations”, where a user with sufficient rights in a remote directory could grant an AAD instance the ability to check username validity without having to go through the whole MSA workaround.

The content in this post was heavily influenced by some work I did with the Visual Studio Online team around guidance for consulting organisations and system integrators. Now that AAD support has shipped for Visual Studio Online, some of that guidance might need to be updated. Specifically, I think the more favourable pattern for organisations wishing to support federated identity is the Account per Customer / Account per Vendor pattern. Like everything, it’s a matter of trade-offs though.

If you take this post and the patterns outlined in that document, you should have a good foundation for getting Visual Studio Online up and running for you and your organisation, regardless of whether you are a system integrator needing to service multiple customers or a company that engages multiple external vendors.

Finally, if you have any questions about getting federated identity working for AAD and Visual Studio Online specifically feel free to leave a comment and I’ll help you out if I can.


Government Identity on the Web

How should governments respond to identity on the web?

Today Troy Hunt (a fellow MVP) was quoted in the Sydney Morning Herald in an article about myGov, which is a portal for Australian citizens to access Medicare, eHealth, Centrelink, Child Support and NDIS records. The basic premise of the criticism is that myGov doesn’t support two-factor authentication (2FA) and that this represents a security concern.

Later, a conversation between technology professionals on Twitter speculated about how passwords are stored within the database. Personally I would be extremely surprised if the cornerstone of the Australian Government’s online strategy stored user passwords in plain text instead of a hash and salt combination, but this might be foolish optimism on my behalf. It would be great if someone in the know could actually confirm this, and then perhaps explore the 2FA topic that Troy and others have raised.
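For readers unfamiliar with the technique being speculated about, a hash and salt combination can be sketched with Python's standard `hashlib.pbkdf2_hmac`. The iteration count below is illustrative only; a production system should follow current guidance (bcrypt, scrypt or Argon2 are common choices):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative; tune to current hardware guidance

def hash_password(password, salt=None):
    """Derive a slow, salted hash; store (salt, digest), never the password."""
    salt = salt if salt is not None else os.urandom(16)  # unique per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

The per-user salt means identical passwords produce different stored digests, which is exactly what defeats precomputed rainbow-table attacks if the database ever leaks.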

I believe that any 2FA implementation would need to be “recommended but optional”. 2FA requires a device capable of generating a token which is supplied along with your username and password. For some users having this additional device might pose a challenge; there are still people in Australian society who do not have access to mobile phones, let alone smartphones. Making 2FA optional allows those citizens to scale their security to something within their means; alternatively the government could provide special token generators upon request if it wanted to make 2FA mandatory (I’d personally still want to use my phone).
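For the curious, the token-generation side of most authenticator apps is the TOTP scheme from RFC 6238 (built on HOTP from RFC 4226), and it is small enough to sketch in full:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226 HMAC-based one-time password for a given counter value."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238 time-based variant: the counter is the current 30-second window."""
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t // step), digits)
```

The server holds the same shared secret and accepts a code if it matches the current (or an adjacent) time window, so a stolen code is useless within a minute or so.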

Stepping back a bit, I think there is a much more interesting question about identity on the web and the government’s response to it. On one hand I really want myGov to be secure; on the other I would like it somewhat open, so that as a developer I can create an application that can acquire a verified identity from users. Imagine a local council being able to significantly automate its processes by allowing local residents to file paperwork, signing forms with an OAuth-based flow that jumps out to myGov to gather scoped personal information and perform non-repudiation tasks.

Such a capability would necessarily require a very stringent security audit of the myGov platform prior to being opened up along with the creation of a community of developers who know how to work with the various APIs provided by the myGov platform.

Ubiquitous Connectivity & Time-based Internet Billing

I am currently sitting on board the Dawn Princess with anchors down in Akaroa. I am half way through a two week cruise around New Zealand with my family. It’s a good break from our busy lifestyle back in Melbourne.

The timing of our family holiday is a little awkward because I am missing the excitement around the announcements made at BUILD 2014 last week. Or I would be, if I wasn’t able to connect to the Internet from the ship as the details unfold.

Aboard the Dawn Princess is a ubiquitous Wi-Fi network which is available in all the staterooms and most of the common areas. The Wi-Fi network routes through a satellite connection backed by MTN Satellite Communications. When using the Internet I pay at a rate of approximately $0.30 to $0.80 per minute depending on the package.

Having cruised before, I was aware of these pretty steep charges, so we organised a local SIM card for mobile data access when ashore. The mobile data plan is of course tied to data volume rather than time. Time-based billing for Internet access seems strange in this era of mobile computing. The thing that I miss most about not being connected all the time is the absence of casual interaction with my information sources (e-mail, social media, news headlines, browsing etc.).

I certainly can’t afford to be online all the time to receive these notifications, so for BUILD 2014 content I log in, get a quick synopsis of what is being discussed, and disconnect. When ashore I set my various mobile devices to download content (such as the keynote videos).

The whole experience has left me pondering why satellite access (at least in this instance) is charged on a time basis. Surely the ship maintains a constant satellite connection, so why not allow passengers to be constantly connected and then bill a lower rate for actual usage? The administrator of the ship-based network could apply QoS to the traffic to ensure essential traffic is given preference.

Another curious aspect of the setup here is that I can’t access OneDrive or OneNote (which is backed by OneDrive). I can understand that the ship might not want passengers syncing their local content across the link (especially with all the high resolution photos captured) but it sure is a pain when I want to access one of my Notebooks from OneNote. This makes me think that the critical resource is indeed bandwidth, but by charging for time the ship ensures that passengers don’t accidentally set their devices syncing constantly.

Overall I have been pretty impressed with the strength of Wi-Fi connectivity on the ship (kudos to Princess Cruises for that). I just wish that the route to the Internet didn’t have such a big toll gate in front of it. Is there a cruise ship out there that caters to geeks wanting to be constantly connected?

Maybe someone from MTN Satellite Communications could explain why satellite communications might be billed on a time basis.

Partnership Vacuum

My role at Readify is pretty diverse. One day I can be up to my elbows in technical detail and other days I can be focusing on business relationships. I’m pretty sure that my technical background influences the way I look at business relationships.

I tend to look at relationships in terms of mutual benefits. The best partnerships are formed where both parties can put something into the relationship that the other really appreciates and can benefit from. Every so often I talk to organisations where the benefits don’t align with a great need on one or both sides of the relationship. In this situation the potential partners should agree to part ways, invest no further, and keep the door open to future collaboration. What can happen instead is a kind of “over-politeness” where neither side is willing to admit that they see no benefit in a formal partnership. The result is the creation of a partnership with no activity at its centre: a kind of partnership vacuum.

Moving forward I’m going to try to be better at calling this out when I see it. The cost of a partnership vacuum is effort spent trying to create artificial value where no natural genesis exists.



OneNote vs. Evernote vs. Google Keep

Comparing OneNote and Evernote might be misreading Microsoft’s intent behind recent announcements.

OneNote got lots of attention this week when Microsoft released OneNote for Mac. Microsoft also made OneNote free for Windows users and is opening up a cloud API for integration.

This recent activity has prompted many to compare OneNote to Evernote. Evernote is a popular third-party note taking tool with great support on most platforms.

While the effects of the announcement may impact Evernote, I suspect the real target is Google Keep. Google Keep is part of a much larger ecosystem of productivity tools by Google, including Google Drive.

All three products exist within an important category that I call “memory augmentation”. They are distinct from traditional productivity tools such as word processors and spreadsheets. They allow for the capture of unstructured (or lightly structured) detail.

I have been a OneNote user for years, and enjoy the pen input capabilities on Windows. But that power is useless without ubiquity of access, and that is what the OneNote announcement provides. Google needs to step up here and start providing access to their services on Windows Phone and Windows if they want to compete.

Writing Engaging Yammer Posts

The first sentence of Yammer posts should capture the essence of your topic to better engage your audience. If you don’t, summary representations of your post risk having their message hidden below the fold.

I work in an organisation that is taking advantage of Yammer. This tool can help lift the burden on e-mail volume and increase collaboration by being more open.

When I make an internal announcement I prefer to make it on Yammer because I can see feedback and dip in and out of the conversation.

Posts on Yammer lack a dedicated subject field. This might seem to be a critical flaw in the service, but I have come to appreciate it. When a Yammer post gets displayed in summary form only the first few chunks of text are visible. This is true of e-mail notifications and digests, mobile applications and the Yammer Inbox.

This means that posters to Yammer need to capture interest in the first sentence of their post. If you waffle you risk having the message missed in the stream of content on the enterprise social media tool.

I have found this constraint to be a blessing in disguise. My flowery writing style has begun to change as a result. This teaches me that constraints aren’t always negative. Sometimes they have positive consequences as the following articles can attest.

  1. Embrace Constraints in Getting Real by 37signals (now Basecamp)
  2. Why Innovators Love Constraints in HBR by Whitney Johnson
  3. Constraints Drive Innovation by Jim Highsmith

Recently, one tool that has helped me deal with this constraint is Hemingway, a simple web-based app. Hemingway analyses text and reports at what grade level you are writing, among other characteristics. The following screenshot shows an earlier version of the text in this article; you can see for yourself how the content has evolved.
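Hemingway's grade-level score is in the same family as classic readability formulas. As a sketch of how such a score can be computed, here is a rough Flesch-Kincaid grade calculation; the formula is the standard one, but the vowel-group syllable counter is my own crude simplification:

```python
import re

def flesch_kincaid_grade(text):
    """Approximate US grade level: longer sentences and longer words raise the score."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word):
        # Naive heuristic: count vowel groups, discount a trailing silent 'e'.
        groups = re.findall(r"[aeiouy]+", word.lower())
        count = len(groups)
        if word.lower().endswith("e") and count > 1:
            count -= 1
        return max(count, 1)

    total_syllables = sum(syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * total_syllables / len(words)
            - 15.59)
```

Short sentences of short words score low (even negative), while dense corporate prose scores well above the grade level most readers are comfortable with, which is exactly the signal Hemingway surfaces.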


Perhaps one of the lessons I need to relearn is brevity, and with that I will end my post.