Category Archives: Uncategorized

Death of the Start Screen Exaggerated

It seems like the online media is foaming at the mouth about Windows 8.1 Update 1 supposedly making the move to boot by default to the desktop. The first article I came across was by Steven J. Vaughan-Nichols over at Computerworld, followed by an article by Mary Jo Foley over at ZDNet (who gives the topic a much more balanced treatment).

Where Mary Jo Foley stops at reporting the news, Steven’s opinion piece is vitriolic, which is typical of his previous articles. I mentioned to one of my co-workers the other day that I was disappointed in the state of tech journalism these days. Alas, there is only so much news to go around, which means that content ends up being rehashed – so the only way to stand out is to write an opinion piece, the more bitter the better.

With that out of the way, I have to acknowledge that where there is smoke, there is fire. So what about the rumour that Windows 8.1 Update 1 will boot straight to the desktop? If the rumour is true, it is worth stepping back and thinking practically about how this might work.

When Microsoft introduced Windows 8.x, the Start Screen didn’t replace the desktop. Rather, the Start Screen replaced the Start Menu. The desktop was always there in the background; it was just as if, immediately after login, the Start button had been pressed and a full-screen start menu was being displayed.

I suspect that Microsoft might ideally want to boot straight to the desktop by default on devices that don’t have a touch-first experience (desktop PCs and laptops), leaving touch-first devices to boot to the Windows 8-style Start Screen. This would give users the best possible experience for their particular device’s form factor.

Some of the press has asked: if you boot to the desktop, how will users know to launch metro-style apps? Well, remember that the Start Screen did not replace the desktop, it replaced the Start Menu. So when a user presses the Start button on the desktop (or on their keyboard) the Start Screen would still be displayed. If this continues to be the case then metro apps would still be discoverable in a desktop environment.

As Mary Jo Foley pointed out, leaked screenshots of the Windows 8.1 Update 1 UX reveal that users will be able to pin metro apps to the Start Menu. This means that users could quickly switch between legacy desktop apps and modern apps. I suspect that Microsoft has more plans for blending the metro and desktop environments.

I speculate that in a future release (not necessarily Windows 8.1 Update 1) we’ll see metro apps being able to run in floating windows on the desktop, along with API support to help apps respond intelligently to this environmental change. Windows Runtime (or WinRT) is the underlying platform for modern metro applications, but that doesn’t mean you can’t produce a UI in it that is better suited to a mouse/keyboard user.

The second change we MIGHT see is a Start Menu / Start Screen hybrid. The best concept I have seen so far of how this might work is this start menu/screen transition video which was posted recently (and received great reviews).

What this video hints at is that the theme for phone, tablet and PC operating systems moving forward is convergence. Microsoft has pretty much confirmed this to be the case for Windows and Windows Phone, but I suspect this will also happen with OSX/iOS. Google is an interesting case because they have two quite different approaches to apps in Android and Chrome OS.

For Windows, the key ingredient to convergence is Windows Runtime. Not only does Windows Runtime expose the UI-building capabilities, it also surfaces other OS features: file system, networking, sensors, etc. WinRT modernizes key aspects of the existing Windows API (in Win32) in a way that is suitable for desktop, tablet and phone.

In summary, I don’t think that booting to the desktop on desktop-style devices spells the end for either metro or the Windows 8-style Start Screen. Rather, I think it’s just continued fine-tuning of the user experience to support as broad a range of users as possible. Indeed, this kind of UX spread is a necessary feature of a single operating system running on a multitude of devices.

Code First Dynamics CRM?

A friend and former co-worker of mine, Jake Ginnivan, forwarded me a thread started by Darrell Tunnell on the DbUp mailing list. DbUp is a .NET API for building database migration tooling for applications by sequentially applying update scripts to a database and keeping track of which scripts have been applied.

Darrell’s thread mentioned that he was working on a derivative work which could be used to apply changes to Dynamics CRM; he has called his tool CrmUp. For those of you who haven’t used Dynamics CRM before, out of the box it provides a way for businesses to manage sales, service and marketing processes.

Dynamics CRM is interesting from a development perspective because it is all built upon a customisable foundation where you can build your own custom business entities, relate them together and build processes that flow across the top of them, complete with dashboarding and reporting. For some classes of problems it is a very productive development environment.

The product bundles customisations together into things called “Solutions”, which contain the definitions of all the customisations to be applied; this allows developers to ship grouped customisations to target environments. CrmUp is useful because it allows Dynamics CRM to be treated like an application dependency that can be upgraded to a baseline version.

Darrell’s work has got me thinking about what I would want from a developer-orientated framework for managing customisations to Dynamics CRM. If we look at databases generally, there are three broad approaches:

  1. Model/Tooling Driven Migrations (e.g. Database Projects in Visual Studio)
  2. Script Driven Migrations (e.g. *.sql scripts applied sequentially)
  3. Code Driven Migrations (e.g. EF CodeFirst)

Dynamics CRM doesn’t have a project type in Visual Studio. The development environment is really CRM itself, from which you can export your customisations. It is these exported customisations (solution files) which are then fed into CrmUp.

Personally, I am a huge fan of the way that Entity Framework Code-First migrations work. I love the fact that I can define my database entities in code, add behaviour to them, and then use the tooling to build a “migration” which takes me from one version of the database schema to the next. This leads me to wonder what a Code-First CRM framework would look like.

Dynamics CRM has various customisable elements:

  • Entities (create new, and extend existing)
  • Processes, Workflows & Workflow Activities
  • Reports
  • Dashboards
  • Views
  • Forms (for Entities)
  • Service Endpoints
  • … and many more.

This is perhaps what makes the CrmUp approach somewhat easier to get off the ground (because you are leveraging existing solution/customisation files). But in the long run a code-based way of defining the above would be incredibly useful.

For example, it would be great if I could define an entity in code:

[Entity(Name = "CustomEntity")]
public class CustomEntityDefinition : EntityDefinition
{
    public override void Retrieve(RetrieveContext context)
    {
        // Do whatever.
    }
}
Having this entity definition as part of a deployment would result in an entity being created, but a plug-in would also be generated which would route traffic to an Azure host (in the Dynamics Online scenario) to invoke the Retrieve method whenever that entity is retrieved.

There would be additional base classes and associated attributes to describe all aspects of a CRM solution in code, and then you could do something similar to “Enable-Migrations” from Entity Framework to create the migration scaffolding. As the code is updated you could call “Add-Migration” and it would generate a migration between the earlier migration and the current state. It would also take into consideration publisher details/signing and versioning.

I personally think that Dynamics CRM is an awesome product, and I’ve been working with it for years, but I would really love a more streamlined developer experience. Hats off to Darrell Tunnell for his contribution. I hope that the Dynamics team looks at what he has done, and also at how the Entity Framework project has approached the challenge.

Perception vs. Reality: Data Sovereignty in Australia

Data sovereignty and perceptions around security and privacy remain some of the biggest blockers for Australian organisations looking to reap the benefits of cloud computing technology. In the long run most of these concerns will be resolved as top-tier public cloud providers continue to open data centres in Australia. Until then, many organisations will avoid taking advantage of the benefits.

When I ask decision makers why they aren’t willing to place their data in a foreign data centre, most will make vague references to “laws” which dictate that their applications and data must be hosted in the country, sometimes within a particular state or territory, and in many cases on their own hardware.

I find legal arguments for avoiding foreign cloud-computing solutions interesting (yep, I’m that guy). Frequently I find that the organisation has not properly investigated its legal obligations: it takes a shortcut in assessing the relevant legislation and guidelines and arrives at a default negative assessment.

Part of the problem is finding the legislation and guidelines that apply to your organisation in the first place, and I think that both state and federal governments could do more to sign-post the relevant acts, which would support the decision-making process.

An organisation in Victoria would need to consider both state and federal legislation, and good places to start looking would be the privacy act and industry-specific record-keeping acts. As a practical example, consider an industry where data privacy is of the utmost importance – healthcare.

The Health Records Act 2001 (including amendments made 10 February 2013) includes statements regarding trans-border data flows under Principle 9:

An organisation may transfer health information about an individual to someone (other than the organisation or the individual) who is outside Victoria only if…

The act then goes on to list quite a large number of exceptions to the rule, which I suspect would allow most organisations to store health data outside Victoria, and indeed outside the country.

There is a danger in reading chunks of legislation in isolation, and I am not a lawyer. But I’m yet to read any legislation that says “you can’t use cloud computing”, and I think that organisations which might benefit from cloud computing technology would be well served by examining their options.

Naturally the goal is not to “cloud-all-the-things”, but to consider whether the economic benefits of cloud computing (capital expenditure transformation & operational cost optimisation) free up resources over time to tackle other initiatives.

The Meaning of the Word Estimate

The technology business is one of those industries that comes jam-packed with its own set of jargon. You would think that the use and misuse of this jargon is a contributing factor to the reason why so many software development projects are “challenged”.

Not so! In fact, one of the biggest problems with software projects is the lack of collective agreement around what the term “estimate” means.

The Oxford English Dictionary defines estimate (noun) as:

“an approximate calculation or judgement of the value, number, quantity, or extent of something”

That seems pretty reasonable to me. It is clear that even though you’ve sized something up, it is only an approximation. Unfortunately, what others sometimes hear is:

“a binding contract of cost and time, in which every other variable (often scope) can change, but which cost and time cannot”

To be fair, here is what a software developer might be thinking when they hear the word estimate, or provide an estimate:

“an optimistic guess about how long something might take given the incomplete and inconsistent description of something that needs to be built. I don’t ever expect to be held to this.”

I think that the Oxford definition is probably the fairest: those doing the estimation need to give their best estimate and take reasonable steps to clarify unknowns, but those receiving the estimate need to acknowledge that, given imperfect knowledge, the resulting estimate will also be imperfect.

The real question is – given an imperfect estimate, who carries the risk and reward when the estimate is wrong? In our industry many customers believe that vendors should provide fixed-price estimates, and that any additional costs should be borne by the vendor.

In some cases vendors accept this, if only to avoid the uncomfortable conversation about the true nature of risk in software projects, and structure in punitive change-control clauses instead.

I prefer to get a good collective understanding of what an estimate is, and in the case of a particular estimate put some boundaries around what the estimate includes and what its relative strengths and weaknesses are.

Further, we have to acknowledge that much more than the envisaged scope goes into producing an estimate for a project. More often than not there is a critical business event which sets a deadline for a particular project (e.g. a legislation enforcement date), or a budget constraint (we can only afford 12 months of development at a certain team size).

Budget is usually the big one. A customer will typically only have a finite budget, so the project simply can’t cost more than that. The question then becomes what you can fit within that budget.

The challenge then becomes producing something of value within that budget, and that is also where the real risk is: can you produce something that returns sufficient value for the money spent? If you can balance money spent with value received as the project progresses, there is very little risk of the project being classified as challenged.

For this reason I prefer T&M style projects where the project team has to actively work with the customer to provide value. When the value stops arriving, then the project either stops or changes direction.

At the end of the day, software development isn’t about estimates, and the vendor and customer getting one over on each other, it’s about trust. Trusting that the vendor is bringing the right technical skills to the table and working as fast as possible whilst achieving the quality outcomes the customer expects, and trusting that the customer will be engaged enough to direct the scope and priority of the project, and that the money that is being spent is going into something worthwhile.

 

Presenting on VS2013 and Windows 8.1 in Brisbane.

I’m up in Brisbane for a week at the moment, and on Monday the 28th I am giving two back-to-back presentations: one on what’s new in VS2013, and a separate session on improvements in Windows 8.1 for enterprise applications. We’ve got quite a few registrations at the moment and I’m not sure what the cut-off is, but if you would like to come along, check out the latest improvements in VS2013, and get a handle on how you might use Windows 8.1 in your business, then register here.

I’ll be around for a little while after the event so if you want to discuss some app ideas or whiteboard anything then I’ll be available.

DNS Limitations in Windows Runtime

Lately I’ve been looking at line-of-business applications for Windows 8.x and how you would go about deploying such applications in the real world. When you build a line-of-business application in-house you can often get away with doing things like hard-coding service URLs, but if you are building software that is going to be redistributed to multiple customers, or even on the Windows Store, you need to figure out how your application is going to “discover” where the services are located.

The simple and obvious approach is for the application, upon starting, to ask the user for a server name to connect to, but in enterprise environments this information is often opaque to the end user. Further, once this information “gets out” it can be notoriously hard to retrain users to enter some other value if the network topology changes.

DNS Support in Windows Runtime

Discovering service endpoints is a common challenge. For example, when you set-up e-mail in many mobile devices an automated discovery process is undertaken which involves querying DNS for addresses such as “autodiscover.mycompany.com” when you tell the mail client that your e-mail is “someone@mycompany.com”.

The Domain Name System actually has a mechanism to support discovery of services associated with a domain name via the use of SRV records. An SRV record is a DNS record which combines a service name, protocol, domain and target address to tell a program (via a DNS resolver) which endpoint it should connect to. Many technologies and/or products such as Active Directory and Lync rely on this mechanism.
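For example, an SRV record for a hypothetical TCP service might look like this in a DNS zone file (the “_myapp” service label and target host here are made-up examples):

```
; _service._proto.domain     TTL   class type priority weight port target
_myapp._tcp.mycompany.com. 86400  IN    SRV  10       5      443  services.mycompany.com.
```

A resolver that understands SRV records can then direct clients to services.mycompany.com on port 443, preferring lower priority values and using weight to load-balance among records of equal priority.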

Getting back to Windows 8.x, I was investigating the best way to enable a service address discovery mechanism which followed this basic process:

  1. On initial start-up of the application, ask the user for their e-mail address.
  2. Use the domain part of the e-mail address to initiate an SRV record lookup via DNS.
  3. Use the target address from the SRV record to connect to the service.
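For illustration, here is roughly what that flow looks like in Python using the third-party dnspython package, on a platform where such queries are possible (the “_myapp._tcp” service label is a made-up example):

```python
def srv_query_name(email, service="_myapp._tcp"):
    """Build the SRV query name from the domain part of an e-mail address."""
    domain = email.split("@", 1)[1]
    return "%s.%s" % (service, domain)

def discover_endpoints(email):
    """Resolve the SRV record and return (priority, weight, port, target) tuples.

    Requires network access and the third-party dnspython package.
    """
    import dns.resolver  # pip install dnspython
    answers = dns.resolver.resolve(srv_query_name(email), "SRV")
    # SRV consumers are expected to try lower priority values first.
    return sorted((r.priority, r.weight, r.port, str(r.target)) for r in answers)
```

This is exactly the capability that is missing from the WinRT sandbox, as discussed next.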

Unfortunately, Windows Runtime doesn’t expose a good mechanism for this, and the .NET API subset available to Windows Runtime doesn’t contain the classes you might have used to do this in a traditional desktop application. There is some suggestion that you can use the DatagramSocket class to do SRV record lookups for UDP endpoints, but my scenario is for TCP connections.

Workaround

Without the ability to execute sophisticated DNS queries from modern applications in Windows 8.x, the only thing you can do from a discovery point of view is abandon SRV records and use an “autodiscover” mechanism similar to the one Microsoft Outlook uses to find the mail server based on the e-mail address.

This process would look something like the following:

  1. On initial start-up of the application, ask the user for their e-mail address.
  2. Take the domain part of the e-mail address and prefix it with a unique string (e.g. myappdiscover.mycompany.com).
  3. Send a request to this endpoint to download configuration data which can be used to locate service endpoints.
  4. If this fails, fall back to asking the user for detailed configuration information.
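A rough sketch of that fallback, in Python for illustration; the “myappdiscover” prefix and the “/config” path are hypothetical:

```python
from urllib.parse import urlunsplit

def discovery_url(email, prefix="myappdiscover"):
    """Derive the well-known discovery endpoint from an e-mail address."""
    domain = email.split("@", 1)[1]
    return urlunsplit(("https", "%s.%s" % (prefix, domain), "/config", "", ""))

def fetch_configuration(email):
    """Try the discovery endpoint; return None so the caller can prompt the user."""
    import urllib.request
    try:
        with urllib.request.urlopen(discovery_url(email), timeout=5) as response:
            return response.read()  # configuration data locating the services
    except OSError:
        return None  # step 4: fall back to asking for detailed configuration
```

Unlike the SRV approach, this requires you to stand up an HTTP endpoint at the well-known name purely to answer discovery requests.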

As it happens, I believe that Microsoft has already encountered this problem. If you’ve used Lync, you’ll know that there are two different Lync clients for Windows: the standard desktop Lync client, and the modern version of the Lync client. The modern Lync client relies on a mechanism similar to the above (lyncdiscover.mycompany.com), whereas the desktop client can make use of SRV records to discover the location of the server.

The “autodiscover” or “lyncdiscover” approach (for Exchange and Lync respectively) relies on a CNAME record being present in the DNS and a server available at that address to respond to the discovery requests. This would be largely unnecessary if the platform supported querying SRV records.

You might be tempted to implement your own DNS client using the sockets API in Windows Runtime. Of course this would take some time to implement correctly, but more than that, you would need to be able to discover the address of your assigned DNS server, which, from what I can tell, is not exposed to modern applications via Windows Runtime.

Presenting at TTUG on Visual Studio 2013 and Team Foundation Server

I’m off to Hobart tomorrow. I’m heading down to present at the Tasmanian Technology User Group on Visual Studio 2013 and Team Foundation Server. I’m really looking forward to presenting down there again (it’s been a while). Not sure when registrations close, but if you are in the area and interested in attending, the details are here.

Thanks to Christopher Baker for getting this organised.

Lync with Windows 8.1 and Internet Explorer 11

I’ve been using the Windows 8.1 preview as my day-to-day environment for a good 3-4 weeks now, and in general everything is working extremely well. One problem that has cropped up, however, is participating in Lync meetings with customers and partners from other organisations.

The problem I experience is that when clicking a Lync meeting link, instead of launching the desktop Lync client I get routed to the Lync web client login page. The web client is okay, but I prefer the desktop client, which has a few extra features I find quite useful.

[Screenshot: Lync web client login page]

It appears that the code behind the meeting link is doing some browser detection, not recognising IE11 as a compatible browser for the desktop client launch mechanism and then falling back to the web experience.

To work around the issue, I pressed F12 to bring up the new IE developer tools. On the Emulation tab (look at the bottom of the left-hand side of the panel), change the user agent string to Internet Explorer 10. As soon as you’ve done this the page should refresh and the Lync desktop client will be launched.

[Screenshot: Lync web client user agent hack]

I’m pretty sure that this will get fixed soon either as a compat update to IE, or as a patch to Lync. Hope this helps you in the meantime!

Windows 8.1 & Improved SkyDrive Integration

[Screenshot: Home Screen]

Windows 8.1 is in many ways a refinement of Windows 8. One of the areas of welcome focus is the integration between SkyDrive and the Windows shell. To be clear, Windows 8 shipped out of the box with SkyDrive integration via a Windows Store app that allowed you to navigate through the files stored up in the cloud.

In Windows 8.1, Microsoft has gone one step further and allowed desktop applications to integrate with SkyDrive directly via Windows Explorer WITHOUT having to download a separate desktop application to synchronise files. I suspect it is really a case of the separate desktop application now being integrated with the Windows product out of the box, but nonetheless it is one less thing to download and install.

[Screenshot: Progress]

This alone is noteworthy, but the real improvement in my opinion is that you no longer have to synchronise all files down to your local machine in order to access them via Windows Explorer; instead a stub file is visible which triggers the download of the real content from SkyDrive.

Previously you had to download all your files from SkyDrive, or select a subset that you wanted to have visible on your local machine. This led to either massive and largely unnecessary data transfers, or placing files in directories that ultimately didn’t synchronise, leading to potential data loss.

[Screenshot: Search Results]

Because there are stubs present on the local machine, that content is instantly available for searching via the Windows search charm. Once a file is selected from the results it is then streamed down from SkyDrive.

This subtle change in the way cloud files are handled is important in the context of two converging trends: the shrinking form factors of Windows devices (with their correspondingly limited storage), and the growing demand for digital media storage. It simply isn’t possible for me to pull all that content down onto my local device, and I can’t even choose (at configuration time) an intelligent subset of the content I want synchronised. I want that complexity managed by the operating system.

Room for Improvement

Overall I think Microsoft is heading in the right direction here, but there is still room for improvement.

Like many folks, I have a relatively large personal media collection of family photos. If I were a single guy I would probably dump all of this up into SkyDrive, but I want to share this media with my family members and allow them to contribute to it and organise it.

SkyDrive allows me to share files or folders with specific users, but sharing isn’t really a first-class feature in SkyDrive. For it to really work it needs:

  1. True shared ownership of folders, with clear “unshare” process (clone/merge).
  2. Ability to pool SkyDrive quota for shared folders.
  3. Better surfacing of shared content as a first class citizen in the file list.

Something akin to the Rooms feature on Windows Phone would probably help here: someone creates a room, and the room itself has a SkyDrive which borrows its storage quota from the participants.

Business Considerations

Most businesses that are using Windows 8 and Office 2013 should probably be looking at SkyDrive Pro (which is backed by SharePoint) for managing shared content. This is a completely different animal to SkyDrive, with which it shares only a name (no actual code). In the future I would like to see Microsoft unify these two experiences so that it is possible to “mount” a SkyDrive folder in SkyDrive Pro and vice versa. I doubt it’ll happen due to data security concerns about corporate data leaking out through individuals’ SkyDrive accounts, but it’d be nice to see a unified end-user experience.

For now, however, we’ll have to make do with the SkyDrive Pro desktop application and the SkyDrive Pro Windows Store app. The former has an advantage over the latter in that it allows you to connect to multiple SharePoint locations – the latter only connects you with your Office 365 personal site, and only for one Office 365 tenant.

Installing Windows 8.1 Preview on the Acer Iconia W3-810 from scratch!

I was lucky enough for my employer to send me to BUILD 2013 this year (videos on Channel 9). The major topics for the event were Windows 8.1, Windows Azure and improvements in the development platform and tooling coming with Visual Studio 2013.

Of course, nobody does giveaways like Microsoft (and their partners). Each attendee at the conference received two free tablet PCs: an Acer Iconia W3-810 and a Surface Pro. As you can imagine, the attendees were all pretty excited to pick up their goodies at the end of day one.

The devices came pre-loaded with Windows 8, and we were given a USB stick that we could use to load up the Windows 8.1 preview to give it a spin on the new hardware. That is exactly what I did that night back at the hotel, and I can say that I am pretty happy with the progress that has been made since Windows 8 shipped. I’m now running Windows 8.1 on all of my devices and it’s working great.

Nothing Like a Fresh Install

One of my long-established hang-ups about new hardware is that I like to go through the process of installing Windows on a blank drive, removing all of the cruft that comes out of the box from the hardware vendor. You’ll often hear people refer to this as bloatware.

I’ve done this regularly with almost all of my devices, except the W3-810 (and Surface Pro) which I simply did an in-place upgrade on. This weekend I decided it was time to go through the ritual. Boy did I regret it!

The Acer Iconia W3-810 is very new hardware for the Windows platform. Whilst it ships with Windows 8 pre-installed, the engineers at Acer would have pre-loaded the image with all the drivers necessary to make Windows 8 work. I had no such conveniences when doing a fresh install of Windows 8.1 from the MSDN ISO image.

The first challenge you have to overcome is kicking off the installation. The way that I did this was to hold down the power and Start buttons on boot to get into the diagnostic menus for Windows 8. I then navigated my way through the advanced settings and launched a command line. When Windows is running in this mode it appears to be operating out of a RAM disk, so I plugged in the USB key that had Windows 8.1 on it and ran setup. I stepped through the process, deleted the partitions on the local machine and then triggered the install of Windows 8.1 itself.

All was going fine until it rebooted and I discovered that neither the WiFi nor the touch screen had in-box driver support. So I pulled out the USB key that I used to install the operating system and plugged in a USB keyboard (the Bluetooth keyboard that came with the device was no good at this point either). I used the keyboard to navigate through the personalisation prompts whilst I started downloading the drivers for the device from Acer’s web site.

The next challenge was how to get the device drivers onto the tablet with no network, and with the only USB port occupied by the keyboard (note: I didn’t have a USB hub handy). In the end I wrote a simple little batch script which copied all the files from a particular directory on the USB stick to a local directory, then looped around and did it again. I kicked it off, yanked the keyboard out, plugged the USB stick in and bingo – the driver files copied across to the local disk on the tablet.

Next I plugged in the USB keyboard again and triggered the installation of the drivers. There are three driver packages that come with the device: one is a big “other” drivers package which basically contains all the Intel drivers that make up the bulk of the hardware in the device; the remaining two are for Wireless LAN and Bluetooth.

The Intel drivers went on without a hitch, and I noticed that almost all the unidentified hardware on the device was detected – sans the Bluetooth and WiFi adapters of course (Broadcom devices both). When I tried to install the WiFi driver it didn’t work. The driver package has a simple “install.bat” file which, if you double-click on it, results in plenty of activity on the screen before the window disappears.

In the end I opened up a command prompt window, ran install.bat and saw the nature of the error: the command prompt needed to be running elevated for the install to work. I relaunched it elevated and suddenly the WiFi driver kicked into action.

One of the issues I noticed after installing the large Intel driver package was that the modern user experience elements in Windows 8.1 were showing blocked-out text (like the CIA had redacted Windows or something :P). That is a sure hint that video drivers are a problem, so I flicked over to the desktop Windows Update screen and triggered a download of all the drivers/updates (one of which was an Intel graphics driver). After this update everything was working as expected, except the Bluetooth driver, which I then installed.

Phew! I managed to go from nothing working to a clean tablet!

Some Lessons for PC Vendors and Microsoft

One of the great things about the Windows eco-system is choice. When you choose to run Windows, you can choose which hardware you want to run it on, and it comes in a variety of form factors with a multitude of different features.

However, all of these features aren’t worth anything if you load the base image for the PC up with loads of junk that is either truly useless, or just gets in the road of the native experience. As long as vendors keep doing that, people like me are going to continue to tear down the machine and reload a vanilla version of the operating system.

I can accept that if I am going to do that then I’m choosing to inflict some pain on myself, but at a minimum a tablet PC should have the following in-box driver support:

  1. Keyboard / Mouse
  2. Touch Screen
  3. Wireless LAN
  4. Video

Even if it is just rudimentary support out of the box with a subsequent Windows Update delivery to get it all up to the latest drivers. My installation experience would have been dramatically simpler if I had just had WiFi support.

I hope that moving forward Microsoft continues to put pressure on vendors not to mutilate the installation of Windows, and to provide improved in-box driver support. After installation, detected hardware can have updated drivers delivered via an improved Windows Update experience, which might (if we are lucky) lead to more Windows Store apps being installed that interact with those drivers through the app/driver bundling methods included in Windows.

Finally, I should say that the Acer Iconia W3-810 is a really nice device – thanks to Microsoft and Acer for giving them away at BUILD 2013. I’ve got a Google Nexus 7, and the Acer effectively replaces it for me.