Daily Archives: June 29, 2005

Technical Hiring – part science, part art.

Michael Still has posted up about the hiring woes of Tate Needham, who is looking for a Junior Developer at VSoft Technologies, the makers of the popular FinalBuilder tool. Tate had a really good hit list of things to look for when hiring face to face at VSoft.

  • Read some code
  • Write some code
  • Customer liaison
  • Research skills
  • Using FinalBuilder

At Readify most of the Senior Consultants get involved in the hiring process in one way or another, either by using their connections to find good people or by actually performing technical interviews.

Usually I am performing a remote technical interview, so the first two items are a little bit tricky, although I try to do a bit of research on candidates before I contact them (so those with a visible online presence have an advantage).

The next two items are kind of obvious: you need to be able to communicate effectively and you need to know where to go to get more information. Because we tend to ride pretty high on the technology wave, having actively exposed yourself to future products and technologies rates pretty highly.

Use of FinalBuilder is not a requirement for joining Readify, but since I am into team development practices, awareness of the problems that it is trying to solve is important to me and helps prove that candidates have at least a passing interest in producing a quality product in a repeatable fashion.

Personally I use NAnt for VS.NET 2003-generation projects, but I’ve heard only good things about FinalBuilder – it definitely looks better than my XML scripts.

I guess the secret sauce is the combination of flavours. There are a few traits that folks at Readify share, and I actively look for these in candidates, because if they aren’t present the chances are they won’t enjoy Readify in the longer term, and people who aren’t happy don’t make good company representatives.

Sounds like a good topic to discuss over a mug of coffee some day!

Geoff wants some web-service exception advice.

Geoff is a very talented developer, but everyone needs to ask for help from time to time, and design issues are a classic example. First up, Geoff is absolutely correct, exception handling in web-services is special – it’s not just like calling a method.

For a start, exceptions don’t really map directly to the SOAP message format; for example, SOAP doesn’t have a concept of a stack trace that can be transferred across the wire in full fidelity – all it offers is a relatively basic SOAP Fault element that goes back in the return message.

You can throw a text-based representation of the exception into the Fault’s detail element to support diagnostics, but if your web-service consumer doesn’t have intimate knowledge of the service, what are they going to do with it?

From a web-service consumer’s point of view there are only a few things that you want to know when you get a fault (a rough client-side sketch follows the list):

  • Was it invalid input – if so, what?
  • If it wasn’t invalid input, was it a transient condition?
  • Did it roll back cleanly?
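
On the consumer side that usually comes down to looking at the fault code and any documented detail element. Here is a minimal C# sketch of that decision, assuming an ASMX-style proxy – the OrderServiceProxy class, its SubmitOrder method and the reason detail element are all invented for illustration:

    using System;
    using System.Xml;
    using System.Web.Services.Protocols;

    // Stand-in for a wsdl.exe-generated proxy; the real one comes from the service's WSDL.
    public class OrderServiceProxy : SoapHttpClientProtocol
    {
        [SoapDocumentMethod]
        public void SubmitOrder(string sku, int quantity)
        {
            Invoke("SubmitOrder", new object[] { sku, quantity });
        }
    }

    public class OrderClient
    {
        public static void PlaceOrder(OrderServiceProxy proxy)
        {
            try
            {
                proxy.SubmitOrder("WIDGET-42", 3);
            }
            catch (SoapException ex)
            {
                if (ex.Code == SoapException.ClientFaultCode)
                {
                    // Invalid input: the documented detail element tells us what to fix.
                    XmlNode reason = (ex.Detail == null) ? null : ex.Detail.SelectSingleNode("reason");
                    Console.WriteLine("Bad request: " + (reason == null ? ex.Message : reason.InnerText));
                }
                else
                {
                    // Server-side (possibly transient) problem: retry later, assuming the
                    // service documents that the failed operation rolled back cleanly.
                    Console.WriteLine("Service fault, try again later: " + ex.Message);
                }
            }
        }
    }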

Good web-service implementations explicitly document all the error conditions that could occur for an incoming message and publish them somewhere the consumer can get at them, and that’s about the only diagnostic information you need.
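
On the service side that means catching your own failures and mapping them onto those documented conditions, rather than letting raw exceptions leak out. A rough ASMX-style C# sketch – the service, its SubmitOrder method and the reason codes are invented for illustration:

    using System;
    using System.Web.Services;
    using System.Web.Services.Protocols;
    using System.Xml;

    public class OrderService : WebService
    {
        [WebMethod]
        public void SubmitOrder(string sku, int quantity)
        {
            if (quantity <= 0)
            {
                // Invalid input: blame the caller (Client fault code) and put a
                // documented, machine-readable reason in the detail element.
                throw new SoapException("Quantity must be positive",
                    SoapException.ClientFaultCode,
                    Context.Request.Url.AbsoluteUri,
                    BuildDetail("INVALID_QUANTITY"));
            }

            try
            {
                SaveOrder(sku, quantity);   // imagined data-access call
            }
            catch (Exception ex)
            {
                // Keep the stack trace where it is useful: in our own logs.
                System.Diagnostics.Trace.WriteLine(ex.ToString());

                // Send back only a documented code, not our internals.
                throw new SoapException("Order could not be processed, please retry",
                    SoapException.ServerFaultCode,
                    Context.Request.Url.AbsoluteUri,
                    BuildDetail("TRANSIENT_FAILURE"));
            }
        }

        private static XmlNode BuildDetail(string code)
        {
            XmlDocument doc = new XmlDocument();
            XmlElement detail = doc.CreateElement(SoapException.DetailElementName.Name,
                                                  SoapException.DetailElementName.Namespace);
            XmlElement reason = doc.CreateElement("reason");
            reason.InnerText = code;
            detail.AppendChild(reason);
            return detail;
        }

        private void SaveOrder(string sku, int quantity) { /* persist the order */ }
    }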

Repeat after me: my web-service is autonomous. I’ll tell you I’m not feeling well (error codes), but I won’t bend over so you can take my temperature (passing back stack traces). Is that colourful enough?

Residents of Louisa Lawson Crescent, your wireless networks are insecure.

We are almost at the point where you can get Internet access everywhere you go. With technologies like iBurst and Telstra’s EvDO offering starting to take off, bit junkies like me should be able to get a constant fix. Even in the residential areas it looks like it’s going to be fairly easy to get a wireless connection – through slightly illegal means.

I caught the bus home yesterday and intentionally left my wireless connection on while I was tapping away, and I started counting how many wireless networks I picked up on the way from the City to Gilmore via Woden. I stopped counting at about fifty, and easily 50–60% of those I saw were not secured, so if I had stayed in one spot long enough I would have been able to surf the web for free.

Perhaps one of the worst offending areas was just down the road on Louisa Lawson Crescent, where I picked up about ten different networks – nine of them insecure. At least my neighbour is secure…

Online RSS aggregators are information silos (RSS and Attention).

I’ve already posted up about the RSS support that is going to be built into the Windows platform for the Longhorn release, and it looks like the news is getting a fair bit of airplay across the blogosphere and even in traditional news sources.

Most of the commentary in the blogosphere falls into two buckets:

  • This is a good thing ™.
  • This is a bad thing ™.

There are a few posts, like Nick Bradbury’s, which are very insightful, and Nick’s in particular inspired this post. Personally I think it (the RSS platform) is a good thing, but not because I am a one-eyed .NET freakazoid that loves any new API in my platform of choice. No – it’s because aggregators today, both online and offline, lock you in; it’s about choice of visualisation!

Over the past eighteen months I’ve used a couple of tools to grab my RSS feeds, depending on what they are. For text-based content I was using NewsGator, for podcasts I was using iPodder or Doppler, and then there were a few other little tools that were RSS aware.

The problem is that each one of these applications represents an information silo which is inaccessible to other applications. For example, iPodder can’t look at the RSS items that NewsGator has already downloaded and automatically (based on preferences) start downloading the podcasts. If you could visualise the current state of affairs it would look something like the following diagram.

[Diagram: SilosBad]

And this is only the offline aggregators; online aggregators are much worse because you can’t even get your hands on the data – it’s locked away from you in some data centre. At least with the offline aggregators there tend to be some XML files hanging around that you can parse to extract the information. In the future we won’t even have to do that.
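
For now, parsing those files yourself looks roughly like the C# sketch below, which assumes the aggregator simply caches standard RSS 2.0 files on disk – the cache path and file name are made up:

    using System;
    using System.Xml;

    public class CacheScraper
    {
        public static void Main()
        {
            // Hypothetical location; every aggregator squirrels its files away somewhere different.
            XmlDocument doc = new XmlDocument();
            doc.Load(@"C:\AggregatorCache\some-feed.xml");

            // Standard RSS 2.0 layout: /rss/channel/item
            foreach (XmlNode item in doc.SelectNodes("/rss/channel/item"))
            {
                XmlNode title = item.SelectSingleNode("title");
                XmlNode enclosure = item.SelectSingleNode("enclosure/@url");

                Console.WriteLine(title == null ? "(untitled item)" : title.InnerText);
                if (enclosure != null)
                {
                    // The bit a podcatcher like iPodder actually wants: the enclosure URL.
                    Console.WriteLine("  enclosure: " + enclosure.Value);
                }
            }
        }
    }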

Like any good platform, the new RSS features in Longhorn push feed handling down into the underlying operating system, allowing application developers to focus on how they want to use information versus how they are going to store it and continually receive feeds (who wants to run their aggregator all day anyway?). If you could visualise the Longhorn model it would look something like the following diagram.

[Diagram: PlatformGood]
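
Purely for illustration – the real Longhorn API hasn’t been published, so every type and member name below is invented – the difference for an application developer looks something like this C# sketch: the app only decides what to do with items, while the platform’s common store owns subscribing, downloading and storage.

    using System;
    using System.Collections;

    // Invented types: stand-ins for whatever the platform eventually ships.
    public interface IFeedStore
    {
        // Items the platform has already downloaded for a subscribed feed.
        IList GetItems(string feedUrl);
    }

    public class FeedItem
    {
        public string Title;
        public string EnclosureUrl;   // non-null for podcast items
    }

    public class PodcastGrabber
    {
        // The application no longer parses feeds or schedules polling; it just
        // reads from the shared store and acts on what it finds.
        public void DownloadNewEpisodes(IFeedStore store, string feedUrl)
        {
            foreach (FeedItem item in store.GetItems(feedUrl))
            {
                if (item.EnclosureUrl != null)
                {
                    Console.WriteLine("Queue download: " + item.EnclosureUrl);
                }
            }
        }
    }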

Why am I writing this post? Well, I think what Microsoft is doing here is a good thing; even if you hate Microsoft you have to love the scenario they are enabling. They are building a platform that encourages users to take ownership of their information – but perhaps more importantly, it allows users to see that information in a way that is useful to them.

With Longhorn scheduled for release some time next year, it’s time for software developers to start factoring platform enhancements like these into their future plans.