Archive for April, 2011

Today I learned of some exciting news from our friends in Redmond: the official Microsoft Kinect SDK will be open source!

Check it:

Big caveat here: this is yet to be confirmed, and there seemed to be some confusion/disagreement in the report, but the fact that Microsoft is even considering open source as an option is a huge step forward for us and for them.

The prospect does raise questions about the other open source/open standards solutions already available, especially OpenNI and Freenect. Namely, what’s the point of all three? Surely there’s too much overlap for these to play nicely together for long, so what next? Will Microsoft’s official SDK consume one or both of them? Will one die off completely? Time will tell…


Earlier today Microsoft announced that it would be releasing its official Kinect SDK to the non-commercial developer community on May 16th.

Interestingly, Microsoft chose to laud the SDK's audio source perception and voice recognition above the physical gesture detection the device is capable of. This may be because none of the other Kinect-compatible frameworks to date handles audio particularly well.

I'm also pleased by Microsoft's attitude towards community development. It's not a first by MS, but it's certainly the highest-profile embrace of the community that I can remember. Keep going like this, MS, and people will start loving you more than those penny-pinching fanboys in Cupertino ;)

I would love to know whether Microsoft always intended to release the kit like this. Did MS respond (positively) to the community's immediate and successful attempts to 'hack' the Kinect, or (more cynically) did the company realise just how large a market it had cornered thanks to the device's popularity and want to cash in? Maybe we'll never know, but developments in the coming months will unquestionably be exciting.

More information here:

Here are a couple of new proof-of-concept demos I’ve been working on. I take no credit for the clever stuff here :) But I am going to grab Vangos Pterneas’s WPF example and run with it; it’s exactly what I was going to write myself, but now this lovely fella has done most of the hard work for me! Thanks, dude. Much love.

Skeletal tracking using a C#.NET 4.0 WPF wrapper for the OpenNI NITE middleware. This simple demo does not contain positioning correction (the depth sensor and RGB camera are offset along the X-axis, and hence appear to be out of sync), nor does it ostensibly provide functionality beyond some of the other demos I've posted, but the fact that it's 100% .NET WPF stuff opens the door to a huge range of possibilities. Huge. I can't overstate this.

Skeletal tracking, mapped into a virtual 3D space. This is effectively how a real-time avatar could be created. Note the different view angles that can be achieved despite the fact that there’s only a single sensor device situated in front of the subject.
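To give a feel for why a single front-facing sensor is enough for multiple view angles: once the skeleton joints exist as 3D coordinates, the virtual camera can be moved freely by rotating the points. Here's a minimal illustrative sketch (not the demo's actual code; joint values are made up) that rotates a joint position around the vertical axis:

```python
import math

def rotate_y(point, angle_deg):
    """Rotate a 3D point (x, y, z) around the vertical (Y) axis.

    This is how a virtual camera orbit can be simulated: rather than
    moving the sensor, the already-captured joint coordinates are
    transformed before rendering.
    """
    a = math.radians(angle_deg)
    x, y, z = point
    return (
        x * math.cos(a) + z * math.sin(a),
        y,
        -x * math.sin(a) + z * math.cos(a),
    )

# Hypothetical joint position (metres, sensor-relative): a hand
# half a metre right of centre, 2 metres from the sensor.
hand = (0.5, 1.2, 2.0)
side_view = rotate_y(hand, 90)  # view the skeleton from its side
```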

This evening I made a huge step in achieving the technical element of my research project. Up until now using Kinect to control anything other than tech demos was little more than theory. Tonight I hooked up Kinect to Celestia and performed a few simple actions. The upshot is: it’s possible!

I used the excellent FAAST – Flexible Action and Articulated Skeleton Toolkit – from the University of Southern California. It's a relatively simple piece of middleware for OpenNI NITE. FAAST is currently closed-source, but its developers intend to make it open source once some new features have been added and some stability issues resolved. It's good enough for me for now. It offers the ability to rapidly prototype by emulating key presses/holds based on the detection of a number of stock gestures.
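The gesture-to-keypress idea is simple enough to sketch. This is not FAAST's code or its configuration format, just an illustration of the general technique, with made-up joint coordinates and threshold values: classify a stock gesture from skeleton joint positions, then look up the key the target application (Celestia, in my case) expects.

```python
LEAN_THRESHOLD = 0.15  # metres; an assumed tuning value

def detect_gesture(torso_x, head_x):
    """Classify a simple lean-left/lean-right gesture by comparing
    the head's horizontal position against the torso's."""
    offset = head_x - torso_x
    if offset > LEAN_THRESHOLD:
        return "lean_right"
    if offset < -LEAN_THRESHOLD:
        return "lean_left"
    return None  # no gesture this frame

# Map gestures to the keys the controlled application expects.
# (Hypothetical bindings; a real setup would also inject actual
# OS-level key events rather than just returning a character.)
KEY_BINDINGS = {"lean_left": "a", "lean_right": "d"}

def gesture_to_key(torso_x, head_x):
    """Return the key to emulate for the current frame, or None."""
    return KEY_BINDINGS.get(detect_gesture(torso_x, head_x))
```

In a real pipeline the joint positions would come from OpenNI NITE skeletal tracking every frame, and the returned key would be fed to the OS as a synthetic key press or hold.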

My final product will need to be refined well beyond this, but at least it gives me some confidence that I haven't bitten off more than I can chew.

Here it is. Not bad for an hour’s work.