Archive for June, 2011

Today I made a touch-and-hold style menu system for my Celestia project. It works by tracking a player’s hands, checking where they hover and for how long, and firing an event once a set hold time is reached.
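The core of the idea can be sketched as a small dwell timer: each frame you feed in the tracked hand position, and a callback fires once the hand has stayed inside a region for the hold time. This is only an illustrative sketch in Python; the class and parameter names are my own, not from the project.

```python
import time

class HoverButton:
    """Fires a callback when a tracked point dwells inside a region
    for a set hold time. Illustrative sketch, not the project's code."""

    def __init__(self, contains, hold_seconds, on_fire):
        self.contains = contains          # predicate: (x, y) -> bool
        self.hold_seconds = hold_seconds  # how long the hand must hover
        self.on_fire = on_fire            # event fired when time target reached
        self._enter_time = None
        self._fired = False

    def update(self, x, y, now=None):
        """Call once per tracking frame with the current hand position."""
        now = time.monotonic() if now is None else now
        if self.contains(x, y):
            if self._enter_time is None:
                self._enter_time = now    # hand just entered the region
            elif not self._fired and now - self._enter_time >= self.hold_seconds:
                self._fired = True        # time target reached: fire once
                self.on_fire()
        else:
            self._enter_time = None       # hand left the region: reset timer
            self._fired = False
```

Passing the timestamp in explicitly makes the dwell logic easy to test without real tracking hardware.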

The tricky part is making this modular so that many different menus can be created without reinventing the wheel. Currently this only works for rectangular areas, but I will extend it to handle circles and irregular shapes soon.
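One way to keep that modular is to give every shape the same hit-test interface, so the menu logic never cares what shape it is testing against. A minimal sketch, with names of my own invention:

```python
class RectRegion:
    """Axis-aligned rectangular hit area (illustrative names)."""

    def __init__(self, x, y, width, height):
        self.x, self.y, self.width, self.height = x, y, width, height

    def contains(self, px, py):
        return (self.x <= px <= self.x + self.width
                and self.y <= py <= self.y + self.height)


class CircleRegion:
    """A later shape only needs to provide the same contains() method."""

    def __init__(self, cx, cy, radius):
        self.cx, self.cy, self.radius = cx, cy, radius

    def contains(self, px, py):
        # Inside if the squared distance to the centre is within the radius.
        return (px - self.cx) ** 2 + (py - self.cy) ** 2 <= self.radius ** 2
```

With this shape-agnostic interface, adding irregular shapes later means adding one more class, not rewriting the menu system.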

An interesting thing I had to consider here is that the active area around a player must be scaled so that all menu items can be reached without stretching, yet be small enough that items aren’t triggered by accident. I think this will mostly come down to trial and error, and I may not have time to vary the scaling based on how far the player stands from the camera.
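That scaling trade-off can be captured in one tunable number: map the hand position, relative to the shoulder, into a normalised menu space, where a gain below 1 means the player hits the menu edge before full arm extension. This is a hypothetical helper illustrating the idea, not code from the project.

```python
def normalize_hand(hand_x, hand_y, shoulder_x, shoulder_y, arm_length, gain=0.8):
    """Map raw hand coordinates to menu space [-1, 1] relative to the shoulder.

    `gain` < 1 lets the player reach the menu edge without fully stretching;
    too small a gain makes accidental triggers more likely. Tuning it is
    exactly the trial-and-error mentioned above. (Hypothetical helper.)
    """
    reach = arm_length * gain
    nx = max(-1.0, min(1.0, (hand_x - shoulder_x) / reach))
    ny = max(-1.0, min(1.0, (hand_y - shoulder_y) / reach))
    return nx, ny
```

Depth-dependent scaling would just mean deriving `arm_length` (or `gain`) from the player’s distance to the camera instead of using a constant.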

So now I have voice controls and touch menus from Microsoft’s SDK and full-body gesture controls from NITE. I have to decide whether to re-create the NITE stuff in the Kinect SDK, wait for someone else to build a library of gesture controls on the SDK, or use an SDK→NITE binding (which doesn’t even exist yet). For now, though, I think I’ll go to bed.


I’ve been biding my time for the past few weeks, waiting for the NUI middleware market to settle down a bit. It did settle down, and the obvious choice for my project was OpenNI with NITE and Avin’s Kinect drivers. That was until Microsoft launched its Kinect SDK Beta last week…

Microsoft have delivered a fantastic library of natural interface components which I really couldn’t ignore. In particular, the audio capabilities of the Kinect had barely been exploited at all until the official SDK came along. I’m not deep into it yet, but the microphone array is positional and can triangulate and extract audio from specific locations within a 3D space. Furthermore, it works in conjunction with the device’s 3D imaging capabilities. This means, for example, that once a player is detected by the camera, the microphones can single out audio emanating from that player. So if you have a room full of people talking or other background noise, the words coming from the player alone are extracted.

So now I have a problem. I had planned my project around 3D human physical gesture controls, believing audio control to be too difficult and too large a task to achieve in the timescale. However, in a relatively short space of time I put together the following software, which shows how to control Celestia using voice commands. It’s far from perfect, but some fine tuning will iron out those bugs and make it a genuine candidate for inclusion in my project.
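The glue between speech recognition and Celestia can be kept very thin: a table from recognised phrases to commands, with a confidence threshold to cut false triggers. This Python sketch is purely illustrative; the phrases, command strings, and threshold are my own assumptions, not the project’s actual vocabulary.

```python
# Hypothetical phrase-to-command table; the command strings are
# illustrative placeholders, not verified Celestia script syntax.
COMMANDS = {
    "go to mars":  "goto Mars",
    "go to earth": "goto Earth",
    "stop":        "cancel",
}

def dispatch(phrase, confidence, threshold=0.7):
    """Return the command for a recognised phrase, or None.

    Results below the confidence threshold are dropped, which is one
    simple way to do the fine tuning against spurious recognitions.
    """
    if confidence < threshold:
        return None
    return COMMANDS.get(phrase.lower().strip())
```

In the real system the speech engine would supply the phrase and confidence per recognition event, and the returned command would be forwarded to Celestia.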

If I want to include this I’ll have to go back to the ethics board for approval. A right pain, but possibly worth the hit for the extra marks potential…

I built a quick release tripod mount for the Kinect. Cool.

Thanks to Brekel for the inspiration!

[Photos: the quick-release tripod mount for the Kinect]