Posts Tagged ‘kinect’

Hello! I’ve started making plans for my next project: a robot butler who will fetch me beer. Here’s the idea:

  • Lego Mindstorms control unit – easy to hack and send my own commands to/from.
  • Lego motors, wheels and gears etc. to create a versatile, maneuverable platform thing.
  • Kinect sensor to assess the environment.
  • Raspberry Pi to think about where the robot is looking, where it might go and to send the necessary commands to the Mindstorms unit.
  • Leap Motion controller to sense me pointing to the fridge, thus commanding the robot to retrieve beer.

What do you think? I for one welcome our new beer-fetching robot overlords.

I should make special mention of the new Leap Motion device. If it works as well as the demos suggest, it will be a truly stunning, possibly revolutionary gadget. I’ve applied to be a developer for this amazing piece of kit. My pitch was the robot butler thing. Hopefully the Leap people like me and my half-baked ideas.


Development: complete!

I’ve been quiet for the past few weeks because I’ve been frantically fixing those last-minute bugs that one invariably finds in a development project like this. But it was all worth it, because I spent three days this week using my software to teach 9- and 10-year-olds about astronomy.

Shirley Community Nursery & Primary School in Cambridge were kind enough to let me teach a few lessons. I had an excellent time at Shirley. The students and staff were all very helpful, friendly and enthusiastic, which made my job both easier and more pleasant!

The sessions were well received by the children. They got a few ‘oohs’, ‘aahs’ and ‘awesomes’, so the biggest challenge – engaging the children in the activity – was met. I’m studying for a computing degree, so my main focus is to assess the technical implementation of my work. That naturally means ‘did it work?’, but there is no point in producing something technically proficient unless it achieves its purpose well. In this case the purpose was to impart learning, and feedback indicates that this aim was met well too.

I asked the children to complete a Likert-scale survey after the intervention. The ‘enjoyment factor’ shown in the results is very positive. Similarly, the children say that they learned about astronomy because of the lesson. Good news! Some of the discursive feedback was really helpful and very insightful too. Comments such as ‘I think the kinect should be able to track which person is controlling it so it doesn’t get confused’ and ‘the information text should also be read out by the computer so that blind people can hear it’ show a depth of thought that I perhaps foolishly wasn’t expecting.

Technically, the software worked well. I included three different control types: deictic gestures (pointing and hovering to control a cursor), symbolic gestures (rotate, zoom, pan) and voice commands. Probably the most highly developed and accurate method (from a technical perspective) is my deictic control method: it is highly tuned and ‘just works’. Interestingly, the children pretty much ignored this control method. For them it was just expected behaviour, requiring very little effort to understand and use proficiently.
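For anyone wondering what deictic control looks like in code terms, the mapping is simple. Here’s a minimal sketch – not my actual implementation, and the box dimensions are illustrative constants to tune – using Kinect for Windows SDK v1-style names (the beta’s JointID enum differs slightly). It projects the right hand into screen space relative to the shoulder:

    using System;
    using System.Windows;
    using Microsoft.Kinect;

    public static class DeicticCursor
    {
        // Treat a small physical box around the shoulder as the full screen:
        // roughly 0.7 m wide and 0.45 m tall, offset so the hand needn't
        // reach too high. These numbers are guesses to tune, not gospel.
        public static Point HandToScreen(Skeleton skeleton,
                                         double screenWidth, double screenHeight)
        {
            SkeletonPoint hand = skeleton.Joints[JointType.HandRight].Position;
            SkeletonPoint shoulder = skeleton.Joints[JointType.ShoulderCenter].Position;

            double nx = (hand.X - shoulder.X + 0.35) / 0.7;   // 0..1 left-to-right
            double ny = (shoulder.Y - hand.Y + 0.10) / 0.45;  // 0..1 top-to-bottom

            return new Point(Clamp(nx) * screenWidth, Clamp(ny) * screenHeight);
        }

        private static double Clamp(double v)
        {
            return Math.Max(0.0, Math.Min(1.0, v));
        }
    }

Smooth the result over a few frames and you have a perfectly serviceable cursor.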

My symbolic control method was arguably the least successful technically. The success rate for recognising gestures was around 60% (compared with around 95% for the deictic method). The children commented on this, were occasionally frustrated, and had a long list of ideas to improve it. Great!

And finally, the voice control method was very popular. Of the three, it was voice control that prompted the most ‘wow’s, and the children really enjoyed shouting at the computer to make it work. Technically it performed well: I told the Kinect to listen only to audio coming from an angle of 0 radians (i.e. straight ahead) and to suppress background noise and echoes. It did a sterling job of hearing only the audio we wanted and ignoring the rest. Microsoft’s Speech Recognition Engine could do with being a little faster, but overall this part of my project was remarkably stable.
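For the curious, the audio setup looks roughly like this. It’s a sketch in the style of the Kinect for Windows SDK v1 audio API (the beta 2 names differ a little), and the three-word grammar is just an illustrative stand-in for my real command set:

    using System;
    using System.IO;
    using System.Linq;
    using Microsoft.Kinect;
    using Microsoft.Speech.AudioFormat;
    using Microsoft.Speech.Recognition;

    class SpeechDemo
    {
        static void Main()
        {
            // Start the sensor, pin the audio beam straight ahead, and clean
            // up the signal before the speech engine hears it.
            KinectSensor sensor = KinectSensor.KinectSensors
                .First(s => s.Status == KinectStatus.Connected);
            sensor.Start();

            KinectAudioSource audio = sensor.AudioSource;
            audio.BeamAngleMode = BeamAngleMode.Manual;   // don't auto-steer the beam
            audio.ManualBeamAngle = 0;                    // listen straight ahead only
            audio.NoiseSuppression = true;
            audio.EchoCancellationMode = EchoCancellationMode.CancellationAndSuppression;

            RecognizerInfo recognizer = SpeechRecognitionEngine.InstalledRecognizers()
                .First(r => r.Culture.Name == "en-GB");
            SpeechRecognitionEngine engine = new SpeechRecognitionEngine(recognizer.Id);

            // A tiny command grammar - mine was rather longer, of course.
            Choices commands = new Choices("zoom", "rotate", "pan");
            GrammarBuilder builder = new GrammarBuilder(commands) { Culture = recognizer.Culture };
            engine.LoadGrammar(new Grammar(builder));
            engine.SpeechRecognized += (s, e) =>
            {
                if (e.Result.Confidence > 0.7)   // ignore low-confidence guesses
                    Console.WriteLine("Heard: " + e.Result.Text);
            };

            Stream stream = audio.Start();
            engine.SetInputToAudioStream(stream, new SpeechAudioFormatInfo(
                EncodingFormat.Pcm, 16000, 16, 1, 32000, 2, null));
            engine.RecognizeAsync(RecognizeMode.Multiple);

            Console.ReadLine();   // keep listening until Enter is pressed
            sensor.Stop();
        }
    }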

I hope to run these sessions in another school in January, so it will be interesting to see if I get similar results. In the meantime I have enough information to begin writing up my findings. Here’s hoping I can articulate what I’ve done in such a way as to yield marks. I fear a discrepancy between the amount of effort I’ve put into this project and the nature of the marking scheme that will define my overall grade.

This evening I released a short video of a WPF User Control I made. It’s a little plug-and-play component for the Kinect SDK (beta 2…probably works in beta 1 too) that emulates the touch-and-hold style pointer system used in so many Kinect applications.

I will tidy this up in the coming days and release it to the community. It’s rather simple and I’ve seen various implementations of this elsewhere on the internet, but none of them have been released formally. If I can help someone out with this then great!
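In the meantime, if you can’t wait, the core idea is tiny. This is a hypothetical sketch (DwellSelector and its members are invented names for illustration, not the actual control): feed it the cursor position once per frame and it raises a selection when the cursor has dwelt on one element for long enough.

    using System;
    using System.Windows;
    using System.Windows.Media;

    public class DwellSelector
    {
        private const double DwellMilliseconds = 1500;   // hold time before a 'click'
        private FrameworkElement _hoverTarget;
        private DateTime _hoverStart;

        public event Action<FrameworkElement> Selected;

        // Call once per skeleton frame with the cursor position relative
        // to the root visual.
        public void UpdateCursor(Visual root, Point cursor)
        {
            HitTestResult hit = VisualTreeHelper.HitTest(root, cursor);
            FrameworkElement target =
                hit == null ? null : hit.VisualHit as FrameworkElement;

            if (target != _hoverTarget)
            {
                // Moved onto a different element (or off everything):
                // restart the dwell clock.
                _hoverTarget = target;
                _hoverStart = DateTime.Now;
            }
            else if (target != null &&
                     (DateTime.Now - _hoverStart).TotalMilliseconds >= DwellMilliseconds)
            {
                // Held steady long enough: treat it as a click.
                if (Selected != null) Selected(target);
                _hoverTarget = null;
            }
        }
    }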

As an aside, I was stunned to see this get 150 hits in about two minutes. I was equally stunned to see the counter immediately reset to zero; it hasn’t counted up since. Of course, I’m inclined to believe the more flattering score. Naturally :)

It’s three days since I somehow managed to get my KinectDTW project onto the Slashdot front page, and I’m still trying to get my head around the response. 220 people downloaded the release and 60 grabbed the source from the repo. OK, those are far from Earth-shattering figures, but I think they show healthy enthusiasm for what I’m trying to achieve here, which is to give people a tool to get started with making their own Kinect-based gesture control systems.

I’ve had lots of feedback too, almost universally positive. The few detractors just don’t like Microsoft or the Kinect; nobody has said anything bad about the KinectDTW project itself. So the next challenge must be to become established in the community, rather than just a novelty. I’ve no idea how to do this, but I’ll work something out. A few well-placed links will be a good start, but then I’ll need to follow up by improving the system, perhaps moving it towards open standards, and definitely by responding to feedback.

So if you’re yet to see it, check out KinectDTW.codeplex.com. It’s far from perfect, but that’s kind of the point: I’d love the community to pick this up and roll with it.

This evening I finally published the gesture recording and recognition project I’ve been working on. With the help of the Kinect community, especially a member who goes by the name of Rhemyst, we have produced a library which introduces developers to vector-based gesture recognition.

Many of the approaches I’ve seen elsewhere use specific positional tracking to recognise gestures – i.e. tracking a hand and matching its movement profile against a series of coordinates or something. This works well, of course, and can offer very good recognition. But the Dynamic Time Warping approach is more flexible in that it can be very easily programmed by a novice. It’s great for rapid prototyping and, with the help of the community, I hope this can grow into a production-capable recognition engine. It’s not quite there yet, though…
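If you’ve not met DTW before, the heart of it is one small dynamic-programming recurrence. KinectDTW actually compares higher-dimensional vectors of joint positions, but a 2-D sketch shows the idea:

    using System;
    using System.Collections.Generic;
    using System.Windows;

    static class Dtw
    {
        // Dynamic Time Warping distance between two gesture paths.
        // Each path is a sequence of 2-D points; lower = more similar.
        public static double Distance(IList<Point> a, IList<Point> b)
        {
            int n = a.Count, m = b.Count;
            double[,] d = new double[n + 1, m + 1];

            for (int i = 0; i <= n; i++)
                for (int j = 0; j <= m; j++)
                    d[i, j] = double.PositiveInfinity;
            d[0, 0] = 0.0;

            for (int i = 1; i <= n; i++)
            {
                for (int j = 1; j <= m; j++)
                {
                    // Cost of aligning point i of path A with point j of path B.
                    double cost = Euclidean(a[i - 1], b[j - 1]);
                    d[i, j] = cost + Math.Min(d[i - 1, j],         // stretch path B
                                     Math.Min(d[i, j - 1],         // stretch path A
                                              d[i - 1, j - 1]));   // advance both
                }
            }
            return d[n, m];
        }

        private static double Euclidean(Point p, Point q)
        {
            double dx = p.X - q.X, dy = p.Y - q.Y;
            return Math.Sqrt(dx * dx + dy * dy);
        }
    }

Recognition is then just a matter of recording a template path per gesture and picking the template with the smallest distance (below some threshold) to the live movement.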

So what are you waiting for? Grab a copy of the first release of KinectDTW from Codeplex now!

kinectdtw.codeplex.com

Please share your recorded gestures and recognition parameters with the community so that we can all learn and benefit from your experience!

Steve

Another little piece of the jigsaw: controlling a WPF ScrollViewer (with added animated easing wizardry) using swipe gestures. Freakin’ yah!

Yeah, one of those.

The trouble with the standard WPF ScrollViewer is that it doesn’t scroll very nicely. Firstly, it snaps to the extremities of its child elements (i.e. to the edges of images), making smooth scrolling impossible. Secondly, its scroll offsets aren’t settable dependency properties, so .NET’s reasonably powerful animation system can’t drive them directly. I sorted this by making my own animation mediator for the ScrollViewer, enabling all that stuff I just mentioned. Here’s a demo:

The intention is to hook this up to the Kinect gesture recogniser so that natural interactions (i.e. swipe gestures) can have a ‘natural’ effect on a menu system. Because, really, if you perform a swipe action on an object you don’t expect it to move uniformly and snap to unnatural positions; instead you would expect it to have inertia, to decelerate, and to rest in a natural position (i.e. a function of how hard you ‘pushed’ it). This WPF extension achieves that.
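The essence of the mediator is small: because a DoubleAnimation can’t target ScrollViewer’s offsets directly, an attached property that forwards its animated value to ScrollToHorizontalOffset() bridges the gap. Here’s a simplified sketch of the pattern, not my exact code:

    using System;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Media.Animation;

    public static class ScrollViewerAnimator
    {
        // An animatable stand-in for ScrollViewer.HorizontalOffset.
        public static readonly DependencyProperty HorizontalOffsetProperty =
            DependencyProperty.RegisterAttached(
                "HorizontalOffset", typeof(double), typeof(ScrollViewerAnimator),
                new PropertyMetadata(0.0, OnHorizontalOffsetChanged));

        public static double GetHorizontalOffset(DependencyObject obj)
        {
            return (double)obj.GetValue(HorizontalOffsetProperty);
        }

        public static void SetHorizontalOffset(DependencyObject obj, double value)
        {
            obj.SetValue(HorizontalOffsetProperty, value);
        }

        private static void OnHorizontalOffsetChanged(
            DependencyObject d, DependencyPropertyChangedEventArgs e)
        {
            // Forward each animated value to the real (non-animatable) API.
            ScrollViewer viewer = d as ScrollViewer;
            if (viewer != null)
                viewer.ScrollToHorizontalOffset((double)e.NewValue);
        }
    }

Then, when a swipe is recognised, you glide to wherever it ‘pushed’ the content (targetOffset and myScrollViewer here are placeholders):

    // Seed the attached property from the real offset, then glide to
    // targetOffset over 600 ms, decelerating to rest.
    ScrollViewerAnimator.SetHorizontalOffset(myScrollViewer, myScrollViewer.HorizontalOffset);
    DoubleAnimation glide = new DoubleAnimation(targetOffset,
        TimeSpan.FromMilliseconds(600))
    {
        EasingFunction = new CubicEase { EasingMode = EasingMode.EaseOut }
    };
    myScrollViewer.BeginAnimation(ScrollViewerAnimator.HorizontalOffsetProperty, glide);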