Archive for the ‘Kinect’ Category

Likert scale survey results. Presenting your findings is always tricky. In my example, the children I taught filled in forms with questions that are statements with which they can strongly agree, agree, be neutral, disagree or strongly disagree. So ordinal options: the 5-point Likert scale.

I need to provide an overview of the results. I’m thinking of a stacked bar like this:

(Click to embiggenatte.)

Trouble is, there are many good reasons why analysing ordinal data like this is a bad thing. Furthermore, I’ve taken some liberties myself.

1) Interval data (e.g. absolute measures of, say, temperature over time) can be plotted like this. Ordinal data is arbitrary. E.g. who is to say that Person A’s “strongly agree” is the same as Person B’s “strongly agree”? Therefore mapping their results together is flawed.

2) The same is true between questions answered by the same person. Statement 1 might be “strongly agree”, as might Statement 2, but is it the same strength of feeling? What if Statement 1 were “Murder is bad” and Statement 2 were “Mars Bars taste nice”? I strongly agree with both, but the strength of feeling is obviously different.

3) Similar issues exist between all the options in a single question. E.g. is the gap between “agree” and “strongly agree” the same as between “neutral” and “agree”?

4) I’ve taken the liberty of plotting the “positive” and “negative” responses in opposite directions so as to give instant visual feedback on how the responses balance each other.

5) I’ve also taken the liberty of distributing the “neutral” responses equally across the +ve and -ve axes. A bit naughty. I could ‘zero’ the neutral responses, which would give a fairer comparison between the explicitly +ve and -ve responses, but then they would drop off the chart completely.
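The neutral-splitting trick above can be sketched like this. The counts are made-up illustration numbers, not my actual survey data, and `diverging_segments` is just a hypothetical helper name:

```python
# Sketch: turning Likert counts into diverging stacked-bar segments, with
# the "neutral" count shared equally between the negative and positive sides.

def diverging_segments(counts):
    """counts: dict with keys sd, d, n, a, sa
    (strongly disagree, disagree, neutral, agree, strongly agree).
    Returns (negative_side, positive_side) segment lists."""
    half_neutral = counts["n"] / 2
    negative = [counts["sd"], counts["d"], half_neutral]  # plotted left of zero
    positive = [half_neutral, counts["a"], counts["sa"]]  # plotted right of zero
    return negative, positive

neg, pos = diverging_segments({"sd": 1, "d": 3, "n": 4, "a": 10, "sa": 6})
print(neg, pos)  # [1, 3, 2.0] [2.0, 10, 6]
```

Zeroing the neutrals instead would simply mean dropping `half_neutral` from both lists, which is exactly why they vanish from the chart.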

Having said all this, the stacked bar gives a good overall representation of the survey results.

I could present the combined data as clustered bars. This weakens the implied comparison between questions and between the answer options within a question, but it’s basically the same data. Like this:

(Click for enlargification.)

Question is: is this misleading and/or can you think of any other way I could present the data?


This evening I released a short video of a WPF user control I made. It’s a little plug-and-play component for the Kinect SDK (beta 2…probably works in beta 1 too) that emulates the touch-and-hold style pointer system used in so many Kinect applications.

I will tidy this up in the coming days and release it to the community. It’s rather simple and I’ve seen various implementations of this elsewhere on the internet, but none of them have been released formally. If I can help someone out with this then great!

Aside, I was stunned to see this get 150 hits in about two minutes. I was equally stunned to see the counter immediately reset to zero and it hasn’t counted up since. Of course, I’m inclined to believe the more flattering score. Naturally :)

…and shooting some Bieber outside of the school. The cops caught me, and this explains my recent absence from the world of blogging.

Actually, I took a little break because my inspiration had just about dried up. Working 70-hour weeks for three years does that to ya.

But I’m back, and I’ll keep posting about cool stuff that I make and do. Here’s one such thing:

This shows a comparison between two normalised sequences. The straighter and more diagonal the line, the closer the match. Dotted lines show periodic samples of time warping (note irregular spacing shows that warping has occurred in one direction or another). The difficulty here is making this understandable to a broad audience.

And another:

Here’s another sequence comparison, visualised differently. This time the graphs are overlaid (albeit offset) and time-warping events are shown by the orange lines. If you imagine a perfect match between two sequences, there would be no time warping and thus the orange lines would all be vertical. Add up the overall ‘lengths’ of the orange lines and you get the ‘cost’ of the transformation, as in the graph above. When comparing multiple queries against a single reference, the shortest ‘distance’ is the best match. Hey presto! Gesture recognition!
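For anyone curious, the warping ‘cost’ described above can be sketched with the standard dynamic time warping recurrence. This is a minimal illustration of the general technique, not the actual code behind these graphs:

```python
# Minimal dynamic time warping between two 1-D sequences.
# Returns the total warping "cost": lower cost = closer match.

def dtw_cost(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best cumulative cost aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j - 1],  # diagonal: no warp
                                 cost[i - 1][j],      # warp in one direction
                                 cost[i][j - 1])      # warp in the other
    return cost[n][m]

print(dtw_cost([0, 1, 2, 3], [0, 1, 2, 3]))     # identical: 0.0
print(dtw_cost([0, 1, 2, 3], [0, 0, 1, 2, 3]))  # time-shifted: still 0.0
```

The second call shows why DTW beats naive point-by-point comparison: the shifted sequence still matches perfectly because the warping absorbs the shift.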

DTW graphing

Posted: September 4, 2011 in Dissertation, Kinect

I have made good progress in writing up some of the more mathematical aspects of my project. I’m having difficulty knowing at which level to pitch the work. I have to assume a reasonable mathematical understanding otherwise I’ll just burn up my word count describing simple principles. However, if I pitch it too highly then I could miss marks for not being explanatory enough. I’ll talk about this with my supervisor. After all, she’ll be marking the work, so it’s best to ask what she would like to see.

Needless to say, there will be lots of graphs. I’m trying to work out a good way of displaying multi-variate dataset comparisons on one graph. DTW works equally well with flat arrays or 2D arrays of data. This project uses 2D arrays because, firstly, we’re dealing with co-ordinates (i.e. data pairs, so I’m forced to use multi-variate data), but also because I’m tracking six joints, so I have 12 pieces of data per time frame.
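As a sketch of what multi-variate means in practice: each time frame becomes a 12-element vector (x, y for six joints), and the per-frame distance fed into DTW becomes a Euclidean distance between vectors. The values below are illustrative only:

```python
import math

def frame_distance(f1, f2):
    """Euclidean distance between two 12-element frames
    (x, y coordinates for six tracked joints)."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(f1, f2)))

# The DTW cost matrix is then filled exactly as in the 1-D case, but with
# frame_distance(a[i], b[j]) in place of abs(a[i] - b[j]).
frame_a = [0.0] * 12
frame_b = [0.5] * 12
print(frame_distance(frame_a, frame_b))  # sqrt(12 * 0.25) ≈ 1.732
```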

I think I’ll show an aggregated view like this, but properly labelled. I’ll then extract a few individual columns and use a three-way plot of some description to show slopes, lines of least cost etc for several sequences (i.e. similar, dissimilar etc.).

It’s too late for this. The graph looks wrong somehow but I can’t see why. I’ll work it out tomorrow.

It’s three days since I somehow managed to get my KinectDTW project onto the SlashDot front page, and I’m still trying to get my head around the response. 220 release downloads and 60 grabbed the source from the repo. Ok, so far from Earth-shattering figures, but I think it shows healthy enthusiasm for what I’m trying to achieve here. That is to say I wanted to give people a tool to get them started with making their own Kinect-based gesture control systems.

I’ve had lots of feedback too, almost universally positive. Any detractors just don’t like Microsoft or the Kinect; nobody has said anything bad about the KinectDTW project itself. So the next challenge must be to become established in the community, rather than remaining a novelty. I’ve no idea how to do this, but I’ll work something out. A few well-placed links will be a good start, but then I will need to follow up by improving the system, perhaps making it open-standards, and definitely by responding to feedback.

So if you’re yet to see it, check out KinectDTW on Codeplex. It’s far from perfect, but that’s kind of the point: I’d love the community to pick this up and roll with it.

This evening I finally published the gesture recording and recognition project I’ve been working on. With the help of the Kinect community, especially a member who goes by the name of Rhemyst, we have produced a library which introduces developers to vector-based gesture recognition.

Many of the approaches I’ve seen elsewhere use specific positional tracking to recognise gestures – i.e. tracking a hand and matching its movement profile against a series of coordinates or something. This is great, of course, and can actually offer very good recognition. But the Dynamic Time Warping approach is more flexible in that it can be very easily programmed by a novice. It’s great for rapid prototyping and, with the help of the community, I hope this can grow into a production-capable recognition engine. It’s not quite there yet, though…
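The “shortest distance wins” recognition step can be sketched like this. The DTW cost here is a stand-in on toy 1-D sequences and the gesture names are hypothetical; this is the general idea, not the KinectDTW API:

```python
# Nearest-neighbour gesture recognition: compare a query against every
# recorded reference gesture and pick the one with the lowest warping cost.

def dtw_cost(a, b):
    INF = float("inf")
    cost = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
    cost[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost[i][j] = abs(a[i - 1] - b[j - 1]) + min(
                cost[i - 1][j - 1], cost[i - 1][j], cost[i][j - 1])
    return cost[-1][-1]

def classify(query, references):
    """references: dict mapping gesture name -> recorded sequence."""
    return min(references, key=lambda name: dtw_cost(query, references[name]))

refs = {"swipe": [0, 1, 2, 3], "wave": [0, 2, 0, 2]}
print(classify([0, 1, 1, 2, 3], refs))  # swipe
```

In a real recogniser you would also threshold the winning cost, so that a poor best match is rejected rather than forced into the nearest gesture.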

So what are you waiting for? Grab a copy of the first release of KinectDTW from Codeplex now!

Please share your recorded gestures and recognition parameters with the community so that we can all learn and benefit from your experience!


Another little piece of the jigsaw: controlling a WPF ScrollViewer (with added animated easing wizardry) using swipe gestures. Freakin’ yah!