Anyone who has worked with Kinect will tell you that the information you get out of the sensor is somewhat noisy. This is particularly problematic if you’re doing avateering: you can’t just apply the data directly to a model - you’ll need to do some filtering, such as Holt double exponential smoothing.
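For context, Holt (double exponential) smoothing keeps both a smoothed position and a trend estimate for each joint, so steady motion isn’t damped as heavily as frame-to-frame jitter. Below is a minimal sketch of the idea applied to one joint position per frame; the alpha/beta values and the small Vec3 helper are illustrative assumptions, not the parameters from the actual project.

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def scale(self, k):
        return Vec3(self.x * k, self.y * k, self.z * k)

    def add(self, o):
        return Vec3(self.x + o.x, self.y + o.y, self.z + o.z)

    def sub(self, o):
        return Vec3(self.x - o.x, self.y - o.y, self.z - o.z)


class HoltSmoother:
    """Smooths one joint position per frame using level + trend terms."""

    def __init__(self, alpha=0.5, beta=0.3):
        self.alpha = alpha   # how much to trust the raw measurement
        self.beta = beta     # how quickly the trend estimate adapts
        self.level = None    # smoothed position
        self.trend = None    # estimated per-frame velocity

    def update(self, raw: Vec3) -> Vec3:
        if self.level is None:
            # First frame: no history yet, pass the measurement through.
            self.level, self.trend = raw, Vec3(0.0, 0.0, 0.0)
            return raw

        prev_level = self.level
        # Level: blend the new measurement with the previous prediction.
        prediction = prev_level.add(self.trend)
        self.level = raw.scale(self.alpha).add(prediction.scale(1.0 - self.alpha))
        # Trend: blend the observed change in level with the previous trend.
        delta = self.level.sub(prev_level)
        self.trend = delta.scale(self.beta).add(self.trend.scale(1.0 - self.beta))
        return self.level


# Usage: one smoother per tracked joint, fed the raw Kinect position each frame.
knee_smoother = HoltSmoother()
smoothed = knee_smoother.update(Vec3(0.12, 0.48, 2.01))
```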
We were nearing the deadline for an interactive installation when we got a very odd report: the client said that while everything worked perfectly in 99% of cases, when one particular QA user tried the application the avatar’s legs suddenly started kicking around wildly.
This made no sense to us - the algorithms we were using were in no way person-specific. I asked them to send us some video of how the render looked, and sure enough, the legs were flailing around like an epileptic octopus.
Intrigued, we moved to the next step in debugging and checked a recording of the raw Kinect data. This immediately showed us what the problem was: Kinect was seeing the user as a floating torso.
What gives?
I had lots of interesting conversations after my talk at the recent nucl.ai conference in Vienna, but one in particular sticks in my mind. I was talking to an indie game developer who’s using the Kinect for a project at his college, and he remarked:
Yeah, the Kinect is great. Too bad it failed.
It’s an interesting sentiment because it’s a gamer’s, or game developer’s, perspective. In the home, as a game controller, the Kinect was definitely a failure. That is no surprise: I’m on record as saying that game control and avateering are perhaps the two tasks the Kinect is worst at (even though they are the two tasks for which it keeps being pushed).
But then there are all the areas where the Kinect succeeds that have nothing to do with game development.