While experimenting with new applications for Kinect's real-time, 3-D imaging capability, San Francisco freelance coder Robert Hodgin discovered a way to make people's bodies appear puffy and grotesque onscreen. The Kinect uses a near-infrared camera and sensors to produce a 3-D depth map that indicates a person's spatial location. Its depth camera is offset from the camera that collects color data, which causes the depth map to produce shadows on the sides of the person in view.
Hodgin thought that he could disguise the shadows by making the person slightly fatter, so he undertook the most challenging part of the project: creating a "normal map" of the body. In computer graphics, a "normal" is a perpendicular line extending from a point on a 3-D object that lets designers see the spatial orientation of different points on the surface. (Adding a bunch of normals to a head, Hodgin says, makes the person look like Pinhead from the Hellraiser movies.) He obscured the undesirable shadows by extending the contours of the head and body slightly outward along each of the normals, turning himself into an Incredible Hulk-like creature.
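The fattening trick amounts to pushing every vertex of the body mesh a small distance outward along its normal. A minimal sketch of that idea, assuming the mesh is available as NumPy arrays of vertex positions and per-vertex normals (the function name and offset amount here are illustrative, not Hodgin's actual code):

```python
import numpy as np

def inflate(vertices, normals, amount=0.02):
    """Offset each vertex outward along its (normalized) normal.

    vertices: (N, 3) array of vertex positions
    normals:  (N, 3) array of per-vertex normals (need not be unit length)
    amount:   how far to push each vertex, in the mesh's units
    """
    # Normalize each normal so the offset distance is uniform.
    unit = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    return vertices + amount * unit

# Two toy vertices with axis-aligned normals:
verts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
norms = np.array([[2.0, 0.0, 0.0], [0.0, 3.0, 0.0]])
inflated = inflate(verts, norms, amount=0.1)  # each vertex moves 0.1 outward
```

A larger `amount` exaggerates the Hulk effect; a small one just hides the depth-camera shadows.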
Cost: $150
Time: 10 hours
Two More Far-Out Kinect Projects
3-D TERMINATOR
Cost: $550
Time: 80 hours
Belgian software developer Davy Loots turned a Terminator graphic he found online into a 3-D, voice-activated personal assistant that can tell him the time or check the news. An LED projector displays the image on a wall, and speech software handles the voice processing. Loots wrote software to create an optical illusion that makes the Terminator's head appear three-dimensional when seen from a certain angle, and added code that reorients the head so that it faces the user based on spatial information from the Kinect.
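Reorienting the head toward the user comes down to converting the viewer's position, as reported by the Kinect, into a rotation angle. A hedged sketch of that geometry (the function and parameter names are assumptions, not Loots's code):

```python
import math

def head_yaw_degrees(user_x, user_z):
    """Yaw angle that turns the projected head toward the user.

    user_x: the user's lateral offset from the display's center line (meters)
    user_z: the user's distance from the display (meters)
    """
    # atan2 gives the angle between straight-ahead and the user's position.
    return math.degrees(math.atan2(user_x, user_z))

# A user standing 1 m to the side at 1 m distance sits at a 45-degree angle.
angle = head_yaw_degrees(1.0, 1.0)
```

Feeding this angle into the renderer each frame keeps the Terminator's gaze locked on whoever the Kinect is tracking.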
MOVING EYE
Cost: $700
Time: 4 hours
With a Kinect and a 900-millimeter spherical display, British programmer Jon George created a giant eyeball that follows people around the room. Using an open-source graphics package, George converted a rectangular image of an eyeball into a spherical version. He then programmed the eye to move left or right in response to a person's position relative to the camera.
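The eye-tracking step can be reduced to mapping the tracked person's lateral position onto the eye's rotation range. A minimal sketch under assumed parameters (the rotation limit and trackable width below are illustrative, not George's values):

```python
def eye_yaw(person_x, max_yaw=60.0, half_width=2.0):
    """Map a person's lateral offset to an eye rotation angle.

    person_x:  lateral offset from the camera's center line (meters)
    max_yaw:   how far the eye may rotate in either direction (degrees)
    half_width: lateral range the Kinect is expected to track (meters)
    """
    # Clamp to the trackable range, then scale linearly onto the
    # eye's rotation limits.
    frac = max(-1.0, min(1.0, person_x / half_width))
    return frac * max_yaw

left = eye_yaw(-2.0)   # person at the far left -> eye fully left
center = eye_yaw(0.0)  # person dead ahead -> eye looks straight out
```

Re-evaluating this once per frame as the Kinect updates the person's position produces the "following" effect.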