Project in Action
We used a Microsoft Kinect to track the sandbox and a regular projector to project the color image back onto the sand. In addition, our setup used a large mirror mounted on the wall.
Our goal was to implement an augmented sandbox application for Windows. We decided to build on the Unity3D engine so we could use its physics and animation capabilities for additional simulations.
To get the Kinect working in Unity3D we used the Unity3D Kinect plugin by Carnegie Mellon University. For image processing we used the AForge.NET library.
The Kinect depth image is cropped and smoothed with a Gaussian filter. Afterwards a mesh is created and updated in Unity3D to represent the sandbox surface. This mesh was intended to be used for physics simulation, but no physics made it into the final project. For the color we use a self-written shader that blends different layer textures depending on height.
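To make the pipeline concrete, here is an illustrative sketch in Python rather than the project's actual C#/Unity code: cropping and Gaussian-smoothing a depth image, then blending layer colors by height the way the shader blends layer textures. All names, the crop rectangle, and the use of solid colors instead of textures are assumptions for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def process_depth(depth, crop, sigma=2.0):
    """Crop the raw depth image to the sandbox area and smooth it
    with a Gaussian filter, as done before the mesh update."""
    top, bottom, left, right = crop
    cropped = depth[top:bottom, left:right].astype(np.float32)
    return gaussian_filter(cropped, sigma=sigma)

def blend_layers(height, layers):
    """Blend layer colors by normalized height, analogous to the
    shader blending layer textures (solid RGB colors here)."""
    t = (height - height.min()) / max(np.ptp(height), 1e-6)
    anchors = np.linspace(0.0, 1.0, len(layers))
    layers = np.asarray(layers, dtype=np.float32)
    # interpolate each color channel between the layer anchors
    return np.stack(
        [np.interp(t, anchors, layers[:, c]) for c in range(3)], axis=-1
    )
```

In the real project the smoothing runs on the CPU via AForge.NET and the blending runs per-pixel in the shader; this sketch only shows the same operations in one place.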
During development we encountered some smaller problems.
Edges of the Box
The semi-transparent edges of our sandbox weren't captured correctly by the Kinect and caused value spikes in the depth image. On top of that, the Gaussian filter couldn't handle those spikes and smeared them into large squares in the filtered image. As a solution we simply taped the edges of the box so they were captured correctly.
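Tape solved it for us, but the spikes could also have been suppressed in software before filtering. A hypothetical sketch, assuming the valid sandbox depth range is known: readings outside that range are replaced so the Gaussian filter has nothing to smear.

```python
import numpy as np

def clamp_spikes(depth, lo, hi):
    """Replace implausible depth readings (e.g. from semi-transparent
    box edges) before smoothing, so the Gaussian filter does not
    smear them into large squares."""
    valid = (depth >= lo) & (depth <= hi)
    clamped = np.clip(depth, lo, hi)
    # fill clearly invalid pixels with the median of the valid ones
    clamped[~valid] = np.median(depth[valid])
    return clamped
```

This is only one possible fix; physically taping the edges was simpler and worked reliably.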
Calibration
For calibration we planned an automatic calibration via QR markers. Unfortunately the Kinect's RGB camera had problems with the mirror, so we dropped that approach: the RGB image was far too distorted for our QR tracking to detect any markers. The source code for QR detection is still in the project, but the calibration is not implemented. Instead we used a simple manual calibration through mouse clicks and by moving the camera around in the Unity3D scene.
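For reference, a click-based calibration of this kind is often formalized as a homography: clicking the four sandbox corners gives point pairs from which a 3x3 mapping can be estimated. The sketch below is not our project's code (we simply adjusted the Unity camera by hand); it shows the standard direct linear transform, with all names being assumptions.

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 homography mapping four clicked source points
    to four destination points (direct linear transform via SVD)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the homography is the null vector of the constraint matrix
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=np.float64))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Map a point through the homography (with perspective divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```

With such a mapping, the projected image could be warped onto the sandbox automatically instead of repositioning the scene camera by hand.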