Augmented Reality Projection Mapping

In this post, I will show you a bit of my process of creating the AR-Toolkit, a prototyping kit for Unity to create augmented reality games really quickly.

I saw an “AR Sandbox” at the IAAPA Expo, which really amazed me. The sandbox inspired me to start doing research on the Kinect, because I wanted to learn more about 3D motion sensors. During the December 2018 holidays, I got my hands on an Xbox Kinect V2.

After a couple of days, I discovered how the AR Sandbox was made. I thought about making my own version of the sandbox, but so many people have already made different versions of it that I wanted to do something different. I came up with the idea of calculating an in-game collider for a real-life physical object. After trying a lot of different things, I finished my first prototype.

First Prototype:

Collider calculations

In the beginning, I used a convex hull algorithm to calculate the border. When I put multiple objects on the table, it would create one big border around all of them, so I decided to implement a concave hull algorithm to get more detailed borders. I also came up with my own algorithm to separate the different point clouds from each other, which was really useful for calculating separate colliders for multiple objects. Calculating the concave hull was really expensive, and since I could now distinguish the objects from each other anyway, the concave algorithm became a bit unnecessary, so I switched back to the convex hull.
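To make that a bit more concrete, here is a minimal sketch of those two steps (this is not the toolkit’s actual code, and all the names are made up for illustration): the depth points are first grouped into clusters, and each cluster is then wrapped in a convex hull that can be pushed into a Unity PolygonCollider2D.

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

public static class HullColliderSketch
{
    // Group 2D depth points into separate objects: two points belong to the
    // same cluster when they are closer together than maxDistance.
    // (A naive O(n²) flood fill, just to illustrate the idea.)
    public static List<List<Vector2>> Cluster(List<Vector2> points, float maxDistance)
    {
        var clusters = new List<List<Vector2>>();
        var visited = new bool[points.Count];

        for (int i = 0; i < points.Count; i++)
        {
            if (visited[i]) continue;

            var cluster = new List<Vector2>();
            var stack = new Stack<int>();
            stack.Push(i);
            visited[i] = true;

            while (stack.Count > 0)
            {
                int current = stack.Pop();
                cluster.Add(points[current]);

                for (int j = 0; j < points.Count; j++)
                {
                    if (visited[j] || Vector2.Distance(points[current], points[j]) >= maxDistance)
                        continue;
                    visited[j] = true;
                    stack.Push(j);
                }
            }
            clusters.Add(cluster);
        }
        return clusters;
    }

    // Convex hull of one cluster (Andrew's monotone chain algorithm).
    public static List<Vector2> ConvexHull(List<Vector2> points)
    {
        var p = points.OrderBy(v => v.x).ThenBy(v => v.y).ToList();
        if (p.Count < 3) return p;

        var hull = new List<Vector2>();

        // Lower half of the hull.
        for (int i = 0; i < p.Count; i++)
        {
            while (hull.Count >= 2 && Cross(hull[hull.Count - 2], hull[hull.Count - 1], p[i]) <= 0)
                hull.RemoveAt(hull.Count - 1);
            hull.Add(p[i]);
        }

        // Upper half of the hull.
        int lowerCount = hull.Count + 1;
        for (int i = p.Count - 2; i >= 0; i--)
        {
            while (hull.Count >= lowerCount && Cross(hull[hull.Count - 2], hull[hull.Count - 1], p[i]) <= 0)
                hull.RemoveAt(hull.Count - 1);
            hull.Add(p[i]);
        }

        hull.RemoveAt(hull.Count - 1); // last point is a duplicate of the first
        return hull;
    }

    static float Cross(Vector2 o, Vector2 a, Vector2 b) =>
        (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);

    // Feed one hull into a PolygonCollider2D so the real object
    // becomes a physical obstacle in the game.
    public static void ApplyToCollider(PolygonCollider2D collider, List<Vector2> hull) =>
        collider.points = hull.ToArray();
}
```

The naive O(n²) clustering is only there to show the idea; on a full depth frame you would want something grid-based.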

Early prototypes don’t have to be expensive. Yes, that’s my mother’s ironing board…

Projection Mapping

The colliders were now working, so I started on the projection mapping part. I borrowed an old projector and hung it from my ceiling. I got it to work after tweaking the projection scale and position (in Unity). The calibration wasn’t really perfect, but the big offset on the sides of the objects was caused by the projector’s aspect ratio. In this setup there was also a pretty big gap between the position of the Kinect and the projector, which made calibrating harder. For an early prototype, I was really happy with the result.

From here I started optimizing my algorithms, got a proper table, and bought a new projector. I also started working on several tools and functions to make development easier for me. One of those was a simple calibration system.
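The calibration system itself is nothing fancy. Conceptually it boils down to mapping a point from the sensor’s space into the projector’s space with an offset, a scale, and a flip, and then tweaking those values until the projection lines up with the real objects. A minimal sketch of that idea (the class and field names are made up, not the toolkit’s API):

```csharp
using UnityEngine;

// Maps a point measured by the depth sensor into the projected (game) space.
// The values are found by projecting a test pattern and nudging them
// until the projection lines up with the physical objects on the table.
public class ProjectionCalibration : MonoBehaviour
{
    public Vector2 offset = Vector2.zero; // shifts the whole projection
    public Vector2 scale = Vector2.one;   // compensates for distance and aspect ratio
    public bool flipX = false;            // sensor and projector can face the table differently
    public bool flipY = true;

    public Vector2 SensorToProjector(Vector2 sensorPoint)
    {
        var p = sensorPoint;
        if (flipX) p.x = -p.x;
        if (flipY) p.y = -p.y;
        return Vector2.Scale(p, scale) + offset;
    }
}
```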

First try with the projector

Improve, Improve, Improve...

 

I also made a simple game with the current technology. Creating small games in between development especially helped me think of important features for my toolkit.

At this point, I started thinking about a lot of ways I could use this technology in games. One of my ideas was to use it for a laser puzzle (little did I know how much precision this would require).

 

There is a lot of noise in the Kinect depth output. You don’t really see it in the videos, but the colliders are flickering a lot. Another big problem is that Microsoft stopped production of the Kinect, so I was already looking for a good replacement. I got a new Intel RealSense D435 (IRS) and implemented it in my current projects.

Looking at the specifications, the IRS is much better than the Kinect: 1280×720 vs 512×424, and 90 fps vs 30 fps. But in practice the IRS is actually worse (for my usage); the depth images have a lot more noise. I did a lot of testing with both sensors. The width and height resolution of the IRS is better, but the depth resolution of the Kinect is more precise. This is because the Kinect uses time-of-flight technology, while the IRS computes depth from a projected infrared pattern (active stereo). That approach produces more noise, especially on the edges of objects, which in my case is really bad: I want to reflect lasers off those edges, and any error in an edge’s direction is doubled in the reflected beam.

In the Intel RealSense Viewer, they apply a lot of postprocessing effects to the depth image, and with all of those effects it actually looks pretty good. Their Unity SDK includes a few of these effects that you can use, but they are quite expensive and still not as good as the raw output of the Kinect.

I switched back to the Kinect and decided to create my own postprocessing effects for it. I made a few different algorithms to smooth the data and to filter out the errors. This worked pretty well: the update rate is pretty slow, but the colliders don’t stutter anymore.
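As an illustration of the kind of filtering I mean, here is a minimal sketch (not the exact algorithms I use): pixels without a depth reading are ignored, and every other pixel is smoothed over time with an exponential moving average. It shows exactly the trade-off described above: a slower reaction to change in exchange for colliders that stop flickering.

```csharp
using UnityEngine;

// Per-pixel temporal smoothing of a depth frame (e.g. the ushort[] that
// the Kinect v2 delivers). A lower alpha is smoother but slower to react.
public class DepthSmoother
{
    readonly float[] smoothed; // last filtered depth per pixel, in millimetres
    readonly float alpha;

    public DepthSmoother(int pixelCount, float alpha = 0.2f)
    {
        smoothed = new float[pixelCount];
        this.alpha = alpha;
    }

    public float[] Apply(ushort[] rawDepth)
    {
        for (int i = 0; i < rawDepth.Length; i++)
        {
            float depth = rawDepth[i];
            if (depth <= 0f) continue; // 0 = no reading, keep the previous value

            smoothed[i] = smoothed[i] <= 0f
                ? depth                                  // first valid sample for this pixel
                : Mathf.Lerp(smoothed[i], depth, alpha); // exponential moving average
        }
        return smoothed;
    }
}
```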

Test with the new postprocessing effects

I spent almost a month inside the Unity deep profiler; a lot of stuff was changed and redesigned. This was a simple test of the improved algorithms (I seriously had goosebumps everywhere when I saw this in real life):

Laser Reflection Game

I am really in love with the effect, but this was not the goal of my project. My goal was to make a laser puzzle. I am happy with the end result, and for a demo it works pretty well, but I am not 100% satisfied yet. The projector has a pretty big delay, and sometimes the colliders don’t update; it looks like it is buffering. I do know the cause of this and I am going to improve it in the future, but first I am going to focus on another new prototype. My overall goal for the AR-Toolkit is to improve it through multiple different prototypes.
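For completeness, here is a minimal sketch of how a laser like this can be bounced around in Unity (a simplified stand-in, not the game’s actual code): a 2D ray is reflected off whatever colliders it hits, including the ones generated from the depth sensor, and the resulting path is drawn with a LineRenderer for the projector to display.

```csharp
using System.Collections.Generic;
using UnityEngine;

[RequireComponent(typeof(LineRenderer))]
public class LaserBeam : MonoBehaviour
{
    public int maxBounces = 8;
    public float maxDistance = 20f;

    void Update()
    {
        var points = new List<Vector3> { transform.position };
        Vector2 origin = transform.position;
        Vector2 direction = transform.right;

        for (int bounce = 0; bounce < maxBounces; bounce++)
        {
            RaycastHit2D hit = Physics2D.Raycast(origin, direction, maxDistance);
            if (!hit.collider)
            {
                points.Add(origin + direction * maxDistance); // nothing hit: the beam flies off
                break;
            }

            points.Add(hit.point);
            direction = Vector2.Reflect(direction, hit.normal); // mirror around the surface normal
            origin = hit.point + direction * 0.01f;             // nudge off the surface to avoid self-hits
        }

        var line = GetComponent<LineRenderer>();
        line.positionCount = points.Count;
        line.SetPositions(points.ToArray());
    }
}
```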

 

The first video on this page was recorded on 30 December 2018; the final laser puzzle was recorded on 8 May 2019. At the time of writing this report (20 June 2019), I have started working on a new project with the AR-Toolkit. I am still using the same techniques, but I am going to use them in a totally different way. I am really excited to share more about this, but I will save that for later 😉. Small spoiler: I gave the Intel RealSense a second chance.

When I was thinking about how to present this project, I knew that only showing the end product wouldn’t work. That’s why I tried to write this post in a different way: my main goal is to give a small insight into my design process and workflow, and my second goal is to inspire.