July 1, 2008
Most of you have probably seen Jeff Han’s multitouch screen, the Microsoft Surface, and all the other multitouch devices that are popping up everywhere. What’s the first thing that comes to your mind when you see those videos? If you are anything like me, you probably want three of them, preferably yesterday. Alas, I don’t have an extra 10,000 lying around to buy an MS Surface, so I decided to build my own.
Fortunately, the excellent NUI Group forum is filled with people who have the same intentions and are willing to share advice. There are dozens of blogs describing how to build a DIY multitouch screen. I’ve been planning on building one ever since I saw the Reactable in action a few years ago. Now that I have the time and motivation to actually do this, I have started the building process and I will try to document the steps here on the blog. I will also write how-to PDFs in both English and Finnish and release them once the table is finished.
I don’t want to go into too much detail about how different multitouch surfaces work, but I’ll quickly go over the science behind the technique I’m going to use. Basically, I’m going to build a multitouch table that uses the FTIR (frustrated total internal reflection) method that Jeff Han also uses in his screen.
So basically, you have an acrylic sheet that is lit up from the sides with infrared LEDs. The light travels inside the acrylic, trapped by total internal reflection. When you press a finger against the surface, you frustrate that reflection and the IR light scatters downwards, so your fingertip would look really bright below the acrylic if you could see IR light. So you need something that IS able to see infrared light. For this purpose you use a video camera that has been modified to pass infrared light and block visible light. Why not just use normal light, you might ask? Because you also want to project an image on the screen, and the light from the projector would disturb the finger tracking. What your camera sees is pretty much like this video (after some image processing to make the blobs brighter).
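To give you an idea of the kind of image processing involved in making the blobs stand out, here is a minimal sketch in plain Python. It simply thresholds a grayscale IR frame so that bright fingertip pixels become white and everything else black. The tiny hand-made frame is made up for illustration; a real frame would come from the camera.

```python
def threshold_frame(frame, cutoff=128):
    """Return a binary frame: 255 where a pixel is brighter than cutoff, else 0."""
    return [[255 if px > cutoff else 0 for px in row] for row in frame]

# Fake 4x4 IR camera frame: two bright spots (fingertips) on a dim background.
frame = [
    [10,  20,  15,  12],
    [18, 200, 210,  14],
    [11,  30,  25, 190],
    [ 9,  13,  17,  16],
]

binary = threshold_frame(frame)
# The bright fingertip pixels are now 255, the background is 0.
```

Real tracking software does quite a bit more than this (background subtraction, smoothing and so on), but thresholding is the basic idea.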
OK, so you have a bunch of bright blobs wherever someone touches the screen/table. How do those blobs then turn into a cool multitouch screen? You need software that detects the blobs and then either does something cool with that information itself or sends it along to some other software that might do something even cooler. The weapon of choice for many people is the Touchlib library, but it only works on Windows or Linux. So we Mac users need to use Boot Camp, Parallels, etc., or do the tracking with some different software altogether. Unfortunately, there aren’t many options. reacTIVision 1.4 is pretty much the only free, easily available software for this. It is mainly used for fiducial tracking, and I’ve used it in some of my projects. Version 1.4 does plain finger tracking too, but it’s pretty slow when using USB cameras. I will cover the software side in more detail some other day and show you some other options that you might have.
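The core of what the tracking software does can be sketched in a few lines: find the connected groups of bright pixels and report a centre point for each one. This is a simplified toy version, not how Touchlib or reacTIVision actually implement it, but the idea is the same.

```python
def find_blobs(binary):
    """Label 4-connected groups of nonzero pixels and return their (x, y) centroids."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                # Flood-fill this blob with an explicit stack.
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                mean_x = sum(p[1] for p in pixels) / len(pixels)
                mean_y = sum(p[0] for p in pixels) / len(pixels)
                blobs.append((mean_x, mean_y))
    return blobs

# Two fingers touching a tiny 4x3 thresholded frame:
binary = [
    [0, 255,   0,   0],
    [0, 255,   0, 255],
    [0,   0,   0, 255],
]
blobs = find_blobs(binary)  # two centroids, one per finger
```

Each centroid is then tracked from frame to frame so a blob keeps its identity while you drag a finger around.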
The tracking software uses the TUIO protocol to send the touch information to Flash, Processing, Quartz Composer, Max/MSP or any other programming environment that understands OSC. After that, it’s up to you what to do with the tracking data.
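To make that a little more concrete: in the TUIO 1.0 cursor profile, each finger is reported on the `/tuio/2Dcur` address as a "set" message carrying a session id, a normalized position in [0, 1] and some motion data. Here is a small sketch of what a client might do with one such message; the message is built by hand as a plain tuple rather than received over the network, and the screen size is a made-up assumption.

```python
# Assumed screen resolution for this example.
SCREEN_W, SCREEN_H = 1024, 768

# Shape of a TUIO 1.0 cursor message:
# /tuio/2Dcur set <session_id> <x> <y> <x_velocity> <y_velocity> <acceleration>
message = ("/tuio/2Dcur", "set", 42, 0.25, 0.5, 0.0, 0.0, 0.0)

def cursor_to_pixels(msg, width, height):
    """Map a TUIO 2Dcur 'set' message's normalized position to screen pixels."""
    address, command, session_id, x, y = msg[0], msg[1], msg[2], msg[3], msg[4]
    assert address == "/tuio/2Dcur" and command == "set"
    return session_id, int(x * width), int(y * height)

sid, px, py = cursor_to_pixels(message, SCREEN_W, SCREEN_H)
# Finger 42 is at pixel (256, 384) on this hypothetical 1024x768 screen.
```

In a real application you would receive these messages over UDP with an OSC library, but the mapping from normalized TUIO coordinates to your own screen space looks just like this.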
This is a really superficial explanation of how this all works. I will go into the details over the next days and weeks as I start to document the building process of my own multitouch screen. There is a ton of information on the Internet about all of this if you are interested. The NUI Group and Google are your friends.
Tomorrow, Multitouch Table Post #2 – Acrylic and the LEDs.