Last week, I took part in the SenseStage workshop (http://sensestage.hexagram.ca/workshop/introduction/) at the Hexagram BlackBox in Montreal. The workshop was designed to bring together people from different disciplines (dance, theatre, sound, video, light) to collaborate with interactive technologies.

During the workshop, there were tons of sensors – light, floor pressure, accelerometers, humidity etc. – all connected to little microcontrollers, which in turn were wirelessly connected to a central computer that gathered all the data and sent it forward as OSC to any client connected to the network.
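
To give an idea of what that meant in practice, here's a minimal sketch of an OSC listener in Python using the python-osc library. The port is a placeholder and the address in the comment is made up; the real ones came from the workshop's data server.

```python
# Minimal OSC listener sketch (assumes the python-osc library).
# The port below is a placeholder; the actual port and address scheme
# came from the workshop's central data server.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_message(address, *values):
    # e.g. address = "/sensor/floor/3", values = (0.42,)  (hypothetical)
    print(address, values)

dispatcher = Dispatcher()
dispatcher.set_default_handler(on_message)  # catch every incoming message

BlockingOSCUDPServer(("0.0.0.0", 57120), dispatcher).serve_forever()
```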

Basically, we had 5 days to complete an interactive performance sequence using the data gathered by the sensor nodes. This is what our group came up with.

We call it Treasure Islands, and it's a slightly twisted interactive performance/game: a girl finds herself in a weird world, floating on a doughnut in the middle of the ocean with a mermaid talking in her head. She has to travel to all of the different islands around her and collect sounds from them in order to open a portal into this strange dream world for all her friends. Sounds like a good concept, doesn't it? Check out the video and you'll see that it actually makes sense.

There was a lot of sensor data available, but we ended up using just the pressure sensors on the floor and camera tracking. With a bit more time we could have made the virtual world more responsive to the physical one, but I'm pretty happy with the results we were able to achieve in such a short time. Our group worked really well together, which is not always the case in such collaborative projects.

Credits:

Sarah Albu – narrative, graphics, performance
Matt Waddell – sound, programming
Me – animation, programming

And I guess I need to include some more technical details for all the people who check my site for that kind of stuff (I know you’re out there).

We used camera tracking with tbeta to follow Sarah, and used that data to move the doughnut and make the environment respond to her movements. All of the real-time animation was done in Animata, which really is a perfect tool for something like this, because it allows me to animate things really fast without compromising quality. Max was used as the middleman, converting the TUIO messages from tbeta and the OSC from the sensor network into the kind of messages Animata needs to hear.
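
For the curious, here's roughly what that translation step does, sketched in Python with python-osc instead of Max. This is a hypothetical stand-in for the actual patch: tbeta sends TUIO (/tuio/2Dcur) on port 3333 by default, and Animata is commonly driven with /joint messages on port 7110, but the joint name and stage size here are invented, so check them against your own setup.

```python
# Hypothetical Python stand-in for our Max patch: listen for TUIO cursor
# messages from tbeta and forward them to Animata as /joint messages.
# Assumes the python-osc library; verify Animata's OSC address and port
# (7110 is the commonly documented default) against your build.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

animata = SimpleUDPClient("127.0.0.1", 7110)
STAGE_W, STAGE_H = 640, 480  # Animata scene size; ours was different

def on_cursor(address, *args):
    # TUIO 1.x "/tuio/2Dcur set sid x y ..." carries normalised 0..1 coords
    if args and args[0] == "set":
        _, session_id, x, y = args[:4]
        # "doughnut" is an illustrative joint name in the Animata scene
        animata.send_message("/joint", ["doughnut", x * STAGE_W, y * STAGE_H])

dispatcher = Dispatcher()
dispatcher.map("/tuio/2Dcur", on_cursor)

# tbeta broadcasts TUIO over UDP port 3333 by default
BlockingOSCUDPServer(("127.0.0.1", 3333), dispatcher).serve_forever()
```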

[Photo] We sewed some IR LEDs on the hat to help with tracking in a dark space.

Each island is an instrument that you can play. Stepping on a certain area triggers loops, adds effects to your voice, and so on. Matt could explain the sound design better than me, but the video should make it pretty clear. It doesn't reproduce the effect of the quadraphonic sound system we used, though. Some visual cues were also triggered in the animation based on her movements on the sensors.
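
The trigger logic on our side was basically thresholding with a bit of state, so a loop wouldn't retrigger while you stood on a pad. A rough sketch of the idea (the zone names, threshold and sampler address are all invented; the real version lived in Max):

```python
# Rough sketch of the island trigger logic: floor-pressure readings start
# and stop loops in an OSC-controllable sampler. All names, ports and the
# threshold are invented for illustration; the real logic lived in Max.
from pythonosc.udp_client import SimpleUDPClient

sampler = SimpleUDPClient("127.0.0.1", 9000)  # hypothetical sampler port

ZONES = {0: "marimba", 1: "voices", 2: "waves"}  # pad index -> loop name
THRESHOLD = 0.3   # pressure level that counts as a step
active = set()    # pads currently pressed, so loops don't retrigger

def on_pressure(zone, value):
    loop = ZONES[zone]
    if value > THRESHOLD and zone not in active:
        active.add(zone)
        sampler.send_message(f"/loop/{loop}/start", 1)
    elif value <= THRESHOLD and zone in active:
        active.discard(zone)
        sampler.send_message(f"/loop/{loop}/stop", 1)
```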

That’s pretty much it. If you have any questions, leave a comment and I’ll try to get back to you as soon as possible.