Tinkering

Text Tracker


We started experimenting with what kinds of interaction we could get out of a camera. An obvious one was to take facial tracking and build some kind of weak faceless interaction from it. The result is a text that you control with your face: tilting and moving your head tilts, scrolls and zooms the text.
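
As a rough illustration of the mapping, here is a minimal sketch assuming OpenCV's bundled Haar face detector (the actual prototype may well have used a different toolkit, and the scaling constants are made up): the largest detected face's horizontal offset drives scrolling and its apparent size drives zoom.

    # Sketch of the Text Tracker idea (assumed approach, not the prototype code).
    # Face position drives scrolling, face size (distance to camera) drives zoom.
    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)

    scroll, zoom = 0.0, 1.0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) > 0:
            x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
            cx = x + w / 2
            # Horizontal offset from the frame centre scrolls the text,
            # face width stands in for distance and controls zoom.
            scroll += (cx - frame.shape[1] / 2) * 0.01
            zoom = w / 150.0
        print(f"scroll={scroll:.1f}  zoom={zoom:.2f}")
        cv2.imshow("camera", frame)
        if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()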

Text Tracker Demo

I found the face tracking intuitive and fun, but we came very close to a normal interface: it is so direct and the feedback so clear. We are very much focused on one user manipulating the graphics on the screen, and that was not the purpose of the assignment.

Trying to get away from the "face" of the Text Tracker, we decided to test out sounds.

Music on Speed


Trying to get away from the directness of the previous example, we worked with more general movement (the number of pixels that have changed between frames), hoping that this control scheme would be less obvious and would open the interaction up to multiple people at the same time. One aspect I think is important in Faceless Interaction Fields is to try to get away from the single user. The way we try to do it is to have multiple users and make the interaction a collaboration between all of them.
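
A rough sketch of that motion measure, again assuming OpenCV rather than whatever the prototype actually used: consecutive frames are differenced, the changed pixels are counted, and the changed fraction is mapped to a playback-speed value (the threshold and mapping constants here are purely illustrative).

    # Sketch of the movement measure behind Music on Speed (assumed approach):
    # count pixels that changed between frames and map that to a speed value.
    import cv2

    cap = cv2.VideoCapture(0)
    prev = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (21, 21), 0)   # suppress sensor noise
        if prev is not None:
            diff = cv2.absdiff(prev, gray)
            _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
            changed = cv2.countNonZero(mask)         # pixels that changed
            motion = changed / mask.size             # fraction 0.0 .. 1.0
            speed = 0.5 + motion * 3.0               # hypothetical tempo mapping
            print(f"motion={motion:.3f}  speed={speed:.2f}x")
        prev = gray
        cv2.imshow("camera", frame)
        if cv2.waitKey(30) & 0xFF == 27:             # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()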

Music on Speed Demo

I think we got closer to facelessness here; you could argue we don't have a user, since the program only cares about movement in the room. I also think we may have polished the wave a bit too much. It was just there to visualize the movement before we got the sound working, but it takes away a bit from the whole experience: it draws your attention to the screen and makes you think of the computer as an observer of sorts, instead of treating the room as a "sensor".

Using music is also problematic, as the interaction feedback becomes very prominent. We will have to move toward something more abstract that draws less attention.

Next step

We are still stuck working with a computer and a screen. We have to move on and make it less obvious where the interaction happens and what it is.