Posts about “Interactivity m:3”

  1. Photo by Leone Venter on Unsplash

    Presenting

    We went through this module with a lot of insecurity; it felt like we never had it under control until the last week. Still, it was a good experience for me to have to find my footing at times.

    Tracking movements when handling objects

    The presentation went ok today. We got some critique about the visualization of the movement, but I am pleased overall. These actions might not have been the best suited for machine learning, but it is interesting to explore the limitations of the tech as well. It made us think more about the movement in the terms the two papers talked about. We also kept away from making a flashy demo, and I think that was a plus for us. In module one we spent too much time on making graphics, and that was not the case here.

    This course has been an interesting one. Working with the basic ideas of interaction and exploring them in different ways has really been educational and enjoyable. There has also been more focus on the texts, and that has been really helpful for me. I'm not always the best at reading texts, but in this course I have leveled up my reading discipline.

  2. Photo by Mark Eder on Unsplash

    The non action

    An ioio XPS inspired me to think about not doing things. It was a talk about colonialism in design and how not trying to solve something for someone can be more helpful. We as designers have to step back at times and acknowledge that we might not always know the best answer. At times the user knows best. I guess this is also the basis for user-centric design as opposed to genius-based design, and maybe the basis for IxD at MAU.

    The absent action is our theme now. We will work with cancelling, regretting and uncertainty. They feel very related but still stand apart a bit. Uncertainty opens up for cancelling but can also end in a fulfilled action. If you take a simplistic view of it, it is just a fulfilled action that is carried out slowly, but I think that view misses a quality. There is something in the hesitation that is interesting.

    When you decide not to fulfill the move, we call this a cancelled move. Cancelling is easier to identify for the mover and the observer, but it may be hard from the machine's view. If the object we are about to pick up is the one doing the sensing, it will never see an action at all if it has no sense of the room.

    Regretting is similar to cancelling, but here you first fulfill the move and then return it. I think observing it is similar to a cancelled move, in that you need context to understand it. You can see it as two fulfilled actions, picking up and putting down, but we argue that when you see it as a whole it is something different.

    This is the space we chose to work within for the last week, and I think it is an interesting one. It is far from where we started and a stretch of the topic, but Jens seems to be fine with it, even excited.

  3. Photo by Paolo Nicolello on Unsplash

    What have we done?!

    This week we started by planning what we need to do to get ready for Friday's presentation. During this we started to doubt the effort we have made in this module. We worked together in module one, and there we made several prototypes for at least four concepts; in this module we do not have the same amount of output.

    Going through what we have done, we realized that we actually have worked more than we remembered.

    Notes on what we have done so far
  4. Photo by Siora Photography on Unsplash

    Cancel that

    After some coaching we got approval for our concept. If we don't get the computer to recognize our moves, that can be okay, as the investigation is the important part. We should analyze these kinds of moves and record what it looks like when you regret a move and when you don't. We also have to try the moves ourselves and see how they feel for the mover.

    We will move forward by prototyping these moves in different situations to see if we can identify the cancelling and regretting of movements. What does it look like when you stop in the middle of a movement? Can the machine recognize the hesitation? Can we isolate this cancelling movement? Movements that are stopped and regretted, can the machine recognize those?
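
    It is not obvious what the machine would even look for here. As a purely speculative sketch, and not something we built, hesitation in sensor terms might show up as a dip in movement energy in the middle of a recorded gesture; all names and thresholds below are invented for illustration:

    ```typescript
    // Speculative sketch: flag a recording as "hesitant" if the movement energy
    // dips sharply somewhere in the middle of the gesture and then recovers.
    function isHesitant(magnitudes: number[]): boolean {
      if (magnitudes.length < 6) return false; // too short to split into phases
      const third = Math.floor(magnitudes.length / 3);
      const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;

      const start = mean(magnitudes.slice(0, third));
      const middle = mean(magnitudes.slice(third, 2 * third));
      const end = mean(magnitudes.slice(2 * third));

      // Energy drops in the middle but picks up again: the move stalled, then resumed.
      return middle < 0.5 * start && end > middle;
    }
    ```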

    We started looking at different situations where one person would do something and another person would suddenly tell them to stop. The first was writing on a whiteboard. In this case we found that the cancelling happened very subtly, in between drawing the individual lines making up the letters. Stopping the writing was not noticeable, if you don't count the words not being completed. There was no twitch or anything else that we could see.

    The same was true for walking: we could not see the cancelling as a specific move. We concluded that this could be a consequence of these actions really being an ongoing series of actions. When told to stop, people would simply stop before initiating the next "subaction".

    I cancel a throw and then complete a throw of a small ball

    When making more distinct actions we noticed clearer cancels. With the throwing we could see a difference when the mover knew that he would cancel; in this case he would cancel before getting into the swing. We also ran some tests where you didn't know in advance that you had to cancel. In these we could see the swing being initiated, and the cancel could fail with regard to holding on to the ball. These moves were more interesting from the mover perspective: with a heavy object being thrown, the cancellation can really be felt, it can even hurt. As the mover has to stop the swing or divert it, the energy has to be dispersed some other way, and that is felt in the arm.

    I pick up a bandage a couple of times

    Smaller actions like picking things up are different again. Here we get further into the action, and when we cancel we can see a twitch, almost like being burned. We can also see a lot of hesitation, probably because we know it is an experiment. We can also see that at times we actually take the object and then release it. We call this regretting a move, as we think it differs from cancelling.

  5. Photo by Chris Barbalis on Unsplash

    Handling it

    We are handling real objects now in order to understand the movements. We may be able to translate this to our gesture interface later but we need to understand how to design meaningful gestures first.

    When we start to analyze object manipulation we find that there are many different ways of interacting: we can push, pick up, drag and so on.

    I move an object by lifting it

    I move an object by pushing it

    I move an object by dragging it

    While testing the handling of objects we started to talk about the social interactions that happen when we handle objects. Things like giving things to and taking things from other people can be an interesting angle. While testing this we started to play tricks on each other and saw how it changed the way we received the objects; you can see me taking it from underneath after Lin simply dropped it once.

    Two people move an object back and forth in different ways

    While doing this prototyping we started to talk about hesitant actions and deciding not to do things. We got back to the discussion we had earlier on, when Lin talked about hesitating to press play, and this time I understood her better. Just because the machine sees an interaction as a binary action, play or pause, it does not mean that the mover or observer sees the same. When you press play, your action starts earlier: the whole approach to the machine, reaching out and finally pressing play, is part of the move.

    This whole move is disregarded by machines today; they just listen for button presses and the like. We found this interesting and started to investigate the cancelling of actions.

  6. Photo by Rechanfle on Flickr

    Minority repeat

    We spent most of last week reading, tinkering with the ML libraries and walking around the studio. This week we started working on something inspired by the movie Minority Report, one of the better Philip K. Dick adaptations (some are very bad). We didn't have a clear goal, but after talking a lot last week we decided to just try something out.

    Just do it

    Minority Report style interaction

    We talked a lot about different gestures and what they could mean, and how you could complete certain actions. When we started to just prototype the movement we got new insights and ideas. While trying it out we decided that a general computer UI was not what we wanted to do; we decided to focus on music instead. Music was our theme in module 1 and it seemed fitting to have it here too.

    We talked a lot about what different gestures could mean for our imaginary music player and once again got stuck a bit in the theory. After a while we just went to a whiteboard and imagined that as our interface. When we started gesturing and finding the actions we wanted, Lin brought up the idea of not being sure when you press play. I, stuck in a binary mindset, was skeptical of this, but it led us into actions where you are not sure.

    Not so sure

    We started playing with skipping tracks in a more nuanced way. Maybe you can peek at the next track and slowly, gradually start to play it, or, if you don't like it, just not skip to that track. This was interesting to me, as I had never seen these nuances before. In my mind the only nuanced actions in music players were volume control and scrubbing.
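
    One way to picture the peek is as a crossfade driven by how far the gesture has travelled, rather than a binary skip. A toy sketch, with a mapping and names that are ours and not taken from any real player:

    ```typescript
    // Toy sketch: map gesture progress (0 = resting, 1 = fully committed to the
    // next track) to a crossfade, so a half-finished gesture lets you "peek"
    // at the next track without committing to the skip.
    function peekCrossfade(progress: number): { current: number; next: number } {
      const p = Math.min(Math.max(progress, 0), 1); // clamp to [0, 1]
      return { current: 1 - p, next: p };           // simple linear crossfade
    }

    peekCrossfade(0.2); // { current: 0.8, next: 0.2 } — just a peek
    peekCrossfade(1.0); // { current: 0,   next: 1 }   — the skip is committed
    ```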

    Gesture controls for music player

    We got a bit stuck here: we had easy gestures for some actions, but others felt contrived and at times didn't even feel good. In our coaching with Clint, he suggested we could try interacting with real objects to see how that manipulation plays out.

  7. Photo by McKenna Phillips on Unsplash

    What are gestures?

    We talk a lot about gestures, and in discussions we throw the word around without thinking about what it means, almost as carelessly as we use "intuitive".

    But what is a gesture really? A gesture is often surprisingly hard to define. How do you wave? There are a million different waves, yet we perceive them as the same.

    Is a gesture a symbolic move? Some definitions seem to say it is an expression. Then it seems to be a kind of language. It feels like a very nuanced language too: small differences in the movement can be the difference between a threat and an invitation. This feels especially relevant today, when we talk so much about gesture-based interactions.

    When we design for gestures it is important to think about this. The mover perspective becomes very important here. A swipe in an app can become a peek if it is slow, and I often pull to refresh when I just want to scroll to the top. There seems to be a lack of gesture interpretation, and that is probably grounded in how hard it is for the machine to understand the intent, and the intent is what differentiates the gesture from the move.
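
    To make that concrete: the exact same finger travel can be read as a swipe or a peek depending only on a velocity threshold someone picked. A toy illustration with made-up numbers:

    ```typescript
    // Toy illustration: the same travel distance is read as a swipe or a peek
    // depending only on an arbitrary velocity threshold. All numbers are invented.
    interface TouchTrack {
      distancePx: number; // how far the finger travelled
      durationMs: number; // how long the finger was down
    }

    function interpret(track: TouchTrack): "swipe" | "peek" {
      const velocity = track.distancePx / track.durationMs; // px per ms
      return velocity > 0.5 ? "swipe" : "peek";
    }

    interpret({ distancePx: 200, durationMs: 150 }); // "swipe"
    interpret({ distancePx: 200, durationMs: 900 }); // "peek" — same travel, slower finger
    ```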

    One of the worst features in Instagram is how you have to know the number of pictures in a post when you swipe to the next one. If you make the same swipe when you are at the last photo, you are taken to the next screen. There has to be a better way to do this, and I think it is a very engineery problem, a problem that is the result of engineers making the design decision. There has to be a better way to interpret the touches, one that takes context into account.

  8. Photo by Jake Hills on Unsplash

    M3: Try walking in my shoes

    The last module in this course is about machine learning and gestures. We will use a phone to record movement and try to teach the machine to recognize what we do.

    An interesting part of ML is how it differs from "normal programming". Where traditional programming is logical and built on if/else statements, modern AI has more of a fuzzy logic, making judgments and discriminations based on earlier experience. This can make programming easier, but it can also lead to very unpredictable results, where the logic becomes a black box that is hard to understand.
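
    To make the contrast concrete, here is a minimal sketch: in the hand-written rule the programmer picks the threshold, while in the trained model the judgment sits in weights learned from examples. The gesture, names and numbers are invented for illustration, using TensorFlow.js only because that is what we are working with:

    ```typescript
    import * as tf from "@tensorflow/tfjs";

    // Rule-based: the programmer states the logic explicitly.
    function isShakeByRule(samples: number[]): boolean {
      const peak = Math.max(...samples.map(Math.abs));
      return peak > 15; // hand-picked threshold (m/s^2)
    }

    // Learned: the "logic" lives in weights trained on labelled recordings,
    // and we can only inspect its answers, not read the rule.
    async function isShakeByModel(model: tf.LayersModel, samples: number[]): Promise<boolean> {
      const input = tf.tensor2d([samples]);             // shape [1, sampleCount]
      const output = model.predict(input) as tf.Tensor; // probability of "shake"
      const probability = (await output.data())[0];
      input.dispose();
      output.dispose();
      return probability > 0.5;
    }
    ```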

    When the logic comes from training there is a risk of unknown bias creeping in. When we define a human in a program we might think of things to identify them by, like legs, arms and such. There is already a risk here, where we have to account for people without arms or legs and so forth. With ML this is even harder, as we might simply forget to teach it a lot of things. This was the case when Google launched filters for Hangouts: they trained the system on Google engineers, as these were the humans that were easy to come by. This meant that it did not get trained on black people, as Silicon Valley is very white.

    Another thing to think about is what a gesture really is. A swipe on a phone is really easy to identify as a human, but if you try to describe it, it gets harder. How long is it? How fast? It's interesting how all the simple things become complex when you really look at them.
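
    Just trying to write the definition down shows the problem: every answer to "how long?" and "how fast?" turns into an arbitrary constant. A toy sketch, where every value is a guess:

    ```typescript
    // Trying to pin down "a swipe" immediately produces a pile of arbitrary
    // constants — none of these numbers come from anywhere in particular.
    const MIN_DISTANCE_PX = 80;  // how long is a swipe?
    const MAX_DURATION_MS = 300; // how fast?
    const MAX_DRIFT_PX = 40;     // how straight?

    function looksLikeSwipe(dx: number, dy: number, durationMs: number): boolean {
      return (
        Math.abs(dx) >= MIN_DISTANCE_PX &&
        durationMs <= MAX_DURATION_MS &&
        Math.abs(dy) <= MAX_DRIFT_PX
      );
    }
    ```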

    Assignment: Machine Learning

    Brief: Explore and design movement, gestures and bodily interactions with sensors and ML

    Materials: TensorFlow and sensors in a smartphone

    Team: Lin and me

    We started by toying around with the code Jens gave us. It's an extension of the Node JSON bridge by Clint that we used in Programming 2. We started by trying to record some easy gestures, like circles and lines. It worked okay, but the length of the moves has to be the same, and that might not be so good.
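
    Since the bridge, as we used it, wanted every recording to be the same length, one plausible workaround is to resample each recording to a fixed number of samples before training. This is a sketch with invented names, not the actual course code:

    ```typescript
    // Sketch, not the actual bridge code: linearly resample a recording of
    // accelerometer magnitudes to a fixed length, so that slow and fast versions
    // of the same gesture can be fed to the same classifier.
    function resample(recording: number[], targetLength: number): number[] {
      const out: number[] = [];
      const lastIndex = recording.length - 1;
      const step = lastIndex / Math.max(targetLength - 1, 1);
      for (let i = 0; i < targetLength; i++) {
        const pos = i * step;
        const lo = Math.floor(pos);
        const hi = Math.min(lo + 1, lastIndex);
        const t = pos - lo;
        out.push(recording[lo] * (1 - t) + recording[hi] * t); // linear interpolation
      }
      return out;
    }

    // A slow circle (120 samples) and a fast one (40 samples) both become 64 samples.
    const slowCircle = resample(new Array(120).fill(0).map((_, i) => Math.sin(i / 10)), 64);
    const fastCircle = resample(new Array(40).fill(0).map((_, i) => Math.sin(i / 3)), 64);
    ```

    Whether that actually helps the recognition is another question, but it at least removes the requirement that everyone performs the gesture at exactly the same speed.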

    We started thinking of some movements to analyze, and as we both like working out we started to think in those terms: maybe seeing if you do your reps the right way, or counting reps. When we talked to Jens he didn't like the idea. He wanted us to go deeper and analyze what the movements really are. The texts talk about this too, how the moves can be viewed in different ways.

    When designing movement, the Mover is the first-person perspective, an important experience since this is what the "end user" will experience. If the move feels weird it should probably be designed another way. This is something that is often forgotten when designing, for example, mobile apps: the hamburger menu in the top left corner is a terrible position for the user, but it looks good when you design the app on a large screen.

    The Observer is the view another person would have; this is important for seeing the social implications of a move. A silly example could be when children spin around: they enjoy the movement, but the adults see all the dangers to the room and the china.

    The final perspective, the Machine, is a bit different from the Observer, as it has no understanding of cultural context. It can only see what we have given it sensors to see, and much of the kinaesthetic feedback known to the mover is lost. The machine cannot see how hard you push, and we have to account for this when we want the machine to understand the movement.

    In the end we want to find a mapping between what the mover feels and what the machine senses.

    To dig deeper we started investigating walks. We can identify people we know by their walk from far away, and yet it is hard to explain what it is about a walk that is special.

    We tried to record ourselves walking back and forth and to train the machine to recognize us. In the end we wanted to be able to copy each other's walking styles with the help of the machine, to get a sense of how it is to walk like another person. We failed miserably: it could never identify our walking styles and always thought it was Lin.
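
    For the record, the kind of two-class setup we were fumbling towards looks roughly like the sketch below in TensorFlow.js. The shapes, names and numbers are illustrative, not the actual code we used:

    ```typescript
    import * as tf from "@tensorflow/tfjs";

    // Rough sketch of a two-class walk classifier: fixed-length windows of
    // accelerometer data, labelled 0 for me and 1 for Lin.
    const WINDOW = 64; // samples per walking window (illustrative)

    function buildWalkClassifier(): tf.Sequential {
      const model = tf.sequential();
      model.add(tf.layers.dense({ inputShape: [WINDOW], units: 16, activation: "relu" }));
      model.add(tf.layers.dense({ units: 1, activation: "sigmoid" })); // 0 = me, 1 = Lin
      model.compile({ optimizer: "adam", loss: "binaryCrossentropy", metrics: ["accuracy"] });
      return model;
    }

    async function train(model: tf.Sequential, windows: number[][], labels: number[]) {
      const xs = tf.tensor2d(windows);                    // [numWindows, WINDOW]
      const ys = tf.tensor2d(labels, [labels.length, 1]); // one label per window
      await model.fit(xs, ys, { epochs: 30, validationSplit: 0.2 });
      xs.dispose();
      ys.dispose();
    }
    ```

    If the recordings for one of us dominate the training data, or the windows simply don't capture what actually makes the walks different, a model like this will happily answer with the majority class every time, which would be one explanation for why it kept answering "Lin".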