First glance at TEI

New week, new course: Tangible and Embodied Interactions. I don't really know what that means, at least the embodied part. I think it has to do with a stronger relation to the physical human body rather than just cognition. We talk about the limitations of the human body and senses and will design with those in mind.

The first week we will design and present a multi-screen experience where glanceability is a key factor. Glanceability is the idea of a visual expression (maybe other senses too?) that is so abstract and/or simple that the user only has to glance at it for less than five seconds to get it. Such a fast glance avoids kicking into a cognitive mode where you start analyzing the data. I guess this ties back to Thinking, Fast and Slow, which Sofie brought up last year.

I can see how this could be useful while I run. I use my run-tracking watch to get info while running, and I have often felt that the data is too raw. I get very stupid while running hard, and trying to calculate the tempo needed to make my goal time can be hard at times. Even telling which number is the remaining time and which is my current tempo can be difficult during intervals.
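To make it concrete to myself, this is roughly the arithmetic I fail at mid-run: turning remaining distance and remaining time into the tempo I need to hold. A minimal sketch (the numbers and the function name are my own made-up illustration, not anything my watch actually exposes):

```python
def required_tempo(remaining_km: float, remaining_minutes: float) -> str:
    """Tempo I need to hold, as a mm:ss-per-km string."""
    minutes_per_km = remaining_minutes / remaining_km
    minutes = int(minutes_per_km)
    seconds = round((minutes_per_km - minutes) * 60)
    return f"{minutes}:{seconds:02d} min/km"

# Example: 4.2 km left and 21 minutes until my goal time slips away.
print(required_tempo(4.2, 21))  # -> "5:00 min/km"
```

A glanceable display would do this for me and just show whether I'm ahead of or behind that tempo, instead of making my oxygen-starved brain do the division.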

Three texts to read

We have three texts to read for today, and they aren't the hardest reads. Designing and Evaluating Glanceable Peripheral Displays, Matthews, T. (2006, June), is a write-up of a pre-study for a thesis. It has some usable insights for defining glanceable displays but is not very deep. It feels a bit odd to quote psychology research from the 50s; there has to have been development since then, but I'm not knowledgeable enough to argue too much. It also seems odd to cite a paper and then say that the estimation is rough. Why not use any of the later research? I guess it comes down to having cool references.

Exploring the Design Space of Glanceable Feedback for Physical Activity Trackers, Gouveia, R., Pereira, F., Karapanos, E., Munson, S. A., & Hassenzahl, M. (2016, September), is a longer text. It's a study done through design and seems similar to what we are expected to hand in by the end of the course for our longer project. It breaks down glanceable feedback into six qualities in order to discuss them, but I think the framing gets quite political when they do. Is encouraging people to check their devices more often really something we should strive for? A lot of people thought smartwatches with notifications were going to make us check our devices less, but I think this mindset might be a reason the opposite turned out to be true.

They also try to get people to compete against anonymous, pre-recorded data from other users. In doing this they find it disheartening for users. It's not hard to see that competing against someone who always achieves the goal you set, while you might not, can be a downer. If they had thought about it, they could have realized beforehand that users should be compared against other people with the same goal who also failed at times.

I think some of the shortcomings of the paper stem from the fact that they based their research in part on what was easily attainable. They seem to have some walking data but not the intent of the people behind the data. The same goes for the analysis: it feels shallow and very data-oriented. They try to be a quantitative research paper without very much data, which also shows in their citations, such as when they cite a study with a single subject. There are insights in here, but it wasn't my favorite paper.

The last paper, Evaluating Peripheral Displays, Matthews, T., Hsieh, G., & Mankoff, J. (2009), builds on the first. This time Matthews puts her theories to the test and runs two studies, one lab study and one field study. The paper also draws on three other studies to try to find a more all-encompassing set of qualities to evaluate when talking about peripheral displays. They use both quantitative and qualitative methods, but both feel a bit thin: the interviews don't go very deep, and there isn't a lot of data. One of the studies only has two people in each of two groups. It's hard to extrapolate the findings the way they do when comparing the field study results. They group one person from group A together with group B just to be able to say that three out of four say one thing. It seems a bit disingenuous to create ad hoc groups just because the data pool is so small.

Even though I think the research lacks a bit in quality, the paper can help us a lot when doing our own research later in the course.

In the end, all three texts give insights into how a design research project can be done. It feels like each text is there to help us with one part of the later, larger project. The first shows how to write up our intentions in order to get feedback from our supervisor before we do our research, the second is an example of what we should hand in at the end of the course, and the third helps us with our research methods.