Week 4 documentation blog
1. My definition of “interaction”
I define interaction as a reciprocal process in which two or more actors actively signal information to each other, receive and process that information, and then give responses in return. To me, interaction is not a purely mechanical process but a creative one that constructs meaningful conversations between humans, between humans and the environment, or between humans and computers. My definition always involves human participation, because I think the essence of interaction is grounded in human thinking and human experience.
2. Project examples:
My understanding of interaction is especially shaped by two projects: the “EyeWriter” project by Zach Lieberman and the “Expressive Tactile Controls” project by Hayeon Hwang. The “EyeWriter” project made me realize that interaction is not simply humans interacting with computers or machines; rather, interaction between humans and computers should support and augment the human capacity to express oneself and communicate with the outer world. Lieberman’s project helped a graffiti artist with a disability communicate his ideas, transforming threads of thought into tangible graffiti projected on giant walls. However, the project mostly facilitates interaction between the graffiti artist and the EyeWriter device. The audience of the graffiti artworks engages in a lesser degree of interaction: they simply observe how the projected graffiti changes, but those changes do not respond to the audience’s reactions. The audience mostly interacts with the graffiti pieces through their eyes alone.
My understanding of interaction was further shaped by Hwang’s “Expressive Tactile Controls” project, which embodies a greater level of interaction for any audience. Similar to the “EyeWriter” project, Hwang’s project augments humanity: it simulates and expresses different human emotions through machines. Humans interact with the machine not to enjoy the mechanical process of pushing a button, but to experience the human emotions embodied in these buttons. Unlike the graffiti audience, who simply observe and react to changes in the projection, audiences interact with the “Expressive Tactile Controls” project on multiple dimensions. A person interacting with Hwang’s project touches it with his or her hands, observes visible changes, possibly hears sounds created by the buttons, and reacts to this interactive process. The installation also receives signals from the human, processes those signals, and gives different responses through the movement of its buttons. Human participants engage in this interactive process through multiple senses.
3. How my understanding of interaction has evolved
From week 1 to week 4, my understanding of interaction evolved from one about two actors listening, thinking, and speaking with each other toward one that involves meaningful conversations between human actors, artificial or non-human objects, and the wider social context. Our first week’s reading about interactivity helped me see interaction as a reciprocal process. One actor speaking or giving out signals is not enough; interaction must also involve sense-making and a response from the other actor. It is about dialogue between these human or non-human agents. The second week’s Introduction to Physical Computing chapter by Igoe and O’Sullivan and Manovich’s The Language of New Media place emphasis on the role of computational thinking and artificial intelligence in the process of interaction. Some might argue that interaction is increasingly about how computers transform the way we see, interpret, and make sense of both the virtual and physical worlds. However, I still believe strongly that a meaningful interaction should always involve and serve to augment human connection and communication. Interaction, even with the support of computers, should better facilitate how humans connect and communicate with each other, with our wider social context, or with other artificial or non-human objects.
The “Making Interactive Art” blog post resonates with me strongly. The article argues that the purpose of making interactive artwork is not for the creator to make assertive statements. Instead, interactive artworks are meant to be platforms and contexts that facilitate open dialogue between the creator, the audience, the object, and the contextual environment. When building a prototype or machine, it is easy to leave out the context and focus on the product itself. But in actual use, a product is always embedded in some social context, whether in a home, at a primary school, or in a public square. Any interaction should therefore be contextual: a user interacts not only with the product, but also with the outer environment in which the product or artwork is situated. The reciprocal process of listening, thinking, and speaking should thus be experiential, expressive, sensual, and inherently creative, not mechanical or simply repetitive.
4. Our Group Project: “A New Dimension of Braille”
Our prototype and performance is titled “A New Dimension of Braille”. The idea initially derived from Lieberman’s “EyeWriter” project. Inspired by how that project enables a disabled graffiti artist to create artwork, our initial motivation was to create something that supports how a person with a disability interacts with his or her surroundings. Our teammate Kathy came up with the idea that we could do something for people with visual disabilities. The team imagined that in the year 2119, most interaction on earth might become entirely digital, taking place on flat screens. For people with disabilities, however, digitization might pose huge challenges to interacting with others or with their surroundings. While a person with good eyesight can easily see the changes on screens, a blind person could find it hard to navigate daily life.
Responding further to some of the project examples we discovered about interaction, such as the “Expressive Tactile Controls” project I found, we came up with the idea of augmenting language into something touchable and perceptible, so that people with visual disabilities could interact with languages and words through their hands and ears. We used cardboard to create a prototype machine that resembles a flat digital screen. Unlike the screens or tablets we have today, our machine could render real textures and materials from the screen, possibly through some kind of projection or AR/VR technology. For instance, the device could help a child with a visual disability learn about language beyond just braille: while the child is learning a word through the device, different textures or materials could be projected for the child to touch. The interaction between the child and the device is not limited to verbal communication, but also exchanges information and meaning through the tactile senses. The device could potentially have wider uses in augmenting different kinds of textures, not only for people with visual disabilities but also for others who need to understand and feel different materials. For instance, a fashion designer might find the device extremely useful, as it could help him or her efficiently figure out the most suitable material for a clothing piece.
The project represents my definition of interaction because the device serves to facilitate how humans, in this case people with visual disabilities, better interact with the outer world. The interaction engages the senses on multiple dimensions and addresses a problem that people with visual disabilities have been facing in the age of digitization and physical computing.