Connor Schone – Interaction Lab Individual Reflection

Interactivity, as I have come to understand the term so far in this course, is a process by which a prompt from one entity invites one or more separate actors to alter their behavior in some way, after which a reaction from the initial “prompter” generates another response, beginning a self-sustaining series of actions and reactions. For an interaction to occur, all parties involved must be aware of and alert to their counterparts’ behavior (or lack thereof) in order to decide whether to make some behavioral adjustment.

I arrived at this definition primarily through a combination of The Art of Interactive Design by Chris Crawford and my own observations and reasoning from class discussions. Crawford’s discussion of thinking, listening, and speaking influenced the main body of my definition. I also felt the need to add the condition that there be some prompt. An interaction cannot occur without an initial catalyst that works to establish communication. For example, if the remote used to operate a toy car worked but didn’t have clearly marked buttons or appear to be associated with the car in any way, then I would hesitate to call it interactive. In this case, someone (i.e., an external actor) would have to approach you and explain how the remote works before you could use it, making the interaction no longer self-contained; the remote-control car is not in itself interactive.

In one of the slides from our first class meeting, a definition of interaction appears next to a .gif of a child bowing back to a bowing robot. The bowing robot might meet my conditions for interactivity depending on whether it is responding directly to the little girl’s actions. The robot prompts the girl to approach it by exhibiting recognizably human traits. My definition of interaction, however, presupposes a degree of reciprocity from both actors, which means the robot must have been able to identify that the girl had approached it and to respond as a result.

The second possible example of interactivity I looked at was a set of art installations with moving or changing components that responded to the viewer’s motions. The installation below fits my first criterion in that, by being framed against a wall facing open space, the piece prompts the viewer to approach it. The piece then responds to people’s behavior (in this case their movements) and adjusts accordingly. It meets all my subsequent conditions for interactivity in that each movement by the viewer produces a reaction, achieving an ongoing exchange between the different actors.

Our project, “The Heal-O-Matic 5000,” was predicated on the idea that by 2119, technology will have progressed to the point where people can obtain an accurate diagnosis of any ailment quickly and on the spot from a machine. The title and accompanying graphics on the front of the machine prompt the user to place their hand on its surface where the outline of a hand is shown. The machine then responds by identifying the person’s problem and dispensing the appropriate prescription.


Group Project: Individual Reflection by Tya Wang (rw2399)

In my definition, interaction is a process in which two parties repeatedly send information through media such as words, sound, images, physical movements, etc., while simultaneously receiving the other’s information. The more types of information are involved, the more interactive the relationship is.

When I was reading the mandatory materials and articles on the recommended website, I felt that despite the many efforts to define interaction precisely, a definition that does not specify the form of information transmission would be inadequate: the forms not only show the channels through which two parties communicate, but the number of forms is also decisive for people’s interactive experience. In his book The Art of Interactive Design, Crawford (2002) stressed that an interactive relationship should involve three repeated steps, namely receiving, processing, and sending information. Vivid and brief as “speak” and “listen” are in this definition, it does not specify how information is sent and received. This leaves room for many kinds of communication that, from my perspective, do not seem as interactive as others that nonetheless fit the definition. For example, although I wouldn’t consider using a calculator much of an interactive activity, I am still taking in the calculator’s output, checking it against my expectations, and putting in another formula. Both my behavior and the calculator’s processing fit Crawford’s definition. However, if we consider the channels through which information is conveyed—in this case, numbers on the screen and buttons on the panel—we can easily see that there isn’t much complicated information involved in this relationship. In a more interactive relationship, more senses or channels are involved. Take “Artificial Arcadia,” created by Fragmentin in collaboration with KOSMOS architects, as an example: the landscape of a particular place is rendered in 3-D with metal sticks and blanket covers. Visitors are allowed inside to observe, touch, and experience all around a simulation of melting ice and snow in the Swiss mountains.
This activity transmits information that activates nearly all of one’s senses and delivers a complex package of information through the combination of different channels. We can easily see how a relationship like “Artificial Arcadia,” which involves more forms of information transmission, is more interactive than one with fewer, such as using a calculator. That is why I stressed the different channels of conveying information in my definition.
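The calculator exchange described above can be sketched as a tiny loop of receiving, processing, and sending information. This is purely an illustration of Crawford’s three steps; the function names and the “a OP b” formula format are my own assumptions, not drawn from any cited project.

```python
import operator

# Toy calculator illustrating Crawford's receive/process/send
# (listen/think/speak) cycle. Purely illustrative.

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def listen(formula: str):
    """'Listening' (input): parse a simple 'a OP b' formula."""
    a, op, b = formula.split()
    return float(a), op, float(b)

def think(a: float, op: str, b: float) -> float:
    """'Thinking' (processing): compute the result."""
    return OPS[op](a, b)

def speak(result: float) -> str:
    """'Speaking' (output): render the result for the display."""
    return f"= {result}"

# One full cycle: the user presses buttons, the calculator answers,
# and the user can check the output and enter another formula.
print(speak(think(*listen("3 + 4"))))  # prints "= 7.0"
```

The point of the sketch is how thin each channel is: the only input is a button sequence and the only output is a number, which is why the relationship feels responsive rather than richly interactive.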

Despite the difference, there is still something that appears in both my definition and Crawford’s. A repeated process that requires information to flow back and forth, or a “cyclic” one, as Crawford puts it, is indispensable in an interactive process. If information travels back and forth only once or twice, the relationship is better described as responsive. Take the project “OpenDataCam 2.0” as an example: the device only requires its user to input the image of an object. It then calculates what the object most likely is, gives back the answer, and the activity ends. No repeated information flows between the user and the device, so it is more responsive than interactive. By contrast, in the “Artificial Arcadia” project mentioned before, the sticks and blanket change according to the region of mountain its system has selected and to when and where visitors enter the project. Information transmission is never absent from this relationship. Therefore, “Artificial Arcadia” qualifies as an interactive project while “OpenDataCam 2.0” does not.

In our group project, we focused on how our device could help people imitate another person’s physical motion. We first came up with a scenario, a dance class where many people have trouble picking up the dance moves, and then talked about how to transfer information about the moves from a teacher to a student. We decided that it could be streamed live through a server to make the information transmission timely and precise. I think this project resembles “Artificial Arcadia” in that both involve recording the shape or motion of a person or object and then sending that information to a device to show other people. This idea therefore fulfills my understanding of interaction: it not only conveys complex messages about motion, speed, and so on, but also allows information to flow back and forth repeatedly at any time. We then focused on the functions and parts of the device itself. To make the technology seem feasible, we decided that sensors on each body part would serve well for information input and output.

However, there is still room for improvement in our project. We set our scenario at the start and did not explore further once the prototype was built. We received some great questions from the audience, such as “have you ever thought of using this in other fields such as medical operations?”, and that is when we considered the potential of this idea in fields we did not show in our performance. We could definitely see this motion-syncing device of ours being used in education, because it removes the restriction of distance and makes hands-on experience more accessible to the general public.

Moreover, I think it would be better if our group had come up with a feedback system from students to teachers, because that would tailor the lessons to each student’s needs and deepen the learning experience. It would also match my definition of interaction better, because the student could provide more information for the teacher, making the communication a two-way relationship rather than simple teaching. These are the two aspects that I think our group could have thought through more carefully during the reflection process.

Overall, I think our project answers to my definition of interaction, and despite the improvements that could have been made, we had a great time collaborating: brainstorming, prototyping, and rehearsing for the performance. In future projects, I will continue to work collaboratively, taking advice from all channels and seeking ways to make our ideas better and more comprehensive.

Reference 

“Artificial Arcadia – Measured and Adjustable Landscapes.” CreativeApplications.Net, https://www.creativeapplications.net/processing/artificial-arcadia-measured-and-adjustable-landscapes/. Accessed 9 Oct. 2019.

Crawford, Chris. The Art of Interactive Design: A Euphonious and Illuminating Guide to Building Successful Software. 1st ed., No Starch Press, 2002.

“OpenDataCam 2.0 – An Open Source Tool to Quantify the World.” CreativeApplications.Net, https://www.creativeapplications.net/environment/opendatacam-2-0-an-open-source-tool-to-quantify-the-world/. Accessed 9 Oct. 2019.

 

Christina Bowllan Individual Group Project Reflection

The most common way to think about interaction is as a conversation: one person says something to a friend, who processes the statement and then provides a response. Crawford, the author of The Art of Interactive Design, has a similar definition of interaction: “a cyclic process in which two actors listen, think and speak” (3). While I agree with both interpretations, interaction has taken on an extended definition for me. Firstly, an interaction done correctly should not have to be explained to other people. Tom Igoe, who wrote “Making Interactive Art: Set the Stage, Then Shut Up and Listen,” states: “What you’re making is an instrument or an environment (or both) in which or with which you want your audience to take action. Ideally they will understand what you’re expressing through that experience.” In addition, through watching videos and projects in class, I have realized that interactive projects should not just be communication between you and the computer, but should have a greater goal of simplifying someone’s life.

To shape my understanding of interaction, I researched a few projects that helped formulate this definition. The first project I looked at was Richard Vijgen’s “Hertzian Landscapes.” The motive behind this project is to let people experience the frequencies of the radio spectrum by moving their bodies along a wall panorama. The project is fascinating to watch, but it does not fit my definition of interaction. If I were the person testing it, I would not understand what I was looking at or why my body’s movement changes the panorama video. The author would therefore have to step in and explain the technology behind it, which makes for a poor interaction project. Someone could argue that the greater goal of this project is to provide knowledge of a world we cannot see with our eyes, but it is hard to imagine someone’s life being helped because of it.

On the other hand, a project that did align with my definition is NailO, developed at the MIT Media Lab. This device connects to your computer or smartphone and can detect five different gestures to move the screen up and down. So, if someone is cooking, they don’t have to worry about making their computer or phone dirty. This is not only interaction between the person, the device, and the computer, but it also serves a practical use. And, while watching the GIF of the device, it is easy to understand how and why to use it.

When creating our “Let’s Dance” interactive project, we took all of these criteria into consideration. Firstly, we thought about what the basic communication should look like between the person and the device. We would program the teacher’s and student’s suits to be in sync with one another so that the student could learn the movements from the teacher. Secondly, when presenting the project, it was critical that the audience knew what the device was and why it was worth buying. This is why we exaggerated the frustration of learning new dance moves: it let the student feel at ease once he put the dance suit on. And lastly, the student’s life became easier when he could learn the movements from the teacher. As with the NailO project, the machine made possible something that you could not do before.

Group Research Project Individual Reflection-Xinran Fan

Interaction:

I was puzzled by the meaning of that word the first time I saw the name of this class. I then asked many people what I would learn in it and, to my surprise, got various replies. Some told me it was all about AI; others argued that it was more focused on engineering. So what is “interaction,” indeed?

The first understanding of interaction I developed was vague: “It might be something like a conversation between people and the computer.” That is because most interactive devices we saw, like the examples below, concern the relationship between people and the computer, especially where the computer receives information (such as movements, presses, or sound) from people and reacts. Yes, that is the most common kind of interaction, but after the following classes, I found my thinking was too narrow.

Video mirrors

Floor pads

Interaction is “an interactive process of listening, thinking, and speaking between two or more actors.” Reading this concept from Chris Crawford, I suddenly realized that the actors are not only the computer and a person: the conversation can be between the outside environment and a computer, between two computers, or even between two humans! Interaction exists when there are three stages: for human intelligence, “listening, thinking and speaking”; for the computer, “input, processing and output.” But no matter what kind of interaction it is, the main actor should have some sort of intelligence; at the very least, it must have the ability to “process,” that is, to follow some kind of logical system that translates the information it receives into another expression. Consider, for example:

WiFi Impressionist – City as an electromagnetic landscape

Created by Richard Vijgen, ‘WiFi Impressionist’ is a field installation inspired by the cityscapes of William Turner that imagines the city as an electromagnetic landscape.

Its input is the WiFi signal, the processing is done by a computer, and the output is an image. No human is involved, but it is truly a complete interaction.

So, in conclusion, I think interaction is a conversation between two or more actors (with a random-access medium in the main role) that contains all three complete stages.

Group project:

“How do we show a device 100 years from now? Should it be very advanced?” I have to emphasize that these two questions led to many unexpected ideas and discussions at the first meeting of our group. The first idea, from Barry, was a smartwatch that could measure the wearer’s physical data; it was shelved for lacking a futuristic feeling. Jackson even presented a doomsday idea based on the Bible: a house where people could live through the doomsday flood! Of course, it was turned down for feasibility reasons. Then I put forward a scheme, a smart suit that can change its thickness depending on the temperature. Meanwhile, Cathy came up with a more artistic idea, a drawing machine that draws from the user’s emotions. Comparing these two, we finally chose the latter.

Our process was not smooth either. At the beginning, Stephanie and I completed a script involving aliens and other novel scenes. However, this story ignited a keen discussion about the questions I listed above. In the end, we agreed that even though the device exists 100 years from now, it is still used in daily life. After that, everything went smoothly: Leah took responsibility for drawing and preparing the materials, Cathy and Barry improved the script, and together we all made a beautiful computer. (Though Jackson was absent from the final show, he put much effort into making the materials.)

I don’t think I need to describe the show in detail. Though my role was a computer that could not show its face, I was still very excited to be part of this team, for I felt so much interaction among us! After watching all the performances, I have the confidence to say our show was outstanding, because our “interaction” had a very clear logic!

Individual Reflection by Barry Wang

In Crawford’s article, interaction is defined in three steps. He uses the example of two people talking to illustrate them: listening, thinking, and responding (Crawford 5). More simply, they can be summarized as input, processing, and output. Indeed, these steps are necessary for interaction. But are they sufficient? No, I don’t think this definition depicts the full picture of interaction. In Igoe and O’Sullivan’s Introduction to Physical Computing, the figure “How the computer sees us,” a finger with one eye and two ears, enlightened me (xix). I believe that beyond the basic definition, high-level interaction should always involve humans. It is humans who can really feel the sense of interacting, and that is what really matters.

Let me explain my definition with an example that has a low level of interactivity. The project is an automatic plant waterer. It is made up of three parts: a soil moisture sensor, which reads the moisture as input; an Arduino UNO as the computation unit; and a DC motor that controls the water valve. Is it interactive? Sure, it involves input, processing, and output. It is also quite handy for saving your plant from drying to death when you are away for a long time. But obviously this project is not as interesting as we imagine interactivity to be. Why? Because no human is involved. In fact, I personally prefer to define this kind of low-level interaction as a form of automation.
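The waterer’s sense/process/actuate cycle can be sketched in a few lines. This is a minimal illustration of the logic only, not code from the cited Arduino project; the threshold value and function names are my own assumptions.

```python
# Hypothetical sketch of the plant waterer's control loop:
# sensor reading in -> decision -> valve state out. No human appears
# anywhere in the cycle, which is why it feels like automation.

DRY_THRESHOLD = 300  # assumed analog reading below which soil counts as dry

def should_water(moisture: int, threshold: int = DRY_THRESHOLD) -> bool:
    """Processing step: the decision depends only on the sensor reading."""
    return moisture < threshold

def control_step(moisture: int) -> str:
    """One full input/processing/output cycle; the valve is the output."""
    return "valve open" if should_water(moisture) else "valve closed"

print(control_step(150))  # dry soil  -> "valve open"
print(control_step(600))  # moist soil -> "valve closed"
```

Every run of the loop is identical and closed: the plant, not a person, is the only counterpart, which is exactly the property that keeps the device at a low level of interactivity.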

So, what is high-level interaction then? I also have an example here: a project called Volume, featured on CreativeApplications.Net. It is an array of mirrors that redirects light and sound to “spatialize excitement.” It reads the distance and angle of the user and moves the mirrors accordingly. It, too, is made up of three parts: depth cameras, an Arduino controller, and motors to drive the mirrors. But when I looked at this project, it immediately gave me a different feeling. It is sensational not only because it is a piece of art; most importantly, it involves humans. We can feel it moving according to our inputs, and this feeling creates a strong sense of interacting. In this way, we can call this device a high-level interactive device.

Having defined interaction in my own way, I would like to show what we did in our group project. We discussed some simple devices, including some of what I earlier called “automation” devices. Then we decided to center our design on the human. Also, having looked at some fancy projects, like the mirror one I mentioned above, and since we are working on interactive media art, we decided to do something more art-related. Guided by these ideas, we came up with our “Paint Your Day” device. The project derived from an idea of creating a picture from voice pitch. It sounded interesting, but it might be difficult to paint with only one parameter. We then thought about multiple inputs and finally settled on our plan. This device focuses on the human: all the data come from the users themselves. Though the input is collected automatically by our wearable devices, the key is that we want our users to feel cared for when using the device. The idea is that when users get the final picture, they will be happy/astounded… to see what their day looks like. In this way, I think our final project aligns well with my definition of interaction. This might be a combination of interaction and art in the real future. I am happy to develop this project further and explore more about it.

References:

Automatic Plant Watering System: https://create.arduino.cc/projecthub/neetithakur/automatic-plant-watering-system-using-arduino-uno-8764ba

Volume: https://www.creativeapplications.net/processing/volume-interactive-cube-of-responsive-mirrors-that-redirects-light-and-sound/

Crawford, “What Exactly is Interactivity,” The Art of Interactive Design,  pp. 1-5.

Igoe and O’Sullivan,  Introduction to Physical Computing.