Week 04 – Group Project Documentation – Alison Frank

Interaction could be most simply thought of as the way in which things work together. Interaction can be seen between humans, between humans and machines, between humans and animals, and between machines. As Chris Crawford might say, interaction occurs on a spectrum of complexity (6). In my opinion, anything which involves actions from two different parties can be considered an interaction. However, this definition can be narrowed to an interaction which involves a response or an inference on behalf of the other party involved. For an interaction to sit “higher” on my idea of the interaction spectrum, the response given by the other actor must be related to the input and involve a sort of inference. In Crawford’s example of a fridge light turning on, there is not much inference on behalf of the refrigerator: once the door is open, the light turns on; when the door is closed, the light is off.

One of the projects I researched was Daniel Rozin’s mechanical mirrors. I feel that his mirrors fit my definition of interaction, as an inference must be made between what the sensors are reading and what the mirror outputs. If something in front of the mirror is moving a certain way, the mirror must reflect this.

As for projects which do not fit my definition of interaction, I would point to Yayoi Kusama’s Infinity Room installations. While her installations are definitely stunning, I feel that the interaction with them is one-sided. When viewing these installations, you either walk inside or stick your head in. The only response you get is the reflection made by the mirrors. The visual effect is striking, but there is no interpretation on behalf of the installation itself. Unlike more traditional pieces of art, these installations require someone to take an action in order to be experienced, but they do not do anything in response. Therefore, I would not consider these installations “interactive,” even though others may think they are.

As with many new technologies and with code, I feel that interaction often involves a sort of inference made by the other actor. In terms of interacting with something like our project, you, as the main actor, would select what you want to happen, and then the machine (the Dream Catcher in this case) would make an inference about what it thinks you want to happen and give a result. This type of interaction is often seen when we converse with one another, and this sort of phenomenon is what I would best use to describe interaction.

As far as other interactive projects go, I feel that Daniel Rozin’s mirrors partly fit this definition. The main actor is the person or people standing in front of the mirror; the “mirror,” as the second actor, reacts to what it takes in as input and makes an inference to produce a desired output. While his mirrors do not directly “converse” with the actors, they can still be considered interactive, as they manufacture a response to the user.

For the project our group made, we wanted to create something with a more complex mode of interaction. We were also interested in how devices in the future might have an emotional side to their interactions. Therefore, we thought about creating a device that was highly personalized. As dreams are largely subconscious, we thought they would be a very interesting thing for a device to interact with. With our project, the interaction is not physical. The Dream Catcher works mainly with what goes on in your mind; the only physical interaction you would have is selecting what you want the notebook to do with your dream. However, I feel that this type of physical interaction does not quite line up with what I think interaction to be. While clicking or selecting something from a menu may be viewed as an interaction, I think that the main interaction happens when the notebook interprets your dreams.

I feel that this group project fits my definition of interaction, as it is not a one-way channel of interaction. The user gives the notebook the input of a dream; the notebook then interprets the dream and gives the user a variety of choices in return. After the user selects a choice, the notebook executes an action directly related to that choice. Due to this multi-channel flow of action and response, the notebook surpasses a more basic level of interaction and hence fits my proposed definition of interaction.

Sources used:

The Art of Interactive Design – Chris Crawford (pg 1-6)

Yayoi Kusama’s Infinity Mirror Rooms (link)

Daniel Rozin’s Mirrors (link)

Week 03 – Sensors – Alison Aspen Frank

Our Circuit

For this recitation, my partner and I chose to work with a distance sensor, as the schematics were relatively simple.  First, we connected the sensor to the Arduino (as an analog input).

connection of distance sensor to arduino

Once we had it connected, we chose to create a sketch which used Serial.println() to log the distance value given by the sensor. To create this sketch, we referred to the following example code; this link also helped us to set up our sensor (link).

serial log of distance
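For reference, a minimal sketch along these lines might look like the following. The pin here is an assumption (the sensor’s output wired to analog pin A0), and the raw analogRead() value simply stands in for distance; how it maps to real distance depends on the sensor.

```
// Minimal logging sketch, assuming the distance sensor's output is on analog pin A0.
int sensorPin = A0;   // analog input from the distance sensor (assumed pin)

void setup() {
  Serial.begin(9600); // open the serial connection for logging
}

void loop() {
  int sensorValue = analogRead(sensorPin); // read the raw sensor value (0-1023)
  Serial.println(sensorValue);             // log it to the serial monitor
  delay(100);                              // short pause so the log stays readable
}
```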

Then, once the values logged were fairly accurate, we went on to connect an LED as a digital output, and used if statements to make the LED turn on only when something came within a certain distance of the sensor.

sketch of sensor setup

sensor connected to arduino

The schematics for this were fairly simple, as we chose not to use a breadboard; therefore, everything was connected directly to the Arduino. However, I would like to experiment more with this concept and with more LEDs in the future.
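A sketch along the lines of what we built could look like the example below. The pin numbers and the threshold are placeholders rather than our exact values, and depending on the sensor a higher reading can mean either closer or farther, so the comparison might need flipping.

```
// Placeholder pins and threshold: an LED on a digital pin switches on
// only when the sensor reading crosses a chosen value.
int sensorPin = A0;   // distance sensor (analog input)
int ledPin = 9;       // LED (digital output)
int threshold = 300;  // raw reading at which the LED should switch on

void setup() {
  pinMode(ledPin, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  int sensorValue = analogRead(sensorPin);
  Serial.println(sensorValue);

  if (sensorValue > threshold) {   // something is close enough to the sensor
    digitalWrite(ledPin, HIGH);
  } else {
    digitalWrite(ledPin, LOW);
  }
  delay(100);
}
```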

Question 1

While assembling our circuit, we didn’t have a set idea of what we wanted to build. Rather, we chose to play with the components until we got something to work. However, in the end, I feel that our circuit works similarly to a porch light: a light in front of a house’s door which turns on when somebody walks close to it. Therefore, it could be used by many people, and though the concept is simple, it is relatively useful. I also feel that the concept we used could work like a doorbell, or like the music that plays at FamilyMart, as the schematics would be similar.

Question 2

I think people often use this metaphor for coding because code is basically a “recipe” for a computer to follow. However, I find that this metaphor doesn’t fully cover the complexities of coding, as computers require very specific instructions to complete a task. Along with this, I feel that the metaphor makes coding seem boring; often, code can be playful as well. Personally, I feel that coding is more a way of problem solving: you have your problem (a desired solution) and you have to find a path to get there. Therefore, it’s less of a tutorial and more of a way to use creativity and logic to get a result. Also, you may find a new result or solution as a side effect of your code that does something even better than you intended.

Question 3

I feel that the computer has a large impact on our behavior. For example, we have become reliant on computers for many aspects of our lives. Along with this, I feel that many of the things we create artistically have evolved to be based on the computer. Whether it’s drawing digitally, working with a 3D game engine, or writing creative code, our ways of creating have been modified to be based on modules and snippets of code. We are also reliant on forms of social media to keep us in touch with friends and family. When we talk with friends online, the syntax we use is different from the syntax of daily life. As we cannot easily convey emotion in short texts, we use abbreviations, capitalize letters, or send emojis, things which we would not do in a normal spoken conversation.

Week 02 – Arduino Basics – Alison Frank

For the first two circuits my partner and I built, we were able to figure things out fairly quickly, as fewer components were needed when working with the Arduino. The main struggle we had was with the Arduino pins: we would either not initialize the pinMode() in the sketch, or we would place the wires in the wrong pins of the Arduino. Circuit 1 was the easiest to build, and we had no issues with the code.
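Since forgetting pinMode() was our most common mistake, here is a minimal sketch showing the setup step we kept skipping. The pin number is just an example and not necessarily the one from Circuit 1.

```
// Every pin used as an output (or input) needs its mode declared in setup().
int ledPin = 9;   // example pin only

void setup() {
  pinMode(ledPin, OUTPUT);   // the line we kept forgetting
}

void loop() {
  digitalWrite(ledPin, HIGH);
  delay(1000);
  digitalWrite(ledPin, LOW);
  delay(1000);
}
```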

With Circuit 2 (Melody), we forgot to change the pin number in the sketch, and we also forgot to open the file that supplied the musical notes to the Arduino. Along with this, our speaker was very quiet, so we thought our circuit was broken when we really just couldn’t hear it at first. Once we sorted this out, we ran the code and had no issues.
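As a rough sketch of how the melody code is structured (not our exact sketch): the speaker pin below is assumed to be pin 8, and the NOTE_ constants normally live in a separate pitches file (the file we forgot to open), so a few are defined inline here to keep the example self-contained.

```
// A few note frequencies defined inline; the real example keeps them in a pitches file.
#define NOTE_C4 262
#define NOTE_E4 330
#define NOTE_G4 392

int speakerPin = 8;                            // assumed speaker pin
int melody[] = { NOTE_C4, NOTE_E4, NOTE_G4 };
int noteDurations[] = { 4, 4, 2 };             // 4 = quarter note, 2 = half note

void setup() {
  for (int i = 0; i < 3; i++) {
    int duration = 1000 / noteDurations[i];
    tone(speakerPin, melody[i], duration);     // play one note
    delay(duration * 1.3);                     // pause between notes
    noTone(speakerPin);
  }
}

void loop() {
}
```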

For the last circuit, a lot of tinkering was required. When we opened the model on TinkerCAD, we were very careful to follow everything as closely as possible. Our main struggle was keeping track of what we had completed. When we first built the circuit, we also had some issues with the second button not registering the score. After we reconfigured the sketch and the circuit, we got it to work.

sketch of circuit schematic

Lastly, we worked with another team to try to make the 4-player game. My partner worked on connecting the two circuits while I worked on modifying the code. For the circuits, we chose not to remove anything and hence made things more complicated by adding even more jumper cables. This led us to get confused about which button and which LED belonged to whom, which ultimately led to more issues. As for configuring the code, my main issue was reconfiguring the conditional statements and figuring out how many cases to add with the additional players. Due to time constraints, we did not end up finishing this circuit, but we tried. 🙂

our attempt at circuit 4, two breadboards with one arduino
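Looking back, one way to avoid the case explosion I ran into would have been to keep the button pins and scores in arrays and loop over the players. The sketch below is a hypothetical reconstruction of that idea, not our actual code, and the pin numbers are placeholders.

```
// Hypothetical scoring logic: arrays plus a loop instead of one if statement per player.
const int numPlayers = 4;
int buttonPins[numPlayers] = {2, 3, 4, 5};              // placeholder pins
int scores[numPlayers] = {0, 0, 0, 0};
int lastStates[numPlayers] = {HIGH, HIGH, HIGH, HIGH};

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < numPlayers; i++) {
    pinMode(buttonPins[i], INPUT_PULLUP);               // button wired between pin and ground
  }
}

void loop() {
  for (int i = 0; i < numPlayers; i++) {
    int state = digitalRead(buttonPins[i]);
    if (state == LOW && lastStates[i] == HIGH) {         // button was just pressed
      scores[i]++;
      Serial.print("Player ");
      Serial.print(i + 1);
      Serial.print(" score: ");
      Serial.println(scores[i]);
    }
    lastStates[i] = state;
  }
  delay(10);                                             // crude debounce
}
```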

Question 1: (Reflect on how you use technology in your daily life and on the circuits you just built. Use the text Physical Computing and your own observations to define interaction.)

Throughout my day-to-day life, I use technology in a variety of ways. Whenever I want to talk to my family or friends, I call them over WeChat or talk over social media. Along with this, I never carry cash and use my phone for all of my payments. For places like Starbucks and Wagas, I have membership cards which are kept in an app. Most days I use my phone to order delivery, and all the work I do is on my laptop. Most often, the technology I use is meant to make my life more convenient and streamlined. Even in my bedroom, I have an air purifier which changes its settings depending on the quality of the air around it, and I have a remote-controlled air conditioner. These technologies are used to enhance my living space and also provide a small amount of convenience.

Question 2:

If I were given the chance to work with 100,000 LEDs, I would choose to make a mirror out of them. However, I would not want this to be a typical mirror. The mirror I would create would use brightness to reflect how far away a person is. If possible, I would even like to customize the colors to reflect the personality of a person and combine this with the brightness to create an interesting interpretation of a mirror (i.e., the mirror captures your body shape, but if you are a quieter person, the LEDs are a bit dimmer, or the colors are darker, etc.).

Week 03 – ml5 Project – Alison Frank

For this project, I chose to use the poseNet model, as I feel that it is versatile and the easiest for me to work with. I was intrigued by the way in which computers might view someone, and therefore decided to expand on this idea in my project. To accomplish this, I capture images through the webcam of my computer, but this capture is not displayed on the canvas. Therefore, when you look at the canvas, you only see circles for the eyes and rectangles for the nose and the mouth. The technique for accomplishing this was simple, and my main issues were the sequence of my functions and the positioning of my drawings. I also had issues with my webcam, but they were unrelated to my code.

I chose to keep my project simple and to only draw over the main parts of the human face: the eyes, the nose, and the mouth. To access the keypoints for these, I used Moon’s sample code as a reference, then used my own knowledge of p5 to draw the shapes. When running my code and looking at the names of the parts detected by poseNet, I noticed there was nothing to indicate where the mouth is. Therefore, as I still wanted to draw something, I chose to access the “nose” keypoint and modified the x and y position of the rectangle I used for the mouth.

When choosing which shapes to use, I kept them simple so as to create an effect of strangeness when you look at the project. For each of the eyes, I drew two ellipses: one large white one, and a smaller black one to mimic the pupil. The nose is a turquoise rectangle, and the lips are a red rectangle.

Overall, I am happy with how my project turned out, and I enjoy how whimsical it is. However, I would eventually like to find ways to make the different parts move (for example, having the mouth open and close).

screenshot of my project, shows detection of face

faces being detected in my code

My Code:

dropbox link

Week 02 – Case Study Semantris – Alison Frank

Link to presentation

For this project, I chose to look at Semantris, a game developed by Google’s research team which makes use of AI technology.

This game is based on word association and has two different modes which create different challenges. The premise of the game is that you have to guess a word which the game’s AI will relate to the given keyword. Throughout my play of Semantris, I found that the relations are mostly natural, but there were a few unexpected results.

The word-association training of Semantris was focused on conversational language and relations. Therefore, Google tried to implement common questions and answers seen in human conversations. To gain data to use for Semantris, Google Research also looked back to their project Talk to Books, a project which connects user input to passages from books.

Seen below is a sample of code from the project, used to help the AI understand some common conversational questions and their corresponding answers.

Sample of Google's code highlighting use of conversational vocab

Semantris makes use of TensorFlow’s word2vec model (link here), which is used to map the semantic similarities between words. I found this to be an interesting way to gain quantitative data from a qualitative thought: when you train an AI model, you need your data to be in a format which can be understood by a computer. Therefore, this model needed to move beyond simply comparing strings by the characters they contain and instead focus on the meaning of the words. Personally, I think that this would be incredibly difficult to graph, but TensorFlow has some examples of how they accomplished this (pictured below).

graph showing word relation  (just one example of how words can be related)

skip-gram model, used by TensorFlow (another way to highlight how data sets can be formed)
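To give a rough idea of how word meaning becomes numbers that can be compared (this is only an illustration, not Google’s code): word2vec-style models represent each word as a vector, and “how related” two words are becomes a geometric question, such as cosine similarity. The three-dimensional vectors below are made up purely for illustration; real embeddings have hundreds of dimensions.

```
// Toy cosine-similarity example with invented word vectors.
#include <cmath>
#include <iostream>
#include <vector>

double cosineSimilarity(const std::vector<double>& a, const std::vector<double>& b) {
  double dot = 0.0, normA = 0.0, normB = 0.0;
  for (size_t i = 0; i < a.size(); i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (std::sqrt(normA) * std::sqrt(normB));
}

int main() {
  std::vector<double> cat   = {0.9, 0.1, 0.3};  // invented embedding for "cat"
  std::vector<double> tiger = {0.8, 0.2, 0.4};  // invented embedding for "tiger"
  std::vector<double> car   = {0.1, 0.9, 0.7};  // invented embedding for "car"

  std::cout << "cat vs tiger: " << cosineSimilarity(cat, tiger) << "\n";  // higher = more related
  std::cout << "cat vs car:   " << cosineSimilarity(cat, car) << "\n";    // lower = less related
  return 0;
}
```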

Along with this, the training for this project was semi-supervised, so that the word pairs could be more conversational and natural, according to Google. When you play the game, the AI also understands pop culture references and some word relations which are only understood in conversation.

Outside the boundaries of a game, Semantris could have other practical uses, especially for those who are learning to speak English. Along with this, the techniques used to code the game could be implemented in other text-based AI to create more natural results. However, there is still some polishing which could be done.