Interaction Lab Final Project Documentation – Alison Frank

Final Project Title: An Adventure Through Sound & Text – Alison Frank – Rodolfo Cossovich

Conception and Design

The project which I chose to make took heavy inspiration from text-based video games. As the idea was that users were to interact with text, my partner and I chose not to include visuals within our game, and we instead added audio to enhance the experience. We decided to control the game through distance sensors. Though this type of interaction felt awkward to some people at first, I felt that it forces users to contemplate how they usually interact with certain objects. From a designer's point of view, it also forces us to understand how certain mediums of interaction cause users to act in certain ways.

For the physical aspect of our design, we simply chose to create a laser-cut box with two holes for the distance sensors. While this box hid the circuitry well, I felt that it was boring and did not foster any interaction. If this aspect were to be improved, I would like to have seen an interface which would have allowed users to better understand how they were supposed to interact with the game. Though visuals were not an initial part of our design, they could have aided the user interaction.

In terms of the actual design, the idea which I had in mind differed from the one we eventually went with. I had considered putting together a different physical interface which would highlight aspects of the game and encourage interaction. However, we ended up designing a simple box instead.

Fabrication and Production

As I have more programming knowledge and experience than my partner, I ended up doing most of the programming for the game. My first challenge in creating this project was configuring the serial communication from Arduino to Processing. The main problem I ran into was that the value being sent over was not one which Processing understood, or one which could not easily be referenced within Processing, so I had to double-check the code I was using. Also, as I wanted to streamline the coding process, I added two conditional statements to the Arduino sketch which would print a "1" if an object was in front of the sensor and a "0" if nothing was. In this way, I only had to deal with two values instead of a range of values.
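
As a rough sketch of that Arduino-side thresholding idea (the pin numbers, the 20 cm cutoff, and the readDistanceCM() helper are illustrative placeholders of mine, not our exact project values):

// Prints 1 when something is in front of the sensor, 0 otherwise.
const int trigPin = 9;
const int echoPin = 10;

void setup() {
  Serial.begin(9600);
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
}

void loop() {
  float distance = readDistanceCM();
  if (distance < 20) {
    Serial.println(1);   // object in front of the sensor
  } else {
    Serial.println(0);   // nothing in front of the sensor
  }
  delay(50);
}

// Standard HC-SR04-style pulse measurement.
float readDistanceCM() {
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  long duration = pulseIn(echoPin, HIGH);
  return duration * 0.034 / 2;  // convert echo time to centimeters
}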

After implementing these changes, the serial communication worked flawlessly.

Another thing I struggled with while programming was making the different game scenes. I had programmed all the scenes as separate functions, but found that I would always return to the draw loop. Therefore, I talked with Rudi, who showed me how to use a switch statement based on an integer variable. Using this method, I was able to create a "case" for every scene and increment the scene variable whenever the next scene needed to occur.
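
The structure looked roughly like the sketch below; the scene names and the three-scene count are placeholders of mine, not our actual story beats.

int scene = 0;  // which scene is currently active

void setup() {
  size(500, 500);
}

void draw() {
  // draw() runs every frame; the switch routes each frame
  // to whichever scene is active instead of one fixed screen
  switch (scene) {
    case 0:
      introScene();
      break;
    case 1:
      forestScene();
      break;
    case 2:
      endingScene();
      break;
  }
}

void introScene() {
  background(0);
  fill(255);
  text("You wake up in a dark forest...", 50, 250);
  // when a sensor sends a "1", advance: scene = 1;
}

void forestScene() { background(0); /* next part of the story */ }
void endingScene() { background(0); /* final scene */ }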

However, the overarching obstacle throughout all phases of the project was that the sensors were very sensitive, and any movement would cause the game to progress regardless of the user's choice. This could have been easily solved by switching the type of control we used, but both my partner and I felt that buttons were too typical a choice for game design. We also noticed throughout user testing that once a user had played through the game once, they understood how to position their hands over the sensors without triggering accidental choices, which was another reason we decided to keep the interface as it was. Something could also have been added to the software side of the game to control this; I feel that keeping track of which audio sample was playing would be one way to keep it in check. To try and combat this, I originally added delays to allow the audio to play, but found that this caused further issues (more on this later).

Incorporating audio also gave me many issues on the coding side. At first, I chose to use Processing's Sound library, but found that nothing worked for me. Therefore, I tried using Minim instead and found it to work. Then, I wanted to ensure that each track would finish playing before the user could switch scenes, so I added delays to every scene. However, this caused the text to load improperly or the game to transition prematurely to another scene. Instead, my partner worked with Tristan to solve this issue, and we ended up storing the value of millis() in a global variable and comparing against it to time the transitions between scenes.
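
A minimal sketch of that timing pattern, assuming Minim and a placeholder audio file name (the file name and the five-second wait are illustrative, not our actual values):

import ddf.minim.*;

Minim minim;
AudioPlayer track;
int sceneStart;  // value of millis() when the current scene began
int scene = 0;

void setup() {
  size(500, 500);
  minim = new Minim(this);
  track = minim.loadFile("intro.mp3");  // placeholder file name
  track.play();
  sceneStart = millis();
}

void draw() {
  background(0);
  // only allow a transition once enough time has passed,
  // instead of blocking the whole sketch with delay()
  if (scene == 0 && millis() - sceneStart > 5000) {
    scene = 1;
    sceneStart = millis();  // reset the timer for the next scene
  }
}

Minim's AudioPlayer also has an isPlaying() method, which could have served as the audio-aware guard mentioned above.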

My partner was in charge of creating the physical interface for our project, and she chose to laser cut a box of the same dimensions as her laptop. After the box was fabricated, we painted the entirety of it and added a dragon design, as the story behind our game was influenced by "Dungeons & Dragons." Overall, I felt that if we had started earlier or had more time, we could have created a better user interface. Something is lacking in the box design, and it does not do much to foster user interaction. If I were to revise the physical design, I would create something which emphasized the sensors so that users would know where to place their hands. On top of this, I would use the laser cutter to engrave designs into the interface instead of painting it.

Conclusions

After creating this project, I felt that it was less about conforming to my definition of interaction and more about how it allowed me to reconsider interaction. While designing something interactive, I noticed that there are many more things which need to be taken into consideration than I expected. Personally, creating this project was more difficult than I anticipated, and I got too caught up in the details when I should have been thinking about how users were going to interact with it. On the same note, my partner and I had conflicting views about the type of project we wanted to create. As my partner is very passionate about writing, she was hoping to design a game based purely on a story which she had written – something which I believe brought uniqueness to our end product. On the other hand, I enjoy programming and would have liked to make a very visual-heavy game, as audio is not generally something I have an interest in. Along with this, my partner was still very new to programming, while I have had prior experience. Therefore, there were disparities within our work, and communicating about issues was complicated, as they often involved comparatively complex programming to solve, which required me to explain all the steps I had taken while programming. While I am glad that my partner was able to familiarize herself with programming, a large chunk of the workload was left for me. That being said, my partner did do work as well, though there were many things with which she was still not familiar.

In terms of the interaction of our project, I find that it loosely fits my definition of interaction. However, in my definition I had emphasized that the interaction between two parties should occur naturally, and this is the point on which I feel our project fell short. Through user testing and the final critique, I found that users were still not entirely sure at first how best to interact with our project, and perhaps we should have utilized a different type of sensor. Overall, there were not many aspects of our project which adequately fostered interaction; instead, there were some points at which people may have been discouraged from using it. Along with this, our project only caters to the small group of people who are willing to read through the text supplied. Otherwise, it falls flat for those who play it, which is where I believe the further addition of audio and visuals would be of use.

Much of the feedback I received through user testing was that not many people are interested in reading text, which I find completely natural. This could also be one of the reasons why our project was not entirely attractive to those who played it. The added audio helped to relieve this, but it did not solve the issue. Therefore, to improve the project, I would record voice-overs of all the text, or scrap the text altogether. Otherwise, this project remains available to the niche audience of those who enjoy playing text-based games. I also feel that the point our project was at when we presented was pivotal, and we did not have enough time to fully make the improvements we wished to make.

In response to this, I feel that the best way to improve our project would be to create something that better fosters interaction: in my mind, more involvement for the user along with a system that encourages them to read (as the game we built is text-based). However, if I had to start from square one, I might have strayed away from building a text-based game, as I feel that the interactions we can build with text are very limited. I do feel that games are interactive and immersive, but I am unsure how meaningful the interaction is, and this is the point which drives me to further ponder how to create with interaction.

I feel that interaction is a very intricate thing, and making interaction accessible to every audience is even more challenging. However, the more experience I gain through creation and design, the more I learn about what interaction means as a whole and how it may be used in practical and artistic settings.

iML Final Project – “Copycat” – Alison Aspen Frank

Link to final code

Link to final presentation

Inspiration

Since the midterm project, I have been interested in how computers process human language, along with the creative and practical applications of this. I have found many art projects which utilize machine learning to process language, and many articles describing how machine learning is being used for language translation (references are included at the end of this documentation). As I had already worked with the ml5.js Word2Vec model, I wanted to work with text generation instead. That being said, my biggest goal for this project was to successfully train a model on my own.

Original Plan

The way I originally planned to execute this was to train a text-generation model on my own and run inference on it in JavaScript. I first tested Keras' example text-generation model, but found that the results it gave were nonsensical. Looking back, this could have been due to the size of my dataset, as I was using a relatively small one.

After this, I looked into many different models, but chose to go with ml5.js, as the model would automatically be converted to JavaScript. As I was still very unfamiliar with training models on Intel's DevCloud, I spent about two weeks trying to get the model to train successfully. The first errors I received were in my bash script, and they occurred because I did not correctly reference my data directory. Once this was solved, there was another error with a .pkl file created during the training phase. To debug this, I got help from Aven; together we reorganized my directories and modified the Python training script. However, I was still receiving the same error (pictured below). Eventually, I found that the .pkl file was being created under a different name than the one referenced in the script. Therefore, I changed every instance of the file name in the training script and was finally able to train my model.

PKL File Error:

[screenshot of the .pkl file error]

However, once the model was trained, I found that it could not be loaded for inference in JavaScript. Even though the model was saved in the correct folder, whenever I ran my JavaScript I would get an "unexpected token < in JSON" error (which usually means the loader received an HTML page, such as a 404 response, instead of the expected JSON file). I had Aven look at this error as well, and instead of using JavaScript to access the model, we tried to run it with Python. However, this also gave us an error. Then, Aven and I did some research to see if ml5 had any pre-trained models which could be used; once I got access to them, I found that they returned the same error. After conversing with Aven, we decided that this meant there was an error in the backend of the ml5 code. Unfortunately, we only found this error on Saturday, leaving me with two days to put together something else for the final.

Backup Plan

With the shortage of time in mind, I chose to work with ml5's Word2Vec model once again. I chose this because I knew it would function properly, and I believed I could get a similar outcome to what I had originally pictured when planning my project.

My new idea was to use Word2Vec to take each word of a user input, find the closest word to it, and output the new words. The effect is similar to text generation, but I would say it is more akin to something I would call "machine poetry." Overall, I am currently satisfied with the outcome, considering I was forced to put it together in a day and a half; the user interface design is not exactly where I would like it to be. All other aspects aside, I accomplished my original goals: I trained a model (even though it could not be used for inference), and I did something with text generation.

Conclusion

Though I was not able to carry out my original plan for the final project, I learned many useful things along the way. Through the stages of this project, I learned how to utilize and customize bash scripts and how to set up datasets for training, and I gained more familiarity with Python (a language with which I only have three months' worth of experience). I also found that my project can be used to demonstrate how machines process our language and the relations within it. My end result may appear basic, but it does not fully show all the work I did along the way. That being said, this class has fostered my interest in machine learning, and I am eager to learn more.

[Photo: the working project]

Interesting Projects & Articles:

Sunspring: AI-Written Screenplay

Recitation 07 – Processing Animation – Alison Frank

For this recitation, I chose to create an animation consisting of a circle on screen which would change color whenever the space bar was pressed. Along with this, I chose to have the circle grow until it hit the edge of the canvas, at which point it would shrink again.

As I am already familiar with p5, coding this exercise was not difficult, and the main built-in function I used was keyPressed(). Along with this, I also created my own functions to split the program into parts. Overall, this exercise was not too challenging, and the only thing I had issues with was programming the speed at which the circle increases in size.

In the extra exercise, the shape which changed color was a ring. To accomplish this, I drew a circle with a heavy stroke weight and a fill the same color as the background, so that the only color value which changes is that of the stroke. I then modified the color based on the size of the circle as it grew and shrank. To accomplish this, I declared a variable 'r' to be used as the size of the circle, then set the range of the HSB color mode to go up to 500 (as my circle would be 500 pixels wide at its largest). Then, when calling stroke(), I passed the variable 'r' for all three values. The result I achieved is very similar to the one demonstrated, but I chose a black background, as I thought it looked better with my color range.

Code used for first exercise:

int r, g, b;                          // fill color, randomized on key press
int radius = 25;                      // used as the ellipse diameter
int circleSpeed = int(random(2, 5));  // pixels of growth per frame

void setup() {
  size(500, 500);
  fill(0);  // start with a visible (black) circle before any key press
}

void draw() {
  background(255);
  noStroke();
  ellipse(250, 250, radius, radius);

  moveCircle();
  circleCheck();
}

// pressing any key (e.g. the space bar) changes the color and speed
void keyPressed() {
  colorChange();
  speedChange();
}

void colorChange() {
  r = int(random(0, 255));
  g = int(random(0, 255));
  b = int(random(0, 255));
  fill(r, g, b);
}

void moveCircle() {
  radius = radius + circleSpeed;
}

// reverse direction when the circle vanishes or reaches the canvas edge
void circleCheck() {
  if (radius < 0 || radius > 500) {
    circleSpeed *= -1;
  }
}

// pick a new speed while keeping the current direction
// (the original int(random(1, 2)) always truncated to 1, so it did nothing)
void speedChange() {
  int direction = (circleSpeed < 0) ? -1 : 1;
  circleSpeed = int(random(2, 5)) * direction;
}

Code for extra exercise:

float r = 25.0;      // size of the ring; also drives its color
float speed = 3.0;   // pixels of growth per frame

void setup() {
  size(500, 500);
}

void draw() {
  background(0);
  colorMode(HSB, 500);  // scale HSB to the circle's maximum size

  fill(0);              // fill matches the background, leaving only a ring
  stroke(r, r, r);      // hue, saturation and brightness all follow the size
  strokeWeight(20);
  ellipse(250, 250, r, r);

  expandCircle();
  checkCircle();
}

void expandCircle() {
  r = r + speed;
}

// reverse direction at the smallest and largest sizes
void checkCircle() {
  if (r < 25 || r > 500) {
    speed *= -1;
  }
}

Recitation 09 – Final Project Progress – Alison Frank

Part 1 – Other Member’s Proposals

1 – Celine’s Proposal

The first proposal I received was about an escape room, planned to act as a metaphor for the stress and anxiety faced by the majority of university students. As this project was still in the planning stages, much of the feedback was about implementation. The advice I gave related to the design of the project and how the group could use a combination of audio and visuals to mimic an escape room. Along with this, the members of our discussion group collaborated to work out which system of interaction would suit this project best. Originally, the plan was to have a code you enter to break out, but we offered suggestions such as making a sequence game to "unlock" yourself from the room. Our group was also concerned about making the project too stressful or scary, so we came up with suggestions on how to avoid this; most of this discussion was about how difficult the game should be and what the audio and visuals should be like. As the project has an educational basis, we thought about showing the reasoning behind the project once the user escapes the room.

2 – Karen’s Proposal

The second proposal presented was "Pickmon – Pick your own Pokemon," a group project which intends to create a system of interaction that assigns a user a Pokemon based on their heart rate. Just by reading the proposal, it was difficult to tell what the system of interaction was, as it was described simply as measuring your heart rate and getting a result. As an improvement, we suggested putting together a short quiz for players to take, to make the results more personal. However, this could still be seen as uneventful, so another group member suggested adding a battle mode based on your heartbeat (e.g., if your heart rate is higher, you win the battle). I found this to be a great suggestion which would greatly elevate the project.

3 – Sharon’s Proposal

Lastly, the other proposal I received was for a project based on the Sichuan tradition of mask-changing. The plan was to create an interface using a webcam along with OpenCV to place a Sichuan mask on the user's face; the user could then press a button to switch masks. Personally, I feel that the button interaction is boring and that the project could be improved by using hand motions to change the mask. Other group members also suggested adding music and changing the system of interaction. All things aside, I appreciate that this idea is based on a cultural tradition, which makes the project more meaningful and unique.

In terms of my understanding of interaction, I would not say that my definition has changed, but I now have a greater sense of what higher levels of interaction may look like and how they can be employed creatively. I also have an improved understanding of the ways in which we interact with machines and how interactive systems are employed throughout our daily lives.

Part 2 – My Proposal & Feedback

For my project, the majority of the feedback had to do with its scope and environment. As we only have a short amount of time to complete the final projects, the other group members warned that if there were too many choices within my game, it would become too difficult to build in the allotted time. Aside from this, the other suggestions were in regard to the audio and visuals of the project. Based on my group's reactions, the least successful part of my proposal was explaining the story behind the project: as I am working in a group and my partner had written the story we plan to use, my group members were a bit confused about what the content of the project would be. I feel that my reasoning for using audio was the most successful part of my proposal. While I am still unsure what the story content will be, the feedback showed that my idea of a storytelling game was very well received. Most of the feedback was on things I was expecting to hear about, and I will take it into account while building my project. On top of this, a group member mentioned that my project could be modeled on the text-based games which were popular early in the history of video games. Having played these types of games before, I really enjoyed this suggestion and can see how it could be implemented within my project.

Serial Communication Documentation – Alison Frank

Exercise 1 – Etch a Sketch

In order to build the etch-a-sketch, I started by connecting two potentiometers to the Arduino. I connected the outer legs of each potentiometer to 5V and ground, and ran a jumper wire from the remaining middle (wiper) pin to an analog input pin. For the coding, I sent the values from the potentiometers into Processing, then mapped them to the width and height of the canvas. I made some minor mistakes within the map() function at first, but these were easily fixed. I chose to draw a series of ellipses based on the mapped values of the potentiometers. Though the final sketch worked, I found that the frame rate was too slow; I believe the main cause was that the incoming values from the Arduino had to be processed constantly, every frame.
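
A minimal sketch of the Processing side, assuming the Arduino prints the two potentiometer readings as a comma-separated line (the port index, baud rate, and message format are assumptions of mine, not necessarily our exact setup):

import processing.serial.*;

Serial port;
float x, y;  // mapped pen position

void setup() {
  size(500, 500);
  background(255);
  // index 0 is an assumption; pick the port your Arduino is on
  port = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  // read lines like "512,340" sent from the Arduino
  while (port.available() > 0) {
    String line = port.readStringUntil('\n');
    if (line != null) {
      String[] values = split(trim(line), ',');
      if (values.length == 2) {
        // map the 10-bit analog readings onto the canvas
        x = map(float(values[0]), 0, 1023, 0, width);
        y = map(float(values[1]), 0, 1023, 0, height);
      }
    }
  }
  noStroke();
  fill(0);
  ellipse(x, y, 10, 10);  // draw without clearing, like an etch a sketch
}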

Exercise 2 – Musical Instrument

For this exercise, I chose to have a speaker play a tone whenever the mouse was inside the Processing canvas. Along with this, I tried to map the tone's frequency to mouseY. However, I noticed that regardless of the number I passed into the tone() function, the buzzer made the same sound, with no difference in pitch. I do not know if this was due to the buzzer I used or to a coding error, though I got the sketch to work regardless; it may also be that I had wired the buzzer to an unsuitable digital pin. (In hindsight, if the component was an active buzzer, it would generate its own fixed frequency, which would explain the unchanging pitch.) To make my sketch more interesting, I chose to add text on the canvas reading "don't touch me." I feel that using the mouse as the medium of interaction was not very natural; implementing this through a sensor instead could create an interesting effect.
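
As a rough sketch of the Arduino side (the pin number and frequency range are placeholders; this assumes Processing sends one byte at a time, already mapped into 0–255, with 0 meaning the mouse is outside the canvas):

// Plays a tone whose pitch follows a byte received over serial.
// Pin 8 and the 200-2000 Hz range are illustrative assumptions.
const int buzzerPin = 8;

void setup() {
  Serial.begin(9600);
}

void loop() {
  if (Serial.available() > 0) {
    int value = Serial.read();          // 0-255 sent from Processing
    if (value == 0) {
      noTone(buzzerPin);                // mouse outside the canvas
    } else {
      int freq = map(value, 1, 255, 200, 2000);
      tone(buzzerPin, freq);            // a passive buzzer changes pitch here
    }
  }
}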

Circuit Schematics:

[Image: circuit schematic sketch]