While my project could not be fully polished in time for the IMA show, I had a really good time there and loved a lot of the projects I saw.
What I really enjoyed seeing was Sam’s finished video game project. Sam taught a workshop on Unity a while back, and he used the same basic avatars to teach us how to make video game characters move and animate. His game reminded me a lot of a game I really loved called Ori and the Blind Forest.
What I really appreciated about the IMA show is how it showed many more aspects of interactive media arts than simply what was covered in my class, Interaction Lab. While I was doing projects for Interaction Lab, the art aspect of IMA would often get lost, so I was blown away by a lot of the projects there. My favorites are below:
Attending the IMA show taught me the diversity of the field, and it also showed me the future of art: blending technology with art to accomplish an entirely new form of expression. It was really cool to see realizations of things I would usually only see in museums, and it helped me better understand how art will evolve in the future.
I started with the concept of a virtual pet, so I knew users were going to interact with it based off of previous knowledge from past games: Pokemon, Nintendogs, Tamagotchi. In all three games, no matter what other fancy or unique features they had, they fundamentally kept the same two functions: a function to “pet” the creature and a function to feed it. And every time, it takes three stages for the pet to grow up: a baby phase, a teenage phase, and an adult phase. So I wanted to keep those two interactive designs, except unlike Nintendogs, Pokemon, and Tamagotchi, I didn’t want petting to be a mechanical mouse touch or button press; I wanted the user to physically pet the creature. That is why I incorporated the touch sensor as one of my main hardware functions. Secondly, to “feed” the creature, I used a regular button, because there is something satisfying about physically pressing a button to make food appear; I believe the physical feeling of an up-and-down motion is similar to that of putting a bowl down. I also wanted to keep this motion consistent with the other virtual pet games, since it is such a well-established function.
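Roughly, the Arduino side of this setup could look like the sketch below (the pin numbers and the comma-separated serial format here are placeholders, not necessarily my exact wiring):

// Illustrative Arduino sketch: read a touch sensor and a feed button,
// then send both values to Processing over serial.
// Pin numbers and the message format are assumptions for this sketch.
const int TOUCH_PIN = 2;   // digital touch sensor module on the pet's head
const int BUTTON_PIN = 3;  // pushbutton in the pet's mouth ("feed")

void setup() {
  Serial.begin(9600);
  pinMode(TOUCH_PIN, INPUT);
  pinMode(BUTTON_PIN, INPUT_PULLUP); // button wired between pin and ground
}

void loop() {
  int touched = digitalRead(TOUCH_PIN);           // 1 while being petted
  int feeding = (digitalRead(BUTTON_PIN) == LOW); // 1 while button is pressed
  Serial.print(touched);
  Serial.print(",");
  Serial.println(feeding);
  delay(20); // ~50 updates per second is plenty for Processing
}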
In terms of the physical design of the controller, I wanted to illustrate a disconnect between the original, “cute” form and the screen, which shows the baby growing more and more monstrous. I got this idea from Leon, who during user testing asked if maybe the physical component could be the pet, while what is shown on screen is what the pet is thinking or what it is really like. I really liked that idea, so I decided to implement a version of it in my controller design. Having to pet and feed a physical version of what the pet used to look like, while the pet on screen grows more and more monstrous, forces the user to confront that the “cute” pet they used to interact with is similar, if not identical, to the monster growing on screen. That is why I tried to make the controller look like the first stage of the pet.
In terms of material, I considered using fur, but decided against it because I felt it didn’t suit the sensation users should feel when interacting with the virtual pet. Fur made it too mammalian and animal-like; I wanted the creatures to emulate eldritch creatures and old Lovecraftian monsters, and I felt adding fur would take that aspect away from them. If I had more time I would definitely have experimented with slime, because I feel that would have been a really effective way to emulate at least some of the sensation of petting the monster. I put the touch sensor on top of the head and the button in its mouth, so the user understands that the button is meant to feed the monster and the touch sensor is meant to pet it. I printed a box so that it would emulate the feeling of a video game controller rather than a toy (I wanted the user to get attached to the pet on the screen, not the physical pet in real life) and so that it could stand up on its own.
In terms of design on the software side, I took a lot of inspiration from Lukas Vojir’s Processing Monsters project, which I still think is a really amazing way for beginners to familiarize themselves with Processing. For the first two “monster” stages, I directly used designs from Lukas Vojir’s examples; the first stage is example code he calls “monster3.” I then edited the code so that it would respond to touch sensor values, and so that it would respond to the button press and “feed” on what I chose, which for the first stage was a strawberry.
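To give a sense of what that edit involved, here is a minimal sketch of the Processing side (the message format and variable names are my shorthand, not Vojir’s actual code): it reads the “touched,feeding” values from the Arduino and stores them for the monster to react to.

// Illustrative Processing snippet: receive "touched,feeding" lines from
// the Arduino and keep the latest values for the monster code to use.
import processing.serial.*;

Serial port;
int touched = 0;
int feeding = 0;

void setup() {
  size(600, 600);
  port = new Serial(this, Serial.list()[0], 9600); // pick the right port index
  port.bufferUntil('\n');
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  int[] values = int(split(trim(line), ','));
  if (values.length == 2) {
    touched = values[0]; // 1 while the touch sensor is held
    feeding = values[1]; // 1 while the feed button is pressed
  }
}

void draw() {
  background(0);
  // e.g. the monster wiggles while touched == 1,
  // and a strawberry appears while feeding == 1
}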
I decided to keep it black and white because it keeps the design simple, and because it allows a lot more flexibility for the user to imagine the pet as something otherworldly. The black and white is also meant to emulate the black-and-white graphics of old virtual pet games such as Tamagotchi.
Then, contributing to Lukas Vojir’s open-source project, a student who goes by the name of Jan Vatomme designed what they called the Black Tentacle Monster, whose basic design I used for the second, “teenage” phase of the monster. I liked this design a lot because it is still not quite “monstrous,” but it is definitely a lot more alien than the initial phase. I really liked the use of eyes in the construction of monsters in Lovecraft stories, so I took this design and edited it so it could react to serial communication and eat when commanded to by the button.
Then, for the last stage, I wanted to create something wholly unique, because this is the stage where the pet fully turns into a monster. I actually had a lot of fun with this design: I kept the mouth from the second stage and maintained the eyes as the main feature of the monster, but this time I added bigger tentacles to make it seem more menacing. The eye was also code from Lukas Vojir’s Processing Monsters project, written by Luke Mekeen. The tentacles were example code from Esteban Hufstedler, which I really liked because the physics of the tentacles was really easy to manipulate, and I felt the wriggling effect made the monster a lot scarier.
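The tentacle code itself is Hufstedler’s, so the sketch below is only a toy stand-in for the kind of wriggle I mean: each segment of a curve sways on a sine wave offset by its position, so the tip swings wider than the base.

// Toy sketch (not Hufstedler's code): a single wriggling tentacle
// drawn as a curve whose segments sway on offset sine waves.
void setup() {
  size(400, 400);
  stroke(255);
  noFill();
}

void draw() {
  background(0);
  strokeWeight(6);
  beginShape();
  for (int i = 0; i < 20; i++) {
    float y = height - i * 18; // build the tentacle upward from the bottom
    float sway = sin(frameCount * 0.08 + i * 0.5) * i * 2.0; // tips sway more
    curveVertex(width / 2 + sway, y);
  }
  endShape();
}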
My final monster looked like this:
In terms of what the monster “ate,” I tried to follow a design similar to how it evolved: in the beginning the monster ate normal foods, like a strawberry, and it would blush in happiness whenever it ate, a humanly recognizable sign of gratitude. When it evolved into the teenage phase, it would eat a snake, a living thing. Then, at the final stage, the monster ate human hearts. Neither the teenage phase nor the final stage shows humanly recognizable signs of gratitude: the teenage monster shakes its eyes more frantically, and the final-stage monster’s teeth clamp down faster.
Fabrication and Production:
In terms of production, the biggest chunk of time was spent on making sure the code was right and that the physics of the monsters’ movements worked with the serial communication in Processing. A lot of the hardship came from having to familiarize myself with Processing’s functions; the biggest Processing feature I used by far was classes, which makes me really glad I took the OOP workshop during recitation. I had a lot of trouble understanding what went into the construction of a class and how to implement one in the main code, and I got a lot of help on that front, mainly from Tristan and Rudi. I really liked using classes because they made a lot of the code simpler: I could call features when I needed them and remove them when they went away. In practice, that meant the main stages were separate classes and separate sketches before I combined them all. Classes also let me use a timer, which allowed me to set how long each monster stays on screen before it “evolves” into something different. I did not have much in terms of physical sketches, because to me the physical interaction and controls were there to supplement the main part of the project, the visuals in Processing. A lot of the design planning happened on the computer: sorting through different code and looking at different monster designs on the internet to see which ones I liked best and wanted to emulate.
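As a rough sketch of that timer logic (the names and durations here are illustrative, not my exact code), the idea was to compare millis() against each stage’s deadline and switch which stage gets drawn:

// Sketch of the evolution timer: each stage is displayed until its time
// runs out, then the next stage takes over. Names and numbers are made up.
int stage = 0;
int stageStart;
int[] stageLength = {30000, 30000, 0}; // ms per stage; the final stage lasts forever

void setup() {
  size(600, 600);
  stageStart = millis();
}

void draw() {
  background(0);
  if (stage < 2 && millis() - stageStart > stageLength[stage]) {
    stage++;               // "evolve" into the next phase
    stageStart = millis();
  }
  if (stage == 0) drawBaby();
  else if (stage == 1) drawTeen();
  else drawAdult();
}

void drawBaby()  { /* stage 1 monster */ }
void drawTeen()  { /* stage 2 monster */ }
void drawAdult() { /* stage 3 monster */ }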
I spent most of my time designing the first two stages of the monster. Even though the code for the first two stages was a lot simpler than the third, those stages were where my lack of understanding of classes and class construction was most obvious, so I used the first three days to familiarize myself with that.
I spent the last two days on the physical component: for the body I 3D printed a ball and painted eyes similar to the Stage 1 creature’s eyes onto it, and I laser cut a small box that could keep the ball standing and could also hide the Arduino inside it. I didn’t want to make it too big, because I wanted the entire controller to stay portable.
When I first showed up to user testing, I did not have a controller, and I only had the first two stages of the monster finished. The monsters still existed as two completely separate sketches, so the monster could not “evolve” yet. The main criticism was that the physical controller needed more direction so the user knows what to do, and Leon gave me the idea to not make the physical component the whole pet (or else the user would get confused) but rather a part of it, which is where I got the idea to make the controller a physical version of the Stage 1 pet. In terms of the virtual pet, people wanted more animation, especially when they interact with the pet. Before, the feeding was very mechanical, and the pet did not react much when you fed it. So after user testing, I made it so the pet keeps reacting in some way even after you finish feeding it and release the button. The baby blushes, the teenage monster moves its teeth and eyes, and the adult’s eyes move a lot faster.
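The reaction boils down to a short-lived timer: feeding records a timestamp, and the monster keeps reacting until some time has passed. A minimal stand-in for the idea (with made-up numbers, and a mouse press in place of the Arduino button):

// Sketch of a post-feed reaction: the blush lingers for a moment
// after the button press instead of vanishing instantly.
int lastFed = -10000;    // time of the last feeding, in ms
int reactionTime = 1500; // how long the reaction lasts (made up)

void setup() {
  size(400, 400);
}

void draw() {
  background(0);
  fill(255);
  ellipse(width/2, height/2, 200, 200); // stand-in for the monster's face
  if (millis() - lastFed < reactionTime) {
    fill(255, 150, 150); // blush cheeks while the reaction is active
    ellipse(width/2 - 60, height/2 + 30, 30, 20);
    ellipse(width/2 + 60, height/2 + 30, 30, 20);
  }
}

void mousePressed() { // stands in for the Arduino feed button
  lastFed = millis();
}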
In terms of failures, there was so much I wanted to add that I didn’t have time to. I wanted to make it more obvious that the user is progressing toward evolution, so I wanted to add more counters or indications that the monster wants to be petted or fed, or does not want to be. I wanted to make the monster more customizable, which was a big piece of feedback I got in user testing, but I didn’t have time, as simply designing the three stages I had took up a lot of time. I also wanted to experiment with sound, another great suggestion from user testing, because I feel it would have made the experience a lot more immersive and added to the scariness of each stage.
Ultimately, though, I think I did achieve my end goal, which was to make a user attach themselves to a pet that grows more monstrous instead of “cuter.” Based on the “attachment” people still felt toward the end when they interacted with the pet, I feel I achieved the goal of making people question what in society we find monstrous, and whether we are able to relate to something we find alien or scary.
In conclusion, the goal of my project was to question what human beings can feel empathy toward when interacting with virtual pets. I believe that fear, and how we imagine horror, informs so many constructions of greater societal anxieties that which features we find “horrifying” and which we find “cute” and “relatable” is worth questioning. The goal aligns with my definition of interaction because I stated I want the interaction to communicate a form of expression, and it should be a back-and-forth communication between the user and the machine. I feel I achieved both my goal and my definition of interaction with this project, because it articulated a form of expression in reacting to the user’s input and also gave users room to question their relationship with technology and virtual pets. My audience reacted how I imagined they would: they were wary of the final stage at first, but once they interacted with the monster they felt a lot more attached to it, which shows human beings’ potential for empathy.
In terms of improvements, I would definitely try to make the experience a lot more customizable. If you were given a different monster every time you played, or a different monster for each user who opened the game, it would really add to the personalization of the experience and allow for a lot more creativity, an aspect that is really important to a game that relies so heavily on something as unique-looking as monsters. In terms of failures, I have learned to challenge myself in achieving my project goals while keeping those goals realistic to the skills and resources available, especially when I was struggling to design an entirely unique monster in Processing. In terms of accomplishments, I am glad I did this project because it really helped me familiarize myself not only with Processing, but also let me test the boundaries of my creativity in terms of design. I wanted the monsters to look unique but still align with my project goals, which was a lot harder than it initially looked, so I am really proud of how the final stage looks.
In the end, I am such a big horror fan because I think horror is creative and neurotic in a way other genres aren’t. I wanted to create this project because I wanted people to challenge the horror conventions they see day to day: why is Cthulhu scary to us? Why is Dracula? What anxieties does each monster represent, and why are their features alien to us when a dog’s or a cat’s are not? These are really worthy questions at a time when we are constantly defined by how we look and how we appear, and the features we cannot help are often twisted and demonized to create anxiety around, or otherize, a certain group of people. That is why I feel this project was a really worthy undertaking: I want people to question the internalized frameworks of empathy they have built over the years.
I attended the OOP workshop taught by Tristan. Tristan taught us how to construct classes so that the code for our final projects could be more organized and easier to manipulate. He taught us how to change and remove words stored in an ArrayList on screen. I am really glad I went to this workshop because my final project required a lot of class construction. Here is the code, and a video below:
ArrayList<Terms> soup = new ArrayList<Terms>();

void setup() { // (setup() filled in so the sketch runs on its own)
  size(600, 400);
  soup.add(new Terms("miso", 100, 100, 32));
  soup.add(new Terms("ramen", 250, 200, 48));
  soup.add(new Terms("udon", 400, 300, 24));
}

void draw() {
  background(0);
  for (int i = 0; i < soup.size(); i++) { // you can now make as many as you want
    soup.get(i).display();
  }
}

void mousePressed() {
  if (soup.size() > 0) {
    soup.remove(0);                // how to remove soup in order (from the front)
    // soup.remove(soup.size()-1); // or: how to remove soup in reverse order
  }
}

class Terms { // class names start with a capital letter so you can distinguish class vs. object
  String words;
  int opacity;
  float x, y;
  float size;

  // The constructor is written like a function with no return type; the parameter
  // names are spelled differently from the fields above to establish that this is
  // what YOU are bringing in.
  Terms(String _words, float _x, float _y, float _size) {
    words = _words;
    opacity = 127;
    x = _x;
    y = _y;
    size = _size;
  }

  void display() { // (display() filled in so the sketch runs on its own)
    fill(255, opacity);
    textSize(size);
    text(words, x, y);
  }
}
I really didn’t know what to expect going into the workshop; all I knew was that I liked some games I’ve played, had heard that some of them were made in Unity, and that I was really bad at coding. Unity was just as hard as I expected it to be, since there are so many computing layers that go into gaming: you need to take into account the environment, but you also need to take into account the user’s completely free-willed interaction within the environment you create.
We used Sam’s (the person in charge of the workshop) starter pack of ready-made characters. I fell a bit behind in the workshop, but I was really happy I was able to make the character move to the right and fall down. That was more than the level of successful interaction I was expecting, and the workshop gave me tremendous insight into the insane amount of work game designers put into the products they create. Many of the indie games I have loved were made in Unity, such as “Ori and the Blind Forest,” “Gorogoa,” “Florence,” “Oxenfree,” and “Inside,” all very inventive and beautiful platformers, puzzle games, and interactive adventures, and to me testaments to the genres they fit into.
The Unity workshop in general reflected my very brief time in IMA: a really cool subject I had little to no understanding of, but one that actively affected my life in profound ways. I am not a “gamer,” nor do I understand game design in much depth by any means, but many games I have played have deeply affected my life and the way I see the world. I hope to achieve that in some form: it doesn’t have to be through Unity or other game engines, but I think games illustrate the power interaction has on users as a whole.
The workshop made me want to take more initiative in interactive media arts and learn how to make very simple games on my own through Unity, since it is just an engine you can download for free off the internet. I am happy to come away with at least a minuscule understanding of the work it takes to make a successful game; it seems a little less alien to me now that I’ve taken the workshop.
For this recitation, we implemented the media controllers we learned about in the week’s lecture and applied Arduino hardware functions to them. I decided to manipulate an image, because an image was easiest on my computer; past attempts to incorporate video with Processing have led to my computer crashing. I chose to pixel-fade an image using the potentiometer.
The image I chose is
I had a little trouble making the potentiometer work because I misunderstood how values pass between Processing and Arduino. Once I got the correct example code for connecting Arduino to Processing (instead of the other way around), making the penguin fade was simple enough; the video is attached below.
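The core of the fade is just mapping the potentiometer reading to the alpha channel of tint(); a stripped-down version of the idea looks like this (the file name and serial port index are placeholders):

// Stripped-down version of the recitation sketch: a potentiometer value
// arriving over serial controls the image's transparency via tint().
import processing.serial.*;

PImage penguin;
Serial port;
int potValue = 0; // 0-1023 from the Arduino's analogRead()

void setup() {
  size(600, 600);
  penguin = loadImage("penguin.jpg"); // placeholder file name
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line != null) potValue = int(trim(line));
}

void draw() {
  background(0);
  float alpha = map(potValue, 0, 1023, 0, 255); // knob position -> opacity
  tint(255, alpha);
  image(penguin, 0, 0, width, height);
}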
The reading “Computer Vision for Artists and Designers” looks at how algorithms can help computers make “intelligent assertions” about digital images and video. While the project I did in recitation reflects a very basic version of this, translating the potentiometer values into the Processing code helped me understand how the computer can respond intelligently to input from the physical world. Going forward, I hope to manipulate the image so that it changes into different images once it fades out, since that relates a lot to my final project.