Recitation 06 – Processing Exercise – Alison Aspen Frank

For this recitation, I chose to imitate Josef Albers’ Interlocked in Processing. I chose this piece because, though it looks simple, the design holds some real complexities as well.

Josef Albers’ Interlocked (the original — Josef Albers, Interlocked, 1927, picture from the Guggenheim, link to picture)

In order to plan how I was going to code this, I started by mapping out the sketch with tentative pixel values. Inside this map, I drew out the four larger rectangles within the piece so that I could center the other elements around them. After I coded these rectangles in Processing, I began to look at some of the repeated shapes.

Mapping out the main rectangles, after re-configuring the canvas size:

progress pic

At first, I attempted to hard-code all of the smaller rectangles, but I noticed that I could just use a for loop to increment them vertically. However, after doing this, I noticed that the width and height values I had set for the larger rectangles were not evenly divisible, so to make things simpler, I gave everything dimensions divisible by 25. This helped with positioning, but it meant I had to rethink all of the loops I had already created. Once this was configured, finishing the rest of the sketch was rather easy; a simplified version of the loop idea is sketched below.
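To give a sense of the loop idea, here is a minimal sketch. I worked in Processing’s Java mode, but the same logic reads almost identically in p5.js, which is what I’ve used below; all of the coordinate values are placeholders, not the actual numbers from my sketch.

```javascript
// Minimal p5.js sketch of the repeated-rectangle idea (placeholder values).
function setup() {
  createCanvas(500, 500);
  background(235);
  noStroke();
  fill(40);
  // Every dimension is a multiple of 25, so each row of small
  // rectangles lines up with the larger ones.
  for (let i = 0; i < 6; i++) {
    rect(100, 100 + i * 25, 150, 25); // increment the y position per row
  }
}
```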

Here’s how it looked halfway through:

Progress of processing recreation

One of the main challenges I faced was that I did not realize at first how mathematical the original painting was. Once I worked out the underlying grid, the code became much simpler, as many elements could be put in loops and incremented.

Overall, I achieved my goal of recreating this picture. Once I realized that many of the elements in the original piece are positioned relative to one another, it became straightforward to code them with loops that increment the y position. Aside from the very graphic feel of my sketch, it looks very similar to the original.

Finished:

finished recreation

Thoughts:

Through this exercise, I found that math can often be found in phenomena in both nature and art. Since computer code cannot currently understand fully organic graphics, it relies on mathematical representations of them. Therefore, a mathematical way of thinking becomes helpful when creating art through code or designing any sort of computer graphics. Though I am not very mathematically inclined, I enjoyed the challenge of recreating this piece of art through code.

Recitation 04 – Drawing Machine – Alison Frank

For this recitation, I never got to complete the final exercise, as I broke my Arduino. While many things could have gone wrong (the wiring around the H-bridge, the power supply, etc.), I suspect the main cause was that I accidentally left the 12V power supply plugged in while I went to upload a sketch to the Arduino. Combined with some likely miswiring, these two factors are probably why my Arduino ended up malfunctioning.

For the sake of documentation, I will include the progress I made before I broke the equipment, along with some insight into what could have gone wrong. The first part of the circuit we needed to build was rather complicated due to the addition of the H-bridge. On top of this, I generally tend to get confused when reading circuit diagrams, which did not help. However, I made sure the H-bridge was in the right position and checked carefully that my wires were connected to the correct pins of the bridge. While setting up the ground, I think I accidentally created two separate ground rails and forgot to connect them once I integrated the potentiometer, which is another likely reason things went wrong. In terms of the code, I used the example sketches, so that aspect should not have been an issue. I also used the map function after connecting the potentiometer, so that the potentiometer’s readings could be mapped onto the stepper motor’s range of steps.

After this recitation, I felt a bit hesitant to work with larger motors, but after using one in my midterm project, I found that you just need to be careful — especially when working with higher-voltage power supplies. Aside from this, motors are a great tool for setting art in motion. Combined with aspects of sculpture, motors could add another dimension to the work being created. Therefore, I feel there are many possibilities for what we could create with motors.

Question 1:

In my previous experience, the things I most enjoy creating exist only on a virtual level, but through this class I am looking to expand my experience and explore the relation between the virtual, the physical, and the mechanical. Eventually, I would be interested in creating a sort of mixed reality, one which relies heavily on virtual elements along with physical controllers and inputs. In my work, I like to play with “uncanny” elements, especially in 3D design. I feel that the in-between state of things in the so-called “uncanny valley” becomes a great tool when looking to create something which reflects on the human experience. For the physical piece of my mixed reality, I would focus heavily on the user input. I would purposely make it feel awkward, creating a challenge as users navigate the mixed reality, much like the project Gravitron by Time’s Up (mentioned in Wilson’s Art + Science NOW, p. 126). With inputs, I like to explore how the human body is used to navigate a virtual space. While taking a class led by visiting artist and professor Sheldon Brown, I worked closely with VR design and became interested in how users reacted and navigated with respect to a virtual space. Therefore, I would be interested in creating a user input system which exploits these factors and hence gives a more “natural” feeling while navigating a mixed reality.

Question 2:

Firstly, I found this reading to be incredibly insightful, especially in its discussion of robots. On page 110, there is a section that reads: “Robots are potent cultural objects, representing both hopes and fears about the limit of what machines can become…Artists, on the other hand, often create sophisticated robots intended to inspire reflection on the human condition…” The idea that robots can now be considered “cultural objects” is something I find deeply curious. I agree that our world and the work humans do are becoming less focused on the physical, and I also believe that robots will come to replace many of the physical tasks which once played a large role in human work.

In terms of art installations, I would like to discuss two: one from the reading, and one created by Sheldon Brown, a professor whose class I attended last semester. From the reading, I was particularly interested in Firebirds, created by Paul DeMarinis. This work was particularly inspiring to me due to the uncanny feeling one gets while watching a flame produce a perfect imitation of a human voice. Alongside this, the setup of the display works perfectly to create a feeling of eeriness. Within this work there is also a juxtaposition between the human and the mechanical, wherein mechanics are made to reflect the natural phenomenon of voice. Though this work is not very interactive, it perfectly reflects the overlapping boundaries between human and machine.

Here’s a great video with Firebirds in action (link).

The second work I would like to bring up is Sheldon Brown’s Scalable City. Scalable City is an interactive installation where users interact with a virtual world via a scrolling-ball input moved by the user’s hand. The world itself is based on data visualization and utilizes photogrammetry, a 3D modelling technique which uses photo scans of objects to replicate them in the virtual world. Within Brown’s Scalable City, when you move using the ball, you cut a destructive path through the world, but the more you move, the more you create a visually appealing design throughout the seemingly perfect city. This project isn’t as mechanical as a robotic system may be, but it instigates a certain kind of response in the actor who interacts with it, and for that I feel it is worth mentioning. The interaction makes the user rely heavily on physical input, and it bridges the gap between the virtual and the physical.

Scalable City link

Mystical Aunt Margot (IXLab Midterm) – Alison Frank – Cossovich

For my group project, I researched a variety of artists and projects. I feel that these may have only slightly influenced my midterm project, though the majority of the things I create are shaped by my artistic research. For the midterm, my partner and I ended up creating something which exhibits a very simple interaction. Part of this was because we knew the building and coding process would be tedious, but it was also because it is easy to get carried away with many layers of interaction. In my experience of coding and creating, I have found that giving an interaction too many layers can easily confuse users.

Looking at the artists I researched, Yayoi Kusama and Daniel Rozin, I notice that their works have different layers and levels of interaction. Kusama’s work is very engaging to those who view it: her repetitive visuals and motifs are eye-catching, and her infinity-mirrored rooms often elicit a sense of interaction. Recently, I went to her exhibit at the Fosun Foundation in Shanghai and found a new level of interaction with her work: social media photography. While her work stands alone, it has been manipulated and exploited by many on social media. Due to the setup of the exhibit, you were forced to walk through areas laden with her sculptures; in one room, towering, polka-dotted, tree-like objects created a maze-like walkway you had to pass through. In this exhibit, you were also shut into the infinity-mirrored room with five other people. Interestingly enough, the people in the room became part of the artwork: they managed to interact with a room consisting of a few lights and four mirrored walls by seeing how their image changed based on their position in the room, and how the room might appear on camera.

While these findings did not directly influence my project, I certainly see how they have influenced my understanding of interaction. Before, by my definition, for something to be considered interactive it had to give a response to another actor. I have now come to notice that sometimes this response isn’t so easily seen, as was the case with my observations at the exhibit. However, I still agree with Chris Crawford’s statement in The Art of Interactive Design that there is a spectrum of interactivity (p. 6). I also understand that an item can have a low degree of interactivity and still be captivating to those who interact with it, much like his example of the refrigerator door.

The interaction in our project isn’t entirely unique or different from something which has already been done, but it is rather simple and entertaining. The project I produced is similar to a fortune teller booth or a Magic 8 Ball, both fairly well known in western culture. In those cases, the interaction between a user and the item is that the user asks a question and the fortune teller or Magic 8 Ball returns a random answer. However, unlike traditional fortune-telling objects, our project returns an answer which the user would definitely not want. All the possible answers we put into our design were cynical or passive-aggressive, and would hence provoke varying reactions from those who interacted with our project. I feel that our project adds an aspect of humor to these earlier fortune-telling mechanisms: though the answers given could be seen as “negative,” they were also found to be humorous. Along with this, I would go so far as to consider our project a satire of the aforementioned fortune-telling objects. Therefore, it could be said that our intended audience is those who would find humor in this development and who do not take things too seriously.

Our original concept was to build a machine with a very cute aesthetic, made to appear harmless.

Some sketches from the original prototype:

Original sketch of the cute concept

Along with this, the original interaction mechanism was that users would push a button (I know, real creative), as we found this to be the most effective way to go about our project. In terms of design, our concept piece was more spherical, with minimal square edges. Our idea for the user interaction did not change much, as we went with our original plan; however, we did change the design to make the answers more easily readable. The main material we used for this project was cardboard. Whereas our concept might have utilized more plastic pieces, cardboard was more accessible to us and easier to shape to fit our needs. While selecting materials, we were mainly influenced by availability and by how easily a material could be manipulated. For example, hard plastic is very difficult to work with and would add a lot of weight to our project, while cardboard is easy to access, easy to manipulate, and light. Many of the decorative elements we used were things already on hand: the mannequin head was found on the 8th floor, and the “curtain” was made from an old t-shirt. Though our end design strayed from our concept, I feel it was heavily shaped by the availability of materials. In the end, our project’s aesthetic was more mystical, but it still worked with our original concept.

Updated sketch after laser-cutting:

updated sketch of project

In the production process, we started out by sketching how all the components would fit together. The project included many electronic components: a stepper motor, an Arduino, a breadboard, a button, and a ton of jumper wires. Therefore, we knew we had to make space for the wires and components. The stepper motor was the component that gave us the most difficulty. The answers were displayed on a platform atop the motor, meaning we needed to find a way to attach something to the motor without it being too heavy for the motor to move. Along with this, the motor vibrates when it turns, so we needed something to hold it in place, and because the project had to be transported, nothing could be too heavy. After considering these caveats, we re-sketched our concept, settling on two boxes of the same width but with different lengths and depths. For fabrication, we laser-cut the box, as we needed custom holes for the button and the viewing window.

Here’s the stepper motor mount along with the wiring in the back:

Bottom box and wiring; laser-cutting file for the Margot box

After cutting the cardboard with the laser, we decided not to use any of the harder material, as our prototype was holding up well. We had also originally wanted to laser-cut two boxes, but decided to reuse one of our original boxes to make things easier. For the motor, I put together a small mount made out of pieces of cardboard cut to the dimensions of the motor. Once the mount was attached to the box, the motor stayed firmly in place.

For the sayings placed on the plate atop the stepper motor, we originally made cards and hand-wrote the sayings. However, during user testing we found that this disrupted our aesthetic and made things look less put together, and the cards were at first too heavy for the stepper motor. Therefore, we printed our text components onto paper instead of hand-writing them. In user testing, we also found that our sayings were too hard to see inside the box, so we cut the viewing hole larger to make them easier to read (see the video below for the improvements). Aside from this, we added a set of instructions, as some users did not know what type of questions would be best to ask.

Sketch of this addition:

Post-user-testing prototype sketch

In terms of decoration, we used playing cards, a mannequin head, and paint. The playing cards were also used to hide the button, which gave a different feel to the interaction. I feel that the mannequin head was the most influential item in our design: it elevated our project from two rectangular boxes to something actually resembling a fortune teller’s booth. Our project now had a semi-human resemblance, which I feel added another layer to the interaction, as it made it feel as if this uncanny mannequin were judging your life’s worries (the picture below is from before we added decoration; the videos are from after).

Pre-user-testing prototype

Before we changed the spinner text and viewing window:

After:

The original goal of this project was to give an unexpected response from an unassuming item based on a user’s interaction. In short, the project did meet this goal, but not in the way I thought it would. Based on my previous definition of interaction, I would consider this project interactive, but not to a high degree. One way in which it does not align with my definition is that the response it gives is not tailored to the input: based on a button push, the machine spins to a random position and gives an answer to the actor’s question, regardless of what was asked. I feel this is one of the possibilities for improvement in further development of this project. The question-asking could become a richer interaction, whether by creating an area for users to enter a question on a webpage and processing that input, or by having users write their question onto something and insert it into the machine. The interaction we observed during user testing was also intriguing, as a few people did not know how to interact with the project: either they did not know where the button was, or they did not know which types of questions to ask. While our design changed slightly after this, I feel there are still many possibilities for different interactions within the scope of this project. There is also room for improvement in the project’s aesthetics. At times, the setup could look a little ramshackle, though in the end it worked out. I would like to see this project become more aesthetically developed, possibly with a more streamlined look.

Aside from the design aspects, the code for this project was particularly difficult, and there were a lot of issues with the stepper motor. I think the things I learned while putting this project together will certainly be of use throughout my time studying IMA. In terms of takeaways, this was the first project I have created which utilizes physical computing, and the way users interacted with it is very different from my past projects. My past work falls into two categories: interactive webpages and 3D design. In Commlab, many of the interactive aspects of my work were coded purely in JavaScript, which means the creative process was very different from the one I used in Interaction Lab. My work with 3D design and Unity was more focused on aesthetics and how aesthetics influence user experience; its creative process was similar to my Commlab work, as the only physical piece was the controls (either a computer keyboard or a VR headset). Therefore, I believe it was useful to experiment with this new way of creating and to combine tools I am comfortable with (programming) with tools I am less comfortable with (Arduino, stepper motors, etc.).

Week 06 – iML Midterm – Alison Frank

Purpose/Inspiration:

My main inspiration for this project came from the game Semantris, which was developed by Google’s research team. The game utilizes a TensorFlow Word2Vec model to let the AI match user input. The goal of Semantris is to help AI understand the semantics of human language; namely, how certain words in our dialects are related to each other. For example, why might we associate “moon” with “space,” but also associate “space” with “room”?

Therefore, for this midterm project, I wanted to create something which was mildly similar to Semantris, but also had another layer. Though my app is not as developed as Semantris, I made a lot of progress along the way.

If you would like to play Semantris — here’s the link. 

My Process:

In order to create this app, I chose to use two pre-trained ml5 models: imageClassifier() and word2vec(). For the layout, I chose to display an image and then ask users to guess what they saw. Simultaneously, the image would be classified via imageClassifier(), which would then lead to a function that checks for matches with the user’s input. Meanwhile, the user’s input would also run through word2vec to create an array of words closely related to the user’s guess. Once the user’s guess correctly matched the results of the image classifier, the end results would be displayed at the bottom of the page, so that the user could gain a better sense of what the AI’s process was.
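As a rough sketch of this flow (with placeholder names, a placeholder word-vector file, and the (error, results) callback style ml5 used at the time), the two models were loaded and used roughly like this:

```javascript
// Load both pre-trained ml5 models, then classify the displayed image.
let classifier, word2vec;
let labels = []; // classifier labels to match guesses against

function preload() {
  classifier = ml5.imageClassifier('MobileNet');
  word2vec = ml5.word2vec('data/wordvecs10000.json'); // placeholder data file
}

function classifyImage(img) {
  classifier.classify(img, (err, results) => {
    if (err) return console.error(err);
    labels = results.map(r => r.label); // labels like "tabby, tabby cat"
  });
}
```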

To begin building this web app, I started by figuring out how to implement both of the ml5 models. The classifier model ran perfectly, but word2vec gave many errors, mainly due to the data files that must be used with the model, or because a word input was not in the dataset. Once I resolved this, I chose to configure user input within JavaScript and p5. Originally, I tried using HTML input, but found that the inputs were not easily integrated with my JS program. I then tried working with JavaScript’s prompt() method, but scrapped that as well, since it opens a new window and creates a sense of disconnect. After some research, I found a p5 input method which was easy to use and made the results easy to access (link included below).
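The input setup ended up looking roughly like the following; the element names are mine, and handleGuess() stands in for the functions that process a guess:

```javascript
// A p5 text field plus a button whose callback reads the field's value.
let guessInput;

function setup() {
  guessInput = createInput(''); // p5 text field
  const submit = createButton('Guess');
  submit.mousePressed(() => {
    const guess = guessInput.value().toLowerCase().trim();
    handleGuess(guess);   // placeholder for the matching/word2vec steps
    guessInput.value(''); // clear the field for the next guess
  });
}
```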

Once the user input system was configured, I moved on to finding a way to match the input against the resulting array from the image classifier. With my first test, I kept running into an issue inside the for loop: the match flag was overwritten on every iteration, so a match would only be marked “correct” if it occurred at the last index of the array. Therefore, after receiving help from Mostafa, I added a ‘break’ statement within the loop so that it stops as soon as a match is found. After this statement was added, the matches could be found; a sketch of the fixed loop is below.
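Here is what the fixed loop looks like in miniature; labels stands for the classifier’s label array and guess for the user’s input:

```javascript
// Without the break, `correct` was overwritten on later passes, so only
// a match at the final index survived the loop.
function checkMatch(guess, labels) {
  let correct = false;
  for (let i = 0; i < labels.length; i++) {
    if (labels[i].includes(guess)) { // labels look like "tabby, tabby cat"
      correct = true;
      break; // stop as soon as a match is found
    }
  }
  return correct;
}
```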

After this, I went on to figure out how to connect the user input with the word2vec model. I thought of multiple ways to implement this. My first idea was to make two separate game modes based on the two models, but I felt this did not align with the original idea and purpose I had in mind. My second idea was to create two input sections: one for entering an image classification, and another for users to enter words they associate with the image. While I think this is an interesting idea, I felt it would disrupt the flow of the program. Lastly, I decided to settle on one input field and to process that input through both of the models. I feel this is the most streamlined implementation of the models and of the user input.
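Sketched out, the single input field feeds both models something like this (checkMatch() is the loop above; related and showResults() are placeholders for how the word2vec results get stored and displayed):

```javascript
// Route one guess through both models: match it against the classifier
// labels and ask word2vec for its nearest neighbors.
function handleGuess(guess) {
  const correct = checkMatch(guess, labels);
  word2vec.nearest(guess, (err, neighbors) => {
    if (err) return console.error(err);
    related = neighbors; // e.g. [{ word: 'lunar', distance: ... }]
    showResults(guess, correct, related); // placeholder display function
  });
}
```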

After this was configured, I moved on to the design of the page. For this, I used p5.dom to create div elements to house all the elements on my page, so that everything could be easily styled through a separate CSS stylesheet. Lastly, I created various callbacks within my code which trigger certain results depending on whether an element was clicked or whether another function had executed.
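As a rough idea of the layout code (the class names and image path are placeholders, not the real ones from my stylesheet):

```javascript
// Wrap each section of the page in a div so a separate CSS file can
// style it by class.
function buildLayout() {
  const imageBox = createDiv().class('image-box');
  const inputBox = createDiv().class('input-box');
  const resultBox = createDiv().class('result-box');
  imageBox.child(createImg('images/first.jpg', 'current image')); // placeholder path
  inputBox.child(guessInput);
  resultBox.hide(); // revealed by a callback once a guess matches
}
```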

End Results:

Currently, I feel that my resulting project is fairly close to the idea I had in mind. If I were to develop it further, I would put more emphasis on the semantics of human language, using the distance values from word2vec to make the game more challenging or to display something else. I would also like to create a scoring mechanism tied to the word2vec model, but I ran out of time to implement this. Another thing I couldn’t figure out was how to select a random image every time the page loaded. I had stored the image names in an array and used Math.floor(Math.random() * array.length) to get a random index, but when I tried to pass the result into the ml5 classifier, the image would not be displayed on the webpage and the console would return a “bad image data” error, even though the classifier would still run. Due to time, I chose not to mess with this further, though I sketch one possible approach below.
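For reference, here is one way the random-image idea could work. Since the classifier still ran, my guess is that the image simply hadn’t finished loading when I used it, so this sketch only draws and classifies inside loadImage()’s success callback (the file names are placeholders, and classifyImage() is the helper sketched earlier):

```javascript
// Pick a random image name, then classify it only once it has loaded;
// classifying an image that hasn't loaded yet is a common cause of the
// "bad image data" error.
const imageNames = ['cat.jpg', 'plane.jpg', 'piano.jpg'];
const pick = imageNames[Math.floor(Math.random() * imageNames.length)];

loadImage('images/' + pick, (img) => {
  image(img, 0, 0);   // draw it so the user can actually see it
  classifyImage(img); // safe to classify once loading has finished
});
```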

Overall, I learned a lot while creating this webpage, and even though it appears very simple, it was rather complex to work with and I ran into error after error along the way. That being said, I am now ready to learn more and feel more prepared to work with larger projects. 

In the future, I would like to train my own word2vec model to better understand how it works, and perhaps to see how biases arise within these models. Through my own research, I have also found that word2vec models are now being utilized in language translation, which I find equally interesting.

Process Pictures:

Early prototype

early design of project

Guess Error:

Final Stage:

Resources Used:

p5 IO Methods

p5 DOM Reference

ml5 Image Classifier

ml5 Word2Vec

Week 05 – Midterm Concept Presentation – Alison Frank

Link to Presentation 

For this midterm project, I am looking to implement a relation between image classification and word vectors. To do this, I plan to use ml5.js to create an interactive web page/app which utilizes both ml5’s image classification model and its word2vec model. In terms of input, I was thinking that I could either choose an array of images or use webcam input. However, finding a suitable array of images will be quite difficult, while the results given by a webcam will also be limited. I considered using a dataset such as CIFAR-10, but I do not know how I would combine the Python code with the HTML/JS I will be using for the rest of the project. Therefore, while I am creating this application, I will be using webcam input.
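As a starting point, the webcam input would look something like this minimal sketch, assuming ml5’s MobileNet classifier and its (error, results) callback style:

```javascript
// Classify live webcam frames with ml5's MobileNet image classifier.
let video, classifier;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO); // p5 webcam capture
  video.hide();                 // draw it onto the canvas instead
  classifier = ml5.imageClassifier('MobileNet', video, () => {
    classifier.classify(video, gotResult); // start once the model loads
  });
}

function draw() {
  image(video, 0, 0);
}

function gotResult(error, results) {
  if (error) return console.error(error);
  console.log(results[0].label, results[0].confidence);
  classifier.classify(video, gotResult); // keep classifying new frames
}
```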

My main inspiration for this project comes from exploring how artificial intelligence deals with data that might have subjective qualities. I am also intrigued by the idea of word vectors and interested to see how they relate to the classification of images. I was additionally inspired by some of the projects done by Google’s research team, namely Semantris and Google Drawing. When playing Semantris, I noticed that some of the answers I gave did not match the AI’s way of thinking, and I am looking to further understand this phenomenon through a project of my own. Google Drawing is an app which asks a user to draw an image they associate with a given word; the AI then judges how well the drawing matches the word. If it doesn’t match (according to the AI), it shows the results which would have been a better match, overlaying the given input with the output. I find this an interesting way of conveying the relationship, and it is also interesting to see how a drawing made by a human is understood by a machine. While drawing input would be quite complicated to work with, I feel I could do something similar within my project.

Currently, I have been able to configure ml5’s word2vec model and its image classifier model separately, but I am still working on combining them. Along with this, I have not coded in JS in over a year, so I am still trying to work out the best format for my desired application and looking for ways to incorporate user input within my project.