Ananke: Final Documentation

Title: Ananke

Description: Ananke, named after the Greek personification of inevitability, compulsion, and necessity, deals with the double-edged nature of technology in a society where the lines between technology, surveillance, and data privacy are increasingly blurred. The piece centers on the interactions between a dancer and a mysterious dodecahedron hanging from the stage ceiling, which triggers sounds and visuals throughout the performance. The performance starts serene and playful as the dancer gets used to the seemingly innocent dodecahedron (which represents technology and social media). However, as she invests more time and focus in it, a hidden figure emerges, suggesting that what initially seems harmless might not actually be so.

Hardware:

Components:

  • Laser-cut dodecahedron
  • PS3Eye Camera 
  • IR LEDs 
  • Teensy LC 
  • 2 Lipo batteries 
  • 2 PowerBoost 1000 chargers 
  • Neopixel Ring
  • Bluefruit LE

The basic foundation of the performance was the lit-up component hung from the ceiling. Since it would contain the IR LEDs, the Neopixel ring, the Teensy LC, and the lipo batteries, and also had to be stable enough for our dancer to interact with, it was a main part of our process. We originally wanted to use a sphere, but decided to laser cut a dodecahedron instead, since its large number of faces approximated a sphere and visually fit our concept as well. Creating the enclosure ourselves instead of buying it online was the best choice, as it gave us a lot of flexibility around our specific requirements.

Attaching the dodecahedron together with acrylic glue was a particularly arduous part of the process, since we had to ensure that the angle at which each face was placed was properly set; otherwise, the whole component would not have been able to close properly. After laser cutting, we also drilled holes at the top and on either side of the dodecahedron: the top holes to hang it from, and the ones on either side to easily attach and detach the top and bottom faces with zip ties (we hoped to find a more environmentally friendly alternative, but realized we could only trust the zip ties).

This is how the lit up dodecahedron looked in the end: 

After getting the base acrylic part done, we then had to measure and fit the components that would be placed inside. This started with creating a large ring of IR LEDs facing outwards (which had to be done twice, as the first one was too small). Once the IR LEDs were tested properly with the PS3Eye camera, we worked on changing the Neopixel ring's color from our phones via the Bluefruit LE. Fortunately, this was one of the simplest parts of the process.
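
For reference, the Teensy side of that color control can be as small as the sketch below. This is a minimal illustration only: it assumes the Bluefruit LE is in UART data mode on Serial1 and that the phone sends comma-separated "r,g,b" lines; the pin number, pixel count, and message format are stand-ins rather than our exact wiring or protocol.

```cpp
// Minimal Teensy sketch for the phone-controlled Neopixel ring.
// Assumes a Bluefruit LE module in UART data mode on Serial1, receiving
// lines like "255,40,0\n" from the phone, and a 16-pixel ring on pin 17 --
// the pin, pixel count, and message format are illustrative.
#include <Adafruit_NeoPixel.h>

const int RING_PIN = 17;
const int NUM_PIXELS = 16;
Adafruit_NeoPixel ring(NUM_PIXELS, RING_PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  Serial1.begin(9600);  // Bluefruit LE UART default baud rate
  ring.begin();
  ring.show();          // start with all pixels off
}

void loop() {
  if (Serial1.available()) {
    int r = Serial1.parseInt();
    int g = Serial1.parseInt();
    int b = Serial1.parseInt();
    if (Serial1.read() == '\n') {       // only update on a complete line
      for (int i = 0; i < NUM_PIXELS; i++) {
        ring.setPixelColor(i, ring.Color(r, g, b));
      }
      ring.show();
    }
  }
}
```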


We then had to figure out how to make everything self-contained, with no wires running out to get power from the computer. We ended up using two lipo batteries, each connected to a PowerBoost 1000 Charger, to power the Teensy LC, the Neopixel ring, and the IR LEDs. To avoid any unnecessary shadows, we used a styrofoam ring to keep all the components tightly packed at the bottom center of the dodecahedron. We also taped together the Teensy LC, the Bluefruit LE, and the PowerBoost 1000 Charger, as seen below. In the end, with all the working components soldered and taped into the dodecahedron, the hardware was done.

The inside of the dodecahedron with the IR LED ring:


Software (Code and Sound Design)

The piece has three parts to it, reflecting the performer’s relationship to the dodecahedron:

  1. Exploration & Curiosity 
  2. Control
  3. Chaos / Loss of Control

As we settled on our concept and the design of the glowing dodecahedron, we aimed to create consistent graphics that would change only slightly as the story and performance progressed. We wanted to be able to adjust the colors, sound, and interactions so that the graphics would properly capture the story and its mood. We therefore settled on particle graphics, an effective visual that would remain open-ended as we continued to work on the piece.

  1. Exploration & Curiosity 

The first stage involves a field of particles, each colored according to its distance from the dodecahedron. The particles closest to the dodecahedron are nearly white, while farther ones become blue and eventually black at the furthest distance. We made this falloff distance change constantly, creating a circular pulsing effect around the dodecahedron. The particles drift closer and closer to the dodecahedron's position, and when they get very close to it, they are reset to a random position on the screen. The music at this point is slow and builds up as the dancer gains confidence in playing with the dodecahedron.

This created multiple effects we desired: the animation leaves a trail of particles along the line where the dodecahedron is swinging, and the pulsing makes the dodecahedron feel like a living object. This was combined with a pulsing feedback sound, programmed in openFrameworks, whose pitch peaks when the pulse reaches its apex.
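
A minimal openFrameworks sketch of this stage-one behavior might look like the following; `particles` (a vector of points), `orbPos` (the tracked dodecahedron position), and `pulseRadius` are illustrative member names, not our exact code.

```cpp
// A sketch of the stage-one field. particles is a vector<ofVec2f>,
// orbPos the tracked dodecahedron position, pulseRadius a float member.
void ofApp::update() {
    // Oscillate the color-falloff radius to create the pulsing effect.
    pulseRadius = ofMap(sin(ofGetElapsedTimef() * 2.0), -1, 1, 200, 500);

    for (auto & p : particles) {
        ofVec2f toOrb = orbPos - p;
        p += toOrb.getNormalized() * 2.0;       // drift toward the orb
        if (toOrb.length() < 10) {
            // Respawn at a random position once a particle arrives.
            p.set(ofRandom(ofGetWidth()), ofRandom(ofGetHeight()));
        }
    }
}

void ofApp::draw() {
    ofBackground(0);
    for (auto & p : particles) {
        // White near the orb, through blue, to black at the far edge.
        float t = ofClamp(p.distance(orbPos) / pulseRadius, 0, 1);
        ofColor c = ofColor::white.getLerped(ofColor::blue, t);
        c.setBrightness(c.getBrightness() * (1.0 - t));
        ofSetColor(c);
        ofDrawCircle(p, 2);
    }
}
```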

Within this stage, we also added a part where the particles would come to form a circle, allowing our dancer to move the dodecahedron around with the dynamic particle ring following her. This effect was added to convey her rising confidence in interacting with the dodecahedron, giving her greater control of the particles around her. This was accompanied by a force field noise. 

  2. Control

This stage starts with our performer untethering the dodecahedron from the string, causing the ring of particles to ‘explode’ as they move off the screen, accompanied by a feedback sound. From this point on, our performer dances freely with the dodecahedron, and white particles are emitted on the screen from its current position. This part was designed to be free-flowing, demonstrating her complete control of the dodecahedron. For this we made a particle class whose instances are emitted from the dodecahedron’s position with random velocities and gradually fade out as we lower their alpha values. After a certain amount of time, once a particle is no longer visible, it is removed from the list.
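
A minimal version of that particle class could look like this; the emission rate, fade speed, and names are illustrative rather than our exact implementation.

```cpp
// A minimal stage-two particle: emitted from the tracked position with a
// random velocity, faded out by lowering its alpha, then pruned.
class Particle {
public:
    ofVec2f pos, vel;
    float alpha = 255;

    Particle(const ofVec2f & emitter)
        : pos(emitter), vel(ofRandom(-2, 2), ofRandom(-2, 2)) {}

    void update() {
        pos += vel;
        alpha -= 2.0;                       // fully faded in ~2 s at 60 fps
    }
    bool isDead() const { return alpha <= 0; }
    void draw() const {
        ofSetColor(255, 255, 255, alpha);
        ofDrawCircle(pos, 3);
    }
};

// In ofApp::update():
//   particles.push_back(Particle(orbPos));                    // emit
//   ofRemove(particles, [](Particle & p){ return p.isDead(); }); // prune
```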

  3. Chaos / Loss of Control

For this part we wanted a smooth transition between her confidence in drawing these particles and her loss of control of the orb and its power. Initially we attempted to create extremely chaotic particle graphics, but decided this would not be powerful enough to illustrate our point. We also thought about how to inject our narrative of surveillance and submission to technology into this final stage. We settled on forming an image out of the particles: particles emitted from the dodecahedron slowly drift into assigned positions until the image appears. The image we settled on was an eye. The code converts an image’s pixel values into x and y locations on the sketch, and the particles in the particle class gradually move towards their assigned locations under a force. The particles are initially white but gradually become red as the contour of the eye becomes recognizable. This is accompanied by a shift in music, from flowy and powerful to dramatic and ominous.
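
The image-to-particles step can be sketched roughly as follows, assuming the eye is a dark figure on a light background; the filename, sampling step, and brightness threshold are all assumptions.

```cpp
// Building the particle targets from the eye image -- "eye.png", the
// sampling step, and the threshold are illustrative, not our exact values.
ofImage eye;
eye.load("eye.png");
vector<ofVec2f> targets;
int step = 4;                       // sample every 4th pixel
for (int y = 0; y < eye.getHeight(); y += step) {
    for (int x = 0; x < eye.getWidth(); x += step) {
        if (eye.getColor(x, y).getBrightness() < 128) {
            targets.push_back(ofVec2f(x, y));
        }
    }
}

// Each particle is assigned one target and steered toward it by a
// spring-like force, with damping so the drift stays smooth.
ofVec2f force = (target - pos) * 0.05;
vel = vel * 0.9 + force;
pos += vel;
```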

Initially we just had the eye rendered statically, but we finally settled on a moving pupil positioned at the dodecahedron's location, so that the pupil follows our performer as if watching her. The pupil's movement uses the same code as the ring at the beginning of the performance, constraining it to the size of the iris. At this point the dodecahedron turns red, and the performer panics, trying to regain control of the dodecahedron and her environment. As she does this, she taps the dodecahedron, and red particles are emitted from it with every tap.
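
The constraint itself boils down to clamping the offset vector, along these lines (illustrative names):

```cpp
// Keeping the pupil inside the iris: clamp the tracked position's offset
// from the iris center to the iris radius -- the same idea as the ring.
ofVec2f offset = orbPos - irisCenter;
if (offset.length() > irisRadius) {
    offset = offset.getNormalized() * irisRadius;
}
ofVec2f pupilPos = irisCenter + offset;
```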

Finally, the particles form a coherent eye, but the pupil/iris position jumps around as the eye glitches and moves rapidly and erratically. As this starts to occur, a track of static with an unsettling, incomprehensible voiceover grows louder. Eventually all that remains is the glitching eye and the static noise, and our performer falls to the floor.

Tracking

We tracked the dodecahedron by converting the image received from the PS3Eye camera to grayscale, filtering out the background, and making the brightest points more prominent in the image. The result was then passed through a contour finder using ofxCv, and the x and y location of the first contour in the array of found contours was taken. This position is then mapped from the camera's dimensions (640 by 480) to the full width and height of the window. We added a debug mode that shows the camera feed, as well as a mode that lets us control the graphics with the mouse rather than the tracked dodecahedron.
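
In openFrameworks terms, the pipeline looks roughly like this; the threshold value and grabber setup are assumptions, not our exact code.

```cpp
// A sketch of the tracking pipeline -- grabber setup and the threshold
// value are assumptions; in our app this lives alongside the debug mode.
void ofApp::update() {
    grabber.update();
    if (grabber.isFrameNew()) {
        ofxCv::convertColor(grabber, gray, CV_RGB2GRAY);
        ofxCv::threshold(gray, 230);            // keep only the IR highlights
        gray.update();
        contourFinder.findContours(gray);
        if (contourFinder.size() > 0) {
            cv::Point2f c = contourFinder.getCentroid(0);
            // Map from the camera frame (640x480) to the window.
            orbPos.x = ofMap(c.x, 0, 640, 0, ofGetWidth());
            orbPos.y = ofMap(c.y, 0, 480, 0, ofGetHeight());
        }
    }
}
```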

Here is a link to the project folder: 

https://drive.google.com/drive/folders/1DRVG50SwtRa_-QK1co6U2nHCqIJ59ngA?usp=sharing  

Here is a video of the final performance:

https://youtu.be/CLiOx6pIhwY

Reflection

Overall, though the whole process was long and time-consuming, it was incredibly rewarding. Working as a pair definitely made this ambitious performance doable and eased the overall stress, especially when planning and making the hardware and making tough decisions about the performance. We also got really lucky in having Erica Wu as our dancer: her patience, her dedication to the piece, and her amazing dancing really enhanced it. Though we were initially intimidated by this final performance, we're both incredibly happy with how Ananke turned out.

Prototype

Mari and I have been playing around with using infrared lights and the PS3 Eye camera to track the brightest point. We started off with just a breadboard with infrared LED bulbs plugged into it. This worked pretty well from a close distance, and when we later tried it from further away (around 15 feet), it still tracked the bulbs consistently. The program finds the brightest point in the PS3 Eye's video and draws an ellipse on it.
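
The search itself is a simple scan over the camera's pixels, along these lines; `grabber` stands in for however the PS3 Eye frames are read, and the ellipse call would live in draw().

```cpp
// Brightest-point search: scan every pixel, keep the brightest one, and
// mark it with an ellipse. Names are illustrative.
ofPixels & pix = grabber.getPixels();
int brightestX = 0, brightestY = 0;
float maxBrightness = 0;
for (int y = 0; y < pix.getHeight(); y++) {
    for (int x = 0; x < pix.getWidth(); x++) {
        float b = pix.getColor(x, y).getBrightness();
        if (b > maxBrightness) {
            maxBrightness = b;
            brightestX = x;
            brightestY = y;
        }
    }
}
ofDrawEllipse(brightestX, brightestY, 20, 20);
```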

Next, we found a styrofoam ball in the fabrication area of the IM lab and soldered infrared LEDs along its circumference on a wire. We hung this from an extension cord in the IM lab, similar to how we intend to hang the ball during the performance. The tracking seemed consistent even though the LEDs covered only about a fifth of the ball, so placing more around it will definitely improve the camera's ability to track it. We hope that using a clear acrylic sphere with the LEDs inside will make the tracking even more consistent. We are also hoping to combine colored LEDs or a neopixel with the IR LEDs to create a ball that is aesthetically pleasing as well as consistently trackable via infrared.

Finally, we used the capacitive touch example to send information from the Teensy to openFrameworks. We ran a wire from the Teensy to the styrofoam ball and added some copper tape to make a portion of the sphere conductive for capacitive touch. Using the serial inputs example, we were able to detect when someone was touching that part of the ball and create animations around the sphere's tracked position. In the video above, we simply made balls come out of the tracked position of the sphere when it was touched. We plan to make the ball touch sensitive so that whenever our performer Erica touches it, the sketch changes in response. We are still not sure how we will make the entire acrylic sphere touch sensitive while keeping the wiring discreet.
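
On the Teensy side, the capacitive-touch reading can be as small as the sketch below; the pin number and threshold are illustrative and would need tuning against the copper tape.

```cpp
// Teensy side of the capacitive-touch test: read a touch-capable pin
// connected to the copper tape and report a touch state over serial.
const int TOUCH_PIN = 15;       // one of the Teensy's touch-capable pins
const int THRESHOLD = 2000;     // raw touchRead value counted as a touch

void setup() {
  Serial.begin(9600);
}

void loop() {
  int value = touchRead(TOUCH_PIN);          // larger when touched
  Serial.println(value > THRESHOLD ? 1 : 0); // openFrameworks reads this
  delay(33);                                 // ~30 updates per second
}
```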

We are happy with the progress that we made over the weekend, and for this next week we plan to create most of the visuals for the performance as well as hopefully have more consistent hardware created.  

Final Performance Proposal (Steven & Mari)

For the final project, Mari and I will be working together as a group. Our performer will be Erica Wu. Here is a link to the Google Slides presentation that we made: https://docs.google.com/presentation/d/10ZB8SkHfJ_wPWPfbHnkYe8WA9gQsOsL8FEtGiq3uTYo/edit#slide=id.g6b36aaa811_0_14

Our performance will revolve around the relationship between our performer and a glowing ball tethered to the ceiling. The ball can be swung and can be untethered from the rope hanging it. The ball will be continually tracked, and based on our performer's distance from the sphere and her touches of it, different effects will be generated at the ball's position on the projection.

The concept of this performance revolves around the double-edged nature of our relationship with technology and seeks to explore the benefits of technology along with its negative impacts on our psyche and the environment. This is the core concept driving the story of the performance. Our intention is for the ball to represent technology, and explore how the performer’s interaction and manipulation of the ball impacts her well-being as well as the environment around her. 

In order to fully communicate and explore this, we decided to split the performance into three sections. In sequence, these are: 

  1. Playful exploration of the ball, with a growing fascination with it. This involves light interaction with the ball, resulting in effects that clearly respond to either her touch or her close proximity to the ball. Stylistically, we intend for this section to be soothing and relaxing, with lighter colors and smoothly moving graphics.


  2. Continued exploration of the ball, now with the performer attempting to fully control it. One idea we had is that our performer could take the (still tethered) ball, move it around, and have it function as a brush, creating shapes on the projection as she moves. The tone of this section is marked by greater confidence in her use of the ball, and the music and graphics will be more representative of her strength and confidence.

  3. Finally, the performer, very interested in the power of the ball, decides to untether the sphere from the rope. This results in chaos and in her inability to control the disorder that ensues, demonstrated by chaotic, dark, jarring visuals and music. This last stage represents the ultimate double-edged nature of technology and social media: users can fully embrace and be confident in the online tools they use, but at the cost of digital privacy. The performer's unsuccessful attempt at controlling the sphere symbolizes the fact that the data we offer online is, most of the time, out of our hands.


We are still not sure how to transition between sections two and three, or how the performance will conclude. We are considering having Erica reattach the ball to the string in the hope of restoring order or rekindling her previous relationship with the ball, an attempt that ultimately fails, with the environment still in disarray. We are also unsure how to technically implement this and continually track the position of the ball. We are considering using brightest-point tracking and thresholding the background in order to eliminate the brightness coming from the projection.

Interactive Floor

Kyle, Sara, and I got the installation working on Sunday night. We mounted the Kinect and projector up on the side near the computers. It was pretty simple to get everything up onto the beams on the ceiling, and once we figured out the projection mapping and got the Kinect to detect only within a certain distance range, everything came together. We played around with the projector's focus, which we thought we had optimized when we worked with it a while ago.

I tried it out on a chair in the IM lab just to test out the projection mapping before moving it upstairs to 153. 

I added a different graphic to mine by porting over an example from the generative design book, with the sizes of the circles changing depending on the position of the person walking on the projected area. 

Brainstorming for the Final

I am very interested in exploring humans’ relationship with technology in the final performance and believe that an interactive performance is the best way to do this. I am specifically interested in touching on issues of surveillance, the reduction of people’s entire lives to data and our increasing reliance on technology in our everyday lives. 

  1. I played around with an example I found in the Generative Design book. This example took a picture and turned pixels within a certain range of colors into a string of characters. I changed the picture to the silhouette of a person, and the string into a series of random letters, characters, and numbers. This is supposed to represent our reduction to data, away from being real humans. I also added a text box, which turns the words coming from the person's mouth into mere strings of random characters.
  2. One idea that really stood out to me is creating a segment of the performance where the performer flees from surveillance. One way to implement this would be to have moving parts of the sketch representing surveillance (cameras, lights, etc.) while the performer maneuvers around digitally rendered structures in the environment (roofs, boxes, buildings, etc.). When the performer is caught in the light or camera, it would be as if they were caught by the surveillance. This is just one idea for a technical implementation, but I think using an interactive performance to mimic surveillance, and the performer's struggle to avoid it and/or submit to it, would be interesting to create.

Documentation: Closest Point Tracking and Average Point Tracking

One difficult aspect of the assignment was smoothing the closest-pixel and average-pixel values. I spent most of my time trying to get the smoothing to work; the value only became accurate once I moved the smoothing code snippet outside the for loops that iterate over the x and y pixel arrays. Initially it would max out at about 800 pixels on the x-axis (even though I had mapped the values to a maximum of 1024 pixels). ofMap was also not working properly at times, producing NaN (not-a-number) errors whenever I mapped to ofGetWindowWidth()/Height(); simply changing it to ofGetWidth()/Height() fixed the issue for me.
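
For reference, the working version smooths once per frame, after the pixel loops, roughly like this (illustrative names; the 0.1 smoothing factor is an assumption):

```cpp
// Smooth once per frame, after the loops have found the closest point --
// running this inside the pixel loops was what broke it for me.
smoothedX = ofLerp(smoothedX, closestX, 0.1);
smoothedY = ofLerp(smoothedY, closestY, 0.1);
float mappedX = ofMap(smoothedX, 0, kinect.getWidth(), 0, ofGetWidth());
float mappedY = ofMap(smoothedY, 0, kinect.getHeight(), 0, ofGetHeight());
```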

Other than that, everything went well and I was able to track the closest point and the average point. Here is a screenshot of my final result, with the ellipse serving as the closest point in this example.

The Implicit Body Framework: Response

In this reading, Stern discusses the necessity of evaluating interactive installations and performances through their “amplification of bodiliness.” For this he outlines an Implicit Body Framework, an in-depth approach to evaluating the complex relationship between the body and the piece. The first area of analysis looks at the author’s approach to their work and how this shapes our understanding of it. The second is a description of the piece: its visual, auditory, and other characteristics, and how it reacts to people. The final two areas were the most helpful for me in thinking about how interactive experiences are conceptualized and created. The third highlights the intricate relationship between participants and the interaction, which I found really interesting. His discussion of Messa di Voce, a piece I had seen before, captured this point well: people would make exaggerated movements that were not really necessary to the interaction, simply because the design of the experience made them feel natural. Finally, his fourth area of analysis, relationality, constantly questions the body and its relationship to the piece, to itself, and to its own dynamism. The final two areas really made me think about how installations and performances can be designed to create much deeper interactions: ones that vary from person to person and evolve the longer a person spends with the piece, constantly offering new ways to engage.

Refrigerator Door – Final Documentation

 Refrigerator Door

Refrigerator Door is an interactive installation that features movable magnetic letters attached to an acrylic panel. Viewers can move the letters around on the board, and when two letters come within a certain distance of one another, letters that complete a word are projected onto the panel between them. Attempting to recreate the nostalgia of playing with letters on a refrigerator door as a child, while also creating an engaging experience for today’s children, the piece combines a physical interface (the alphabet magnets) with a digital one (the projected letters).

Ideation

 After seeing a video by OpenFrameworks that featured an interactive tabletop installation that augmented the movement of physical objects through changes in a tabletop projection, I wanted my project to be very similar, using objects to dictate the state of the sketch. My initial idea was to create a tabletop installation as well, making a project that relied on light, shapes and sound to create an experience that was similar to the video I saw. I am really interested in using sound and light to create an installation, and thought that using a tabletop and object tracking would be the easiest way to do it. 

I thought that I would use a Kinect or webcam to track the location of the objects from above while they would be on the tabletop, along with a projector projecting the sketch from above. 

I initially, with the help of Aaron, thought of three ways to technically implement this: 

  1. Color tracking: I thought that this might work provided the colors would be distinct from the hands and possibly clothes of the people interacting with it. 
  2. Shape detection: Might have been the most reliable technique if I used it, but after some digging around I was not able to turn it into a working prototype.
  3. Marker Detection: This is what I ended up doing. Aaron suggested that I look at the Reactivision library and how it uses marker tracking with an infrared camera to view unique markers’ locations. 

The Process: 

As stated earlier, I wanted to use light and shapes to create this installation, so I started by looking for an addon that would let me do this. I found Light2D to be an interesting addon for the purpose, with its ability to easily render shadows and different shapes. I started off with color tracking, using a working example from the ofxCv addon, and then combined the two, rendering the Light2D shapes at the positions of specific colors found by the color tracker. This worked, but was extremely unstable, changing with different light conditions. I made some progress with it, as seen below, but I don't think it would have worked very well because of how unreliable it was.

Prototype using color tracking and the Light2D addon.

I then tried using background subtraction with the Kinect to filter out the noise that the background might cause for the stability of the sketch. However, I was not able to get this fully working either. I also experimented with shape detection, and Aaron got a working example of it running for me because the old version was extremely buggy and would not run. This did not work too well with the sketch either.

I also ran into issues with the idea behind the project: after talking to Aaron, I realized that there was not much substance to it. I was still interested in moving objects around and having the sketch update accordingly, and the idea Aaron and I came up with was based on my interest in alphabet magnets on a refrigerator door. The new project would be geared towards children, perhaps as something educational or entertaining. The letters could be moved around a panel, and when two letters came within a certain proximity, letters completing real words would be projected onto the panel. This required me to change the physical structure of the project to fit the premise of the refrigerator door, so I switched to a vertical panel, with people interacting with it from the front and the projection and camera detecting positions from behind.

This actually made things simpler for me, since I did not have to worry about people interfering with the marker tracking since it would all be done from behind.

Initial tests using the ofxAruco library

I originally wanted it to be on the semi-opaque windows of 153, but was not able to find magnets strong enough for both sides of the panels. So I decided to use a clear acrylic panel and fine sandpaper to create the same effect of a semi-opaque board that could be projected onto from behind. 

After Aaron referred me to Reactivision, we found ofxAruco, an addon that allowed me to use marker tracking. I combined this with the PS3Eye infrared camera and two infrared lights illuminating the markers, which made marker detection stable regardless of lighting conditions.

Final Stages: 

With all this in place and tested, I was pretty sure the installation would work with the infrared tracking; what remained was designing the piece and coding how letters would render between the physical letters on the panel. With some help from Aaron using vectors to compute distances between markers, I also changed it to only detect letters facing a certain direction. Each marker id corresponds to a letter; the code then loops through the array of words to see whether a word's first and last letters match the two detected letters. If a match is found, the middle letters are taken from that entry in the array and rendered at the correct positions between the two letters. I created two arrays, one of four-letter and one of three-letter words. If two letters were extremely close, a three-letter word would be rendered, with just one letter displayed between the two; if they were slightly farther apart, two letters would be rendered to make a four-letter word.
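
A sketch of that matching-and-rendering logic is below; `leftLetter`/`rightLetter` stand for the letters decoded from the two marker ids, and `words` is whichever word array the marker distance selects. All names are illustrative.

```cpp
// Match a word to the two detected letters and render its middle letters
// spaced evenly between the two marker positions.
for (auto & word : words) {                   // e.g. {"cat", "cut", "cot"}
    if (word.front() == leftLetter && word.back() == rightLetter) {
        string middle = word.substr(1, word.size() - 2);
        for (int i = 0; i < middle.size(); i++) {
            // Interpolate each middle letter's position between markers.
            float t = (i + 1) / float(word.size() - 1);
            ofVec2f pos = leftPos.getInterpolated(rightPos, t);
            font.drawString(string(1, middle[i]), pos.x, pos.y);
        }
        break;                                // use the first match found
    }
}
```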

The setup


Magnets attached to markers 

To physically create the installation, I used a cart from the IM lab along with two clamps to hold the acrylic panel vertically on the top of the cart and moved this in position outside 153. I placed the projector and camera (which was clamped down using arms) onto a shelf which was positioned behind the panel and cart. I then cut out the markers that I printed out and attached them to cardboard squares using hot glue. On the other side of this I would attach a magnet. I bought cheap foam alphabet pieces from Daiso and also attached magnets on the back of them. The foam magnets would go on the front side where the users would move them, while the cardboard with the corresponding marker and magnet would go on the opposite side.

For the design, I found a font (https://www.dafont.com/alpha-fridge-magnets.font) resembling alphabet magnets. I also added moving, rotating flowers to show that the sketch is dynamic; I was worried that people might not be interested in seeing or interacting with the piece if they did not see some movement on the acrylic panel. This also created an experience aimed at children, resembling the iPad apps that today's children might use to learn the alphabet.

Finally, I used the projection mapping example to ensure that the detected letters would properly correspond to the letters rendered on the screen. Although it was not perfect, it still allowed me to get the positioning correct, which was a huge challenge when I was prototyping in the final location.

Since the presentation of the projects two Wednesdays ago, I have added a debug mode, slightly changed the color of the letters to make them more visible, and increased the size of the rendered letters.

Final Reflection: 

This was a very challenging project and I felt overwhelmed a lot of the time. However, it was extremely rewarding and I learned a lot. Even though using openFrameworks is still challenging, I am definitely a lot more comfortable with its addons and functions. I do wish my code were cleaner, though, and feel it could have been a lot more efficient. It was a challenge to constantly detect the magnets in real time, so I needed to record the previous position of every marker and only update it when the marker was detected again.
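
That caching boils down to a map from marker id to last-known position, something like this sketch (illustrative names):

```cpp
// Cache last-known marker positions so letters don't flicker when a
// marker drops out for a frame.
std::map<int, ofVec2f> lastKnownPos;

// Each frame, update only the markers that were actually detected:
for (auto & marker : detectedMarkers) {
    lastKnownPos[marker.id] = marker.position;
}
// Rendering reads from lastKnownPos, which still holds the previous
// position whenever a marker is momentarily undetected.
```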

The code to the github repo is here: https://github.com/slw515/Refrigerator-Door

Midterm: User Testing

I conducted some user testing, with three people having no information or knowledge of the project and two others having some idea of the project. For this blog post I will include the responses of the people who had no prior knowledge about the project. 

User Testing #1:

The user used the project pretty much as expected. His main comments were recommendations for increasing the project's complexity, such as using several foam letters to determine which letters were rendered on the screen. He was a little confused at first about the rendered letters, but realized that they were used to fill in the gaps between the physical letters. He also dropped one of the magnets, which made me realize that I should perhaps have used stronger magnets.

User Testing #2: 

This user used it pretty much as expected; she was also a little confused about the rendered letters. She commented that some music or other animations (apart from the flowers) could be added, because she felt this would be an effective experience for children.

User Testing #3:

This user did something that I did not expect: she moved the magnetic letters immediately to try to start spelling words using the physical letters. She did not really see that the letters rendered were supposed to fill the gaps between the physical letters on the board. 

Midterm Progress

After getting the trackers working on Sunday, I started to code the logic of tracking the markers' positions and displaying letters in between. I still struggled with wrapping my head around how to do it, but I think I am fairly close. I have the logic worked out for searching an array of words (three for now), checking whether a detected letter is the same as the first letter in a word, and then displaying the next letter on the screen. At the moment I am not sure why the first value of each word cannot be stored as a variable in one of the loops, but I'm sure this is just a data type conversion problem. Once that is sorted, I can move on to the logic of each letter's position (whether it's to the left or right, bottom or top, etc.) and whether to display letters in between.

I spent Monday night and today trying to find powerful magnets here on campus, but unfortunately, I do not think I could find any magnets strong enough to stick through the panels here in 153. Ume suggested ordering on Amazon but I want to make sure that I get the size right because they are quite expensive. I am considering creating an acrylic panel, like a mirror, that could stand more or less on its own that would function as the “refrigerator door” for this project. I think I will do some more talking with Ume and Aaron tomorrow to see what might be the best option at this point. 

I think after that, it’s just a matter of style choices for fonts and effects in rendering the whole sketch up on whatever I will be projecting onto. I know that this is a lot of work for the next week, but I think I will have more time to focus on it than this week. 

Here is a video of a simple setup I made just to show the rendering of letters on a panel I found in the IM lab. Of course, this is nowhere near the final creation I have in mind.