Introduction:

My goal for this project was not to get all fancy with gadgets and gizmos and things that I barely knew how to use, but to use the skills I had acquired during this class and before in order to tell a complete, consistent, and compelling story that was both visually and aurally engaging.

In order to do this, I combined some of the video and timeline skills from my openFrameworks midterm with pretty much the most interesting and fun thing I have learned in Interactive Media since: projection mapping. I have had a longtime fascination with video making, and I feel like projection mapping is one of the most exciting ways to incorporate it into interactive media. Although my projection map ended up being only a small car without any moving parts, I feel like the way it integrated me into my performance was clever and wonderful.

Description:

 

My projection-mapped performance Time Machine is the story of a man stuck in traffic who suddenly realizes that his car has turned into a time machine. He plays around with the play and pause feature, controlled by the horn of his car, until he realizes that he can pause time all he wants, but that will not get him to work any faster. He sleuths around to see whether anything else in his car can control time. After some searching, he finds that his turn signal can make the car go forward or backward in time depending on which way it is pressed. He plays with this feature too until he realizes that going backward and forward won’t get him to work any faster either, at which point he cranks the speed up to maximum. At this point something happens, either a crash or a glitch, and everything stops and is drenched in red. He fiddles around for a moment until the machine starts going back uncontrollably. The video gets faster and faster. Images of the Industrial Revolution and dinosaurs flash by in an instant as everything begins to fall apart. A circular glitch signifies the ending and looping of time. Finally, it disappears, and all that is left is the video icon for “stop”: a small square, drenched in red. He reaches out the door and presses it, and everything ends.

Now picture that story if it were a little bit funny, and that is basically how my performance went.

To help signify what was happening in the story, besides the audio and visual cues of the projection behind me, I had a button mapped onto the door of the car that would cycle between pause, play, fast forward, fast backward, and stop.

Process:

The performance ended up consisting of several layers of video and audio cues triggered by buttons. To make the interaction with the car convincing, I incorporated these buttons into it. I started out planning to have separate projections for the background and the car, but ended up combining them into one single projection. In the end, I believe this was not only easier, but much more cohesive.

To get the video, I took a couple of trips into the city in good filming light. I used my phone to shoot video from inside taxis whenever we were in a traffic-ridden zone, then used the video I liked best.

As for the soundscape, I combined and sampled sounds from the NYU stock sound folder, such as the key-jingling and car-starting sounds at the beginning of the performance. I looked through hundreds of clips for just the right audio, and when I could not find exactly what I wanted, I spliced multiple clips together.

Building the car was difficult due to a lack of suitable resources. To make it white I had to use white paper and tape, as there was no white material to build it from and no white paint to paint it with. Also, because I made it out of cardboard, it was not very structurally sound, even after I reinforced it with scrap acrylic.

After my code was mostly complete, I spent several hours setting up my projection map and figuring out the best way to move the car in and out of the performance space. I attached the car to a small trolley table and used that to bring it to the wall, then used the shadow of the door on the background to align it.

Code:

(my code is in the title for code, isn’t that clever?)

My code had several parts: serial communication, video playback, audio playback, and image mapping.

My first major struggle was figuring out how to make a linear story play with just a single trigger button. To do this, I created (with the help of some friends) a counter that counted up each time the button was pressed.
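In compressed form, the counter looks something like this. This is a sketch, not my actual project file; names like cueIndex and advanceStory are made up for illustration:

```cpp
#include "ofMain.h"

// Sketch of the cue counter: one counter, incremented per button press,
// with a switch that maps each count to a beat of the story.
class ofApp : public ofBaseApp {
public:
    int cueIndex = 0;

    void onButtonPressed() {   // called whenever the trigger byte arrives over serial
        cueIndex++;            // each press moves the story forward exactly one cue
        advanceStory();
    }

    void advanceStory() {
        switch (cueIndex) {
            case 1: /* keys jingle, car starts */ break;
            case 2: /* first pause, horn honks */ break;
            case 3: /* video resumes */           break;
            // ...one case per beat, all the way to the red "stop" square
            default: break;
        }
    }
};
```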

It took me some time to figure out exactly the format for playing videos and audio so that they play how you want, when you want. Eventually I figured it out, however, and was able to start building the code for my story.
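The pattern that finally clicked, boiled down to its skeleton (the file names here are placeholders):

```cpp
#include "ofMain.h"

// Load once in setup(), update the video every single frame, draw it,
// and let sounds run independently on top.
class ofApp : public ofBaseApp {
public:
    ofVideoPlayer traffic;
    ofSoundPlayer ambience;

    void setup() {
        traffic.load("traffic.mp4");          // placeholder file names
        traffic.setLoopState(OF_LOOP_NORMAL);
        traffic.play();
        ambience.load("city.wav");
        ambience.setLoop(true);
        ambience.play();                      // audio keeps going whatever the video does
    }
    void update() { traffic.update(); }       // forget this and the frames never advance
    void draw()   { traffic.draw(0, 0, ofGetWidth(), ofGetHeight()); }
};
```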

After finishing pretty much everything, I was still having problems with one of my buttons, which kept bouncing (registering two presses in a row). I fixed it by incorporating some debouncing code into my Arduino file, which was tricky because I had to work around my serial communication.
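The debounce logic ended up being roughly this shape on the Arduino side (reconstructed here as a sketch; the pin number and trigger byte are examples):

```cpp
const int BUTTON_PIN = 2;                // example pin
const unsigned long DEBOUNCE_MS = 50;    // how long a reading must stay stable

int lastReading = HIGH;
int stableState = HIGH;
unsigned long lastChangeTime = 0;

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  Serial.begin(9600);
}

void loop() {
  int reading = digitalRead(BUTTON_PIN);
  if (reading != lastReading) {
    lastChangeTime = millis();           // the reading moved: restart the settle timer
  }
  if (millis() - lastChangeTime > DEBOUNCE_MS && reading != stableState) {
    stableState = reading;               // steady long enough to trust
    if (stableState == LOW) {
      Serial.write('b');                 // example trigger byte: one per clean press
    }
  }
  lastReading = reading;
}
```

The important part is that openFrameworks only ever sees one clean byte per physical press.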

Reflection:

Overall, I was very pleased with how this project turned out, and I have received some really helpful and positive feedback. I believe that having a strong anchor point in my storyline was really helpful in keeping me on track. I also believe that I not only acquired new skills with this project, but also gained a greater awareness of my limits. I played to my strengths and was able to make a powerful product with minimal headache.

I believe that my biggest shortcoming in this performance was not my tech, but actually my acting. If I were to do this performance again, I would work with an actor other than myself so that I could give them direction, especially since the performance is better with a little well-thought-out wit. This time it was easier to work alone, as I mostly knew what I wanted and had no time to train an actor, especially while I was still building the piece. In retrospect, I also believe I could have created a better final product, but I did not have time after the performance was completed to step back and think about the aesthetic. An example of this would be making a video set that didn’t have a long idle section, or incorporating a crash sound when everything turns red.

If anything, this project has proven that I have learned a lot in one year. In January, when I took my first IM course, I had never coded before in my life. Now I am semi-competent with the basics of a professional coding environment. I’d say that’s pretty good.

 

Final Project Progress Report: December 4, 2019 (it doesn’t feel like winter)

Alright, so at this point I am feeling pretty confident about how my project is going. I have gathered basically all of the resources I need and prepared my code so that once I am in the space, I can write the ending to my story and make it look cool. Please watch this video explaining what my code does so far and how the hardware works. After that, I will go into more detail about each aspect of the project that I worked on.

Soundscape and Video:

One of the first things I wanted to focus on once I got my basic code working was sorting out the video material for my piece. The video that I plan to project around my car door is one I took while sitting in a taxi at magic hour. Over the long weekend, I took several taxi rides like this one and tried to get the best footage of traffic that I could. This is the one I’ve liked best so far.

I brought a microphone out into the city as well and recorded some natural traffic sounds, but ended up not really liking them. Instead, I looked through NYU’s stock sound library, reviewing around 50 different city background sounds before I finally decided on the one I wanted. I wanted the sound to start all at once from nothing, pulling the viewer into the performance. I may also incorporate some fuzzy background noise while the image is paused.

Icons and Code:

In addition to retrieving and editing the sounds and video for this project, I also decided that I needed to make myself a set of pause, play, fast forward/backward, and stop icons. I used Photoshop to make PNGs of each one and put them in my data folder. I am actually really happy with how they look, and they feel like they come from the same set.

Since working on the midterm, I have felt much more confident with openFrameworks. I still don’t get everything, but I feel like I am actually at the point where I can worry about coding instead of file management. That being said, there were a few minor hiccups I had when trying to get the code to work smoothly and consistently, as well as actually do what I want it to. This project is interesting because I use aspects of several oF examples but am not really building off of any one of them. That really makes it feel like my own.

Door Building:

It took me a while to build this white car door. There was just enough cardboard in the lab to tape together into one. I used an old spool from a 3D printer, some foam, a monitor arm, and a button to make the steering wheel, and I think it looks pretty good. The whole thing is supported by a small wheeled table. In general it looks convincing, but I am curious how I can make it look even more like a car door. I am considering cutting out a wheel well like in this drawing.

It would be ideal if I could find some white paint to make it more uniform, but if that is not possible, I don’t think it will be a huge problem, because the car door will have a projection on it and the room will generally be dark.

Projection Mapping:

I got the projection mapping software working (almost), including the saving mechanism. My first and biggest problem with the mapping was that I couldn’t find a way to map just part of the image: if I put the mapping function around only the part I wanted to map, everything else I was drawing on screen would disappear. This may be solvable, but I am not sure how useful solving it would be, because even when it works, the saved mapping does not persist between separate startups of the program. This means that each time I open the program, I would have to projection map it all over again.
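For reference, the kind of persistence I was hoping for would look something like this with ofxXmlSettings, which ships with openFrameworks. This is just a sketch of the idea, not what the mapping add-on actually does, and all the names are mine:

```cpp
#include "ofMain.h"
#include "ofxXmlSettings.h"

// Save the four warp corners to XML on exit, reload them at startup.
struct QuadMap {
    ofPoint corners[4];

    void save(const std::string& path) {
        ofxXmlSettings xml;
        for (int i = 0; i < 4; i++) {
            xml.setValue("corner_" + ofToString(i) + ":x", corners[i].x);
            xml.setValue("corner_" + ofToString(i) + ":y", corners[i].y);
        }
        xml.saveFile(path);
    }

    void load(const std::string& path) {
        ofxXmlSettings xml;
        if (!xml.loadFile(path)) return;  // first run: keep whatever defaults we have
        for (int i = 0; i < 4; i++) {
            corners[i].x = xml.getValue("corner_" + ofToString(i) + ":x", corners[i].x);
            corners[i].y = xml.getValue("corner_" + ofToString(i) + ":y", corners[i].y);
        }
    }
};
```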

Instead, for my final performance, I think it will be easier to just mark out my props and pre-map it.

(Photo: rough prototype)

After yet another busy week of working, I have returned again with a violently mediocre project. Ok, maybe it is not that bad. I have figured out most of the logic and functionality I want from my code; there are just some things that I haven’t quite figured out how to do.

I have spent the last couple of days playing with the video player example and incorporating its code into mine, then making different buttons trigger different things, such as the honk sound and the play or pause button images, as well as causing the video to play or pause.
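Compressed down, the button logic looks something like this (illustrative names, and a key press standing in for the serial trigger):

```cpp
#include "ofMain.h"

// One trigger toggles the video and swaps which icon PNG is drawn on top.
class ofApp : public ofBaseApp {
public:
    ofVideoPlayer traffic;
    ofSoundPlayer horn;
    ofImage playIcon, pauseIcon, currentIcon;

    void setup() {
        traffic.load("traffic.mp4");
        traffic.play();
        horn.load("horn.wav");
        playIcon.load("play.png");     // the PNGs from my data folder
        pauseIcon.load("pause.png");
        currentIcon = playIcon;
    }
    void update() { traffic.update(); }
    void draw() {
        traffic.draw(0, 0, ofGetWidth(), ofGetHeight());
        currentIcon.draw(20, 20, 64, 64);               // status icon overlay
    }
    void keyPressed(int key) {                          // stands in for the serial trigger
        if (key != 'h') return;
        horn.play();
        bool nowPaused = !traffic.isPaused();
        traffic.setPaused(nowPaused);
        currentIcon = nowPaused ? pauseIcon : playIcon; // show the state we just entered
    }
};
```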

As far as my logic goes, these are the things I want my buttons to be able to do.

Ok… but I was planning on doing buttons with serial communication! I tried, but kept running into problems with my Teensy and was unable to get some simple functions to work. After a few hours of messing about, I moved on to making my openFrameworks code work.

Ratatouille, but if Remy were actually Gordon Ramsay

For our project this week, Lateefa and I decided to play a little with the idea of making our Teensy come alive. We figured that a good way to make the tilt, heading, and roll of the Prop Shield feel alive, in an understandable way connected to the body, was to hide it in a piece of clothing. And what is a more iconic piece of clothing than the chef’s hat from the Disney Pixar animated film Ratatouille?

In the movie, Remy the rat controls a human chef by sitting under his hat and steering his hands with his hair. In a similar fashion, our project makes it seem like Gordon may be pulling the user in the direction he chooses. The only difference is that Gordon shames the person instead of cooking delicious food for them.
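Under the hat, reading the Prop Shield looks roughly like the PJRC NXPMotionSense orientation example (reconstructed from memory, so treat it as a sketch):

```cpp
#include <NXPMotionSense.h>

NXPMotionSense imu;
NXPSensorFusion filter;

void setup() {
  Serial.begin(9600);
  imu.begin();
  filter.begin(100);  // sensor fusion at 100 Hz
}

void loop() {
  float ax, ay, az, gx, gy, gz, mx, my, mz;
  if (imu.available()) {
    imu.readMotionSensor(ax, ay, az, gx, gy, gz, mx, my, mz);
    filter.update(gx, gy, gz, ax, ay, az, mx, my, mz);
    // heading / pitch / roll are what drive how "Gordon" pulls the wearer around
    Serial.print(filter.getYaw());   Serial.print(",");
    Serial.print(filter.getPitch()); Serial.print(",");
    Serial.println(filter.getRoll());
  }
}
```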

Here is the code

Follow Me! (Floor Projection with Kinect)

This project uses a Kinect to sense the body, then draws triangles around it to make an interesting lattice that follows the user and changes color. 

 

My sketch draws triangles using the same grid that we made for the line-following sketch. Basically, the only things I had to change were the shape being drawn and the reference point of each vertex. This goes to show that small changes in aesthetic choices can turn a simple mechanism/sketch into a work of art.
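The core of the sketch is something like this (simplified, with the mouse standing in for the Kinect point; spacing and colors are arbitrary):

```cpp
#include "ofMain.h"

// Same grid as the line-following sketch, but each cell draws a triangle
// whose tip leans toward a target point.
class ofApp : public ofBaseApp {
public:
    void draw() {
        ofPoint target(ofGetMouseX(), ofGetMouseY());  // the Kinect point in the real piece
        int spacing = 40;
        for (int x = 0; x < ofGetWidth(); x += spacing) {
            for (int y = 0; y < ofGetHeight(); y += spacing) {
                ofPoint cell(x, y);
                ofPoint lean = (target - cell).getNormalized() * spacing * 0.6;
                ofSetColor(ofMap(cell.distance(target), 0, 400, 255, 60), 120, 200);
                ofDrawTriangle(cell + lean,                          // tip follows the target
                               cell + ofPoint(-spacing / 3, spacing / 3),
                               cell + ofPoint( spacing / 3, spacing / 3));
            }
        }
    }
};
```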

Another thing I like about the aesthetics of my final sketch is that the projection on the ground is not the only interesting aspect. The way that the lines move around your hands or your legs is just as mesmerizing. It is just the right amount of chaotic.

As far as building this bad boy goes, the process was relatively long. We began by porting a sketch in which lines followed the mouse around the screen from p5.js into openFrameworks. After figuring that out, we attached a depth camera and experimented with code that followed either the closest point in a depth image or the average point within a thresholded image. Then we added projection-mapping code for both the projector and the Kinect depth camera feed, so that the right area of the Kinect image would be sampled when finding the body under the projection. Finally, we hung up the projector and camera using tripod mounts, pointing them at the ground.
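The closest-point tracking works on a simple principle, something like this ofxKinect sketch (written from memory; the stride and range values are arbitrary):

```cpp
#include "ofMain.h"
#include "ofxKinect.h"

// Scan the depth image and return the nearest point in front of the camera.
ofPoint findClosestPoint(ofxKinect& kinect) {
    ofPoint closest(0, 0);
    float minDist = 10000;                               // mm; anything farther is ignored
    for (int y = 0; y < kinect.getHeight(); y += 4) {    // every 4th pixel is plenty
        for (int x = 0; x < kinect.getWidth(); x += 4) {
            float d = kinect.getDistanceAt(x, y);        // depth in millimeters
            if (d > 500 && d < minDist) {                // > 500 mm skips noise at the lens
                minDist = d;
                closest.set(x, y);
            }
        }
    }
    return closest;  // still in Kinect coordinates; the mapping step warps it to the floor
}
```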

We couldn’t get the calibration maps to save in the sketch, so each time we turn it on we need to calibrate it again, but it doesn’t take long. (We put a water bottle on each corner so we can map the depth camera to it.)

The Implicit Body Framework- Reading Reaction

Chapter three of Nathaniel Stern’s book Interactive Art and Embodiment describes his framework for designing and qualifying interactive art in relation to the body, and how a viewer of an interactive installation or performance uses their body to interact with it. In chapter one of the book, Stern wrote: “I pose that we forget technology and remember the body.” I believe that Stern’s Implicit Body Framework follows this sentiment, focusing on the body in a “sensorial context” before delving into the specifics of the technology. This is because in order to know what to build or how to design your technology, you first have to design your user experience. Stern breaks this experience up into four parts:

  1. Inquiry: The framing of the work as the artist sees it, as well as its presence in the world and online, such as its name. This also includes the way the artist approaches and critiques their own work. In a way, it is the idea.
  2. Description: The feeling of the work on the flesh: how it is built, where it is, and how it responds to the body. (Stern claims that in most publications on interactive art, this is as far as the study goes.)
  3. Interactivity: The way the body responds to the work, moving away from technological descriptions in order to study the person using or observing the work, allowing them then to form a sense of relationality.
  4. Relationality: The way the user processes the material relationship of the artwork and connects it with their own experiences and the world around them, giving the work a more significant context.

In the chapter, Stern describes each of these in order through his thorough analysis of a work by Golan Levin and Zachary Lieberman titled “Messa di Voce.” I think it effectively breaks down the entire work without taking away from the magic of the interactivity. It is interesting to note that the last two sections of the framework are a little less concrete and are explained in, I believe, less specific terms.

After reading the section, I decided to look up “Messa di Voce,” which I believe I’ve seen before (in Intro to Interactive Media), and found that the magic described in the book just wasn’t there. I believe this goes back to something Stern describes in section 3 of his book.

“I can point to videos that show what a piece looks like and does with the participants who engage with it, but the interaction itself will always be absent. How we move, sense, and think-feel, and even more importantly, what this highlights in doing and making, how we relate and perform, the work of the art, the inter-activities that are integral to it, and how they attune us to embodiment, can never be sufficiently captured and presented.”

This is the thing that I found most interesting and compelling about this reading, and may or may not be one of the reasons I changed my major to Interactive Media. 🙂

Blue Booth (Screaming Photo Booth) — Documentation

Functionality:

The Blue Booth is an interactive installation that lulls its users into a sense of security, then startles them with a loud scream from behind. The camera in the booth is motion activated and begins to record the user as soon as they walk in and exceed the motion threshold. After the user is scared, the video stops recording and plays back multiple times on the screen so that the user can see their reaction.

Process:

I thought it would be easy. The concept is simple.

Instead…

This was an incredibly difficult project that has consumed my life for the last month.

Each point of my to-do list took between 4 days and a week.

I based this project on an openFrameworks example called ofxVideoRecorder, which would record a video when one key was pressed and stop when another was pressed. My professor helped me alter the code so that after it recorded the video, it would present it on the screen. At this point, my troubles began. The project would not work on my laptop, so I spent several days figuring out which version of Xcode in combination with which version of openFrameworks would make it function. I found that the version of Xcode I needed would not work with the camera on my computer, so I ended up having to install all of the components I needed again on another computer, a Mac mini, and attach that to an external monitor and camera.

Even after I had that situated, however, it was incredibly difficult to keep the project running as I worked on it. Every time I moved a folder or wrote a new part of my code, it felt like the entire thing would catch fire and break apart. Eventually I worked through these problems, but it took time, over a week, maybe two, to get the project to consistently record video. In the middle of this struggle, I began planning the actual booth that my project would reside in. I decided to use metal wire shelves to both provide structure and house my components. I also made sleeve socks for the arm I attached my monitor to, in order to conceal any ugly patches and preserve the aesthetic of the booth.

The second (or is it third? or fourth??) major problem I faced was that I had real trouble making the timers for this project. Instead of asking for both an in-key and an out-key, I wanted the booth to record video for a set number of frames, play the noise, continue recording for a set number of frames, stop recording, and play the recording on loop for a set number of frames. I won’t go into detail about how long it took me to figure out how to do this efficiently, but it is embarrassing.
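For what it’s worth, the skeleton I eventually landed on is just a frame counter plus a state enum, checked once per update(). This is a simplified sketch; the stubbed methods stand in for the real ofSoundPlayer/ofxVideoRecorder calls:

```cpp
#include "ofMain.h"

// A frame counter plus a state enum, advanced once per update().
class BoothTimer {
public:
    enum State { RECORDING, SCARED, PLAYBACK };
    State state = RECORDING;
    int frames = 0;

    void update() {
        frames++;
        switch (state) {
            case RECORDING:
                if (frames > 120) { playScream(); go(SCARED); }             // ~4 s at 30 fps
                break;
            case SCARED:
                if (frames > 60)  { stopAndQueuePlayback(); go(PLAYBACK); } // catch the reaction
                break;
            case PLAYBACK:
                if (frames > 300) { armRecorder(); go(RECORDING); }         // loop, then reset
                break;
        }
    }

private:
    void go(State s) { state = s; frames = 0; }
    // Stubs: in the real project these talk to ofSoundPlayer / ofxVideoRecorder.
    void playScream() {}
    void stopAndQueuePlayback() {}
    void armRecorder() {}
};
```

The app’s update() just calls this once per frame; everything else hangs off those three transitions.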

(Photo)

Eventually, though, I figured it out. In my relaxation/break time, I constructed the booth itself. I ended up using three wire shelves, seven moving blankets, a blue rug, my computer kit, and many, many zip ties. I used the moving/sound blankets to isolate it from the outside world in a relatively easy way, making the booth its own little place. I also lit the booth with two eye-level lights and two violet floor lights to give it atmosphere and light the subject properly for the video.

The final step (before user testing) was to incorporate motion detection into the code, so that motion would act as the trigger, or in other words, that initial pressed key. I banged my head against this for a while until I got some help from my professor, who also helped me clean up the mess of code I had made along the way. Once this was working, I was finished!
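The motion detection itself follows the standard ofxOpenCv frame-differencing recipe, roughly like this (a sketch; the thresholds are made up and need tuning to the room):

```cpp
#include "ofMain.h"
#include "ofxOpenCv.h"

// Difference the current camera frame against the previous one and count
// how many pixels changed.
class MotionTrigger {
public:
    ofxCvColorImage color;
    ofxCvGrayscaleImage gray, prev, diff;
    bool allocated = false;

    bool check(ofVideoGrabber& cam, int pixelThreshold = 30, int countThreshold = 8000) {
        if (!allocated) {
            color.allocate(cam.getWidth(), cam.getHeight());
            gray.allocate(cam.getWidth(), cam.getHeight());
            prev.allocate(cam.getWidth(), cam.getHeight());
            diff.allocate(cam.getWidth(), cam.getHeight());
            allocated = true;
        }
        color.setFromPixels(cam.getPixels());
        gray = color;                    // convert to grayscale
        diff.absDiff(prev, gray);        // per-pixel difference against the last frame
        diff.threshold(pixelThreshold);  // keep only meaningful changes
        prev = gray;                     // remember this frame for next time
        return diff.countNonZeroInRegion(0, 0, diff.getWidth(), diff.getHeight()) > countThreshold;
    }
};
```

When check() returns true, the booth starts recording, exactly like that initial pressed key.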

Except I wasn’t.

I feel like the user testing section will provide good insight into some of the shortcomings my installation still had.

 

User Testing:

I have changed up some things since user testing, but here is a video of my mom, dad, and some friends trying out my booth.

Feedback and responses:

  • Get rid of the Keychain popup
    • I am working on this, but it pops up at seemingly random times, and because I had to manually reset the password on the computer I am using (which has some very specific features), I do not know the password to make it go away. This is also why I have been unable to get the computer/program to turn on automatically. The wifi popup window can be fixed by connecting the computer to the wifi.
  • “I’ve heard it scream before, so I know it is going to scream”
    • This booth only has one trick up its sleeve. Beyond that, the meaning it creates is based on how the user interacts with it. I believe that actively resisting the scream on the second or third time is simply another way the user can interact with the installation.
  • Have a loop of multiple scream sounds
    • I do believe this is a way I could make the experience more interesting and the functionality of the booth more variable. I have other sounds on file, but am still working on this.
  • Make the outside look more inviting
    • One suggestion was to use beads or other decorations to make the outside look prettier and cozier. I believe that having a sign that says “ENTER HERE” also accomplishes this.
  • Make the sound louder (the speaker is inconsistent)
    • Long story short, I need a new speaker. The one I have now is connected over Bluetooth and goes into sleep mode, only waking a few moments after being called. I tried to fix this in the code by playing a prompting sound to wake the speaker up, but that did not work.
  • Flash something on screen when the scare happens
    • I think this could add another layer of interest to the booth, but I also believe it limits the kinds of interaction the user can have and may tell the user too much about what the booth is/does.
  • Make it more like a photo booth (props, countdown)
    • I think adding some of these features could formalize the user interaction and could be cleverly manipulated to heighten the experience and the scare.

 

 

Code:

Here is a copy of the code that I made until I can get it on GitHub.

Reflection:

In all honesty, I am unsatisfied with this project, even after over a hundred hours of work. The disappointing thing is that I feel so close to making the booth into something I am really proud of, but I am burnt out and don’t have any more time. I have faced so many roadblocks while making this project that it feels like I have done little but tackle the things in my way. It feels like I have mostly accomplished what I said I was going to do, but I have not made something great.

 

 

 

Petition to Rename This Class Sensors, Body, Motion, and File Management (an update on my midterm project presented as a series of screenshots)

Exhibit A

After class last Wednesday, I was optimistic about my project. I was confident in my plan and project theory. I knew the steps moving forward and was prepared to take them. I still needed to figure out some of the logistics: mainly, I had no way of recording video in openFrameworks yet and needed to figure out how to do that.

By Thursday morning, however, I had become very sick (with bronchitis). Perhaps it was an omen.

Exhibit B

I spent the weekend fiddling around with openFrameworks, figuring out whether I should use computer vision instead of capacitive touch to trigger my photo booth. After being bedridden all weekend, I was finally prepared to get back to making this video recorder tool work in my project. I began by researching and downloading a few repos with code that could do what I wanted. Aaron helped me by building a beautiful ofxVideoRecorder add-on that needed openFrameworks 0.9.8 in order to run properly. Still it would not work, so we did some research and found that it would probably work if I used Xcode 9.4, a much earlier version.

Exhibit C

I had no news: not because I had gotten the project file to build, but because other work was keeping me up late, and I had no luck getting it to work by chasing the error messages.

Exhibit D

I worked and worked and stared at code, tried to understand it, cleaned up my files, made sure every file was pointing to the right place, and the project still would not work. For a long time the project wouldn’t even build, but eventually I found the right combination of code, openFrameworks version, and Xcode version. Finally the project opened! It did what it was supposed to! Almost.

The webcam on this version of Xcode would not work. I just had to fix it. I sent Aaron a message about it.

Exhibit E

Aaron explained to me that it would be impossible to get my camera to work on this version of Xcode, which was heartbreaking, as it is the only version of Xcode that will actually build this code.

Exhibit F

Pure Despair 

Exhibit G

Ok, so this screenshot is a little out of context. It is actually from an email where Aaron told me not to give up. I do not plan to give up. This has been a long rabbit hole, but I need to get this figured out.

Our first solution is to get a new (older) laptop from the lab so that I can get the laptop and the coding programs to line up. Ideally this will make it so that I can finally get an openFrameworks project to record video and play it back. Once this happens I will finally be able to incorporate the features of my project that I want to. 

If this does not work, I think it could be possible to use a different program to complete the same task, but I need to do more research.