So far, my project is coming together smoothly(ish). It's moving kind of slowly, but I think I'll get there just fine in the next couple of days.
I haven't been able to get much done over the short break aside from fleshing out the narrative, since I needed to check out a couple of things (sensors + mini SD card + a little LCD screen?) from the Lab but no one was on duty. I'm hoping to get that done ASAP now that everyone is back; it shouldn't be that big of a problem, as I've been working with one proximity sensor to get some things running and I'll split things up later.
My plan is to figure out how to fake electronics turning on when my robot gets near them, and the easiest way is to use a proximity/distance sensor to trigger things happening. I managed to fix a lamp that was on the junk shelf and connected it to my Arduino and the sensor (I'm kinda happy with how it looks since it's a different kind of light source); the lamp turns on when something is close. I also mixed some LEDs in with fairy lights and have those turn on based on proximity. Since I'm not confident using the relay switches we have, I thought of a way to "fake" them turning on; the limited number of LEDs that light up makes everything look old and janky, which is the aesthetic I'm going for in my performance. The trigger logic looks roughly like the sketch below.
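Here's a minimal sketch of the proximity trigger, assuming an HC-SR04-style ultrasonic sensor (the actual sensor may differ); the pin numbers and distance cutoff are placeholders:

```cpp
// Minimal proximity trigger: lights turn on while something is close.
// Assumes an HC-SR04 ultrasonic sensor; pins and threshold are placeholders.
const int TRIG_PIN = 9;
const int ECHO_PIN = 10;
const int LIGHT_PIN = 5;          // LEDs / fairy lights (or the lamp's driver)
const float THRESHOLD_CM = 30.0;  // "close enough" distance

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(LIGHT_PIN, OUTPUT);
}

void loop() {
  // Send a 10us ping and time the echo to estimate distance.
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);

  long duration = pulseIn(ECHO_PIN, HIGH, 30000);  // ~30 ms timeout
  float distanceCm = duration * 0.034 / 2.0;       // speed of sound, round trip

  // Light up while something (the robot) is near; 0 means the echo timed out.
  if (duration > 0 && distanceCm < THRESHOLD_CM) {
    digitalWrite(LIGHT_PIN, HIGH);
  } else {
    digitalWrite(LIGHT_PIN, LOW);
  }
  delay(50);
}
```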
I also have two fans and a mobile phone I still need to get working, and I think with those I'd have enough pieces for my show aside from my robot. My idea for the phone is to place it face down with a very thin acrylic box on the screen, containing LEDs/NeoPixels that flash to fake the phone lighting up; I'll also hide a buzzer to fake the sound of a phone ringing, though it might be too quiet for the show? Something like the sketch below.
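A minimal sketch of that ring-and-flash idea, assuming a simple piezo buzzer and a plain LED pin standing in for the NeoPixels (all pins, tones, and timings are placeholders):

```cpp
// Fake "phone ringing": flash the lights under the acrylic box and pulse a
// buzzer in a ring-ring cadence. Pins and timings are placeholders.
const int BUZZER_PIN = 8;
const int FLASH_PIN  = 6;  // simple LED pin standing in for NeoPixels

void setup() {
  pinMode(BUZZER_PIN, OUTPUT);
  pinMode(FLASH_PIN, OUTPUT);
}

void ringBurst() {
  // Two short trills, like a classic ringtone.
  for (int i = 0; i < 2; i++) {
    digitalWrite(FLASH_PIN, HIGH);
    tone(BUZZER_PIN, 1000);   // 1 kHz; try other frequencies if it's too quiet
    delay(400);
    noTone(BUZZER_PIN);
    digitalWrite(FLASH_PIN, LOW);
    delay(200);
  }
}

void loop() {
  ringBurst();
  delay(2000);  // pause between rings
}
```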
My plan for this weekend is to get all my components working in tandem and finalize my script by Saturday morning at the latest, so I can dedicate the remaining couple of days to fine-tuning and getting my robot alive again :]
If I somehow manage to finish everything way ahead of time, I'll try to build a mini stage with some props to create some sort of world instead of having everything just on the floor.
WorldWideWaste.AE is an interactive installation about the state of the ocean and its pollution. The piece primarily focuses on UAE beaches to raise awareness of how polluted, yet unseen, they are: the majority of people visit public beaches that are thoroughly cleaned and take that as the reality of our beaches, while private beaches are nearly impossible for individuals to clean and maintain. Another aspect of this piece is to bring attention to e-waste by utilizing an old TV that would otherwise have been thrown out, seeing as most people tend to forget that tech is also part of the problem.
Concept:
From the start of the semester I knew that I wanted to work on something that had a message and purpose, rather than an interactive piece for the sake of interactivity. With that in mind, I found myself thinking about what local problems I would like to bring awareness to, and ocean pollution was the first thing that came to mind.
I come from a family of fishermen, and interacting with the sea has always been part of my culture and childhood; for me, pollution was part of that childhood: walking down a beach, finding all sorts of junk, and bringing it back home or just playing around with it. The image of our beaches always being trashed is the reality I see when I think of any beach in the Emirates, but the opposite is true for non-locals, for whom our country curates clean beaches. Seeing people around me genuinely believing that our beaches are clean, that we're not suffering from pollution, was something I wanted to tackle through my installation. Seeing everything through a clean lens creates further ignorance of how real pollution and waste are. The drive to create this installation stems from my personal experience and my urge to show people the reality we live in.
Another aspect of this project is that it is not only relevant to the UAE but to the entire globe; however, putting in the effort to bring local garbage from the ocean as a kind of evidence of the masked reality we're living in is what makes it personal. The final part of this project is the technology I'm using to output the visuals: I opted for an old TV that was about to be disposed of instead of relying on the latest technology (i.e. the newest projector/TV), as people tend to forget that e-waste is a real problem and we're all part of it. When one thinks of ocean pollution, the first things that come to mind are plastic and straws; we moved on to reusable straws, reusable bags, and reusable cups, but we immediately throw out old tech for new tech, turning a blind eye to how it affects the environment. In terms of the project's interactivity, I wanted people to feel complicit and involved in what is going on, pushing them to want to take things out and clean up. Ideally, if this were an installation outside of class, I would have wanted to leave it up and not tamper with it, just to see what people would decide to do: would they choose to make it worse or clean it completely?
Process:
Now that my project is finally complete, I can proudly say that this has been one of the most difficult projects for me, mainly because I decided to step far out of my comfort zone and look at tools I would usually not have touched, including the depth camera (Intel RealSense), shaders, addons, openFrameworks, etc.
I hadn't had any experience with anything I used for this project, and despite all the issues I ran into, I can definitely say I learned a lot, even if shaders and some addons did not end up in my final project.
The first thing I did was find an addon that worked with the RealSense camera. I struggled to find one, mainly because most of them were made for Mac and I didn't have the knowledge or experience to port them to Windows. I managed to find one that worked thanks to Aaron's help, and with that I started working. I used depth subtraction to keep what is close and discard what is far; then, based on the amount and closeness of the detected objects, the opacity of the image is adjusted to fade in and out. This creates an instant response to the interaction, so the user can tell that what they're doing is affecting the image displayed; the fade logic is sketched below.
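Roughly, the fade works like this (a sketch, not my exact code: countNearPixels() stands in for the addon's depth-subtraction step, and alpha, maxNearPixels, and the two images are assumed to be declared in ofApp.h):

```cpp
// Inside ofApp (openFrameworks): fade between images based on "near" activity.
void ofApp::update(){
    int nearCount = countNearPixels();  // pixels closer than the depth cutoff

    // Map the amount of near activity to a target opacity, then ease toward it
    // so the fade responds instantly but still feels smooth.
    float target = ofMap(nearCount, 0, maxNearPixels, 255, 0, true);
    alpha = ofLerp(alpha, target, 0.2);
}

void ofApp::draw(){
    ofEnableAlphaBlending();
    cleanImage.draw(0, 0);              // the "clean" image underneath
    ofSetColor(255, 255, 255, alpha);   // polluted layer fades as people interact
    pollutedImage.draw(0, 0);
    ofSetColor(255);
}
```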
In terms of the physical parts of the installation, I went to a beach near Al Maqta' and collected as much trash as I deemed necessary, along with some sand to fill the box I had made. Putting it all together was not too complicated, as I had planned it out ahead of time and just needed all the pieces to come together. One thing I overlooked was a way to guide people to interact with the piece without explicitly giving them instructions. The workaround I found over the break was a grabber: I hung it up with a note asking users to return it after they're done, which insinuates that they can, and should, try to use it.
User Testing:
I tested my project with multiple people, and the consensus was: they had fun! This surprised me, as honestly I thought the interaction might be too slow, or require so much work that people would rather not engage with it, but it turned out I was wrong. All my users understood what to do without me saying anything about the project, which was also surprising, as I was worried I might have to be more explicit with my instructions; seeing the image fade into another image instantly reaffirms to the user that they're doing what they're supposed to. Another point I received was that the spot where the camera is mounted is a little intrusive and acted as an obstacle when people tried to move things around, which means I need to find a better mounting position where the camera can still see what it needs to.
I finally managed to figure out a way to make the Intel RealSense camera serve my purposes, after a long struggle with the addons, Visual Studio, and openFrameworks. Although it's relatively basic, I worked off the sample code and created a distance detector (somewhat?).
The way it works is basically by subtracting everything beyond a certain distance and detecting only what is close enough; a counter then tallies the number of pixels being detected, and if it passes a certain threshold the program starts producing an output, which for now is a sound. Roughly like the sketch below.
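The addon wraps things differently, but in plain librealsense2 terms the loop is roughly this (the distance cutoff, pixel threshold, and playSound() are all placeholders of mine):

```cpp
#include <librealsense2/rs.hpp>

int main() {
    rs2::pipeline pipe;
    pipe.start();  // default config includes a depth stream

    const float MAX_DIST_M = 0.6f;   // "close enough" cutoff, placeholder
    const int   MIN_PIXELS = 5000;   // how many near pixels before triggering

    while (true) {
        rs2::frameset frames = pipe.wait_for_frames();
        rs2::depth_frame depth = frames.get_depth_frame();

        // Count pixels within the cutoff distance (skip zeros, since 0 means
        // "no depth data" on the RealSense).
        int nearCount = 0;
        for (int y = 0; y < depth.get_height(); y++) {
            for (int x = 0; x < depth.get_width(); x++) {
                float d = depth.get_distance(x, y);
                if (d > 0 && d < MAX_DIST_M) nearCount++;
            }
        }

        if (nearCount > MIN_PIXELS) {
            // playSound();  // stand-in for however the output is produced
        }
    }
}
```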
The input method can be adjusted based on my needs, so I think once I have all my physical components I'll be able to further fine-tune what I currently have and hopefully make it much more accurate.
Both of the readings had me thinking more about the power and effectiveness of the human body in interactive art/installations; despite being an IM major, I found myself looking back at my past projects and noticing how little I utilized the human body.
Since the definition of interactive art is always being debated, I found myself wondering: if we use the human body to interact with a work, does that make the work interactive? And if so, would a scripted interactive work be less of an interactive piece and more of a displayed project? For example, how do interactive performances fall under this category if the audience is not part of the performance and just views it from the outside? What if an entire performance is scripted to look like an interactive work but is actually entirely choreographed; does the audience's perceived interactivity then define it as interactive?
I don't really have answers to these questions myself, but I couldn't help asking them while going through the readings, and I thought they would be interesting to share.
I also found the part towards the end of Nathaniel Stern's excerpt on page 6, about the affordances of interactive art, to be interesting, especially the point that the tools don't create the interactive installation; rather, the situation it creates does. Thinking of interactive art in terms of situations and experiences/emotions, rather than just whether it is "interactive" in the technical sense, is something I hope to keep in mind as I continue with IM.
Coming up with ideas for our midterm has been quite the challenge, especially since it's so early in the semester, but one thing I've known for sure is that I want to aim for an installation with meaningful interactions. With that in mind, I came up with two rough ideas that I sketched out to visually represent what I'm looking to create.
First idea:
For my first concept, I want to make an installation about our current environmental state, specifically ocean pollution. As illustrated below, the idea is a large sandbox filled with trash where the interactions take place; as users touch/pick up trash, the output (screen/projector?) starts clearing the muddy, gross visuals and replacing them with ripples, accompanied by pleasant sounds.
In terms of technicalities, I'm still not entirely sure how this would be executed without directly wiring all the garbage to measure the input, but I also feel the installation would not be as powerful if users couldn't physically pick things up, which again would be an issue if everything were wired directly. If possible, I want users to have complete freedom with the sandbox; maybe they can add more trash to make it worse, or permanently remove things to make it pretty?
Second idea:
Moving forward with the purpose of meaningful interactions, I thought the touch-as-input exercise we did in class (where we all held hands) was interesting, and it inspired my next idea. This installation is about coming together as a group: interacting with the piece would generate some sort of visual output (maybe ripples again, because I think they're pretty) along with sound. Since I want to emphasize the idea of coming together, the visuals would start quiet and dull, but the more people join, the more vibrant they become. As for what the invitation to input would look like, I'm thinking maybe two arms (mannequin arms?) extending from the screen that insinuate being held/touched/interacted with. If possible, I'd also like each person to be represented by a different color in the output, so the user/audience can clearly see what their presence equates to.