Week 2 AI Arts Assignment – Cassie Ulvick

For this week’s assignment, I played around with an ml5.js project called BodyPix_Webcam. I was drawn to this particular project because it reminded me a lot of a green screen, or of the Photo Booth app on MacBooks.

Basically, it detects the presence of a body through your laptop’s webcam. The output is a real-time video of everything covered in black except for the detected bodies. When I was testing it, it worked pretty well when I was by myself with no one else to detect.
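Conceptually, the masking step is simple: for each pixel of the webcam frame, the model decides person or not-person, and everything that isn’t a person gets blacked out. Here is a minimal sketch of that step in plain JavaScript — the function name and the flat RGBA/mask layout are my own illustrative assumptions, not the actual ml5.js API:

```javascript
// Black out every pixel the segmentation mask marks as background.
// `pixels` is a flat RGBA array (4 values per pixel), like canvas ImageData.
// `mask` has one entry per pixel: 1 = person, 0 = background.
// (Hypothetical helper for illustration, not part of ml5.js.)
function maskBackground(pixels, mask) {
  const out = pixels.slice();
  for (let i = 0; i < mask.length; i++) {
    if (mask[i] === 0) {
      out[i * 4] = 0;     // R
      out[i * 4 + 1] = 0; // G
      out[i * 4 + 2] = 0; // B
      // leave alpha (out[i * 4 + 3]) untouched
    }
  }
  return out;
}
```

Running this on every frame of the webcam stream is what produces the real-time “everything black except the bodies” effect.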

I was curious how well it would work with multiple people, so I asked my friend to test it out with me. This didn’t turn out as well, as parts of our faces were covered in black.

This project was interesting to me because of its potential applications. Because the bodies are detected in real time, the approach could improve a lot of existing green screen systems. In Photo Booth, for example, some effects let you change the background of your photo. To use them, though, you first have to step out of the camera frame so the app can compare the background with and without you in it and work out where your body is. BodyPix_Webcam eliminates this step. If its approach were applied to Photo Booth, I think it would create a better experience for users of the background effects, though the model would need further training so that multiple people could be detected more accurately.
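The background-replacement idea above can be sketched the same way as the masking step: instead of blacking out non-person pixels, copy them from a replacement background image. Again, the function name and data layout are illustrative assumptions, not the Photo Booth or ml5.js API:

```javascript
// Composite a person onto a new background using a per-pixel mask.
// `frame` and `background` are flat RGBA arrays of the same size;
// `mask` has one entry per pixel: 1 = person (keep frame pixel),
// 0 = background (take the replacement image's pixel).
// (Hypothetical helper for illustration.)
function replaceBackground(frame, background, mask) {
  const out = new Array(frame.length);
  for (let i = 0; i < mask.length; i++) {
    const src = mask[i] === 1 ? frame : background;
    for (let c = 0; c < 4; c++) {
      out[i * 4 + c] = src[i * 4 + c];
    }
  }
  return out;
}
```

With a real-time mask, this swap can happen on every frame, so the user never has to leave the shot for a background calibration step.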

Week 1 Artificial Intelligence Arts Assignment, Cassie Ulvick

Case Study: New Nature by Marpi


New Nature was an exhibit I visited this past year at ARTECHOUSE, a technology-focused art gallery in Washington, DC. It was digital artist Marpi’s first large-scale solo exhibition, inspired by the biology, ecology, and underlying mathematics of the natural world.

About the Artist

Marpi is a Polish digital artist based in San Francisco with a focus on 3D worlds, AR and VR, interactive art, and storytelling. He is interested in works where viewers participate in the creation of the artwork, which he accomplishes by making his pieces interactive, scalable, and multiplatform.

About the Work

New Nature was essentially a representation of different creatures and organisms, rendered in a mathematical, geometric, almost futuristic visual aesthetic.

The main room of the exhibition included large screens displaying a giant creature that visitors could interact with by using an app on their smartphones to feed it. From there, the viewer was able to see how the creature moved and interacted with the food it was fed.

Another section of the exhibit, my favorite part, included smaller screens with smaller individual creatures. Each screen had a Kinect sensor attached to the bottom that would detect how the user moved their hand, visually displaying their hand and its interactions with the creature on the screen.

The exhibit incorporated machine learning so that the more the audience interacted with the creatures, the more complex behaviors the creatures would perform.

Overall, New Nature aimed to explore the intersection between the rigidity of technology and the fluidity of the natural world. The use of machine learning supported Marpi’s goal: it let him give the creatures in his artwork more realistic behavioral patterns, so that they could learn and adapt just as a real organism would.

Sources

  • https://www.dc.artechouse.com/new-nature
  • https://www.marpi.studio/exhibitions/new-nature