Progress So Far

I would have liked to make more progress by this stage, but a few changes of idea have left me not completely on track with my production timeline. So far, I have finalized my idea and worked on adapting an example that uses shaders to create glitch effects on images so that it works with live video from a Kinect. Ultimately, I want the effects to be triggered by body motion, with the glitch mapped onto the interactor's body silhouette (at the moment the effects are only triggered by keyboard keys 1-3 and are not connected to the depth data obtained from the Kinect). Since I am using three different glitch effects, I would also like them to change based on the interactor's distance from the screen (the closer you get to the screen, the more muffled you appear).
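Roughly, connecting the depth data to the effect switching could look something like the sketch below. This is a minimal sketch only, assuming an ofxKinect instance and a hypothetical currentEffect index that the draw code would use to pick a shader; the distance thresholds are placeholders I would still need to tune in the space.

```cpp
#include "ofMain.h"
#include "ofxKinect.h"

// Minimal sketch (not the finished piece): pick one of the three glitch
// effects from the interactor's distance instead of keys 1-3.
// "currentEffect" is a hypothetical index the draw code would use to choose
// a shader; the distance thresholds are placeholders to be tuned on site.
class ofApp : public ofBaseApp {
public:
    ofxKinect kinect;
    int currentEffect = 0;

    void setup(){
        kinect.init();
        kinect.open();
    }

    void update(){
        kinect.update();
        if(kinect.isFrameNew()){
            // Distance in millimetres at the centre of the 640x480 depth
            // frame, used as a rough stand-in for how close the interactor is.
            float distance = kinect.getDistanceAt(320, 240);

            if(distance > 0){
                if(distance < 1000)       currentEffect = 2; // closest: most "muffled"
                else if(distance < 2000)  currentEffect = 1;
                else                      currentEffect = 0; // furthest: mildest glitch
            }
        }
    }
};
```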

I think my main issue so far is that I relied too heavily on the Kinect addon library rather than on OpenCV, which would be much more useful for my project's needs.
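For reference, one way the OpenCV side could look, as a minimal sketch rather than working project code: threshold the Kinect depth image so that only the interactor's depth band survives, then run a contour finder over it to get the body blob. The near/far threshold values below are placeholders.

```cpp
#include "ofMain.h"
#include "ofxKinect.h"
#include "ofxOpenCv.h"

// Sketch of the ofxOpenCv step: threshold the depth image so only the
// interactor's depth range remains, then find the body contour.
class ofApp : public ofBaseApp {
public:
    ofxKinect kinect;
    ofxCvGrayscaleImage depthImage;   // thresholded depth, white = body
    ofxCvContourFinder contourFinder;
    int nearThreshold = 230;          // placeholder values
    int farThreshold  = 70;

    void setup(){
        kinect.init();
        kinect.open();
        depthImage.allocate(640, 480);
    }

    void update(){
        kinect.update();
        if(kinect.isFrameNew()){
            depthImage.setFromPixels(kinect.getDepthPixels());

            // Keep only pixels inside the interactor's depth band.
            ofPixels & pix = depthImage.getPixels();
            for(size_t i = 0; i < pix.size(); i++){
                pix[i] = (pix[i] > farThreshold && pix[i] < nearThreshold) ? 255 : 0;
            }
            depthImage.flagImageChanged();

            // Keep only the largest blob, which should be the body silhouette.
            contourFinder.findContours(depthImage, 1000, 640 * 480 / 2, 1, false);
        }
    }
};
```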

My next steps are: 

- To map the glitch effect onto the interactor's body, taking the depth image and distance into account, and then to test the depth and distance threshold values within the installation space.

- To resolve the issue with the ofFbo container (used to hold the textures, in this case the glitch effects) and the way it potentially hinders drawing the effects onto a particular "blob", i.e. the body silhouette (a possible approach is sketched after this list).
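One possible way around the ofFbo issue, sketched below as an assumption rather than something I have working: keep rendering the glitched camera image into the FBO as before, but pass the thresholded depth image to the glitch shader as a second texture, so the fragment shader only applies the glitch where the mask is white (i.e. on the body). The shader and member names here are placeholders, not the actual project code.

```cpp
// Sketch only: drawing the glitch effect into the ofFbo while restricting it
// to the body silhouette. Assumes "glitchShader" is one of the three glitch
// shaders, "depthImage" is the thresholded ofxCvGrayscaleImage (white = body),
// and "fbo" has been allocated at the camera resolution. The fragment shader
// is expected to sample "maskTex" and only glitch pixels where it is white.
void ofApp::draw(){
    fbo.begin();
    ofClear(0, 0, 0, 0);

    glitchShader.begin();
    glitchShader.setUniform1f("time", ofGetElapsedTimef());
    glitchShader.setUniformTexture("maskTex", depthImage.getTexture(), 1);
    kinect.draw(0, 0, 640, 480);   // colour image, distorted by the shader
    glitchShader.end();

    fbo.end();
    fbo.draw(0, 0);
}
```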

One thought on “Progress So Far”

Hi Sarah,

I’m still unsure about the concept behind the piece. This sounds like a software mirror. Did you give some more thought to what the people are distorting and why they’re distorting it? Why are they distorting it in a particular way (glitch)?
