After Effects to Maya Pipeline




The flow from After Effects to Maya worked perfectly this week until I tried to link the video content to the plane. At that point, the video wouldn’t connect. I renamed the files, reimported them, and restarted the project, all to no avail.

I am looking forward to office hours to fix the situation. 

 

Pixel by Pixel Final – Experiments with Perlin Noise

For my final, I wanted to experiment with Perlin Noise in a live video feed. 

First, a little on Perlin noise and why it’s so beautiful. From Wikipedia: “Perlin noise is a procedural texture primitive, a type of gradient noise used by visual effects artists to increase the appearance of realism in computer graphics. The function has a pseudo-random appearance, yet all of its visual details are the same size.”

Examples: I’m drawn to the smooth-flowing randomization of the texture. 

Example of Perlin Noise

Example of Perlin Noise

For my piece, I wanted to work with the motion of Perlin Noise but isolate it to a single point that distorts and eventually destroys the original video feed. I didn’t realize it until I built it, but it appears to be a virus methodically taking over the screen. 

Code:

To achieve this sketch, I needed to learn how to produce the noise and deconstruct it into a single point. For the “noise walker,” I called the noise function and played around with the math to get it moving in a pattern I enjoyed. 

For the live video feed to keep updating, I needed to call background() every frame, but the “walker” can’t leave a printed trail when the background keeps clearing it. To get the printing effect, I drew the ball into a PGraphics layer instead.

Finally, the ball was printing too often. I originally worked with frameCount to slow it down, but that drastically affected the video feed. Instead, I limited the printing to every other frame with an if statement, as sketched below.
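
Below is a minimal sketch of that approach rather than my exact final code: the walker’s position comes from noise(), the printing happens in a PGraphics layer so the trail survives the video being redrawn, and an if statement limits printing to every other frame.

import processing.video.*;

Capture cam;
PGraphics trail;          // persistent layer the walker prints into
float tx = 0, ty = 1000;  // independent noise offsets for x and y

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
  trail = createGraphics(640, 480); // starts transparent, so it overlays the video
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0); // fresh video frame every draw

  // noise() returns a smooth 0..1 value; map it to screen coordinates so
  // the walker wanders fluidly instead of jumping like random() would
  float x = noise(tx) * width;
  float y = noise(ty) * height;
  tx += 0.005; // step sizes tuned by eye
  ty += 0.01;

  if (frameCount % 2 == 0) { // print only every other frame
    trail.beginDraw();
    trail.noStroke();
    trail.fill(255, 0, 0, 180);
    trail.ellipse(x, y, 20, 20);
    trail.endDraw();
  }
  image(trail, 0, 0); // overlay the accumulated trail
}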

Thank you to Danny Rozin for walking me through several coding roadblocks throughout this process during office hours.


Example Code 1
Example Code 2

Finals Piece:

Experiments in AR

For this project, we were tasked with creating a live sculpture using AR or VR. I chose to wrestle with Unity and Vuforia to produce an AR experience revolving around two of my favorite things in NYC: natural spring flowers and convenience-store flower stands.

As much as I don’t like using single-use plastic, there is something beautiful about the flowers wrapped in plastic and the plastic curtains draped in front of the stores. These dying plants are being preserved as much as possible. I don’t know; I’ve always been drawn to them.

I wanted to use the idea of plastic wrap and preservation to “protect” the flowers blooming on the trees around Brooklyn. I also wanted to experiment with the flowers at the convenience store. 

First, I set out to photograph the flowers. 

Flowers


Flowers

Flowers

Flowers

Flowers

Flowers

Back home, I took these images and made collages in Photoshop. To complete each image, I added a final layer of “plastic” wrap.

Flower Collage

Flower Collage

Flower Collage

Flower Collage

Flower Collage

Flower Collage

Flower Collage

Next, I built my scene in Unity. The goal for the AR experience was to have dead flowers trigger the sculpture. Luckily, I had dead flowers in my apartment. Once triggered, the screen would fill with the sculpture. I wanted the user to have to rotate the phone and look around to experience the whole piece.

Unity ScreenShot

Unity ScreenShot

Unity ScreenShot

Unity ScreenShot

Unity ScreenShot

Problems faced: 

  • Getting the Unity build onto my phone. For some reason, every time I exported the build, it would make a folder, but the folder was empty. 
  • Lighting on one of the faces. The sculpture works, but the lighting on one of the planes is always dark. I was unable to solve that problem. 
  • I thought the trigger would still activate on the actual live flowers if they were lined up the same way as the image. That experiment failed. 

In the future:

Now that I’ve built this, I think it needs a sound element; I should record the ambient sound of the neighborhoods. I’d also like the trigger to be the actual dead flowers, not an image of them. 

Fitting In

For this project, we were tasked with referencing an art movement. Noah and I worked together again, referencing the Hairy Who, a 1960s collective from Chicago. Their work moves both Noah and me. The collective was made up of six artists who created paintings and sculptures of deformed figures, surreal wallpaper, and abstracted everyday objects, all addressing the difficulty of fitting into this world. 


Another component we were drawn to was that they painted directly on acrylic panels with acrylic paints, which gives the work a unique lighting effect. 

Our idea: 

Using Isadora and a projector, we will have the subject contort themselves to fit within a surreal Hairy Who silhouette. The live video feed of the attempt will be recorded and then played back to the viewer. To get the subject/viewer to engage, they will be presented with a silhouette and told to fit in. After six seconds of recording, the silhouette will disappear, and the video will be played back on a randomly generated background (another reference to the Hairy Who’s unique use of wallpaper in their shows). 
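
The real patch runs in Isadora, but the record-then-playback logic can be sketched in Processing. This is only an illustration of the timing, with a solid random color standing in for the generated wallpaper:

import processing.video.*;

Capture cam;
ArrayList<PImage> clip = new ArrayList<PImage>();
boolean recording = true;
int playhead = 0;
color bg; // stand-in for the randomly generated background

void setup() {
  size(640, 480);
  frameRate(30);
  cam = new Capture(this, 320, 240); // small frames so six seconds fits in memory
  cam.start();
  bg = color(random(255), random(255), random(255));
}

void draw() {
  if (recording) {
    if (cam.available()) {
      cam.read();
      clip.add(cam.get()); // store each frame of the attempt
    }
    image(cam, 0, 0, width, height); // show the live feed while recording
    if (frameCount > 6 * 30) recording = false; // ~6 seconds at 30 fps
  } else if (clip.size() > 0) {
    background(bg); // playback happens over the "wallpaper"
    image(clip.get(playhead), 40, 30, width - 80, height - 60);
    playhead = (playhead + 1) % clip.size(); // loop the recording
  }
}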

Below is the production and piece documentation. 

Silhouette of a Hairy Who painting

Silhouette of a Hairy Who painting

Silhouette of a Hairy Who painting

Production of our frame

Production of our frame

Production of our frame

Production of our frame

After presenting, we came up with a list of ways to polish this piece: 

  • Hide the projector
  • Make the frame 2 or 3 times bigger
  • Clean up the silhouette
  • Find a way to present all the videos, maybe in another room

15 minutes – Video Sculpture

Early ads for television placed the TV as the main point of entertainment in the home: a safe place for the family to get together and enjoy their favorite shows. As time moved on, our desire to digest media didn’t change, but the device did, in shape, portability, and how it interacts with us. Modern televisions record our voices, images, and data but remain disguised as fun entertainment. 

How it works: the camera on top constantly records the room. The feed runs through Isadora and is output in two channels. The first is set to a 15-minute delay and rear-projected on the main screen; our thought is that in a gallery setting, a viewer would encounter footage of a previous viewer. Every two minutes, the main screen cycles off to reveal an iPhone in the back of the casing live-streaming the camera feed. 
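
We built the delay in Isadora, but the core idea can be sketched in Processing with a ring buffer of frames. The buffer below holds only a few seconds (15 minutes of frames would not fit in memory), and the two-minute cycling to the phone is left out:

import processing.video.*;

Capture cam;
PImage[] buffer;       // ring buffer of past frames
int head = 0;          // where the newest frame is written
int delayFrames = 150; // ~5 seconds at 30 fps; the real piece delays 15 minutes

void setup() {
  size(640, 480);
  frameRate(30);
  cam = new Capture(this, 320, 240); // small frames to keep the buffer light
  cam.start();
  buffer = new PImage[delayFrames];
}

void draw() {
  if (cam.available()) {
    cam.read();
    buffer[head] = cam.get();        // store a copy of the newest frame
    head = (head + 1) % delayFrames; // advance the write position
  }
  // the slot we are about to overwrite holds the oldest frame;
  // the screen stays blank until the buffer fills
  PImage delayed = buffer[head];
  if (delayed != null) image(delayed, 0, 0, width, height);
}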

Flat lay of materials tested for a rear-projection screen

One of the hardest parts of the physical construction was finding a material that could work as a rear-projection screen. The right balance between opaque and transparent was difficult to find.

Noah removing a screen-printing screen

Eventually, we landed on a screen-printing screen that worked perfectly. 

Working in Isadora to create a video loop

We experimented with multiple designs and tools to relay the image through Isadora back to the screens.

Portrait of Sherry, a puppy

Project manager Sherry. 

Rear view of homemade rear projection screen

Custom rear-projection screen. 

Tube television with rear projection

View from the front with rear-projection activated. 

Tube television with rear projection

View from the front with rear projection off and the live feed activated on the phone. 

Tube television with rear projection

View from the front with rear-projection activated. 

Tube television with rear projection

View from the top with a projector in the back. 

Tube television with rear projection

Final view.

Inversion Video

This week we were tasked with manipulating the color values of pixels in a sketch. I chose to work with a live video feed and an overlay. 

To achieve this, I made two sketches: first, the spiral background, and second, the inverted video feed with the banding.

For the video feed, I decided to invert the colors of the red channel only. The choice was purely aesthetic. I called single bands of pixels along the x- or y-axis and applied color manipulation to make the banding effect. I tried using arrays so that I could move the bands quickly later.

PROBLEMS:

      • Centering the video feed – Whenever I tried to translate it, the video feed crashed. 
      • Layering the graphic below the video feed – To get a taste of what I was going for, I left the graphic on top but turned its opacity down. 

Code: 

// The world pixel by pixel 2021
// Daniel Rozin
// uses PxP methods at the bottom

import processing.video.*;

Capture ourVideo; // variable to hold the video
int diaMin = 10;
int diaMax = 1000;
int diaStep = 10;
float a, b, move;

// band positions, kept in arrays so the bands can be moved easily later
int[] l1 = {20, 60, 110, 140, 158, 210, 214, 340, 390, 415, 421, 591, 610, 630, 421, 455, 580, 631, 560}; // x positions of the vertical bands
int[] l2 = {23, 34, 111, 145, 153, 167, 180, 440, 443, 460, 230, 340, 212}; // y positions of the horizontal bands

void setup() {
  size(880, 720);
  frameRate(120);
  ourVideo = new Capture(this, 640, 480); // open the default camera
  ourVideo.start(); // start the video
  noFill();
  stroke(55, 3, 34, 40);
  strokeWeight(diaStep/4);
}

void draw() {
  if (ourVideo.available()) ourVideo.read(); // get a fresh frame of video as often as we can
  background(25, 215, 55);
  ourVideo.loadPixels(); // load the pixels array of the video
  loadPixels(); // load the pixels array of the window

  // copy the video into the window, inverting only the red channel
  for (int x = 0; x < ourVideo.width; x++) {
    for (int y = 0; y < ourVideo.height; y++) {
      PxPGetPixel(x, y, ourVideo.pixels, ourVideo.width);
      PxPSetPixel(x, y, 255-R, G, B, 255, pixels, width);
    }
  }

  // vertical bands: redraw each column in l1 with the blue channel inverted instead
  for (int i = 0; i < l1.length; i++) {
    for (int y = 0; y < ourVideo.height; y++) {
      PxPGetPixel(l1[i], y, ourVideo.pixels, ourVideo.width);
      PxPSetPixel(l1[i], y, R, G, 255-B, 255, pixels, width);
    }
  }

  // horizontal bands: the same treatment for each row in l2
  for (int i = 0; i < l2.length; i++) {
    for (int x = 0; x < ourVideo.width; x++) {
      PxPGetPixel(x, l2[i], ourVideo.pixels, ourVideo.width);
      PxPSetPixel(x, l2[i], R, G, 255-B, 255, pixels, width);
    }
  }

  updatePixels(); // must call updatePixels once we're done messing with pixels[]

  // the spiral graphic, drawn on top with its opacity turned down (see problems above)
  a = sin(radians(move+10))*450;
  b = cos(radians(move))*450;
  translate(width/2, height/2);
  for (float dia = diaMin; dia <= diaMax; dia += diaStep) {
    ellipse(-a, b, dia, dia);
    ellipse(-a, -b, dia, dia);
    ellipse(a, -b, dia, dia);
    ellipse(a, b, dia, dia);
  }
  move++;
}

// our function for getting color components; it requires the global variables
// R, G, B (not elegant, but the simplest way to go; see the example "PxP methods in object"
// for a more elegant solution)
int R, G, B, A; // you must have these global variables to use PxPGetPixel()
void PxPGetPixel(int x, int y, int[] pixelArray, int pixelsWidth) {
  int thisPixel = pixelArray[x + y*pixelsWidth]; // getting the colors as an int from the pixels[]
  A = (thisPixel >> 24) & 0xFF; // we need to shift and mask to get each component alone
  R = (thisPixel >> 16) & 0xFF; // this is faster than calling red(), green(), blue()
  G = (thisPixel >> 8) & 0xFF;
  B = thisPixel & 0xFF;
}

// our function for setting RGB color components into a pixels[] array; we need to define
// the x/y of where to set the pixel, the RGBA values we want, and the pixels[] array and its width
void PxPSetPixel(int x, int y, int r, int g, int b, int a, int[] pixelArray, int pixelsWidth) {
  a = a << 24;  // we are packing all 4 components into one int
  r = r << 16;  // so we need to shift them to their places
  g = g << 8;
  color argb = a | r | g | b; // a binary "or" adds them all into one int
  pixelArray[x + y*pixelsWidth] = argb; // finally we set the int with the colors into the pixels[]
}

Video Portrait

For this video portrait, we decided to stroll down memory lane and make a portrait of television/technology and its role in our homes and lives. 

Early televisions were seen as a point of pride and a piece of furniture in the home, with whole rooms dedicated to them. Today, the television’s role in the house has diminished. Instead of dedicating a room to the device, televisions are designed to be hidden. On top of that, our relationship to media has changed; we’re digesting more content across multiple devices, and that content isn’t as precious as it once was. In fact, some of our videos today are designed to be destroyed after a short period of time instead of archived and held on to forever. 

For the piece, we want to present our memory of TVs in their glory while referencing the new technology’s prominence. To achieve this, we will deconstruct an old television and build a diorama inside that closely resembles our memory of living rooms of the past. The television inside will be replaced with an iPhone set along the back wall. Three live channels will feed into the phone: the camera on the phone, a camera in front of the viewer, and a camera behind them. The three inputs will cycle. Below are a series of sketches, renders, and experiments in feed transitions. 

Drawing of video sculpture

Drawing of video sculpture

Television Render: 

Experiments with Channel transitions in Isadora:

Playing With Shapes

This week we jumped into Processing and started to play with shapes. I made a simple piece about how I’m feeling this semester. The editable line represents my work. The randomized cubes that keep hiding the line represent distractions, new ideas, and new concepts that I want to explore, all preventing me from getting anything done no matter how hard I try. 

Below is my code and an example video: 

Screenshot of a Processing sketch
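
Since the code only lives in the screenshot above, here is a rough reconstruction of the idea rather than the original sketch; the sizes, colors, and speed are guesses. A line inches across the canvas while randomized cubes keep covering it:

float lineX = 0; // how far the line has gotten

void setup() {
  size(600, 400);
  background(255);
}

void draw() {
  // the work: a line slowly inching across the canvas
  stroke(0);
  line(lineX, height/2, lineX + 2, height/2);
  lineX = (lineX + 2) % width;

  // the distractions: random cubes that keep hiding the line
  noStroke();
  fill(random(255), random(255), random(255));
  float s = random(20, 60);
  rect(random(width), random(height/2 - 60, height/2), s, s);
}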