Week 4 MLNI – Interactive Game w/ PoseNet (Cherry Cai)

Snow Controller

For this assignment, I created a screen-based interactive sketch that uses PoseNet together with object-oriented programming concepts.

  • Inspiration

As I was reviewing my last assignment, I came up with the idea of creating a winter snow scene. Inspired by an example on p5js.org, I wanted to create a scene where the position of the hand represents and simulates the wind (like the GIF shown below ⬇️).

  • Coding

Using PoseNet, I first took the positions of the right wrist (keypoints[10]) and the left wrist (keypoints[9]) by referencing the PoseNet keypoints list.

To track each new position, I recorded the x and y coordinates of both wrists:

newrwX = poses[0].pose.keypoints[10].position.x;
newrwY = poses[0].pose.keypoints[10].position.y;

newlwX = poses[0].pose.keypoints[9].position.x;
newlwY = poses[0].pose.keypoints[9].position.y;
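
For context, a minimal ml5.js setup that fills the poses array these lines read from could look like this (the variable names and canvas size here are my assumptions, not from the original sketch):

let video;
let poseNet;
let poses = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  // load the PoseNet model through ml5.js
  poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  // refresh the poses array whenever new poses are detected
  poseNet.on('pose', (results) => {
    poses = results;
  });
}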

In order to reduce noise and smooth out the changing position values, I used the lerp() function:

lerp(x, y, amt)

“x Number: the x component
y Number: the y component
amt Number: the amount of interpolation; some value between 0.0 (old vector) and 1.0 (new vector). 0.9 is very near the new vector. 0.5 is halfway in-between” (p5js.org).
 
What it really does is interpolate between the first two values: lerp(x, y, amt) returns x + (y - x) * amt, generating a new value in between them.
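
Applied to the wrist positions, the smoothing could look like this (rightwristX and rightwristY are assumed names for the smoothed values carried over between frames):

// blend only 10% of the new reading into the stored position each
// frame, so jitter in the detection is damped out
rightwristX = lerp(rightwristX, newrwX, 0.1);
rightwristY = lerp(rightwristY, newrwY, 0.1);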
 
After getting the positions and successfully detecting the movement of my wrists, I started to code the interface for the falling snow. First, I created a class and gave it a position, a velocity, an acceleration, and a random number for the size. To simulate real-world snow, I added a special force that is multiplied by the size of the snowflake and then added to the acceleration. In this way, the bigger snowflakes fall faster than the smaller ones shown on the canvas.
 

// class for snowflakes
class snowFlakes {
  constructor(sx, sy) {
    let x = sx || random(width);
    let y = sy || random(-100, -10);
    this.pos = createVector(x, y); // position
    this.vel = createVector(0, 0); // velocity
    this.acc = createVector();     // acceleration
    this.r = getRandomSize();      // size of the flake
    this.R = random(255);
    this.G = random(255);
    this.B = random(255);
    this.isDone = false;
  }

  // add a force to the acceleration, scaled by size,
  // so bigger flakes fall faster than smaller ones
  Force(force) {
    let f = force.copy();
    f.mult(this.r);
    this.acc.add(f);
  }

  update() {
    this.vel.add(this.acc);
    this.vel.limit(this.r * 0.07); // cap the speed in proportion to size
    this.pos.add(this.vel);
    this.acc.mult(0);
    if (this.pos.y > height + this.r) {
      this.randomize();
    }
  }

  // when a flake falls past the bottom, respawn it above the canvas
  randomize() {
    let x = random(width);
    let y = random(-100, -10);
    this.pos = createVector(x, y);
    this.vel = createVector(0, 0); // velocity
    this.acc = createVector();     // acceleration
    this.r = getRandomSize();      // size of the flake
  }

  display() {
    stroke('#a3c5ff');
    strokeWeight(this.r);
    point(this.pos.x, this.pos.y);
  }
}
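
As a usage sketch (the getRandomSize() helper and the flake count are my assumptions, since the original post does not show them), the flakes can be created once in setup() and stored in an array; the draw() loop that applies the forces is sketched after the wind snippet below.

let flakes = [];
let gravity; // constant downward force, scaled per flake by Force()

// helper assumed by the class above; a minimal stand-in
function getRandomSize() {
  return random(2, 8);
}

function setup() {
  createCanvas(640, 480);
  gravity = createVector(0, 0.01);
  // fill the sky with flakes spawned at random positions
  for (let i = 0; i < 200; i++) {
    flakes.push(new snowFlakes());
  }
}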

The last step is to associate the position of the wrist with the snow. I added another force, the wind, and used the map() function to convert the detected wrist position into a horizontal force that controls the snow along the x-axis.
 
//the snow will move following the rightwrist position
let windX = map(rightwristX, 0, width, -0.7, 0.7);
let wind = createVector(windX, 0);
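
A minimal draw() loop tying everything together might then look like this (again a sketch under the assumptions above, with rightwristX being the smoothed wrist position from the lerp step):

function draw() {
  background(0);
  // the snow follows the right-wrist position
  let windX = map(rightwristX, 0, width, -0.7, 0.7);
  let wind = createVector(windX, 0);
  for (let flake of flakes) {
    flake.Force(wind);    // horizontal push from the "wind"
    flake.Force(gravity); // constant downward pull
    flake.update();
    flake.display();
  }
}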
 
  • Conclusion

To sum up, I would say there are still many functions that can be built on top of the PoseNet model, and many more interactions that could be embedded. The assignment helped me better understand how PoseNet works and reinforced my ability to code with classes, functions, for loops, and if statements.

Week 3 MLNI – Generative Art w/ OOP (Cherry Cai)

Project: Reduce Light Pollution

For this week’s assignment, I created a piece of generative art that uses object-oriented programming concepts and mouse interactivity.

Presentation of the Project

Inspiration

This project is inspired by light pollution (aka photo pollution), which is the presence of anthropogenic, artificial light in the night environment. In cities like Shanghai, New York, and London, hardly any stars can be seen at night nowadays because of all kinds of artificial light sources.

Code

  • vertex()

I started by drawing the star shape for the generative art.

// divide a circle into 5 parts
let angle = TWO_PI / 5;

// define a half angle for the inner points
let halfAngle = angle / 2.0;

fill(255);
translate(x, y);

beginShape();
for (let a = 0; a < TWO_PI; a += angle) {
  // cos() returns values in the range -1 to 1
  let sx = 0 + cos(a) * 20; // outer point
  // sin() returns values in the range -1 to 1
  let sy = 0 + sin(a) * 20;
  vertex(sx, sy);
  sx = 0 + cos(a + halfAngle) * 10; // inner point
  sy = 0 + sin(a + halfAngle) * 10;
  vertex(sx, sy);
}
endShape(CLOSE);
  • class

Then I wrapped it in a class so that I could call it multiple times through an array. Also, in order to simulate the night sky, I wrote another class embedded with little twinkling stars.

class Star {
  constructor() {
    this.x = random(width);
    this.y = random(height);
    this.size = random(0.25, 3);
    this.t = random(TAU); // TAU = TWO_PI
  }

  draw() {
    this.t += 0.1;
    // sin() makes the star "twinkle" by oscillating its size
    let s = this.size + sin(this.t) * 2;
    noStroke();
    fill(255);
    ellipse(this.x, this.y, s, s);
  }
}

class click {

  constructor() {
    this.x = width / 2;
    this.y = height / 2;
    this.angle = TWO_PI / 5;
    this.halfAngle = this.angle / 2.0;
    this.size = random(0.25, 0.5);
    this.t = random(TAU);
  }

  drawStar() {
    this.t += 0.1;
    // sin() returns values in the range -1 to 1
    let scale1 = this.size + sin(this.t) * 0.5;

    push(); // isolate the transforms so they don't accumulate
    translate(this.x, this.y);
    rotate(frameCount * 0.01);
    scale(scale1);
    fill(255, 255, 0);

    beginShape();
    for (let a = 0; a < TWO_PI; a += this.angle) {
      let sx = 0 + cos(a) * 20;
      let sy = 0 + sin(a) * 20;
      vertex(sx, sy);
      sx = 0 + cos(a + this.halfAngle) * 10;
      sy = 0 + sin(a + this.halfAngle) * 10;
      vertex(sx, sy);
    }
    endShape(CLOSE);
    pop();
  }
}
  • mouseClicked()

After all the elements of the project were settled, I created the interactive part using mouseClicked().

I initialized the background color as white in a global variable, let gray = 255 (to symbolize a night sky brightened by artificial light). Then, in the mouseClicked() function, I gradually decrease the lightness with gray -= 20, so that the canvas background turns darker step by step as the user clicks the mouse on the canvas.
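
A minimal sketch of how these pieces could fit together (the variable names and star counts are my assumptions; max() simply keeps gray from going negative):

let gray = 255;   // white background: a night full of artificial light
let stars = [];   // twinkling Star instances
let clicked = []; // rotating stars drawn by the click class

function setup() {
  createCanvas(600, 600);
  for (let i = 0; i < 100; i++) {
    stars.push(new Star());
  }
  clicked.push(new click());
}

function draw() {
  background(gray);
  // white stars become visible as the background darkens
  for (let s of stars) s.draw();
  for (let c of clicked) c.drawStar();
}

// each click dims the "light pollution" one step further
function mouseClicked() {
  gray = max(gray - 20, 0);
}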

Reflection

Overall, the coding process was smooth. I was a little stuck at the initial stage of generating the star graphic. In the beginning, I used hard-coded numbers in the vertex() function, and the shape looked rough and was hard to manipulate. So I searched online for a more mathematical way to describe the star and finally generated a symmetrical star. The interaction might be a little confusing for users who don’t know the intention, so next time I may add some guidance to the project to make it more well-rounded as a whole.

Week 2 MLNI – Case Study-Interface with ML or AI (Cherry Cai)

Partner: Wei Wang (ww1110)

Project Topic: Refraction Emotion 

In-class Presentation Slides 

Introduction

Ouchhh created an artificial intelligence and a t-SNE visualization of hundreds of books and articles (approx. 20 million lines of text) written by scientists who changed the destiny of the world and wrote history; these texts were fed to a recurrent neural network during training. Later on, the trained models were used to generate new texts. AI was then used to convert those texts into visuals, while 136 projectors created a real-time interaction for the audience during the exhibition.

This kind of poetic refraction of scientific consciousness generated an innovative way for the audience to perceive the beauty of literature from a new perspective.

This is a cognitive performance created by Ouchhh that takes a pianist’s brain waves during his concert and visualizes them. The visualization is wrapped with data about emotion and neural mechanisms captured by electroencephalogram (EEG). The project is based on superstring theory, which argues that the world consists only of vibrating thin strings. Ouchhh define the melodies as matter and the symphonies of melodies as the universe, and treat the eleven dimensions as abstract directions that change and turn into reality through an AI algorithm.

This musical refraction of the artist strengthens the emotion the performer conveys and helps the audience become immersed in the performance.

Implementation 

Such techniques could help the audience better understand an artist’s emotions in the context of his/her creation, for example in:

  • Concert/Show
  • Educational Purpose
  • Reaching a larger audience (e.g., people with disabilities)

Week 1 MLNI – Response to Golan Levin & Presentation Assignment (Cherry Cai)

Reading and Video Assignment Response

According to Golan Levin’s article and his talk, the development of technology and algorithms has transformed many human activities into forms of interactive art. It not only allows people to “communicate” with the machine in various ways, but also lets them see another interpretation of their activities through the lens of computer vision. Taking body gestures, sounds, and other human behaviors as input, and passing them through a complicated transformation process, people are able to become actors in a new pattern of art and perceive it through innovative ideas. Since the machine is able to simulate human behaviors, I wondered whether advanced technology could let a machine self-generate new forms of art by learning on its own, without human intervention.

Presentation: Faceless Portraits Transcending Time by AICAN + Dr. Ahmed Elgammal

Faceless Portraits Transcending Time was presented by HG Contemporary, New York, from February 13 to March 5, 2019. It was an art collaboration between an artificial intelligence named AICAN and its creator, Dr. Ahmed Elgammal. “The exhibition shows two series of works on canvas portraying uncanny, dream-like imagery generated by AICAN excavating the ageless themes of mortality and representation of the human figure” (HG Contemporary). The two series examine two different aspects of human-machine collaboration: first, “the joint effort between man and machine as a historically specific moment in the chronology of image-making”; second, “a focus on how artificial intelligence serves as a mirror to reflect human (sub)consciousness”.

    • AICAN

AICAN is a complex algorithm that draws on psychological theories of the brain’s response to aesthetics and uses art-historical knowledge to create new artwork without human intervention. It can work without an artist collaborator, automatically choosing the style, subject, composition, colors, and texture of its work. Thanks to this combination of knowledge and independent creativity, AICAN facilitates a new way for artists of the past and present to engage in dialogue across centuries of art history.

    • Summary

The best way to avoid being judged on the merits of a work of art is to make it novel and unexpected. While machine learning can chronologically arrange artistic portraits across styles including Renaissance, Baroque, Realism, and Impressionism, it can also strengthen its ability and create new forms of art by learning on its own. In addition to this remarkable achievement, technologies such as AICAN can predict upcoming art trends based on currently popular techniques and styles, which makes AICAN and similar artificial intelligence a valuable business in the future art field.

Presentation Link

https://docs.google.com/presentation/d/1K2i6qw1hEAVK-20pA37dUQO68Vrh4Hxd4dHyp1hM7CI/edit?usp=sharing

Reference

HG Contemporary, New York, Faceless Portraits Transcending Time. (2019). [PDF]. Available at: https://uploads.strikinglycdn.com/files/3e2cdfa0-8b8f-44ea-a6ca-d12f123e3b0c/AICAN-HG-Catalogue-web.pdf

Bogost, I. (2019). The AI-Art Gold Rush Is Here. [online] The Atlantic. Available at: https://www.theatlantic.com/technology/archive/2019/03/ai-created-art-invades-chelsea-gallery-scene/584134/