Week 3: Warm Up with Programming Fundamentals! – Eszter Vigh

Homework OOP

So what did I do exactly? 

I made my input the location of my mouse (the x and y coordinates), then declared the kind of movement I wanted to happen. It wasn’t anything particularly special, but this is essentially the more organized way of handling movement for multiple objects.

If I really wanted to I could have a million circles. I didn’t want to do that, because I wanted to show how the mouse input works as clearly as possible. 

I could have just as easily used the keyboard (the Up, Down, Left, and Right arrow keys), but I like the real-time updates this way… I am kind of a press-and-hold sort of person and I get annoyed easily if I have to repeatedly press the keys. (There’s a quick sketch of what the keyboard version could look like after my code below.)

Anyway… this is my code (you can also see it in the video).

const MAX_BALLS = 10000, balls = [];
let bg;

function setup() {
  pixelDensity(displayDensity());
  // Clicking the canvas adds a ball at the mouse position.
  createCanvas(500, 500).mousePressed(createBall);
  // If the sketch runs inside an iframe, size the frame to match the canvas.
  frameElement && (frameElement.width = width, frameElement.height = height);

  bg = color(random(255), random(255), random(255));
}

function draw() {
  background(bg);
  // draw() only runs when redraw() is called (see createBall below).
  ellipseMode(CENTER).noLoop();
  strokeWeight(2.5).stroke(random(255), random(255), random(255))
                   .fill(random(255), random(255), random(255));
  for (const b of balls) b.display();
}

function createBall() {
  // Recycle the oldest ball once the cap is reached; otherwise make a new one.
  const b = balls.length == MAX_BALLS && balls.shift() || new Ball;
  balls.push(b.setXY(mouseX, mouseY));
  redraw();
}

class Ball {
  static get DIAM() { return 50; }

  constructor(x, y) { this.setXY(x, y); }

  setXY(x, y) {
    this.x = x, this.y = y;
    return this;
  }

  display() { ellipse(this.x, this.y, Ball.DIAM, Ball.DIAM); }
}
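
Since I mentioned the arrow-key option: this is a rough, minimal sketch (not part of my actual homework) of what a press-and-hold keyboard version could look like, using p5’s keyIsDown() so the position updates every frame while a key is held.

const SPEED = 3, DIAM = 50;
let x = 250, y = 250;

function setup() {
  createCanvas(500, 500);
}

function draw() {
  background(220);
  // keyIsDown() is polled every frame, so holding an arrow key keeps the ball moving.
  if (keyIsDown(LEFT_ARROW))  x -= SPEED;
  if (keyIsDown(RIGHT_ARROW)) x += SPEED;
  if (keyIsDown(UP_ARROW))    y -= SPEED;
  if (keyIsDown(DOWN_ARROW))  y += SPEED;
  ellipse(x, y, DIAM, DIAM);
}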

Week 03: ml5.js Project – Eszter Vigh

I was playing with the Image Classification example. I wanted to experiment to see how I could use it with multiple images at the same time. I know it sounds simple, but it is important to build off of what we had access to in class and expand it so that we can possibly implement it in a meaningful way.

Step 1 to doing that (really, step 1 when you are learning anything in coding) is getting multiple copies of the thing you want to appear to actually appear.

I decided that I wanted to remove the image we had already (the Tesla-cat) and experiment with an equally recognizable image: a golden retriever puppy. I wanted to see how specific the classification could get. In this case, the program successfully identified the image as a Golden Retriever with a 0.94 certainty.

After seeing this, I wanted to see how well the program could do with a less recognizable animal, a Toucan. I was thinking the system would say something like: colorful bird, large bird, tropical bird, etc. I was shocked when it returned toucan with a 0.99 certainty.

I was concerned about the Toucan because in the Tesla-cat sample, the system identified the cat as an Egyptian Cat with a 0.44 certainty. I consider that a misidentification, as the cat is simply an average black cat with no real distinguishing features.

But all identification questions aside, I wanted to experiment with getting multiple pictures to appear and subsequently, getting them accurately identified. 

Problems I had: 

  • Determining a way to separate images. (It took a while, but this ended up being the first thing I could successfully complete.)
  • My labels would get all messed up. At first I thought the labels were stuck together, but once I separated the two labels it turned out there was a third “ghost” label getting drawn over and over again in the loop. Eventually, with the help of a fellow (Konrad), I realized the background has to be redrawn in the loop as well in order to get a crisp, clear label.
  • Identification of individual images. I couldn’t figure out whether I needed one or multiple label functions, and one or multiple classifier callbacks; it turned out to be multiple. Before the separation, both the golden retriever and the toucan were identified as turn bills with 0.00 certainty.

I’m going to link my GitHub where I have posted this code. It’s fun to play with honestly. I think in the future I would implement arrays, but honestly for me even getting the images correctly identified at the same time was a massive victory. 
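
Roughly, the structure I described above looks like this. This is a simplified sketch, not the exact code in my repo: the file names (images/dog.jpg, images/toucan.jpg) are placeholders, and it assumes the ml5 0.x MobileNet image classifier.

// Simplified sketch: one classify() call and one callback per image,
// plus a background() call every frame so the labels stay crisp.
let classifier;
let dogImg, toucanImg;
let dogLabel = 'classifying…';
let toucanLabel = 'classifying…';

function preload() {
  classifier = ml5.imageClassifier('MobileNet');
  dogImg = loadImage('images/dog.jpg');       // placeholder path
  toucanImg = loadImage('images/toucan.jpg'); // placeholder path
}

function setup() {
  createCanvas(800, 440);
  // Each image gets its own classify() call and its own callback,
  // so the two labels never overwrite each other.
  classifier.classify(dogImg, gotDog);
  classifier.classify(toucanImg, gotToucan);
}

function draw() {
  // Redrawing the background every frame is what prevents the "ghost" labels.
  background(240);
  image(dogImg, 0, 0, 400, 400);
  image(toucanImg, 400, 0, 400, 400);
  fill(0);
  noStroke();
  textSize(16);
  text(dogLabel, 10, 425);
  text(toucanLabel, 410, 425);
}

function gotDog(error, results) {
  if (error) { console.error(error); return; }
  dogLabel = results[0].label + ' (' + results[0].confidence.toFixed(2) + ')';
}

function gotToucan(error, results) {
  if (error) { console.error(error); return; }
  toucanLabel = results[0].label + ' (' + results[0].confidence.toFixed(2) + ')';
}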

Week 2: Introduction to Machine Learning Models and ml5.js

Semi-Conductor

Partner: Jessica Chon

What the project is:

Semi-Conductor is a virtual conducting app where arm movements yield a reaction from your virtual orchestra. It is quite fun, but there are definitely bugs: sometimes you have to amp up your energy, because some subtle movements get registered by the system while others yield no reaction from the virtual orchestra (which is sad and the slightest bit annoying).

Beneficial Qualities

It goes beyond, say, Leap Motion, because it has more range of motion: even slight changes to position and angle allow for a completely different reading, and hence a different reaction from the software.

It makes the user fully interact with the technology, which makes it “Zero-UI”. So yeah, if you get your whole body into the movement, it can yield a very clear, long response from the system, which is really cool. I guess in that way it is kind of like Kinect, but it runs off your webcam, which makes it far more accessible than Kinect-based programs.

How we would implement the tech:

Gauge the effectiveness of young conductors (conductors in training). You could train the virtual orchestra on, say, a real orchestra to improve how receptive the program is to movement and make it as realistic as possible.

It could be helpful for deaf students, because they can still see the metronome and the instruments moving, and keep a sense of control over the tempo through visual cues. Music is a requirement in a lot of schools; maybe this is a way to engage a population that previously just couldn’t participate, because music was so reliant on a sense they lacked or didn’t have at all.

Other simulators, such as sports games/training or presentations. It would be cool to use this technology to train goalkeepers in soccer, or even batters in baseball. There are only so many situations they can train for, but the computer could generate extreme cases to prep them for unlikely but match-winning goals/pitches.

Our presentation 

And then… my code for the P5 practice using transformation and display functions can be found at this link. 

Hopefully this works!!! (Maybe not; high-key not sure about the whole zip-folder thing.)
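
In case the link doesn’t open for you, here is a minimal sketch of the same ideas (a display function plus push()/translate()/rotate()/pop() transformations). It is not the actual homework code, just an illustration.

// Not the linked homework code: a tiny example of a display function
// combined with p5 transformations.
let angle = 0;

function setup() {
  createCanvas(400, 400);
  rectMode(CENTER);
}

function draw() {
  background(230);
  // Move the origin to the center, then rotate everything drawn afterwards.
  push();
  translate(width / 2, height / 2);
  rotate(angle);
  displaySquare(80);
  pop();
  angle += 0.02;
}

// A display function: all the drawing for one shape lives in one place.
function displaySquare(size) {
  fill(120, 180, 255);
  rect(0, 0, size, size);
}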

Week 02 Assignment: ml5.js Experiment – Eszter Vigh

So, I wanted to break the ml5 sentiment analysis demo. I guess the main question is, why? 

I knew that the model’s training data likely had underlying bias, based on readings from my other class. I wanted to see how I could break the system through interesting or deceptive phrasing. I put in phrases that leaned more positive but were phrased negatively, just to explore the limitations of the example.

So I put in phrases like this:

[Example phrases such as “not poor” and “not bad,” along with their sentiment scores, were shown here.]

So what I found was that inserting “not” (e.g. not poor, not bad, etc.) did not negate the negative; that is, it did not make the phrase read as positive to the sentiment analyzer. I think it is really cool to test these extreme cases, because they show the shortcomings of the training. It is important to acknowledge this issue when training your own application: edge cases like these could significantly affect the success of a project, since the actual meaning is completely the opposite of what the algorithm reported.
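
For reference, this is roughly the kind of test I was running. It is a minimal sketch that assumes the ml5 0.x sentiment API and its pre-trained movieReviews model (scores run from 0, negative, to 1, positive); the exact phrases are just examples.

// A minimal sketch of the negation test, assuming the ml5 0.x sentiment API
// and the pre-trained "movieReviews" model.
let sentiment;

function setup() {
  noCanvas();
  sentiment = ml5.sentiment('movieReviews', modelReady);
}

function modelReady() {
  // Negated negatives: a human reads these as mildly positive,
  // but the model still scores them as strongly negative.
  const phrases = ['not poor', 'not bad', 'not terrible'];
  for (const phrase of phrases) {
    const prediction = sentiment.predict(phrase);
    console.log(phrase, '->', prediction.score.toFixed(2));
  }
}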

Sure, maybe in this case it isn’t a big deal, but imagine testing this with a differently trained data set. It was impressive that the correlation worked at all for a classic data set. I’ve never trained anything before, so it was really amazing to see what a trained model could do.

Do I think I pushed this example to the limit? No. I was hoping that the reading would be somewhere closer to 0.5 for these negated negatives, so it was really surprising to see the overwhelmingly negative scores. I feel like I have learned a lot about how “black and white” ml5 is when it comes to trained models, which I think will be helpful when creating our own projects later. We want to make them as accurate as possible, right? Or maybe our art will come from the “mistakes” they make.

Week 1: Introduction to Machine Learning for New Interfaces (Homework) Eszter Vigh

My Presentation

The guiding question of this work on the “painting robots” is “Can robots be creative?” The creator, Pindar Van Arman, would argue yes, mainly because at times the robots have to make decisions about where a stroke goes, and they have the ability to view their own work via camera.

Summary of Hardware

  • Robot arm
  • The robot “watches” what it makes using a camera.
  • Art Supplies

Summary of Method

  • A target image and a style image… the target image is stylized based on the characteristics of the style image (see the sketch right after this list).
  • This combination comes through a complex algorithm.
  • The robot has a memory of the previous works it has created.
  • Over time the works develop a unique style based on the training.
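
As a loose illustration only (this is not Van Arman’s actual pipeline), the target-image-plus-style-image idea can be tried with ml5’s styleTransfer and a pre-trained style model. The model path (models/wave) and image name below are assumptions, and I’m assuming the ml5 0.x API here.

// A loose illustration of the target + style idea, not Van Arman's pipeline.
// "models/wave" and "images/target.jpg" are assumed placeholder paths.
let style, target;

function preload() {
  target = loadImage('images/target.jpg'); // the target image to be re-styled
}

function setup() {
  createCanvas(500, 500);
  image(target, 0, 0, width, height);
  // The "style image" is effectively baked into the pre-trained model.
  style = ml5.styleTransfer('models/wave', modelLoaded);
}

function modelLoaded() {
  style.transfer(target, (err, result) => {
    if (err) { console.error(err); return; }
    // result.src is an image URL of the stylized output.
    loadImage(result.src, stylized => image(stylized, 0, 0, width, height));
  });
}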

Why is it interesting?

  • I never thought about robots as creative.
  • The robots aren’t simply combining the images; there is actual painting happening.
  • Will robots at one point have the power to be creative?
  • For now all the inputs come from the data scientist, but how long before robots can “think” of their own combination ideas? 
  • Not even “painting the next person you see” is really the robot’s own idea; it’s just part of the coding.

Sources

“An Artificially Intelligent Painting Robot.” Cloudpainter, http://www.cloudpainter.com/.

Arman, Pindar Van. “Creativity Is Probably Just a Complex Mix of Generative Art Algorithms.” Medium, Data Driven Investor, 29 Dec. 2018, https://medium.com/datadriveninvestor/creativity-is-probably-just-a-complex-mix-of-generative-art-algorithms-6d37a0087e86.

Arman, Pindar Van. “From Printing to Painting: Computationally Creative Robots.” Medium, Medium, 16 May 2018, https://medium.com/@pindar.vanarman/from-printing-to-painting-the-emergence-of-computationally-creative-robots-cb2f41846dd0.

Arman, Pindar Van. “Does AI Art Belong in the Physical, Digital, or Crypto World?” Medium, Medium, 8 Apr. 2019, https://medium.com/@pindar.vanarman/does-ai-art-belong-in-the-physical-digital-or-crypto-world-3cb4fe5e01b0.

Fitzpatrick, Sophie. “Art in a Technological World.” Edgy Labs, 4 June 2019, https://edgy.app/art-in-a-technological-world.

Muoio, Danielle. “Watch a Robot Paint Incredible Pieces of Art.” Business Insider, Business Insider, 5 Feb. 2016, https://www.businessinsider.com/pindar-van-armans-robot-can-paint-2016-2.

VICE News. YouTube, 10 Aug. 2017, https://www.youtube.com/watch?time_continue=112&v=dkTjEi7O4Ic.