MLNI Week 4 HW Tiger Li

I started off wanting to create an animation based on the movement of and relationship between the hands, because the hands are the body part we most naturally use to control and move things. To me, they were the more user-friendly option compared to the nose, ears, or shoulders.

I approached the coding the same way as shown in the professor's demo: using keypoints from the body-tracking model much as one would use the X and Y coordinates of the mouse.

The only thing left was a creative graphic to put on the screen for the user to control. After getting some inspiration from other interactive graphics online, the idea of playing with size change and color stood out to me. I wanted to simulate a ball of energy in the user's hands, like in Naruto, so I decided to go with a ball design. As the user moves their hand around, the size and location of the graphic change. Moving the hand in a circular motion makes it seem like the light blue ball is sitting on top of the dark blue ball.
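Below is a minimal sketch of how the pieces could fit together, assuming ml5's PoseNet as the body-tracking model; the wrist keypoint index and the distance-to-size mapping are my own illustrative choices, not the original code.

```javascript
// Minimal sketch: the right wrist keypoint replaces mouseX/mouseY,
// and its distance from the canvas center drives the size of two
// overlapping "energy balls" (light blue drawn over dark blue).
let video, poseNet;
let handX = 0, handY = 0;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', (poses) => {
    if (poses.length > 0) {
      // keypoint 10 is the right wrist in PoseNet's ordering
      const wrist = poses[0].pose.keypoints[10].position;
      handX = wrist.x;
      handY = wrist.y;
    }
  });
}

function draw() {
  background(0);
  // the ball grows as the hand moves away from the center
  const d = dist(handX, handY, width / 2, height / 2);
  const size = map(d, 0, width / 2, 40, 160, true);
  noStroke();
  fill(10, 40, 120);                  // dark blue ball underneath
  ellipse(handX, handY, size * 1.4);
  fill(120, 180, 255);                // light blue ball on top
  ellipse(handX, handY, size);
}
```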

MLNI Object Animation Tiger Li

https://editor.p5js.org/Tgr/sketches/lursf5XEP

This is my animation of a water drop, and it also includes a drawing pad for some interactivity. I tried to make it as visually pleasing as possible, but it only turned out so-so.

I started off by making a drawing board, using the mouseX and mouseY positions as the X and Y coordinates of my circle.
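A minimal version of that drawing board could look like this; my guess is that the trail effect comes from calling background() once in setup() instead of every frame, since the original code isn't shown here.

```javascript
// Drawing pad: background() runs only once, so each frame's circle
// stays on the canvas and the mouse leaves a trail.
function setup() {
  createCanvas(400, 400);
  background(240);
}

function draw() {
  noStroke();
  fill(30, 90, 200);
  ellipse(mouseX, mouseY, 15, 15); // circle follows the mouse
}
```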

Then I wanted to put in a growing object of some sort, so I started to play around with the dimensions and movements of my favorite shape: the circle. I found that I could not let the circle keep growing, because it would just fill up the canvas, so I added an if statement that limits its growth. On top of that, I wanted to play with the movement of the coordinates to add more dimension to the ball.
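Here is a rough sketch of that growing, capped circle; the cap value and the sine-based wobble on the coordinates are illustrative assumptions, not the exact numbers from my sketch.

```javascript
// The diameter increases each frame until an if statement caps it,
// and a small sine/cosine offset on the coordinates adds movement.
let diameter = 10;

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(255);
  if (diameter < 200) {  // stop growing before it fills the canvas
    diameter += 1;
  }
  const x = width / 2 + sin(frameCount * 0.05) * 20; // gentle drift
  const y = height / 2 + cos(frameCount * 0.05) * 20;
  fill(80, 140, 255);
  ellipse(x, y, diameter, diameter);
}
```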

At last, I ended up with this big raindrop of some sort. 

MLNI – Mock Presentation (Tiger Ronan)

I’m interested in artificial “creativity.” After looking into visuals, my first instinct was to look into music. I was very inspired when I watched a video explaining how to train a model to produce original music. This new concept of controlled, original music production made me think about its possible applications.

Music is a fundamental part of modern entertainment. There are numerous ways of consuming audio content; above all, the combination of live music with other visual art forms can become interesting with the assistance of a machine learning algorithm.

My first idea is combining AI-generated music with live choreography: using visual recognition to track the dancer’s movements as input to generate new music, synchronize lights, and produce live visuals on a large screen behind the performer.

By installing such a system, the performing artist can use his or her body to command much more space and convey much more emotion, thereby delivering a more stunning performance for the audience.

For example, when the dancer moves in a fluid, slow way, the music and lights would express the same feeling. Conversely, if the artist’s performance is built on large, bold movements, the computer would play music and arrange light sequences to match.

https://youtu.be/2ZRXbXuihEU

With the combination of lasers or a “light suit,” a similar effect can be achieved on a larger scale.

My second idea is connecting live electronic music events (raves) with neural networks. I drew inspiration from hologram concerts of fictional or deceased artists. Hologram concerts of dead artists work well because they draw an audience from a strong existing fanbase.

Neural networks can recreate an artist’s voice so that they can interact with the audience live. One way to enhance these events would be to play the artist’s original music and mix in music created by the neural network.

Just as living artists debut new music at live events, so could a dead AI artist like Avicii. It would be as if he had never passed away.

Cameras with visual recognition could also be installed to watch the crowd, gauge whether it likes the music, and make live adjustments to cater to a better crowd response.

MLNI: p5 Sketch – Tiger Li

https://editor.p5js.org/Tgr/sketches/L0yXJWHRG

When I saw a rotating object, I immediately thought of a type of drawing tool I played with as a child. With that tool, young children in grade school are able to draw complex geometric patterns.

I really wanted to recreate it in p5, but I had not yet found the tools I needed to make it, so I decided to make a flower instead.

Unlike the children’s drawing tool, I used rotating lines instead of points to draw the patterns. I wanted to add more complementary colors, but my fill colors would not work in draw(), and I am still not sure why. I hope to solve this issue and make my rose look better tomorrow.
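For reference, here is a stripped-down rotating-line pattern, along with one possible explanation for the color problem: in p5, line() has no interior, so fill() has no visible effect on it; line color has to come from stroke().

```javascript
// Rotating lines accumulate into a radial pattern because the
// background is never cleared in draw(). Note that lines take
// their color from stroke(), not fill().
let angle = 0;

function setup() {
  createCanvas(400, 400);
  background(255);
}

function draw() {
  translate(width / 2, height / 2);
  rotate(angle);
  stroke(200, 60, 90, 80); // color the lines via stroke()
  line(0, 0, 150, 0);
  angle += 0.05;
}
```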


MLNI – Presentation Homework (Tiger Li)

Technology used: Machine Learning 

Question: How does machine learning generate such trippy images? 

Answer: The neural network is similar to the human visual cortex. It fixates on and amplifies the patterns it recognizes, like an overly stimulated human brain.

Question: If neural networks can be so similar to the human mind, can they be artistically creative?

Insight: We discovered that neurons in our brains use electricity to communicate with each other at around the same time that computers were first being developed.

“Visual cortex works like a series of computational elements that pass information from one to the next in a cascade.”

“Perception and Creativity are intimately connected”

X * W = Y
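This is presumably the basic linear-layer relationship: an input X multiplied by a weight matrix W produces an output Y. A tiny worked example with made-up numbers:

```javascript
// X * W = Y: a 2-feature input times a 2x3 weight matrix
// gives a 3-value output. All numbers are invented for illustration.
const X = [1, 2];
const W = [
  [0.5, -1.0, 0.25],  // weights from input 0 to each of 3 outputs
  [1.5,  2.0, -0.5],  // weights from input 1 to each of 3 outputs
];

const Y = W[0].map((_, j) =>
  X.reduce((sum, xi, i) => sum + xi * W[i][j], 0)
);

console.log(Y); // [3.5, 3, -0.75]
```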