MLNI – Week 02: Machine Learning Research and Mock Project (Ronan & Tiger)

Partner: Tiger Li

Click here to get the slides!

Inspiration:

This project is an AI-powered music generator created by CodeParade. It uses two algorithms: an autoencoder and PCA (Principal Component Analysis).
An autoencoder is a type of artificial neural network used to learn efficient data codings in an unsupervised manner. PCA transforms a set of observations of possibly correlated variables into a set of linearly uncorrelated variables called principal components; it is often used in exploratory data analysis and for building predictive models.
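To make the pairing concrete, here is a minimal sketch of the idea using TensorFlow.js (my choice of library for illustration, not necessarily CodeParade's): a small dense autoencoder whose 4-dimensional bottleneck is the compressed coding that PCA could then explore.

```javascript
// Assumes TensorFlow.js is loaded, e.g. via a <script> tag providing
// the global `tf` object.
async function trainToyAutoencoder() {
  // Toy autoencoder: compress 64-dimensional samples down to a
  // 4-dimensional latent code and reconstruct them again.
  const model = tf.sequential();
  model.add(tf.layers.dense({units: 16, activation: 'relu', inputShape: [64]}));
  model.add(tf.layers.dense({units: 4, name: 'latent'})); // the bottleneck
  model.add(tf.layers.dense({units: 16, activation: 'relu'}));
  model.add(tf.layers.dense({units: 64, activation: 'sigmoid'}));
  model.compile({optimizer: 'adam', loss: 'meanSquaredError'});

  // An autoencoder is unsupervised: the input is also the target, so the
  // network learns an efficient coding of the data with no labels.
  const data = tf.randomUniform([256, 64]); // stand-in for real melody features
  await model.fit(data, data, {epochs: 20});
  return model;
}
```

After training, collecting the latent vectors for the whole dataset and running PCA on them yields a handful of uncorrelated "sliders," each moving along one principal direction of variation, which matches the autoencoder + PCA combination described above.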

Our first idea:

What?

We often hear the phrase “dance to the music,” and traditionally a choreographer designs dance moves based on the melody. But how about “music to the dance”? What if we used the movements to generate the composition?

After discussing with my partner, we came up with the idea of using this model to generate different music and melodies based on the artist’s movements.

How?

First, we train the model on recordings of different choreography. Then a camera captures the artist’s movement, and music is generated based on that movement; a minimal sketch of the capture step follows below.
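As a sketch of the capture step (using ml5.js PoseNet inside p5.js; the wrist-to-pitch mapping is an invented placeholder, not a trained choreography model):

```javascript
let video, poseNet, osc;
let wristY = 0;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  // PoseNet estimates body keypoints (wrists, elbows, ...) from the webcam.
  poseNet = ml5.poseNet(video, () => console.log('model ready'));
  poseNet.on('pose', (poses) => {
    if (poses.length > 0) {
      wristY = poses[0].pose.rightWrist.y;
    }
  });
  osc = new p5.Oscillator('sine');
  osc.start();
}

function draw() {
  image(video, 0, 0);
  // Placeholder mapping: higher wrist -> higher pitch. A real system
  // would feed the pose sequence into the generative music model instead.
  osc.freq(map(wristY, height, 0, 220, 880));
}
```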

Why?

1. An embodiment of Zero UI

2. Makes a single dancer visually command more space

3. A better, more immersive experience for the audience

Our second idea:

What?

The second idea is to use this technology to generate new music in a specific artist’s style, especially that of deceased artists.

How?

1. Use the artist’s original music as the data input to train the model.

2. Use the model to generate new music in the deceased artist’s style.

3. Coordinate lighting with the music, which is possible because the generated music is predictable in a synchronized system.

Why?

First of all, there is a huge existing fan base for some deceased artists, and we already have this technology. So why not use its artificial creativity to better emulate the human aspect of a live performance?

MLNI – Mock Presentation (Tiger & Ronan)

I am interested in artificial “creativity.” After looking into visuals, my first reaction was to look into music. I was very inspired when I watched a video explaining how to train a model to produce original music. This new concept of controlled original music production made me think about its possible applications.

Music is a fundamental part of modern-day entertainment. There are numerous ways of consuming audio content; above all, the combination of live music and other visual forms of art can become interesting with the assistance of a machine learning algorithm.

My first idea is combining AI-generated music with live choreography, using visual recognition to track the dancer’s movement as input to generate new music, synchronize lights, and produce live visuals on a large screen behind the performer.

By installing such a system, the performing artist can use his or her body to command much more space and convey much more emotion, thereby delivering a more stunning performance for the audience.

For example, when the dancer is moving in a fluid and slow way, the music combined with the light would express the same feeling. Conversely, if the artist’s performance is based on large, bold movements, the computer would play music and arrange light sequences accordingly (see the sketch below).
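One way to quantify “fluid and slow” versus “large and bold” is to measure how far the tracked keypoints travel between frames. A sketch of that mapping (these are hypothetical p5.js helper functions; the energy range and parameter values are invented for illustration):

```javascript
// Estimate movement energy as average keypoint displacement per frame.
// `prev` and `curr` are arrays of {x, y} keypoints from a pose tracker.
// Uses p5.js helpers: dist(), constrain(), map().
function movementEnergy(prev, curr) {
  let total = 0;
  for (let i = 0; i < curr.length; i++) {
    total += dist(prev[i].x, prev[i].y, curr[i].x, curr[i].y);
  }
  return total / curr.length;
}

// Map energy onto performance parameters: calm movement -> slow tempo
// and dim light; bold movement -> fast tempo and bright light.
function performanceParams(energy) {
  const e = constrain(energy, 0, 50); // clamp to an assumed range
  return {
    bpm: map(e, 0, 50, 70, 160),
    lightBrightness: map(e, 0, 50, 0.2, 1.0),
  };
}
```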

https://youtu.be/2ZRXbXuihEU

With the combination of lasers or a “light suit,” a similar effect can be achieved on a bigger scale.

My second idea is connecting live electronic music events (raves) with neural networks. I drew inspiration from hologram concerts of fictional or deceased artists. Hologram concerts of deceased artists work well because they draw an audience from a strong existing fan base.

Neural networks can recreate an artist’s voice so that he can interact with the audience live. One way of enhancing these events would be to play the artist’s original music and mix in music created by the neural network.

Just as living artists play their new music at live events, so could a deceased AI artist like Avicii. It would be as if he had never passed away.

Cameras with visual recognition can also be installed to watch the crowd, gauge whether it likes the music, and make live adjustments catering toward a better crowd response; a toy sketch of that feedback loop follows below.
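Reading a crowd visually is an open research problem, so as a crude stand-in proxy, even microphone energy (cheering) can drive live adjustments. A minimal p5.sound sketch of the feedback loop (the threshold is invented):

```javascript
let mic;

function setup() {
  createCanvas(400, 200);
  mic = new p5.AudioIn();
  mic.start(); // listen to the room as a crude stand-in for crowd cameras
}

function draw() {
  background(0);
  fill(255);
  const level = mic.getLevel(); // 0.0 (silence) up to ~1.0 (loud cheering)
  // Invented feedback rule: a quiet crowd triggers a change of direction,
  // a loud crowd keeps the set going the way it is.
  if (level < 0.05) {
    text('crowd quiet: adjust the music', 10, 100);
  } else {
    text('crowd responding: keep going', 10, 100);
  }
}
```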

Week 2: Research Presentation on Applications of Machine Learning – Samantha Cui

Project name: OrCam MyEye

Partner: Shenshen Lei

Presentation Slides: Here!

For this research, we decided to focus on the visual recognition technology I talked about last week. While doing research, we tried to find projects that use this technology to help people. Then it suddenly hit me: what if this technique could be used to help the visually impaired? We started focusing on this field, did some research, and found a project called OrCam MyEye.

OrCam MyEye is a product designed to help visually impaired people. The device is small and wireless, and it can be easily mounted on the user’s glasses. Through visual recognition, OrCam MyEye detects where and what the user is looking or pointing at, and then announces what it sees through a speaker. This kind of visual recognition is a big step in helping the visually impaired: since they have trouble seeing, the device becomes a second pair of eyes.
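In spirit, the pipeline is “recognize, then speak.” Here is a toy browser sketch of that idea using ml5.js image classification and the Web Speech API; OrCam’s actual recognition software is proprietary, so this is only an analogy:

```javascript
let classifier;

function setup() {
  noCanvas();
  const video = createCapture(VIDEO);
  video.hide();
  // MobileNet is a general-purpose classifier, standing in for OrCam's
  // proprietary recognition of text, faces, and products.
  classifier = ml5.imageClassifier('MobileNet', video, classifyLoop);
}

function classifyLoop() {
  classifier.classify((err, results) => {
    if (!err && results[0].confidence > 0.5) {
      // Read the top label aloud, as OrCam does through its speaker.
      speechSynthesis.speak(new SpeechSynthesisUtterance(results[0].label));
    }
    setTimeout(classifyLoop, 3000); // re-check every few seconds
  });
}
```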

While learning about this product, we found that it could be improved further to make it even more convenient. One improvement we thought of is adding pupil tracking, so users would not have to point but could simply move their eyes. Another is extending the distance at which it detects objects, so users could ‘see’ a broader view.


Week 2: p5 Sketch – Samantha Cui

The sketch I created is a Dandelion.

How it works is that the flower develops more petals as time increases. Once it is large enough, the user can click the mouse to create a “blown” effect on the dandelion: all the petals disappear, and the sketch starts drawing a new one all over again.

The main technique I used is the rotation of a quadrilateral I drew, filled with white to look closer to a dandelion. Then I drew its center and its stem. To create the ‘blow’ effect, I just used mousePressed() and clear(). A minimal reconstruction of the idea is sketched below.
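Here is a minimal reconstruction of that logic, assuming the petals are white quads rotated step by step around the flower’s center (the original sketch may differ in its details):

```javascript
let petals = 0; // how many petals have grown so far

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(135, 206, 235);
  stroke(100, 160, 90);
  strokeWeight(4);
  line(width / 2, height, width / 2, height / 2); // the stem

  // Grow one more petal every 30 frames, up to a full circle of 40.
  if (frameCount % 30 === 0 && petals < 40) petals++;

  noStroke();
  translate(width / 2, height / 2);
  for (let i = 0; i < petals; i++) {
    rotate(TWO_PI / 40); // the same quad, rotated one step further each time
    fill(255);
    quad(0, -5, 60, -12, 80, 0, 60, 12);
  }
  fill(240, 230, 140);
  circle(0, 0, 30); // the flower's center
}

function mousePressed() {
  clear();    // the "blown" effect: wipe the canvas...
  petals = 0; // ...and start growing a new dandelion
}
```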

MLNI Week 2: ML/AI Case Study – Alex Wang

AI Technology in music production

Intro:

Lately I have become really interested in music production, and I realized that many of the new technologies available for music production are actually powered by machine learning!

Phases of music production:

There are multiple phases in the production of a song: a composer writes the song, a producer then creates it using a digital audio workstation (DAW), and an audio engineer mixes and masters the track to perfect its dynamics and clarity.

The role of an audio engineer:

The audio engineer actually plays a really big role in the making of a song. They are not required to have any knowledge of song structure, music theory, or instrumental skills, yet they are the ones who make the song sound perfect without creating anything themselves. An audio engineer can spend a whole career perfecting these skills; it is a very complicated job.

AI as audio engineer:

AI is very good at replacing humans in areas that do not require creativity but do require skill, so music mastering is exactly the kind of task people have been working to hand over to AI. You do not need to create anything, but you do need very good ears and experience to perfect a song.

I attended a talk by Kai-Fu Lee here at NYUSH a year ago, where he discussed which types of jobs are easily taken over by AI and which need more of a human element. Audio engineers are highly respected and take years to hone their skills, yet the job requires neither creativity nor human compassion, which is why it is currently a target to be replaced by technology.

Landr/Ozone:

Existing services like Landr and Ozone are already using AI as a tool to master tracks for you. With these convenient services, along with advancements that make DAWs and samples more accessible, it is actually really easy to start making professional-sounding music nowadays.
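Mastering involves far more than level-setting, but as a toy illustration of its most basic concern, here is a small p5.sound sketch (the file name is hypothetical) that measures a track’s headroom, the gap that a mastering limiter, human-driven or AI-driven, would close:

```javascript
let song;

function preload() {
  song = loadSound('my_track.mp3'); // hypothetical file name
}

function setup() {
  noCanvas();
  // Scan the waveform for its loudest peak (values range from -1 to 1).
  const peaks = song.getPeaks();
  let maxPeak = 0;
  for (const p of peaks) maxPeak = Math.max(maxPeak, Math.abs(p));

  // Headroom in decibels: how far below full scale the loudest moment sits.
  // AI mastering services like Landr or Ozone also shape EQ, compression,
  // and stereo width; closing this gap is only the simplest part.
  const headroomDb = -20 * Math.log10(maxPeak);
  console.log(`Peak ${maxPeak.toFixed(3)}, headroom ${headroomDb.toFixed(1)} dB`);
}
```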

https://www.youtube.com/watch?v=43Uad9C6LeQ

Articles:

Will AI Revolutionize the Way Artists Make Music and Find Samples? LANDR Says Yes…

Landr raises $26 million for AI-powered music creation tools

“Original” work with a Python neural composer + iZotope Ozone mastering:

I made a song using melodies composed by a Python neural network: I did the producer part of the creation process, while Ozone’s AI did the mastering of the track.

Reflection:

Over the past few years I have realized that AI is capable not only of replacing humans in physical labor but also of being an amazing resource in the world of the arts. Even though it might not yet be capable of replacing artists, it is now a very strong tool for assisting in the making of quality products. It is closing the gap between a professional musician, with an expensive studio, a whole team of producers and audio engineers, and professional gear, and any ordinary person with a laptop.