Week 4 AI Arts – What is the relationship between neural networks and the human brain? (Ronan)

With the development of artificial intelligence, more and more attention has been drawn to the relationship between neural networks and the human brain. What are the similarities? Why do we call them neural networks? Can computers actually “think” like human beings?

As we all know, there are many things a computer can do better than a human, such as calculating the square root of a number or searching the web. At the same time, there are things the human brain is better at, such as imagination and inspiration. To combine the strengths of both, scientists invented neural networks to simulate the human brain and help machines behave more like us. In my view, a neural network is a structure analogous to the brain’s neurons, used for processing information and making decisions.

One reason we want machines to reason more like human beings is that we develop technology to improve our quality of life, so we need it to put itself in a human’s position and understand human behavior better. Another reason is that human beings can learn and gain knowledge from previous experience, and we want computers to have that ability too, so that they can learn on their own at much higher speed than we can.

In the human nervous system, a neuron has three key parts: the dendrites (the input mechanism), the soma (the calculation mechanism), and the axon (the output mechanism). A computer neural network has an equivalent structure: incoming connections, a linear calculation followed by an activation function, and output connections. (see pictures below)
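The parallel can also be sketched in code. Below is a minimal, illustrative artificial neuron in JavaScript; the inputs, weights, and bias are made-up numbers chosen only for demonstration:

```javascript
// A minimal artificial neuron: weighted sum of inputs plus an activation.
// All numbers here are arbitrary, for illustration only.

// "Dendrites": incoming signals and the strength of each connection.
const inputs  = [0.5, 0.3, 0.9];
const weights = [0.4, -0.2, 0.7];
const bias = 0.1;

// "Soma": the linear calculation (weighted sum plus bias).
function weightedSum(inputs, weights, bias) {
  let sum = bias;
  for (let i = 0; i < inputs.length; i++) {
    sum += inputs[i] * weights[i];
  }
  return sum;
}

// "Axon": the activation function deciding how strongly the neuron fires.
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

const output = sigmoid(weightedSum(inputs, weights, bias));
console.log(output); // a value between 0 and 1, here ≈ 0.70
```

The mapping is direct: the input array plays the role of the dendrites, the weighted sum plays the role of the soma, and the activation function decides whether and how strongly the neuron “fires”.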

 

According to Yariv Adan, “Plasticity — one of the unique characteristics of the brain, and the key feature that enables learning and memory is its plasticity — ability to morph and change. New synaptic connections are made, old ones go away, and existing connections become stronger or weaker, based on experience. Plasticity even plays a role in the single neuron — impacting its electromagnetic behavior, and its tendency to trigger a spike in reaction to certain inputs.” This plasticity is also the key to training computer neural networks.
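In an artificial network, this “plasticity” corresponds to adjusting connection weights based on experience. As a hedged sketch (a single linear neuron, made-up training data, and plain gradient descent rather than any particular framework):

```javascript
// "Plasticity" in an artificial neuron: a connection weight strengthens or
// weakens based on experience (training examples).
// Toy task: learn the relationship y = 2x from three made-up examples.
let w = 0;                 // connection strength, starts neutral
const rate = 0.1;          // learning rate: how fast the connection changes
const examples = [[1, 2], [2, 4], [3, 6]];

for (let epoch = 0; epoch < 200; epoch++) {
  for (const [x, y] of examples) {
    const prediction = w * x;
    const error = prediction - y;
    w -= rate * error * x;  // strengthen or weaken the connection
  }
}
console.log(w); // ≈ 2: the connection has "learned" from experience
```

Just as in the brain, nothing except the connection strength changes: the structure stays fixed, and learning is entirely a matter of repeated small weight updates.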

However, although neural networks are inspired by human brains, the ML implementation of these concepts has diverged significantly from how the brain works. 

First of all, neural networks are far less complex than human brains. This is not just a matter of the number of neurons but also of the internal complexity of a single neuron.

Second of all, power consumption: the brain is an extremely efficient computing machine, consuming on the order of 10 watts, which is about one third the power consumption of a single CPU (Adan).

In a word, although neural networks are inspired by human brains and there are indeed many similarities between the two, there are still key differences: the human brain is far more complex and far more energy-efficient than artificial neural networks.

Sources:

Do neural networks really work like neurons?

https://medium.com/swlh/do-neural-networks-really-work-like-neurons-667859dbfb4f

What are Artificial Neural Networks?

https://www.forbes.com/sites/bernardmarr/2018/09/24/what-are-artificial-neural-networks-a-simple-explanation-for-absolutely-anyone/

MLNI – Week 02: Machine Learning Research and Mock Project (Ronan & Tiger)

Partner: Tiger Li

Click here to get the slides!

Inspiration:

This project is an AI-powered music generator created by CodeParade. It uses two algorithms: an auto-encoder and PCA (Principal Component Analysis).
An auto-encoder is a type of artificial neural network used to learn efficient data codings in an unsupervised manner. PCA transforms a set of observations of possibly correlated variables into a set of linearly uncorrelated variables called principal components; it is often used in exploratory data analysis and for building predictive models.
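As an illustrative sketch of what PCA computes (not CodeParade’s actual implementation), here is PCA by hand on made-up 2-D points: center the data, build the 2×2 covariance matrix, and take its leading eigenvector as the first principal component.

```javascript
// PCA on 2-D points, done by hand: the first principal component is the
// leading eigenvector of the covariance matrix. Toy data only.
const points = [[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2],
                [3.1, 3.0], [2.3, 2.7], [2.0, 1.6], [1.0, 1.1],
                [1.5, 1.6], [1.1, 0.9]];

function firstPrincipalComponent(pts) {
  const n = pts.length;
  const meanX = pts.reduce((s, p) => s + p[0], 0) / n;
  const meanY = pts.reduce((s, p) => s + p[1], 0) / n;

  // Entries of the (sample) covariance matrix.
  let cxx = 0, cxy = 0, cyy = 0;
  for (const [x, y] of pts) {
    const dx = x - meanX, dy = y - meanY;
    cxx += dx * dx; cxy += dx * dy; cyy += dy * dy;
  }
  cxx /= n - 1; cxy /= n - 1; cyy /= n - 1;

  // Larger eigenvalue of the 2x2 covariance matrix (closed form).
  const trace = cxx + cyy;
  const det = cxx * cyy - cxy * cxy;
  const lambda = trace / 2 + Math.sqrt((trace * trace) / 4 - det);

  // Corresponding eigenvector, normalized to unit length
  // (this formula assumes the covariance cxy is not zero).
  let vx = lambda - cyy, vy = cxy;
  const len = Math.hypot(vx, vy);
  return [vx / len, vy / len];
}

console.log(firstPrincipalComponent(points));
// ≈ [0.678, 0.735]: the direction of greatest variance in the data
```

Projecting each point onto this direction compresses the 2-D data down to one number per point with the least possible loss of variance, which is exactly the kind of dimensionality reduction the music generator relies on.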

Our first idea:

What?

We often hear the phrase “dance to the music”: traditionally, a choreographer designs dance moves based on the melody. But how about “music to the dance”? What if we used the movements themselves to generate the composition?

After discussing with my partner, we came up with the idea of using this model to generate different music and melodies based on the artist’s movements.

How?

First, we would use different pieces of choreography as data to train the model. Then a camera captures the artist’s movement, and music is generated based on that movement.

Why?

1. An embodiment of Zero UI

2. Makes a single dancer visually command more space

3. A better, more immersive experience for the audience

Our second idea:

What?

The second idea is to use this technology to generate new music in a specific artist’s style, especially that of artists who have passed away.

How?

1. Use the artist’s original music as data input to train the model.

2. Use the model to generate new music in the artist’s style.

3. Coordinate lighting with the music, which is possible because the generated music is predictable in a synchronized system.

Why?

First of all, some artists who have passed away still have a huge fan base, and this technology already exists. So why not use it, letting artificial creativity emulate the human aspect of a live performance?

MLNI Week 02: P5.js Basics (Ronan)

Fractal Tree.

An L-system is an algorithm for modeling cellular growth. L-systems are actually text-based, but I want to use one to generate graphics. An L-system involves three things: an alphabet, an axiom (the starting string of the L-system), and a set of rules. It is a recursive way to generate sentences by rewriting the same strings over and over again. There are many different ways to interpret an L-system, for example as a poem or as a melody.
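A minimal sketch of the rewriting step in JavaScript, using the classic “algae” L-system (axiom A, rules A → AB and B → A) as an example:

```javascript
// L-system string rewriting: start from the axiom and repeatedly replace
// each character according to the rules. Classic "algae" example.
const axiom = "A";
const rules = { A: "AB", B: "A" };

function generate(axiom, rules, iterations) {
  let sentence = axiom;
  for (let i = 0; i < iterations; i++) {
    let next = "";
    for (const ch of sentence) {
      // If a rule exists for this character, apply it; otherwise keep it.
      next += rules[ch] !== undefined ? rules[ch] : ch;
    }
    sentence = next;
  }
  return sentence;
}

console.log(generate(axiom, rules, 4)); // "ABAABABA"
```

In p5.js, each character of the resulting sentence could then be interpreted as a drawing instruction (for example: draw a branch, turn, save or restore the drawing state) to grow a fractal tree.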

Week 1 MLNI: Reading Response and Presentation(Ronan)

Click here to get my presentation slides.

What I think of Zero UI:

The best interface is no interface.  – Golden Krishna

After reading the articles and watching the lectures, I think that zero UI really means a “screenless user interface”. Just as Golden Krishna says, “the best interface is no interface”. I really like the idea that designers need to think “non-linearly”, as John Brownlee indicated in his article, which requires designers to think beyond a 2D screen and work in a 3D environment, combining different senses such as sound and vision. This has led me to wonder: how does zero UI influence our lives now? Through what means can zero UI change our life experience?

How zero UI changes our lives:

Although zero UI is a fairly new phrase, it has already been applied in several different areas of our lives.

1. Zero UI and Games:

八分音符酱 (Ba Fen Yin Fu Jiang) is a Japanese platforming game where the player controls the main character using their voice: the higher the pitch, the higher the character jumps. Although it is just a simple game, it really changes our traditional way of playing platform games, which usually relies on a keyboard. Click here to see the video

2. Zero UI and Business

Amazon Alexa is a virtual assistant embedded in the smart speaker called Echo, and it now works with both Uber and Starbucks. When installed in one’s home, Alexa records the geographical information automatically and connects itself to Uber’s and Starbucks’ services. The user simply says “Book a ride to ..” or “I’d like to order ..” to Alexa, and it completes the order.

3. Screenless interface: wearable technology

Huawei has collaborated with Gentle Monster, a Korean eyewear brand, to develop smart sunglasses. The user just needs to tap the glasses to answer phone calls or play music. Click here to learn more.