Week 04 Assignment: The Relationship Between Neural Networks and Biological Neurons - Crystal Liu

Overview

At first I thought that the neural network was a work of bionics, meaning that it mainly mimicked how biological neurons work. But that was just a guess based on their similar names. After doing some research on neural networks and biological neurons, as well as their connections and differences, I came to a new conclusion about their relationship. The neural network was inspired by biological neurons: at first, researchers wanted to imitate the way biological neurons work and apply that method to machines. However, since our biological neural network has evolved over millions of years, it has reached a considerably mature level, so it is impossible for machine learning to completely copy its methods or put them into practice. Therefore, in the later stages of development, researchers created new ways to approach or even reach the effect of biological neurons, but essentially a neural network is just a mathematical or computational model. That is to say, neural networks only loosely model biological neurons, and the connection between the two is getting weaker and weaker.

General introduction

The most essential element of the biological neuron is the nerve impulse, an electrical event produced by neurons. Nerve impulses let neurons communicate with each other, process information and perform computation. When a neuron spikes, it releases a neurotransmitter, a chemical that moves a tiny distance across a synapse and then reaches other neurons. In that way, one neuron is able to communicate with hundreds of other neurons.

A neural network is composed of connected artificial neurons, and each connection can transmit a signal to other neurons. It has three kinds of layers: the input layer, the hidden layers and the output layer. On the input layer, many neurons accept nonlinear information. The output layer shows the result after the input information has been transmitted and analyzed. The hidden layers, composed of neurons and links, sit between the input layer and the output layer, and the number of layers and of neurons on each layer is not fixed. Generally speaking, the more neurons there are, the more nonlinear the neural network is and the more robust it is. Creating the model means adjusting the weights of each layer according to the errors made on training samples, and these corrections are propagated back through the network by the back-propagation algorithm.
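To make the "weighted connections" idea concrete, here is a minimal sketch of a single artificial neuron in JavaScript. The function names and the choice of sigmoid activation are my own, just for illustration:

```javascript
// The sigmoid activation squashes any number into the range (0, 1),
// which is what makes the neuron's response nonlinear.
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

// A single artificial neuron: a weighted sum of its inputs plus a bias,
// passed through the activation function.
function neuron(inputs, weights, bias) {
  let sum = bias;
  for (let i = 0; i < inputs.length; i++) {
    sum += inputs[i] * weights[i];
  }
  return sigmoid(sum);
}

// With all weights and the bias at zero, the weighted sum is 0,
// so the neuron outputs sigmoid(0) = 0.5.
console.log(neuron([1, 2], [0, 0], 0)); // 0.5
```

A whole layer is just many of these neurons sharing the same inputs, each with its own weight vector.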


Connections & Differences

Obviously artificial neurons loosely model biological neurons, and the way they communicate is similar. According to Wikipedia, both of them have the characteristics of nonlinearity, distribution, parallelization, local computation, and adaptability.

However, the connections among biological neurons are sparse, and a new connection can only form when a strong impulse makes a neuron release neurotransmitters; only small groups of neurons are strongly connected with each other. Artificial neurons, in contrast, are highly connected: the connection pattern is fixed, and the network cannot create new connections by itself. Since artificial neuron layers are usually fully connected, the sparsity of biological neurons can only be simulated by introducing weights of 0 to mimic the lack of a connection between two neurons.
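The zero-weight trick can be shown in a tiny fully connected layer (all the numbers here are made up for the demonstration): a weight of 0 makes the corresponding input completely invisible to that output neuron, as if the connection did not exist.

```javascript
// A fully connected layer: weights[j][i] connects input i to output neuron j.
// A weight of 0 "removes" that connection.
function layerOutput(inputs, weights) {
  return weights.map(row =>
    row.reduce((sum, w, i) => sum + w * inputs[i], 0)
  );
}

const weights = [
  [0.5, 0],    // neuron 0 ignores input 1 entirely (its weight is 0)
  [0.25, 0.5], // neuron 1 listens to both inputs
];

// No matter how extreme input 1 is, neuron 0's output never changes.
console.log(layerOutput([1, 99], weights)); // [ 0.5, 49.75 ]
```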

I also watched a really interesting video demonstrating the difference in operating mechanism between biological and artificial neurons. The video takes a baby and candy as an example. When the baby sees the candy, an impulse connects the neurons of the mouth with the neurons of the hands; the signal travels along these connected neurons and then produces the feedback: hold out a hand and make a gesture for the candy.


But an ANN has already formed a complete system and has learnt from countless examples in its database, so it knows what to do after seeing candy. Each time, it uses the back-propagation algorithm to check its output and make corrections.
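What "making corrections" means can be sketched with the smallest possible case: one linear neuron whose weight is repeatedly nudged against the gradient of its squared error. The learning rate, target and input values here are arbitrary choices for the toy example:

```javascript
// One gradient-descent correction step on a single linear neuron.
function trainStep(w, x, target, lr) {
  const prediction = w * x;
  const error = prediction - target; // how wrong the neuron currently is
  const gradient = error * x;        // derivative of (error^2 / 2) w.r.t. w
  return w - lr * gradient;          // move the weight to reduce the error
}

// Learn that 2 * w should equal 6 (so w should approach 3).
let w = 0;
for (let i = 0; i < 20; i++) {
  w = trainStep(w, 2, 6, 0.1);
}
console.log(w); // very close to 3
```

A real back-propagation pass does the same kind of update for every weight in every layer, using the chain rule to pass the error backwards.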

Conclusion

Therefore, as we can see, the ANN was inspired by biological neurons, but the connection between them is loose. At the late stage of its development, it abandoned the way biological neurons work and turned to statistical methods and algorithms to build models that receive, process and analyze information.

Reference:

https://towardsdatascience.com/the-differences-between-artificial-and-biological-neural-networks-a8b46db828b7

https://en.wikipedia.org/wiki/Artificial_neural_network#Learning

https://www.bilibili.com/video/av15997699?from=search&seid=4264978940427641437

MLNI: Object Interaction - Crystal Liu

My project for this week is inspired by a picture I’ve taken in Inner Mongolia.

The wheel in that picture is a good fit for the rotate() function, and I also wanted to add some interaction, such as clicking the mouse or pressing the keyboard. My original idea was that when the user moves the mouse, a bunch of ellipses of random size will appear around the mouse coordinates within a certain range. I created two objects, Bubble and Wheel. For the Wheel object I used the translate() function to set a new origin so that the wheel rotates the way I expect. But there was a problem: I don't know why my bubbles could not move with my mouse; they just rotated with my wheel, which looked weird. I suppose it has something to do with the translate() call in my Wheel class. After several failed tries I made some slight changes: I used the keyPressed() function to let users tap the arrow keys to change the color of the bubbles. To make them look more natural, I set random values for the size and the color, but I only randomized one of the RGB channels to keep the color in the pink range. In short, through this practice I found that I still need to learn more about the translate() function and the mouse and keyboard functions.
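One likely explanation (my guess, based on how p5 works): translate() and rotate() change the coordinate system for everything drawn afterwards in the same frame, so bubbles drawn after the wheel's transform inherit it; wrapping the wheel's transform in push()/pop() would keep it from leaking. The math that translate() + rotate() performs can be sketched in plain JavaScript, outside of p5:

```javascript
// Rotating a point (x, y) around a center (cx, cy) by `angle` radians.
// This is what translate(cx, cy) followed by rotate(angle) effectively
// does to every point drawn afterwards.
function rotateAround(x, y, cx, cy, angle) {
  const dx = x - cx; // step 1: translate so the center becomes the origin
  const dy = y - cy;
  return [
    cx + dx * Math.cos(angle) - dy * Math.sin(angle), // step 2: rotate
    cy + dx * Math.sin(angle) + dy * Math.cos(angle), // step 3: translate back
  ];
}

// A quarter turn takes (10, 0) to roughly (0, 10).
console.log(rotateAround(10, 0, 0, 0, Math.PI / 2));
```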

Link to my project

Week 03 Assignment - Crystal

Demonstration 

My project is called Good Omens. Here is a brief demonstration of my project.

Inspiration

After seeing all the examples, I was impressed by the ear one and the nose one. Then I reviewed the basic PoseNet example to see what I could create based on it. I found that this model can locate certain parts of the human body and face, such as our left and right eyes. The feedback is a small green ellipse and the name of each part. To me this feedback is a little boring, because its only aim is to give information. Therefore, my plan was to take advantage of this image recognition and localization technology to create an interesting, interactive project, and the initial step was to replace the original feedback with some funny pictures.

The next step was to think about the thesis and the content of my project. I was inspired by a poster for Good Omens, a British TV series. The poster is a combination of the angel, Aziraphale, and the demon, Crowley.

I think this poster is very interesting because it embodies the harmonious coexistence of opposite sides. So I wanted to make a kind of special effect that lets users act as an angel or a devil, and they can switch characters simply by hitting the space bar. Also, in order to show this harmonious coexistence, I exchanged some elements of the angel and the devil. For example, in my project the angel has a devil's aura while the devil has an angel's aura.

Technical issues 

In the process of programming I met lots of difficulties. At first my code did not work because it could not find the files, which were some related pictures. After asking my friends for help, I learned that I should create a new folder inside the project folder, put all my files in it, and mark the file path in the code so the program knows where they are. What's more, the functions for images and for GIFs are totally different. At first I just applied image() and loadImage() to GIFs and the sketch didn't run. Now I know that I should use createImg() together with [name].position() and [name].size() to display a GIF.

Once the images could be shown on the screen, another issue came up: when the ellipses became images, the positions were slightly off. I had to test several times and adjust the coordinates of the images to guarantee they were in the right place.
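The deviation I saw probably comes from the fact that image() draws from the top-left corner by default, while a keypoint marks the center of a body part. A small helper like the one below (a sketch of my own, outside p5) fixes that by offsetting by half the image size; inside p5, calling imageMode(CENTER) would achieve the same thing:

```javascript
// An image anchored by its top-left corner looks shifted down and to the
// right of a keypoint. Offsetting by half the width and height centers it.
function centerOnKeypoint(kx, ky, imgWidth, imgHeight) {
  return {
    x: kx - imgWidth / 2,
    y: ky - imgHeight / 2,
  };
}

// A 40x40 image centered on the keypoint (100, 200) should be drawn
// with its corner at (80, 180).
console.log(centerOnKeypoint(100, 200, 40, 40)); // { x: 80, y: 180 }
```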

It is also difficult for the model to tell the left wrist from the right wrist. My initial idea was to put different images on the left and right wrists, but the outcome was not what I expected, so I had to make some changes to get the final version of my project.

Reflection & Development 

Through this assignment I have become more familiar with the functions of JavaScript and the principles behind object and image classification. It reminds me of some special effects in TikTok, Snapchat and beauty cameras. Now I can make a special effect by myself, which is really exciting. However, this project still needs improving to solve the issues I mentioned before. My expectation for a mature version is a more interactive special effect with various forms of feedback. The user could change characters with specific gestures or by saying certain words. The background and the lighting could also change to fit the mood or style of the music, and we could use BodyPix to achieve this. Users would then fully interact with the project by moving their bodies, hearing sounds, shouting words and so on.

link to my code

MLNI Week 02 - p5 drawing functions (Crystal Liu)

Inspired by my phone case, I used p5 to draw a cactus. I mainly used shape-drawing functions such as rect(), triangle(), ellipse() and quad(). This helped me recall how to draw a polygon and how many parameters each shape needs. I also found some problems while coding. One of them is that I'm not familiar with the mouse and keyboard functions. At first I wanted to achieve this effect: a triangle appears on the screen when the mouse is clicked. I tried mouseClicked() but it didn't work. After several tries I searched online for a tutorial, but it still didn't work. So I had to give it up for the moment and try other functions first. If after the next class I still don't know how to achieve this effect, I will ask the professor for help. Finally it looks like this:
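For future reference, the usual pattern for this effect (my understanding, not my working code) is: p5 calls a global mousePressed() or mouseClicked() function automatically on each click, and because draw() clears the canvas every frame, each new shape has to be stored in an array and redrawn every frame. The storing part can be modeled in plain JavaScript outside p5:

```javascript
// Model of the p5 pattern: remember every click, redraw all shapes in draw().
const triangles = [];

// In p5 this function would be called automatically on each mouse press,
// with mouseX / mouseY available as globals; here we pass them in by hand.
function mousePressed(mouseX, mouseY) {
  triangles.push({ x: mouseX, y: mouseY }); // remember where to draw a triangle
}

// Simulate two clicks; draw() would then render both triangles each frame.
mousePressed(50, 60);
mousePressed(120, 80);
console.log(triangles.length); // 2
```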

My code

MLNI Week02- Talk to Books (Crystal Liu)

Introduction

For this week, my partner Lisa and I chose a project called Talk to Books as our inspiration for how machine learning can improve the ability of search engines to understand our everyday, spoken language.

Talk to Books is produced by Google; it is a search engine, but one that only targets books. Basically, the user can ask Talk to Books a question by typing or speaking, and it will list the relevant passages of books as possible answers. The accuracy of the answers depends on sentence integrity: inputting a complete sentence instead of a few keywords gets better answers. Here are some examples from my own experience as a user.

This is just a normal, practical question.

This one is more about a relationship issue, and it is more difficult to answer.

From those examples I suppose that Talk to Books is quite successful in understanding and analyzing our language, even emotional queries.

Research

I did some more research on the core technology and found Natural Language Processing (NLP). According to an article by Badreesh Shetty on Medium, NLP is a field of machine learning concerned with the ability of a computer to understand, analyze, manipulate, and potentially generate human language. This gave me an idea for an extension of Talk to Books: we can use NLP to enhance the ability of search engines to comprehensively understand the user's queries and provide more accurate answers.

Application 

At this stage, thanks to NLP, search engines can analyze and understand the words in a sentence as a whole, rather than analyzing each word singly and ignoring the connections among words. However, there are still many things about search engines that can be improved. For example, since the forms of input are now varied, the user can simply speak a question instead of typing it. It is therefore necessary to improve the engine's ability to understand oral language and automatically filter out meaningless filler words. It would also be better if the computer could understand both the explicit and the implicit meaning of our statements, and this requires a stronger ability to analyze human language. Ranking search results according to their relevance to the question and improving the accuracy of targeting audience groups are also potential ways that machine learning can improve search engines.
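Filtering out filler words is the simplest of these ideas, so here is a toy illustration of that one preprocessing step. The filler list and function names are made up by me for the sketch; real systems use much more sophisticated, language-aware methods:

```javascript
// A tiny, made-up sample of spoken filler words to strip from a query
// before it reaches the search engine.
const FILLERS = new Set(["um", "uh", "erm", "well"]);

function cleanQuery(spoken) {
  return spoken
    .toLowerCase()
    .split(/\s+/)                      // break the utterance into words
    .filter(word => !FILLERS.has(word)) // drop the meaningless fillers
    .join(" ");
}

console.log(cleanQuery("um well how do I uh make friends"));
// "how do i make friends"
```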

Presentation