Week #4 Writing Assignment The Neural Network and the Brain/Neuron Stuff —— Lishan Qin

I’ve always felt that the relation between artificial neural networks and the human brain is like the relation between planes and birds, motorcycles and horses, or radars and bats. The former inventions were all in some way inspired by the latter organisms, yet they do not actually share all of the features or abilities of those organisms. In my opinion, even though the invention of the neural network was inspired by the structure of the neurons in the human brain, there are still fundamental differences between how artificial neurons and biological neurons work. Thus, despite the amazing development of AI technology and its great potential to aid humans in various fields, artificial intelligence will never be the same as the intelligence of a human brain, nor will it replace or beat humans in the future, because they are fundamentally two different things.

The way AI works with an artificial neural network is a simplified mathematical model of how the human brain works with its biological neural network. While a biological neuron receives signals through its dendrites and sends signals down its axon to stimulate other neurons and trigger them to react accordingly, an artificial neural network mimics this process with a function that receives a list of weighted input signals and outputs a signal of its own if the sum of those weighted inputs reaches a certain threshold, or bias (Richárd). However incomplete the model is, it still gives a computer the chance to mimic learning from experience, and it already offers a great deal of innovative applications that help improve our lives in different ways.
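To make the idea concrete, here is a minimal sketch of the artificial neuron described above (my own illustration, not code from the cited article): the weighted inputs are summed and compared against a threshold to decide whether the neuron “fires”.

```javascript
// A toy artificial neuron: weighted inputs are summed, a bias is added,
// and the neuron "fires" (outputs 1) only if the total clears zero.
function artificialNeuron(inputs, weights, bias) {
  let sum = 0;
  for (let i = 0; i < inputs.length; i++) {
    sum += inputs[i] * weights[i]; // weight each incoming "signal"
  }
  return sum + bias > 0 ? 1 : 0;   // fire only if the weighted sum clears the threshold
}

// Example: two input signals with different importance
console.log(artificialNeuron([1, 0.5], [0.8, -0.3], -0.2));   // 1 (fires)
console.log(artificialNeuron([0.1, 0.9], [0.8, -0.3], -0.2)); // 0 (stays silent)
```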

However, just because the model is inspired by the brain’s neural network doesn’t mean the two work the same way. As far as I’m concerned, the learning of an artificial neural network and the learning of a human brain are fundamentally two different things. The neural network in our brain is not fixed once and for all; it changes all the time. When we learn, our brain can add or remove connections between neurons, and the strength of our synapses can be altered based on the task at hand. Artificial neural networks, on the other hand, have a predefined architecture, in which no further neurons or connections can be added or removed. Even when it is “learning” or “thinking”, it is doing so within that one predefined model. It has to run through every condition set in the model each time, without forming new connections or getting rid of old ones. Even if the output it gives may seem “creative” or “intelligent”, it’s clear that the artificial intelligence is only finding an optimal solution to a set of problems based on one fixed model.

To sum up, in my opinion, since the way artificial neural networks and human brains “learn” and act upon their “intelligence” is fundamentally different, the artificial intelligence of today is still a completely different matter from human intelligence. However diverse or creative its works may appear, it is still just finding an optimal solution to a set of problems based on one fixed model. It is not truly intelligent. Nonetheless, there is no denying that such technology can still aid humans and improve our lives in various fields to a great extent.

Sourcehttps://towardsdatascience.com/the-differences-between-artificial-and-biological-neural-networks-a8b46db828b7

Week #3 MLNI – Generative Art w/OOP —— Lishan Qin

Overview

For this week’s assignment, I used p5.js to create a drawing of a vortex of shuriken (a Japanese throwing weapon). First, I used beginShape(), curveVertex(), and endShape() to design the shuriken. Then I created a class called “object” to represent the shuriken. I put the x, y position of the shuriken in the constructor(), and wrote functions like move(), bounce(), and display() to let the shuriken bounce within the canvas. I then used translate(), scale(), and rotate() to make the shuriken appear to rotate. Finally, I created a list and wrote a for loop so that more shuriken appear on the canvas. I also changed the (x, y) in the translate() call to (mouseX, mouseY) so that users can draw the vortex with their mouse on their own. The code for this project can be found here; a minimal sketch of the structure is included below.
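The sketch below is a simplified reconstruction of the structure described above, not the original code: a class (here called Shuriken rather than “object”) with constructor(), move(), bounce(), and display(), an array of instances, and a rough four-pointed shape approximated with curveVertex(). The exact vertex coordinates and speeds are assumptions.

```javascript
let shurikens = [];

function setup() {
  createCanvas(600, 600);
  for (let i = 0; i < 10; i++) {
    shurikens.push(new Shuriken(random(width), random(height)));
  }
}

function draw() {
  background(30);
  for (let s of shurikens) {
    s.move();
    s.bounce();
    s.display();
  }
}

class Shuriken {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.xSpeed = random(-2, 2);
    this.ySpeed = random(-2, 2);
    this.angle = random(TWO_PI);
  }

  move() {
    this.x += this.xSpeed;
    this.y += this.ySpeed;
    this.angle += 0.05; // spin a little each frame
  }

  bounce() {
    if (this.x < 0 || this.x > width) this.xSpeed *= -1;
    if (this.y < 0 || this.y > height) this.ySpeed *= -1;
  }

  display() {
    push();
    // Swapping (this.x, this.y) for (mouseX, mouseY) here makes the
    // vortex follow the cursor, as described above.
    translate(this.x, this.y);
    rotate(this.angle);
    scale(0.5);
    beginShape();
    // a rough four-pointed blade outline built from curveVertex() points
    curveVertex(0, -60);
    curveVertex(15, -15);
    curveVertex(60, 0);
    curveVertex(15, 15);
    curveVertex(0, 60);
    curveVertex(-15, 15);
    curveVertex(-60, 0);
    curveVertex(-15, -15);
    curveVertex(0, -60);
    endShape(CLOSE);
    pop();
  }
}
```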

(Japanese shuriken)

Project Presentation:

Technical Problem

My initial idea was to have all of the shuriken move and bounce within the canvas while rotating around their own centers, with their number determined by the user’s input. But I found that the center of a shuriken was not as easy to locate as I thought, and there always seemed to be errors that made it rotate around some point other than its center, which made the whole image look weird. In the end, I decided to make it rotate around (mouseX, mouseY) so that the user is in charge of the whole painting and can make drawings on their own.
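For reference, one common fix for this kind of rotation problem (an assumption on my part, not the project’s actual solution): rotation in p5.js always happens around the current origin, so translating to the shape’s own position first and drawing its vertices relative to (0, 0) makes it spin around its center. The helper name here is hypothetical.

```javascript
// Hypothetical helper: spin a shape around its own center.
function drawSpinningShape(x, y, angle) {
  push();
  translate(x, y);   // move the origin to the shape's center
  rotate(angle);     // rotation now happens around that center
  rectMode(CENTER);  // draw the shape symmetrically around (0, 0)
  rect(0, 0, 40, 40);
  pop();
}
```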

Week #3 Assignment: ml5.js project —— Lishan Qin

MyCodehttps://drive.google.com/open?id=1Vci8NRnUh9j7PCPs_xzOobpghGQjaXq-

Project Name: “How to Train Your Dragon with Machine Learning”

Intro

For this week’s assignment, I used the ml5.js PoseNet model (starting from the PosenetBasics example) to develop an entertaining interactive project called “How to Train Your Dragon with Machine Learning”. My idea was to create an interactive AR app that lets users interact with a virtual creature on screen using their whole body in the physical world. With the help of the PosenetBasics example, I was able to make the user appear to wear a virtual Viking hat on screen and let them hang out with the virtual creature “Toothless” by making different gestures and actions in the real physical world.

Overview & Demo

The PoseNet model can recognize a person’s nose, eyes, and wrists, and provides the positions of these body parts. This lets me program “Toothless” to react differently to the user’s actions by changing its image and position. The project lets users become a dragon trainer and interact with “Toothless” through different gestures. First, the program recognizes the user’s face and puts a Viking helmet on the user’s head. Then, the user can make different actions to interact with “Toothless”, such as petting its head, poking its face, or raising the right wrist to ask “Toothless” to fly. A minimal sketch of this flow is shown below.
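The sketch below illustrates the general flow with the ml5.js PoseNet API; the image handling, threshold, and “fly” trigger are illustrative assumptions, not the original project code.

```javascript
let video;
let poseNet;
let pose; // most recent detected pose

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', (results) => {
    if (results.length > 0) pose = results[0].pose;
  });
}

function draw() {
  image(video, 0, 0);
  if (!pose) return;

  // Place a helmet image near the head using the nose position
  // (helmetImg would be loaded in preload(); omitted here).
  // image(helmetImg, pose.nose.x - 50, pose.nose.y - 80);

  // Trigger a reaction when the right wrist is raised above the nose.
  if (pose.rightWrist.confidence > 0.5 && pose.rightWrist.y < pose.nose.y) {
    // e.g. switch Toothless to its "flying" image here
    text('fly!', pose.rightWrist.x, pose.rightWrist.y);
  }
}
```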

Inspiration

The work that inspired me to build this project is Pokémon Go, developed by Niantic. Pokémon Go allows users to use GPS to locate, capture, battle, and train virtual creatures called Pokémon, which appear in the game as if they were in the player’s real world. A recent update of Pokémon Go added a new feature that allows users to take photos with virtual Pokémon in the real world. Nonetheless, even though I love this game and this update very much, I still find the interaction between trainers and Pokémon that the game provides to be limited. Players of Pokémon Go can only interact with these virtual creatures through their phones, tapping the screen to switch a Pokémon’s posture, rather than using their physical movements in the real world as input. Therefore, I wanted to create a deeper and more direct way for game players to interact with these virtual creatures in the real world.

Technical Issues & Future Improvement

My original plan was to make Toothless react differently to the user’s right hand and left hand. However, I found the model’s data for the right and left wrists to be highly unstable. It often mistook the right wrist for the left, and when only one wrist was on screen, it could not tell whether it was a left or a right wrist. Therefore, in my final work “Toothless” only reacts to the movement of the user’s right hand. Also, the size of the Viking helmet that appears on the user’s head does not automatically match the size of the user’s head. I believe there is an algorithm that could make this work, but I couldn’t figure it out (one possible approach is sketched below). In addition, due to the limited time to finish this project, there are also a lot of different outputs I wanted to try that I haven’t finished. For example, given more time, I’d like to add more sound effects for Toothless to create more diverse output.
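Here is one possible approach to the helmet-scaling problem, offered as an assumption rather than the project’s method: use the distance between the two detected eyes as a proxy for head size and scale the helmet image proportionally. The scaling factors and the drawHelmet helper are hypothetical.

```javascript
// Hypothetical helper: scale the helmet to the detected head size.
function drawHelmet(helmetImg, pose) {
  const eyeDist = dist(pose.leftEye.x, pose.leftEye.y,
                       pose.rightEye.x, pose.rightEye.y);
  const helmetWidth = eyeDist * 3;                                // tuning factor
  const helmetHeight = helmetWidth * (helmetImg.height / helmetImg.width);
  image(helmetImg,
        pose.nose.x - helmetWidth / 2,   // center horizontally on the nose
        pose.nose.y - helmetHeight * 1.2, // sit above the face
        helmetWidth, helmetHeight);
}
```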

Source: https://giphy.com/stickers/howtotrainyourdragon-httyd-toothless-httyd3-1yTgtHaRuuaFWUWk7Q

MLNI – Week #2 Case Study: Interface with ML or AI (Lishan Qin)

Intro:

For this week’s case research, my partner Crystal and I studied a new Google AI experiment called “Talk to Books”. Here is the link to our presentation slides.

What is it? Why is it special?

Talk to Books is one of Google’s latest AI experiments; it uses AI to let people talk to books and test word-association skills. It is not a traditional search engine that simply crawls the web and returns results closely related to the keywords users type in. Instead, its algorithm is trained via machine learning on human conversations to predict the next response in a conversation. It doesn’t search the web; it searches books. It allows users to explore ideas and discover books by getting quotes from books that respond to their queries. And since it is trained on human conversations, users often get better results by speaking to it in natural-language sentences than by typing keywords. Users can make a statement or ask a question, and the tool finds sentences in books that respond, with no dependence on keyword matching. In a sense, you are talking to the books and getting responses that can help you decide whether you’re interested in reading them.

For example, when you type “how can I stop thinking and fall asleep”, it gives you quotes from books as answers, as in an actual conversation. It makes users feel as if they are having a conversation with a real person rather than with a machine producing outputs based on input keywords.

How does it work? What are the techs involved?

It depends heavily on the development of a sub-field of AI known as word vectors, a type of natural language understanding that maps semantically similar phrases to nearby points based on the equivalence, similarity, or relatedness of ideas and language. It’s a way to enable algorithms to learn about the relationships between words, based on examples of actual language usage (Ray Kurzweil). A toy illustration of this “nearby points” idea is shown below.
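The snippet below is my own illustration, not Google’s implementation: if phrases are mapped to vectors, semantic closeness can be measured with cosine similarity, and the highest-scoring passage “responds” to the query. The 3-dimensional embeddings here are made up; real systems use hundreds of dimensions.

```javascript
// Cosine similarity between two vectors of equal length.
function cosineSimilarity(a, b) {
  let dot = 0, magA = 0, magB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    magA += a[i] * a[i];
    magB += b[i] * b[i];
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}

// Hypothetical embeddings for a query and two candidate passages.
const query = [0.9, 0.1, 0.3];
const passages = {
  'a passage about falling asleep': [0.8, 0.2, 0.4],
  'a passage about car engines':    [0.1, 0.9, 0.2],
};

// The passage with the highest similarity would be returned as the "response".
for (const [text, vec] of Object.entries(passages)) {
  console.log(text, cosineSimilarity(query, vec).toFixed(2));
}
```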

Potential Future Application

I think this experiment lays a foundation for the future development of search engines, as well as for the evolution of how machines perceive human minds. For example, Siri could become even more understanding of our commands, Google Search could provide a more personalized service, and technology could come to feel less cold and more “human”. Beyond helping existing technologies develop, I believe this experiment may also shed light on other innovative applications, such as an AI teacher. If an AI is able to really hold a conversation with users, it will make them feel more at ease and more encouraged to learn from it, rather than simply receiving answers from it.

Sources:

https://www.theverge.com/2018/4/13/17235306/google-ai-experiments-natural-language-understanding-semantics-word-games

https://experiments.withgoogle.com/talk-to-books

https://ai.googleblog.com/2018/04/introducing-semantic-experiences-with.html

MLNI – Week #2 p5 drawing functions —— Lishan Qin

I used p5.js to draw the image above. The link to the code is here. I used functions like rect(), ellipse(), and triangle() to draw the Totoro. Then I found the snowflake function among the p5.js examples. This example, contributed by Aatish Bhatia, uses an array of objects to hold the snowflake particles. It uses frameCount to keep track of time and initializes the coordinates of the snowflakes with assignments like this.posX = 0. I found this example so beautiful that I added it to my code to create a peaceful atmosphere for my Totoro. A simplified sketch of the particle idea is shown below.
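This is a reduced sketch of the snowflake-particle idea described above, not the original p5.js example or my final code: an array of snowflake objects, each falling and wrapping back to the top, with frameCount used as a simple clock for spawning new flakes.

```javascript
let snowflakes = [];

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(20, 30, 60);
  // spawn a new snowflake every few frames, using frameCount as a clock
  if (frameCount % 5 === 0) {
    snowflakes.push(new Snowflake());
  }
  for (let flake of snowflakes) {
    flake.update();
    flake.display();
  }
  // The Totoro would be drawn here with rect(), ellipse(), and triangle().
}

class Snowflake {
  constructor() {
    this.posX = random(width); // random horizontal starting position
    this.posY = -10;           // just above the top of the canvas
    this.size = random(2, 6);
  }

  update() {
    this.posY += this.size * 0.5;            // bigger flakes fall faster
    if (this.posY > height) this.posY = -10; // wrap back to the top
  }

  display() {
    noStroke();
    fill(255);
    ellipse(this.posX, this.posY, this.size);
  }
}
```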