Archives for March 2020
NOC Midterm Project Concepts: Atomic Structure & Particle Movement
Abstract
Currently, we are working on forces, and attraction catches my eye! I could never have imagined how we could work with physics in code. I have also always been inspired by the icon of Atom, which depicts an atomic structure: a nucleus with several electrons surrounding it. I think this is the beauty of nature and of the world. Everything consists of millions of atoms like this, which is really amazing. A lot of forces have been covered in class, like friction, attraction, gravity, etc. However, these forces are all at the level of macrophysics. We never touch the micro world: atoms, electrons, protons, neutrons, and even quarks. So for the midterm project, I want to explore the forces inside the micro world. I want to build an atom sketch that shows the atomic structure and the movement of each component of an atom.
Icon of Atom (the text editor)
Inspiration & Reference
What Does An Atom REALLY Look Like?
This YouTube video perfectly explains what an atom looks like, and it also reveals what the movement inside an atom looks like. It is different from how planets move around the sun. To learn more about the movement inside an atom, I kept researching on the Internet.
In the article “Q: How do electrons move around the nucleus of an atom?” by TAMRA REALL and DEANNA LANKFORD of MU’s Office of Science Outreach, the authors say, “Electrons are found in different levels — or orbitals — surrounding the nucleus. The electrons can be found at any point in their orbital. The orbitals can be shaped as a sphere, as lobes — which kind of look like two squashes put together at the small ends — or in the shape of a doughnut around the nucleus.”
Visual Reference
It should look something like this.
Supporting Techniques
As I researched, the force that drives the electrons’ movement is a force that exists between charged particles: the Coulomb force. The formula is

F = kₑ · q₁q₂ / r²

where kₑ is Coulomb’s constant (kₑ ≈ 9×10⁹ N⋅m²⋅C⁻²), q₁ and q₂ are the signed magnitudes of the charges, and the scalar r is the distance between the charges.
–Wikipedia
In my project, I plan to apply the Coulomb force to different particles. I think the major challenge is to make the objects move the way they are expected to according to Coulomb’s law.
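A minimal Python sketch of how the force computation might look, assuming a simple Particle class of my own; the class, the charge values, and the positions are placeholders for illustration, not the final sketch:

```python
# Sketch of Coulomb's law between two charged particles. The Particle class
# and all values are made-up placeholders; in a real animation the constant
# would be rescaled to canvas units.
K_E = 8.99e9  # Coulomb's constant, N*m^2/C^2

class Particle:
    def __init__(self, x, y, charge):
        self.x, self.y = x, y
        self.charge = charge

def coulomb_force(p1, p2):
    """Force exerted on p1 by p2 as an (fx, fy) vector.

    Like charges (q1*q2 > 0) repel; opposite charges attract.
    """
    dx, dy = p1.x - p2.x, p1.y - p2.y
    r = (dx**2 + dy**2) ** 0.5
    # F = k_e * q1 * q2 / r^2, directed along the line between the charges
    magnitude = K_E * p1.charge * p2.charge / r**2
    return (magnitude * dx / r, magnitude * dy / r)

proton = Particle(0.0, 0.0, +1.6e-19)
electron = Particle(5.3e-11, 0.0, -1.6e-19)  # roughly one Bohr radius away
print(coulomb_force(electron, proton))  # negative x: pulled toward the proton
```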
Plan
See the detailed plan at the following link:
Reference
- “What Does An Atom REALLY Look Like?”, YouTube
- “Coulomb’s law”, Wikipedia. https://en.wikipedia.org/wiki/Coulomb%27s_law
- “Q: How do electrons move around the nucleus of an atom?”, Columbia Daily Tribune. https://www.columbiatribune.com/article/20140115/lifestyle/301159869
NOC – W5 – Atom
Week05: Machine Learning Model Defining & Training Report
For this week’s assignment, I defined my first machine learning model and trained it on my computer, based on the previous setup and the tutorial videos/code.
Setup
First, I used the fashion_mnist code as the foundation to set up my cifar10 model. I changed several places:
- Import cifar10 in the first code block
- Import other optimizers from Keras
- Load the cifar10 data in the second code block
- Change the class names of the dataset
Then I spent 681 seconds downloading the cifar10 data.
I encountered some problems along the way. For example: the input of cifar10 is not the same as fashion_mnist’s — its labels come as one-element arrays rather than plain integers (at the time I didn’t know what the format was). I followed the error message and changed the code to: class_names[int(train_labels[index])]. A sketch of what this looks like is below.
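A short sketch of what that change amounts to, assuming the standard keras.datasets loader and the usual CIFAR-10 class ordering (the exact import path is my assumption):

```python
# Loading CIFAR-10 instead of Fashion-MNIST. Unlike fashion_mnist, whose
# labels have shape (60000,), cifar10's labels have shape (50000, 1), so each
# label is a one-element array and needs int() before indexing class_names.
from tensorflow.keras.datasets import cifar10

(train_images, train_labels), (test_images, test_labels) = cifar10.load_data()

class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']

index = 0
print(train_labels[index])                    # e.g. [6] -- an array, not an int
print(class_names[int(train_labels[index])])  # e.g. 'frog'
```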
Start training:
Finished training:
Optimizer: RMSprop
Loss function: Cross-Entropy
Total time: 136s
Loss: 1.5224
Accuracy: 0.4642
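A minimal sketch of the compile-and-train step that would produce numbers like these; the layer stack and the epoch count are my assumptions based on the fashion_mnist tutorial model, not the exact code I ran:

```python
# Simple dense model compiled with RMSprop and cross-entropy, as in the run
# above. Layer sizes and epochs are guesses modeled on the tutorial.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Flatten(input_shape=(32, 32, 3)),  # CIFAR-10 images are 32x32 RGB
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax'),   # 10 classes
])
model.compile(optimizer='rmsprop',
              loss='sparse_categorical_crossentropy',  # integer labels
              metrics=['accuracy'])
history = model.fit(train_images / 255.0, train_labels, epochs=10)
```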
Changing the optimizer and loss function
I changed the optimizer and loss function to SGD and hinge. I did not know much about SGD, but I learned about hinge from this loss-function cheat sheet:
https://ml-cheatsheet.readthedocs.io/en/latest/loss_functions.html#cross-entropy
Hinge is a loss function for classification, which fits the cifar10 project.
However, I got an error that I could not solve. Aven told me that the problem was that the input shape was not exactly what the next layer expects to receive. After that, I tried several other loss functions, but they all failed.
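From what I understand, the mismatch is likely because hinge loss compares labels and predictions elementwise, so integer labels of shape (n, 1) don’t match the 10-unit output. A sketch of the conversion that might have fixed it — this is my guess, not a fix I verified:

```python
# Hinge loss expects targets with the same shape as the network output, so
# the integer labels (shape (50000, 1)) must be one-hot encoded first.
# This is a guess at the fix, not something I confirmed in my runs.
from tensorflow.keras.utils import to_categorical

train_labels_onehot = to_categorical(train_labels, num_classes=10)  # (50000, 10)

# 'categorical_hinge' is the multi-class variant of hinge in Keras.
model.compile(optimizer='sgd', loss='categorical_hinge', metrics=['accuracy'])
model.fit(train_images / 255.0, train_labels_onehot, epochs=10)
```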
Changing only the optimizer
SGD+Cross-entropy
Time: 120s
Loss: 2.2906
Accuracy: 0.1748
Conclusion: compared to RMSprop, SGD is faster but worse in loss and accuracy.
Adam + Cross-entropy
Time: 125s
Loss: 1.5115
Accuracy: 0.4684
Conclusion: as an optimizer, Adam is almost as fast as SGD and about as accurate as RMSprop.
Data augmentation method
I tried to use a data augmentation method to enrich the dataset, but I failed. I used the code below:
```python
# example of vertical shift image augmentation
from numpy import expand_dims
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
from keras.preprocessing.image import ImageDataGenerator
from matplotlib import pyplot
# load the image
img = load_img('bird.jpg')
# convert to numpy array
data = img_to_array(img)
# expand dimension to one sample
samples = expand_dims(data, 0)
# create image data augmentation generator
datagen = ImageDataGenerator(height_shift_range=0.5)
# prepare iterator
it = datagen.flow(samples, batch_size=1)
# generate samples and plot
for i in range(9):
    # define subplot
    pyplot.subplot(330 + 1 + i)
    # generate batch of images
    batch = it.next()
    # convert to unsigned integers for viewing
    image = batch[0].astype('uint8')
    # plot raw pixel data
    pyplot.imshow(image)
# show the figure
pyplot.show()
```
I adjusted this code to fit into my project, but I got an error. After googling, I found out that I needed the Pillow package to run this code, so I installed it. However, I still got the same error.
NOC – W4 – Impulsion, Wind & Gravity of Water Fountains
This week I refined my Vector Fountain from last week. I applied three kinds of forces: the impulsion of the fountain, gravity, and wind blowing from both sides. I used the map() function to connect the force vectors’ magnitudes with the mouse position.
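A plain-Python stand-in for the force logic (the actual sketch uses p5.js and its built-in map(); the Drop class, force magnitudes, and canvas width here are made-up stand-ins):

```python
# Stand-in for the fountain's force logic: one drop gets a single upward
# impulsion, then gravity every frame, plus a wind force mapped from the
# mouse position. All magnitudes are made up for illustration.
def map_range(value, in_min, in_max, out_min, out_max):
    """Like p5's map(): rescale value from one range to another."""
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

GRAVITY = (0.0, 0.3)     # constant downward pull
IMPULSION = (0.0, -8.0)  # one-time upward kick when a drop is emitted

class Drop:
    def __init__(self, x, y):
        self.pos = [x, y]
        self.vel = [0.0, 0.0]

    def apply_force(self, fx, fy):
        self.vel[0] += fx
        self.vel[1] += fy

    def update(self):
        self.pos[0] += self.vel[0]
        self.pos[1] += self.vel[1]

def wind_from_mouse(mouse_x, width=400):
    # Map the mouse position across the canvas to a wind force in [-1, 1].
    return map_range(mouse_x, 0, width, -1.0, 1.0)

drop = Drop(200, 380)
drop.apply_force(*IMPULSION)                      # launch upward once
for frame in range(10):
    drop.apply_force(*GRAVITY)                    # gravity every frame
    drop.apply_force(wind_from_mouse(300), 0.0)   # wind follows the mouse
    drop.update()
print(drop.pos)
```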
Week4: Writing assignment
I think neural networks are inspired by the biological neuron structure. As we all know, information is transmitted through each neuron in the form of an electric signal. Outside stimuli are converted into neurotransmitters that are eventually received by the brain, and human beings react according to the influence they receive from those neurotransmitters. Neural networks, similarly, receive input data and process it through calculation. Through the activation function, the input is transformed into the “neurotransmitter” of neural networks — the result of the activation function (0 or 1, 0 or the sum of the inputs, etc., depending on the activation function). These “neurotransmitters” are passed from the hidden layer to the neurons of the next layer. After going through all the layers of the network, the program reaches the final output and reacts according to it. Hence, we can conclude that neural networks mimic the operating principle of the human body’s neural structure and brain.
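A tiny sketch of that analogy: one artificial neuron summing weighted inputs and firing through a step activation (the weights, bias, and inputs are made-up values):

```python
# One artificial neuron: weighted sum of inputs, then an activation function.
# With a step activation the output is 0 or 1, like the all-or-nothing firing
# of a biological neuron. All numbers are made up for illustration.
def step(x):
    return 1 if x > 0 else 0

def neuron(inputs, weights, bias, activation=step):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return activation(total)

print(neuron([0.5, 0.9], [1.0, -0.4], bias=-0.1))  # total = 0.04 > 0, so: 1
```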
However, there are also differences between neural networks and the neural structure of the human body. Since a program makes predictions instead of responding directly as the human nervous system does, we need abundant data to train the model so that the network’s accuracy improves. What’s more, the correcting process runs from the output layer back to the input layer, which is a mechanism the human nervous system doesn’t have. From the output layer to the input layer, each layer corrects the mistakes it makes. But in the human nervous system, the conduction of a nerve signal has directionality: there is no way a neuron can reverse the direction of the signal’s transmission and pass it back to the previous neuron.
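A minimal numeric sketch of that backward correction for a single weight, just to show the error flowing from the output back into the weight (all numbers are made up):

```python
# Error correction flows backward: compare the output to the target, then
# nudge the weight by the gradient of the squared error. Made-up numbers.
w, lr = 0.5, 0.1
x, target = 2.0, 3.0
for i in range(5):
    y = w * x                      # forward pass
    grad = 2 * (y - target) * x    # d/dw of the squared error (y - target)^2
    w -= lr * grad                 # backward correction of the weight
    print(f"step {i}: y={y:.3f}, w={w:.3f}")  # w moves toward 1.5
```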
Deep neural networks have something in common with the human nervous system, but we cannot say that deep neural networks completely simulate it, even though the two solve problems in broadly similar ways.
NOC – W3 – Fountains with vector
NOC – W3 – Research
James Turrell is my favorite contemporary artist. He is a master of space and light in the field of contemporary art. I really love the idea of “light and shadow”, so I tried to get inspiration from his art pieces.
Link: http://rodencrater.com/spaces/alpha-east-tunnel/
This art project is a tunnel designed for the observation of light. As the tunnel stretches forward, ring lights running from bottom left to bottom right gradually pile up, bringing the audience a sense of speed and space. Walking in this passageway, people seem to be immersed in a sci-fi sense of space and light. As the distance grows, layer upon layer of light eventually merges, becoming a channel of light in the viewer’s vision. This reminds me of a black hole. No one knows what it looks like inside a black hole. No light? Spaciousness? It provokes my imagination about light and space inside a black hole.
The color James Turrell chose is really interesting. First, it is light in tone: the color looks blue, but there is something white mixed in. With the overlay of color and the increase in distance, the whole surface inside the tunnel appears dark blue and black. A feeling of mystery and the unknown strikes me when my eyes reach the end of the tunnel.
In general, this project gives me some keywords: light, space, depth, the mysterious, and the unknown. I would sum up my perception of this project as “the desire for exploration”.
Week3 – Case Study 3: Two-Stream Convolutional Networks for Dynamic Texture Synthesis
Computer Vision art project: Two-Stream Convolutional Networks for Dynamic Texture Synthesis
This week I did a case study on “Two-Stream Convolutional Networks for Dynamic Texture Synthesis”.
Link: https://ryersonvisionlab.github.io/two-stream-projpage/
This project mainly utilizes two pre-trained convolutional networks (ConvNets): one trained for object recognition, which captures a texture’s appearance, and one trained for optical flow prediction, which captures its dynamics. Through the recognition and analysis of a dynamic texture, the texture of each frame is encapsulated separately, and the model is initialized and optimized to form many different subsequences. Typically a subsequence has 12 frames. This is the basic process of generating a dynamic texture.
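As I understand it, each frame’s texture is summarized by Gram matrices of ConvNet features, in the style-transfer tradition; here is a small sketch of that statistic, where the “feature map” is random noise standing in for real ConvNet activations:

```python
# A frame's texture can be summarized by a Gram matrix: correlations between
# ConvNet feature channels, averaged over all spatial positions. The feature
# map below is random noise standing in for real activations.
import numpy as np

def gram_matrix(features):
    """features: (height, width, channels) -> (channels, channels) Gram matrix."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)  # one row per spatial position
    return flat.T @ flat / (h * w)     # channel-by-channel correlations

features = np.random.rand(16, 16, 64).astype(np.float32)
print(gram_matrix(features).shape)  # (64, 64)
```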
As we can see in this example, the water flow in the style of the painting is the result of recognizing and synthesizing the real water-flow texture shown in the top middle.
Data source: the authors applied their dynamic texture synthesis process to a wide range of textures selected from the DynTex database, as well as others collected in the wild. They provided nearly 60 synthesized results in total.
To keep the synthesized dynamic texture from simply being identical to the original texture, the subsequences are optimized and re-initialized — for example, the first frame of a subsequence becomes the last frame of the preceding subsequence. Therefore, the differences between the original texture and the synthesized texture can be seen with our own eyes as the result plays in an endless loop.
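A toy sketch of that subsequence scheme using frame indices (the 12-frame length comes from the project page; the frame count here is arbitrary):

```python
# Toy illustration of the subsequence scheme: 12-frame subsequences where the
# first frame of each is the last frame of the one before it, so the
# synthesized texture chains into an endless loop.
frames = list(range(34))  # stand-in frame indices
sub_len = 12
subsequences = [frames[i:i + sub_len]
                for i in range(0, len(frames) - 1, sub_len - 1)]
for s in subsequences:
    print(s)
# [0..11], [11..22], [22..33] -- each shares its boundary frame
```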
Besides, this AI model can transfer one dynamic texture onto another. As the documentation page says, “The underlying assumption of our model is that the appearance and dynamics of a dynamic texture can be factorized. As such, it should allow for the transfer of the dynamics of one texture onto the appearance of another”.
General feeling:
This project is a little different from the previous CV cases I studied. First, it is not based on a camera but on texture inputs (images/videos). It is such an ingenious project: we can use this model to make something static (like a painting) dynamic by analyzing textures that exist in nature.