This is the link to my project and its p5.js code: p5.js Web Editor | 2.R Lya interactive memories and dreams of Torico (p5js.org)
This is the picture of Torico’s original work:
This is a video documentation of this mini project: https://drive.google.com/file/d/1dsCAfhI41fNy0O_2ygROJB-ylF5Ekqrh/view?usp=drive_link
Torico showed me the code of her self-portrait and shared a dream in which she was the protagonist. In the dream, she was exploring a forest in search of treasure, and she was told that she would be 100% safe unless she stared in one specific direction. Holding the nearest tree branch could make her feel a little better, but the only way to truly rescue herself was simply to turn around and look somewhere else.
I couldn't come up with a good idea for the treasure part, but I had a clear picture of how to "realize" her dream with the code we've learned. I made her eye follow a "Look here!" symbol that moves with my mouse to show that she is exploring; I used a big red question mark to represent the direction she shouldn't look at, and if my mouse is on that question mark, her facial expression turns kind of insane. For the self-rescue part, I reproduced "holding the tree branch" by holding down the mouse, which makes her face look calmer. Besides, she mentioned that she loves different kinds of earrings, so I made her earrings spin and change color over time.
I spent a really long time on the code of this project, especially the eye movement. It would have been much easier if I had already learned the map() function, because back then I had to calculate by hand how far the eye should move in proportion to the mouse. I also used a long if/else chain to merge the different kinds of eye movement that are crucial to my idea, including the part controlled by the mouse. This is my final code (I added new comments next to the original code to explain what each part does):
// let eyes move with mouse
if (mouseX <= 80 && mouseY >= 340 && mouseY <= 420) {
  // mouse on the question mark area: eye vibrates
  fill(244, 204, 204); // eye looks red
  if (mouseIsPressed == true) {
    x1 = random(-0.57, 1);
    x2 = random(-1.45, 2);
    y1 = random(-0.57, 1);
    y2 = random(-1.45, 2); // eye vibrates a little
  } else {
    x1 = random(-2.57, 5);
    x2 = random(-15.45, 21);
    y1 = random(-2.57, 5);
    y2 = random(-15.45, 21); // eye vibrates chaotically
  }
} else {
  // mouse not on question mark area: eye moves with mouse
  fill(255); // eye looks normal
  if (mouseX <= 400) { // x-axis: mouse in canvas
    x1 = (mouseX - 170) / 66;
    x2 = (mouseX - 170) / 11; // different parts of the eye move by different amounts
  } else {
    x1 = 5;
    x2 = 21; // x-axis: mouse out of canvas
  }
  if (mouseY <= 400) { // y-axis: mouse in canvas
    y1 = (mouseY - 170) / 66;
    y2 = (mouseY - 170) / 11; // different parts of the eye move by different amounts
  } else {
    y1 = 5;
    y2 = 21; // y-axis: mouse out of canvas
  }
}

// eyes
push();
scale(0.8);
translate(-20, -35);
rotate(PI * 0.04);
ellipse(260 + x1, 210 + y1, 90, 90);
fill(0);
noStroke();
ellipse(260 + x2, 210 + y2, 50, 50);
pop();
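Looking back, the division calculations and the in/out-of-canvas branches could be rewritten with the map() and constrain() functions I learned later. This is just a rough standalone sketch of the idea, not my actual project code, and clamping with constrain() replaces the fixed out-of-canvas values, so the behavior at the edges is slightly different:

let x1, x2, y1, y2;

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(220);
  // clamp the mouse so the eye stops at the canvas edge
  let mx = constrain(mouseX, 0, 400);
  let my = constrain(mouseY, 0, 400);
  // map() reproduces (mouse - 170) / 66 and (mouse - 170) / 11,
  // but makes the input and output ranges explicit
  x1 = map(mx, 0, 400, -170 / 66, 230 / 66);
  x2 = map(mx, 0, 400, -170 / 11, 230 / 11);
  y1 = map(my, 0, 400, -170 / 66, 230 / 66);
  y2 = map(my, 0, 400, -170 / 11, 230 / 11);
  // the same two-ellipse eye as before, drawn at the canvas center
  stroke(0);
  fill(255);
  ellipse(200 + x1, 200 + y1, 90, 90);
  fill(0);
  noStroke();
  ellipse(200 + x2, 200 + y2, 50, 50);
}

Inside the canvas, map(mx, 0, 400, -170 / 66, 230 / 66) gives exactly the same value as (mx - 170) / 66; the difference is that the ranges are now written out, so the proportion no longer has to be calculated by hand.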
The earrings rotate and change color over time, independently of the mouse. I added some variables tied to the frame count, and that part wasn't too hard.
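I didn't paste the earring code here, but the frameCount idea looks roughly like this. The position, speed, and shape below are made up for this example; it is not my exact code:

function setup() {
  createCanvas(400, 400);
  colorMode(HSB, 360, 100, 100); // HSB makes cycling through hues easy
}

function draw() {
  background(0, 0, 95);
  let hue = frameCount % 360;    // hue cycles through the color wheel over time
  let angle = frameCount * 0.05; // angle keeps growing, so the earring keeps spinning
  push();
  translate(200, 250); // the point where the earring hangs
  rotate(angle);
  fill(hue, 80, 90);
  noStroke();
  rect(-5, 0, 10, 40); // a simple bar-shaped earring swinging around its hook
  pop();
}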
Torico tried the project and gave me her feedback. She really likes the way the vibrating eye (the "nystagmus") turns pink to express lost sanity; it has a ridiculous horror-game feel. She also noticed that the black and the white of the eye shift by different proportions when the eye looks around, and she thinks it is very impressive that I managed to achieve this detail.
I am gradually getting used to using the setup() function to give variables their starting values and the draw() function for the visual expression. The biggest thing I learned is that more well-named variables make code much easier to read. I spent lots of time understanding what each part of the original code does and adding new variables to it, and my partner Torico also asked a lot about my self-portrait's code. Besides, I think I could make the eye movement part cleaner with variables too, since there are a lot of repeated division calculations, as in the map() sketch above.
It would feel a lot more real if a sensor could detect the direction the user is actually looking. Besides, I think it would be an interesting task to try to visualize the chaotic feelings surrounding the user.