Categories
6. Final

Posture Corrector

My final project uses machine learning to create a posture corrector. It works as a monitor for your posture during working sessions or extended periods of sitting or standing. It should be set up at the user’s side, and based on whether the person using it is slouching or not, the sketch interacts with the user and the computer in different ways. Time also plays a role in the sketch’s outputs.

While a correct posture is detected, a game runs on the screen. Every 10 consecutive seconds of good posture increases the user’s score by 1, and the ball is thrown into the hoop. If slouching or bad posture is detected, the game disappears and the score is reset to zero.

After a certain amount of time, an alert is shown on the screen displaying three stretches to be done. If the user remains in a slouching position for a certain amount of time, a new tab opens with an image that takes over the screen. If a correct posture is still not detected after a long time, the tab keeps reopening, eventually causing the browser window to crash. The sketch can also detect when the user stretches and encourages them to hold the stretch for longer.
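To give a sense of the mechanism, here is a minimal sketch of how the slouch check, the score timer, and the escalation could be wired together, assuming ml5.js PoseNet. The shoulder-to-nose threshold, the 60 fps timing, and the alert-image.png file are placeholder assumptions, not the exact values used in my sketch.

let video, poseNet, pose;
let score = 0;
let goodFrames = 0;
let slouchCount = 0;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  poseNet = ml5.poseNet(video);                  // load the PoseNet model
  poseNet.on('pose', results => {
    if (results.length > 0) pose = results[0].pose;
  });
}

function draw() {
  image(video, 0, 0);
  if (!pose) return;

  // Rough slouch test: the nose drops too close to the shoulder line.
  let shoulderY = (pose.leftShoulder.y + pose.rightShoulder.y) / 2;
  let slouching = shoulderY - pose.nose.y < 80;  // threshold is an assumption

  if (slouching) {
    slouchCount++;       // drives the escalating outputs (50, 200, 600, 4500)
    goodFrames = 0;
    score = 0;           // bad posture resets the score and hides the game
  } else {
    goodFrames++;
    if (goodFrames % (10 * 60) === 0) score++;   // ~10 s of good posture at 60 fps
  }

  if (slouchCount > 600) {
    window.open('alert-image.png');   // hypothetical file; the tab keeps reopening
  }
}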

Process sketches 

Images:

Output after 50 detections of slouching:

Output after 200 detections of slouching:

Output after 600 detections of slouching:

Output after 4500 detections of slouching:

Sketch:

https://editor.p5js.org/zme209/sketches/GELPE9NpR

 

Categories
Research Post

Research Post

My first inspiration is Training Poses (installation, 2019) by Sam Lavigne. The audience is invited to copy poses projected from the Microsoft COCO image dataset. I was very interested in how the installation looks deeper into how new technology views and interacts with the human body. In light of our class discussions on bias in machine learning datasets, the fact that this installation lets people embody the dataset images sets the scene for a very important conversation.
The second inspiration for my final project is CV Dazzle, a project by Adam Harvey that experiments with design and computer vision algorithms to create a set of “looks” that can protect people from facial recognition systems. This project pushed me to think about the intricacies of our interactions with computers and how much they really see.
My third inspiration is my personal experience with back problems, which are also a very common issue. During extended periods of working, sitting down, or using electronics, we tend to neglect proper posture and back health. During my personal research, I have come across a few stretches and gathered background knowledge that I will use in my final project.

 

References:

https://lav.io/projects/training-poses-installation/
https://theshed.org/program/80-open-call-sam-lavigne
https://www.stuk.be/en/program/training-poses
https://ahprojects.com/cvdazzle

Categories
Uncategorized

Classify This – Baby Monitor

My project is a baby monitor that can detect whether a child is crying or coughing while being monitored remotely. The parent has four response options when the device detects crying or coughing: “Play crib mobile”, “Swing rocking bed”, “Open video”, and “Call maid”. This project is a representation of what could become a future working model (possibly using IoT), where the different options would be linked to different physical outcomes. I used machine learning and Teachable Machine to train the sketch to detect different baby noises.

When the sketch first starts, the screen is green and reads “I’m listening…”. Once the code detects audio input, the screen turns red and displays whether the baby is coughing or crying. At the top, there are different options for the parents to react with. Once a choice is detected, a confirmation message is displayed at the bottom of the screen.
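Below is a minimal sketch of how the detection and screen states could work, assuming an ml5.js soundClassifier loaded from a Teachable Machine audio model. The model URL and the class labels “Crying” and “Coughing” are placeholders, not my actual trained model.

let classifier;
let label = '';
// Placeholder Teachable Machine model URL — the real sketch uses its own trained model.
const modelURL = 'https://teachablemachine.withgoogle.com/models/XXXXXX/model.json';

function preload() {
  classifier = ml5.soundClassifier(modelURL);
}

function setup() {
  createCanvas(400, 300);
  textSize(20);
  classifier.classify(gotResult);      // start listening to the microphone
}

function draw() {
  if (label === 'Crying' || label === 'Coughing') {
    background(255, 0, 0);             // red alert screen
    text('The baby is ' + label.toLowerCase(), 20, height / 2);
    // the four parent options and the confirmation message would be drawn here
  } else {
    background(0, 255, 0);             // green idle screen
    text("I'm listening…", 20, height / 2);
  }
}

function gotResult(error, results) {
  if (error) return console.error(error);
  label = results[0].label;            // most confident class from the model
}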



Voice recordings in case you need to test it:

 

https://editor.p5js.org/zme209/sketches/FQHZT-Tiv

Categories
Uncategorized

Research Post

One thing that struck me was “FindFace”, a program where you can take a picture of someone and in return get their “profile”, a collection of pictures of them from the internet. This could be used by anyone and on anyone. It shows how much privacy is being taken away with the increased usage and development of social media and technology.

 

Joy Buolamwini discusses algorithmic bias in her TED Talk. Facial recognition software generally works by being given a “training set” that tells the machine what is and is not a face, which gives the machine the ability to detect faces. However, the training sets are not always inclusive of all people. I especially liked how she framed this issue by emphasizing that more inclusive training sets can and must be made.

 

Links: 

Categories
4. The Clock

The Clock

The Clock Project: Newton’s Clock 

My project indicates time using a set of six different pendulums. The speed of each pendulum depends on time, but the pendulums vary from each other: as seen in the picture attached below, they are driven by milliseconds / seconds / minutes / hours / days / months. The point of the pendulums is to indicate when the next second/minute/hour/day/month/year will come. This can be observed from the speed of each pendulum: the faster it swings, the closer it is to the next second/minute/hour/day/month/year.
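As a rough illustration of the idea (not the exact code), here is a simplified sketch with three pendulums whose swing speed grows as the next second, minute, or hour approaches; the speed range and the drawing itself are assumptions.

function setup() {
  createCanvas(600, 300);
}

function draw() {
  background(240);
  // Fraction of the current unit that has already passed (0 → 1).
  let fractions = [
    (millis() % 1000) / 1000,   // toward the next second
    second() / 60,              // toward the next minute
    minute() / 60               // toward the next hour
  ];
  for (let i = 0; i < fractions.length; i++) {
    // The closer we are to the next unit, the faster the pendulum swings.
    let speed = map(fractions[i], 0, 1, 0.5, 10);
    let angle = sin(frameCount * 0.02 * speed) * PI / 4;
    push();
    translate(100 + i * 200, 50);
    rotate(angle);
    line(0, 0, 0, 150);
    circle(0, 150, 30);
    pop();
  }
}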

 

Code: https://editor.p5js.org/zme209/sketches/y2_KWKQTM

 

 

Categories
Research Post

Research Post

Thinking Machine – Martin Wattenberg

 

Thinking Machine (2003) is an artificial intelligence program, made using JavaScript, that creates an artwork representing a computer’s thought process while playing chess against someone. It was created by Martin Wattenberg, a designer who focuses on data visualization. The computer’s thought process presents itself as a map of thousands of possible moves, from which the best is chosen.

I admire that the project causes the viewer to think about the nature of thought. It allows them to consider the humanness of machines and to better understand the way machines process information. It is also a slow-paced and focused game, as opposed to most popular computer games.

While playing, you come to notice that when the machine thinks, intertwined curves forming a kind of network appear on top of the chess board. The colors of the paths represent different things: green is for moves by the white pieces, orange for the black pieces. The brightness of the colors also represents how beneficial certain moves are for the white pieces.

Links:

 

http://bewitched.com/chess.html 

https://www.popularmechanics.com/technology/design/a21249/play-a-chess-computer-that-shows-you-all-its-moves/

 

 

 

Categories
3. Generative Thing

Digital Chip Chop – Ziad Elkammah

The inspiration for my project came from a childhood game that we all played many times and most of us are familiar with: chip-chop. The original game can be seen in the following images, as well as my initial sketch.


I wanted to create a game with the same idea of choosing one of four options that lead you to four different games. I also wanted to create something interactive that has a sense of satisfaction, or a trippy effect, when watching it. I wanted the audience to take control of the art. Therefore, I created a canvas that gives you an empty screen with a pop-up stating: “Start Here: Press anywhere in the empty space”.

When the user presses in the empty space, the four options pop up right at the mouse location, as seen in the figure:

The first option:

The second option:

The third option:

The fourth option:

The four different “modes” have different generative possibilities within them, such as color, shape, size, and orientation. These possibilities are generated using random variables with specific restrictions that shape each mode. The modes are controlled by mouse clicks on specific buttons I created for choosing the settings.
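A minimal sketch of that mechanism (not the full game) is below, with just two example modes. It shows the option buttons popping up at the mouse location and the “random values with restrictions” idea; the actual colors, shapes, and ranges in my sketches differ.

let mode = -1;
let buttons = [];

function setup() {
  createCanvas(600, 600);
  background(255);
  textAlign(CENTER, CENTER);
  text('Start Here: Press anywhere in the empty space', width / 2, height / 2);
}

function draw() {
  if (mode === 0) {
    // Example mode 1: restricted to warm colors and small circles
    fill(random(150, 255), random(0, 100), random(0, 100), 150);
    circle(random(width), random(height), random(5, 30));
  } else if (mode === 1) {
    // Example mode 2: restricted to cool colors and larger rectangles
    fill(random(0, 100), random(0, 100), random(150, 255), 150);
    rect(random(width), random(height), random(20, 80), random(20, 80));
  }
}

function mousePressed() {
  if (buttons.length === 0) {
    // First click: pop the option buttons right where the mouse is.
    background(255);
    for (let i = 0; i < 4; i++) {
      let b = { x: mouseX + (i - 1.5) * 60, y: mouseY, r: 25, id: i };
      buttons.push(b);
      fill(255);
      circle(b.x, b.y, b.r * 2);
      fill(0);
      text(i + 1, b.x, b.y);
    }
  } else {
    // Later clicks: pick a mode if one of the buttons was hit.
    for (let b of buttons) {
      if (dist(mouseX, mouseY, b.x, b.y) < b.r) mode = b.id;
    }
  }
}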

I submitted two sketches below, as I was not sure whether it was a requirement for the objects to stay visible and not disappear after the mouse is released. If that is a requirement, please use the second version.

1: https://editor.p5js.org/zme209/sketches/arlnzyLfw

2: https://editor.p5js.org/zme209/sketches/f8mwnFEPa

 

 

Categories
Research Post

Research Post

In her work, Lauren McCarthy investigates the effect of surveillance, automation, and network culture on our social relationships. She lives and works in Los Angeles with her husband and two children. She is the creator of p5.js, an open-source JavaScript library for learning creative expression through code that has gained more than 1.5 million users since its release in 2013. As Co-Director of the Processing Foundation, she works to increase software literacy in the visual arts and visual literacy in technology-related professions, with the aim of making these areas more accessible to individuals from all backgrounds. She is an Assistant Professor of Design Media Arts at UCLA, and her artwork has been shown in a number of different places across the globe.

The project, Lauren, is one of the most intriguing ideas I’ve seen recently, and it truly caught my eye. My first reaction was that it was something I could never imagine existing, and even if it did exist, I never imagined people would be eager to use it. In the concept, the artist wishes to be a human equivalent of Amazon Alexa, the smart home intelligence system that people use in their own residences, and to monitor them 24/7 for a week. The project begins with the installation of a networked smart device system that incorporates cameras, microphones, switches, door locks, faucets, and other electronic devices. Her job then consists of remotely monitoring and managing the individual’s home throughout the day. She aims to be more advanced than artificial intelligence, since as a person she is able to understand and predict their desires. The resulting link bridges the emotional gap between the two, making the experience much better.


Categories
2. Lost and Found

Lost Item

My partner’s lost item is a “tiny chunky cartoonish elephant rubbery greenish blue”. These were her exact words when describing the item.

Reflection: While working, my main focus was to keep looking at the description and trying to make my sketch match it as closely as I could. I really enjoyed this exercise because I started to get used to the programming language. I also started exploring new ways to implement my code, which was really interesting.

https://editor.p5js.org/zme209/sketches/Cj-E411zW

Categories
1. Coding from life

Ziad Elkammah Assignment 0

https://editor.p5js.org/zme209/sketches/V5QDCH-Cn

I really enjoyed working on this project. My main issue was determining the correct x and y coordinates.