MLNI Week 1: Zero UI research – Alex Wang

Reading reflection:

Computer vision is a game-changing technology that has already entered our daily lives: applications like scanning QR codes or recognizing license plates replace human labor at parking lots. Just recently, our campus and dorms also started using facial recognition as an alternative to scanning student ID cards. Aside from these practical impacts, computer vision also has many applications in the creation of art. The power to recognize objects lets the computer perform specific operations based on its understanding of an image, as opposed to traditional image manipulation, where the computer only reads pixel values without understanding what is being processed. I think the most obvious example would be applications that manipulate a human face, after recognizing that it is a face, while leaving the rest of the image alone.

Zero UI project research:

After some research on recent developments in Zero UI projects, I came across a project by Google's Advanced Technology and Projects (ATAP) team named Project Soli. Project Soli is a chip that uses miniature radar to sense hand gestures, and it is exactly what I would consider the future of Zero UI. Users can control their devices without physical contact, and all their control gestures feel natural, as if they were manipulating a physical device.

Technology:

I believe that the chip collects radar information about hand movements and then uses software to interpret what each gesture means. This interpretation step could definitely benefit from computer vision/machine learning, since it requires the computer to predict which gesture the user is trying to input.
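As a rough illustration of that interpretation step (this is only my guess at how such a pipeline might be structured, not Soli's actual algorithm), a classifier can reduce each hand movement to a few features and pick the nearest known gesture. All names and numbers below are made up:

```java
// Hypothetical sketch: classifying a hand movement from simple motion
// features (not Soli's real pipeline). Each known gesture is a centroid
// in feature space; an input is assigned to the nearest centroid.
public class GestureGuess {
    // features: {average speed, dominant direction in radians}
    static final double[][] CENTROIDS = {
        {0.2, 0.0},   // 0 = tap (slow, no sweep)
        {1.5, 0.0},   // 1 = swipe right
        {1.5, 3.14},  // 2 = swipe left
    };

    static int classify(double speed, double direction) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < CENTROIDS.length; i++) {
            double ds = speed - CENTROIDS[i][0];
            double dd = direction - CENTROIDS[i][1];
            double dist = ds * ds + dd * dd; // squared Euclidean distance
            if (dist < bestDist) { bestDist = dist; best = i; }
        }
        return best;
    }
}
```

A real radar pipeline would use far richer features and a trained model, but the predict-from-measurements structure is the same.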

Current application:

Project Soli started around 2014, but just recently Google announced plans to release its newest phone model, the "Google Pixel 4". This is one of the most anticipated phones of 2019, as it is planned to incorporate the Soli chip. There are many leaks and rumors online, building a lot of anticipation before the release, which is expected in September/October 2019, really just around the corner.

Connection to Zero UI and potential future applications:

I think this technology could be very interesting and useful, as it provides a very natural way to interact with machines, just like the ideas behind Zero UI. It could also be used in creative ways, such as new controls for gaming or new tools for art creation. This new mode of interaction alone opens the door to endless possibilities.

Videos:

Sources:

https://atap.google.com/soli/

https://www.techradar.com/news/google-pixel-4

https://www.xda-developers.com/google-pixel-4-motion-sense-gestures-leak/

Final Project: Pacman Shooter – Alex Wang – Eric Parren

People of my generation grew up playing all kinds of video games, but most of the time the controls and feel of a game lack the ability to create an immersive experience. Controllers like the mouse and keyboard, or joystick and buttons, are convenient and accurate, but they do not make the player feel engaged with the gameplay. This is why my partner Henry and I decided to create a retro-arcade-themed shooter game, along with an original controller that emulates the real-life action of shooting.

early version of the game:

I mostly worked on coding the game, while Henry worked on creating the physical components of the project. I started by creating an object class called Target, which was used to create all the Pac-Man ghosts for the players to shoot. OOP is a great way to organize my code, since a game like this requires a lot of features to feel complete, and organizing it into objects makes the code more approachable and easier to debug. Instead of using inheritance, I simply added an extra parameter to the Target class indicating which kind of target it will be. During the design process, I decided to create different kinds of targets to add more variety to the gameplay: different targets have different speeds, sizes, point values, and sprite assets. I also kept the color theme and overall graphics consistent with the video game theme, taking images and music from the original Pac-Man and Tetris to give the user a more complete retro arcade experience. After implementing all these features, I spent most of my time asking others to help with user testing, as well as playing the game myself, to make sure the gameplay feels comfortable. Aside from the original two-player competitive gameplay, I also added a high score page to give users a sense of achievement along with the feeling of playing arcade games.
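The type-parameter approach described above can be sketched like this; the stats are placeholders for illustration, not the game's real values:

```java
// Sketch of using a type id instead of inheritance: one Target class
// whose constructor switches on the type to pick per-kind stats.
// The specific numbers here are made up for illustration.
public class Target {
    int type, speed, size, points;

    Target(int type) {
        this.type = type;
        switch (type) {
            case 0: speed = 2; size = 80;  points = 10; break; // regular ghost
            case 1: speed = 4; size = 50;  points = 25; break; // small fast ghost
            case 2: speed = 1; size = 120; points = 5;  break; // big slow ghost
            default: throw new IllegalArgumentException("unknown type " + type);
        }
    }
}
```

For a handful of target kinds this keeps all the tuning in one place; inheritance would pay off only if the kinds needed genuinely different behavior, not just different numbers.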

All my gameplay, visual, and sound design choices were made specifically for the coherence and consistency of the game's theme. I used pixel game fonts to create a more 8-bit feel, and even used an oscillator synth to create the video game sound effects. I also photoshopped all my visuals to make the colors a recurring theme: players can tell the two sides apart by a clear blue/red difference, while everything neutral is purple.

The physical fabrication was the hardest part of the project; designing a controller with the resources we had, while keeping it natural to interact with, required a lot of planning. We decided to use potentiometers as the sensors for this purpose, since they are very simple to work with, and then 3D printed shells for the potentiometers to live in. These shells acted as a medium for hot-gluing the body of the controller to the potentiometer. The final result was nice and sturdy and served its purpose for this interactive gameplay.

Testing potentiometers:

Prototype controller for user testing:

During user testing we received a lot of helpful feedback on how to improve the project, such as adding an intro to the game and adding protection to the controller. Multiple users interacted with the controller differently than we intended, trying to turn it sideways as if steering a wheel. To address this, we designed an extra attachment for the controller; it not only made the design clearer and sturdier, but also acted as a physical stop for the controller when it is not being used.

Aside from user testing, we also ran into many problems while developing the project. For example, the sound library in Processing gets overloaded if a file is constantly retriggered, leaving the audio distorted and disturbing. We fixed this by adding checks in the code so that the sound file does not play over the previous one. Still wanting the sound of constant firing, I edited the original sound file so that it is cut short but has a fade effect.
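The overlap fix described above boils down to a cooldown check: only retrigger the clip if the previous play has had time to finish. A minimal sketch in plain Java, with a placeholder clip length rather than our real file's duration:

```java
// Gate that prevents a sound from being retriggered before the previous
// play finishes, tracked with a timestamp. Clip length is a placeholder.
public class SoundGate {
    long lastPlayedMs;
    final long clipLengthMs;

    SoundGate(long clipLengthMs) {
        this.clipLengthMs = clipLengthMs;
        this.lastPlayedMs = -clipLengthMs; // so the very first call passes
    }

    // Returns true if the sound may be (re)started at time nowMs.
    boolean tryPlay(long nowMs) {
        if (nowMs - lastPlayedMs < clipLengthMs) {
            return false; // previous play still in progress; skip this trigger
        }
        lastPlayedMs = nowMs;
        return true;
    }
}
```

In the Processing sketch the same check would wrap the `play()` call inside the draw/fire loop, using `millis()` as the clock.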

editing sound:

improving crosshair:

The initial goal of the project was to create a game with natural interaction, which I believe is the best form of interaction for an immersive user experience. We achieved this by creating controllers that operate like a cannon, tilting left and right, up and down. With the improvements we made after user testing, most users at the final presentation and end-of-semester show seemed to use our project the way we intended. I am personally very proud of this achievement, as I grew up playing arcade games and always wanted to create my own. The value behind my project is to bring joy to the user, just like the original classics did when I was young. Hopefully it can also bring a sense of nostalgia to people of my generation. There were a lot of young kids at the IMA show, and seeing them enjoy the game I created was a priceless reward; it gives me the motivation to create more projects that can benefit the world.

attribution:

music: Ghost & Kozmos – Tetris Theme

background image: Rafael Covo

Sprites: Pacman

Font: https://onextrapixel.com/25-free-pixel-perfect-fonts-for-8-bit-designs/

Recitation 11: Workshops (Alex Wang)

In this week's recitation, we were split into different workshops to work on whatever we thought would be most helpful for our final projects. Since I am planning on making a game, I decided to join the object-oriented programming workshop. We were then tasked with writing our own object using the skills we learned from the workshop.

I decided to create a target class for my shooter game, and started off by setting the fields to x and y position. I then added display and move functions to the class: one for drawing the object onto the screen, and the other for updating its position. I also downloaded a sprite image for the target to make it look like a game character.

I also created an ArrayList to hold many of these targets, but rather than using a for loop to create them all at once, I add a new target to the list every few seconds. The constructor of a new target randomizes its y position, speed, and direction, and targets reappear on the other side of the screen if they walk off it. The last thing I did was to make them disappear when they are clicked: I check whether the mouse position is within the bounds of the object's size, and delete the target from the ArrayList if it is. I also added a crosshair image to make it look more like a shooter game.
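The click-removal step can be sketched in plain Java; iterating backwards means removing an element never skips the next one. The `{x, y, size}` int-array representation and the corner-anchored bounds (matching how `image()` draws by default) are simplifications of the actual class:

```java
import java.util.ArrayList;

// Sketch of removing clicked targets from an ArrayList. Each target is
// simplified to {x, y, size}, drawn from its top-left corner.
public class HitTest {
    // true if the mouse lands inside the target's bounding square
    static boolean hit(int mx, int my, int x, int y, int size) {
        return mx >= x && mx <= x + size && my >= y && my <= y + size;
    }

    static void removeClicked(ArrayList<int[]> targets, int mx, int my) {
        // walk backwards so remove(i) doesn't shift unvisited elements
        for (int i = targets.size() - 1; i >= 0; i--) {
            int[] t = targets.get(i);
            if (hit(mx, my, t[0], t[1], t[2])) {
                targets.remove(i);
            }
        }
    }
}
```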

I learned a lot this recitation, and I really like how it is structured so that we can brush up on the specific subject we need to work on. It is also directly related to our final project, which gave me great starting code to work from, and I got to figure out a lot of OOP syntax problems with the help of the fellows.

Code below:

class target {
  int x;
  int y;
  int type;
  int speed;
  int size;
  PImage img;

  // spawn a ghost with a random position, speed, and size
  target(int _type) {
    type = _type;
    y = int(random(550)); // random height within the sketch window
    img = loadImage("pacman.png");
    speed = int(random(-4, 3));
    size = int(random(40, 100));
    if (speed != -1) {
      speed += 1; // shift the range so speed is never 0
    }
    // start just off screen on the side the target will enter from
    if (speed > 0) {
      x = -20;
    } else {
      x = width + 20;
    }
  }

  // place a target at a fixed position with a different sprite
  target(int _x, int _y, int _type) {
    x = _x;
    y = _y;
    type = _type;
    size = 60; // default size so the sprite is visible (this constructor sets no random size)
    img = loadImage("bow1.png");
  }

  void display() {
    image(img, x, y, size, size);
  }

  void move() {
    x += speed;
    // wrap around to the other side when the target leaves the screen
    if (x - (size / 2) >= width && speed > 0) {
      x = -20;
    } else if (x + size <= 0 && speed < 0) {
      x = width + 20;
    }
  }
}

Recitation 10: Media Controller(Alex Wang)

In this week's recitation, we needed to create a system that manipulates images or videos through input from the Arduino. I started by building off the template code for transferring a single value from Arduino to Processing. I created a very simple circuit with only an Arduino and a potentiometer, then mapped and wrote the potentiometer value to Processing, where I can use it to manipulate media. I chose the potentiometer because it is the simplest form of analog input, and I want my media manipulations to be smooth and analog, as opposed to digital buttons.
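The mapping step corresponds to Processing's built-in map() function, which rescales the Arduino's 10-bit analog reading (0 to 1023) into whatever range the effect needs. A plain-Java equivalent, for illustration:

```java
// Reimplementation of Processing's map(): linearly rescale a value from
// one range into another. Used here to turn a 0-1023 potentiometer
// reading into an effect parameter.
public class PotMap {
    static float map(float v, float inLo, float inHi, float outLo, float outHi) {
        return outLo + (v - inLo) * (outHi - outLo) / (inHi - inLo);
    }
}
```

For example, `map(reading, 0, 1023, 5, 50)` could turn the raw reading into a pixelation cell size between 5 and 50 pixels.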

I then wrote code that takes the camera footage and pixelates it by drawing ellipses of the sampled RGB color values.

After making the pixelation effect, I changed the value that controls the pixelation to a variable driven by the potentiometer, which allows real-time media control.
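The pixelation loop amounts to grid sampling, where the potentiometer-controlled variable is the cell size. This plain-Java sketch just collects the dot positions and colors that the Processing ellipse() calls would draw; the names are illustrative:

```java
import java.util.ArrayList;

// Grid-sampling sketch of the pixelation effect: step across the frame
// in cells of `cell` pixels, sample one color per cell, and record the
// (x, y, color) of the dot to draw there.
public class Pixelate {
    // pixels: row-major ARGB frame of dimensions w x h
    static ArrayList<int[]> dots(int[] pixels, int w, int h, int cell) {
        ArrayList<int[]> out = new ArrayList<>();
        for (int y = cell / 2; y < h; y += cell) {
            for (int x = cell / 2; x < w; x += cell) {
                int c = pixels[y * w + x]; // sample the cell centre
                out.add(new int[]{x, y, c});
            }
        }
        return out;
    }
}
```

A larger cell value means fewer, bigger dots, which is why turning the potentiometer changes how pixelated the footage looks.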

After I got the potentiometer working, I added more functionality on top of the existing pixelation: first a blur filter, then a tint. I tweaked the numbers so that the transitions look smoother, and I also made sure that filter() is called after the image() function, while tint() goes before it.

The final product looks like the video above. I really like it because the pixelation makes me look like a retro arcade game character.

I think this week's recitation was pretty useful, as it sets me up to do image manipulation in my final project. I was not planning on adding videos, but now that I am more familiar with these techniques, I might consider adding extra visual effects to spice up my project.

Recitation 9:Final Project Process(Alex Wang)

In this recitation we were put into groups of four to present our final project proposals and give advice on others' proposals.

project 1:

One of my group members proposed an interactive remake of the Mona Lisa. She wanted to recreate it digitally and make it interactive, with buttons that users can press to manipulate the original image into whatever mood the user desires.

I really like her idea of giving classic art interactivity, and I advised her to use analog inputs instead of digital buttons. Rather than having the image change instantly at the press of a button, she could use analog sensors to make the image change gradually.

Project 2:

This project was inspired by astonishment at how much data consumption happens in someone's life. She wants to make users aware of the time they spend consuming data by playing GIFs of different visual intensity to mark their time spent, and eventually setting off a car to alert users that they have been using the computer for too long.

I can relate to spending too much time on the computer, and I appreciate projects of this type that shorten my computer usage. I advised her to track non-productive computer usage, such as entertainment data consumption, rather than simply setting a timer that goes off every thirty minutes, since the user could be spending their time on work or other appropriate uses. My other piece of advice was that the car did not make much sense to me; I feel an alarm or buzzer could do the same job of getting the user's attention.

Project 3:

For this project, the designer wanted to create a drawing platform with virtual controls, as if the user were drawing in the air. She also wants to include functionality such as letting the user draw shapes rather than just lines.

This is probably my favorite idea, as I am also attempting to make something virtual. I advised her to add more complex functionality, such as spawning random colors, shapes, or even waves depending on the user's pen position.

My project:

I wanted to create a virtual archery game with physical controls using a glove, but have the Processing game respond as if the user were actually shooting arrows.

I received positive feedback, but also a lot of concerns about how I would implement the project with the tools available to me, and I am personally also slightly worried about how to make the controls possible. I came up with the idea of making "fake 3D" objects by manipulating 2D images so that they look like they are moving in three dimensions. I also got valuable advice from Rudi: he told me this could be done without advanced sensors like a gyroscope, since a Processing library for picking up certain colors could serve as the mechanism for tracking user movement.

I learned a lot from this recitation. I am personally creating an interactive video game solely for entertainment, while the others in my group are all pursuing something with artistic value and real-world application. Even though I don't think I will change the plans for my project, I will definitely appreciate others' projects more now that I have insight into the meanings behind them.