Midterm Writing Assignment

In my Week 07 documentation I rewrote the background and inspiration sections for clarity, so I've posted those Week 07 versions here along with the social impact, artistic influence and future plans.


Is it possible to teach an AI design? This is a question at the forefront of machine learning exploration and a fascinating topic that could change how designers work and how companies operate. For my midterm project, I wanted to examine ways to create an algorithm that could design posters based on a specific style and user inputs. In the end, I believe that my proposed idea is likely possible, but it would require time, extensive data and skills beyond my range.

Background

I wanted to do this project because of my interest in design, and in particular the Swiss Minimalist style, which I chose to focus on. Swiss Minimalism, particularly its poster style, was developed in Switzerland around WWII and made famous by designer and author Josef Müller-Brockmann, editor of the magazine Neue Grafik. Müller-Brockmann's posters used bold colors, a grid layout and simple, clean typography to create striking compositions. His style changed graphic design, ushering in an appreciation for layout and typography, and it is time-tested and still favored today by artists and institutions.

Another source of inspiration is the class Programming Design Systems, which I'm taking this semester with Cici Liu. Before this class, I mostly used Adobe Illustrator, Photoshop and InDesign to create designs, which often involved a lot of manual work, i.e. physically moving things around the canvas and placing them myself to see what looks good. I love making posters, whether for a party I'm throwing, as part of my freelance work or just for fun, but I often find myself saving countless versions of the same poster with slightly different positioning and elements. Through the class, particularly the poster-designing assignment, I found it fascinating to see how code could be used to simplify the design process, and how an aesthetic poster could be broken down into coded data. We work in P5, which can also be used to randomize posters: by setting limits and creating arrays of data, the code itself can generate many different versions of a poster within set ranges.

I used P5 to recreate a few Brockmann posters and realized the Swiss Minimalist poster style could be entirely recreated with P5: the posters usually involve a few headings of text (sometimes rotated or overlapping), simple shapes, curves and lines, and a grid layout, all elements that can be translated into code.

Inspiration

As mentioned in my project proposal, I was inspired by Alibaba's LuBan AI software, which is used to generate banners and ads for products on Taobao and Tmall. Most of the pop-up ads in the app are now generated by the software, which can produce 8,000 banners per second. This AI was trained with a more complex process: it was taught design elements separately, and then teams of designers used reinforcement training to steer it toward "good" designs. The AI was trained on copy, product shots, backgrounds, logos and decorating artifacts, all the elements needed in a simple ad, from a huge dataset of these individual features. The project took years and likely thousands of programmers and designers, but it shows that a machine learning algorithm can be taught design elements and produce designs indistinguishable from those of a designer.

Process

After bringing Aven my idea of teaching a machine learning algorithm poster design, we brainstormed possible ways of doing the project. Inspired by the Programming Design Systems class, I was interested in using P5 as the medium in which the algorithm would "design" the posters, since that way each poster would be represented as data. This means I could use P5 "data" to teach the program, and it could build an algorithm based on that data to design original posters following Swiss Minimalist design principles. My goal for the midterm was to find an effective way of creating and gathering this data.

Aven said I would need at least 1,000 posters, so I started my research, keeping only posters that could be turned into data (eliminating those with images and design elements that couldn't easily be recreated with code), and ended up saving several hundred minimalist posters.

I began to notice that many of the posters had three main types of text, which I began to call Heading1, Heading2 and Heading3. Heading1 was usually bold and almost decorative, often very large and distinctive on the poster. Heading2 and Heading3 usually used smaller type and creative placement, and contained the actual information for the poster. The posters often used one or two types of shapes, repeated and in various colors. After observing these posters, I created an Excel spreadsheet for the data I wanted to gather and broke it down into a few categories that I thought could represent most of the posters:

Heading1 placement (location on the canvas), text size, text color and text rotation; the same data points for Heading2 and Heading3; Shape1 placement, size, fill, rotation and stroke, continued for the rest of the shapes present on the poster; and background fill. I was planning to input the data as P5 code.
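
To make the encoding concrete, here is a minimal sketch of what one poster might look like as data and how P5 could redraw it. All field names and values are hypothetical, just to illustrate the idea:

// A minimal sketch of one poster encoded as data and redrawn in P5.
// All field names and values here are hypothetical.
let poster = {
  background: [0, 0, 95],                  // HSB background fill
  headings: [
    { text: "Heading1", x: 40, y: 140, size: 96, color: [0, 0, 10], rotation: 0 },
    { text: "Heading2", x: 40, y: 520, size: 24, color: [0, 0, 10], rotation: 0 },
    { text: "Heading3", x: 40, y: 560, size: 16, color: [0, 0, 10], rotation: 0 }
  ],
  shapes: [
    { x: 300, y: 250, w: 180, h: 180, fill: [10, 90, 90], rotation: 0 }
  ]
};

function setup() {
  createCanvas(420, 594);                  // one normalized poster size
  colorMode(HSB);
  noLoop();
}

function draw() {
  background(...poster.background);
  noStroke();
  for (let s of poster.shapes) {
    fill(...s.fill);
    ellipse(s.x, s.y, s.w, s.h);
  }
  for (let h of poster.headings) {
    push();
    translate(h.x, h.y);
    rotate(radians(h.rotation));
    fill(...h.color);
    textSize(h.size);
    text(h.text, 0, 0);
    pop();
  }
}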

I then brainstormed the user interface, which I wanted to be simple but still allow users to create their own personalized posters. While my original idea involved different "styles," I decided to simplify that to "black and white" or "color." I wanted to allow users to input the three headings themselves, perhaps with a guideline indicating that Heading1 should involve little text and Heading3 should contain the main information for the poster.

Potential solutions and challenges

The first major challenge I encountered was gathering the data. While I knew what data I wanted to pull from the posters and had the posters to use, I quickly realized that placement, rotation and fill weren't straightforward to extract. To get this data, I would have to "redesign" each poster in P5. The posters didn't all share the same dimensions, so I would have to normalize them to one canvas size to create consistent data. I would also have to use online color pickers to find the colors inside the posters, and then pick a consistent color system, such as HSB in P5, to represent them. Rotation also presented a challenge, since finding rotation and placement would take time and would essentially mean "eyeballing" each poster and trying to match the coded version to the real one. Recreating curved lines with Bezier curves would be another major challenge, as would any irregular shapes that would involve more coded data than I could easily input. I quickly realized that turning 1,000 posters into P5 data was a huge task that would take many hours and could produce imperfect results.

The second major challenge was how to input this data into the machine learning algorithm. I met with Aven again to discuss this, and he suggested JSON: I could export my Excel spreadsheet of data to JSON and then import the JSON into the model to train it.
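
As a rough idea of what that step could look like, assuming the spreadsheet is exported from Excel as a CSV with a header row (the file name and column names below are made up), P5 itself can load the table and save it out as JSON:

// Sketch of converting an exported CSV of poster data into JSON with P5.
// "posters.csv" and its column names are assumptions for illustration.
let table;

function preload() {
  table = loadTable("posters.csv", "csv", "header");
}

function setup() {
  noCanvas();
  let posters = [];
  for (let r = 0; r < table.getRowCount(); r++) {
    posters.push({
      h1Text: table.getString(r, "heading1_text"),
      h1X: table.getNum(r, "heading1_x"),
      h1Y: table.getNum(r, "heading1_y"),
      h1Size: table.getNum(r, "heading1_size"),
      bgFill: table.getString(r, "background_fill")
    });
  }
  saveJSON(posters, "posters.json");   // downloads the JSON for training
}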

Also, since I wouldn’t be using a simple pre-trained model, I would need to find a program that could read and learn Swiss Minimalist principles from the data, and then be able to generate new P5 data and create a sketch for a new poster. This seemed very challenging as we discussed how we didn’t know whether such a program already existed, so I may get through all the steps of data gathering to realize that I wouldn’t be able to train it at all.

To create the user interface, I would also have to find a way to feed the user's headings into the P5 sketch while letting the algorithm generate the rest of the design based on the data, and then display that sketch. This also posed a challenge: while I have experience with P5, linking all these systems together to create the project I wanted would involve many steps I wasn't familiar with.

After reaching the conclusion that turning thousands of posters into data would be beyond my ability this semester, I started looking into other ways of finishing the project, such as human-in-the-loop training. I knew from my experience with P5 that I could use several different sketches to produce endless posters in the Swiss Minimalist style by randomizing certain elements within ranges. With this in mind, I was curious whether I could feed an algorithm these sketches and have it produce its own random designs, which I could then evaluate in order to teach it to produce more of the "good" designs. This is very similar to LuBan's reinforcement process, where teams of designers evaluated the AI's designs to teach it the finer points of design.

While Swiss Minimalism famously uses grid systems, which generally follow "rules," many of the posters also "break" these rules to create interesting and unexpected designs. This is one aspect I could teach the algorithm through human-in-the-loop training: when to "break" the rules and when to follow them to create aesthetic results. One challenge, however, is that most of the human-in-the-loop resources I came across start with a high-quality training dataset, and I couldn't find references on whether I could have the algorithm randomize posters from different ranges of data and elements to create its own dataset. I briefly considered gathering data from different iterations of P5 sketches to create a database, but if all the sketches use the same code to create different posters, this would also prove to be a huge challenge.
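
For reference, this is roughly what I mean by randomizing elements within ranges: a minimal generative sketch where the ranges and palette are arbitrary examples, not values taken from real posters:

// Sketch: generate a different Swiss-Minimalist-style layout on every click.
// Ranges and palette are arbitrary examples.
let palette;

function setup() {
  createCanvas(420, 594);
  colorMode(HSB);
  palette = [[10, 90, 90], [220, 80, 80], [0, 0, 10]];
  noLoop();
}

function draw() {
  background(0, 0, 95);
  // grid of columns, randomly skipping some to "break" the grid occasionally
  let cols = int(random(3, 6));
  let colW = width / cols;
  for (let i = 0; i < cols; i++) {
    if (random() < 0.3) continue;               // rule-breaking step
    fill(random(palette));
    noStroke();
    rect(i * colW, random(height * 0.2, height * 0.6), colW, random(80, 250));
  }
  fill(0, 0, 10);
  textSize(random(60, 110));
  push();
  translate(random(30, 60), random(100, 200));
  rotate(random() < 0.2 ? -HALF_PI : 0);        // occasional rotated heading
  text("Heading1", 0, 0);
  pop();
}

function mousePressed() {
  redraw();      // each click produces a new random poster
}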

With this in mind, I concluded that while this project may be possible if a team can turn thousands of Swiss Minimalist posters into P5 data and then teach an algorithm to use that data to generate its own posters in P5, it would take time, resources and knowledge out of my reach. Also, if something went wrong at any step, or if the algorithm I needed doesn't actually exist, the project could easily fail to produce the results I was looking for. This project was a fascinating exploration into the world of building databases and training models, and while in the end I wasn't able to create the project I wanted, I have a much better grasp of the realistic challenges of data, training and algorithms, which I can take with me into my next, more achievable, project.

Social impact

I think this project could have a large social impact, as AI is already changing how designers work in many companies, and the future of design almost certainly includes collaborating with AI to streamline the design process. In terms of poster design specifically, I could see online design services like Canva incorporating AI programs that allow users with limited design experience to simply input their poster's headings and style and immediately generate a well-designed, unique poster. This would change the accessibility of well-designed materials, as well as the speed, since anyone would be able to enter text and generate simple posters immediately.

Artistic influence

If an AI can learn layout and basic design principles, this could have many artistic uses, since many pieces of visual art rely on visual principles such as the golden ratio and placement. Using the program I want to create, the algorithm could be trained to produce abstract visual art by learning from P5 sketches with abstract shapes, patterns and colors and producing its own. It would be fascinating to see how AI produces art in different abstract styles and how different styles can be boiled down into algorithms. For example, many of Wassily Kandinsky's abstract paintings can be broken down into shapes in P5, making it possible to turn them into data and train an algorithm on the paintings themselves. While one hurdle may be that there are not enough paintings by a single artist to create a reliable dataset, perhaps a dataset could be built around a particular style of painting instead.

Future plans

In the future I would love to expand the styles from Swiss Minimalism into Bauhaus, Vaporwave and other artistic styles with defined sets of design principles. To do this, I would also want to find a different way of turning posters into data besides P5 code, as that would allow a lot more flexibility for visual elements. An interesting addition would be allowing photos and training the algorithm on photo placement and scale, so that users could upload photos into the interface and have the AI place them on the poster.

Sources

https://www.figure-eight.com/resources/human-in-the-loop/

Swiss (International) Style Of Design: The Guiding Principles That Influence Flat Design

https://medium.com/@rexrothX/ai-visual-design-is-already-here-and-it-wont-hesitate-to-take-over-your-petty-design-job-934d756db82e

https://medium.com/@deepsystems/human-in-the-loop-for-object-detection-with-supervisely-and-yolo-v3-fa205ff07c1f

Week 9: Midterm Update (EB)

Background:

Although I originally planned to create a framework for WebVR on A-Frame using PoseNet, the process turned out to be too difficult and beyond my capabilities and understanding of coding. Although the idea itself is relatively doable compared to my initial proposal, I still needed more time to understand how A-Frame works and the specific coding that goes into the 3D environment. However, I wanted to create something doable yet creative, possibly incorporating sonic elements into the project.

Motivation:

For the midterm, I decided to create an interactive sound visualization experiment using PoseNet. I downloaded and used a library called "Simple Tones" containing multiple different sounds of various pitches. The user chooses what sound to play by moving their left wrist along the x-axis. This project was inspired by programs such as Reason and FL Studio, as I like to create music in my spare time.

Methodology

I used the professor's Week 3 PoseNet example 1 as a basis for my project. It already had the code that allows the user to paint circles with their nose. I wanted to incorporate music into the project, so I looked online and came across an open-source library of simple sounds called "Simple Tones".

I wanted the position of my hand in the PoseNet framework to play sounds, so I decided that the x-coordinate of my left wrist would determine the pitch.

if (partname === "leftWrist") {
  if (score > 0.8) {
    // x is the wrist's x-coordinate; multiplied by 3 to reach higher pitches
    playSound(square, x * 3, 0.5);
    let randomX = Math.floor(randomNumber(0, windowWidth));
    let randomY = Math.floor(randomNumber(0, windowHeight));
    console.log('x' + randomX);
    console.log('y' + randomY);
    graphic.noStroke();
    graphic.fill(180, 120, 10);
    graphic.ellipse(randomX, randomY, x / 7, x / 7);
  }
}

The "playSound" command and its arguments come from the library I have in place. Because the x-coordinate might not reach high enough numbers to play certain pitches and sounds, I decided to multiply it by 3. The left side is high-pitched, while the right side is low-pitched.

I ran it by itself and it seemed to work perfectly.

After some experimentation, I also wanted some sort of visual feedback to represent what is being heard, so I altered graphic.ellipse to follow the x-coordinate of the left wrist: the higher the pitch (the further left on the axis), the bigger the circle.

The end result is something like this. The color and sounds that it produces give off the impression of an old movie. 

Experience and difficulties

I really wanted to add a fading effect on the circles, but for some reason the sketch would always crash when I wrote a "for" loop. I looked into different ways to produce the fading effect, but I wasn't able to include it in the code.
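
Looking back, one way the fade might work is to keep each circle in an array with its own alpha value and reduce it every frame. This is a rough sketch of that idea, not code from my actual project:

// Rough sketch of a possible fading effect: each circle keeps its own alpha
// and fades out over time. Not the code from my project.
let circles = [];

function setup() {
  createCanvas(640, 480);
  noStroke();
}

function draw() {
  background(0);
  for (let i = circles.length - 1; i >= 0; i--) {
    let c = circles[i];
    fill(180, 120, 10, c.alpha);
    ellipse(c.x, c.y, c.size, c.size);
    c.alpha -= 3;                              // fade a little each frame
    if (c.alpha <= 0) circles.splice(i, 1);    // drop fully faded circles
  }
}

// call this wherever a new circle should appear, e.g. from the PoseNet callback
function addCircle(x, y, size) {
  circles.push({ x: x, y: y, size: size, alpha: 255 });
}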

I would also like to work on the visual appearance of the UI. It does seem basic and could use further adjustment. However, this is currently as much as my coding skills can provide.

This idea and concept did seem to be a very doable task at first, but it required a lot more skill than I expected. However, I did enjoy the process, especially the breakthrough moment when I could hear the sounds reacting to my movement. 

Overall, I have now learned how to use the positioning of a body part to trigger something. Going further, I do want to work on the WebVR project, and this experience will help with understanding and implementing it.

Social Impact:

In the process of my midterm, I worked on two different projects. The first was pairing WebVR with PoseNet in order to control the VR experience without the equipment usually required. The second was the one I presented in class: a theremin-inspired PoseNet project. Although I only managed to complete one PoseNet project, I believe that both have a lot of potential for social impact.

First, the WebVR project. The initial idea behind it was to make VR more inclusive by allowing people without the funds to buy the equipment to experience VR. HTC Vive and other famous headsets all cost over 3000 RMB. By allowing PoseNet to be used inside WebVR, we can let anyone with an internet connection experience VR. Obviously, it won't be exactly the same, but it should be a similar enough experience.

Second, the theremin-inspired project. I found out about the instrument a while back and thought to myself, "What an interesting instrument." While the social impact of this project isn't as important or serious as the previous one, I can see people using it to get a feel for, or an understanding of, the instrument. The theremin differs from traditional instruments in that it is more approachable for children, or anyone for that matter: it is easy to create sounds with a theremin, even though mastering it has a very steep learning curve. With this kind of project, people of any background can experience music and sound without buying the instrument.

Future Development:

For the first project, I can see it developing into an add-on that works for every WebVR project. For this to happen, one would need an extensive understanding of the A-Frame framework; with that understanding, one could develop the tools necessary to integrate the external machine learning program.

The machine learning algorithm also needs to be more accurate in order to allow as many functions to be used as possible. 

For the second project, I can see music classes using it to explain the concepts of frequency and velocity to younger children or those with beginner knowledge of music production. It offers a visual and interactive experience for these people.

In the future, it could be possible to map velocity and volume to each point on the x- and y-axes to make the instrument more quantifiable for the person using it. For those who want to

Midterm Writing Assignment – Ziying Wang (Jamie)

For the midterm project, I developed an interactive experimental art project: Dancing with a stranger.

Background:

Dancing with a Stranger is an interactive experimental art project that requires two users to participate. The idea is that users A and B's limb movements are detected, with user A controlling figure A's arms and figure B's legs, and user B controlling figure A's legs and figure B's arms. The result is presented on the screen as two glowing figures dancing against a dark, starry-night background. Ideally, the webcam can also detect the speed of the users' feet movements and switch through a set of different songs that match distinctive speeds.

The following photoshopped image illustrates the project. The white dots are used here to demonstrate the joints that will be detected on the users, and they will not appear in the final result.

The yellow figure’s arms and the pink figure’s legs are the movements of one user; the yellow figure’s legs and the pink figure’s arms are the movements of the other user.

Motivation:

The idea for this project was inspired by Sam Smith and Normani's song "Dancing with a Stranger". When I listen to this song, it paints a picture in which, even though two people are not familiar with each other, they are bonded by the music and therefore create a tacit, mutual understanding. In most two-player games and interactive designs, each player is asked to take full control of his or her own character, so I decided to pursue a different approach: what if a person could only control half of a character, and only through cooperation with another person could they successfully create a beautiful dance together? That's how this idea came to mind.

Methodology

To create Dancing With a Stranger, I need the PoseNet model to detect two people and record their coordinates simultaneously but separately. The model records the left and right shoulders, elbows, wrists, hips, knees and feet. After storing all the coordinates, I draw a bezier curve through every three points to simulate the limbs of the two users, and I use the two users' nose coordinates to represent the positions of their faces. Once the body structure has been formed, I trace the trail of each figure's body movement, so that when a figure moves on the screen, colored trails follow the movement. The last step was to detect the speed of the two users' movements and switch between a fast and a slow song according to their average speed.
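
A simplified sketch of the idea for one limb, using the elbow as the control point of a quadratic curve (the variable names and the commented usage are illustrative):

// Simplified sketch of drawing one limb as a smooth curve through three
// PoseNet keypoints. Variable names are illustrative.
function drawLimb(shoulder, elbow, wrist, col) {
  stroke(col);
  strokeWeight(12);
  noFill();
  beginShape();
  vertex(shoulder.x, shoulder.y);
  quadraticVertex(elbow.x, elbow.y, wrist.x, wrist.y);  // elbow bends the curve
  endShape();
}

// e.g. inside draw(), once poses[] has been filled by the PoseNet callback:
// let k = poses[0].pose;
// drawLimb(k.leftShoulder, k.leftElbow, k.leftWrist, color(255, 200, 0));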

Experiments

This is the video of a single person demonstration (sound on):

This is a screenshot of the two people model:

I started by accessing two sets of body coordinates in the PoseNet model. By calling console.log(poses), I can access all the data stored as objects inside the different arrays within the poses array. Even though PoseNet distinguishes subject by subject, classifying a full set of body-part coordinates for subject 1, another set for subject 2 and so on, it fails to be completely accurate when body parts of different subjects overlap. This is a huge obstacle for me, since my primary goal was to apply one person's pair of arms to another person's pair of legs; without distinguishing the body parts clearly, the effect can't be achieved and a person's arms end up attached to his or her own legs. After I placed the bezier coordinates separately and attached them to the nose position, the model would theoretically work as I imagined. I then started a single-person demo to work on the time-lapse effect.

Originally, I thought about building arrays that would store the previous 100 coordinates of each body part and display all 100 beziers at the same time, only in different opacities, but as I started to work on it, I discovered that the dataset was huge and confusing. I then considered changing the background opacity to create the fading effect, but somehow it didn't work on the canvas: the areas covered by the trail became dark gray and remained there. I therefore decided to take the model off that canvas and build it directly in p5.js. By using background(0, 20), the opacity change worked in p5: the figure leaves a trace behind it that fades away over time.

I then started to work on the nose speed. I store the coordinates of the nose from 100 loops earlier and compare the distance between the two coordinates. This may not seem very accurate, since a person could move away and return to the previous position very quickly, but because the interval is only 100 loops, the chance of this is very low; to refine the system, I could shorten the interval and adjust the constant. To apply this to the two-people model, I only need to calculate the average of the two distances and build the conditions on that speed (defined by distance): when the average speed is above a threshold, movingFast() is executed, and vice versa. However, it didn't perform well at first, because every time the movingFast function ran, the song started playing from the beginning, so the program constantly repeated the starting action. I then revised it to check whether the song is already playing; if it is, the program skips the play function.
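
A simplified sketch of that speed check; the 100-loop interval comes from my project, while the distance threshold and song variables here are placeholders:

// Simplified sketch of switching songs based on how far the nose has moved
// over the last 100 frames. Threshold and variable names are placeholders.
let noseHistory = [];

function checkSpeed(noseX, noseY, fastSong, slowSong) {
  noseHistory.push({ x: noseX, y: noseY });
  if (noseHistory.length < 100) return;        // not enough history yet
  let old = noseHistory.shift();               // position 100 loops ago
  let d = dist(noseX, noseY, old.x, old.y);
  if (d > 80) {                                // moving fast
    if (!fastSong.isPlaying()) {
      slowSong.stop();
      fastSong.play();
    }
  } else {                                     // moving slow
    if (!slowSong.isPlaying()) {
      fastSong.stop();
      slowSong.play();
    }
  }
}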

Then I applied all these techniques to the two-people model: by calculating the average speed of the two noses, the program switches between the fast and the slow song. The two figures each consist of yellow and pink, indicating which body parts belong to which user. When there are not exactly two people in front of the webcam, the program displays a loading GIF.

The loading page:

However, because the two users' body parts aren't classified clearly, the two-people model can't perform as well as I imagined and will display the wrong color when it misrecognizes a part. It also sometimes fails to locate the hips' y position accurately. I suspect that with a better camera instead of the laptop's webcam, the project would perform better.

Social Impact & Artistic Influence

For Dancing With a Stranger, my goal is to bring people closer by having them interact with each other through dancing to music. For me, music and dancing are things that break down boundaries. By combining technology (PoseNet and p5) with art, I created a new approach to entertainment. The users (currently two) are closely connected and constantly influencing each other in this project, and together they control the music: the music choice depends on the users' average moving speed. We live in a society where our lives are closely connected with technology, and there have been many designs that aim to separate humans from their technology in order to strengthen human-to-human bonds. However, I don't think that is the right attitude toward the booming technologies of our era; a better approach is to strengthen the bonds among people with the help of technology. With p5, I managed to create a surreal visual effect of body movements. With PoseNet, I get to present what happens in the physical world on the screen, mirroring the movement in a non-human form. The users can therefore enjoy the process of being themselves, but not literally themselves, on a virtual platform.

Further Development

For further development, I'd like to transform my project from one that runs on a single PC into a platform that allows multiple users to use their own devices for PoseNet detection and mirror their images onto one public screen. Each user would see their own figure on a communal screen and perform the dance together on that communal platform while using personal devices. Preferably, the color of each user's figure would change after the model collects every user's speed and compares them, assigning different colors from the fastest to the slowest figure. I'll also add more music choices to the model and implement different ways to decide on the music. If possible, I'd also like to switch to different backgrounds instead of the all-black one, and improve the figures displayed on the screen by adding more p5 effects to them.

Midterm Writing Assignment —— Lishan Qin

Overview

For the midterm project, I developed an interactive two-player combat game with a ninja theme that allows players to use their body movements to control a character and make certain gestures to trigger the character's moves and skills in battle against each other. The game is called "Battle Like a Ninja".

Background

My idea for this form of interaction between the players and the game, which uses the players' physical body movements to trigger the characters' different skills, is to a large extent inspired by the anime "Naruto". In Naruto, most ninjas need to perform a series of hand gestures before they launch their powerful ninjutsu skills. However, in most existing Naruto battle games today, players launch a character's skills simply by pushing different buttons on a joystick. Thus, in my project, I want to put more emphasis on all the body preparation these characters do in the animation before they release their skills, by having the players strike different body gestures to trigger different moves of the character in the game. Google's Pixel 4, which features hand-gesture interaction, also inspired me.

Motivation

I've always found that in most games today, the physical interaction between players and the game is limited. Even with the development of VR and physical computing technology, and with more games like "Beat Saber" and "Just Dance" coming out, the number of video games that give people a feeling of physical involvement is still small. Thus, I think it will be fun to explore more ways of diversifying the interaction between game and player by getting rid of keyboards and joysticks and having players use their bodies to control a battle game.

Methodology

In order to track the movements of the players' bodies and use them as input to the game, I utilized the PoseNet model to get the coordinates of each part of the player's body. I first constructed the conditions that each body part's coordinates need to meet to trigger the actions of the characters. I started by documenting the coordinates of certain body parts when a specific body gesture is posed. I then set a range for those coordinates and coded it so that when all of these body parts' coordinates are within the range, a move of the character on the screen is triggered. By doing so, I "trained" the program to recognize the player's body gesture by comparing the coordinates of the player's body parts with the pre-coded coordinates needed to trigger the game. For instance, in the first picture below, when the player puts her hands together and makes a specific hand sign like the one Naruto does in the animation before he releases a skill, the Naruto figure in the game does the same thing and releases the skill. However, what the program recognizes is actually not the player's hand gesture, but the correct coordinates of the player's wrists and elbows: when the y-coordinates of the player's left and right wrists and elbows are roughly the same, the program recognizes that and gives an output.
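
In code, one trigger condition of this kind looks roughly like the sketch below; the keypoint names come from PoseNet, while the tolerance value and the releaseSkill() function are placeholders:

// Rough sketch of one gesture condition: both wrists and both elbows at
// roughly the same height triggers a skill. The tolerance is an example value.
function checkHandSign(pose) {
  let tolerance = 30;   // how close the y-coordinates have to be, in pixels
  let ys = [pose.leftWrist.y, pose.rightWrist.y,
            pose.leftElbow.y, pose.rightElbow.y];
  let sameHeight = (max(ys) - min(ys)) < tolerance;
  if (sameHeight) {
    releaseSkill();     // placeholder for whatever the character should do
  }
}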

Experiments

Originally, I wanted to use a hand-tracking model to train a hand-gesture recognition model that could recognize hand gestures and alter the moves of the character in the game accordingly. However, I later found that PoseNet could fulfill my goal just fine, and even better, so I ended up just using PoseNet. Even though it's sometimes less stable than I'd like, it makes using more diverse body movements as input possible. During the process of building this game, I encountered many difficulties. I tried using the coordinates of the ankles to make the game react to the players' feet. However, due to the position of the webcam, it's very difficult for it to get a view of the players' ankles: the player would need to stand very far from the screen, which prevents them from seeing the game, and even when the model did get the ankle coordinates, the numbers were still very inaccurate. The PoseNet model also proved not very good at telling the right wrist from the left. At first I wanted the model to recognize when the player's right hand was held high and then make the character go right, but I found that when there is only one hand on the screen, the model cannot tell right from left, so I had to program it so that when the player's right wrist is higher than their left wrist, the character goes right.
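
The workaround is then just a relative comparison between the two wrists, roughly like this (moveCharacterRight/Left and the 40-pixel margin are placeholders for the actual game logic):

// Sketch of the relative-height workaround: whichever wrist is higher
// decides the direction, so the model never has to tell left from right.
// Smaller y means higher on the canvas; the 40 px margin is an arbitrary buffer.
function checkDirection(pose) {
  if (pose.rightWrist.y < pose.leftWrist.y - 40) {
    moveCharacterRight();       // placeholder action
  } else if (pose.leftWrist.y < pose.rightWrist.y - 40) {
    moveCharacterLeft();        // placeholder action
  }
}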

Social Impact 

This project is not only an entertainment game, but also a new approach to applying machine learning technology to the design of interactive games. I hope this project can not only bring joy to the players, but also show that the interaction between game and player is not limited to keyboards or joysticks. By using the PoseNet model in my project, I hope to let people see the great potential machine learning can bring to game design in terms of diversifying the interaction between players and games, and also to raise their interest in learning more about machine learning through a fun and interactive game. Even though most games today still focus on joysticks, mice and keyboards, which is not necessarily a bad thing, I hope that in the future, with the help of machine learning, more and more innovative ways to interact with games will become possible. I hope people can find inspiration in my project.

Further Development

If given more time, I will first improve the interface of the game, since it came to my attention during user testing that players often forgot the gestures they needed to make to trigger the character's skills; I might need to include an instruction page on the web page. In addition, I will try to make more moves available in response to players' gestures to make the game more fun. I was also advised to create more characters for players to choose from, so perhaps in the final version of this game I will apply a style transfer model and have it generate different characters and battle scenes to diversify the players' choices.

Midterm Documentation — Crystal Liu

Background

My inspiration is a project called Teachable Machine. This model can be trained in real time: the user makes several similar poses as the input for a class, with a maximum of three classes, and each pose corresponds to an image or GIF. After setting up the dataset, once the user makes one of the three poses, the corresponding result comes out.

For me the core idea is excellent, but the form of the output is a little plain. There are also some existing projects connecting motion with music or other sound.

So I want to add audio as the output. The sounds of different musical instruments are artistic and familiar to people, so my final idea is to let users trigger the sound of an instrument by acting as if they were playing that instrument.

Motivation

My expected midterm project is an interactive virtual instrument. First, the trained model can identify differences in how musical instruments are played. Once it gets the result, it plays the corresponding sound of that instrument, and a picture of the instrument appears on the screen around the user.

For example, if the user pretends to be playing a guitar, the model recognizes the instrument as a guitar and automatically plays the sound of a guitar, and an image of a guitar shows up on the screen. The expected result is that it looks like the user is really playing the guitar on the screen.

Methodology

In order to achieve this, I need technology that can locate each part of my body, identify different poses, and classify them automatically and immediately. Based on what we have learned, I decided to use PoseNet for the location part.

I plan to set a button on the camera canvas so that users don't need to press the mouse to input information, making the interaction more natural. To achieve this, I need to set a range for the coordinates of my hand: when I lift my hand into that range, the model starts receiving input images and automatically stops three seconds later. The next time the user makes a similar pose, the model gives the corresponding output. KNN is a traditional classification algorithm, so it can be used to classify the poses quickly and achieve real-time training.
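
A minimal sketch of how PoseNet and KNN could be combined in ml5; the labels and the way the keypoints are flattened into a feature array are illustrative:

// Minimal sketch of pairing PoseNet with ml5's KNN classifier.
// Labels and the flattening of keypoints are illustrative.
let knn = ml5.KNNClassifier();
let currentPose;

// store the latest pose from the PoseNet callback
function gotPoses(results) {
  if (results.length > 0) currentPose = results[0].pose;
}

// turn the keypoints into a flat feature array
function poseToFeatures(pose) {
  let features = [];
  for (let kp of pose.keypoints) {
    features.push(kp.position.x, kp.position.y);
  }
  return features;
}

function addExample(label) {                 // e.g. "guitar", "drum", "violin"
  knn.addExample(poseToFeatures(currentPose), label);
}

function classifyPose() {
  knn.classify(poseToFeatures(currentPose), (err, result) => {
    if (!err) console.log(result.label);     // the predicted instrument
  });
}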

Experiment

Virtual button

The first step of my project is to replace buttons that users would need to press with the mouse with virtual buttons triggered by a body part, for example the wrist. To achieve this, I searched for a button image and found the following GIF to represent the virtual button. To avoid accidental touches, I put the buttons at the top of the screen, which looks like the following picture.

I used PoseNet to get the coordinates of the user's left and right wrists and then set a range for each virtual button. If the user's wrist approaches a button, the button changes into a GIF of a different instrument (guitar, drum or violin). These GIFs act as feedback to let the user know they have successfully triggered the button.

After that, the model should automatically record the video as the dataset. In the original example, pressing the button once adds five examples to class A. For my virtual button, the recording should run once the user triggers the button. However, I need a delay to give users time to put their hands down and get into position to "play" an instrument, because the model shouldn't count the frames where the user is lowering their hands as part of the dataset. So I set a 3-second delay. But collecting examples becomes discontinuous if I keep raising and dropping my hand.
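
Roughly, the button-plus-delay logic is something like the sketch below, reusing the knn, currentPose and poseToFeatures names from the sketch above; the button region, label and timing values are placeholders:

// Rough sketch of a virtual button: when the wrist enters the button area,
// wait 3 seconds, then collect a small burst of examples.
let collecting = false;

function checkButton(wrist) {
  let overButton = wrist.x > 500 && wrist.x < 600 && wrist.y < 100;
  if (overButton && !collecting) {
    collecting = true;
    setTimeout(startCollecting, 3000);   // give the user 3s to get into pose
  }
}

function startCollecting() {
  let added = 0;
  let timer = setInterval(() => {
    knn.addExample(poseToFeatures(currentPose), "guitar");  // example label
    added++;
    if (added >= 5) {                    // stop after 5 examples
      clearInterval(timer);
      collecting = false;
    }
  }, 200);                               // one example every 200 ms
}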

Sound

The second step is to add audio as output. At first, I simply said that if the classification result is A, then the song plays (song.play();). But the result was that the song played a thousand times in one second, so I could only hear noise, not the sound of a guitar. I asked Cindy and Tristan for help, and they suggested the following method: if the result is A and the song is not already playing, then play it. Finally it worked, and there was only one sound at a time.
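
The guard itself is just a check on the playing state, roughly like this (guitarSound stands in for whichever p5.SoundFile is mapped to that class):

// Only start the sound if it is not already playing, so that classifying the
// same pose on every frame doesn't restart the sound constantly.
if (result.label === "guitar" && !guitarSound.isPlaying()) {
  guitarSound.play();
}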

UI

The third step is to beautify the UI of my project. First is the title: Virtual Instrument. I made a rectangle as a border and added an image to decorate it; it took some time to shrink the border to the right size. I also added a shadow to the words and added 🎧🎼 to emphasize the music theme.

Then I added some GIFs, placed beside the camera canvas, which show the connection between body movement and music.

Finally, I added a border to the result section:

Problems

The problems I found in the experiment are as follows:

  1. The process of recording and collecting examples is discontinuous; it often gets stuck. The upside is that the user can tell whether the collection has ended by seeing whether the image is smooth or stuck. The stuttering may also have something to do with my computer.
  2. Sometimes the user might touch two buttons at the same time, and it is hard to avoid this situation in code, so I just changed the range of each button to widen the gap between them.
  3. I set a button to start predicting, but it was hard for the model to catch the coordinates of the left wrist, and sometimes it took the user a long time to start predicting. Thus, I changed the score threshold from 0.8 to 0.5 to make it better.
  4. Once the user pressed the start button, there would be a drum sound even if the user didn't do anything, which confused me. Maybe it is because KNN cannot return a result that doesn't belong to any class; the model can only pick the most likely class for the input and give the corresponding sound.

Therefore, the next step is to solve these problems and enrich the output. For example, I can add more kinds of musical instruments, and the melody could change according to the speed of the body movement.

Social impact

My goal is to create a project that lets people interact with AI in an artistic and natural way. They can make the sound of a musical instrument without having a real physical one. It is also a way to combine advanced technology with everyday art, and it provides an interesting and easy way to help people learn about and experience artificial intelligence in their daily lives. In a word, it offers a new mode of interaction between humans and computers. I think this project could be displayed in a museum as a medium that bridges the viewers and the work.

Further development

The next step is to solve the problems that appeared in the experiment and enrich the output. I also need to fully utilize the advantage of real-time training in my final project. My idea is to give users more opportunities to express their thoughts and creativity by letting them decide which kind of input triggers which kind of output. I also want to add a style transfer model to enrich the visual output: the style of the canvas could change with the mood of the melody. For example, if the user chooses a romantic style of melody, the canvas could turn pink and pink bubbles could appear on it. But the most essential problem is how to let users create their own ways of expression through real-time training, and how to make the interaction smoother is also important for the final project. I also want to use sound classification to play the role of the visual button: on the one hand, users could create their own sound commands to control the model; on the other hand, this could avoid the problem that occurred in my previous experiment. I am worried, though, that the sound classification model may not be accurate enough, so the result might not reach my expectations.