Final Project Proposal Documentation – Kevin Xu

Kevin Xu

Kx421

Final Project Concept

The concept of my final project is to express to the human eye how I believe computers perceive music. This idea first came from a thought about how computers perceive everything differently from humans. After researching how coding works and how computers can only truly read binary, I decided to base my project not on how computers read data in general, but on how computers read music. My project will be a website that presents three instruments playable by key presses on the keyboard. Those key presses will also generate visible binary that appears beside the instruments. If the binary is then converted back into text, it will read the exact notes played by the user. This way, people can exchange music played through this website not through audio but through binary. I chose this topic because I wanted to explore the concept of conversion. The more I learn about all areas of study, the more I realize that most things can be converted into other things, and that most units of measurement or symbols can be traced back to a completely different area of study. Therefore, I wanted to make an experience that urges users to think more about conversion, and about how even music can be converted into a language computers speak, and vice versa. In addition, I will include an easter egg: people who understand the basics of code can access a special part of the website with a few extra features. This easter egg will serve as a reward for discovery and deeper thinking on the part of the user.
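
For illustration, the note-to-binary conversion described above could be as simple as mapping each note letter to its 8-bit ASCII code. The helper names in this JavaScript sketch are placeholders, not the site's actual code:

// Minimal sketch of converting played notes to binary and back.
function notesToBinary(notes) {            // e.g. "CEG"
  return notes
    .split('')
    .map(ch => ch.charCodeAt(0).toString(2).padStart(8, '0'))
    .join(' ');
}

function binaryToNotes(binary) {           // e.g. "01000011 01000101 01000111"
  return binary
    .split(' ')
    .map(bits => String.fromCharCode(parseInt(bits, 2)))
    .join('');
}

console.log(notesToBinary('CEG'));                         // "01000011 01000101 01000111"
console.log(binaryToNotes('01000011 01000101 01000111'));  // "CEG"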

One inspiration for this project is not a single artist, but rather a method of music production. Autotune essentially describes modifying a person's vocals using a computer. This kind of computer-aided music production is one of the major things that made me think about how computers can read music. In addition, the MIDI board is another method of music production which inspired my project. The MIDI board is a tool for producers and musicians that allows them to emulate other instruments using a computer. Most MIDI boards include a number of pressure-sensitive buttons as well as a keyboard. I wanted to match that experience using a computer's pre-existing keyboard, which shaped how my project looks and feels.

My project will focus on audio and on making the act of playing notes as simple as possible for the user. I will use the "keydown" and "keyup" events to create instruments playable through key presses, and include visuals to help the user better understand the notes they are playing. The user will mainly use the keyboard to create notes through the instruments, but beyond that, the user is encouraged to open a binary converter to translate the binary they generated back into readable notes. Then, they can view those notes in a separate window while looking at my website, creating a sort of virtual sheet music which can be followed by humans.
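
A minimal sketch of the keydown/keyup idea is shown below, assuming the Web Audio API; the actual project may instead use p5.sound or recorded samples, and the key-to-note mapping and the "binary" element are placeholders.

const ctx = new (window.AudioContext || window.webkitAudioContext)();
const keyToFreq = { a: 261.63, s: 293.66, d: 329.63, f: 349.23 }; // C4 D4 E4 F4 (placeholder mapping)
const active = {};

document.addEventListener('keydown', e => {
  const freq = keyToFreq[e.key];
  if (!freq || active[e.key]) return;            // ignore unmapped keys and key repeats
  if (ctx.state === 'suspended') ctx.resume();   // browsers require a user gesture before audio
  const osc = ctx.createOscillator();
  osc.frequency.value = freq;
  osc.connect(ctx.destination);
  osc.start();
  active[e.key] = osc;
  // append the pressed key's 8-bit ASCII code to an on-page binary log (placeholder id)
  document.getElementById('binary').textContent +=
    e.key.charCodeAt(0).toString(2).padStart(8, '0') + ' ';
});

document.addEventListener('keyup', e => {
  if (active[e.key]) {                           // stop the note when the key is released
    active[e.key].stop();
    delete active[e.key];
  }
});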

sent.ai.rt – Final Project Proposal – Abdullah Zameek

sent.ai.rt – an interactive portrait  

concept:

sent.ai.rt is a real-time interactive self-portrait that uses machine learning techniques to create an art installation whose behavior changes depending on the user's mood, derived from their facial expressions. The user looks into a camera and is presented with a video feed in the shape of a portrait which they can then interact with. The video feed responds in two ways to the user's emotion – it changes its "style" depending on the expression, and the web page plays back music corresponding to the mood. The style that is overlaid onto the video feed comes from famous paintings with a color palette associated with the mood, and the music was crowdsourced from a group of students at a highly diverse university. The primary idea is to give individuals the ability to create art that they might not otherwise have been able to create. On a secondary level, since the styles used come from popular artists such as Vincent van Gogh, the piece also pays homage to their craft by creating new works that draw from their art.

The name "sent.ai.rt" comes from the words "sentiment," representing the emotion that dictates how the portrait responds, and "art." The "ai" in the middle stands for "artificial intelligence," the driving force behind the actual interaction.

inspiration:

The use of machine learning in the arts has never been more prominent. With more technical tools coming out each day, artists have found new and exciting ways to display their craft. One such artist, Gene Kogan, was one of the pioneers in the use of machine learning to create interactive art. Inspiration for sent.ai.rt was heavily drawn from his project "Experiments with Style Transfer (2015)," where Kogan essentially recreated several paintings in the styles of others. For example, he re-created the famous Mona Lisa in styles ranging from Van Gogh's "Starry Night" to the look of the Google Maps layout. Another popular artist, Memo Akten, created a portrait-based AI piece called "Learning to See: Hello World (2017)," which involves teaching an AI agent how to see. Thus, my project draws heavy inspiration from both these artists and their work in order to create a cohesive piece that takes into account human emotion and its interaction with computer-based intelligence.

production:

The project is completely web-based – it uses standard web technologies such as HTML (which defines the structure of the website), CSS (which dictates how the website looks) and JavaScript (which allows programming/algorithmic logic to be implemented). In addition to these, the website will also use several JavaScript-based frameworks, namely p5.js (which allows a great deal of design and multimedia work to be done) and ml5.js (which is a machine learning framework). The machine learning work can be further divided into two distinct tasks – recognizing human emotion, and applying the style related to that emotion to the video feed. The former is referred to as "sentiment analysis" and will be done with the help of an additional JavaScript add-on called FaceAPI. The latter is referred to as "neural style transfer" and will be done with the help of ml5.js functionality.
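
The sketch below shows one way the two pieces could fit together, using p5.js, ml5.js's styleTransfer and the standalone face-api.js expression models. The model folders, style names and mood-to-style mapping are placeholders and assumptions, not the project's actual assets or wiring.

let video, styleHappy, styleSad, currentStyle, styledImg;
let stylesReady = 0;

function setup() {
  createCanvas(480, 480);
  video = createCapture(VIDEO);
  video.size(480, 480);
  video.hide();

  // one pre-trained style-transfer model per mood (placeholder model folders)
  styleHappy = ml5.styleTransfer('models/wave', video, styleLoaded);
  styleSad   = ml5.styleTransfer('models/scream', video, styleLoaded);
  currentStyle = styleHappy;

  // face-api.js expression models, served from a local /models folder (assumption)
  Promise.all([
    faceapi.nets.tinyFaceDetector.loadFromUri('models'),
    faceapi.nets.faceExpressionNet.loadFromUri('models')
  ]).then(() => setInterval(checkMood, 2000));   // re-check the user's mood every 2 s
}

function styleLoaded() {
  stylesReady++;
  if (stylesReady === 2) setInterval(applyStyle, 500);  // start stylizing once both models load
}

async function checkMood() {
  const det = await faceapi
    .detectSingleFace(video.elt, new faceapi.TinyFaceDetectorOptions())
    .withFaceExpressions();
  if (!det) return;
  // pick the strongest expression and switch the style (the real piece would also switch music)
  const mood = Object.entries(det.expressions).sort((a, b) => b[1] - a[1])[0][0];
  currentStyle = (mood === 'happy') ? styleHappy : styleSad;
}

function applyStyle() {
  currentStyle.transfer(video, (err, result) => {
    if (!err && result) loadImage(result.src, img => { styledImg = img; });
  });
}

function draw() {
  background(0);
  if (styledImg) image(styledImg, 0, 0, width, height);
}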

Additionally, the assets for this project such as the images and music have been procured from the Internet. The choice of music was determined by an online survey in a closed university group where students were asked to list songs that they associate with a particular set of moods.

In terms of feasibility, the technology exists to make this project a reality, and it can certainly be extended with further functionality (such as the ability to freeze, save and tweet out a frame from the feed) if necessary.

Week 13: Final Project Proposal – Evan and Kai

How Do We Fall in Love with a City?

Concept:

As somewhat cosmopolitan city dwellers, Evan and Kai wanted to address their adoration for cities through an artistic approach. Having lived in American cities like Richmond (Kai) and New York (Evan) while also being introduced to Shanghai, they chose these three cities to explore and detail how each has captivated them.

The concept behind the code is that you arrive on the homepage, which looks like a desktop, and can navigate a few specific folders, for example: Richmond, New York, and Shanghai. The audio part of the homepage will simply be an on-off switch for the background music. The folder icons will be customized, and when they are clicked, you'll be linked to a new tab that focuses on that city.
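
A minimal sketch of this desktop-style homepage behavior in JavaScript might look like the following; the element ids, class names and file paths are placeholders for illustration only.

// background-music on-off switch (placeholder file and element id)
const music = new Audio('assets/background.mp3');
music.loop = true;

document.getElementById('music-toggle').addEventListener('click', () => {
  if (music.paused) {
    music.play();
  } else {
    music.pause();
  }
});

// each folder icon opens its city page in a new tab
document.querySelectorAll('.folder').forEach(folder => {
  folder.addEventListener('click', () => {
    window.open(folder.dataset.page, '_blank');   // e.g. data-page="shanghai.html"
  });
});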

Sources:

Kai-

Miriam Singer: http://www.miriamsinger.net/

Singer has worked in Philadelphia, and her cityscapes are clustered, cluttered, and colorful. Her work ranges from murals to drawings, but it all has a childlike, impossible-to-actualize sense of fantasy that highlights the artist's love for her city. Kai chose her as a source simply because she has been inspired by Singer for some time, ever since browsing a zine library in Las Vegas, Nevada.

Evan-

Derek McCrea: http://watercolorpaintingart.blogspot.com/2013/08/new-york-city-skyline.html

McCrea encapsulates the beauty of Manhattan through a very simplistic style of watercolor art. By focusing on an imprecise, almost unclear style of blending colors and objects together, McCrea creates distinct representations of different cities that are incredibly breathtaking.

Production:

Evan intends to create a city-inspired soundtrack that includes different elements of sound (not only music) to represent the noises and overall feeling of each of their chosen cities. He will include various sounds that are specific to each city and arrange them to try to capture the feeling of physically being there. The project will also have some voice-over storytelling about what Evan and Kai love about their respective cities. The voice-overs will be somewhat casual in rhetoric, yet scripted and recorded with professional audio quality.

Many graphic design elements will come from personal scans as opposed to digital art. The thought process behind using real hand drawings is to lend a more sentimental touch to the overall online platform. The execution of this project first requires Evan and Kai to draw new art and curate some previous art, which will become their graphic design assets. Later, these will be processed into minimalistic .pngs using Photoshop.

Finally, user interaction will give users a clearer understanding of the three cities Evan and Kai chose. The aim is to illustrate an artistic approach to city life through their own examples, which can be expanded upon by others' interpretations.

In creating this interactive scope of their three cities, Evan and Kai want the user to feel and experience what it is like to be present in these places and what specific cultural aspects make these cities special and unique.

Week 12: Response to “A History of Net Art” – Grace Currier

Prior to reading this article, I was relatively unaware of the term "internet art" and what exactly it meant. I had never really considered using the internet as a platform with which to create art, nor had I truly thought about it as a form of art. Rachel Greene sheds light on the growing popularity of the medium as, of course, the world becomes increasingly more connected, and thus, plugged in. Even so, it is still a rather undervalued art form, as many people, including myself, are relatively unaware of its existence. When the word "art" comes to mind, one might think of a museum, painting, photography, or even acting, but rarely would one ever think of the internet. The emergence of internet art has demonstrated not only the creativity of those who produce it, but also the fluidity and flexibility of the internet itself. That said, I think that because the internet already has so many uses, it is easy for its artistic side to be overlooked or slightly forgotten. Nonetheless, it is a beautiful art form that will hopefully continue for as long as the internet exists. After reading this article, I have a newfound appreciation for internet art and its creators.

Week 11: Video Project Documentation – Grace Currier

imanas.shanghai.nyu.edu/~hrm305/videoProjectFinal

Project Idea 

Our initial idea for this project was an investigative murder mystery that parodied serious crime television shows. We wanted the story itself to be comedic in nature, and thus decided on two suspects, both of whom were played by me, and the cop and the victim, both played by Xavi. We wanted our user interaction to mainly consist of choosing between two options or investigating evidence of the crime.

The Process 

We encountered a lot of ups and downs throughout this project. First, we could not get the tripod to connect to the camera, which created a somewhat shaky appearance in the final product. Second, the audio was not as crisp as we were hoping, which was another criticism we received. This project was very time-consuming and required a lot of attention from all of us, but I can proudly say that I am very happy with how it turned out in the end. Xavi and I did most of the work in front of the camera while Hanna and Selena did the behind-the-scenes work. In terms of interactivity, we included choice-based interaction, pop-ups, and buttons that enabled the user to better understand the plot. The editing was seamless and the story flowed smoothly, just as when we first planned it out. One reason we decided upon a comedic video is that Xavi and I are by no means professional actors. So it would be embarrassing (at least I think) for both of us to actually try to act, rather than willingly embarrass ourselves for the sake of laughter and enjoyment from the viewer.
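
For reference, the choice-based interaction could be wired up along these lines; the element ids and video file names below are placeholders, not the project's actual files.

const player = document.getElementById('player');        // the <video> element
const choices = document.getElementById('choices');      // container holding the two choice buttons

function offerChoice(optionA, optionB) {
  choices.style.display = 'block';
  document.getElementById('choiceA').onclick = () => playClip(optionA);
  document.getElementById('choiceB').onclick = () => playClip(optionB);
}

function playClip(src) {
  choices.style.display = 'none';
  player.src = src;
  player.play();
}

// when the intro clip ends, the user picks which suspect to investigate
player.addEventListener('ended', () => {
  if (player.src.endsWith('intro.mp4')) {
    offerChoice('suspect1.mp4', 'suspect2.mp4');
  }
});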

Post-Mortem

Our original idea stuck with us from beginning to end. Despite the negative feedback, it was so fun to film and was also a learning experience, for sure. Hanna and Selena did a fantastic job of editing and coding to make the final project one that we could all be proud of. I think we all worked very well as a team, and they frankly made this project my favorite in the class thus far.