Simplinstrument – Tristan Murdoch – Marcela Godoy

Final Project:

Concept and Design:

Our first idea for the final project was to create an instrument that requires little to no understanding of music to play. There would be no note reading or unfamiliar mouth or hand positions. The instrument would be controlled by simple hand movements and a mouthpiece that measures the air pressure. Here is a clip of Daniel simulating what he hoped our project’s experience would look like:

Here Daniel is using two hands and his breath to simulate control of amplitude, frequency, and modulation. But we quickly discovered that, much like playing the piano, using two hands is much more difficult than it looks. In essence, we wanted users to be able to easily understand and play this without the hassle of extensive learning. To simplify the design, we decided not to include the modulation feature and instead spent more time perfecting the other two inputs. As for specific input methods, we used a gyroscope and a pressure sensor. One of the other input methods we considered was a slide potentiometer, which would have let the user control the pitch much more precisely, but we agreed that measuring the angle of a ring-mounted gyroscope would enhance the experience far more than a slider.
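
As a rough sketch of the mapping we were aiming for (the ranges are hypothetical, the mouse stands in for the sensors so the example runs on its own, and Processing's Sound library generates the tone here, though the real project played sampled instrument sounds), the idea looks something like this:

```java
// Minimal sketch of the two-input mapping: one value drives pitch,
// the other drives volume. Mouse position fakes the sensor readings.
import processing.sound.*;

SinOsc osc;

void setup() {
  size(400, 400);
  osc = new SinOsc(this);
  osc.play();
}

void draw() {
  background(0);
  // mouseY stands in for the wrist angle, mouseX for the breath pressure
  float angle    = map(mouseY, 0, height, -45, 45);   // degrees (hypothetical range)
  float pressure = map(mouseX, 0, width, 0, 1023);    // raw sensor range (hypothetical)

  osc.freq(map(angle, -45, 45, 220, 880));  // pitch follows the hand
  osc.amp(map(pressure, 0, 1023, 0, 1));    // volume follows the breath
}
```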

Fabrication and Production:

As far as physical fabrication, there was not much to do. I created a ring out of velcro strips to hold the gyroscope, as well as a ring stand to keep it steady for calibration before use. Initially there was no ring stand and the user was expected to calibrate the device themselves. During the user testing session we quickly realized how confusing and complicated this was, so we built a stand out of a box and chopsticks. Marcela also gave us an idea we hadn't considered before: if we are focusing on the experience, why should we have an interface at all? Instead, we would blindfold the user and limit them to headphones so they could focus completely on the experience of making music.

Now, I will go more into detail about the process of fabrication. The first step after receiving the pressure sensor in the mail was to build a mouthpiece prototype for testing purposes. 

Mouthpiece Print (Part 1)

We used this to test (a) whether the sensor worked well and (b) its output range, so we could calibrate it properly.
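
For the range test, all we really needed was to watch the readings and note the smallest and largest values. A minimal Processing sketch along these lines would do the job (assuming the Arduino prints one reading per line over serial; the port index is a placeholder):

```java
// Track the minimum and maximum sensor readings arriving over serial,
// so we know what bounds to use when mapping the pressure later.
import processing.serial.*;

Serial port;
float minVal = Float.MAX_VALUE;
float maxVal = -Float.MAX_VALUE;

void setup() {
  size(300, 100);
  printArray(Serial.list());                        // list available ports
  port = new Serial(this, Serial.list()[0], 9600);  // port index is machine-specific
  port.bufferUntil('\n');
}

void draw() {
  background(0);
  fill(255);
  text("min: " + minVal + "   max: " + maxVal, 10, 50);
}

void serialEvent(Serial p) {
  String line = trim(p.readStringUntil('\n'));
  if (line == null || line.length() == 0) return;
  float v = float(line);
  if (!Float.isNaN(v)) {
    minVal = min(minVal, v);
    maxVal = max(maxVal, v);
  }
}
```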

Testing Mouthpiece and Sensor

After we checked if everything worked (it did), we finished the mouthpiece.

Mouthpiece 3D Model (2nd Part)
Prototype Mouthpiece (Failure)
Completed Mouthpiece (Final Product)

The next step was to write the code, which involved Processing functions neither Daniel nor I had ever even heard of. This was by far the hardest part of the project. Calibrating the gyroscope required a lot of math and online resources. Take a look at parts of the process:
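
For context, the core of that math is what most of the online resources call a complementary filter: the accelerometer gives a stable but noisy angle, the gyro gives a smooth but drifting one, and you blend the two. A minimal sketch of the idea, written here in Processing with hypothetical variable names (our real code ran on the Arduino, and this assumes the module also provides accelerometer readings, as common IMU breakouts do):

```java
// Complementary filter: blend the accelerometer angle (stable, noisy)
// with the integrated gyro rate (smooth, drifts) to get a usable angle.
float pitchAngle = 0;   // the angle that eventually picks the note
int lastTime;

void setup() {
  lastTime = millis();
}

void draw() {
  // dummy readings just to show the call; real values would come from the sensor
  float angle = updateAngle(0.2, 0.98, 1.5);
  println(angle);
}

float updateAngle(float ay, float az, float gyroRate) {
  float dt = (millis() - lastTime) / 1000.0;
  lastTime = millis();

  float accelAngle = degrees(atan2(ay, az));       // angle from gravity
  float gyroAngle  = pitchAngle + gyroRate * dt;   // angle from integrating the gyro rate

  pitchAngle = 0.98 * gyroAngle + 0.02 * accelAngle;
  return pitchAngle;
}
```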

After coding, we built the ring with the gyroscope in it, hoping to allow users to move their wrists up or down to control the pitch. Surprisingly, this worked quite well!

Here is the ring, designed to fit any finger size:

As a side note, we created two versions: easy and expert. Easy mode plays a backing track and limits the user to notes from the underlying chords, meaning they cannot sound bad even if they try. Expert mode lets the user freestyle with any note available, making it much harder to produce a pleasing melody.
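
The core of easy mode is just snapping whatever pitch the hand asks for to the nearest allowed note. A minimal Processing sketch of that idea (the chord frequencies here are an example C major set, not necessarily the exact ones we used):

```java
// "Easy mode" snapping: push a requested frequency to the nearest chord tone.
float[] chord = { 261.63, 329.63, 392.00, 523.25, 659.25, 783.99 };  // C-E-G over two octaves

float snapToChord(float freq) {
  float best = chord[0];
  for (float note : chord) {
    if (abs(note - freq) < abs(best - freq)) {
      best = note;
    }
  }
  return best;
}

void setup() {
  println(snapToChord(300));  // prints 329.63 (E4), the closest chord tone
}
```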

Here I will include all the user tests, including some from the user testing sessions. Unfortunately, most were done with headphones, since the experience is best when the sound fills your ears.

^^ Featuring us amazed that it works properly for the first time

In the final version, we also included the blindfold to give a more authentic experience and keep the user from worrying about their surroundings. In the next clip, Daniel begins in expert mode and then I activate easy mode (to showcase both). You can hear how easy mode pushes the pitch to the closest chord tone.

Conclusion:

In the end, we are very proud of what we managed to accomplish. The interactive experience consists of the user inputting a pitch and an amplitude and, once they hear the feedback from the machine, continuing to adjust those inputs to create a melody. The user can also try to match the beat of the backing track on easy mode, which makes the experience much more interactive than the freestyle. Additionally, we believe the hand movement altering the pitch adds another level of interactivity. Ultimately, our audience reacted to our project the way we hoped they would. During the tests it was hard for people to get the full experience because our code would stop working or the gyroscope calibration would fail. However, near the end we asked Kyle to test it once more when it was fully operational, and he couldn't stop using it (we didn't get footage of that, just an image).

If we could improve this project, we have a few ideas in mind. Thus far, the experience has been all about the individual: the user only performs for themselves. However, if we were to use loudspeakers and longer wires, the blindfolded user could stand up and effectively "perform" a piece. Other modifications we might have added are a recording feature to play back the user's music and support for other instrument sounds. It feels strange to blow into a mouthpiece and hear a string ensemble, but those were the cleanest-sounding instrument samples we could easily acquire. All in all, I think this project was successful because it strays from mainstream music. It invites any and all individuals, regardless of musical background, to immerse themselves in their own music.

Recitation 11: Workshops by Tristan Murdoch

For this recitation, I chose the "Serial Communication" workshop. This seemed ideal because Daniel and I use some Arduino-to-Processing serial communication in our project, and I wanted to brush up on it so I could help. Luckily, we mostly focused on Arduino to Processing, as many people's projects also went in this direction.

However, I was mildly disappointed that the workshop mostly rehashed material we had already learned. Although I understand that practice improves my ability to code, in this instance it felt rather simple and repetitive.

By the end, we were using simple inputs like buttons and potentiometers on A0 and A1 to control the color, size, and movement of various things in Processing. First, we used the Arduino code from the docs for sending "multiple values." In Processing, we created an ellipse that we could move with the potentiometers, and then added a red square at the mouse position whenever a button was pressed.
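
The Processing side looked roughly like the sketch below (reconstructed from memory, so the variable names and port index are placeholders; the Arduino sends the two potentiometer readings and the button state as one comma-separated line):

```java
// Two potentiometers move an ellipse; the button stamps a red square at the mouse.
import processing.serial.*;

Serial port;
int[] sensors = new int[3];   // potA, potB, button

void setup() {
  size(500, 500);
  port = new Serial(this, Serial.list()[0], 9600);  // port index is machine-specific
  port.bufferUntil('\n');
}

void draw() {
  background(255);
  float x = map(sensors[0], 0, 1023, 0, width);
  float y = map(sensors[1], 0, 1023, 0, height);
  fill(0);
  ellipse(x, y, 40, 40);

  if (sensors[2] == 1) {        // button pressed
    fill(255, 0, 0);
    rect(mouseX, mouseY, 20, 20);
  }
}

void serialEvent(Serial p) {
  String line = trim(p.readStringUntil('\n'));
  if (line == null || line.length() == 0) return;
  int[] values = int(split(line, ','));   // expecting "potA,potB,button"
  if (values.length == 3) sensors = values;
}
```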

Here is the final product (with a partner).

Recitation 10: Media Controller by Tristan Murdoch

In this recitation, we used knowledge gained from the previous week to alter an image, a video, or live video from the computer's webcam. I chose to use an image of a tree in Costa Rica with beautiful, naturally pink flowers.

I reused part of the code from the previous class that changes the size and orientation of the pixels as the mouse moves around the image, but I used potentiometers as inputs instead of mouseX and mouseY. To be completely honest, I did not have a particular goal in mind, but I played with the size and distance values and got some interesting results.
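
A simplified, self-contained version of the idea looks like the sketch below; mouseX stands in for the potentiometer so it runs without hardware, and the filename is a placeholder:

```java
// Rebuild the image out of tiles whose size comes from an input value
// (a potentiometer over serial in the recitation; mouseX here).
PImage img;

void setup() {
  size(600, 400);
  img = loadImage("tree.jpg");   // placeholder filename
  img.resize(width, height);
  noStroke();
}

void draw() {
  background(0);
  int tile = int(map(mouseX, 0, width, 4, 40));  // input value controls tile size
  for (int y = 0; y < img.height; y += tile) {
    for (int x = 0; x < img.width; x += tile) {
      fill(img.get(x, y));
      rect(x, y, tile, tile);
    }
  }
}
```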

That first attempt was one of the more successful ones (in the sense that I could understand what I was doing), but as I swapped in different variables to see what effects would occur, the image manipulation got a little crazy:

I took a look at "Computer Vision for Artists and Designers," but it appears to me that the technology the author focuses on is mostly a computer's ability to recognize the movement of a human and interact with that movement in some way. I used an image as the object of manipulation, which does not rely on the same technology as the reading. However, I had the option to activate the webcam and manipulate the live footage. Although a webcam gives the computer the ability to 'see,' what we did in the recitation is not the same as the examples in the reading.

Recitation 9: Final Project Process by Tristan Murdoch

Step 1:

Monica’s Project:

Monica's (I hope her name is actually Monica, because now I'm not sure) project, titled "A Project in Sound," intends to act as a role-playing game (RPG) for escaping from reality: or, as Monica calls it, "imagination diversion." She sounded very ambitious, claiming that there would be many, many different possible endings. My advice, from experience playing RPGs, was that some choices can be "game-changing," but others can simply cue different dialogue responses and still end the scene the same way. In other words, don't overburden yourself with too many endings when the user can feel the same level of interactivity from fewer branches. Once you split the story once, twice, three times, it becomes very difficult and time-consuming. As far as input methods for choices, she would like to use sensors that activate when you put your hand over them. There would be two options for each encounter, and the game would last roughly five minutes. Personally, I don't see much difference between "A Project in Sound" and an RPG I can play on my phone (besides the input method). Our definitions of interaction seem to line up pretty well, as both agents change their actions based on the other's previous action to achieve a particular goal.

Serene’s Project

Her project is called "Driving into Imagination," and it is also an RPG-style project. The main idea is that the user chooses different paths to drive along, and every path strays further and further from reality; in essence, the environment becomes like someone's imagination. Serene also wants to add key controls to play songs on the radio while playing. One question that wasn't cleared up for me was how the scene changes will work: will the car always stay still on-screen, or will the vehicle and environment move? Another idea we gave her was to think of an ending for the game. Where are you driving to? Are there different destinations? Can you crash from a wrong turn? These questions hopefully helped guide her toward a more dynamic and interactive experience. Again, our versions of interaction line up fairly well. If she can provide a path or goal for her project, I believe it will be more successful.

Daniel’s Project

Daniel plans to create what he calls an "Interactive Heartrate Monitor." This project will measure the user's heart rate from a finger and display it on a screen. At the same time, a series of images and words geared to trigger thought-provoking mental activity will be displayed under the heart rate. Ideally, the user will be able to see how each image or phrase changed their heart rate and be left pondering the images they saw. Unfortunately, we collectively agreed that users will not experience this the way he hopes they will. I suggested he include headphones with relaxing sounds, perhaps alpha waves or other audio that would bring the user out of reality and into the project. The interaction level in this project does not reflect how I define interaction, because there is little chance the user will reflect on the project after use. The user may well forget the experience and move on to the next project, eliminating a part of the interactivity Daniel had in mind.

Step 2

For our project, Monica recommended incorporating simple instructions for an easy tune to play, in order to give users a goal instead of them just playing whatever they want. Another person recommended finding people with no musical background for user testing, to see whether we are achieving our plan (we both have music experience, so it may be difficult for us to know).

The group felt the most successful part was the pressure sensor as the mouthpiece (and I agree), because it is clearly intuitive. Although we did not explicitly discuss the least successful part of our project, it seemed to be the "simple" part: the group members thought the project would end up more complicated than playing a regular instrument. I disagreed, though not on the grounds of difficulty. Clearly it will be difficult to play, but the point is that it is easy to understand and requires no previous knowledge, meaning the learning curve is gentle.

We will definitely incorporate a few of these ideas into our project. For one, we will create a goal for people to accomplish, perhaps a series of goals that get harder and harder for users who want to attempt them. Next, from Daniel's group (my partner, not the same Daniel from my feedback group) we received the idea of using a disposable mouthpiece. Lastly, we are brainstorming whether we can make the inputs more intuitive than they currently are, in order to avoid any unnecessary complications.

Final Project Proposal/Essay by Tristan Murdoch

I would like to begin by mentioning that although I have researched and brainstormed with my partner, Daniel, we have not discussed any specifics like naming or the exact design. We may initially have very different ideas of how this project will turn out, so the following project proposal is definitely subject to adjustments after we consult on exactly how to begin creating our project. 

The working title I have devised for our project is the “Simplinstrument.” Not because it is an extremely simple instrument to play—it may be quite difficult—but because it requires no prior knowledge of music or experience in playing instruments. Arguably the biggest reason people do not want to learn an instrument is because of the challenge of learning to read sheet music, fingerings, or even breath control. Our Simplinstrument removes those aspects of music and brings forth a host of new, interesting ways to create sound and music. 

The initial research I did for the final did not actually end up contributing very heavily to this project idea. After seeing Daniel's idea, however, I started looking into theremins, a type of instrument that fits our project well. A theremin measures the distance of the user's hands from certain sensors and translates it into frequency and amplitude values that control the pitch and volume of a sound. We hope to keep the basic idea of the theremin in mind, but change the inputs to make the instrument more interesting and original. In this way, users with a background in music and those without one can alike experience a new method of music creation.

To begin, I hope to make the instrument resemble what most people would recognize as an instrument. The most likely option, for two important reasons, is something like a recorder. Firstly, most users can identify with a recorder, as it is a very common instrument used to teach music in elementary and middle school. Secondly, the top has a mouthpiece, which we hope to incorporate. Much like with bagpipes, however, the strength and precision of the airflow are not critical; rather, a sensor will detect a minimum amount of pressure and trigger a buzzer (or other sound emitter). Another option for the mouthpiece, which I prefer, is to map the amount of pressure from the sensor to the amplitude, which would be easier for users to handle, albeit less interesting. Then, we have many sensors to choose from to alter the frequency (and potentially the amplitude, depending on the nature of the mouthpiece) of the sound waves. What I find most alluring is a slide mechanism along the shaft of the instrument that, when moved with one hand, raises or lowers the pitch of the sound. Daniel seems more attracted to the idea of using ultrasonic sensors, much like the theremin, to change the pitch. Lastly, we will use Processing to display which note is being played, as well as the current volume. If the coding does not prove too difficult, I would like to use sound files to play the notes instead of a buzzer; the sound quality would be much better, and the project would be more interactive (Processing to Arduino as well as Arduino to Processing).

As far as time management, I hope to work with Daniel next week to complete the code and the plan for the build, as well as buy, print, or find any materials not immediately at hand. After that is complete, we will begin wiring and building the Simplinstrument as early as possible. We have decided that if we cannot complete the mouthpiece (one of the more troublesome parts of the project) by May 5th, we will have to change our project, or at least make significant alterations. We struggled to complete the midterm project on time and will not make the same mistake again; we will try to be done two or three days before the deadline in order to troubleshoot problems and make last-minute tweaks and changes.
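
Since we plan to have Processing display the note being played, here is a small, hypothetical helper for that part: it converts a frequency to the nearest note name by mapping the frequency to a MIDI note number and back to a name (a sketch of the idea, not our final code).

```java
// Turn a frequency in Hz into a note name for the on-screen display.
String[] NOTE_NAMES = { "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B" };

String noteName(float freq) {
  int midi = round(69 + 12 * (log(freq / 440) / log(2)));  // nearest MIDI note number
  int octave = midi / 12 - 1;
  return NOTE_NAMES[midi % 12] + octave;
}

void setup() {
  println(noteName(440));     // A4
  println(noteName(261.63));  // C4
}
```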

As mentioned earlier, the direction of the project itself did not come from the research in my blog posts. I looked into the components of a theremin and similar instruments over the past few days, and we were inspired to make an even stranger instrument. I got the slide idea from the Haken Continuum, an instrument that creates pitch and timbre values based on the location of the fingers along the slide surface. That being said, there are a variety of options for inputs.

As for my definition of interaction, I believe this project fits very nicely with it. I included in my last definition the idea of different levels of interaction. In this case, the sound will play either way as long as the user blows, and the project becomes more interactive as the user learns how to control the sound. I also included the idea that the agents, in a perfect interactive environment, would change their responses to better complete the task of the interaction. In this instance, the user's actions and movements will change as they hear what sound they are producing, which will (hopefully) lead to a better sense of how to use the instrument to make the desired melodies. As mentioned earlier, this is a version of an instrument that already exists, but our modifications change (for the better, in my opinion) the way it is used, creating a more dynamic instrument. It should be noted that this project/musical instrument is not for those looking to learn about music or find a loophole in modern music methodology. Our target audience, rather, is people who want a different or new experience with music. Our project invites anyone to share a musical experience different from one the user may have had in the past. It should be accessible to anyone and easy to learn because of its intuitive design. If this project is successful, it may well invite others to replace our methods of changing the amplitude, frequency, or even modulation with other interesting sensors and methods.