Week 4 – AR Content II

This week, I had a lot of trouble getting Xcode to work with my RC project and displaying the model in AR on my phone. Building off my pig model from last week, I imported the pig into Substance Painter, where I found the mesh a bit confusing to work with, but eventually got the hang of it. I added a pink calfskin base color, then added some leaves (because fall is upon us) and darker splatter paint along the bottom to resemble mud.

Next, I put the model into RC and created 3 notification triggers to be accessed in Xcode. This is where most of my troubles were. Initially, the pig model wouldn’t show up on my phone, even though the triggers could be seen. In particular, the third trigger, which plays a sound, didn’t work, and I wasn’t sure how to get the sound to play on my phone. I will need to go back and fix this problem, but for now, I was able to get the Flip and Spin triggers to work.

Here is a link to the screen recording! Xcode was kind of confusing to me, but more on the side of all its unfamiliar functionality and just getting things to work.

Week 3 – 3D Modeling in AR

For this week’s assignment, we were given the task of creating 3 custom 3D models and putting them into Reality Composer. Starting off, I had a really difficult time finding my way around Blender and creating models. There was just a lot going on everywhere, and there were so many hotkeys to remember, even after watching the introduction tutorial (multiple times).

I decided to build off my idea from last week – starting off by creating a sheep! To get some inspiration, I literally just searched “beginners tutorial blender sheep” and found this one tutorial to follow. I’m giving credit to Grant here because I followed his model pretty closely. However, after modeling the sheep and adding color via the Shading tab, I ran into a problem that only affected my sheep model. As I was exporting the model (to GLB and FBX formats) and putting it into Reality Converter, some parts of my model did not get exported! You can see this in my final AR experience – the main head part and the eyes weren’t able to transfer over, so the sheep just looks a bit scary. The first screenshot shows what my sheep is supposed to look like, but when I put it into Reality Converter to be converted into USDZ format, it lost some key components. I even tried deleting the original eyes and head and replacing them with fresh duplicates, but for some reason the fix just wouldn’t hold. I’m not sure why this happened to my sheep, because everything was fine for my other farm animals: I was able to export each model as a GLB file and convert it via Reality Converter to USDZ so that it could be read in Reality Composer.

Now, onto my next two models. Recently, my brother has been raving about his Minecraft worlds, so I was inspired to create Minecraft-like models that I could brag to him about. For my second model (the pig), I ran into another inconvenience! I hadn’t centered the pig’s body nicely, parallel to the anchor, so I had to do extra alignment work to get the head and legs to line up properly. You can see in the second screenshot below how the pig is slightly sideways. The other screenshots show the model and its coloring.

Finally, I made my chicken. However, the coloring from Blender turned out different once converted to USDZ, so I had to go back and redo the color shades multiple times before I was satisfied.

Now it was time to put my models into Reality Composer. I wanted to create a farm animal experience where, whenever you tap on an animal, its associated sound plays. I chose a horizontal anchor. Below is a screenshot of what the experience looks like in RC.

 Here is the link to the screen recording.

In the future, I would have loved to develop the models a bit further, but I didn’t feel I had gotten the hang of Blender quite enough yet. I would have loved to give my pig a swirly tail, and I even looked online for ways to do it, but there wasn’t much information, and what did exist was for older Blender versions. It also would have been really cool to add a flappy-wing animation for my chicken. Even though the process was a bit overwhelming at first, I’d say this was a good start, and I’m looking forward to really getting to know the ins and outs of Blender for future models.

Week 2 – World Sensing with AR

This week, we were given the task of creating 2 end-to-end AR experiences that improve/elevate some of our daily experiences. For my first experience, I wanted to improve upon something I have struggled with my whole life: falling asleep. Traveling around and adjusting to different time zones has made the process of falling asleep simply unbearable. So, I wanted to create an AR experience that could make that process a bit more pleasant. Plus, everyone goes on their phone before sleeping!

Initially, I decided to use a vertical anchor to mimic the position the user would be in. I imagined the user lying in bed, holding their phone above their face, with the phone facing the ceiling. However, I quickly found out that RC has a difficult time scanning the ceiling because the surface simply has no texture, no matter how long I kept moving the phone around, even after I added some LED galaxy stars I have. So, I had to demonstrate the experience standing up, facing a wall with more texture.

Now onto the fun stuff – the item components! I focused mainly on components that I believed would help one feel more relaxed. I added some text (“zzz”) and stars surrounding the main introductory text (“it’s sleepy time…”). The “zzz” text floats away if you tap on it, mimicking the feeling of slowly drifting into a sleepy mood. When you tap on the stars, they all pulse a few times, as if you were outside watching the stars glisten. On scene start, a chill lofi beat begins playing, immediately easing you into a relaxed state. On the right, I included 3D sheep models that “float” across the screen when you tap on them, like counting sheep jumping across a meadow (a.k.a. one of the most typical things people say to do if you can’t fall asleep). Thank you polybuis for the model.

Here is the link to my screen recording.

For my second AR experience, I wanted to enhance a visit to a friend’s new apartment – specifically focusing on elevating the art experience. My inspiration for this was drawn from housewarming gatherings. Having just arrived in NYC a few weeks ago and moved into a new place, I find that when people come over, they often ask me about the tapestries and artwork on my walls, and vice versa.

I decided to use an image anchor and imported a photo of a chakra tapestry I have in my room. Yoga and spirituality are a big part of my life, and I really enjoy talking about the different chakras. At first, things were a bit disorienting because the image anchor was facing a different way, so when I went into AR mode on my phone, the chakra text faced the wrong way. So, I just rotated the original image to fit RC’s orientation.

My view on computer

My view on phone

After solving this issue, I was able to add arrows and more text explaining what the chakras mean. When the phone recognized the image, the names of the chakras immediately showed, but in order to learn what they meant, you could tap on a chakra and its meaning would appear. I kept the arrows and extra text hidden at scene start, so when the user tapped on a chakra, they would suddenly “appear”.

 

Here is the link to a screen recording.

Moving forward, I would love to make the first sleepy-time AR experience work with the ceiling so that the user could really be immersed in the experience while lying in bed. But I am not sure RC would be able to recognize the surface if it has no texture. I would also have loved to know how to make the sheep move along a circular trajectory (like a half-moon arc) rather than just straight across the screen. That way, it would better mimic the motion of a sheep jumping and frolicking across a field. For the second, chakra AR experience, I would have loved to include some voiceovers. So rather than tapping on a chakra and having more text appear, a voice memo of a very zen person speaking about the chakra would play from the phone. Kind of like those interactive art galleries! It was a lot of fun creating these experiences. I would love to learn more about creating my own 3D models and what RC has the capacity to do.

AR Starter Project

For my first AR project, I wanted to create something that incorporates one of my greatest loves: noodles. (It’s a bit embarrassing to admit, but I eat noodles at least once every few days…) Being in Shanghai for the last few years and growing up in an Asian household, noodles have become a staple for me.

Initially, I wanted to create an experience where you could walk into a bowl of ramen and sit down in it, but I wasn’t sure how to make a bowl that big or pull that off, so I had to scale back my idea. Instead, I created large bowls of ramen that embody the essence of eating ramen in a restaurant. Covid has made me nostalgic about dining out at some of my favorite ramen spots, and I wanted to create an experience that could bring me back to that ambience and environment.

I got the ramen figures from Reality Composer’s food section and chose different styles. I also included a little banner in front of the bowls to explain what to do once you’re in the AR world. When you tap on a bowl of ramen, an associated sound or ambience plays along with a movement behavior, indicating that the user has indeed tapped on the bowl. The first image shows how I merged the behaviors together so that both would go off at the same time, rather than the sound first and then the movement. The second image shows a bird’s-eye view of the ramen. The third image shows what the user sees at first.

Here is the link to a screen recording on my phone in AR: https://youtu.be/mpJ4nd2ef5k

In the future, I would like to pursue my original idea of creating a giant bowl of ramen that the user could “walk into,” because that would just be such a dream for ramen lovers! A lot of the process of creating this experience was really about exploring what RC is and how to use it. I was especially confused by the anchor aspect and how to maneuver around it, so that ended up taking a lot of time. Once I gain more knowledge about RC, I would like to continue with this project!

Tip Tap Slap – Julie (Marcela)

Final Project Documentation

Instructor: Marcela

Partner: Justin Wu

Project Title: Tip Tap Slap

Conception and Design

While the functionality of our final project strayed quite far from the original idea stated in our Final Project Essay, the underlying idea was still the same: providing a fun and enjoyable experience through music. Through a lot of self-testing, user testing, and conversations with IMA fellows and professors, we arrived at our final form. Initially, we wanted to model our project after games like Tap Tap Revenge and Guitar Hero. We used Processing to create the visual game of balls dropping, with a start page and an end page that displays the user’s score. The song is triggered to start when the user clicks the screen, and the game ends when the song is over. We wanted the start and end pages so the user could get the full experience of a game. However, when we imported the images, everything slowed down and the music sounded weird. We learned that you should resize the images to the screen size you have set, and that Processing usually lags when background images don’t match the window size and when something is constantly reloaded in draw() rather than loaded once in setup().

We wrote a class for the dropping balls and used Arduino-to-Processing serial communication to connect the button input to Processing. So, when a button connection was made at the same time a dropping ball aligned with the target circle below, the ball would disappear and the score would go up.
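To make that loop concrete, here is a minimal Processing sketch of the idea: one falling ball, one target circle, and one serial byte standing in for a button press. This isn’t our actual game code; the port index, the ‘1’-byte protocol, and all the sizes and speeds are placeholders.

```processing
import processing.serial.*;

Serial arduino;        // serial link to the Arduino that reads the buttons
Ball ball;             // a single falling ball, for simplicity
float targetY = 550;   // center height of the target circle (made up)
float targetR = 40;    // radius of the target circle (made up)
int score = 0;

void setup() {
  size(400, 600);
  // the port index is machine-specific; Serial.list() shows what's there
  arduino = new Serial(this, Serial.list()[0], 9600);
  ball = new Ball(width / 2, random(2, 5));
}

void draw() {
  background(0);
  noFill();
  stroke(255);
  ellipse(width / 2, targetY, targetR * 2, targetR * 2); // target circle
  fill(255);
  ball.update();
  ball.display();

  // assumed protocol: the Arduino sends the byte '1' whenever the button closes
  if (arduino.available() > 0 && arduino.read() == '1') {
    if (abs(ball.y - targetY) < targetR) {
      score += 20;     // pressed while the ball overlapped the target
      ball.reset();    // "disappear" and start a new drop
    } else {
      score -= 20;     // pressed at the wrong moment
    }
  }
  if (ball.y > height) ball.reset(); // fell past the bottom unhit
  text("Score: " + score, 10, 20);
}

class Ball {
  float x, y, speed;
  Ball(float x, float speed) { this.x = x; this.y = 0; this.speed = speed; }
  void update()  { y += speed; }
  void display() { ellipse(x, y, 30, 30); }
  void reset()   { y = 0; speed = random(2, 5); }
}
```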

However, our original scoring system also made it difficult for the user to end with a positive score. We originally had 2 target circles at the bottom: a bigger outer circle and a smaller inner circle. If you made the connection in the ring between the 2 circles, you would only get 10 points, but if you missed, you lost 20. If you got the ball perfectly aligned with the inner circle, you gained 50 points. But it was too hard to ever get the balls perfectly aligned, so most people missed and their scores accumulated into the negatives. We decided to keep only the bigger outer circle as the target and award 20 points whenever the button is pressed while the ball is anywhere inside it. This way, the user wouldn’t be discouraged, which incentivizes the player to continue playing or play again.
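In code, the change boiled down to collapsing the scoring function. Here is a sketch of the before and after, where d is the distance between the ball and the target center (the radii are hypothetical, but the point values are the ones described above):

```processing
// Old two-ring scoring: a perfect hit on the small inner circle earned 50,
// the ring between the circles earned 10, and a miss cost 20.
int scoreOld(float d, float innerR, float outerR) {
  if (d < innerR) return 50;   // perfectly aligned: almost never happened
  if (d < outerR) return 10;
  return -20;
}

// New scoring: a single target circle, +20 anywhere inside it.
int scoreNew(float d, float outerR) {
  return (d < outerR) ? 20 : -20;
}
```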

We planned to differentiate our project with a physical box of buttons and an additional feature that would take a picture of the player at a moment of distress. We also planned to map out the beats of the song ourselves; however, that turned out to be too difficult to figure out within our time constraints. We would have had to create a CSV file, updated every few milliseconds using Processing’s table functions, to record the beat map the player’s presses would be compared against. Instead, we let the balls drop randomly. To keep it challenging for the user, we varied each ball’s drop speed, and we cut the camera capture, since without mapped beats we could no longer tell when a moment of distress would occur. But we figured that just pressing buttons while watching some balls drop on a screen wouldn’t attract and retain the user’s attention, so we came up with the idea of putting the buttons directly on the body. That way, rather than seeing the buttons right in front of them, the user would have to remember where the buttons were on their body and where each color was. Our medium was a soccer jersey.

The next thing in our design process was how to place the buttons on the body. Originally, we wanted to order real, physical buttons and attach them to the body. But after talking to Leon, we realized a hard button might be painful for the user to wear and press. Instead, Leon suggested we create our own buttons out of conductive tape. This way, Arduino could register a press without a physical button: two strips of conductive tape form a flat switch that closes when pressed, flat enough to sit on the body. Our initial prototype was just on paper. We used stranded copper wire over regular solid wire to allow for flexibility; the strands were difficult to solder, but manageable. We mounted the tape on pieces of leather, flexible enough to attach to the jersey fabric. To reduce the number of wires, we used Arduino’s pull-up resistor feature, which essentially lets us remove the power wire and the external resistor. The pull-up flips the circuit’s logic, so the input reads 1 by default and drops to 0 when the button connection is made.
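On the Processing side, the inverted pull-up logic just means a press arrives as a 0 instead of a 1. Here is a rough sketch of how that could be read, assuming (hypothetically; our real protocol isn’t written up here) that the Arduino prints the four raw pin readings as one comma-separated line per loop:

```processing
import processing.serial.*;

Serial arduino;
boolean[] pressed = new boolean[4];  // one flag per conductive-tape button

void setup() {
  size(400, 600);
  arduino = new Serial(this, Serial.list()[0], 9600);
  arduino.bufferUntil('\n');         // run serialEvent() once per full line
}

// assumed protocol: the Arduino prints the four raw digitalRead() values
// each loop, e.g. "1,1,0,1"
void serialEvent(Serial s) {
  String line = s.readStringUntil('\n');
  if (line == null) return;
  String[] vals = split(trim(line), ',');
  if (vals.length != pressed.length) return;
  for (int i = 0; i < pressed.length; i++) {
    // pull-up logic is inverted: 0 (LOW) means the tape contacts touched
    pressed[i] = int(vals[i]) == 0;
  }
}

void draw() {
  background(0);
  for (int i = 0; i < pressed.length; i++) {
    fill(pressed[i] ? color(0, 255, 0) : color(80)); // green while pressed
    ellipse(60 + i * 90, height / 2, 60, 60);
  }
}
```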

Finally, since we put the buttons on the body and were unable to map out the notes themselves, we decided to make the game collaborative rather than competitive. Through user testing, we learned that it was too difficult for one user to manage and press all 4 buttons on a single vest. As a solution, we moved 2 of the buttons to a second vest. Originally, we wanted the game to be more competitive, with one player competing against themselves or 2 players going back to back. But the change actually turned out for the better, because now 2 people can play together and share a collaborative experience.

Fabrication and Production

Process:

First, we started out with keys on the keyboard to make the ball connections. Then we switched to sending Arduino values to Processing, using breadboard push buttons as stand-ins for our larger buttons. Then we moved from those buttons to our conductive-tape buttons, as sketched below.
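One way to picture this progression: the press-handling logic stayed the same the whole time, and only the input source changed. A small sketch, with hypothetical key bindings and a hypothetical one-character-per-press serial protocol:

```processing
import processing.serial.*;

char[] lanes = {'a', 's', 'd', 'f'};   // hypothetical key bindings
Serial arduino;                        // null until an Arduino is attached

void setup() {
  size(400, 600);
  // uncomment once the Arduino stage replaces the keyboard stage:
  // arduino = new Serial(this, Serial.list()[0], 9600);
}

void draw() { background(0); }

void keyPressed() {                    // stage 1: keyboard keys
  for (int i = 0; i < lanes.length; i++)
    if (key == lanes[i]) handlePress(i);
}

void serialEvent(Serial s) {           // stages 2-3: breadboard, then tape
  int lane = s.read() - '0';           // assume the Arduino sends '0'..'3'
  if (lane >= 0 && lane < lanes.length) handlePress(lane);
}

void handlePress(int lane) {
  println("lane " + lane + " pressed"); // real game: check ball vs target here
}
```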

Ball Drop:

Mapping the ball-drop rhythm was a very important aspect of our production process. We started with random dropping to test whether the buttons on Arduino could send their information through. From there, we planned to map out our own beats using Arduino and save them as the reference file that the user’s presses would be compared to. However, after talking to Nick (his robot project used some of the same mapping functions), we learned that we would need Processing’s table functions, which register long lines of data and write them out to a CSV file refreshed every few milliseconds. After trying this method to map out the beats, we decided we wouldn’t have enough time to complete it, and that we should focus on creating the conductive-tape buttons first. After we successfully created our buttons, we consulted Marcela about another possible strategy for mapping a ball-drop pattern. She suggested we try the amplitude function we had learned in class (the louder the sound, the further down the screen the ball would move); however, after a few failed attempts, we decided to focus our attention on making the conductive buttons wearable on the body, and thus went back to our original state of random ball dropping. This process taught us that sometimes, when a time constraint keeps you from pushing one piece of a project further, it’s better to move on than to stay stuck.
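For reference, here is roughly what that table approach looks like with Processing’s built-in Table class. This is a sketch of the idea we abandoned, not code from our project; the file name and column names are made up:

```processing
Table beats = new Table();

void setup() {
  beats.addColumn("millis");   // timestamp of the press, ms since start
  beats.addColumn("button");   // which of the four buttons was hit
}

// call this wherever a button press is registered while the song plays
void recordPress(int button) {
  TableRow row = beats.addRow();
  row.setInt("millis", millis());
  row.setInt("button", button);
}

void keyPressed() {
  // stand-in for a real button press; keys '1'-'4' map to buttons 0-3,
  // and 's' saves the map once recording is done. Later runs would load
  // it back with loadTable("beats.csv", "header") and compare the player.
  if (key == 's') saveTable(beats, "data/beats.csv");
  else recordPress(int(key) - int('1'));
}
```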

Button Arrangement:

After user testing, we implemented a number of changes. First, rather than having one vest with 4 buttons, we split the buttons across 2 vests, with 2 buttons each. Before, it was a bit difficult for the user to keep up with 4 dropping balls at once, and most people were ending up with negative points. This change made the game easier to play and introduced the collaborative aspect: instead of one person playing at a time, two players work together, making the game more engaging and fun. Also, since we were still working with cords, it kept the cord organization less messy. User testing also helped us better understand the optimal button arrangement. We originally had all 4 buttons on the front, but found that most people felt it was unnatural to be pounding their chest. This prompted us to change the locations of the buttons and also contributed to the expansion to 2 vests. This way, users could press the buttons more comfortably and work together, rather than frantically trying to find the button connections on their own.

Gloves: 

User testing also changed our medium for the gloves. In the very beginning, we wanted to create a button with cushioning in the middle to separate the 2 pieces of conductive tape. However, since we decided to put the buttons on a vest with flexible fabric, we used gloves to make the connection instead; we were afraid the flexibility of the leather would let the fabric bend and close the connection on its own. During user testing, our original design was gloves with an X of conductive tape across the palm, but the tape kept falling off, and it was difficult for people to make the connection. The gloves were also a bit too small for everyone’s hands to fit. So, this led us to adjustable Velcro straps that attach to the hand.

Conclusion

Our goal for this project was to create a fun and enjoyable experience for users through the medium of music. Even though our project strayed quite far from our original idea, I still feel that it aligns with my definition of interaction. My definition of interaction requires a continuous relationship between two or more actors, composed of communication through verbal, physical, and/or emotional feedback. And in order for the interaction to have meaning, the relationship between the actors must echo some feeling of personal connection that drives the relationship to be maintained or pushed further. Our project involves more than 2 actors: the computer, the vests, and 2 players. The interaction between these actors forms a feedback loop: the computer displays the game, the user tries to make the connection between the buttons, and if they do, the ball on the screen disappears and the score goes up; if they miss, the score goes down. If the game is challenging or interesting enough, the user will attempt the game again and try to beat their previous score.

Regarding the strength and meaning of this interaction, some people may not feel a connection to our project. They may not like the song, or the fact that they have to wear a vest, or that they have to collaborate with someone. Our project also lacked audio feedback: no sound was given to the user to indicate whether or not they made the connection at the right spot. If we had had more time, I would have liked to include a more drastic visual and audio change to tell the user that they scored or lost a point; many users were unsure whether they got the connection because the change wasn’t noticeable enough. I would also have loved to make the vests wireless over Bluetooth. The cords were pretty messy and limited the user’s movement, since the vests had to stay attached to the Arduino.

From this experience, I’ve learned that teamwork is essential to persisting. Justin and I made a really good team and bounced off each other positively. We had different strengths, which made it easier to share knowledge and learn from each other. For instance, he understood the pull-up resistor and explained how it worked to me, while I understood more of the coding and was able to explain the functions to him. One of the most frustrating setbacks we had was the ball mapping. We cycled through a lot of attempts and felt constantly discouraged, thinking it was too hard; at one point, we wanted to change our whole project idea. But we decided to stick with it and see how far we could take it, and I think the result shows that we were able to do it, because we kept trying and didn’t give up. We are very proud of our project and have grown very fond of it. At first, we were just brainstorming random ideas and thought about this one for fun; I never thought we would actually be able to execute a game like this! Of course, none of it would have been possible without the great IMA fellows and professors who helped us along the way, especially when we felt discouraged. Overall, I’m really happy with how our project turned out, and I am super proud of this work.