Week 8: Encoding Space

Source: Own Image

How to Be Invisible – Anita Luo

1. PROPOSAL

In this week’s assignment, I will construct a navigable visualization of space in Unity by collecting datasets with automated techniques. Through this assignment, I will consider the different ways in which digital space and machine sensibilities manifest in physical space, as suggested by James Bridle in his talk on “The New Aesthetic.” I intend to use techniques such as sound, photography, video, photogrammetry, and possibly vibration to encode a space in a way that allows players to interpret and understand (decode) these signals. The final form of the data will be abstracted into a set of values through compression or conversions such as test patterns. As a result, the player becomes informed not only about the interaction between digital and physical space but also about how machine-readable data is an ongoing, invisible process in our everyday lives, and how we can still retain human autonomy over this process.

2. BRAINSTORMING 

Source: Own Image

3. INSPIRATION & CONCEPTUAL INTEGRATION

Concept #1: Invisible Images and “Invisuality”

Visual culture has shifted to a form imperceptible to the human eye. This “invisible visuality” has many implications, such as reinforcing forms of power that extend inequalities. It prompts us to scrutinize the proliferation of surveillance technologies that “actively intervene in everyday life” (Paglen 27). Trevor Paglen highlights the ethical consequences of pervasive invisible images that are constantly watching us (24). Artificial neural networks cannot create their own classes: they relate the data they are fed to the data they have been trained on. As a result, the training data reveal the “historical, geographical, racial, and socio-economic positions of their trainers,” or in other words, their developers (Paglen 27). These datasets are one source of the algorithmic bias that Bridle notes in his talk. As Paglen observes in this article, digital and other kinds of technological images have become less about representation and more about activation and operation. As noted on the New Aesthetic blog, “[p]ictures (like everything else) are relationships, not objects.” When we consider what they deploy, what they influence, and what they reimagine, images become a powerful tool. Paglen argues that the invention of machine-readable images can significantly threaten human autonomy, which introduces the media theory term “invisuality”: the algorithmic analysis of the vast banks of images collated by online platforms. In class, we learned that our reality is increasingly structured to fit machine-readable models for performance and compatibility.

image
Source: https://new-aesthetic.tumblr.com/

Concept #2: How Not to Be Seen: A Fucking Didactic Educational .MOV File

I intend not only to explore machines’ invisible algorithmic analysis of the space around them, but also to consider how humans can regain a sense of agency and privacy, by analyzing Hito Steyerl’s video about the politics of visibility and how people can stay invisible amid the increasing surveillance of the digital age. Steyerl’s tutorial shows people how to combat the “extra layer of vision” of machines, such as the one Bridle mentions being embedded in Nikon cameras.

Lesson I: How to Make Something Invisible for a Camera

  • To hide
  • To remove
  • To go offscreen
  • To disappear; “resolution determines visibility” which calibrates the world into a picture

Lesson II: How to Be Invisible in Plain Sight

  • Pretend you are not there
  • Hide in plain sight
  • To scroll
  • To wipe
  • To erase
  • To shrink
  • To take a picture

Lesson III: How to Become Invisible by Becoming a Picture

  • To camouflage
  • To conceal
  • To cloak
  • To mask
  • To be painted
  • To disguise
  • To mimic
  • To key
  • “to become invisible, one has to become smaller or equal to one pixel”

Lesson IV: How to Be Invisible by Disappearing

  • living in a gated community
  • living in a military zone
  • being in an airport, factory, or museum
  • being fitted with an invisibility cloak
  • being a superhero
  • being a female and over 50
  • surfing the dark web
  • being a dead pixel
  • being a WiFi signal moving through human bodies
  • being a disappeared person as an enemy of the state

“Are people hidden by too many images? Do they go hide amongst other images? Do they become images?” (Steyerl, qtd. in Day and Lury, 2016).

Concept #3: Sound Visualization

Anita Luo, Sample X, 2023, Print

Click here for better viewing: Sound visualization

The idea of Test Pattern as a system that converts different types of data (text, sounds, photos, and movies) into barcode patterns and binary patterns of 0s and 1s reminded me of my Communication Lab assignment on sound visualization. There, I translated sound data into a visual representation that incorporated aspects of the DNA profile, music-editing panels, and the barcode. The piece was inspired by Richard Skelton’s Visual Poetry, which uses repeated letter shapes to create rhythm and an organic subject matter. The work is about the resurgence of concrete poetry and the influence of digital text and the internet.

Source: https://www.wallpaper.com/lifestyle/just-our-type-a-new-book-traces-concrete-poetry-in-the-digital-age
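The kind of data-to-binary conversion described above can be sketched in a few lines. This is my own illustrative example (not the actual Test Pattern system): it maps each character of a text to its 8-bit binary form, producing a 0/1 string that could be rendered as a barcode.

```csharp
using System;
using System.Text;

// Illustrative sketch only: converts text into a 0/1 "barcode" pattern,
// one byte (8 bits) per ASCII character.
public static class BinaryPattern
{
    public static string Encode(string text)
    {
        var sb = new StringBuilder();
        foreach (byte b in Encoding.ASCII.GetBytes(text))
            sb.Append(Convert.ToString(b, 2).PadLeft(8, '0')); // e.g. 'A' -> "01000001"
        return sb.ToString();
    }
}
```

Rendering each 1 as a black stripe and each 0 as a white stripe gives a pattern in the spirit of the barcode imagery used in the sound visualization piece.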

4. PROCESS 

1) Fieldwork 

a) Through brainstorming, I chose the metro station as my space. The migration and movement of people will be the dataset, encoded in the space through the flow and rate at which people move. Video will be the technique with which I collect data to analyze and abstract.

b) The following video shows the congestion of people inside the circle and the low rate at which they walk compared to those entering and exiting.

c) In the following video, the people can be regarded as data points within the metro system that inform the system of their whereabouts once they scan their transport QR code into the machine.

2) Reconstruction

 

a) I placed cubes around a cylinder so that they formed a perfect circle. Pairs of cubes act as the entry/exit points of the metro station.

Source: Own Image
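Placing the cubes in a perfect circle can also be done from a script rather than by hand. Below is a minimal sketch (names, counts, and radius are my own assumptions, not the project’s actual values) that instantiates pairs of cubes evenly around a central point:

```csharp
using UnityEngine;

// Hypothetical helper: places pairs of cubes evenly around a circle to act
// as metro entry/exit points. Attach to the central cylinder.
public class CircleGatePlacer : MonoBehaviour
{
    public GameObject cubePrefab; // assumed prefab for one gate cube
    public int gateCount = 8;     // number of entry/exit points
    public float radius = 10f;    // distance from the central cylinder
    public float pairGap = 0.6f;  // spacing between the two cubes of a gate

    void Start()
    {
        for (int i = 0; i < gateCount; i++)
        {
            // Angle of this gate around the circle, in radians
            float angle = i * Mathf.PI * 2f / gateCount;
            Vector3 center = new Vector3(Mathf.Cos(angle), 0f, Mathf.Sin(angle)) * radius;

            // Offset the pair perpendicular to the radial direction
            Vector3 tangent = new Vector3(-Mathf.Sin(angle), 0f, Mathf.Cos(angle));
            Instantiate(cubePrefab, transform.position + center + tangent * pairGap, Quaternion.identity);
            Instantiate(cubePrefab, transform.position + center - tangent * pairGap, Quaternion.identity);
        }
    }
}
```

Changing `gateCount` or `radius` in the Inspector re-spaces the gates without any manual repositioning.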

b)

Source: Own Image
Source: Own Image
Source: Own Image
Source: Own Image

Source: Own Image
Source: Own Image
Source: Own Image
Source: Own Image
Source: Own Image
Source: Own Image
Source: Own Image
Source: Own Image

https://pixabay.com/sound-effects/search/scanner%20beep/

Source: Own Image
Source: Own Image
Source: Own Image
Source: Own Image
Source: Own Image
Source: Own Image
Source: Own Image
Source: Own Image
Source: Own Image

 

5. AUDIENCE RECEPTION & 6. CONCLUSIONS

 

7. APPENDIX

None.

8. REFERENCES

Bridle, James. “Web Directions Sydney 2011 Final.” Vimeo, 2 Dec. 2011, vimeo.com/32976928. Accessed 02 Nov. 2024.

“The New Aesthetic.” Tumblr, new-aesthetic.tumblr.com/. Accessed 02 Nov. 2024.

Paglen, Trevor. “Invisible Images: Your Pictures Are Looking at You.” Architectural Design, vol. 89, no. 1, Jan. 2019, pp. 22–27, https://doi.org/10.1002/ad.2383.

Steyerl, Hito. “How Not to Be Seen: A Fucking Didactic Educational.” Artforum, 20 Apr. 2015, www.artforum.com/video/hito-steyerl-how-not-to-be-seen-a-fucking-didactic-educational-mov-file-2013-165845/. Accessed 02 Nov. 2024.

 

Week 7: Non-human Perspective

Source: Own Image

Source: Own Image

Pop! – Anita Luo

Playthrough:

1. PROPOSAL

In Olga Ravn’s The Employees, the characters are either humans or humanoids. They blur the line between what it means to be alive or dead through their sentimental and descriptive statements. Through this week’s assignment, I intend to build a Unity scene that incorporates the theory of embodied cognition and the careful consideration of a non-human perspective. In this process, I hope to develop a deeper consciousness of how forms/bodies sense and react to their environment and how their sensorimotor abilities are embedded in a larger context. If animals differ in how they sense their environments, what sensing differences do objects have? On the technical side, I would like to challenge myself with a more extensive sound design phase to explore what unique sounds I can produce. I would also like to include animations, post-processing, and triggers.

2. BRAINSTORMING 

I wanted to construct a scene that exhibits not only the perspective of a non-living object but also the system in which it is found, in order to draw meaningful concepts. A system in Shanghai that intrigues me is designer toy culture, most notably found in the Chinese toy company POP MART. I sketched the ways in which I would use audio, visual, and spatial aspects to illustrate my idea.

Visual and Space:

Source: Own Image

Audio (keeping in mind the 3 layers of audio: the soundscape, auditory landmarks, and auditory signals):

Source: Own Image

Reference:

3. INSPIRATION & CONCEPTUAL INTEGRATION

Concept #1: “The Employees” by Olga Ravn

As Ravn’s characters attempt to describe the nature of the objects in the world of the novel, the characteristics of and relationships between the objects are revealed. At times the objects are found to be humming; at others it is “[a]s if they don’t actually exist on their own, but only in the idea of each other.” The notion that objects “reiterate” each other will be established through the repetition of forms within the virtual space and the unity of their movements.

cover image of the book The Employees
Source: https://www.ndbooks.com/book/the-employees/

Concept #2: POP MART

The best example of objects that lack a sense of individuality and are seen as a collective is the toys in POP MART. From the perspective of a single toy, its intrinsic value seems lacking amid the proliferation of overconsumption, where the presence of the collective is far more significant than any one toy; each can easily be replaced by another. The toys may feel inferior or replaceable. Furthermore, the perspective of a small object is worth exploring and replicating; lowering the pitch of the sounds of larger entities would be an effective way to construct it.

Through trigger events, I would like the player to move between different spotlights in the “diorama” to imitate manufacturing and a lack of grounding. When the player reaches one spot, a new spot opens, and when the player leaves their old spot they will realize that they have already been replaced by another toy.

4. PROCESS 

1) Research

I visited, observed, and researched more about the toys in POP MART by documenting photos, listening to the sounds, and noting their interaction and relationship with their environment. (I also bought some.)

Source: Own Image
Source: Own Image

2) Audio editing

a) I decided to use raw audio from the previous week’s assignments because I wanted to focus on the audio design phase this week. Using Adobe Audition, I experimented with different effects.

b) I lowered the pitch of the public background noise to suggest the presence of giant entities.

Source: Own Image

c) I used more “exaggerated” FFT filters such as the “C Major Triad” and “Only the Subwoofer” to create alien audio. I also attempted to adjust the graph manually to get a better grasp of the mechanism of this effect tool.

Source: Own Image

3) Reconstruction

a) Following the tutorial below, I made a cube with a glass material. This will be the walls of the “diorama.”

Source: Own Image

b) I searched for free assets and found this website, where I downloaded the material for the walls of the diorama. The style of the graphics was very similar to the ones I saw in POP MART. I used the image with a simple gradation as the material for the ground. I also took a photograph of the wooden door in my dorm to use as the material for the table in the Unity scene.

Source: Own Image

c) I constructed the different leveled platforms using cubes.

Source: Own Image

d) I adjusted the material’s metallic and smoothness settings.

Source: Own Image

e) I used the background image with the most detail as the main background.

Source: Own Image

f) I added a red cube at the top corner of the background to replicate the logo in POP MART’s “dioramas.”

g) I then added the text “POP!” through TextMesh Pro.

Source: Own Image

h) I added spotlights in 9 positions where the toys will be placed. 

Source: Own Image

i) I added a player capsule to the scene for playtesting and later acting as the non-human object.

Source: Own Image

j) I experimented with the skybox and environmental lighting settings to change the undertone of the scene and to make the scene look more realistic.

Source: Own Image

k) I found a Unity character asset that looked like a POP MART toy as the toy in my reconstruction. I had to manually edit the surface inputs and shader of these assets. 

Source: Own Image
Source: Own Image

l) I placed these characters in the scene in each spotlight making sure they are evenly distributed.

Source: Own Image
Source: Own Image
Source: Own Image

m) I took a brief look at the built-in animations from this Unity asset in the “Animator” panel. While playtesting, I found that the random movements and synchronization of the toys aligned well with the concept of the project, so I did not make additional changes after placing the toys in the scene. However, I would like to gain a better understanding of why the toys are animated the way they are, what triggers them to move in the same or different ways, etc.

Source: Own Image

n) I initially wanted to place the camera over the shoulder of the player to show that they are a toy. However, when playtesting triggers later, I found this camera very disorienting, so I placed the camera where the head would be later in the process.

Source: Own Image
Source: Own Image

o) After learning different functions of Post-Processing, I created a global volume to which I added bloom and chromatic aberration. I found that these effects enhanced the sense of desire within the consumerist context of designer toys.

Source: Own Image

p) A toy’s depth of field is probably different from a human’s, so I also adjusted the depth of field in the global volume.

Source: Own Image

q) To reduce the depth of vision further, I added fog so that the toy cannot see the horizon.

Source: Own Image

4) Sound

a) Taking the professor’s suggestion, instead of using Cinemachine to create custom-shaped zones of sound, I adjusted the game object’s 3D sound settings so that the volume snaps on and off rather than changing gradually.

Source: Own Image
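The snapping behavior described above can be approximated with a custom rolloff curve that holds full volume out to the zone edge and then drops to silence almost immediately. This is my own reconstruction of the setting, not the project’s exact configuration; the radius is an assumed value.

```csharp
using UnityEngine;

// Sketch of a "snapping" volume zone: a custom rolloff curve that keeps
// the sound at full volume until the zone edge, then cuts it off sharply.
[RequireComponent(typeof(AudioSource))]
public class SnapVolumeZone : MonoBehaviour
{
    public float zoneRadius = 5f; // assumed radius of the audible zone

    void Awake()
    {
        AudioSource src = GetComponent<AudioSource>();
        src.spatialBlend = 1f;                       // fully 3D
        src.rolloffMode = AudioRolloffMode.Custom;   // use our own curve
        src.maxDistance = zoneRadius;

        // Curve positions are normalized against maxDistance:
        // full volume until ~95% of the radius, then a hard drop to zero.
        AnimationCurve snap = new AnimationCurve(
            new Keyframe(0f, 1f),
            new Keyframe(0.95f, 1f),
            new Keyframe(1f, 0f));
        src.SetCustomCurve(AudioSourceCurveType.CustomRolloff, snap);
    }
}
```

The same shape can be drawn by hand in the Inspector’s 3D Sound Settings graph; doing it in code just makes the intent explicit.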

b) Instead of using Cinemachine to create a separate outdoor sound zone, I added a game object and toggle its active state with trigger scripts. Therefore, when the player enters the glass case, the background sound is deactivated.

Source: Own Image
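A minimal sketch of such a trigger script, assuming the glass case has a trigger collider, the player object is tagged “Player,” and the outdoor ambience lives on a separate game object (all names here are my own, not the project’s actual ones):

```csharp
using UnityEngine;

// Attached to the glass case's trigger collider. Deactivates the outdoor
// ambience object while the player is inside, and restores it on exit.
public class OutdoorSoundToggle : MonoBehaviour
{
    public GameObject outdoorSound; // game object holding the background audio source

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
            outdoorSound.SetActive(false); // mute the outdoors inside the case
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player"))
            outdoorSound.SetActive(true); // restore the ambience outside
    }
}
```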
 

5) Trigger

a) At first, I wanted the two spots the player moves between to be in the middle of the top row and at the left of the bottom row. However, while playtesting I realized that this route is hard for the player to see, so they do not know where to go next. Thus, I changed the trigger events to occur only in the bottom row.

Source: Own Image

b) Through multiple rounds of playtesting, I tried to find the right size for the box collider that triggers the player’s moves. The trigger events I wanted to add are somewhat complex, so an appropriately sized trigger was necessary.

Source: Own Image

c) Finally, I made the final changes to the trigger events. When a player lands on a spotlight, the spotlight disappears, reducing the player’s level of value/importance. Simultaneously, an earcon plays to inform the player that they have landed on a desirable spot that triggers an event. When the player moves away from their initial position, they find that a new toy has taken their place. Thus, they are prompted to find a new spot. This pattern repeats.

Source: Own Image
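The spotlight/earcon/replacement sequence above can be sketched as a single trigger script. This is my own reconstruction, not the project’s actual script; the field names and the “Player” tag are assumptions.

```csharp
using UnityEngine;

// Sketch of the spotlight trigger: landing on a spot dims its spotlight
// and plays an earcon; leaving the spot reveals the replacement toy.
public class SpotlightTrigger : MonoBehaviour
{
    public Light spotlight;           // spotlight over this spot
    public AudioSource earcon;        // short sound confirming a desirable spot
    public GameObject replacementToy; // toy revealed at the vacated spot (starts inactive)

    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Player")) return;
        spotlight.enabled = false;    // the player's "value" dims
        earcon.Play();                // signal that an event was triggered
    }

    void OnTriggerExit(Collider other)
    {
        if (!other.CompareTag("Player")) return;
        if (replacementToy != null)
            replacementToy.SetActive(true); // the player has already been replaced
    }
}
```

One instance of this script per spotlight, each pointing at the next spot’s replacement toy, produces the repeating pattern described above.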

5. AUDIENCE RECEPTION & 6. CONCLUSIONS

Through this project, I experienced going to a site and collecting information to construct a scene that embodies a perspective other than my own. As a result, I was prompted to consider the cultural and social context of designer toys in POP MART, which exemplifies overconsumption and materialism. The player can easily be replaced by another object similar to themselves, which raises questions about the player’s value and sense of self. Thus, the lack of spiritual residue in the toys presents a diorama of meaningless “stuff.” During the classroom check-in, my classmates complimented the appealing visuals of the scene, which enhanced the feeling of superficiality within the space. The earcon also effectively informed the player that moving into the spotlight was encouraged. However, my classmates were still confused about what non-human entity they embodied. A possible solution is to adjust the camera root position: either placing it inside the hoodie so that the camera is framed by the toy’s garment, or trying a higher angle behind the player. The latter, however, sacrifices first-person perspective and immersion. Furthermore, I believe adding higher-pitched footsteps could further suggest that the player is as lightweight as a toy. Additionally, more discernible dialogue from the larger entities (humans), mentioning the shop, the designer toys, or purchasing them, would give the player more context.

Critiques of consumerism often compare commercial products to commodities that lack intrinsic value. I wanted to shift my perspective and consider how the commercial objects themselves feel in this spectacle.

7. APPENDIX

Source: Own Image
Source: Own Image
Source: Own Image

8. REFERENCES

Ravn, Olga. The Employees: A Workplace Novel of the 22nd Century. 2018.

Week 6: Secret Sound Creatures

Source: Own Image

Playthrough:

Echosystem – Anita Luo

Download Unity Package here!

1. PROPOSAL

In this week’s assignment, I am prompted to use microphones and audio filtering/manipulation skills to discover the sounds in an object/site. These sounds will become creatures living in a speculative environment reconstructed in Unity. In the process of making the scene, I will reflect deeply, through attentive listening, on how the “species” of a soundscape create their “acoustic niches.” Furthermore, I will note down, as Krause puts it, the density (the total number of vocal organisms) and diversity (the total number of different vocal species) of a soundscape ecosystem and its different ecological levels (biophony, geophony, and anthrophony) (28). By immersing myself in a certain object/site, I hope to think about and value sounds more constructively, and to be more intentional about how sounds are placed and support each other in a space.

2. BRAINSTORMING

I intend to create a conceptual space of a water bottle as my chosen object/site. I believe that Unity opens up many more opportunities for the expression and documentation of space. Thus, to test how far I can use a virtual space to illustrate a soundscape creatively, I planned how I would convey the sounds inside a bottle and which sounds from real-world sites I could translate into this conceptual space.

Source: Own Image

3. INSPIRATION & CONCEPTUAL INTEGRATION

Concept #1: Microscopic View of a Water Bottle

I would like to expand on my previous work, Molecular Voyage (it is still hanging outside the 3rd Floor IMA studio if you would like to check it out!), which I have also written a documentation blog on. By comparing a water bottle to a metro train, I want to communicate that the continuation of our materialistic life has an uncertain destination. In my opinion, the manufacturing and selling of bottled water is the epitome of consumerism and waste culture. How can I convey this concept through a virtual space?

Anita Luo, Molecular Voyage, 2023, Photograph

I intend not to stop time but to track it; not to fragment space but to enlarge it; and not to tell an unmediated truth but to engage in a dialectic with others (Ritchin xx, 125).

Below is the contact sheet I made during the making of my diptych.

Source: Own Image

Keywords: Consumerism, Mitosis, Watercraft, Birth of Inventions, Uncertainty, Voyage, packaging: Consumed in Consumerism, commute, materialistic lifestyle

Concept #2: Coke Cans 

More info soon.

4. PROCESS 

1) Fieldwork

a) Using a contact mic and a Tascam microphone, listening through the microphone with filtering applied, I experimented with different mic placements and ways of interacting with the water bottle. First, I tried to capture the sound of water trickling along the sides of the bottle by pouring water from one bottle into the other.

Source: Own Image

b) I rubbed two bottle caps together to create a zipper-like sound, which made me envision an anxious, hyperactive creature, or a threatening one.

Source: Own Image

c) I also blew through a straw to create bubble sounds, from which I imagined a creature that moves in a repetitive, “bubbly” manner. Other sounds I induced included tapping on the plastic bottle, blowing across the bottle’s opening, and recording ambience from a computer engine (which I edited later).

Source: Own Image

d) Once I had collected my raw audio, I edited it in Audacity. For every sound, I used “Noise Reduction” to remove background noise, first capturing a noise profile and then filtering the noise out.

Source: Own Image
Source: Own Image

e) When I wanted a certain sound to loop, I manually isolated it and added some “silence” at the end, so that when the audio loops there is space between the sounds.

Source: Own Image
Source: Own Image

f) I wanted to create an ambience within the bottle to replicate the vibrating air you hear when you put your ear against the opening of the water bottle. To do so, I used a “Low-Pass Filter” and “Reverb.”

Source: Own Image
Source: Own Image

g) I also experimented with filter curves to boost the treble of certain audio clips.

Source: Own Image

For the bubble sound, I raised the pitch, as it sounded more realistic that way.

Source: Own Image

h) Additionally, I experimented with the “High-Pass Filter” to see whether the audio would sound lighter.

Source: Own Image

2) Reconstruction

a) I first constructed the bottle cap by taking photographs and importing them into Unity.

Source: Own Image
Source: Own Image

b) Using cubes I built the cap in Unity.

Source: Own Image

c) Using water blocks from the Unity Asset Store, I placed water as the floor of my scene. All assets used will be linked at the end of this section.

Source: Own Image

d) I experimented with different shaders of water.

Source: Own Image
Source: Own Image

e) I used potion objects from the Unity Asset Store for the body of the water bottle.

Source: Own Image

f) For the packaging of the water bottle I used quads on three sides of the water bottle.

Source: Own Image

g) I imported photographs I took of the packaging into Unity and applied them to the three quads. The orientation had to be adjusted individually for each photograph.

Source: Own Image
Source: Own Image
Source: Own Image
Source: Own Image
Source: Own Image
Source: Own Image

h) I experimented with lighting settings and the skybox to make the scene more appealing and immersive. The undertone I wanted in the scene was purple.

Source: Own Image

i) Using assets from the Unity Asset Store, I imported creatures like crabs and squid to align with the water theme. If I had experience with Blender, I would have liked to make my own creatures with more alien features.

Source: Own Image

j) I also tested out the animations available in each package through the “Animator” function. The animations matched the mood of the sounds assigned to each creature.

Source: Own Image

k) I added bathroom reverb zones inside the water bottle so that there is a clear distinction between the external and internal space of the object when listening to sound.

Source: Own Image

l) I used the script provided by the professor in Week 2 to make the crab look at the player for the whole duration. The zipper sound I assigned to this creature is somewhat intimidating, so I wanted to enhance that feeling by making the crab glare at the player.

Source: Own Image
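The professor’s script is not reproduced here, but a minimal equivalent look-at behavior might look like the sketch below (my own version; the field name is an assumption):

```csharp
using UnityEngine;

// Minimal look-at behavior (a sketch, not the professor's exact script):
// rotates the crab to face the player every frame, ignoring height
// differences so the crab stays level instead of tilting.
public class LookAtPlayer : MonoBehaviour
{
    public Transform player; // assign the player capsule in the Inspector

    void Update()
    {
        Vector3 target = player.position;
        target.y = transform.position.y; // keep the crab level
        transform.LookAt(target);
    }
}
```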

m) Lastly, I attached audio sources to their respective creatures.

Source: Own Image

I ensured that “Loop” was checked, set the Spatial Blend to 3D, and changed the Volume Rolloff to “Linear Rolloff.”

Source: Own Image
Source: Own Image
Source: Own Image
Source: Own Image
Source: Own Image
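The same Inspector settings (Loop, 3D Spatial Blend, Linear Rolloff) can also be applied from a script, which is handy when many creatures share the configuration. A sketch, assuming one AudioSource per creature; the max distance is an assumed value:

```csharp
using UnityEngine;

// Applies the Inspector settings described above from code:
// looping, fully 3D spatial blend, and linear volume rolloff.
[RequireComponent(typeof(AudioSource))]
public class CreatureAudioSetup : MonoBehaviour
{
    public float maxDistance = 20f; // assumed audible range for each creature

    void Awake()
    {
        AudioSource src = GetComponent<AudioSource>();
        src.loop = true;                           // "Loop" checked
        src.spatialBlend = 1f;                     // fully 3D
        src.rolloffMode = AudioRolloffMode.Linear; // "Linear Rolloff"
        src.maxDistance = maxDistance;
        src.Play();
    }
}
```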

Unity Asset Store Resources

Water: https://assetstore.unity.com/packages/2d/textures-materials/water/simple-water-shader-urp-191449

Water bottle and skybox: https://assetstore.unity.com/packages/3d/props/potions-115115

Crab: https://assetstore.unity.com/packages/3d/characters/creatures/big-bat-carb-115954

Other creatures: https://assetstore.unity.com/packages/3d/characters/animals/quirky-series-free-animals-pack-178235

5. AUDIENCE RECEPTION & 6. CONCLUSIONS

Using a water bottle as my chosen object and resonant body, I discovered the many ways in which our environment and even its mundane objects within it can carry a diversity of sounds and transduced sounds. I hope that through this assignment the audience is able to appreciate and be more conscious of the little things in life and how they can bring about unique qualities. In the future, I would like to extend my scope by paying more attention to soundscape ecology in terms of identifying and investigating the biophony, geophony, and anthrophony layers of a specific site. My intention to focus on biophonies specifically is also “to reconnect to the potent sonic elixirs found in our remaining wild soundscapes…to think of and value natural sound in constructive ways” (Krause 38). Furthermore, I would like to use Adobe Audition, with its more versatile features, to experiment with adjusting sounds to make alien audio sources discernible from each other. During the class check-in, my partner could only separate a few audio sources from each other due to the similarity between all the audio. I hope to tackle this problem in the next activity where I will edit sounds of distinct timbres that also fit into their own acoustic niche.

7. APPENDIX

None.

8. REFERENCES

Krause, Bernie. Wild Soundscapes Ch. 2, 14 Feb. 2020, https://doi.org/10.12987/9780300221114.

Ritchin, Fred. In Our Own Image: The Coming Revolution in Photography. Aperture.

Week 5: Audio Wayfinding

Mgla – Anita Luo and Beatrice Zhang

Download final Unity Package here!

Download old version here

Playthrough:

Map:

Source: Own Image

1. PROPOSAL

Through the sensory experience of Proteus, we are instructed to construct a virtual course that must be navigated using sound alone, combining icons (earcons), beacons, and soundscapes to allow a player to locate waypoints. Through this activity, we explore how sounds, as carriers of information, help an individual navigate a physical or virtual space.

The scenario to be constructed in Unity:

You’re a delivery driver sent to the planet Mgla carrying a mysterious package. The planet is covered in a layer of fog so dense that you can barely read the address on the box.

The inept drop-ship operator set you 5 ticks (~1km) away from your destination, so you’ll have to walk the rest of the way. To make matters worse, your locator earpiece only has a range of 1 tick (200m), so you need to navigate to landmarks along the way instead of going directly to the goal.

Visibility is nonexistent, but there is a rich soundscape and plenty of distinct sounds that you can use to figure out where you are. Undeterred by the fog, and with your locator earpiece and the ambient sounds to guide you, you embark to deliver the package.

In this activity, we are prompted to set at least 4 waypoints between the start and end of the mission on the planet Mgla. To guide the player to these waypoints, we will implement the 3 layers of the auditory scene: the soundscape (as seen in Week 4’s activity), auditory landmarks (guiding beacons), and auditory signals (icons and earcons). Inspired by Proteus, we also want to see how groups of sounds in one area can change the mood of the space, letting the player make sense of where they are and which waypoint they have reached.

2. BRAINSTORMING

Proteus had area-specific sounds that made sense of the space for the player. Unfortunately, we cannot use visual representations to help the player connect the sounds to the environment. However, we believe that with timing/delay, loudness, and timbre, the player can collect enough audio information to understand where they are within a virtual space.

Source: Own Image

We aim to combine the scenario and the space so that there is a narrative for the player to follow. This includes the delivery driver calling for assistance and the person on the other end of the call explaining the mysterious but thrilling wonders of the planet Mgla. When the person on the call hangs up, the delivery driver is left to their own devices to figure out where they are. Mgla is characterized by abrupt topography (land features situated close together), which leaves the journey open to surprises and unique easter eggs around every corner! At the end of the journey, the player realizes that although the distance traveled was 5 ticks, they have made a big U-turn, leaving their displacement only a tick away from their starting point!

Through this sketch, we also noted the list of sounds we plan to use:

  • car engine
  • phone ring
  • phone hang up
  • footstep sound
  • sprinting sound
  • wind
  • water droplets
  • crickets
  • birds
  • water ripples
  • fire crackling
  • forest wind
  • cars
  • doorbell
  • speaking?
  • earcons such as error sound

3. INSPIRATION & CONCEPTUAL INTEGRATION

Concept #1: The “auditory scene”

Source: https://www.mijksenaar.com/mobility/creating-a-wayfinding-soundscape-in-our-lab/

According to this theory, a well-organized auditory scene consists of 3 layers: the soundscape, which audibly brings the overall spatial structure of the station to life; the auditory landmarks, which serve as guiding beacons and enrich the soundscape; and the auditory signals, which support navigation and provide information. We applied these three layers when constructing our auditory scenes: we intend to assign all three layers to each waypoint.

Concept #2: Stardew Valley

Stardew Valley is a virtual farming game featuring diverse living settings and scenes from different parts of its world. The audio varies by waypoint, guiding the player through the tasks assigned at various sites. The footsteps and every tiny triggered sound effect feel real thanks to well-tuned reverb, inspiring us to apply appropriate sound-editing techniques of our own.

The Best Tips And Tricks For Playing The Stardew Valley 1.6 Update
Source: https://www.thegamer.com/stardew-valley-1-6-update-tips-tricks/

We are also inspired by how the game Proteus constructs area sounds characteristic of specific scenes. In an instructional YouTube video about creating an ambient zone, the author gives the forest a presence as an entity by placing the audio around its rim. Our forest scene is inspired by this, and we want to further develop the scene by adding reverb effects and multiple layers of sound.

4. PROCESS 

1) Audio Preparation

Because this activity focuses on audio wayfinding and playtesting, we used copyright-free sound effects from https://pixabay.com/ to save time on recording. Having planned effectively beforehand, we had a premade list of sounds and found the audio we needed efficiently.

Source: Own Image

Folder with all downloaded audio!

2) Set Up

a) We first set up a low-visibility scene using this template scene, which consists of a white skybox with white fog.

Source: Own Image

b) We then added footstep and, this time, sprinting sounds via separate footstep and sprinting audio sources, following the same tutorial as in the last assignment. The sprinting audio was created by increasing the tempo of the footstep audio in Audacity; when the Shift key is pressed, the sprinting sound plays. Due to time constraints, we decided to test different surface types and the use of velocity for footstep sounds (feedback from the professor) in the next assignment. Because visibility is minimal, we believe there is no strong need to tackle surfaces in this specific assignment. Jumping is also not a major feature of the wayfinding activity, so we did not address the script's jumping limitations either.

Lookback: During playtesting in class, I realized that my classmate used the arrow keys instead of WASD, so the player moved but without the footstep sounds. I had not considered this possibility. For future projects, I think it is important to have different people playtest to catch challenges like this. As a solution, I would trigger the footsteps based on the player's velocity, as the professor suggested, which would avoid similar situations regardless of the input keys.
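The velocity-based approach can be sketched outside Unity; in the engine it would be a C# script reading the character controller's velocity, but the decision logic is the same (the thresholds and clip names here are illustrative):

```python
def should_play_footsteps(velocity, walk_speed, min_speed=0.1):
    """Pick a footstep clip from the player's velocity rather than from
    key presses, so movement via arrow keys (or anything else) still
    produces sound. Returns None when the player is standing still."""
    vx, vy, vz = velocity  # vy (vertical motion) is ignored here
    horizontal_speed = (vx * vx + vz * vz) ** 0.5
    if horizontal_speed < min_speed:
        return None
    # Moving faster than the walk speed implies sprinting
    return "sprint_steps" if horizontal_speed > walk_speed else "walk_steps"
```

With this check, the arrow-key playtest described above would still have produced footsteps, since only the measured velocity matters, not which keys caused it.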

3) Virtual Space Design

a) We first used 3D primitive shapes to create the six main landscapes: the car, cave, lake, campfire, forest, and house. We followed our sketch from the brainstorming phase, which helped us place the landscapes at appropriate distances.

Cave:

Source: Own Image

Lake:

Source: Own Image

Trees:

Source: Own Image

Bird's-eye view:

Source: Own Image

Challenge: Although this project mainly focused on sound for navigation, repeated playtesting showed that too little object obstruction increased the chances of the player getting lost. We therefore added more trees around the path as "obstacles." To ensure that the player does not skip a waypoint, we also added box colliders to prevent the player from crossing the lake directly.

Source: Own Image

4) Audio Design

a) Car

Source: Own Image

We used cubes to create the basic shape of a car. We then added the car engine audio as a game object with a 3D spatial blend so that the player can hear themselves moving away from the car as they enter the cave. The outside area also has cricket sounds to communicate that it is nighttime and the delivery driver is in an open area. The cave's wind and water-droplet audio acts as a guiding beacon drawing the player toward the new area.
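For context on how a 3D spatial blend communicates distance: under a linear rolloff curve, one of the rolloff modes a 3D audio source can use, volume fades from full to silent between a minimum and maximum distance. A sketch of that mapping (the distances are illustrative, not our project's settings):

```python
def linear_rolloff(distance, min_distance=1.0, max_distance=30.0):
    """Volume factor for a 3D audio source under linear rolloff:
    full volume inside min_distance, silent beyond max_distance,
    fading linearly in between."""
    if distance <= min_distance:
        return 1.0
    if distance >= max_distance:
        return 0.0
    return 1.0 - (distance - min_distance) / (max_distance - min_distance)
```

As the player walks from the car toward the cave, the engine volume falls off along a curve like this while the cave audio grows, which is what signals the transition between areas.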

Challenge: Through playtesting in class, we found that the player was unsure whether to approach the cave or the car because neither sound was clearly dominant. To tell the player that they should follow the guiding beacons and that they have a mission as a delivery driver, we drew on suggestions from class. We initially wanted a separate opening scene that would act as instructions and communicate the player's scenario; however, we could not execute this well in time. Our second idea was a voiceover phone call in which the player's character asks for help navigating the planet Mgla, but time constraints ruled this out as well. With feedback from both the professor and classmates, we used ElevenLabs' text-to-speech function to set the scene. Unfortunately, we ran into a problem with access, so we assembled the audio we had managed to generate into a phone call in Audacity. We also incorporated phone-call and phone-beeping sound effects from Pixabay to complete this step.

Source: Own Image
Source: Own Image
Source: Own Image

b) Cave (1st Waypoint)

Source: Own Image

The cave has a rich soundscape of wind, water droplets, cave ambience, and reverb zones. As the player moves toward the other end of the cave, the sound of water ripples from the lake becomes evident and draws the player onward. This signifies a new setting and a mood of tranquility that juxtaposes the ominous atmosphere of the cave. When the player moves out of the cave, a success earcon is triggered to signal that they are on the right path and have passed the first waypoint.

Source: Own Image
Source: Own Image

Challenge: Unfortunately, we ran into a problem when adding the trigger scripts provided in class to the project. We tried to fix it by going through the script; however, we did not understand what the error message meant.

Source: Own Image

When we inspected the script itself, we found no evident errors.

Source: Own Image

After meeting with the professor, we found that two copies of the same script in our Assets folder were causing the error. After understanding the issue and how to solve it, we added all the audio triggers we had planned to embed in this space.
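The core behavior of such a waypoint trigger, playing a cue once on first entry and never again, can be sketched as follows. The actual class scripts are Unity C# components using trigger colliders; this Python sketch only illustrates the one-shot logic, with invented names:

```python
class OneShotTrigger:
    """Plays a cue the first time the player enters a box-shaped zone."""

    def __init__(self, box_min, box_max, clip):
        self.box_min, self.box_max = box_min, box_max
        self.clip = clip
        self.fired = False

    def contains(self, pos):
        """Axis-aligned box containment test."""
        return all(lo <= p <= hi
                   for lo, p, hi in zip(self.box_min, pos, self.box_max))

    def update(self, player_pos):
        """Return the clip to play on first entry, otherwise None."""
        if not self.fired and self.contains(player_pos):
            self.fired = True
            return self.clip
        return None
```

The `fired` flag is what prevents the success earcon from replaying if the player wanders back through the same zone.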

c) Lake (2nd Waypoint)

Source: Own Image

As the player passes the first waypoint trigger, an audio game object of lakeside nature sounds, such as frogs, becomes active. This audio comes from the corner of the lake, drawing the player toward the second waypoint as a new auditory landmark.

Once the player moves out of the cave, the reverb zone ends, which changes the timbre of the footsteps and creates a different feeling of space for the player. The soundscape of the lake is ornamented by the subtle ripples of the lake water and the night ambience.

When the player reaches the source of the audio beacon, the sound stops and the completion earcon plays, signaling that the second waypoint has been reached.

As the player turns the corner of the lake, an owl sound guides them to the next waypoint. Animation is used to simulate a moving owl.

Source: Own Image

d) Campfire (3rd Waypoint)

Source: Own Image

The crackle of the campfire acts as the auditory landmark of the new waypoint, enhanced by a soundscape of crickets and night ambience. A reward audio cue indicates that the player has reached the third waypoint. Simultaneously, a new owl sound can be heard from the direction of the forest.

e) Forest (4th Waypoint)

Source: Own Image
Source: Own Image

Birds act as a guiding beacon to the next area, the forest. We added forest reverb to this zone, creating an enclosed but intimate space in which the player experiences another mood and setting. The distant sound of cars draws the player to the last waypoint: the final destination. During the walk toward the road, we wanted to evoke a sense of melancholy as players experience the tension between nature and urbanization. The sensory experience is heightened by the echoes of the footsteps in the reverb, which also mimic the sound of blood pumping in the ears.

Source: Own Image
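A reverb tail is, at its simplest, the signal mixed with delayed, decayed copies of itself. Unity's reverb zones are far more sophisticated, but the basic echo idea behind the footstep effect can be sketched like this:

```python
def add_echo(samples, delay, decay=0.5):
    """Mix each sample with a decayed copy of the output from `delay`
    samples earlier: a crude feedback-delay sketch of a reverb tail."""
    out = list(samples)
    for i in range(delay, len(out)):
        out[i] += out[i - delay] * decay
    return out
```

Because the delayed copy is taken from the output rather than the input, each echo itself echoes, producing the decaying, pulsing repetition that evokes blood pumping in the ears.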

f) Final Destination (5th Waypoint)

To signal the end of the journey, we used a doorbell sound that is triggered when the player stands in front of the door of the house.

We also added the sound of the character calling out to the owner of the package. The reply comes from an unfamiliar, strange voice, concluding the journey with the peculiarity of the planet Mgla.

5) Triggers

Source: Own Image
Source: Own Image

We would like to explore more of these features in the future!

Source: Own Image

5. AUDIENCE RECEPTION & 6. CONCLUSIONS

During the user-testing session held in class, the player initially encountered an issue where they got stuck in the wall of the cave. This unexpected glitch halted their progress for a moment, but with our guidance and instructions, they were able to navigate out of the wall and continue the gameplay. Once they moved forward, the playthrough was largely smooth, although there were still some technical challenges, particularly with the proper triggering of waypoints. These issues caused minor delays but did not significantly detract from the overall experience.

Feedback from both the audience and the instructor was constructive. They appreciated the design and concept of the game, and overall impressions were positive, though they highlighted the importance of refining the waypoint triggers to ensure a more seamless experience. Additionally, we were given valuable insights on how to improve the game’s functionality. Specifically, after the class, we were informed by the instructor that having two scripts with identical content could lead to serious errors during the compilation process. This could cause conflicts and prevent the game from running as intended.

To address this, we were advised to streamline our script organization by either combining overlapping scripts or ensuring that each script serves a distinct purpose. This will help us avoid any potential conflicts and ensure that the game compiles and runs smoothly. Moving forward, we plan to restructure the scripting to prevent these errors, while continuing to refine the game based on user feedback.

7. APPENDIX

Source: Own Image
Source: Own Image
Source: Own Image
Source: Own Image
Source: Own Image
Source: Own Image
Source: Own Image

8. REFERENCES

None.

Experimental Collaborative Work with a GenAI Tool

Luo_Anita_Experimental Collaborative Work with GenAI (PDF)

Anita Luo

Dr. Jana Fedtke

PoH: AI-Assisted Writing, Agency, and Authenticity

October 16, 2024

Experimental Collaborative Work with GenAI

Project Proposal

    In this project, I aim to use generative artificial intelligence (GenAI) tools to produce promotional material for tourism in South Africa. The graphic design process will involve AI-powered tools built into Canva, such as Magic Media and Magic Design, which are Canva's GenAI models for images, videos, and graphic design. Canva makes designing more accessible to non-designers with its customizable templates and user-friendly interface, and this new set of AI-powered design tools further broadens its accessibility and potential use among the general population. To evaluate its efficacy, I intend to create a poster of South Africa that accurately represents the country and its features, while assessing how much producer oversight (in terms of prompt length or rearrangement of design elements during editing) is required to achieve this result. By actively collaborating with these GenAI tools, this project is an opportunity to analyze the advantages and shortcomings of GenAI in creativity, problem-solving, and decision-making, and to identify ethical implications in terms of intellectual property, bias, and the potential influence of AI-generated content in the future.

The Creative Piece

Luo, Anita. South Africa Advertisement using Generative AI. 15 October 2024. Author’s personal collection.

    Throughout the creative process, I documented my interaction with the GenAI tool by saving drafts, screenshots, and logs of my prompts and the responses I received.

Magic Design 

    Firstly, I used Magic Design and typed a generic prompt, "south africa tourism poster," to investigate how the GenAI tool would interpret it. The outputs all consisted of natural landscapes of South Africa or the South African flag. Although nature reserves are among South Africa's main tourist attractions, I believe the prevalence of nature visuals exacerbates the existing stereotype that the country consists mainly of safari scenery.

Luo, Anita. Magic Design: First Attempt. 15 October, 2024. Author’s personal collection.

For my next step, I edited my previous prompt to require various tourist attractions, exploring the extent of the GenAI tool's skewed output. I found that Magic Design was limited to a single image per output. Furthermore, prompts were one-time only, so I could not provide follow-up prompts to develop the initial outputs further.

Luo, Anita. Magic Design’s limitation of creating more than one image in their design. 15 October, 2024. Author’s personal collection.

I specified my prompt further by adding "nature and city attraction" and "more than 2 images," attempting to solve the challenges from the previous step. This time I received more variation in images, but the one-image limitation persisted. I also noticed that the text produced in the image was generic and could apply to any country being advertised; in the next step, I aimed to resolve this issue.

Luo, Anita. Further limitations of Magic Design. 15 October, 2024. Author’s personal collection.

In the next prompt, I wanted to explore Magic Design's capacity to design with an understanding of context. I refined my prompt to include a specific contemporary design style and specific South African phrases. As a result, I found improvements in the design's complexity and arrangement of elements, and some outputs contained the term "Rainbow Nation," a popular term for South Africa.

Luo, Anita. Exploration of Magic Design’s features. 15 October, 2024. Author’s personal collection.

I chose the final design template because of the relevance of the term "Rainbow Nation." I dismissed its image, as I intended to use Magic Media to add more visual elements.

Luo, Anita. The final choice for the advertisement. 15 October, 2024. Author’s personal collection.

The interface for Magic Media is easy to interpret. However, like Magic Design, the GenAI tool allows only one prompt at a time.

Luo, Anita. Interface for Magic Media. 15 October, 2024. Author’s personal collection.

My first prompt was "South African locals showing tourists around South African tourist attractions with a smile on their faces." Through this prompt, I intended to see a variety of races and tourist attractions. However, the outputs depicted only Black people, and the location of the background was imperceptible.

Luo, Anita. Output for “South African locals showing tourists around South African tourist attractions with a smile on their faces”. 15 October, 2024. Author’s personal collection.

As a result, I had to be decisive and specify the output I wanted, stating that I wanted South Africans of different ages and races ziplining in a forest. Unfortunately, the outputs were still dominated by Black people.

Luo, Anita. Output of “South Africans of different ages and races doing ziplining in the forest”. 15 October, 2024. Author’s personal collection.

To tackle this limitation of representation, I spelled out which races I wanted, which required prior knowledge of South Africa's diverse population.

Luo, Anita. Output and prompt for third image. 15 October, 2024. Author’s personal collection.
Luo, Anita. Output and prompt for Nelson Mandela. 15 October, 2024. Author’s personal collection.

One particular shortcoming I noticed was Magic Media's inability to put groups of people of different demographics into one output.

Luo, Anita. Close-up of output and prompt for Nelson Mandela. 15 October, 2024. Author’s personal collection.

I then removed any words or phrases referring to South Africa to test whether the racial bias in the outputs resulted from the prompt's association with South Africa. The outputs displayed more diverse races, including white people. The results confirm that the AI model associates South Africa with Black people.

Luo, Anita. Output of prompt with no South African reference. 15 October, 2024. Author’s personal collection.

Unfortunately, Magic Design did not have a function to rearrange visual elements, so I had to organize the media myself.

To test whether the AI model has the capacity to put people of different races into one image, I specified my prompts further. The outputs showed that it is possible to depict people of different demographics together; however, the GenAI tool does so only at the user's explicit request and otherwise groups people of the same background together. In a larger context, the AI model's output reflects a lack of diversity and inclusion between different groups of people.

Luo, Anita. More diversity in images. 15 October, 2024. Author’s personal collection.

As an experiment, I also tested DALL-E. The tool displayed limitations similar to Magic Media's: a strong association of South Africa with Black people, and a lack of inclusivity of different types of people.

Luo, Anita. Screenshot. 15 October, 2024. Author’s personal collection.
Luo, Anita. Screenshot. 15 October, 2024. Author’s personal collection.

Reflection

     Generative AI (GenAI) tools massively reduce the time investment needed to create content. In a fast-paced world, tools such as Magic Media and Magic Design are transforming the ideation process for designers and non-designers across disciplines, allowing anyone to conceptualize an idea and create a proof of concept swiftly. At the same time, these GenAI tools let us explore how generated outcomes redefine the perceived creativity of a piece of work, how AI assistance differs from human-only collaborative processes, and how it further complicates existing authorship-attribution problems. In the role of the "author," I aim to use Magic Media and Magic Design on Canva to create a poster that accurately promotes features of South Africa, such as its landscapes, activities, and people. As a South African, I will evaluate how much guidance, refining, and editing is needed from me to create a satisfactory promotional poster of my country that avoids potential bias, stereotypes, or misrepresentation.

     For this project, I first entered prompts into Magic Design. My approach was to start with a simple prompt open to many interpretations, to investigate whether the GenAI tool had enough context to generate an accurate outcome. After identifying drawbacks in the generated outcome and areas needing improvement, I refined my prompts through additions and alterations, repeating this procedure until I found an output free of the issues I had identified. I used the same iterative process with Magic Media. In the final stage, I arranged the media on the poster in a way I deemed presentable and usable in a real tourism advertisement. Throughout the process, I worked closely with the GenAI tool, as it required heavy producer oversight; I felt like a supervisor practicing autocratic leadership. Firstly, all the ideas were developed by me, with little input from Canva in the form of suggestions, feedback, or alternatives to my initial concepts and prompts. There was no dialectic relationship between the GenAI tool and me; the interaction was purely utilitarian, and I alone determined the direction of the project. In my opinion, this lack of dialogue is one of the biggest differences between AI assistance and human assistance, though many other GenAI tools do have the capacity for dialogue. Secondly, Canva mostly produced less-than-satisfactory results, so I had to provide guidance and specify what I envisioned to a large extent. I therefore retained control over the generated outcomes.

     On the one hand, Canva's algorithms can augment and emulate human work, which saves money and time. For example, I did not have to take photographs in South Africa or hire someone to take them. Furthermore, I did not have to go through any procedures or spend time ensuring I was using copyright-free material, or securing my own intellectual property rights to the advertisement. According to Canva's AI Product Terms, users who utilize Canva's Magic Studio features for commercial purposes may not have exclusive rights to the generated outputs. Additionally, there were instances in which the generated images challenged my ideas by producing unforeseen visual elements that enhanced the poster's purpose. For example, one prompt included only "Cape Town," yet it generated Table Mountain, which I had not envisioned initially. I accepted the image because Table Mountain is one of South Africa's iconic landmarks. This was a moment where the AI's contribution felt truly "creative."

     On the other hand, there are many drawbacks to achieving my creative goal of producing an accurate and reliable advertisement of South Africa. As mentioned, there is a high demand for producer oversight through increasingly specific prompts. This is partly due to the limited prompt input option: images cannot be refined with follow-up prompts. I also encountered what Joy Buolamwini calls the "coded gaze," referring to algorithmic bias. I discovered that when my prompts included "South Africa," the generated images consisted only of Black people, and when I asked for people of different races and ages, the GenAI tool continued to generate similar results. This experience shows how AI both is shaped by and shapes society. Given the preconceived and oversimplified images of a "mid-tier" or "secondary" country such as South Africa, which may receive less attention in the datasets that train AI models, I am concerned about the potential misrepresentation and misinformation these tools may present to people outside the country, exacerbating misunderstanding between different groups of people. Furthermore, the GenAI tool also tended to group people of similar demographics together, a reflection of a lack of diversity and inclusion in society. Regarding ethical implications, there is increasing concern that the algorithms used by GenAI tools produce discriminatory outputs because they are trained on data embedded with societal biases. Additionally, Canva discloses that it does not guarantee that generated content is cleared for use, particularly if the output reproduces text and images from existing works, so there are potential challenges in ensuring that all generated outputs are free of legal risks as well.

   Lastly, the application of Canva's GenAI tools for commercial use problematizes authorship attribution. Canva requires users to disclose that AI has been used to generate content, according to its sharing and publication policy. However, producers have been found to receive more credit for work assisted by algorithms than for work assisted by humans (Jago and Carroll). Through my experience with Magic Design and Magic Media, I strongly agree that algorithmic assistance demands more producer oversight than human assistance, at least for simple and accessible GenAI tools such as Canva's. Despite my contributions, from a legal perspective I am not assigned credit. In the greater context of distribution, compensation, and accountability, authorship is crucial; there is thus a tension between social perceptions of authorship and actual legal authorship attribution in the area of AI. In light of the possible legal risks of claiming intellectual property discussed above, I believe the creative industry could embrace this new phenomenon of a free, "non-proprietary" domain of content creation, separate from the domain characterized by patents, copyrights, and trademarks.

    In this project, I have outlined the advantages and disadvantages of using GenAI tools such as Magic Design and Magic Media on Canva and their impact on the traditional process of content creation. Despite their ability to reduce work time and cost, these tools exhibit and reinforce existing societal biases in their outputs, particularly racial bias and a lack of diversity. Artificial neural networks may be both affected by and affecting the masses: "a cycle of bias propagation between society, AI, and users" (Vlasceanu and Amodio). To mitigate discriminatory outputs, a high level of producer oversight is required to keep the generated content within the project's objectives. I therefore believe these AI tools should be seen as "extensions of human organs" (Agüera y Arcas 5). The project contributes to the discourse on AI authorship attribution, in which copyright over generated content is limited regardless of the respective contributions of humans and AI. As a result, however, a new form of freedom is introduced into the creative sector.

Works Cited

Agüera y Arcas, Blaise. “Art in the Age of Machine Intelligence.” Arts, vol. 6, no. 4, 29 Sept. 2017, p. 18, https://doi.org/10.3390/arts6040018.

Buolamwini, Joy. “How I’m Fighting Bias in Algorithms.” YouTube, TED, 30 Mar. 2017, www.youtube.com/watch?v=UG_X_7g63rY&t=156s.

Jago, Arthur S., and Glenn R. Carroll. “Who Made This? Algorithms and Authorship Credit.” Personality and Social Psychology Bulletin, vol. 50, no. 5, 3 Feb. 2023, pp. 793–806, https://doi.org/10.1177/01461672221149815.

Pavlick, Ellie. “From the MIT GenAI Summit: A Crash Course in Generative AI.” YouTube, MIT AI ML Club, 14 Mar. 2023, www.youtube.com/watch?v=f5Cm68GzEDE&ab_channel=MITAIMLClub.

Vlasceanu, Madalina, and David M. Amodio. “Propagation of Societal Gender Inequality by Internet Search Algorithms.” Proceedings of the National Academy of Sciences, vol. 119, no. 29, 12 July 2022, https://doi.org/10.1073/pnas.2204529119.

 

Week 4: Soundscapes: Inside/Outside

Depths (with Sound) – Anita Luo

Download Unity package here!

 Playthrough of the final scene:

“Sound map” of the scene:

Source: Own Image

1. PROPOSAL

I am prompted to create a conceptual “soundscape” through fieldwork and the reconstruction of inside and outside sound. I intend to experiment with the “hi-fi” and “lo-fi” systems mentioned in R. Murray Schafer’s “Music of the Environment” within a single space, to see how their contrasting perspectives can create meaning and effect for the audience. By removing sounds from their “natural sockets” and relocating them into the space of my Unity project, I would like to build a soundscape of “moving and stationary sound events.” In an attempt to illustrate transient memory in Depths, I also wish to take advantage of silence in some way within the virtual time and space.

2. BRAINSTORMING

In my sketch, I wanted to categorize which sounds would be considered outside or inside and how that intertwines with the concept of hi-fi and lo-fi systems. As a result, I created a reverb zone within the convenience store to act as the silent space within my project and exhibit a hi-fi sound system. Integrating my concept, this space of stillness acts as the momentary recall and isolation of the space in my head. Everything outside the store is at a standstill too; the external world outside my head pauses as I focus my attention on the store.

Additionally, I also considered the gameplay of Promesa and intended to add footsteps as the audience moves through the space to enhance their sense of particularity and immersion. 

Source: Own Image

3. INSPIRATION & CONCEPTUAL INTEGRATION

Concept #1: SOMA Underwater Soundscape

The sound design in SOMA effectively captures the underwater environment with echo effects that adapt to room size and spontaneous sounds that elicit agitation toward the unknown. I think that similar sound effects could enhance the soundscape in my project.

Concept #2: Silence in John Cage’s 4’33”

I am inspired by how the prevalence of silence in 4’33” encourages contemplation of our surroundings by framing the environmental and involuntary sounds. The audience is put into a moment of attention and feels a sense of immersion and particularity.

4. PROCESS

1) Field Work

a) For this project, I used a Tascam recorder and my phone to record the environment. I spent a few minutes sitting and listening in the convenience store, the reference space for this project, noting which discrete and indistinct sounds I heard. At times there was only the sound of the fridge, which sounded like an air conditioner. Its dominance overpowers any small sounds present; in the context of recalling memories, one might lose these minuscule details. Walking out of the convenience store, the external space turns into a lo-fi soundscape where constant information becomes noise. I therefore wanted to record sounds that could emulate this dichotomy between the inside and outside soundscapes.

b) For the bus that passes the convenience store, I recorded with my phone. The phone’s recording quality is not as high as the Tascam’s; however, it picked up all the dominant sounds of the space inside the bus.

c) I also recorded the inside of a cafe to capture the sound of people talking. However, because I used a phone, the recording quality was not satisfactory enough to edit and embed in my scene later in the process.

d) Using a shotgun microphone, I also captured the soundscape of an area near a street, recording the discrete sounds of cars, wind, trees, and environmental ambience.

Source: Own Image

e) I made an individual recording of the air conditioner to isolate the sound for easier editing later, giving me a more realistic audio source for the scene.

Source: Own Image

f) Using my hand, I knocked on a thick book to simulate the sound of footsteps. 

Click here for raw audio folder!

*includes audio not mentioned and used

2) Audio Editing

a) I used Audacity, the only editing software available to me at the time, to edit the audio I recorded. The software has many limitations, and I would have preferred Adobe Audition; however, for the objectives of this project, I believe its functions produce satisfactory outcomes.

b) Firstly, I edited the audio of “Outside.wav” by following this tutorial on creating an underwater effect in Audacity. To make a muffled effect, I applied the “Low-Pass Filter” with the “Roll-off” set to 24 dB per octave. Additionally, I added some reverb to suggest the size of the space: the external world is boundless, with an incessant influx of information. I exported the audio and named it “Outside water.wav.”

Source: Own Image
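To illustrate the muffling idea behind a low-pass filter, here is a one-pole smoother over raw samples. This is only a sketch of the concept; Audacity's Low-Pass Filter works differently (you choose a cutoff frequency and a roll-off steepness):

```python
def low_pass(samples, alpha=0.2):
    """One-pole low-pass filter: each output moves only a fraction
    `alpha` toward the current input, smoothing out rapid changes.
    Smaller alpha means a duller, more 'underwater' sound."""
    out, prev = [], 0.0
    for s in samples:
        prev = prev + alpha * (s - prev)
        out.append(prev)
    return out
```

A rapidly alternating (high-frequency) signal is strongly attenuated by this smoothing, while a slowly varying (low-frequency) signal passes through almost unchanged, which is the essence of the muffled effect.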

 

c) For the footsteps, I used “Noise Reduction” to remove background noise and isolated two knocks to serve as the footsteps.

Click here for edited audio folder!

3) Reconstruction

a) I incorporated the audio into the Week 3 assignment, considering how the timing and location of these audio sources would affect the mood and feeling of the space. I began by categorizing each sound as outside or inside, lo-fi or hi-fi, and external or internal. This distinction offers a juxtaposition between the overwhelming, ominous environment outside the convenience store and the isolated stillness inside.

b) I added the bus sound and animated it to simulate the bus’s movement. In this way, I utilized a moving sound, which helped inform the audience about the space.

Source: Own Image

c) To create this distinction between the two spaces, I followed the tutorial below to create a custom-shaped sound zone, so that the audience hears the underwater “outside” audio before they arrive at the convenience store.

Source: Own Image
Source: Own Image

d) I followed the tutorial below to add the footstep audio to the player capsule so that the footstep audio plays when the WASD keys are pressed. Unfortunately, the script has some flaws. Firstly, it does not account for when the player jumps, which means the footstep sound plays even in midair. If I had time, I would have added a boolean check so that the audio does not play when “onground” is false. Secondly, I would also add sprinting sounds; however, I did not think of this during the recording step. Thirdly, I would like to change the footstep sound depending on the material of the ground. I wanted to do this initially but did not have time to find good tutorials on how to do it.

Source: Own Image
Source: Own Image
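The fix I describe can be sketched as a simple gate. This hypothetical Python version (the names movement_key_down, on_ground, and cooldown_over are my own stand-ins for the checks a Unity script would make, e.g. against the controller’s grounded state) shows the missing condition:

```python
def should_play_footstep(movement_key_down, on_ground, cooldown_over):
    """Play a footstep only while a movement (WASD) key is held, the
    player is grounded, and the previous step clip has finished."""
    return movement_key_down and on_ground and cooldown_over

assert should_play_footstep(True, True, True)        # walking on the ground
assert not should_play_footstep(True, False, True)   # midair: the original bug
assert not should_play_footstep(False, True, True)   # standing still
```

Changing the footstep clip per ground material would then just be one more input to this gate.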

e) Lastly, I followed the tutorial below to create reverb zones. The reverb zone in the middle is a bathroom reverb, which highlights the space by echoing the sounds produced within it. In this way, I enhance the hi-fi characteristic of the space. The reverb zone surrounding it is an underwater reverb, which added a good effect to the already-edited outside audio.

Source: Own Image
Source: Own Image

f) I added more colliders and also made adjustments to the player capsule, slowing both the walk and sprint speeds.

Source: Own Image
Source: Own Image

5. AUDIENCE RECEPTION & 6. CONCLUSIONS

Through this project, I became more intentional about where I allocated sound in a space. I understood what my space entailed, its narrative and mood, and the audio played a crucial role in expressing these aspects effectively. Some techniques I used include reverb zones, animation, distinction (and its deliberate absence), and 3D sound design. In terms of areas to improve, I would like to tackle the limitations of the footstep script, which were also mentioned during class, by taking jumping, sprinting, and ground material into account. The sense of solitude and player-particularity within this space is enhanced by the segregation of noise and silence. This silence is barricaded by a transparent fence enclosing a hi-fi soundscape, while the noise is free to run outside. As a result, as the person bearing the memories, I feel the need to stay within the convenience store to appreciate and remember the past, sometimes escaping reality. 

7. APPENDIX

First playthrough without footsteps:

Second playthrough without slow walk:

8. REFERENCES

Schafer, R. M. (1973). The music of the environment. Universal Edition.

 

Week 3: Personal Space

Depths – Anita Luo


 Playthrough of my Scene

Source: Own Image
Source: Own Image
Source: Own Image
Source: Own Image

UNITY PACKAGE: https://drive.google.com/file/d/1WT6yhWFmKYnfGK_paBgC5rr26PAaw3FF/view?usp=sharing 

Original space:

Source: Own Image

1. PROPOSAL

I was prompted to consider how the artist of Promesa created narrative and mood in various locations through lighting, materials, camera movement, composition, and other elements of spatial and visual design. I found a place in a convenience store that made me feel deja vu and partially evoked a sense of a lucid dream. 

2. BRAINSTORMING

Firstly, I played Promesa to take note of the various techniques by which the game conveyed nostalgia. I noted the following:

  • The details of objects and textures became more vivid as time became more recent for the narrator.
  • The color is monotonous and cohesive when the narrator is little. This illustrates a sense of loss or innocence in the details of the scene. 
  • The scale of objects in memories was not always accurate; this might be due to importance or attention.
  • There are abstract scenes that may have functioned as a representation of the narrator’s emotions at that point in their life.
  • The fragmentation of memories is enhanced by disappearing or transparent objects.
  • The white mist of certain scenes suggests a sense of mystery while framing one specific space.
  • The sound of footsteps puts the player into the shoes of the narrator which strengthens the player’s immersion and connection to the game.

3. INSPIRATION & CONCEPTUAL INTEGRATION

Concept #1: Rising sea levels

Source: https://www.pokemopolis.com/episodes/dp/515.htm

After using photogrammetry to create an object of the space in the convenience store, I noticed that there were many areas of error despite taking 50+ photos of the space. As a result, I decided to take advantage of this setback by incorporating the theme of decay. Later while exploring and editing the space, the blue undertone I came across reminded me of an episode of Pokémon called “Sandshrew’s Locker” in which Ash and his friend explored a drowned city. The surreal feeling evoked by such a space was something that I wanted to incorporate. Furthermore, the action of drowning can be a metaphor for my memories that slowly wither through the passage of time.

4. PROCESS

Using Polycam, I scanned the area in the convenience store that I wanted. I downloaded the glb file, converted it to fbx and imported it into Unity.

Source: Own Image

The texture did not import with the object as I had expected. Thus, I extracted the texture under “Material.” I also adjusted the scale of the object in the project panel.

Once I added the texture and material, I tried adjusting the color of the lighting for experimentation. I found that the blue created an appealing atmosphere, which led me to approach the space in an underwater manner. I added a spotlight and experimented with different focal points in my scene. I found that low-intensity light over the entire scene worked well in making it look like it was underwater. I played around with the position of the spotlight. The direction of the light made more sense coming from outside the window, shedding light into the convenience store. Furthermore, the shadows the object created made the tones of the scene more interesting.

To create a foggy and refraction effect like the player is underwater, I followed a video called Unity URP – Underwater Effect Renderer Feature to create an underwater render effect.

I followed the video Unity URP – Simple Caustics Effect Tutorial to create a caustic effect on the floor. This effect would effectively communicate to the player that we are underwater. Furthermore, I learned how to utilize shader graphs.

I created a URP decal projector component for an empty object.

I experimented with different colors for the caustic effect and light to see which was more natural and coherent with the environment.

I once again explored different settings and positions for the spotlights and environmental lighting.

I imported a pre-made aluminum can from the Unity Asset Store so that I could make it float in the environment and enhance the underwater effect.

Source: https://assetstore.unity.com/packages/3d/props/exterior/aluminum-can-standard-210802

Lastly, I adjusted the first-person controller settings so that the player moves, rotates, and jumps at a lower speed to indicate the density of water.

5. AUDIENCE RECEPTION & 6. CONCLUSIONS

In this Unity scene, I aimed to evoke a sense of deja vu in which there is an interplay between decay and preservation in the depths of the ocean. In class, I received feedback that the underwater setting amplified this duality due to its association with shipwrecks. The sense of familiarity in a certain area is sometimes accompanied by feelings of loss, where one feels the essence of what has been, but the memory is assembled from fragmented or floating pieces. Whenever I see corners of convenience stores, I can feel myself reaching deep into my memories to recollect why I feel intimacy with these places. I recollect the days when I sat near the windows of these convenience stores, maybe drinking a can of soda or eating ice cream, watching the sunset into the cool dusk. Thankfully, it was pointed out in class that my intention of changing the scale of the space was apparent, as the player was a lot smaller than the space—a suggestion that the player is playing as a child in this space. At the same time, there is a sense of loneliness when I realize that I am experiencing these emotions alone in my head, and the only thing keeping these memories alive is myself. Thus, in this void, I feel a deep appreciation for the little things of the past that are kept at a standstill between longing and tension.

7. APPENDIX

Video of the entire space:

Source: Own Image

Week 2: Site Specific Photocollage

Commodity – Anita Luo & Beatrice Zhang 

Source: Own Image
Source: Own Image
Source: Own Image

1. PROPOSAL

We are prompted to create a reinterpretation of a space that embodies the “spectacle” in Society of the Spectacle by Guy Debord. With our documentation of the space, we can illustrate Debord’s social phenomena through the elements of art (line, color, shape, form, value, space, texture) to create a digital sculpture in Unity.

2. BRAINSTORMING

After doing an independent reading of Chapter 1 of Debord’s Society of the Spectacle, we shared a mind map of our understanding, ideas, and questions about the “spectacle” and the concept of “Separation Perfected.” Through this session of comprehension, knowledge-sharing, and examination, themes of autonomy, objectification, and representation were prominent. We chose our location based on these themes.

Source: Own Image

 

Main concepts from quotes:

  • “social relation among people mediated by images”
  • “autonomous images” and the “autonomous movement of the non-living”
  • “technologies based in isolation,” like “automobile to television,” which have “constant reinforcement of…isolation of ‘lonely crowds’”
  • “separate pseudo-world that can be looked at”

3. INSPIRATION & CONCEPTUAL INTEGRATION

Concept #1: Nam June Paik

Source: https://www.flickr.com/photos/youraccount/7177851500

Nam June Paik – Dadaikseon (The more, the better)

Artist Nam June Paik explored how moving images and technology would become inherently embedded into the daily lives of humans through video art. While the artist conveyed the possibility of human connectivity, we wanted to adopt his use of television as a means to communicate the isolation a viewer may feel from autonomous moving images. In this way, we deviate from his artwork to critique the “spectacle.”

Concept #2: Robotic and autonomous

Source: https://en.wikipedia.org/wiki/Can%27t_Help_Myself_%28Sun_Yuan_and_Peng_Yu%29

Sun Yuan and Peng Yu – Can’t Help Myself

We were inspired by the artwork’s intention to convey the repetitive and automated reality. Similarly, everything that we experience in our lives translates into fragmented arrangements of images and representations. As a result, we experience a “separate pseudo-world that can be looked at.” Ultimately, this perspective can be dehumanizing and depressing.

4. PROCESS

  1. Using our mind map, we extracted some keywords to choose the space we wanted to document. We decided to explore a commercial setting that exhibited the concerns of capitalism and commodities. Thus, we went to Taikooli, a shopping mall characterized by lights, products, and luxury.
  2. In Taikooli, we sought to find any space that brought about isolation to someone who would view it. A common motif in the shopping mall was shelves with objects placed at equal distances from each other. They could become fragmented arrangements of images in the consumer’s daily lives or become a representation of the consumer’s experience. Photos can be found under “7. APPENDIX.” 
  3. Thus, we started to sketch ideas for the different methods by which we could convey the idea of having something but not being. In this ideation phase, we incorporated additions, subtractions, alterations, and combinations of our various ideas.

    Source: Own Image
  4. We further ideated ways to incorporate the documentation and our sketches in a more coherent and effective manner. Thus, we sketched our final idea which consisted of having televisions displaying experiences or memories on their screen that are placed on display shelves.

    Source: Own Image
  5. We added free Unity assets of a television and a shelf from the Unity Asset Store.
    Source: https://assetstore.unity.com/packages/3d/props/furniture/low-poly-metal-rack-213045

    Source: https://assetstore.unity.com/packages/3d/props/interior/soviet-television-electron-718-125030
  6. We opened a new 3D project on Unity and created a room for the sculpture.

    Source: Own Image
  7. We encountered a few problems. For example, the default magenta texture appeared over all our objects. However, after watching this video, we were able to fix it! The other problem we encountered was being unable to import assets. Ultimately, we decided to use the textures from these assets with our own 3D objects. 
  8. Creating empty parents and grouping made duplicating the televisions a relatively easy task.  

    Source: Own Image
  9. We then inserted quads, each displaying a video. The videos taken in Taikooli were edited using CapCut, as its filter function was fast and accessible. The same filter was used on every video to create unity within the sculpture.
    Source: Own Image

    The edited videos can be found below:







  10. We then adjusted the texture of the wall, floor, and ceiling to black. This enhances the isolation and creates a focal point in the sculpture. 
  11. We lit the sculpture using a spotlight. Unfortunately, we could not make the videos emissive despite searching through many tutorials; the overlay of our software looked different from the ones shown. However, we would like to figure this out in the future.
  12. We intended to embed scripts to create some animation. However, it did not add to the sculpture’s purpose so we discarded this idea.
  13. We adjusted the lighting settings and rendering to make our sculpture immersive through lighting, texture, and color. Through the repetition of the television form and negative space, the sculpture evokes a sense of an impersonal relation or detachment for the viewer from the content of the television. 
  14. Due to time constraints, we were unable to add the shopping cart from our sketch.

5. AUDIENCE RECEPTION & 6. CONCLUSIONS

In the Zoom class, we shared our project remotely while explaining our concepts and creation process. One piece of positive feedback concerned the specificity of the space – it is condensed into a sculpture existing within a focused area, which fixes the audience’s attention on what we have created. Televisions were a smart choice to embody the alienation of a commercialized society where people pay for experiences and objects but lose their subjectivity as beings. On the other hand, we also received comments about our choice to use televisions as our subject matter and were prompted to consider its implications on a deeper level. When making choices in a project, we should consider why we make certain decisions, how effective they are, and whether there are alternative ways we could approach a concept. We think one of our main reasons for using the television is that its traditional function is to be looked at to receive information. Its one-way channel from the screen to the viewer highlights this human disconnectivity. In this respect, we could not think of a more suitable subject matter. However, in a modern context, we could find a subject that is more relatable to draw in the audience and illustrate our message more effectively. Additionally, we received feedback that the viewer can interpret the sculpture as surveillance cameras. As a result, we are considering ways in which we can take advantage of this aspect—maybe surveillance can be a means to extend the critique of the spectacle. However, we believe we could have avoided this issue had we added the shopping cart to the sculpture. Ultimately, through the repetitive and geometric form of the sculpture, we want to communicate that the boxing, repackaging, condensing, concentration, and separation in the arrangement of society undermine the fluidity and flexibility of our experiences as living beings.

7. APPENDIX

Videos: IMG_9009 IMG_9013 IMG_8998 IMG_8997

8. REFERENCES

Debord, Guy. “Separation Perfected.” Society of the Spectacle, Black and Red, Detroit, 1977.

 

Week 1: Spatial design exercise

Source: Own Image

Views:

1. PROPOSAL

Considering the “things” regarding Space, Form, Parti, Paths, Views, and Composition in Matthew Frederick’s book “101 Things I Learned in Architecture School,” this exercise requires me to design a virtual space. Simultaneously, I need to consider how the “things” are similar or different from the environment approaches often used in level design. I am prompted to think about the specific intent and potential narrative of the virtual space I will create. What am I trying to express with this space?

2. BRAINSTORMING

An idea that I have been very interested in recently is diaspora as a Chinese South African.

Source: Own Image

Main concepts:

  • Chinese South African Identity
  • “Floating”
  • Identity and identity-inflected space
  • Cultural Politics
  • Empowering Act of Occupation
  • COVID-19?

3. INSPIRATION & CONCEPTUAL INTEGRATION

Concept #1: The Disrupted Home

Source: L’eau Design

Source: L’eau Design

Concept #2: COVID-19 

COVID-19 and ART

Source: https://www.eshre.eu/covid19 

4. PROCESS

1. I first sketched my idea for a space that is organized around the idea of diaspora and the concepts that come with it. 

Source: Own Image

2. Using only Unity’s default 3D Game Objects (primitives) and the first-person character controller, I tried to recreate my sketched idea. While working, I would occasionally drop into play mode to see how it felt to move through the space.

After testing out the mechanics of Unity, I first used cubes to create the walls of the house. Due to the limited objects available, I modified the house from my sketch.

After completing the house, I created an empty parent called “House” to combine all the cubes.

I adjusted the positions of the house so that it would float.

Naming every object was very helpful while navigating my project.

I started making the spheres in my sketch. However, I decided to adjust the spheres into hybrid forms to contrast the house and floating elements.

A view of my project after following my sketch:

First-person starting point:

3. Lastly, I made adjustments to the sketch based on my playtesting.

For example, I noticed that some of the floating objects were too large for the player to navigate, so I reduced their scale.

I also noticed that I needed to place more objects to enhance the feeling of rootlessness.

Lastly, an error occurred where the camera of the player was skewed. However, I kept this to suggest an imbalance in my space.

“Things” I added:

  • Positive and negative space
  • Dwelling in a positive space
  • Thoughtful making of space
  • Contrasting elements
  • Denial and Reward
  • Underlying ideas

5. AUDIENCE RECEPTION & 6. CONCLUSIONS

Through my sharing in class, I received feedback that there is a good narrative in the juxtaposition of the floating, unstable objects and the grounded player. The idea of being in the space despite a lack of roots and the feeling of hysteria is thus communicated more effectively. There is also a clear path seen in the space from the floating objects, an ascendance, which adds dynamism to my work.

7. APPENDIX

 

8. REFERENCES

Frederick, Matthew. 101 Things I Learned in Architecture School. MIT Press, 2007.

Final Project: 6. Report

A. Excalibur – Anita Luo & Kiana Ng – Inmi Lee

 



 

*Photographs taken by IXL Fellow, Shengli 

B. CONCEPTION AND DESIGN

Fiction offers creative reconstructions of events that narrate and organize behavior through the representation and implementation of space—shaping how the audience perceives and interacts with their environment. Games, for example, deviate from reality to make an illusion playable as they channel agency into new forms on the screen. By actualizing fiction, “Excalibur” aims to alter the conventional game approach of reality-in-illusion by bringing illusion to reality and physical space—reversing the roles. Taking inspiration from Star Wars’ lightsaber roleplay toy, “Excalibur” embodies this concept of “illusion-in-reality” by implementing familiar game strategies such as a timer, a heart count (health tracker), competition, roleplaying, and sound and visual effects that respond to the users’ real-time movements. Consequently, the project also contributes to the theory that gamification, the application of elements of game-playing to other areas of activity, can enhance engagement and satisfaction. Additionally, the project is an extension of virtual games into the real world: by using the same types of manipulation adopted by virtual games, such as employing sound effects in a certain space, “Excalibur” constructs a virtual space without the need for a VR headset—offering the user convenience and innovation. The project is designed to make the game playable independent of lore, age, and physical fitness, to suit a wider audience consisting of more than just children. To do so, players were welcome to skip or disregard elements of the game; for example, they could skip the introduction explaining the characters’ backgrounds and the game instructions. Furthermore, disregarding game constraints, such as the game rules, did not put the project at a disadvantage, as players could still take pleasure in the system’s fast response to their actions through sound and visual effects accompanied by whimsical medieval background music. 
In terms of sound, the project used serial communication between Arduino and Processing to trigger sound effects via the copper attached to two swords, two shields, and the armor: when pieces touch, electricity flows and closes the circuit, creating a switch. The simple, single-purpose use of switches as the foundational framework of the project ensures fast system response and easier troubleshooting. During user testing, it was suggested that we add images of the characters, King Arthur and Mordred, to the monitor in positions corresponding to the players, to differentiate their roles. By using media objects as representations of them, the players can position themselves within the context of a duel and embody openness and experimentation in roleplaying. Secondly, it was recommended to incorporate game states with an opening menu, instructions, the actual gameplay, and an ending screen to create an experience where the user sees the system’s response and takes further action in response, in a continuous loop. Furthermore, including an instruction state stimulates the player to listen, think, and respond to the system, and gives their gameplay a meaningful aim and purpose. Both suggestions from the user testing were effective. While analyzing users’ experiences during our end-of-semester IMA show, we found that most players would look at the monitor to make sense of the situation and formulate their next plan of action, which made the system itself a great communicator.

Initial sketch

 

Revised sketch

*Note: Small changes were made to the final sketch

User’s Interactions




 

C. FABRICATION AND PRODUCTION

Materials

  • 1x Breadboard
  • 1x Arduino Uno
  • 1x USB Cable
  • 1x USB Protector
  • 4x 10 kOhm Resistors
  • A handful of M/M jumper cables
  • Jumper cables from the studio
  • 1x Ruler
  • 1x Hot glue gun
  • 1x Black cloth
  • 1x Brown cloth
  • 1x White cloth
  • 1x Silver cloth
  • 1x Tablecloth
  • 1x Strip of fake gemstones
  • 2x Copper tape
  • 1x Scissors
  • 1x Wire cutters
  • 1x Cutting board
  • Cardboard
  • 1x Masking tape
  • 1x Marker
  • 1x Electrical tape
  • Soldering set
  • 1x Wood for laser cutting

Not included anymore:

  • 3x Accelerometers

Project Plan

During the proposal phase, I created a Gantt Chart for the “Excalibur” Project so Kiana and I would have a better plan of action.

 

Lookback: One of the flaws of our midterm project was our lack of planning. As a result, many of the steps in the respective areas of coding, physical computing, and building did not work together. There were moments when we had to temporarily stop on one step and go back to working on another component of the project so that we could continue our last step because there was not a clear picture of how everything worked together. This stagnation affected our productivity and morale throughout our midterm project which caused us to have last-minute obstacles that we could not address before the final critique. To fix this mistake, we utilized a Gantt Chart for the final project to plan how every step correlated with the last having specific objectives, and how every step was building us towards a final goal. I believe the reason why we were able to finish the project on time with extra time to make additional adjustments was because of proper planning and staying on track; not much time was wasted which gave us extra time to do other work in other courses. By learning and proactively addressing past mistakes, the process of creating the project was much faster, and our experience was more rewarding and fun!

We also made a flow chart to understand the essential components of our game.

 

Building the Circuit (Anita and Kiana)

Step 1: Testing the accelerometer

 

Our initial idea was to use the accelerometer to detect changes in the sword’s orientation and play sword-sheath sound effects to immerse the players in the idea of a manipulated reality. Additionally, we wanted to add a wand for Merlin, a wizard in the Excalibur story, as an additional role for audience members who would like to add special effects on the screen without affecting the game. Firstly, on Professor Lee’s recommendation, I consulted the tutorial on how to use an accelerometer from the IXL tutorial repository. In this step, I was able to calibrate the accelerometers’ x, y, and z axes.

Secondly, the professor assisted me further by giving me code for sending the xyz values from the Arduino to Processing:

void setup() {
  Serial.begin(9600);
}

void loop() {
  // to send values to Processing assign the values you want to send
  // this is an example:
  int sensor0 = analogRead(A0);
  int sensor1 = analogRead(A1);
  int sensor2 = analogRead(A2);

  // send the values keeping this format
  Serial.print(sensor0);
  Serial.print(",");  // put comma between sensor values
  Serial.print(sensor1);
  Serial.print(","); // put comma between sensor values
  Serial.print(sensor2);
  Serial.println();  // add linefeed after sending the last sensor value

  // too fast communication might cause some latency in Processing
  // this delay resolves the issue
  delay(100);

  // end of example sending values
} 
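On the receiving side, each “v0,v1,v2” line has to be split back into integers before the values can drive the sketch. The parsing logic can be outlined in Python (the function name is my own; the project itself would do this in Processing):

```python
def parse_sensor_line(line):
    """Split one comma-separated message (as produced by the Arduino's
    Serial.print calls ending in Serial.println) into integers."""
    return [int(v) for v in line.strip().split(",")]

print(parse_sensor_line("312,509,498\n"))  # → [312, 509, 498]
```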

Lookback: While the code worked, we chose not to use the accelerometer because

1) its purpose was useful but not essential to the game design

2) there would be additional cautions to consider when attaching the sensor to the weapons

3) allowing single-player mode (one of the main reasons we wanted to implement this sensor) could be a distraction rather than a special addition, confusing players about the objective of the game. When we initially wanted a single-player option, the ideas of a health tracker and a timer in the game rules had not yet been conceptualized, so with these recent changes our old ideas were reevaluated and discarded.

4) no more Merlin 🙁 (he would have been too distracting)

Lookback: Reevaluating every component continuously throughout the project-making process is very valuable. Asking oneself about the significance and value of each component is a useful skill to adopt, and it is probably the responsibility of the creator to ensure that all the elements of a project work together effectively to produce a meaningful and interactive product for the audience. Furthermore, as the making process progresses, new elements can sometimes affect the old. Unnecessary or ineffective components could obscure the vision and mission of the product when users interact with it: sometimes less is more.

Step 2: Prototype

For the prototype in user testing, I used a simple diagram from the internet to assist me in building the circuit. I consulted an Arduino switch diagram because the mechanism of the weapons was based on a switch: a closed circuit forms when different weapons touch. For this, I used digital pins 2–4 to detect a 0 or 1 to send to Processing.

Digital Read Serial | Arduino Documentation

Source: docs.arduino

In place of the switch were the weapons themselves. For the wires attached to the weapons, we used stranded wire, which Kiana and I joined to solid wires for a better connection with the breadboard.
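Because each weapon contact is just a digital pin reading 0 or 1, interpreting the hardware reduces to asking which circuits are currently closed. A minimal Python sketch of that idea (the contact names and the snapshot are hypothetical, not the project’s actual code):

```python
def closed_contacts(pin_states):
    """Given contact name -> digital reading (1 = circuit closed through
    the copper), return the contacts that are currently closed."""
    return [name for name, value in pin_states.items() if value == 1]

# Hypothetical snapshot: a sword is touching a shield; heart and stone are idle.
print(closed_contacts({"sword": 1, "shield": 1, "heart": 0, "stone": 0}))  # → ['sword', 'shield']
```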

 

https://drive.google.com/file/d/1Tb4Ia8wRYc3gnT6zFarGyT7_Mp6AMZOY/view 

Obstacle: It was difficult to differentiate the wires because the colors were the same and the wires were too long to trace back which object was which. Thus, we decided to label the wires. This step, along with Kiana’s help, was an effective step for me to quickly find wires in the case of troubleshooting.

 

Step 3: Final adjustments

 

 

During user testing, we were also recommended to add the stone that King Arthur drew the sword from, because it would further illustrate the story of the Legend of Excalibur. This was a simple step in which I added one more digital pin, “stonePin,” on digital pin 5. 

We laser cut new objects, and for this step, I attached solid wires to the copper on the weapons.

Obstacle: During user testing, we found that the stranded wires would tangle together because of their flexibility and material. To fix this issue, we switched to solid wires, which are more difficult to bend, and the cloth covering over them created even more friction. We found that the wires no longer tangled, and the cloth also added a professional touch.


 

Writing the code (Anita and Kiana)

 

I used Canva to create my video with voice narration by an AI voice generator website called ElevenLabs. Royalty-free sound effects were downloaded from Pixabay.

Obstacle: ElevenLabs required a subscription to continue using their service. Unfortunately, this meant that we could not fulfill our initial idea of having a storyline incorporated into the installation, as we only had a limited number of uses. If we had more time, we would have considered a different narrator, maybe a real-life person, to do all the necessary voiceovers to make our game more complete, customizable, and unique to the audience. Furthermore, voiceovers for the characters themselves would also have been a good addition to the game (especially in the ending state, with its different endings).

Images used:

Lookback: The midterm taught me a lot about the importance of synergy and teamwork. Therefore, for this project, I aimed to delegate as much of the codework as possible. Furthermore, I was more open to assistance from the learning assistants and fellow in Interaction Lab. Firstly, Kiana and I delegated the coding by each taking charge of one piece of software, namely Arduino and Processing, respectively. Because of clear communication and discussion, we both had a good idea of how the two programs would interact with each other; Arduino values were sent to Processing. Because Processing required more coding, we compensated for the difference in workload by giving Kiana more work in the building step. I felt a noticeable difference in my stress levels; I had a lot of pleasure writing the code this time. 

Obstacle: I encountered a multitude of issues while coding in Processing. Firstly, I had to learn game states, which were not covered in our coursework; to do so, I followed this YouTube tutorial. Independent learning is effective when the task is small, but I ran into more trouble with bigger problems like sound. Sounds would often overlap or play more than once, and I struggled with booleans, timing (millis()), and sound conditionals. Learning assistant Daniel and IXL fellow Kevin helped me extensively during this time. Kevin also helped me add a video to the "SLIDE1" state, since we did not learn how to play videos in class. This part took a long time (2+ hours), as we found the video's sound continued after the state ended. We also found that defining void movieEvent(Movie m) was essential to drawing the video on screen (it had not been playing before we added it). Through experimentation, we were able to add a video I had created beforehand to the project. I discovered how effective it is to work together with others, which I had not done in the midterm; I had more motivation and fun working on the project this way, and I found how supportive the IMA community is at NYU Shanghai!
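The pattern that finally solved the overlapping-sound problem can be modeled outside of Processing. Below is a minimal sketch in plain Java (class and method names are my own, not from the project) of the boolean "play once" guard: even though the game loop calls update() every frame, the guarded action fires only on the first call.

```java
// A tiny model of the "play once" guard used in the project: a boolean flag
// ensures a sound triggers a single time even though the draw loop runs
// every frame. Names here are illustrative, not from the actual sketch.
public class PlayOnceGuard {
    private boolean played = false;
    private int playCount = 0; // stands in for how many times sound.play() ran

    // Called once per frame; the guarded action only fires on the first call.
    public void update() {
        if (!played) {
            played = true;
            playCount++; // stands in for start.play()
        }
    }

    // Re-arm the guard, e.g. when returning to the start menu for a new game.
    public void reset() {
        played = false;
    }

    public int getPlayCount() {
        return playCount;
    }
}
```

In the actual sketch, the same idea appears as the playStartOnce and playEndOnce flags guarding start.play() and end.play().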

 

Lookback: I also organized, formatted, and titled sections of my code for better viewing.

Arduino Code:

const int shieldPin = 2; // digital pin for the shield
const int swordPin = 3;  // digital pin for the sword
const int heartPin = 4;  // digital pin for the heart (armor)
const int stonePin = 5;  // digital pin for the stone

void setup() {
  Serial.begin(9600);        // initialize serial communication
  pinMode(swordPin, INPUT);  // set sword pin as input
  pinMode(shieldPin, INPUT); // set shield pin as input
  pinMode(heartPin, INPUT);  // set heart pin as input
  pinMode(stonePin, INPUT);  // set stone pin as input
}

void loop() {
  int swordValue = digitalRead(swordPin);   // read value from sword pin
  int shieldValue = digitalRead(shieldPin); // read value from shield pin
  int heartValue = digitalRead(heartPin);   // read value from heart pin
  int stoneValue = digitalRead(stonePin);   // read value from stone pin

  // send the four values as one comma-separated line: sword,shield,heart,stone
  Serial.print(swordValue);
  Serial.print(",");
  Serial.print(shieldValue);
  Serial.print(",");
  Serial.print(heartValue);
  Serial.print(",");
  Serial.print(stoneValue);
  Serial.println(); // linefeed after the last sensor value

  // too fast communication might cause some latency in Processing;
  // this delay resolves the issue
  delay(20);
}

Processing Code:

String gameState;

import processing.serial.*;
import processing.sound.*;
import processing.video.*;

Serial serialPort;

int NUM_VALUES_FROM_ARDUINO = 4;
int arduino_values[]= new int[NUM_VALUES_FROM_ARDUINO];

//number of times the heart is attacked
int heartcounter = 0;

//images
PImage bg;
PImage heart;
PImage holyg;
PImage one;
PImage two;
PImage three;
PImage four;
PImage arthur;
PImage mordred;
PImage strike1;
PImage strike2;
PImage strike3;
int randomVal = 0;

//soundfiles
SoundFile bgm;
SoundFile countdown;
SoundFile hearts;
SoundFile sword;
SoundFile shield;
SoundFile start;
SoundFile startmenu;
SoundFile end;
SoundFile tick;

//video
Movie video1;
int previousValue;

//booleans
boolean playStartOnce = false;
boolean playEndOnce = false;
int starts = -1;

//fonts
PFont font;

//timer
String time = "60";
int t;
int interval = 60;
long startMillis;

//touching of swords and shields
int lastTouch;
int dist;

void setup() {
  fullScreen();

  gameState = "STARTMENU";
  //gameState values: STARTMENU, GAME, ARTHUR, MORDRED

  printArray(Serial.list());
  serialPort = new Serial(this, "COM5", 9600);

  //images
  bg = loadImage("Medieval Background.jpg");
  heart = loadImage("Pixel Heart.png");
  holyg = loadImage("Holy Grail.png");
  one = loadImage("1.png");
  two = loadImage("2.png");
  three = loadImage("3.png");
  four = loadImage("5.png");
  arthur = loadImage("King Arthur 1.png");
  mordred = loadImage("Mordred.png");
  strike1 = loadImage("Strike 1.png");
  strike2 = loadImage("Strike 2.png");
  strike3 = loadImage("Strike 3.png");
  //String[] strike = { "strike1", "strike2", "strike3"};
  //int index = int(random(strike.length));

  //fonts
  font = createFont("Arial", 30);

  //sound files
  bgm = new SoundFile(this, "Medieval Noble Music.mp3");
  countdown = new SoundFile(this, "Clock tick.mp3");
  hearts = new SoundFile(this, "Attack Heart.mp3");
  sword = new SoundFile(this, "Attack Sword.mp3");
  shield = new SoundFile(this, "Attack Shield.mp3");
  start = new SoundFile(this, "Starting sound.mp3");
  startmenu = new SoundFile(this, "Medieval Carnival Music.mp3");
  end = new SoundFile(this, "Endgame.mp3");
  tick = new SoundFile(this, "Clock tick.mp3");

  //video files
  video1 = new Movie(this, "Long version (online-video-cutter.com).mp4");
}

void draw() {

  getSerialData();

  // compare Strings with equals() rather than ==, which only works for interned literals
  if (gameState.equals("STARTMENU")) {
    startmenu();
  } else if (gameState.equals("SLIDE1")) {
    slide1();
  } else if (gameState.equals("GAME")) {
    game();
  } else if (gameState.equals("ARTHUR")) {
    arthur();
  } else if (gameState.equals("MORDRED")) {
    mordred();
  } else {
    println("something went wrong with gameState");
  }
  previousValue = arduino_values[1];
}

void startmenu() {
  image(one, 0, 0);
  if (startmenu.isPlaying()==false) {
    startmenu.loop();
  }
  if (arduino_values[3] == 0) {
    gameState = "SLIDE1";
  }
} //end startmenu

void slide1() {
  startmenu.pause();
  image(video1, 0, 0);
  video1.play();

  if (arduino_values[1] == 1 && previousValue == 0) {
    gameState = "GAME";
  }
  startMillis = millis();
}

void game() {
  video1.stop();
  if (start.isPlaying()==false && playStartOnce == false) {
    playStartOnce = true;
    start.play();
  }

  if (start.isPlaying() == false && bgm.isPlaying() == false && playStartOnce == true) {
    bgm.loop();
  }

  image(bg, 0, 0);
  //image(arthur, 30, 100);
  //image(mordred, 950, 170);
  image(heart, 1300, 5);
  image(heart, 1180, 5);
  image(heart, 1060, 5);

  //timer
  textSize(100);
  text(time, 650, 160);
  textSize(75);
  text("Timer", 630, 70);
  textSize(55);
  text("Arthur", 300, 900);
  text("Mordred", 1000, 900);
  t = interval-int((millis()-startMillis)/1000);
  time = nf(t, 3);
  
  if (t == 5 &&  tick.isPlaying() == false) {
    tick.play();
  }
  
  if (t == 0) {
    gameState = "MORDRED";
  }
  if (heartcounter==1 && end.isPlaying() == false) {
    image(bg, 0, 0);
    textSize(90);
    text(time, 650, 160);
    textSize(50);
    text("Timer", 660, 50);
    textSize(55);
    text("Arthur", 350, 900);
    text("Mordred", 1000, 900);
    image(heart, 1180, 5);
    image(heart, 1060, 5);
    //end.play();
  }

  if (heartcounter==2 && end.isPlaying() == false) {
    image(bg, 0, 0);
    textSize(90);
    text(time, 650, 160);
    textSize(50);
    text("Timer", 660, 50);
    textSize(55);
    text("Arthur", 350, 900);
    text("Mordred", 1000, 900);
    image(heart, 1060, 5);
    //end.play();
  }

  if (heartcounter==3 && end.isPlaying() == false) {
    image(bg, 0, 0);
    textSize(90);
    text(time, 650, 160);
    textSize(50);
    text("Timer", 660, 50);
    textSize(55);
    text("Arthur", 350, 900);
    text("Mordred", 1000, 900);
    //end.play();
  }

  if (heartcounter==3) {
    heartcounter = 0;
    gameState = "ARTHUR";
  }

  if (t==0) {
    heartcounter = 0;
    gameState = "MORDRED";
  }

  // play audio based on received value
  if (arduino_values[0] == 1 && shield.isPlaying() == false) {
    shield.play();
    lastTouch = millis();
    randomVal = floor(random(1, 4));
  }

  if (arduino_values[1] == 1 && sword.isPlaying() == false) {
    sword.play();
    lastTouch = millis();
    randomVal = floor(random(1, 4));
  }

  if (arduino_values[2] == 1 && hearts.isPlaying() == false) {
    hearts.play();
    heartcounter++;
  }



  dist = millis()-lastTouch;
  dist = constrain(dist, 50, 300);
  image(arthur, 500-dist, 100);
  image(mordred, 600+dist, 170);

  if (millis()-lastTouch<200) {
    if (randomVal == 1) {
      image(strike1, 0, 0);
    } else if (randomVal == 2) {
      image(strike2, 0, 0);
    } else if (randomVal == 3) {
      image(strike3, 0, 0);
    }
  }
}

void arthur() {
  //int startTime = millis();
  image(four, 0, 0);
  bgm.pause();
  tick.pause();
  if (playEndOnce == false) {
    playEndOnce = true;
    end.play();
  }
  //if (millis()-startTime >=4000) {
  if (arduino_values[3] == 1) {
    gameState = "STARTMENU";
  }
}

void mordred() {
  image(four, 0, 0);
  bgm.pause();
  tick.pause();
  if (playEndOnce == false) {
    playEndOnce = true;
    end.play();
  }
  if (arduino_values[3] == 1) {
    gameState = "STARTMENU";
  }
}

void movieEvent(Movie m) {
  m.read();
}

void getSerialData() {
  while (serialPort.available() > 0) {
    String in = serialPort.readStringUntil( 10 );  // 10 = '\n'  Linefeed in ASCII
    if (in != null) {
      print("From Arduino: " + in);
      String[] serialInArray = split(trim(in), ",");
      if (serialInArray.length == NUM_VALUES_FROM_ARDUINO) {
        for (int i=0; i<serialInArray.length; i++) {
          arduino_values[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

Building the Installation (Anita and Kiana)

Step 1: Prototype

Kiana used Cuttle to laser-cut miniature weapons for user testing. 

 

Lookback: During the midterm, we focused too much on the visceral level of the project, which left little time for coding and circuits. This time, we decided to leave external considerations until the end, after the coding and circuit were done. Spending more time on the project's functionality was a good choice: we went into the installation stage more confident and encountered fewer obstacles there, so it needed less time. We also decided to make the prototypes smaller for easier transportation, but also to save resources at the IMA lab, leaving more for final touch-ups for both us and other students.

Initial “vest” design:

Final prototype:
 

Our first testers, Al and Henry:

 

Step 2: Final weapons

Sword

 

  • Kiana used Cuttle to create an SVG file, which she then sent to the FabLab for laser cutting.
  • We then wrapped the top portion in copper tape.
  • We soldered solid wire to the base of the sword.
  • We wrapped the wire, along with the grip of the sword, in electrical tape.
  • We attached fake gemstones along the guard of the sword for decoration.
  • We hot glued the cloth covering the solid wire onto the pommel of the sword.

 

We wanted to use silver, as that color is more often associated with swords, but the silver tape the lab had was not conductive.

Shield

  • Kiana used Cuttle to create an SVG file of two shields and two handles, which she then sent to the FabLab for laser cutting.
  • Professor Lee recommended adding triangular supports along the handle to ensure it would not fall off. Due to limited time, however, we used hot glue instead of wood, which was sturdy nonetheless.
  • We wrapped the entire shield in copper tape.
  • We soldered solid wire to the bottom of the shield.
  • We hot glued the cloth covering the solid wire onto the bottom of the shield.

Armor

  • With cardboard from our midterm, we cut two identical pieces for the armor.
  • We cut two identical strips of brown cloth for the shoulder piece.
  • We taped copper around the exterior of the armor. Electrical tape was attached on the inside for a better look and to prevent any possible shock (although highly unlikely).
  • We soldered solid wire to the back of the armor.
  • We hot glued the cloth covering the solid wire onto the back of the armor.

Lookback: While observing the players at the IMA Show, I found that the wood we used could be hazardous. As a result, some users were hurt or apprehensive during the duel. Although the wood was light, it was still hard enough to cause harm if users swung at their opponents with force. I would therefore propose changing the material of the weapons to a sturdy foam that holds its shape but is soft enough to avoid injury; this would have been more child-friendly for more aggressive users. Due to the requirement of using digital fabrication, however, we could not follow up on this idea. Other things we noticed during the final IMA Show:

  • Due to the background noise, it was difficult for users to hear the narrator. As a result, few participants were invested in the lore of the game, which left them confused about why there was an offensive and a defensive role. This project is best suited to a quiet environment.
  • I wanted to allow players to skip individual slides in the introduction rather than the whole sequence, so that they could still see the instructions at the end of that game state. Because of this setback, many players skipped the introduction entirely and did not grasp the rules. I would have liked to fix this video issue with more time.

Lookback: Because we had more time, we once again delegated tasks based on strengths and weaknesses (or, more accurately, familiarity with certain software): Kiana worked with Cuttle and I worked with Processing. This step also required trust in each other, since we were not working side by side. It built a greater connection between us as teammates, and I have become more open to working with people of different backgrounds and expertise.

Step 3: Stone

  • We used a big piece of cardboard from our midterm as the base of the stone.
  • Kiana created an SVG file of a box using a box generator which she then sent to the FabLab for laser cutting.

 

  • We made a smaller body with copper tape wrapped around it and solid wire soldered at the base. The solid wire was attached to pin 5.

 

  • We cut a rectangular hole in the top of the big box and attached the smaller box inside to create a cavity for the sword to slip into.
  • We hot glued bubble wrap on top to give the stone an organic shape.
  • We left one side open so we could access the inside; we planned to put the circuit in the box.
  • We covered the entire object in silver cloth to mimic a stone.
  • We hot glued some of the cloth onto the cardboard base to secure its position.



 

Step 4: Instructions

 

  • Kiana engraved words on a shield we had planned to discard, turning it into additional instructions for players. We noticed that people might not know which color, white or brown, belonged to which player, so we made a small riddle for users to solve before engaging with the game.

Step 5: Setup

  • A red tablecloth is placed on the table.
  • A monitor and speakers are placed on the table.
  • Mordred’s weapons are placed on the table.
  • Arthur’s rock and weapons are placed at the front left of the table.
  • The shield with additional instructions is placed on a chair on the right side.
  • When everything is ready, the USB cable is attached to the laptop running the Processing sketch. The laptop is placed under the table, out of the audience’s view.

 

IMA Show:

 

D. CONCLUSIONS

The video below is a demonstration of the final project:

 

“Excalibur” is an interactive two-player game that invites participants to explore their own movements at the intersection of technology and human perception, encouraging them to engage with the environment in a unique and surreal manner. In doing so, the project becomes a derivative of reality by making an illusion playable, channeling agency into new forms beyond the conventional screen. Guided by user input and a basic set of rules, the project invites people of all ages to play and create their own patterns of sound and outcomes. Through this interactive experience, “Excalibur” fosters a sense of community and shared creativity as participants collectively contribute to the game with the body as the channel for information. The connection is made quite literally: the users’ weapons close a circuit on contact, signaled by the sounds of clash and force. This was evident in the users’ engagement at the IMA Show, where the game’s popularity and entertainment brought laughter to the crowd. The project thus resides in a new realm of creative expression that transcends traditional boundaries between the human and technology, challenging conventional notions of a game. The project also aligned with the dialectic definition of interaction proposed by Chris Crawford: after receiving information from the users’ “extended organs” (their weapons), the game processes and interprets the data to produce variable results onscreen, whether sword slashes, character movement, or health reduction. In turn, the user sees the system’s response and takes further action, in a continuous loop. The dialogue is also multi-faceted: there is a dialogue between the players as well. To reinforce this interaction, Kiana and I would have liked to add more narrative and witty remarks to increase the humor of the game.
To do so, we would either buy a subscription to continue using ElevenLabs’ AI voice or hire a voice actor to play the narrator. Furthermore, to make the project more authentic, we would have drawn the media assets (images and video) ourselves, or commissioned someone to do so, instead of borrowing them from online sources. Overall, Kiana and I are both proud of our progress and dedication to this project. During our setbacks, we changed our old approach of troubleshooting independently to a more open-minded one: we asked for assistance from whomever, whenever, and wherever we needed it! In doing so, we have also come to appreciate the large social network that essentially holds our project together, one that comprises user-testing, office hours, one-on-one assistance, and even the support Kiana and I gave each other. I plan to carry this sentiment with me in my next steps on this IMA journey.

E. DISASSEMBLY

 

F. APPENDIX









 

G. REFERENCES

Crawford, Chris. The Art of Interactive Design. San Francisco: No Starch Press, 2002.

Norman, Don. The Design of Everyday Things. Cambridge, MA: The MIT Press, 2013.