Author Archives: Lang Qin

Week 1 - Lighting Project Research

Analysis of the Nautilus Light Installation


More at:  https://www.theatlantic.com/sponsored/nautilus/lincoln-nautilus/3076/

 

The Four Goals of Stage Lighting in Nautilus:

  1. Selective Visibility: Nautilus directs the audience’s attention through its interactive poles. By activating specific poles, participants can influence where and how the light and sound symphony unfolds, thereby focusing the experience on their interaction rather than the whole installation. However, spectators observing from the outside will experience a completely different scene.
  2. Mood: The installation creates a dynamic mood that shifts with participant interaction. The gentle glow from the poles and the accompanying melodies produce an ambiance of wonder and communal engagement. The mood shifts from serene to vibrant as more participants interact, embodying the natural light’s interaction with the installation. This mood transformation underscores the lighting’s role in evoking specific emotional states.
  3. Composition: Nautilus uses lighting to organize spaces within the installation, separating the interactive field of poles from the central viewing area. This composition invites participants to engage with the installation from within or to observe from a distance, creating a layered experience that blends the pier’s edge with the surrounding environment. The lighting design supports and enhances the installation’s spatial dynamics, emphasizing the division between the exterior and the interior.
  4. Revelation of Form: Through its interactive lighting, Nautilus reveals the form of each pole and of the collective installation. The lighting responds to touch by illuminating and casting shadows, which add dynamics to the experience. This interplay of light and shadow sculpts the installation, allowing it to emerge as a responsive, interactive object that participants can explore and shape.

The Four Properties of Stage Lighting in Nautilus:

  1. Color: The Nautilus features a spectrum of colors that respond to the tones generated by participant interaction. These colors reflect off the acrylic panels and illuminate the poles, enriching the visual and audio experiences.
  2. Intensity: The intensity of the lights within Nautilus varies with the strength of interaction, providing a visual feedback loop that encourages continued engagement. It also depends on the weather and the available natural light.
  3. Form: The strategic placement of the 95 interactive poles and the central viewing area dictates the installation’s form. The angles at which the lights are placed within each pole affect how participants perceive the installation, from the pulsing lights that emanate with each touch to the overall illumination of the space. This arrangement invites interaction, guiding participants through it.
  4. Movement: The Nautilus embodies movement in its changing light and sound responses to human touch. This movement is not just physical but temporal, as the installation evolves with the day and the changing natural light. The ability of the installation to sequence interactions over time, creating a layered chorus of light and sound, highlights the dynamic nature of movement within the installation.

 

Sold!-Documentation

Oct 4, 2023 


Idea Proposal: The scenario of a Christie’s auction

Characters: an auctioneer, a hammer, and a bidder


Material: wood, clay, metal wire


♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠

Oct 18, 2023

First, a recap of the scenario: it shows an auction, with one auctioneer dropping the hammer to close a deal and one bidder lifting a number paddle to bid.

Based on last week’s feedback, I added one more layer between the cams and the scenario’s platform to act as a bearing, constraining the moving rods to a relatively limited path.

To figure out the most effective way to make the characters, I sketched each of their components in Illustrator in advance, which gave me relative guidance while making them, even though practice is the only way to test assumptions.


I left the making of the platform as the last step: since the relative distance between the characters is fixed, the platform’s size should depend on the whole scenario, to avoid it being too big or too small. Thus, the validation of the movements was finished on my cardboard prototype.


Midway through making the characters, I found that there are two linkages in each arm, so more effort had to go into them. I noticed that foam clay, which I had prepared to use for the heads, could also be a good material for making the nuts and blocks. In my case, it was a good way to simplify the making, since my characters are small, and it leveraged the materials I already had.


One thing I wanted to challenge in this project is the timing relationship between the movements: for each fall of the hammer, the other character’s hand should shake three times. Even though I tried different cam shapes to achieve this time difference, the relationship is not apparent. Gears might be a more precise solution for this, as in the back-of-envelope example below. But in general, I’m satisfied with the assignment.
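
For example (my own back-of-envelope, not measured from the actual build): if the hammer’s camshaft carries a 36-tooth gear that drives a 12-tooth gear on the arm’s camshaft, the arm’s shaft turns 36 / 12 = 3 times per hammer revolution, so single-lobe cams on both shafts would give exactly three shakes per hammer fall.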

EmoSynth-Documentation

EmoSynth for Sensorium AI

EmoSynth: A combination of ‘Emotion’ and ‘Synthesizer’, highlighting the device’s ability to synthesize emotional expressions.

Designers

  • Lang Qin
  • Junru Chen
  • Emily Lei
  • Kassia Zheng
  • Muqing Wang

Final Presentation

<Click to see our video>

Project Description

We are the Sensorium AI Group from the Multisensory class. We designed a wearable device in collaboration with Sensorium Ex, Luke, Michael, and our professor, Lauren. To fulfill the design objectives of facilitating real-time artistic expression and modulation for performers with cerebral palsy and/or limited speech, our device allows the user to freely select the emotion and the level of emotion they want to express. Additionally, there is a selection of high and low audio to represent diverse genders, along with a reset button to prevent accidental touches during movement on stage. Each selection on the device is clearly denoted by an embedded LED hint to better inform users.
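
To make that interaction concrete, here is a minimal Arduino sketch of the logic, assuming plain push buttons, single LEDs, a level potentiometer, and a latching pitch switch; the pins, the four-emotion layout, and the serial output format are placeholders, not our final wiring:

```cpp
// Hypothetical layout: 4 emotion buttons with paired LED hints, a level
// potentiometer, a latching high/low pitch switch, and a reset button.
const int EMOTION_BTN[4] = {2, 3, 4, 5};
const int EMOTION_LED[4] = {8, 9, 10, 11};
const int LEVEL_POT    = A0;
const int PITCH_SWITCH = 6;
const int RESET_BTN    = 7;

int selectedEmotion = -1;   // -1 = nothing selected yet

void setup() {
  for (int i = 0; i < 4; i++) {
    pinMode(EMOTION_BTN[i], INPUT_PULLUP);
    pinMode(EMOTION_LED[i], OUTPUT);
  }
  pinMode(PITCH_SWITCH, INPUT_PULLUP);
  pinMode(RESET_BTN, INPUT_PULLUP);
  Serial.begin(9600);
}

void loop() {
  // Emotion selection: the chosen button's LED stays lit as the hint.
  for (int i = 0; i < 4; i++) {
    if (digitalRead(EMOTION_BTN[i]) == LOW) selectedEmotion = i;
    digitalWrite(EMOTION_LED[i], i == selectedEmotion ? HIGH : LOW);
  }

  // The reset button guards against accidental touches during stage movement.
  if (digitalRead(RESET_BTN) == LOW) selectedEmotion = -1;

  bool highPitch = (digitalRead(PITCH_SWITCH) == LOW);
  int level = map(analogRead(LEVEL_POT), 0, 1023, 0, 9);  // emotion level 0-9

  // Stream the state to the sound/visual side (e.g., MaxMSP over serial).
  Serial.print(selectedEmotion); Serial.print(',');
  Serial.print(level);           Serial.print(',');
  Serial.println(highPitch);
  delay(50);
}
```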

Timeline

  • 09/14/2023: Primary research (reference cases and inspiration)
  • 09/21/2023: Idea brainstorming and sketches
  • 09/28/2023: Finalized our project idea
  • 10/05/2023: Developed the first quick prototype
  • 10/12/2023: User testing with the quick prototype
  • 10/19/2023: Implementation of user testing results
  • 10/26/2023: Finalized the sensors and bought materials
  • 11/02/2023: Prototyping with the sensors (input signal)
  • 11/09/2023: Started to work on the visual and audio output
  • 11/16/2023: Finalized our soft material and the functions of each button
  • 11/30/2023: Connected flex sensors, lights, and buttons to the Arduino
  • 12/07/2023: Finalized the fabrication
  • 12/14/2023: Documentation and final testing/troubleshooting

Weekly Updates

Click here for our Google Slides Weekly Presentation link (66 pages).

Click here for our Final Google Slides Presentation link (15 pages).


09/14/23

Reference

Inspiration 1: http://xhslink.com/6NqCyu

Inspiration 2: https://www.media.mit.edu/projects/alterego/overview/

Inspiration 3: https://www.deafwest.org/

 

Other Possibilities:

Emotion AI: https://www.affectiva.com/about-affectiva/
EMOTIC Dataset: Context Analysis: https://s3.sunai.uoc.edu/emotic/index.html
Brainwave: https://prezi.com/p/fvxrqad30mjq/music-from-brainwaves/

09/21/2023

Background:

     

Takeaways from meeting with Luke

Based on our target user, Thomas, and the general audience, we have proposed two ideas: sign language translation and alternative emotional expression.

Link to the Miro board

“Translation” of sign language

Sketch:

Alternative Emotional Expression

Notes from Crit:

  • Invisible and visible design
  • Thomas’s frequent language/issues
  • Pre-set device? Technical issues.
  • A generated voice is more vivid than a machine-like voice.
  • How do we train the AI model?
  • Product or part of art/opera?
  • Connect to Thomas.

 

09/28/2023

10/05/2023


10/12/2023

Hypothesis

An educated guess on what you think the results of the study will be. 

If we can create a wearable device that can recognize sign language and analyze the user’s emotions based on an AI model, then it may help them communicate effectively with people who do not understand sign language. In the case of Sensorium AI, with pre-set movements, content, and voice output, it may allow Thomas to express the content using sign language as triggers, translating it into voice while he performs on stage.

Research Goals

  1. Help individuals with speech impairments communicate with others and express their emotions more effectively. 
  2. Specifically, focus on performers in opera with speech impairments to help them express emotions effectively and in a timely way on stage.

Methodology

Number of participants: 5
Location: NYC
Age range: not limited
Gender: not limited
Disability Identity: speech impairments
Experience: prefer people with performance experience

Procedure

Data collection: brainstorming, feasibility testing. Find more methods here.
Length: approximately 30 minutes
Location: Unknown

 

Testing Script

Informed Consent

Hi, my name is <XX>. Today, we would like to invite you to participate in a research study designed to explore the potential benefits of wearable devices in aiding non-verbal performers in conveying content and emotions on stage. Before you decide whether or not to participate, it is important for you to understand the purpose of the study, your role, and your rights as a participant.

Please let me know if you’d like a break, have any access requirements, or need any accommodations. There are no right or wrong answers, and we can stop at any point. Do I have your permission to video and audio record? Quotes will be anonymized, and all data will only be shared internally with my team, stored on a secure, password-protected cloud server, and deleted after the study is completed.

General questions:

  • What’s your first impression of this wearable device?
    1. Do you feel any barriers from this device?
    2. Will anything from this device affect the costumes and stage design?
    3. What is the safest way to have this device attached to you by considering your interactions with other performers on the stage?


User Task Feedback:

Mary:

It will be cool to wear it on stage.

There might be some kind of conflict between recognizing movements and the freedom of performers on the stage.

To make a wearable device work, I might need to move in a very specific way.

Maybe the speed of movement should be considered.

Cindy:

This looks very futuristic.

There might be some unintended activation.

Nicole:

The appearance of this device kind of aligns with the story’s background.

The device may make the performer concerned that their movements will damage the device so that they cannot perform as well as they used to.

I am concerned about the learning curve of this device. 

Are there any tips I need to learn or remember while wearing this device?

Josh:

Will sweating and makeup influence the circuit?

There’s a potential risk to bodily safety, since it’s an electronic device.

Fanny:

Is it a solo device for each individual? The construction of relationships is inevitable on the stage and in the opera; multi-player might be a concept you can dive into.

Edward:

I like the design because it expands the expression of dancing and movement on the stage. As a dancer who performs in large theaters, we are concerned about how to reach audiences far away from the stage. I think the design would help us express and visualize our emotions more dramatically. I am looking forward to your next step! The one suggestion I would like to make is to pay attention to comfort as well as the beauty of the design. It’s quite hard to imagine wearing a winter glove all the time while dancing. Find some breathable material and try it on yourself for several hours before you decide to use it.

10/19/2023

Comments on the form:

It will be cool to wear it on stage.

This looks very futuristic.

The appearance of this device kind of aligns with the story’s background.

Potential Constraints in the Performance

There might be some kind of conflict between recognizing movements and the freedom of performers on the stage.

To make a wearable device work, I might need to move in a very specific way.

The device may make the performer concerned that their movements will damage the device so that they cannot perform as well as they used to.

Damage Concerns

There might be some unintended activation.

Will sweating and makeup influence the circuit?

There’s a potential risk to bodily safety, since it’s an electronic device.

Learning Process

I am concerned about the learning curve of this device.

Are there any tips I need to learn or remember while wearing this device?

Our questions for the weekly meeting with Luke, Jerome, and Michael

  1. What’s going to happen in this opera? Such as movement and positioning…
  2. How many people are going to use this product?
  3. What’s the maximum range of movement going to be?
  4. Can this prototype be used in this opera with some movement preset into the performance?

Takeaway from meeting with Luke, Jerome and Michael

Regulation

Number of users: 10–15 performers

Suggestion:

Maybe the speed of movement should be considered.

Takeaway from in-person hack with Luke

We learned how to use MaxMSP to create output, which helped us get a better understanding of what the final prototype would be.

10/26/2023

Our Prototype

The main component in the circuit

lock/unlock

Reset

Slider

Reference link for prototype: Physical computing project from another website

Reference link for slider: YouTube video


In our weekly group meeting, we also bought materials for the final prototype, including different sensors and parts:

11/01/2023

11/07/2023

11/14/2023

11/19/2023

11/30/2023

12/07/2023

 

12/14/2023

This week, we filmed our documentation with the whole team.

5 W’s Chart

  • Background (Why): Upon receiving the mission, we got to know an incredible opera team called Sensorium Ex. To fulfill the design objectives of facilitating real-time artistic expression and modulation for performers with cerebral palsy and/or limited speech, we resolved to craft a user-controlled device aimed at alleviating the inconveniences experienced on the stage.
  • Impressions (What): Anticipating the availability of pre-existing AAC devices, we fashioned a wearable voice-emotion synthesizer with a soft interface. In our design process, we considered various factors that may influence human voices, encompassing aspects such as gender, age, emotions, accents, and more.
  • Events (When): This device is designed to be both accessible and user-friendly, requiring no prolonged learning curve. Furthermore, we considered sizing and explored diverse placement methods, culminating in the development of an adjustable extension fabric. Consequently, it can be comfortably worn on the arms, legs, or wheelchair handles, accommodating a spectrum of usage scenarios.
  • Sensory Elements (How): In the final stages, we prototyped pivotal components, including pitch selections representing diverse genders, visualizations of emotional expressions as placeholders for the yet-to-be-incorporated model, and a reset button to forestall inadvertent activation during on-stage movement or dancing. Our design showcases a highly tactile and instinctive control system, empowering users to effortlessly navigate the device without the need for visual attention. Each selection on the device is clearly denoted by an embedded LED hint.
  • Designer/User (Who/Whom): The designers are Kassia Zheng, Junru Chen, Lang Qin, Emily Lei, and Muqing Wang. Lang did the coding associated with Arduino and electronic connections. Junru and Kassia handled the fabrication, connecting sensors and sewing all the materials into one piece. Emily was responsible for the visual output using TouchDesigner, while Muqing learned MaxMSP and actively communicated with Luke and Michael. As a team, we worked together on the final output and documentation.

Workflow

  

Final Sketch

Code

<Link to code/ Github>

Sources

<Design brief>

<Sensorium Ex website>

<Design research from summer>

Egggo-Documentation

Oct 15, 2023

Junru and I worked together to start brainstorming for our midterm sketch this week. We plan to create an egg timer that can time eggs for different levels of doneness. It will also display a real-time clock, essentially providing a clock for the kitchen space.
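
As a starting point, here is a minimal sketch of the timer core we have in mind, assuming one button cycles the doneness level, another starts the countdown, and the cooking times are rough placeholders rather than tested values:

```cpp
// Hypothetical egg-timer core: three doneness levels, non-blocking countdown.
const int MODE_BTN  = 2;   // cycles soft / medium / hard while idle
const int START_BTN = 3;   // starts the countdown
const int BUZZER    = 8;

const unsigned long COOK_MS[3] = {
  6UL * 60UL * 1000UL,    // soft:   ~6 min (placeholder)
  8UL * 60UL * 1000UL,    // medium: ~8 min
  11UL * 60UL * 1000UL    // hard:  ~11 min
};

int doneness = 0;
unsigned long startTime = 0;
bool running = false;

void setup() {
  pinMode(MODE_BTN, INPUT_PULLUP);
  pinMode(START_BTN, INPUT_PULLUP);
  pinMode(BUZZER, OUTPUT);
}

void loop() {
  if (!running) {
    if (digitalRead(MODE_BTN) == LOW) {
      doneness = (doneness + 1) % 3;   // pick the next doneness level
      delay(250);                      // crude debounce
    }
    if (digitalRead(START_BTN) == LOW) {
      running = true;
      startTime = millis();
    }
  } else if (millis() - startTime >= COOK_MS[doneness]) {
    tone(BUZZER, 880, 1000);           // egg is done
    running = false;
  }
}
```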


♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠

Oct 29, 2023

Last week, we finished the majority of the code, and we will continue to work on the fabrication this week. We made the egg and pot by 3D printing and the cooking bench (box) by laser cutting, and we added LEDs to the interface to show the timing process more directly.


We soldered the buttons and LEDs, installed them on the box first, and then plugged the wires back into the breadboard.


Code

Demo:

 

Crimson Tide-Documentation

Apr 18, 2023 

Basic Idea:

I want to use tide changes from the NOAA API.

I want to make an art piece that shows this data, and its physical form could relate abstractly to the female period.

Life on Earth depends on tidal rhythms, and tides depend on the moon. 

Female menstruation is a miniature of the rhythmic movement of life on Earth.

The tidal changes are shown in the number and colors of the LEDs, and the current direction is shown by a servo. I am still thinking about what the exterior should look like; it’s supposed to be a kinetic piece right now.

This week I explored more of how to physically show the pretty analogy: tides are to the ocean as menstruation is to women. I made a sketch and thought about the mechanical issues. I met Tom to discuss the whole idea and a possible solution for the wiring problem with the continuous servo. And then I gave up. It’s still a good idea that a servo gives a sense of growth, but truly, it’s too hard for me, at least right now, whether with a gear system or the audio jack trick. But it should still rotate; a positional servo is not bad.

The installation is supposed to be made with acrylic sheets and wood; the LEDs attach to discs to show the tide changes, and the servo’s rotation is driven by the current direction. I am also trying to add data on the moon phase, which would help visitors think about the analogy between the tide and menstruation. But something changed after this week.


I explored lots of stuff this week.

@ HTTP delay

I got the tide data quite smoothly from NOAA, and I added LEDs as output to test. However, the animation was not good because of the delay of the HTTP requests. Tom told me about a good library [Scheduler] that runs multiple loops together, one for the HTTP requests and another for the output, which runs locally. I can set the frequency of requests; during the interval, the animation works pretty smoothly, with a bit of delay when the data refreshes, but it’s not a big deal.
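
A minimal sketch of the two-loop idea, assuming the Arduino Scheduler library (SAM/SAMD boards such as the Nano 33 IoT); the function bodies are placeholders for the real NOAA request and LED animation:

```cpp
#include <Scheduler.h>

void fetchTideData() {
  // placeholder: the blocking HTTP GET to the NOAA tides API would go here
}

void updateAnimation() {
  // placeholder: compute and show one frame of the LED animation
}

// Second loop: poll NOAA on its own schedule without blocking the animation.
void fetchLoop() {
  fetchTideData();
  delay(6UL * 60UL * 1000UL);   // wait ~6 minutes between requests
}

void setup() {
  Serial.begin(9600);
  Scheduler.startLoop(fetchLoop);
}

// Main loop: keep the animation running; delay() yields to the fetch loop.
void loop() {
  updateAnimation();
  delay(20);
}
```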

@ LED strip animation

In the beginning, I thought that making an LED animation would not take long, since there are lots of LED strip effect tutorials on YouTube, but I was wrong. The effects look cool, but they are not the feeling I want: the peaceful and gentle one. Then I dove into the world of LED animation; it took me hours to figure out the effects I wanted. I will upload the code later.
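
In the meantime, here is a minimal sketch of the kind of gentle “breathing” effect I was after, using the Adafruit NeoPixel library; the pin, pixel count, and watery-blue palette are placeholders rather than the piece’s final code:

```cpp
#include <Adafruit_NeoPixel.h>

const int LED_PIN  = 6;
const int NUM_LEDS = 160;
Adafruit_NeoPixel strip(NUM_LEDS, LED_PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  strip.begin();
  strip.show();
}

void loop() {
  // Slow sine "breath": brightness swells and fades over ~6 seconds.
  float phase = (millis() % 6000UL) / 6000.0 * TWO_PI;
  float level = (sin(phase) + 1.0) / 2.0;            // 0.0 .. 1.0
  uint8_t b = 20 + (uint8_t)(level * 120);           // soft floor, never fully off
  for (int i = 0; i < NUM_LEDS; i++) {
    strip.setPixelColor(i, strip.Color(0, b / 2, b));  // watery blue-green
  }
  strip.show();
  delay(20);
}
```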

 

Better in this video: 

@ Using data to drive the servo, or just run it locally?

Tom suggested that I use only the turn of the tide, falling from the highest water level and rising from the lowest water level, to trigger the change of the servo’s direction; the same logic applies when I get the direction data. But it takes 6 hours and 12 minutes to change direction once, so the servo moves so slowly that people cannot even notice it. The main reason I want to use the servo is the sense of growth from its rotation, so I decided to run the servo locally without any real-time data.
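
A minimal sketch of the local, data-free motion, assuming a standard positional servo on pin 9; the sweep speed is my own placeholder:

```cpp
#include <Servo.h>

Servo servo;
const int SERVO_PIN = 9;

void setup() {
  servo.attach(SERVO_PIN);
}

void loop() {
  // Each full sweep takes about a minute: slow, but visibly alive.
  for (int angle = 0; angle <= 180; angle++) {
    servo.write(angle);
    delay(333);
  }
  for (int angle = 180; angle >= 0; angle--) {
    servo.write(angle);
    delay(333);
  }
}
```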

@ Jitter of servo

Since I plan to use about 160 LEDs, which need a lot of current at 5 volts, the servo gets super jittery when all of them work at the same time. I tried adding capacitors and a servo driver, but the jitter was still there. In the end, the jitter was solved as a side effect of the API issues (see below).
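
For a rough sense of why power is the problem, a back-of-envelope current estimate (assuming WS2812-class pixels at the commonly quoted worst case of about 60 mA each at full white):

```cpp
// Back-of-envelope current budget for the strip.
const int   NUM_LEDS        = 160;
const float MA_PER_LED_MAX  = 60.0;   // worst case: full white (assumption)
const float MA_PER_LED_ANIM = 15.0;   // rough guess for a dim, blue-ish animation

void setup() {
  Serial.begin(9600);
  Serial.print("Worst case: ");
  Serial.print(NUM_LEDS * MA_PER_LED_MAX / 1000.0);   // 9.6 A
  Serial.println(" A");
  Serial.print("Typical animation: ");
  Serial.print(NUM_LEDS * MA_PER_LED_ANIM / 1000.0);  // 2.4 A
  Serial.println(" A");
}

void loop() {}
```

Even a few amps of switching load on a shared 5 V rail is enough to make a servo twitch, which matches what I saw.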

@ Moon phase data

I found a website named Stormglass that can provide astronomy data. Even though the free account can only make 10 requests a day, that’s enough for me. But the data is a little strange; it only shows 4 phases in one data object. The closest data shows New moon, First quarter, Full moon, or Third quarter, while the current data shows Waxing crescent, Waxing gibbous, Waning gibbous, or Waning crescent. The documentation says there may be 8 phases in the current data, but apparently not…



I still want to get the data for all 8 phases, so I am trying to learn the moon data myself. The website [https://www.moongiant.com/] is really good, and it makes it easy to pick up some basic moon-related knowledge. The moon’s illumination also gives hints about the moon’s phase; the only problem is that it has the same value between waxing and waning. Thus, I request both the current position and the moon illumination to make sure that I can derive all 8 phases.
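
A minimal sketch of that mapping, with thresholds that are my own guesses rather than astronomical definitions:

```cpp
// Map illumination (0.0 = new, 1.0 = full) plus a waxing/waning flag
// to 8 named phases. The 0.04 / 0.46 / 0.54 / 0.96 cutoffs are assumptions.
const char* moonPhase(float illumination, bool waxing) {
  if (illumination < 0.04) return "New moon";
  if (illumination > 0.96) return "Full moon";
  if (waxing) {
    if (illumination < 0.46) return "Waxing crescent";
    if (illumination < 0.54) return "First quarter";
    return "Waxing gibbous";
  } else {
    if (illumination > 0.54) return "Waning gibbous";
    if (illumination > 0.46) return "Third quarter";
    return "Waning crescent";
  }
}

void setup() {
  Serial.begin(9600);
  Serial.println(moonPhase(0.72, true));   // prints "Waxing gibbous"
}

void loop() {}
```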

@ Two APIs in one sketch

Since I make requests to two servers, I coded them separately at the beginning, and each worked well; they still worked well after I put them together. But something went wrong after I added all of my physical output: nearly 160 LEDs and a high-torque 35 kg servo. NOAA still gives me data, but Stormglass just waits for a long time and returns status code -3, a timeout. Multiple APIs in one sketch should be fine; I just wonder about the reason…

The interval setting for each API is also annoying: a request to NOAA every 6 minutes and a request to Stormglass once a day. For the intervals, I simply added a delay, which won’t work in the same sketch. There should be another way to fix it…

Right now, the solution I found is to use two Arduinos and make requests to the two APIs separately, and with the servo included in the moon phase code, the jitter problem just solved itself naturally. Maybe using two Arduinos is stupid, but it did solve something 😂 At least right now, everything is going well…

Next week I will work on the mechanical parts, since the 1/4-inch acrylic sheet is quite heavy and the sketch I am drawing right now puts requirements on the structure. I need to test and adjust, and finding a material for the base that is heavy enough to stabilize the whole installation is also important.


 

♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠

Apr 25, 2023

I made a mistake last week in figuring out the API: I hadn’t used Tom’s demo, which gives a straightforward way to set the interval time. I noticed it right after this week’s class and immediately removed the stupid delay. Here is my code.
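
The core of the fix is the standard millis()-based interval pattern; a minimal sketch, with the request functions as placeholders for my real NOAA and Stormglass calls:

```cpp
unsigned long lastTideRequest = 0;
unsigned long lastMoonRequest = 0;
const unsigned long TIDE_INTERVAL = 6UL * 60UL * 1000UL;         // every 6 minutes
const unsigned long MOON_INTERVAL = 24UL * 60UL * 60UL * 1000UL; // once a day

void requestTideData() { /* placeholder: NOAA request */ }
void requestMoonData() { /* placeholder: Stormglass request */ }

void setup() {
  Serial.begin(9600);
}

void loop() {
  unsigned long now = millis();
  if (now - lastTideRequest >= TIDE_INTERVAL) {
    lastTideRequest = now;
    requestTideData();
  }
  if (now - lastMoonRequest >= MOON_INTERVAL) {
    lastMoonRequest = now;
    requestMoonData();
  }
  // The animation keeps running on every pass; nothing blocks on delay().
}
```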

Collecting all the API data I need on a server and then passing it to the Arduino would be super helpful, but since the moon phase API only allows 10 requests per day, there are not many chances for me to test the server code. I plan to continue using 2 Arduino Nanos this time; if I can find a better API, I will switch to the server approach.

This week, I worked on the structure of the installation. I 3D printed some fixtures for the LED strips and servo, and engraved some patterns on acrylic to see what it looks like when the light goes through.

    

 

♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠

May 4, 2023

I checked the astronomical data and found that the interval between moon phases is not constant, even though we use a constant interval to calculate the dates. Since it is more important for this piece to relate to nature, I am sticking with the moon phase API.

The user experience is always something I keep in mind. Compared with a product, subjective expression should be the priority for an art piece rather than the UX, but that doesn’t mean the UX is not important. On the contrary, the UX becomes much harder this way. I want viewers to know what the piece talks about, but instead of trying to please viewers and seek their approval, I hope they might move from confusion to enlightenment. Naturally, the UX design has to differ from piece to piece, and common user research methodologies might not apply.

For this piece, I chose the name “Crimson Tide,” and I prepared a poster to “explain” the analogy between tide and menstruation. I chose excerpts from poetry and literature as references rather than a direct description of the inspiration. Even though both the text and the installation are relatively abstract, they can explain each other. I assume this is an effective approach, but let’s see what happens at the spring show.


Final Demo: