Movement exploration: wearable control prototype

The wearable was co-created with Zoe Margolis

Dance of the Perlins (an Exploration of Movement)

Dance of the Perlins is a performance that explores the interaction of movement, choreography, and dancer-triggered particle flow. In the performance, a dancer wears a custom-made wearable consisting of two gloves with built-in sensors. The movement data is wirelessly transmitted to a p5 particle flow visualizer, triggering different visuals that evolve as the choreography progresses, thereby creating the Dance of the Perlins (a reference to the Perlin noise used as the basis for the particle movement). The first live performance with the wearable prototype took place at the ITP Winter Show 2022, where the visuals were projected onto the screen behind the dancer:

Music credit: PROBASS ∆ HARDI – Good Evening (Where Are You From?)

Project details 

Prototype development

The first prototype consisted of one Adafruit LIS3DH triaxial accelerometer that was taped to the dancer’s wrist, with wires extending to a breadboard with an Arduino Nano 33 IoT (shown below).

This prototype was used to gather acceleration and movement data and to identify recognizable patterns associated with specific movements.

After gathering the data, I started training a Long Short-Term Memory (LSTM) network using TensorFlow to detect the moves and trigger controls in real time. The preliminary results were promising, showing that the algorithm could clearly recognize two of the moves we had gathered data for. However, due to project deadlines, I left the full development of the ML algorithm for later. Ultimately, we ended up hard-coding the triggers, tying them to specific accelerations or changes in acceleration.
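
For reference, those hard-coded triggers boil down to simple threshold checks on the incoming accelerations. The snippet below is a minimal illustration of the idea on the p5 side; the threshold values are placeholders, and the console.log calls stand in for the actual particle-flow controls.

    // Minimal sketch of threshold-based triggers (illustrative thresholds, not the real ones).
    // ax, ay, az are the latest accelerations parsed from the glove data.
    let prevMag = 0;

    function checkTriggers(ax, ay, az) {
      const mag = Math.sqrt(ax * ax + ay * ay + az * az); // overall acceleration magnitude
      const jerk = mag - prevMag;                         // change since the previous reading
      prevMag = mag;

      if (mag > 25) {
        console.log("strong move: add particles");          // stand-in for the particle spawner
      } else if (jerk < -15) {
        console.log("sudden stop: change flow direction");  // stand-in for a flow-field change
      }
    }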

Following the initial user testing, we decided to add more controls independent of the accelerations. Inspired by our classmate Dre's project, we added two capacitive sensors made of stripped wires on the left hand. Finally, we soldered the wires extending from the sensors on the two gloves to a protoboard and connected it to a 9V battery, so that the device could fit in a pouch on the dancer's lower back. And voilà, the final prototype:

The prototype worked quite well, although it was difficult to adjust on the fly :).

The Arduino Code used in this project can be found on my GitHub.

Visualization

Sometime in November 2022, I visited Wolfgang Tillmans' exhibit at MoMA and was mesmerized by his work from the Freischwimmer series (below). The series consists of camera-less pictures that he created by shining flashlights at photographic paper in the darkroom. This work really resonated with me, since I wanted to create visuals that captured the feeling of effortless movement and abstract flow.

The visualization engine that I ultimately created was largely inspired by Dan Shiffman's tutorial on Perlin Noise Flow Fields and was originally written in the p5 web editor. The original sketch (link) is controlled by 10 keys on the keyboard, each corresponding to a change in the perlins' movement (a stripped-down sketch of the key handling follows the list). The following 10 commands were used:

      • "-"/"=": remove/add particles to the sketch (200 at a time)
      • "a"/"d": decrease/increase the speed of the particles
      • "s"/"w": downward/upward movement of the particles
      • "p": all-directional flow field (default vectors)
      • "r": random direction of the particle movement
      • "c": chaotic movement (rapidly changing vector field direction)
      • "q": color scheme change from white to black background and vice versa
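
Here is a stripped-down sketch of how such a keyPressed() handler can map those keys onto flow-field parameters. The variable names (particleCount, speedScale, flowMode) are illustrative placeholders rather than the names in the actual sketch, and the handler is meant to sit alongside the flow-field draw loop.

    // Stripped-down key mapping for the flow-field controls (variable names are illustrative).
    let particleCount = 1000;  // how many particles the flow field draws
    let speedScale = 1;        // multiplier on particle velocity
    let flowMode = "perlin";   // "perlin", "up", "down", "random", or "chaos"
    let darkBackground = true;

    function keyPressed() {
      if (key === "-") particleCount = Math.max(0, particleCount - 200);
      if (key === "=") particleCount += 200;
      if (key === "a") speedScale *= 0.8;
      if (key === "d") speedScale *= 1.25;
      if (key === "s") flowMode = "down";
      if (key === "w") flowMode = "up";
      if (key === "p") flowMode = "perlin";  // default Perlin-noise vectors
      if (key === "r") flowMode = "random";
      if (key === "c") flowMode = "chaos";   // rapidly changing vector field
      if (key === "q") darkBackground = !darkBackground;
    }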

The snapshot below is one of the visualizations created using the control keys.

You can try the visualizer by clicking the interface below, adding particles with the “+” key, and using other keys for control:

The wearable prototype was connected to the p5 sketch via Bluetooth, where the acceleration and capacitive sensor data triggered the different controls. The final files for the p5 sketch can be found on my GitHub.
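
Below is a rough sketch of how such a Bluetooth link can be wired up with the p5.ble.js library, assuming the Arduino exposes a single string characteristic containing comma-separated sensor values; the UUID is a placeholder, and the logging stands in for the trigger logic described above.

    // Rough sketch of the BLE link using p5.ble.js (the UUID is a placeholder).
    const serviceUuid = "19b10000-e8f2-537e-4f6c-d104768a1214";
    let myBLE;

    function setup() {
      createCanvas(600, 600);
      myBLE = new p5ble();
    }

    function mousePressed() {
      // Web Bluetooth requires a user gesture to start the connection
      myBLE.connect(serviceUuid, gotCharacteristics);
    }

    function gotCharacteristics(error, characteristics) {
      if (error) { console.log(error); return; }
      // Subscribe so every new sensor string is pushed to the sketch
      myBLE.startNotifications(characteristics[0], handleSensorString, "string");
    }

    function handleSensorString(data) {
      // Expected format: "t,x1,y1,z1,x2,y2,z2"
      const v = data.split(",").map(Number);
      if (v.length >= 7) {
        // In the real sketch these values drive the flow-field controls;
        // here we just log the right-hand accelerations.
        console.log("right hand:", v[1], v[2], v[3]);
      }
    }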

Choreography and performance

The final puzzle piece was to test the prototype with the visualization engine using movement. Ultimately, I wanted to create a choreography that existed in harmony with the visuals and their evolution.

Choreographing with the wearable was an interesting experience, to say the least… The prototype felt like a separate intelligent entity, often misbehaving and not reacting to my moves the way I wanted. I had to be patient and adapt my moves so that the visuals behaved and worked with me. I also noticed that many of my moves didn't work with the visuals, which constrained my freedom to dance in a way I did not expect. The following video is a compilation of footage from testing and the choreography process.

Finally, after spending some hours with the device, I was able to choreograph a piece in which the gloves worked with me and the visuals felt in harmony with the movement. As per Tom Igoe's advice, I also had Zoe, my project partner, use the manual keyboard controls in case my movements did not trigger certain visuals.

All in all, I performed the piece eight times during the ITP Winter Show 2022. At times the wearable behaved better than at others, but regardless I loved each of the performances and developed a very special relationship with the wearable.

Takeaways and future development

Many questions came up during the project's development, especially regarding the interaction between the dancer and the wearable. While we designed the wearable to control the visuals, in a way the dancer was also controlled, and constrained, by the wearable. So a question arises: does the dancer control the visuals, or do the visuals control the dancer? This power dynamic also changes depending on whether the dancer is able to see the visuals.

The process of choreography also felt different: I did not feel like I was choreographing by myself, but rather that I had a moody and often constraining partner. The Perlin visualizer ended up dictating the choice of music, moves, and performance style.

There were also some unresolved technical issues, primarily the intermittent dropping of the Bluetooth signal, which sometimes disrupted the performance; in those cases the stage manager would take over the controls (thank you, Zoe!).

I would like to continue developing this project in several directions. First, I want to finish incorporating machine learning for movement recognition. I hope to introduce controls based not only on the dancer's movement itself but also to experiment with recognizing the quality, rhythm, and emotion of the movement. Another direction I would like to pursue is to further explore the question of control between the algorithm and the human, through improvisation by both the dancer and the algorithm.

your morning tea with “Zach the Life Influencer” – augmented reality

Created with Mat Olson

It’s time to have your morning tea and get your daily life balance advice from Zach the Life Influencer.

Try out the full Zach the Life Influencer Adobe Aero experience on your own mobile device [link] (you must have Adobe Aero installed).


Written by Mat Olson

Zach the Life Influencer: the Process

Tired all the time? Feeling stressed out, spread thin, unhappy? Maybe what you need are a handful of tips on how to live a better life, all delivered by a cool social media influencer who seems to have no problems in their life at all.

Enter Zach.

For this Hypercinema project, our two-person group (Mary Mark and Mat Olson) began with an idea of the animated look and feel we wanted the central character to have, then worked backwards from those touchstones to define who would actually be starring in this AR overlay. Here's Mary on who Zach is:

Zach is a life balance influencer who feeds his followers inspirational quotes. His strategy for acquiring followers is strictly based on clickability and likes, with things like '10 hacks to improve your life'. He chooses quotes that draw in the biggest audience, thinking very little about their content. However, when Zach tries to follow his own advice, he crumbles, as much of his advice is in opposition with itself. Zach starts with a big ego, which is crushed under the 'life balance' expectations.

This idea of Zach collapsing under the accumulated weight of all his overused mottos came about as we explored the possibilities and constraints of animating a 2D cut-out marionette puppet. We began with a test puppet made from photos of Mat split into layers for each limb and joint we wanted to control and contort. Rather than have a flexible puppet with stretchy mesh-enabled movements, we wanted to stick to the constraints at these joints, which generally make for more unnatural and painful movements as you exaggerate them further.

The test puppet. Note the points jutting out at the joints, which we improved with Zach.

The most absurd, precarious positions we put our test puppet into led to our desire to create some tension in the piece: we wanted our character to gradually contort into these increasingly difficult poses, then release that tension with a collapse at the end. In our ideation from there, we batted around a few ideas. Maybe this character was a dancer, overexerting themselves between poses. Maybe the character is a person struggling to keep up with the demands of life.

Tweaking that idea gave us Zach. Instead of a character struggling under abstract visual or textual representations of hardships, we made the character a person who can't hold up the weight of pithy advice meant to help them live a better life: someone who projects a sort of blandly aspirational confidence, but who ultimately fails at holding up all their simplistic and occasionally contradictory advice.

The capture and animation process

We enlisted fellow ITP '24 student Josh Zhong to become our model for Zach. Mat took the photographs during the week of Halloween using a Canon EOS R5 and a tripod borrowed from the ER; we learned from the test puppet that photos at smartphone resolutions were not as nice to work with when isolating the model from the background.

It should be noted that Josh was a total pro. He helped with checkout from the ER, had no problem keeping the awkward pose with his legs turned out to the sides, and took direction for Zach’s progression of pained facial expressions with ease.

The photo of the base pose for the puppet.

With photos in hand, we began to split up the work of making the puppet and getting it animation-ready. Those steps proceeded roughly as follows:

  • Mat cut the main photo of Josh along with all his facial expressions out from the background using Photoshop
  • Mary divided those photos into separate layers for each joint and extended elements where necessary (e.g. lengthening the neck to give us more leeway in animation)
  • Mat rigged up a skeleton with the Duik Bassel plugin (this video tutorial from Jake In Motion was most helpful) to aid in animation
  • Mary began learning the ins and outs of working with Aero sequences

Duik’s built-in IK controllers were really helpful in reducing the overall complexity of animating Zach’s movements, freeing us from having to keyframe almost every joint with each movement. Still, it wasn’t without its own weird limitations, and the rigging step had to be repeated a few times since changing the dimensions of the composition would irreversibly alter the relationships between joints in the puppet.

The storyboard of Zach’s arc from confident to crumpled mess is all Mary. The list of Zach’s tips was initially devised by Mat, pulling inspiration from various online articles about cliched advice.

Our interaction pattern for the animation is pretty straightforward: tapping through it, it tells a story with a beginning, middle, and an end (one that hopefully feels pretty final, given how defeated Zach looks).

Diagramming it with text, it flows like so:

    1. Load the AR overlay
    2. Tap to advance through the introductory screen
    3. Aero swaps the intro screen for part 1 of the animation
    4. Part 1 plays (Zach effortlessly holds two pieces of advice)
    5. Tap to advance to part 2
    6. Aero swaps parts 1 and 2
    7. Part 2 plays (Zach’s ok, but has to use his foot)
    8. Tap to advance to part 3
    9. Aero swaps parts 2 and 3
    10. Part 3 plays (Zach clearly begins to struggle)
    11. Tap to advance to part 4
    12. Aero swaps part 3 and 4
    13. Part 4 plays (Zach collapses)

After each of these 4 parts plays, we originally wanted to include an idle animation that would loop between the discrete parts. We split the 8 tips across 4 sections of animation for this reason: we'd essentially have 7 sections of animation, 4 main parts and 3 idle loops. Making an idle loop between each of the 8 tips would've meant making more than twice as many chunks of animation as we ultimately did.

We ended up deciding against using the idle animations for a couple of reasons. For one, they're a little too animated: if someone is going through the character overlay slowly, it might take them a while to realize that they've entered an idle loop and should tap again to advance the animation. Also, in some limited testing with other ITP students, some just wanted to keep tapping through, which would mean the idle animations would likely never be seen.

One of the unused idle animations.

Some more explicit on-screen controls might be a way of solving this tapping behavior problem and could justify adding the idle animations back in: a big button, for instance, that users would need to press in order to drop the next piece of advice on Zach.

Then again, adding more mechanical controls to this piece could detract from the feel of it. Zach is a character inspired by the kinds of people who might actually go around calling themselves life influencers, figures who exist in the public eye largely by way of videos housed in Story carousels and algorithmically managed feeds. This is a guy whose content you might otherwise be compelled to tap or swipe through in a hurry; now he's in your physical space, and in this more immediate context, we present a physical metaphor for how trying to follow all kinds of vapid life advice might pan out. If a guy like this were real, not a character we made up, a weird AR puppet, or a persona crafted to rack up followers to sell ads against, what good would the kind of advice he peddles really be to him?

Original post –11/10/22

week 10 – final project development

Co-created with Zoe Margolis

For our final project, Zoe and I would like to create a device that lets dancers control the visualizations that accompany their performance through movement. One of the inspirations we explored was the work of Lisa Jamhoury. When we started thinking about such a control device, we first considered using tracking sensors and software such as PoseNet or Kinect, but we decided to go a simpler way and use accelerometers. There is something very beautiful about having the piece of technology directly on your body, so that you can physically feel connected to the control device. I also personally love signal processing and time series analysis, and the seeming anonymity that comes with it, i.e. no camera tracking, no body posture or face recognition, just acceleration. Maybe it's something about having worked with earthquakes for almost 10 years that draws me to accelerometers; after all, earthquake records are produced using triaxial accelerometers.

The user of this device is a performer, and they will have two accelerometers attached to their wrists. At a later point we might consider introducing more sensors, but for now we are starting simple. The wires will run underneath a tight bodysuit to the Arduino and a battery housed in a mic belt. The signal from the Arduino will be sent via Bluetooth to a computer, which will create visuals from the processed signal in either p5 or TouchDesigner.

Our workplan for building a prototype and the progress so far is as follows:

  1. Fabricate the circuit using accelerometers, Arduino, and a battery.
    So far we have been able to set up one LIS3DH triple-axis accelerometer with an Arduino and read the x, y, z accelerations using I2C synchronous communication. We have not yet figured out how to connect the second accelerometer. The data is currently output as a string for each reading; for example, for the two accelerometers the output will be a timestamp and the accelerations from each accelerometer: "t,x1,y1,z1,x2,y2,z2", e.g. "1,-0.84,4.92,8.67,-0.84,4.92,8.67".
  2. Send the data from accelerometers over Bluetooth to p5.
    We have been able to set up the Arduino as a peripheral device and receive one string via Bluetooth in p5. However, at the moment we are having trouble receiving updated data over Bluetooth.
  3. Gather accelerometer data along with movement ‘labels’ to build a ML algorithm to identify moves.
    We have been able to gather data for two simple and distinct moves of the right hand as a start. The moves are an arm raise and a circular 'door opening' motion. To produce labeled time-series data that contains both the accelerations and the move 'labels', we added two buttons to the circuit, each representing a move. The following video documents some of the data gathering process.

    The output file is a .csv file that contains the following data: "t, x1, y1, z1, acc1 move1, acc1 move2, x2, y2, z2, acc2 move1, acc2 move2", e.g. "1129,-9.32,19.58,9.92,0,1,-9.32,19.58,9.92,0,0". So far we have gathered roughly 4,000 data points for the right wrist, which is enough to start experimenting with an ML model. The triple-axis accelerometer data is shown below, where you can see clear peaks when the moves were made.

    recording of the triaxial accelerometer

  4. Train a machine learning model to recognize the moves in real-time.
    This will be one of the main focus areas over the next two weeks. We are going to try to build a Long Short-Term Memory (LSTM) network using TensorFlow, because LSTMs tend to work well with sequential data (a minimal sketch of such a model follows this list).
  5. Create a p5 or Touch Designer visualization, where certain moves trigger changes.
    This part of the project hasn't been started, but we did find some visual inspiration in MoMA's Wolfgang Tillmans exhibit, below. I think we can recreate a similar flowing feeling using Perlin noise in p5.
    picture by Wolfgang Tillmans at MoMA
  6. Create a more permanent version of the prototype to be worn for a Winter Show performance.
    This part of the project hasn’t been started yet.
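
As mentioned in step 4, here is a minimal sketch of the kind of LSTM classifier we have in mind, written in TensorFlow.js for illustration (we may end up using the Python API instead). The window length, layer sizes, and training settings are placeholder values, and the code assumes the windows have already been cut from the labeled CSV described in step 3.

    import * as tf from "@tensorflow/tfjs";

    // Minimal LSTM classifier sketch: windows of accelerometer samples -> one of two moves.
    // Window length, layer sizes, and epochs are placeholder values, not tuned results.
    const WINDOW = 50;   // samples per window
    const FEATURES = 3;  // x, y, z acceleration
    const MOVES = 2;     // arm raise, circular "door opening"

    function buildModel() {
      const model = tf.sequential();
      model.add(tf.layers.lstm({ units: 32, inputShape: [WINDOW, FEATURES] }));
      model.add(tf.layers.dense({ units: MOVES, activation: "softmax" }));
      model.compile({
        optimizer: "adam",
        loss: "categoricalCrossentropy",
        metrics: ["accuracy"],
      });
      return model;
    }

    async function train(windows, labels) {
      // windows: [numWindows, WINDOW, FEATURES]; labels: one-hot [numWindows, MOVES]
      const xs = tf.tensor3d(windows);
      const ys = tf.tensor2d(labels);
      const model = buildModel();
      await model.fit(xs, ys, { epochs: 20, validationSplit: 0.2 });
      return model;
    }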

week 9 – bluetooth and accelerometer lab

This week our ultimate goal was to connect an external accelerometer to the Arduino and transmit the X, Y, Z accelerations over Bluetooth. We were not able to complete the whole pipeline, but we had partial success with both tasks.

Accelerometer

For the accelerometer, we used the LIS3DH triple-axis accelerometer and followed an online tutorial and an example in the Adafruit LIS3DH library (link). We were successful in reading and plotting the signal, but a few questions remain:

!questions/comments!

  • What is the difference between I2C and SPI wiring? Which one is better for our application?
  • Is this the right accelerometer for us to use or are there better ones?
  • How many accelerometers can we use with one Arduino? Is it possible to use multiple accelerometers with I2C wiring?

Bluetooth

Bluetooth was certainly more of a struggle, even following the Bluetooth lab. We didn't quite understand how to send three float numbers (the x, y, z accelerations) using characteristics. I was also confused about how to select a UUID and how to set the characteristics.

We were able to turn on the Arduino LED using the p5.ble.js library and also read a button output from a pin. However, we stopped short of sending accelerometer readings via Bluetooth, since we didn't know how. Many questions remain.

!questions/comments!

  • A very important note: Bluetooth does not connect from the p5.js web editor. We learned that the hard way. We were only able to connect when we made a local server with Visual Studio Code.
  • The Arduino code from the p5.ble.js website was not working (we could not find the peripheral Arduino). But when we tried the example Arduino BLE codes from the Arduino library, they worked.
  • What characteristic would the LIS3DH readings be?
  • What is the difference between read, write and notify?

 

week 8 – color rain

Co-created with Bosung Kim

When Bosung and I first met, we quickly converged on the idea of trying to convey the feeling of water through pixel manipulation. We sought to recreate the visual of water ripples spreading from a drop and were inspired by a YouTube video that used an equation to visualize the z-axis of a circular wave pattern as a function of time and space (unfortunately, I lost the link to the video):

equation used for the simulation

Once we figured out the mechanics of the equation, we were mesmerized by the pattern. But the question remained: what in the image do we actually have to change to create a color ripple effect? After some experimentation, we ended up remapping the z-values from the equation onto the color range (0-255) and assigning the values within the raindrop radius to either the R, G, or B value of the pixels (randomly chosen). We also varied the radius of the waves for each drop.
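
To make the remapping concrete, here is a rough sketch of the idea. Since the link to the original equation is lost, the damped circular wave used here, cos(k*d - w*t) scaled by an exponential falloff, and all of its constants are stand-ins, as is the image filename.

    // Rough sketch of the color-rain remapping (wave equation, constants, and filename are stand-ins).
    let img;
    let drop;

    function preload() {
      img = loadImage("desert.jpg"); // placeholder filename for the desaturated desert photo
    }

    function setup() {
      createCanvas(img.width, img.height);
      pixelDensity(1); // keep one pixel array entry per canvas pixel
      image(img, 0, 0);
      // one drop: random center, random radius, random color channel (0 = R, 1 = G, 2 = B)
      drop = { x: random(width), y: random(height), r: random(40, 120), ch: floor(random(3)) };
    }

    function draw() {
      loadPixels();
      for (let y = drop.y - drop.r; y < drop.y + drop.r; y++) {
        for (let x = drop.x - drop.r; x < drop.x + drop.r; x++) {
          if (x < 0 || y < 0 || x >= width || y >= height) continue;
          const d = dist(x, y, drop.x, drop.y);
          if (d > drop.r) continue;
          // damped circular wave, remapped from [-1, 1] to [0, 255]
          const z = cos(0.3 * d - 0.2 * frameCount) * exp(-0.02 * d);
          const c = map(z, -1, 1, 0, 255);
          const i = 4 * (floor(y) * width + floor(x));
          pixels[i + drop.ch] = c; // overwrite only the drop's chosen color channel
        }
      }
      updatePixels();
    }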

Ultimately, our work uses the color rain to give new life to an existing image: a desaturated desert, void of water and color.

After several minutes of color rain, the desert takes on a new life:

The full experience can be found here [link].


 

The three words that we think describe the image are: contrast, tranquility, and mesmerizing.

Attribution: The desert image was taken by Santiago Manuel De la Colina, and accessed on pexels.com. The music is from …

 

 

week 8 – asynchronous communication

This week the lab was a review of serial communication, since we had done a lot of asynchronous communication for the midterm (extended blog post coming soon). There were still a few useful things that I picked up in this lab:

    • When we were doing two-way communication for the midterm, we implemented the 'handshake' communication protocol. However, it was very difficult to debug what the Arduino was seeing, since we couldn't use the Serial Monitor. In this lab I learned a useful trick: send the byte you just read directly back to p5 and print it there using either console.log or the canvas (a p5-side sketch of this check follows the list). This is where the byte() command was very useful. In the Arduino code, as soon as you read the byte, you can write it back:
      if (Serial.available() > 0) {   // only echo once a byte has actually arrived
        int inByte = Serial.read();
        Serial.write(inByte);         // send it straight back to p5 for printing
      }
      Question: the p5 reference for byte() says "Converts a number, string representation of a number, or boolean to its byte representation. A byte can be only a whole number between -128 and 127". However, we were printing 255 on the screen using byte(). How does that work?
    • The graphing function was pretty cool and useful for monitoring the signal. I will save that for later. Below is an example of the potentiometer output:
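
A p5-side sketch of that echo check might look like the following, using the p5.serialport library from the labs; the port name is a placeholder, and the test byte is sent on mouse press.

    // Minimal p5.serialport sketch that prints whatever byte the Arduino echoes back.
    // "COM3" is a placeholder port name; use the one shown in p5.serialcontrol.
    let serial;

    function setup() {
      createCanvas(200, 200);
      serial = new p5.SerialPort();
      serial.open("COM3");
      serial.on("data", serialEvent); // fires whenever a byte arrives
    }

    function serialEvent() {
      const inByte = serial.read();
      console.log("Arduino echoed:", inByte); // should match the byte we sent
    }

    function mousePressed() {
      serial.write(255); // send a test byte for the Arduino to echo back
    }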

Here are some of the pictures of the circuits I built in this lab.

stop motion: “A Spark of Passion” and “Intro to Mice”

Co-created with Kat Kitay

This week in Stop Motion production we have two experimental shorts: 'A Spark of Passion' and 'Intro to Mice'.

Spark of Passion

In our first stop motion we really wanted to use inanimate objects and give them a human character, give them a story. We thought we would use objects that we interact with every day, and what better items to use than the ones in our PComp kit! We landed on the battery and the battery connector, since together the two make things move and light up. The theme of passion immediately came up, and the following video is the result.

Here are the actors and the video set-up:

ITP- 93856 001: Intro to Mice

The second stop motion video was an experiment with pixilation, which proved to be much harder than we thought. We wanted to create the effect of a human moving like another creature. In this story, the snake professor gives an overview of Intro to Mice, slithering around the class to interact with 'invisible' snake students.

 

week 5 lab – DC motors

This week I was only able to get through one lab, since we were actively working on the midterm. I successfully powered a DC motor using a 9V power supply and controlled it with a potentiometer. Here is the setup:

set up for a DC motor controlled by a potentiometer

The video shows the variable speed of the motor: 

 ! remaining questions!

Most of my questions from this week are about diodes:

  • What do the numbers on the diode mean, and what is the difference between different diodes?
  • How do we know the direction of the diode, and which way should it point when connected to the collector and emitter?
  • Which way does the electricity flow? Or, more specifically, how does the diode prevent electricity from flowing in the wrong direction?

your evening news

Co-created with Sam De Arms

When we watch cartoons as kids, we often experience them as playful, funny, and very entertaining. The beloved '90s cartoon 'Hey Arnold!', for example, tells the story of a nine-year-old kid who has a football-shaped head and goes on fun adventures to help his friends with personal problems.

But if we rewatch our childhood cartoons as adults, we realize that they often explore very serious and often troubling themes. Take the same Arnold: he constantly gets bullied by a girl at school, and he is being raised by his grandparents without knowing what happened to his parents.

During the creation of this project my co-creator Sam De Armes and I discussed the different cartoons we watched as children: Soviet Vinni Puh, Courage the Cowardly Dog, CatDog, Ну, погоди! and others.

We also realized that the news we have watched recently often has a similar but reverse effect: you watch it intending to hear information on serious topics, but sometimes it feels absurd or even comical. We decided to juxtapose these two ideas through the use of synthetic media.

"Your evening news" is a video collage that uses a combination of cartoons, news footage, and AI-generated images.

Process

We started by getting a sample of cartoons we watched as children, collectively gathering 1.5 hours of footage.

recording of videos to create training dataset

Then we used custom scripts to generate ~5000 image frames from the videos to create a training set.

We used RunwayML to train a StyleGAN on the dataset. We did not know what this was going to produce, but we were happy with the overall results. In the video below you can see the resemblance between the esoteric shapes and the original cartoons; in particular, we found that Vinni Puh was prominently featured in a lot of the generated images.

Finally, we worked in Premiere Pro to create the video using original and green-screened video footage and the AI-generated cartoons.

screenshot of working in Premiere on the project

week 5 – functions as units of labour

This week I really wanted to experiment with recursive functions to make snowflakes. I quickly discovered that while snowflakes have a pattern, the patterns do not have a recursive relationship. Here are a few examples of my attempts to make a recursive snowflake [link to code]:

Eventually, I decided to pivot and make a recursive plant that grows when there is more sunshine :). The sun comes out as the user moves the mouse vertically [link to code]:
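
A stripped-down version of the recursive branching idea looks roughly like this; the branch lengths, angles, and the mapping from mouseY to recursion depth are simplified stand-ins for what the linked sketch does.

    // Stripped-down recursive plant: more "sunshine" (mouse near the top) -> deeper branching.
    function setup() {
      createCanvas(400, 400);
    }

    function draw() {
      background(220, 235, 255);
      // map vertical mouse position to recursion depth
      const depth = floor(map(mouseY, height, 0, 1, 8));
      stroke(60, 120, 60);
      translate(width / 2, height); // start the trunk at the bottom center
      branch(80, depth);
    }

    function branch(len, depth) {
      if (depth <= 0 || len < 4) return; // base case stops the recursion
      strokeWeight(max(1, len * 0.08));
      line(0, 0, 0, -len);
      translate(0, -len);
      push();
      rotate(PI / 6);
      branch(len * 0.7, depth - 1); // right sub-branch
      pop();
      push();
      rotate(-PI / 6);
      branch(len * 0.7, depth - 1); // left sub-branch
      pop();
    }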