your morning tea with “Zach the Life Influencer” – augmented reality

Created with Mat Olson

It’s time to have your morning tea and get your daily life balance advice from Zach the Life Influencer.

Try out the full Zach the Life Influencer Adobe Aero experience on your own mobile device [link] (must have Adobe Aero installed).


Written by Mat Olson

Zach the Life Influencer: the Process

Tired all the time? Feeling stressed out, spread thin, unhappy? Maybe what you need are a handful of tips on how to live a better life, all delivered by a cool social media influencer who seems to have no problems in their life at all.

Enter Zach.

For this Hypercinema project, our two-person group–Mary Mark and Mat Olson–began with an idea of the animated look and feel we wanted the central character to have, then worked backwards from those touchstones to define who would actually be starring in this AR overlay. Here’s Mary on who Zach is:

Zach is a life balance influencer who feeds his followers inspirational quotes. His strategy for acquiring followers is strictly based on clickability and likes, with things like ‘10 hacks to improve your life’. He chooses quotes that draw in the biggest audience, thinking very little about their content. However, when Zach tries to follow his own advice, he crumbles, as much of his advice is in opposition with itself. Zach starts with a big ego, which is crushed under the ‘life balance’ expectations.

This idea of Zach collapsing under the accumulated weight of all his overused mottos came about as we explored the possibilities and constraints of animating a 2D cut-out marionette puppet. We began with a test puppet made from photos of Mat split into layers for each limb and joint we wanted to control and contort. Rather than have a flexible puppet with stretchy mesh-enabled movements, we wanted to stick to the constraints at these joints, which generally make for more unnatural and painful movements as you exaggerate them further.

The test puppet. Note the points jutting out at the joints, which we improved with Zach.

The most absurd, precarious positions we put our test puppet into led to our desire to create some tension in the piece: we wanted our character to gradually contort into these increasingly difficult poses, then release that tension with a collapse at the end. In our ideation from there, we batted around a few ideas. Maybe this character was a dancer, overexerting themselves between poses. Maybe the character was a person struggling to keep up with the demands of life.

Tweaking that idea gave us Zach. Instead of a character struggling under abstract visual or textual representations of hardships, we made the character a person who can’t hold up the weight of pithy advice meant to help them live a better life–someone who projects a sort of blandly aspirational confidence, but who ultimately fails at holding up all their simplistic and occasionally contradictory advice.

The capture and animation process

We enlisted fellow ITP ‘24 student Josh Zhong to become our model for Zach. Mat took the photographs during the week of Halloween using a Canon EOS R5 and a tripod borrowed from the ER–we learned from the test puppet that photos at smartphone resolutions were not as nice to work with when isolating the model from the background.

It should be noted that Josh was a total pro. He helped with checkout from the ER, had no problem keeping the awkward pose with his legs turned out to the sides, and took direction for Zach’s progression of pained facial expressions with ease.

The photo of the base pose for the puppet.

With photos in hand, we began to split up the work of making the puppet and getting it animation-ready. Those steps proceeded roughly as follows:

  • Mat cut the main photo of Josh along with all his facial expressions out from the background using Photoshop
  • Mary divided those photos into separate layers for each joint and extended elements where necessary (e.g. lengthening the neck to give us more leeway in animation)
  • Mat rigged up a skeleton with the Duik Bassel plugin (this video tutorial from Jake In Motion was most helpful) to aid in animation
  • Mary began learning the ins and outs of working with Aero sequences

Duik’s built-in IK controllers were really helpful in reducing the overall complexity of animating Zach’s movements, freeing us from having to keyframe almost every joint with each movement. Still, it wasn’t without its own weird limitations, and the rigging step had to be repeated a few times since changing the dimensions of the composition would irreversibly alter the relationships between joints in the puppet.

The storyboard of Zach’s arc from confident to crumpled mess is all Mary. The list of Zach’s tips was initially devised by Mat, pulling inspiration from various online articles about cliched advice.

Our interaction pattern for the animation is pretty straightforward: tapping through it tells a story with a beginning, a middle, and an end (one that hopefully feels pretty final, given how defeated Zach looks).

Diagramming it with text, it flows like so:

    1. Load the AR overlay
    2. Tap to advance through the introductory screen
    3. Aero swaps the intro screen for part 1 of the animation
    4. Part 1 plays (Zach effortlessly holds two pieces of advice)
    5. Tap to advance to part 2
    6. Aero swaps parts 1 and 2
    7. Part 2 plays (Zach’s ok, but has to use his foot)
    8. Tap to advance to part 3
    9. Aero swaps parts 2 and 3
    10. Part 3 plays (Zach clearly begins to struggle)
    11. Tap to advance to part 4
    12. Aero swaps parts 3 and 4
    13. Part 4 plays (Zach collapses)

After each of these 4 parts plays, we originally wanted to include an idle animation that would loop until the next tap. We split the 8 tips across 4 sections of animation for this reason: we’d essentially have 7 sections of animation, 4 main parts and 3 idle loops. Making an idle loop between each of the 8 steps would’ve meant making more than twice as many chunks of animation as we ultimately did.

We ended up deciding against using the idle animations for a couple reasons: for one, they’re a little too animated. If someone is going through the character overlay slowly, it might take them a while to realize that they’ve entered an idle loop and should tap again to advance the animation. Also, in some limited testing with other ITP students, some just wanted to keep tapping through, which would mean the idle animations would likely not be seen.

One of the unused idle animations.

Some more explicit on-screen controls might be a way of solving this tapping behavior problem and could justify adding the idle animations back in: a big button, for instance, that users would need to press in order to drop the next piece of advice on Zach.

Then again, adding more mechanical controls to this piece could detract from the feel of it. Zach is a character inspired by the kinds of people who might actually go around calling themselves life influencers, figures who exist in the public eye largely by way of videos housed in Story carousels and algorithmically managed feeds. This is a guy whose content you might otherwise be compelled to tap or swipe through in a hurry–now he’s in your physical space, and in this more immediate context, we present a physical metaphor for how trying to follow all kinds of vapid life advice might pan out. If a guy like this was really real, not a character we made up, a weird AR puppet, or a persona crafted to rack up followers to sell ads against, what good would the kind of advice he peddles really be to him?

Original post –11/10/22

stop motion: “A Spark of Passion” and “Intro to Mice”

Co-created with Kat Kitay

This week in Stop Motion production we have two experimental shorts: ‘a spark of passion’ and ‘intro to mice’.

Spark of Passion

In our first stop motion we really wanted to use inanimate objects and give them a human character, give them a story. We thought we would use objects that we interact with every day, and what better items to use than the ones in our PComp kit! We landed on the battery and battery connector since, together, the two make things move and light up. The theme of passion immediately came up, and the following video is the result.

Here are the actors and the video set-up:

ITP- 93856 001: Intro to Mice

The second stop motion video was an experiment with pixilation, which proved to be much harder than we thought. We wanted to create the effect of a human moving like another creature. In this story, the snake professor gives an overview of Intro to Mice, slithering around the class to interact with ‘invisible’ snake students.

 

your evening news

Co-created by Sam De Arms

When we watch cartoons as kids, we often experience them as playful, funny, and very entertaining. The beloved ’90s cartoon ‘Hey Arnold!’, for example, tells the story of a nine-year-old kid who has a football-shaped head and goes on fun adventures to help his friends with personal problems.

But if we rewatch our childhood cartoons as adults, we realize that they often explore very serious and troubling themes. Take Arnold: he constantly gets bullied by a girl at school, and he is raised by his grandparents, not knowing what happened to his parents.

During the creation of this project my co-creator Sam De Armes and I discussed the different cartoons we watched as children: Soviet Vinni Puh, Courage the Cowardly Dog, CatDog, Ну, погоди! and others.

We also realized that the news we’ve watched recently often has a similar but reverse effect: you watch it intending to hear information on serious topics, but sometimes it feels absurd or even comical. We decided to juxtapose these two ideas through the use of synthetic media.

“Your evening news” is a video collage that uses a combination of cartoons, the news, and AI-generated images.

Process

We started by getting a sample of cartoons we watched as children, collectively gathering 1.5 hours of footage.

recording of videos to create training dataset

Then we used custom scripts to generate ~5,000 image frames from the videos to create a training set.
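As a rough illustration of what such a frame-extraction script can look like (this sketch assumes OpenCV; the filenames and sampling rate are placeholders, not the ones we used):

```python
# Sketch: save every n-th frame of a video as a numbered JPEG,
# building up an image dataset for GAN training.
import cv2
from pathlib import Path

def extract_frames(video_path, out_dir, every_n=30):
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(str(video_path))
    frame_idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:                      # end of video
            break
        if frame_idx % every_n == 0:    # keep one frame out of every_n
            name = f"{Path(video_path).stem}_{saved:05d}.jpg"
            cv2.imwrite(str(out_dir / name), frame)
            saved += 1
        frame_idx += 1
    cap.release()
    return saved

# e.g. extract_frames("cartoons/vinni_puh.mp4", "dataset/frames", every_n=30)
```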

We used RunwayML to train a StyleGAN on the dataset. We did not know what this was going to produce but we were happy with the overall results. In the video below you can see the resemblance between the esoteric shapes and the original cartoons. In particular, we found Vinni Puh was prominently featured in a lot of the generated images.

Finally, we worked in Premiere Pro to create the video using original and green-screened video footage and AI-generated cartoons.

screenshot of working in Premiere on the project

synthetic media analysis

Last weekend, I went to the Whitney Biennial 2022, where at some point I noticed a group of people lying on their backs, enchanted by the movements on a set of ceiling-mounted LED panels. Upon closer inspection I realized that they were watching an ever-changing pattern of shapes and colors that resembled nature’s textures: the ocean, jellyfish, and blossoming flowers. The screens were interwoven with an LED net and surrounded by large aluminum panels on the walls, dripping with oil dents. The atmosphere in the room was otherworldly to say the least.

But what was truly fascinating about the installation by WangShui, entitled Titration Print (Isle of Vitr⸫ous), Hyaline Seed (Isle of Vitr⸫ous), and Scr⸫pe II (Isle of Vitr⸫ous), is that this work was ‘co-authored’ by AI. The visuals displayed on the LED panels are generated by Generative Adversarial Networks (GANs), which change based on the conditions of the surrounding environment that the AI ‘senses’. In particular, active sensors pick up light levels emitted from the LED screens and CO2 levels from the viewers and use that data to evolve the screen’s patterns. As a consequence, during the night, the piece enters a kind of resting state. This ability to sense and adapt to the ‘self’ and the surrounding world reveals a conscious, living quality of the art piece. The aluminum panel paintings on the side walls are also created using AI-generated “new images from [artist’s] previous paintings which [the artist] then sketches, collages, and abrades into the aluminum surfaces.” [source]

Both the animation and the paintings mirror nature’s textures, colors and curves, giving us a hint into what the training data consisted of. In fact, during an interview with artnet News, WangShui mentions that the training dataset is a depiction of their research subjects and in a way is the artist’s journal. It includes ‘thousands of images that span deep sea corporeality, fungal structures, cancerous cells, baroque architecture, and so much more.’

As I lay under the screen, mesmerized by the evolving shapes above me, I thought about how important the artist is in curating and framing synthetic media. Generative algorithms like GANs will sooner or later become commonplace in artists’ toolboxes, and I look forward to being part of the community experimenting with this medium.

 

uncertain journey

Creators: Maryia Markhvida, Dror Margalit, Peter Zhang
Voice: Zeynep Elif Ergin

If we are lucky, we are born into the loving arms of our parents and for many years they guide us through life and help us understand the events around us. But at some point, sooner or later, life takes a turn and we are all eventually thrown into the chaos of this world. In these moments things often stop making sense and we have a hard time navigating the day-to-day. Eventually though, most of us adapt and figure out some way to go on.

Sound, in a way, is a chaotic disturbance of the air, but somehow we learn how to make sense of it and even manipulate our voices and the things around us to reproduce these strange frequencies. We start recognizing patterns in the randomness and eventually derive deep meaning from it.

This work is an interactive sound journey that uses the evolution of randomly generated frequencies to reflect the human experience of uncertainty and chaos. It tells Zeynep Elif Ergin’s story through a composition of computer-generated sounds, her voice, and interactive visuals:

 

Inspiration and Process

After our first Hypercinema lecture, I was very intrigued by the composition of sound in terms of signals and the superposition of sine waves. This is something I vaguely knew about from my engineering background (I worked a lot with earthquake wave signals during my PhD) but never got to actually play with, let alone create. I was also curious to hear what different sounds could be produced if I used different probability distributions (uniform, normal, lognormal) to generate the number of waves, the frequencies, and the amplitudes.

Once we formed a group with Dror and Pete, we started talking about what uncertainty and randomness meant to each of us. We discussed how, when people face moments of high uncertainty, they are first thrown into absolute chaos, then slowly tend to embrace the uncertainty and adapt the chaos into something that feels more familiar. We eventually arrived at a question: what would one’s journey through uncertain times sound like using randomly generated sounds?

All of us wanted this piece of work to be grounded in and driven by real human experience. In the words of Haley Shaw on creating soundscapes:

‘…even when going for goosebumps, the intended feeling should emerge from the story, not the design.’

The final piece is presented in an interactive web interface, which allows one to listen to Zeynep Elif Ergin’s story through a progression of randomly generated noises. The listener has the option of experiencing the story without the main subject (inspired by the removal of the main character in Janet Cardiff’s work) or of overlaying her voice over the computer-generated soundscape. There is also a gradual evolution of the visuals.

Now a little bit about the technical side:

The first question was how one could randomly generate sound starting from scratch, i.e., a blank Python Jupyter Notebook. After a quick conversation with my father about the physics of sound waves and superposition, I had an idea of what to do.

I started with generating a simple sine wave (formula below) and converting it into a .wav file.

$$ y = A \sin(2\pi f t + \phi) $$

This is the equation of a sine wave with phase angle ($\phi$), frequency ($f$), and amplitude ($A$), all of which were eventually randomized according to different probability distributions. Below is a sample of the first Python code and the first sounds I generated with only one frequency:
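A minimal sketch of that first single-frequency generator, assuming numpy and scipy (the amplitude, frequency, and duration here are illustrative values, not the exact ones from my notebook):

```python
# Sketch: generate one sine wave and write it out as a .wav file.
import numpy as np
from scipy.io import wavfile

sample_rate = 44100           # samples per second
duration = 3.0                # seconds
A, f, phi = 0.5, 440.0, 0.0   # amplitude, frequency (Hz), phase angle

# y = A * sin(2*pi*f*t + phi), evaluated on a discrete time grid
t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)
y = A * np.sin(2 * np.pi * f * t + phi)

# scale to 16-bit integers and save
wavfile.write("sine_440hz.wav", sample_rate, (y * 32767).astype(np.int16))
```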

Then I played around and generated many different sounds. Here are some things I tried:

    • Broke the duration down into many phases of varying speed, where each phase had a different sound;
    • Created “heartbeats” with low frequencies;
    • Superposed a range of 2-100 waves of varying frequency, amplitude, and phase angle to create multi-dimensional sound (see the sketch after this list);
    • Tried uniform and normal distributions to randomly generate frequencies and sounds;
    • Generated random sounds out of musical note frequencies using 5 octaves;
    • Generated random arpeggios;
    • Limited notes to C major to produce random arpeggios (which creates a happier tone).
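As an illustration of the superposition idea from the list above, here is a sketch along the same lines (the distributions and parameter ranges are placeholders, not the exact ones used in the final piece):

```python
# Sketch: superpose a random number of sine waves with randomly drawn
# frequencies, amplitudes, and phase angles, then write the result to .wav.
import numpy as np
from scipy.io import wavfile

rng = np.random.default_rng()
sample_rate, duration = 44100, 5.0
t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)

n_waves = rng.integers(2, 101)                             # 2-100 waves, uniform
freqs = rng.lognormal(mean=6.0, sigma=0.5, size=n_waves)   # lognormal, median ~400 Hz
amps = rng.uniform(0.0, 1.0, size=n_waves)                 # uniform amplitudes
phases = rng.uniform(0.0, 2 * np.pi, size=n_waves)         # uniform phase angles

# superposition: sum the individual sine waves, then normalize to avoid clipping
y = sum(a * np.sin(2 * np.pi * f * t + p) for a, f, p in zip(amps, freqs, phases))
y /= np.max(np.abs(y))

wavfile.write("chaos_chunk.wav", sample_rate, (y * 32767).astype(np.int16))
```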

Here is an example of one of these generated sounds:

In the meantime, Dror and Peter recorded the interview with Zeynep Elif Ergin as well as additional ambient sounds around NYC, and worked in Adobe Audition to compose the final piece using the randomly generated sounds.

The last step was to create the interactive interface with p5.js [link to code], which gives the listener the option of playing only the ‘chaos track’ or overlaying the voice when the mouse is inside the center square. As the track plays, the uncertainty and chaos slowly resolve, both sonically and visually… but they are never quite gone.

words in sounds

Describing sounds with words can be tricky. But what about describing words with sounds?
In collaboration with Dipika, Anvay, and Vera

Ticking:

Humming:

Airy Sound:

Sound of Heat:

Sound of Betrayal:

Sound of Roundness:

Sound of Loneliness:

Sound of Red:

Sound of Joy:

Other explorations this week

Deep listening: [link]

‘The ear hears, the brain listens, the body senses vibrations’
‘To hear is the physical means, to listen is to give attention to what is perceived both acoustically and psychologically.’

– Pauline Oliveros 

Sound design: Haley Shaw [link]