Blog

ScanIR version 2 is out!

ScanIR version 2 is now available on GitHub

This tool, developed at NYU MARL, helps with the measurement of acoustic impulse responses, whether for room acoustics or for binaural filters. The new version adds SOFA as an output format for saved measurements, automated measurement sequences using an Arduino and rotating stepper motors, acoustic analysis metrics, and more customization options.
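If you want to inspect the SOFA files that ScanIR saves, here is a minimal sketch, assuming Python with the netCDF4 package (SOFA/AES69 files are netCDF-4 containers; the variable names below come from the SOFA convention, not from ScanIR itself, and "measurement.sofa" is a placeholder file name):

```python
# Minimal sketch (assumptions noted above): open a SOFA file saved by a
# measurement tool and inspect its impulse-response data. SOFA files are
# netCDF-4 containers, so the generic netCDF4 package can read them.
from netCDF4 import Dataset

sofa = Dataset("measurement.sofa", "r")  # placeholder file name

fs = sofa.variables["Data.SamplingRate"][:]   # sampling rate(s) in Hz
irs = sofa.variables["Data.IR"][:]            # IRs, shape (measurements, receivers, samples)
src = sofa.variables["SourcePosition"][:]     # one source position per measurement

print(f"{irs.shape[0]} measurements x {irs.shape[1]} receivers x "
      f"{irs.shape[2]} samples at {float(fs[0])} Hz")
sofa.close()
```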

Accompanying publication here

GitHub Link below:

The HoloDeck Distributed Music Concert

Update: Video of the concert Finale is now available!

Where to watch the April 19th, 2018 HoloDeck concert

You can follow the HoloDeck live event through these links:

NYU Steinhardt Live Feed from Loewe 

Binaural 360-Video Dolan Recording Studio

Binaural 360-Video Frederick Loewe Theatre

The NYU Immersive Audio Group and Future Reality Lab present the first HoloDeck concert. This unique event explores the future of distributed musical connections in mixed reality settings. The HoloDeck Concert is a novel experimental distributed music setting in which musicians and dancers are virtually connected from different studios to a theatre stage through a dedicated network. OptiTrack motion tracking technology will render and stream the dancers’ avatars in real time to the live stage performers.

Musical Program:

  1. Blue in Green – Piano & Trumpet
    Bill Evans 
  2. Where Do You Start – Double Bass & Piano
    Johnny Mandel
  3. Nocturne in C-sharp minor – Violin, Piano, Dancer
    Frederic Chopin
  4. When I Fall in Love – Piano & Dancer
    Victor Young
  5. Groove on Drums – Drums, Percussions, Dancers
    Improvisation
  6. Elephants – Saxophone & Piano
    Timo Vollbrecht
  7. Summertime – Jazz Quartet, Dancers
    George Gershwin
  8. Hungarian Dance No. 5 – Violin & Piano
    Johannes Brahms
  9. Take the A Train – Jazz Quartet, Dancers
    Billy Strayhorn

Tom Beyer, Director
Agnieszka Roginska, Music and Audio Research Lab
Ken Perlin, Future Reality Lab
Deborah Damast, Dance Director

Musicians

David Baylies – Trumpet
Tom Beyer – Drums & Percussion
Sungrae Kim – Drums & Percussion
Sungsoo Kim – Piano
Marcelo Maccagnan – Double Bass
Sienna Leigh Peck – Violin
Agnieszka Roginska – Piano
Noah Rott – Piano
Timo Vollbrecht – Saxophone

Dancers

Deborah Damast
Daria Fitzgerald
Jessa Rose
Danielle Staropoli
Kim Wojcieszek

Technical Team

Audio Engineering: 
Jen-Chun Chao
Dennis Dembeck
Lisa Groom
Michael Ikonomidis
Marta Olko
Jonathan Quiñones
Makan Taghavi

Audio Streaming: 
David Baylies
Michael Hagen
Scott Murakami

Dance Tech: 
Connor DeFanti
Rose Generoso
Andrea Genovese
Tatiana Turin

Video:
Ian Anderson
Corinne Brenner
Jaye Sosa
Sripathi Sridar
Natalie Wu
Celia Yang
Gabriel Zalles

A Black History Month Celebration


NYU’s first annual Black History Month: Celebration Through The Arts was held on Saturday, February 10, 2018. A sold-out Frederick Loewe Theater showcased an evening of dancing, music, poetry, and spoken word performances from several NYU programs and from special guest speakers Al Sharpton and Duke Ellington’s granddaughter, Mercedes Ellington. On a few days’ notice, the Immersive Audio Group was offered the opportunity to record the event in 360° audio and video.

The event ranged from drama therapy, spoken word, and speech to drum performances, jazz bands, and hip-hop dancing. Given the eclectic mix of performance media, the event was recorded from an audience perspective rather than from on stage with the performers, in part because of the interactive audience component. A small group set up and recorded the event:

  • Stephanie Benicek – Producer, Camera Operator, Lead Engineer, Assistant Recording Engineer
  • Pari Songmuang – Lead Recording Engineer, Assistant Engineer
  • Peter Berman – Assistant Engineer
  • Ian Anderson – Assistant Engineer
  • Sripathi Sridhar – Camera Operator, Assistant Engineer

An ESMA (Equal Segment Microphone Array) was set up, with four Sennheiser MKH 800 cardioid mics and corresponding Schoeps CMC6 MK6 bi-directional mics in each corner of the array. These mics were all in the same plane, arranged in a 12” x 14” rectangle (given the equipment size, this was the tightest arrangement possible). In the middle, slightly above this plane of mics, were a Schoeps CMC6 MK6 omnidirectional mic and the Sennheiser AMBEO mic. At the top of this pyramid of microphones was the camera, a GoPro Fusion. The overall height of the rig was slightly above seated audience eye level, approximately 5′ with the GoPro on top; it was important not to obstruct anyone’s view.

This cluster was positioned in the 3rd row of the theater, next to the aisle. Along the front of the stage, a second AMBEO mic was positioned just above stage height, at 3’5”. The ESMA array, audience AMBEO, and omnidirectional mic were recorded on two Zoom F8 recorders, and the stage AMBEO was recorded to a Tascam. The Immersive Audio Group also received stereo and spot mic audio files from the venue staff.

The event ran smoothly, and the Immersive Audio Group is grateful to the event staff and organizers for working with us to get a successful recording of a sold-out event. Everything is currently being organized and mixed for a thesis project in conjunction with the Immersive Audio Group. Expected delivery is no later than May 2018.

By Stephanie Benicek

Skirball Holiday Concert Recording

Pictured: ESMA, Ambeo, and GoPro Fusion on stage

Towards the end of the fall semester, in December 2017, the Immersive Audio Group had the opportunity to record the NYU Adult Choir and Children’s Chorus during the Holiday Concert. The event took place at 7 pm on the 17th at NYU Skirball, a premier cultural and performing arts venue in Greenwich Village. This was one of the first live performances the group recorded with a combination of ambisonic recording techniques and 360º video, and it provided a great opportunity for us to try out some new recording setups, including the Equal Segment Microphone Array (ESMA).

People involved in the production:

  • Marta Olko, Producer
  • Charles Craig Jr., Co-Producer
  • Ying-Ying Zhang, Lead Recording Engineer
  • Peter Berman, Assistant Recording Engineer
  • Scott Murakami, Assistant Recording Engineer
  • Sungsoo Kim, Secondary Assistant Recording Engineer
  • Sripathi Sridhar, GoPro Fusion

The recording setup consisted of an ESMA, two Ambeo microphones, and a GoPro Fusion 360º camera for the video recording. The ESMA is an audio recording technique suitable for virtual reality (VR) reproduction, detailed by Dr. Hyunkook Lee of the University of Huddersfield. The array captures immersive sound without specialized ambisonic or binaural microphones, instead capturing the soundfield through a combination of multiple polar patterns. In addition, the Sennheiser Ambeo ambisonic microphones were used, which capture the soundfield in first-order ambisonics.
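As a concrete illustration of what “first-order ambisonics” means here, the sketch below encodes a mono source into B-format (W, X, Y, Z) under the traditional FuMa convention, using Python and NumPy. This is only a conceptual stand-in, not the Ambeo’s own A-to-B processing:

```python
# Conceptual sketch: encode (pan) a mono signal into first-order
# B-format using the traditional FuMa convention (W carries a 1/sqrt(2)
# weight; X/Y/Z are figure-8 components along front, left, and up).
import numpy as np

def encode_foa(mono, azimuth_deg, elevation_deg):
    az = np.radians(azimuth_deg)   # 0 = front, positive = to the left
    el = np.radians(elevation_deg)
    w = mono / np.sqrt(2.0)              # omnidirectional component
    x = mono * np.cos(az) * np.cos(el)   # front-back
    y = mono * np.sin(az) * np.cos(el)   # left-right
    z = mono * np.sin(el)                # up-down
    return np.stack([w, x, y, z])        # shape (4, num_samples)

# Example: place a 1 kHz tone 45 degrees to the left, 10 degrees up
t = np.arange(48000) / 48000.0
bformat = encode_foa(np.sin(2 * np.pi * 1000 * t), 45.0, 10.0)
```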

Since the auditorium is a large space, the venue characteristics were taken into account in the recording setup. Placing a microphone in the audience area helps capture a different perspective of the soundfield, as well as the audience’s reactions and involvement. With this in mind, an Ambeo mic was suspended from the balcony above the audience area to increase immersion in VR reproduction. Another Ambeo mic occupied a more conventional position at the foot of the stage, alongside the GoPro Fusion camera; this microphone, together with the ESMA, captured more of the performers’ sound. The GoPro Fusion is a recently released 360º camera that uses its two lenses to capture, well, 360º footage. Although software and hardware improvements are still anticipated, it is perhaps one of the best affordable 360º cameras on the market today.

Despite the challenging nature of the ESMA configuration, the engineers got the setup up and running after much effort. The ESMA, consisting of four Schoeps MK6 and four Sennheiser MKH-800 microphones, and the two Ambeo microphones were rigged to the Antelope Orion32 HD + MP32 preamps.

Overall, it was a fantastic experience that will be treasured by everyone involved. The recordings are of high quality and are currently going through post-production, to be released on the website soon. Stay tuned!

By Sripathi Sridhar

Recording of Daniel Neumann’s Sound Field Installation ‘Channels’

On Tuesday, January 23rd, the NYU Immersive Audio Group set out to record a sound art installation by Daniel Neumann, a Brooklyn-based sound artist, organizer, and audio engineer originally from Germany. Neumann’s artistic practice involves using conceptual and often collaborative strategies to explore sound, sound material, and its modulation through space, situation, and media. The installation, ‘Channels’, deals with channeling 56 electronically produced sounds into the physical space of the gallery to produce an immanent 3D concrete sound field.

Here’s the link to the gallery’s website with more information regarding the installation:
https://www.fridmangallery.com/channels

The soundfield was created using numerous loudspeakers hidden from sight, as well as five vintage speakers that were part of the installation’s visual display. The main visual centerpiece, and the “brain” from which the sounds were distributed to the speakers, was a MIDAS Verona 560, a 56-channel mixing board suspended from the ceiling.

For this particular installation, we used a Sennheiser Ambeo microphone to capture the soundfield recording, while reinforcing it with spot mics on each individual vintage speaker in the space. The hidden loudspeakers were not close mic’d, but rather captured in the soundfield recorded by the Ambeo as they were not supposed to be heard as sound sources but rather as spatial accompaniment. The signals from the AMBEO and the spot microphones were recorded using the Antelope Audio Orion32 HD + MP32 pre-amps and Reaper.

In addition to the audio recording, 360 video was captured using the Giroptic 360 camera.

It was an interesting and fun experience to record an installation of this nature, as the Immersive Audio Group’s recordings have typically been geared towards bands, ensembles, or groups recorded in the studio or in concert. Post-production for the project is still in the works, so stay tuned!

People Involved:

Scott Murakami – Lead Engineer / Producer

Gabriel Zalles – Co-Producer / Camera Op

Sripathi Sridhar – Assistant Engineer / Camera Op

JC Chao – Assistant Recording Engineer

Sungsoo Kim – Assistant Engineer

Chris Neil – Secondary Assistant Engineer

Arun Pandian – Secondary Assistant Engineer

Article written by: Scott Murakami

Flourish and Flurries: Recording immersive sound and 360 video for the West Point Holiday Show

Late in the summer of ’17, the NYU Immersive Audio Group was offered the opportunity to record the West Point Holiday Show by the Audio Engineering Branch Head of the West Point Military Band, Brandi Lane. The performance was to be held at 1:30PM on December 2nd, 2017 at the Eisenhower Hall Theatre at Trophy Point. After months of planning, an elite squad was formed to record the event using 3D sound and video technologies and techniques:

NOKIA OZO 360 Camera

* Kamal Rountree – Freelance Technical Manager/Producer
* Charles Craig Jr. – Producer
* Aggie H. Tai – Lead Engineer
* David Degenstein – Assistant Engineer
* Ying Ying Zhang – Lead Recording Engineer
* Ian Anderson – Assistant Recording Engineer
* Jason Sheng – Assistant Recording Engineer
* Scott Murakami – Secondary Assistant Recording Engineer
* Chris Neil – Secondary Assistant Recording Engineer

This post recaps the 3D sound and video technologies and techniques used to record the West Point Holiday Show.

The holiday show ensemble featured a mixture of brass, woodwind, string, percussion, and keyboard instruments. Brandi and her team of engineers handled the placement of spot mics for each of the 40+ instrumentalists. The IAG set up a Nokia Ozo 360 camera, two Sennheiser AMBEO ambisonic microphones, and a binaural capture system (aka the Jeck-Head system) in the following locations:

* Stage: Sennheiser AMBEO and Nokia Ozo (AMBEO-Ozo system)
* Audience: Sennheiser AMBEO and Jeck-Head system

After taking into consideration the acoustic characteristics of the hall and the placement restrictions from the venue staff, the IAG engineers decided to place the AMBEO-Ozo system in an area between the ensemble and the soloists. This allowed for the best visual and aural on-stage concert perspective and captured sound sources at various azimuths and elevations around the AMBEO-Ozo system.

Our chosen setup for the session
 

A second AMBEO was placed directly beside the Jeck-Head system, at half distance from the stage. The Jeck-Head system is an experimental binaural capture system developed by members of the IAG and the Assistant Director of Music Technology, Paul Geluso. It consists of two mic arrays separated by a Jecklin disk. Each mic array is composed of one cardioid (Schoeps MK2) and two bi-directional (Schoeps MK6) patterned microphones. Facing the stage, the MK2s of each array faced -270 degrees (L) and 45 degrees (R) along the azimuth of the Jecklin disk. On each side of the disk, one MK6 faced up-down and one faced front-back. The Jeck-Head array is hypothesized to allow performances to be captured binaurally while providing enhanced flexibility in the mixing stage.

The signals from the AMBEOs and the Jeck-Head system were recorded using the Antelope Audio Orion32 HD 64-channel HDX and USB3 AD/DA interface and Pro Tools.

The West Point Holiday Show was spectacular, and the recordings are currently being edited and mixed along with the video. Stay tuned for their release in the near future!

Article by Charles Craig Jr.


NVSonic Headtracker NYU

NVSonic Head Tracker

What is a head tracker? Why should I care?

Head trackers send accelerometer and gyroscope data from your head-tracker hardware to the software of your choice (in this case Reaper), allowing you to listen to your mix the same way your audience would if they were wearing an HMD. Why should I care? VR music and video experiences are becoming extremely popular, but there is currently no solution for mixing or mastering inside VR, so the head tracker is the next best thing! In our case, we went with the NVSonic head tracker because it was the easiest to use and the most affordable. A little heads-up before you start on your journey of building one: in order to load the boot software onto the chip on the tracker, you will need access to a Windows computer, since the bootloader by my friend Tomasz Rudzki is only available for Windows at the moment. I borrowed a friend’s Windows computer for this step; it only takes a second.
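To make concrete what that yaw/pitch/roll stream ultimately drives, here is a sketch (Python/NumPy, FuMa B-format, yaw only) of the counter-rotation a renderer applies to an ambisonic scene so sources stay put when you turn your head. The FB360 plug-in does this internally; this is just a conceptual stand-in:

```python
# Conceptual sketch: counter-rotate a first-order B-format scene for a
# head yaw angle, so sound sources stay fixed in the room while the
# listener turns. Pure yaw only; pitch and roll are analogous rotations.
import numpy as np

def rotate_for_head_yaw(bformat, head_yaw_deg):
    th = np.radians(head_yaw_deg)  # positive = head turned to the left
    w, x, y, z = bformat           # shape (4, num_samples)
    # A source at azimuth phi appears at (phi - yaw) after the head turn:
    x_rot = x * np.cos(th) + y * np.sin(th)
    y_rot = -x * np.sin(th) + y * np.cos(th)
    return np.stack([w, x_rot, y_rot, z])  # W and Z are yaw-invariant
```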

Why did you write this piece? 

While Tomasz’s site does have a great tutorial on how to set up and operate the tracker, I thought we should give you some more information regarding possible complications you might experience in the process and what we did to overcome them. Happy tracking!  🎧

NVSonic Head Tracker Instructions

  • Build the tracker and bootload it
  • Some notes: 
    • We had a wiring problem; you might need to test different gauges of wire if communication gets lost repeatedly.
    • We also recommend getting some zip ties and wireless headphones to make the work more pleasant.
  • Install Reaper https://www.reaper.fm/index.php
    • There is a free trial.
    • $60 for a full license.
      • Reaper has built-in OSC support, which this setup relies on. There is another open-source head tracker, called Mr HeadTracker, but its build is more complicated, and it sends its data via MIDI instead of OSC.
  • Install 360 FB Plug-in  https://facebook360.fb.com/spatial-workstation/
    • The Spatial Workstation, not the SDK.
  • Open the Reaper template found in your applications folder. 
    • /Applications/FB360 Spatial Workstation/Reaper/SpatialWorkstation

    • Its file name should be ‘SpatialWorkstation.RPP’.

  • Note that all templates are protected by Reaper, meaning you cannot save changes to this template directly. If you want to make your own template, you can use ‘Save as template’, found under the File menu.
  • When you open the template, start by saving it under whatever name you want for your project, then proceed to make changes.
  • The first four tracks of the template are identical in terms of routing; the only difference is that the second track has the input of the FX VST set to first-order ambisonics, while tracks 1, 3, and 4 expect a mono file (a bit of an oversight by FB there).
  • This site https://www.reaper.fm/sdk/osc/osc.php tells you how to access OSC preferences in Reaper; this is the gist:
    • To enable network communication between REAPER and an OSC device, go to Options->Preferences->Control/OSC/Web, and add a new OSC (Open Sound Control) control surface “mode”.
      • You can always go to preferences by using the shortcut (⌘ + ,).

  • Call the Device Name whatever you want. Enable “Receive on port” and match the port number to the Bridge Application’s port number (9001 by default on Mac OS X).

  • Enable “allow binding messages to REAPER actions and FX learn”; this setting is found in the same window where you entered the port number.
  • Hit Ok.
  • In the Reaper Mix window, click on FX on the Control Plugin track to show the FB plug-in.
    • Drag down to expand the track if not visible.
    • Alternatively, use the shortcut (⌘ + m) to show mixer.
  • Click “Get From Video” to disable that feature; the yaw, pitch, and roll should now be modifiable.
  • Mute pitch and yaw in the bridge application by clicking on the M; it should turn red.
  • Click on the parameter you want to learn, which is roll (Y-axis).
  • Click on Param in the FB Plug-in, it’s at the top of the window.
  • Click Learn; you should see the parameter you are trying to automate in the command text window: ”/Roll”.
    • Hit Ok.

  • Do the same thing for the other parameters, muting the messages you don’t want to pass and assigning them one by one to the correct Listener parameter: roll, pitch, yaw.
  • After all the parameters are assigned, you may need to click on the parameter again to refresh the communication. (If you want to test the mapping without the hardware, see the sketch after this list.)
  • Have fun!
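If you want to test the Reaper mapping before (or without) the hardware, you can imitate the bridge application from a short script. Below is a hedged sketch using the python-osc package; the ”/Roll” address comes from the Learn step above, while ”/Pitch” and ”/Yaw” are assumed to follow the same pattern, so check the bridge’s actual addresses if the knobs don’t move:

```python
# Hedged test sketch: send head-tracker-style OSC messages to Reaper on
# the port configured above (9001 by default), imitating the bridge app.
# Requires the python-osc package (pip install python-osc).
import math
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9001)  # Reaper's "Receive on port"

# Sweep yaw back and forth; once the parameters are learned, the FB360
# Control plug-in's yaw should follow while pitch and roll stay at zero.
for i in range(200):
    client.send_message("/Yaw", 90.0 * math.sin(i / 20.0))
    client.send_message("/Pitch", 0.0)
    client.send_message("/Roll", 0.0)
    time.sleep(0.05)
```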

AES Award to Immersive Audio Group Members

Members of the Immersive Audio Group, working in collaboration with the Cooper Union, were awarded the Bronze Saul Walker API Bootstrap award at the 143rd AES Convention Student Design Competition in New York for their work on a cost-effective ambisonics microphone designed with MEMS capsules (full paper can be found here).

Congratulations to Gabriel Zalles, Yigal Kamel, Ian Anderson, MingYang Lee, Chris Neil, Monique Henry, Spencer Cappiello, and Dr. Charlie Mydlarz!

Welcome to the IAG

Hi! If this is your first time on the website, welcome! We are very glad you are here. The IAG (or IAIG) is a group of NYU students who share a passion for ambisonics, wavefield synthesis, binaural sound, and all other techniques capable of realistically reproducing sound for games, videos, or music. If you are new to the group and want to join our community, check out our Slack channel at immersive-audio.slack.com or the mailing list immersive-audio@nyu.edu.

That’s all for this post!

Sincerely,

The NYU IAG!