Research

Research interests

1. Spatial and 3D audio

a. Immersive, collaborative virtual/physical spaces
b. HRTF measurement and representation
c. HRTF preference
d. Spatial sound capture and reproduction
e. Binaural sound reproduction
f. Headphone reproduction enhancement

2. Sonification and auditory displays

3. Spatial radiation patterns of musical instruments

Current research projects

NYU Holodeck – Experiential SuperComputing Collaboration (NSF MRI Award #1626098)

The NYU Holodeck is an interactive instrument comprising a virtual and physical environment, focused on exploring immersive technologies for increased communication and collaboration among researchers, musicians, educators, and others.

For more information, visit http://www.nyu-x.org/holodeck.html

Related publications

Genovese, A., Gospodarek, M., Roginska, A. (2019) “Mixed Realities: a live collaborative musical performance”, Proceedings of the 5th International Conference on Spatial Audio (ICSA), Verband Deutscher Tonmeister, Ilmenau, Germany, September 26-28.

Genovese, A., Roginska, A. (2019) “HMDiR: an HRTF dataset measured on a mannequin wearing XR devices”, Proceedings of the AES International Conference on Immersive and Interactive Audio, York, UK, March 27-29.

Genovese, A., Roginska, A. (2019) “XR-DB: An HRTF database measured with HMD occlusions”, Proceedings of the AES International Conference on Immersive and Interactive Audio, York, UK, March 27-29.

Genovese, A., Zalles, G., Reardon, G., Roginska, A. (2018) “Acoustic perturbations in HRTFs measured on Mixed Reality Headsets”, Proceedings of the AES International Conference on Audio for Virtual and Augmented Reality, Redmond, WA, August 20-22.

CityTones

CityTones is a crowdsourced repository of soundscapes captured with immersive sound capture methods, to which the audio community can contribute. Each entry in the database includes descriptors covering the technical details of the recording, physical information about the location, subjective quality attributes, and sound content information (see the sketch below).
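
To illustrate the descriptor categories listed above, a single entry might be organized along the following lines. This is a hypothetical sketch in Python; the field names mirror the four categories but are not the repository’s actual schema.

```python
# Hypothetical CityTones-style entry; field names are illustrative only.
entry = {
    "technical": {          # technical details of the recording
        "microphone": "first-order Ambisonic",
        "format": "AmbiX, 48 kHz / 24-bit",
    },
    "physical": {           # physical information about the location
        "city": "New York",
        "coordinates": (40.7295, -73.9965),
        "time_of_day": "evening",
    },
    "subjective": {         # subjective quality attributes
        "pleasantness": 3,  # e.g., rated on a 1-5 scale
        "eventfulness": 4,
    },
    "content": {            # sound content information
        "sources": ["traffic", "voices", "birds"],
    },
}
```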

Related publications

Roginska, A., Lee, H., Mendez, A. E., Murakami, S. (2019) “CityTones: a Repository of Crowdsourced Annotated Soundfield Soundscapes”, Proceedings of the 146th Audio Engineering Society Convention, Dublin, Ireland, March 20-23.

Boren, B., Andreopoulou, A., Musick, M., Mohanraj, H., Roginska, A. (2013) “I Hear NY3D: Ambisonic Capture and Reproduction of an Urban Sound Environment”, Proceedings of the 135th Audio Engineering Society Convention, New York, NY, October 17-20.

Aswathanarayana, S., Roginska, A. (2014) “I Hear Bangalore3D: Capture and Reproduction of Urban Sounds of Bangalore Using an Ambisonic Microphone”, Proceedings of the 20th International Conference on Auditory Display, New York, NY, June 22-25.

Boren, B., Musick, M., Grossman, J., Roginska, A. (2014) “I Hear NY4D: Hybrid Acoustic and Augmented Auditory Display for Urban Soundscapes”, Proceedings of the 20th International Conference on Auditory Display, New York, NY, June 22-25.

HRTF preference and customization

The common method for acquiring an HRTF is acoustic measurement of the left- and right-ear impulse responses. The process can be time-consuming and prone to measurement error, and it requires costly specialized equipment and facilities (e.g., an anechoic chamber). This line of research aims to move away from acoustically measuring HRTFs for every person while maintaining the quality of performance associated with individually measured HRTFs.
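
To make the role of these impulse responses concrete, the sketch below renders a mono source from a single direction by convolving it with a measured left/right head-related impulse response (HRIR) pair. This is a minimal Python illustration assuming NumPy and SciPy; the HRIRs in the example are placeholders, not data from any dataset discussed here.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Render a mono signal at the direction encoded by one HRIR pair."""
    left = fftconvolve(mono, hrir_left)    # left-ear signal
    right = fftconvolve(mono, hrir_right)  # right-ear signal
    return np.stack([left, right], axis=-1)

# Placeholder example: one second of noise through a toy HRIR pair.
fs = 48000
mono = np.random.default_rng(0).standard_normal(fs)
hrir_l = np.zeros(256)
hrir_r = np.zeros(256)
hrir_l[10] = 1.0   # toy impulse responses; real HRIRs come from
hrir_r[38] = 0.8   # acoustic measurement (or customization, below)
binaural = render_binaural(mono, hrir_l, hrir_r)   # shape (N, 2)
```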

HRTFcontinuum

Past research has identified cues that are common across individual HRTFs, and has shown that localization accuracy can be as good, if not better, when using HRTFs other than those of the individual observer.

The goals of this research are:

  1. Investigate alternative approaches to improving spatial awareness and 3D audio quality.

  2. Identify qualitative characteristics that accurately describe a listener’s spatial audio impression (such as externalization, up/down discrimination, and front/back discrimination).

  3. Study whether there exist HRTF datasets that listeners prefer, through which they perceive an extended spatial image, and how this compares to the spatial impressions listeners receive with individually measured HRTFs.

  4. Define a small number of HRTF datasets that are representative of the listener population.

  5. Explore methods to customize HRTFs to improve spatial impression without using acoustic measurements (see the sketch below).
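
As one concrete example of goal 5, an interaural time difference (ITD) can be estimated from head geometry alone, with no acoustic measurement. The sketch below uses the classical Woodworth spherical-head model in Python; the head radius would come, for instance, from photographic measurements, and the default value here is only a commonly cited average.

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth spherical-head ITD estimate, in seconds.

    Valid for source azimuths within +/-90 degrees of straight ahead;
    positive azimuth (source to the right) yields a positive ITD.
    head_radius_m can be estimated from a photograph rather than
    measured acoustically; 0.0875 m is a commonly cited average.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (math.sin(theta) + theta)

# A source 45 degrees to the right arrives roughly 0.38 ms earlier
# at the right ear than at the left:
print(woodworth_itd(45.0) * 1e3, "ms")
```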

Related publications

Reardon, G., Genovese, A., Zalles, G., Flanagan, P., Roginska, A. (2018) “Evaluation of Binaural Renderers: Multidimensional Sound Quality Assessment”, Proceedings of the AES International Conference on Audio for Virtual and Augmented Reality, Redmond, WA, August 20-22.

Reardon, G., Roginska, A., Flanagan, P., Calle, J., Genovese, A., Zalles, G., Olko, M., Jerez, C. (2017) “Evaluation of Binaural Renderers: A Methodology”, Proceedings of the 143rd Audio Engineering Society Convention, New York, NY, October 18-21.

Miller, C., Juras, J., Genovese, A., Roginska, A. (2016) “Interaural Distances in Existing HRIR Repositories”, Proceedings of the AES Conference on Headphone Technology, Aalborg, Denmark, August 24-26.

Genovese, A., Juras, J., Miller, C., Roginska, A. (2016) “The Effect of Elevation on ITD Symmetry”, Proceedings of the AES Conference on Headphone Technology, Aalborg, Denmark, August 24-26.

Genovese, A., Juras, J., Miller, C., Roginska, A. (2016) “Investigation of ITD symmetry across existing databases of personalized HRTFs”, Proceedings of the 22nd International Conference on Auditory Display, Canberra, Australia, July 2-8.

Juras, J., Miller, C., Roginska, A. (2015) “Modeling ITDs Based on Photographic Head Information”, Proceedings of the 139th Audio Engineering Society Convention, New York, NY, October 29 – November 1.

Andreopoulou, A., Roginska, A. (2014) “Evaluating HRTF Similarity through Subjective Assessments: Factors that can Affect Judgment”, Proceedings of the 40th ICMC – 11th SMC Conference, Athens, Greece, September 14-20.

Andreopoulou, A., Roginska, A., Bello, J.P. (2013) “Reduced representations of HRTF datasets: A Discriminant Analysis Approach”, Proceedings of the 135th Audio Engineering Society Convention, New York, NY, October 17-20.

Majdak, P., Ziegelwanger, H., Wierstorf, H., Parmentier, M., Nicol, R., Noisternig, M., Roginska, A., Carpentier, T. (2013) “Spatially Oriented Format for Acoustics: A Data Exchange Format Representing Head-Related Transfer Functions”, Proceedings of the 134th Audio Engineering Society Convention, Rome, Italy, May 4-7.

McMullen, K., Wakefield, G.H., Roginska, A. (2012) “Subjective Selection of HRTF Spectral Coloration and ITD”, Proceedings of the 133rd Audio Engineering Society Convention, San Francisco, CA, October 26-29.

Boren, B., Roginska, A. (2011) “The Effects of Headphones on Listener HRTF Preference”, Proceedings of the 131st Audio Engineering Society Convention, New York, NY, October 20-23.

Boren, B., Roginska, A. (2011) “Multichannel Impulse Response Measurement in Matlab”, Proceedings of the 131st Audio Engineering Society Convention, New York, NY, October 20-23.

Andreopoulou, A., Roginska, A. (2011) “Towards the Creation of a Standardized HRTF Repository”, Proceedings of the 131st Audio Engineering Society Convention, New York, NY, October 20-23.

Andreopoulou, A., Roginska, A., Bello, J.P. (2011) “Observing the Clustering Tendencies of Head Related Transfer Function Databases”, Proceedings of the 131st Audio Engineering Society Convention, New York, NY, October 20-23.

Roginska, A., Wakefield, G. H., Santoro, T.S. (2010) “User Selected HRTFs: Reduced Complexity and Improved Perception”, Proceedings of the Undersea Human System Integration Symposium 2010, Providence, RI, July 27-29.

Roginska, A., Wakefield, G. H., Santoro, T.S. (2010) “Stimulus-dependent HRTF preference”, Proceedings of the 129th Audio Engineering Society Convention, San Francisco, CA, November 4-7.

Medical imaging sonification

The goal of this research is to use sonification to examine the vast datasets produced by current imaging techniques. The project aims to supplement existing diagnostic techniques (based on visual analysis and statistics), reduce inter-observer variability in diagnoses, and improve diagnostic capabilities. The research currently focuses on PET scans of the human brain and the diagnosis of moderate to severe Alzheimer’s disease.
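
As a generic illustration of the approach (not the Triple Tone method itself, whose details are in the publications below), a parameter-mapping sonification turns numeric features of a scan into audible tones. The Python sketch below maps hypothetical per-region intensity values to the amplitudes of a rising tone sequence.

```python
import numpy as np

def sonify_regions(intensities, base_freq=220.0, tone_dur=0.25, fs=44100):
    """Parameter-mapping sonification: one tone per scan region.

    Each value in `intensities` (normalized to 0..1, e.g., mean PET
    uptake per region) sets the amplitude of one tone; pitch rises
    with region index so regions stay distinguishable by ear.
    Returns the tones concatenated into a single mono signal.
    """
    t = np.arange(int(tone_dur * fs)) / fs
    tones = [
        amp * np.sin(2 * np.pi * base_freq * (i + 1) * t)
        for i, amp in enumerate(intensities)
    ]
    return np.concatenate(tones)

# Hypothetical normalized uptake values for four regions of interest:
signal = sonify_regions([0.9, 0.4, 0.75, 0.2])
```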

Related publications

Gionfrida, L., Roginska, A., Keary, J., Mohanraj, H., Friedman, K. (2016) “The Triple Tone Sonification Method to Enhance the Diagnosis of Alzheimer’s Dementia”, Proceedings of the 22nd International Conference on Auditory Display, Canberra, Australia, July 2-8.

Roginska, A., Friedman, K., Mohanraj, H. (2013) “Exploring sonification for augmenting brain scan data”, Proceedings of the 19th International Conference on Auditory Display, Lodz, Poland, July 6-10.

Roginska, A., Friedman, K., Mohanraj, H., Ballora, M. (2013) “Immersive sonification for displaying brain scan data”, Proceedings of the 6th International Conference on Health Informatics, Barcelona, Spain, February 11-14.