- Our paper “Deep learning with diffusion MRI as in vivo microscope reveals sex-related differences in human white matter microstructure” was published in Scientific Reports (paper link).
- Our paper “A neural speech decoding framework leveraging deep learning and speech synthesis” was published in Nature Machine Intelligence (paper link). The related press release can be found here.
- We have a new press report about our project “Object-Centric, View-Adaptive and Progressive Coding and Streaming of Point Cloud Video” (jointly with Prof. Yong Liu and Prof. Luke DuBois).
-
- Our paper “Distributed feedforward and feedback cortical processing supports human speech production” was published in PNAS (paper link). The press release about our work can be found here.
-
- We have received an NSF award (jointly with Prof. Adeen Flinker of NYU School of Medicine) from the NSF Collaborative Research in Computational Neuroscience (CRCNS) program. The project title is “Novel computational approaches for neural speech prostheses and causal dynamics of language processing”. Funding period: Oct. 2023 – Sept. 2026.
-
- We have received an NSF award (jointly with Prof. Yong Liu and Prof. Luke DuBois) on “Object-Centric, View-Adaptive and Progressive Coding and Streaming of Point Cloud Video”. Funding period: Oct. 2023 – Aug. 2027.
-
- Professor Wang gave a keynote presentation, “Learnt compression for visual analytics on the edge”, at the IEEE Workshop on Coding for Machines, held in conjunction with ICME 2023.
-
- Professor Yao Wang gave a keynote talk titled “Compression for Scene Perception and Understanding: Deep-Learning Approaches” at the 2022 Picture Coding Symposium, December 7–9, 2022.
-
- We have received an NSF award (jointly with Prof. John Ross Rizzo of NYU School of Medicine and Professors Yi Fang, Maurizio Porfiri, and Sundeep Rangan of NYU Tandon) from the NSF Smart and Connected Communities (S&CC) program. The project title is “SCC-IRG Track 2: Transportation Gaps and Disability-Related Unemployment: Smarter Cities and Wearables combating Commuting Challenges for the Visually Impaired”. The funding period is Oct. 2020 – Sept. 2023.
-
- We have received an NSF award (jointly with Professors Siddharth Garg and Elza Erkip) from the NSF/Intel Partnership on Machine Learning for Wireless Networking Systems (MLWiNS) program. The project title is “MLWiNS: Resource Constrained Mobile Data Analytics Assisted by the Wireless Edge”. The funding period is July 2020 – Aug. 2023.
-
- We have received an NSF award (jointly with Professor S. Farokh Atashzar) from the NSF Smart and Connected Health program. The project title is “RAPID: SCH: Smart Wearable COVID19 BioTracker Necklace: Remote Assessment and Monitoring of Symptoms for Early Diagnosis, Continual Monitoring, and Prediction of Adverse Event”. The funding period is June 2020 – May 2021.
-
- We have received an NSF award (jointly with Prof. Adeen Flinker of NYU School of Medicine) from the NSF Collaborative Research in Computational Neuroscience (CRCNS) program. The project title is “Understanding Cortical Networks Related to Speech Using Deep Learning on ECOG Data”. The funding period is Oct. 2019 – Sept. 2022.
-
- We received a Google Faculty Research Award for a project on deep learning-based image and video coding. The award was announced in Feb. 2019.
-
- Our paper “Layered Image Compression using Scalable Auto-encoder”, by Chuanmin Jia, Zhaoyi Liu, Yao Wang, Siwei Ma, and Wen Gao, received the Best Student Paper Award at the 2nd IEEE International Conference on Multimedia Information Processing and Retrieval (MIPR 2019). Paper link.
-
- We have started a new project on automatic scene tagging for TV content. Funded by Viacom Inc./NYC Media Lab. Oct. 2018 – Jan. 2019.
-
- We have received an NSF award (jointly with Prof. Yong Liu) for Dynamic Predictive Streaming of 360 Degree Video. The funding period is Sept. 2018 – Aug. 2021.
-
- Professor Wang gave a keynote presentation at the Picture Coding Symposium, San Francisco, June 2018. The presentation title is “360 Degree Video Streaming”. Presentation file.
-
- Professor Wang’s team developed a machine learning-based method for early detection and treatment of lymphedema, in collaboration with Professor Mei Fu of NYU College of Nursing. You can read more about this project here.
-
- We have received an NIH R01 award (jointly with Dr. Jeffrey Ketterling of the Lizzi Center for Biomedical Engineering, Riverside Research, and Professor Dan Turnbull of the Skirball Institute of Biomolecular Medicine, NYU School of Medicine) for a project titled “In utero mouse embryo phenotyping with high-frequency ultrasound,” Sept. 2016 – June 2020.
Our team will focus on developing advanced image analysis and machine learning methods to analyze brain development in mouse embryos and to characterize mutation-induced defects from high-frequency ultrasound images.
-
- We have received an NIH R01 award (jointly with Professor Mei Fu of NYU College of Nursing) for a project titled “SCH: EXP: Improving early detection and intervention of lymphedema,” Sept. 2016 – Aug. 2019. The project develops a Kinect-enhanced lymphedema intervention training system, investigating the use of machine learning and video analysis for detection and intervention of breast cancer-related lymphedema. This project is part of the NSF/NIH joint program on Smart and Connected Health.
-
- We have received an NSF award (jointly with Prof. Eric Brenner of the Department of Biology, NYU College of Arts and Science) for a project titled “PlantTracer: A time-lapse App for students to visualize, quantify and report novel mutants in plant motion,” Sept. 2016 – Aug. 2019.
The project will develop an educational tool that students can use to visualize and quantify plant motion in time-lapse video and to compare the motion patterns of normal and mutant plants.
-
- We have received an NIH R21 award (jointly with Prof. Yvonne Lui of the Center for Biomedical Imaging, NYU School of Medicine) for a project titled “Pattern Classification Using Magnetic Resonance Imaging in Traumatic Brain Injury,” July 2016 – June 2018.
The goal of this project is to develop machine learning techniques for the detection and outcome prediction of mild traumatic brain injury using advanced MRI features.
-
- We have received gift funding from Cisco Systems to support a project titled “Analysis and summarization of multi-view surveillance video,” May 2016 – Aug. 2017.
-
- Professor Wang’s team was selected as one of the winners of the Verizon Open Innovation Challenge Grants. The project is titled “Witness Video Summarization: A Collective Journalistic Experience,” and the team members are Yao Wang, Xin Feng, Fanyi Duanmu, and Shervin Minaee. You can read more about this project here!
-
- MLB Advanced Media (MLBAM) Automatic Video Annotation Competition (“MLBAM Annotation Challenge”) Grand Prize, 2015: Yuanyi Xue, Yilin Song, Andy Chiang, and Chenge Li.
-
- IEEE International Ultrasonics Symposium Best Paper Award finalist, 2015: Jen-Wei Kuo and Yao Wang, “Automatic Mouse Embryo Brain Ventricle Segmentation, Gestation Stage Estimation, and Mutant Detection from 3D 40-MHz Ultrasound Data”.
-
- 49th Asilomar Conference on Signals, Systems, and Computers Best Paper Award finalist, Nov. 2015: Shervin Minaee, Amirali Abdolrashidi, and Yao Wang, “Screen Content Image Segmentation Using Sparse-Smooth Decomposition”.
-
- 4th Greater New York Area Multimedia and Vision Meeting Best Student Poster Award, 2014: Jen-Wei Kuo, Xuan Zhao, and Yao Wang, “Nested Graph Cut for Automatic Segmentation of Nested Objects and Application to Ultrasound Images of Mouse Embryos”.
-
- Yao Wang, keynote speaker at the Workshop on Communication and Networking Techniques in Contemporary Video (in conjunction with INFOCOM), April 2014, Toronto, ON. Talk title: “Design of low-delay video applications: Optimizing perceptual quality and error resilience”.
-
- Yao Wang, keynote speaker at the 18th International Packet Video Workshop, Dec. 2010, Hong Kong. Talk title: “Enhancing wireless video services through cooperative communications”.
-
- IEEE Communications Society Multimedia Communication Technical Committee Best Paper Award in 2011: Zhengye Liu, Yanming Shen, Keith W. Ross, Shivendra S. Panwar, and Yao Wang, “LayerP2P: Using Layered Video Chunks in P2P Live Streaming,” IEEE Transactions on Multimedia (TMM), pp. 1340–1352, November 2009.
-
- IEEE Communications Society Leonard G. Abraham Prize Paper Award in the Field of Communications Systems in 2004: S. Mao, S. Lin, S. S. Panwar, Y. Wang, and E. Celebi, “Video Transport over Ad Hoc Networks: Multistream Coding with Multipath Transport,” IEEE Journal on Selected Areas in Communications, Special Issue on Recent Advances in Wireless Multimedia, vol. 21, no. 10, pp. 1721–1736, Dec. 2003.