Latest News
News about NYU Vision Lab.
New Paper Online in Nature Machine Intelligence
We have a new paper, “A neural speech decoding framework leveraging deep learning and speech synthesis,” online in Nature Machine Intelligence! Also, check out the code here. The work is jointly supervised by Prof. Yao Wang and Prof. Adeen Flinker. It is supported by the National Science Foundation under Grant Nos. IIS-1912286 and 2309057 (Y.W., A.F.) […]
New Paper about Interactive 360 degree Video Streaming came out
Our paper, Interactive 360 degree Video Streaming Using FoV-Adaptive Coding with Temporal Prediction, is now out!
New Paper about Multi-Subject Neural Speech Decoding came out
Our new paper, Subject-Agnostic Transformer-Based Neural Speech Decoding from Surface and Depth Electrode Signals, proposes a method for leveraging data from multiple subjects and irregularly structured electrodes for neural speech decoding.