Coding and Streaming of Point Cloud Video
PROJECT SUMMARY
Volumetric video streaming will take telepresence to the next level by delivering full 3D information of the remote scene and enabling six-degree-of-freedom viewpoint selection for a truly immersive visual experience. With recent advances in the key enabling technologies, we are now on the verge of completing the puzzle of teleporting holograms of real-world humans, creatures, and objects through the global Internet to realize the full potential of Virtual/Augmented/Mixed Reality. Streaming volumetric video over the Internet requires significantly higher bandwidth and lower latency than traditional 2D video; processing volumetric video also incurs heavy computation loads on both the source and the receiver sides.
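To put the bandwidth gap in perspective, consider a back-of-envelope estimate in Python; the per-point and per-frame figures below are illustrative assumptions, not measurements from this project.

    # Rough estimate of the raw (uncompressed) bitrate of point cloud video.
    # Assumed figures (hypothetical): 1 million points per frame, 30 frames
    # per second, 9 bytes per point (3 x 16-bit coordinates + 3 x 8-bit RGB).
    points_per_frame = 1_000_000
    fps = 30
    bytes_per_point = 9
    bits_per_second = points_per_frame * fps * bytes_per_point * 8
    print(f"raw bitrate: {bits_per_second / 1e9:.2f} Gbps")  # ~2.16 Gbps

Even under these modest assumptions, the raw stream runs hundreds of times the bitrate of a typical HD 2D video stream, which is why compression and view-adaptive delivery are both essential.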
We propose an interdisciplinary research plan to holistically address the communication and computation challenges of point cloud video (PCV) by jointly designing coding, streaming, and edge processing strategies. We develop object-centric, view-adaptive, progressive, and edge-aware designs to deliver robust, high-quality viewer Quality-of-Experience (QoE) in the face of network and viewer dynamics.
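To illustrate the view-adaptive, progressive idea, the sketch below greedily spends a bandwidth budget on per-tile quality upgrades, weighting each upgrade by the tile's predicted probability of falling inside the viewer's field of view. The tile representation, field names, and utility values are hypothetical; this is a common greedy baseline for tiled streaming, not the project's actual algorithm.

    import heapq

    # Greedy FoV-weighted rate allocation across point cloud tiles (sketch).
    # Each tile offers several quality levels; level i+1 refines level i.
    def allocate(tiles, budget_bits):
        levels = [0] * len(tiles)  # every tile starts at its base level
        heap = []                  # max-heap on expected utility per bit
        for i, t in enumerate(tiles):
            if len(t["level_bits"]) > 1:
                gain = t["view_prob"] * (t["level_utility"][1] - t["level_utility"][0])
                heapq.heappush(heap, (-gain / t["level_bits"][1], i))
        while heap and budget_bits > 0:
            _, i = heapq.heappop(heap)
            t, nxt = tiles[i], levels[i] + 1
            cost = t["level_bits"][nxt]
            if cost > budget_bits:
                continue           # cannot afford this upgrade; try the next
            budget_bits -= cost
            levels[i] = nxt
            if nxt + 1 < len(t["level_bits"]):  # queue the next refinement
                gain = t["view_prob"] * (t["level_utility"][nxt + 1] - t["level_utility"][nxt])
                heapq.heappush(heap, (-gain / t["level_bits"][nxt + 1], i))
        return levels

Under this rule, tiles likely to be viewed soak up most of the budget while out-of-view tiles stay at their base level, which is the essence of FoV-adaptive streaming.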
This research consists of four research thrusts:
1. Develop efficient coding schemes for static and dynamic point clouds.
2. Develop FoV-adaptive, progressive PCV streaming strategies.
3. Develop accurate field-of-view prediction for PCV viewers (a minimal baseline sketch follows this list).
4. Develop a fully functional PCV streaming testbed and conduct modern dance education experiments by streaming PCVs of professional dancers to dance students, in both on-demand and live modes.
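As a concrete reference point for the field-of-view prediction thrust, here is a simple linear-extrapolation baseline over recent 6DoF viewport samples. The function name and interface are hypothetical; the project's predictors may be learned models rather than this baseline.

    import numpy as np

    # Least-squares linear extrapolation of a 6DoF viewport (x, y, z,
    # yaw, pitch, roll) from recent samples (baseline sketch).
    def predict_fov(history, timestamps, horizon):
        # history: (N, 6) past viewport samples; timestamps: (N,) seconds;
        # horizon: how far ahead to predict, in seconds.
        t = np.asarray(timestamps, dtype=float)
        A = np.stack([t, np.ones_like(t)], axis=1)      # fit v = a*t + b
        coeff, *_ = np.linalg.lstsq(A, np.asarray(history, dtype=float), rcond=None)
        return coeff[0] * (t[-1] + horizon) + coeff[1]  # (6,) predicted viewport

How far ahead such a predictor stays accurate determines how aggressively the streamer can prune or down-weight out-of-view tiles.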
PARTICIPANTS
Yong Liu, Principal Investigator
Yao Wang, Principal Investigator
R. Luke Dubois, Principal Investigator
Todd Bryant, Senior Personnel
Tingyu Fan, PhD Student
Ran Gong, PhD Student
Yueyu Hu, PhD Student
Chen Li, PhD Student
Tongyu Zong, PhD Student
SPONSOR
This material is based upon work supported by the National Science Foundation under Grant No. 2312839.
Press Release about this Project
Static and Dynamic Point Cloud Coding
FoV-Adaptive Point Cloud Video Streaming
Field-of-View Prediction
Volumetric Video Capture of Dancers