Dynamic and Predictive Streaming of 360 Degree Video

PROJECT SUMMARY

Virtual Reality (VR) and Augmented Reality (AR) applications are projected to be the next wave of “Killer Apps” on the future Internet. Many VR/AR applications involve streaming of 360-degree video scenes. Compared with traditional video streaming, 360-degree video streaming faces unique new challenges. To deliver an immersive experience, 360-degree video requires much higher bandwidth and is subject to a much tighter delay deadline. Additionally, a user can change her Field-of-View (FoV) at any time during the streaming session. It is therefore very challenging to deliver a high level of user Quality-of-Experience (QoE) in the face of time-varying network conditions and user FoVs.
 
We propose novel joint coding-and-delivery solutions for high-quality and robust 360-degree video streaming. Our research plan consists of four interdependent research thrusts:
1. One-tier Interactive Streaming: We propose a novel low-latency FoV-adaptive coding structure that simultaneously reduces coding/decoding delays and frame-size burstiness. We study real-time joint optimization of streaming rate adaptation and video coding bit allocation to maximize the rendered video quality (a toy bit-allocation sketch follows this list). We further propose to develop adaptive coding-and-delivery strategies under a neural-network-based reinforcement learning framework.
2. Two-tier On-demand Streaming: We propose an innovative chunk-based streaming framework featuring two-tier video coding and delivery: the base tier prefetches chunks covering the full 360-degree view at low quality into a long buffer, while the enhancement tier requests chunks covering the predicted user view window at high quality. We study optimal rate allocation between the two tiers to strike the desired balance between rendered video quality and streaming robustness to bandwidth and FoV dynamics (a toy chunk-scheduling sketch follows this list). We develop chunk-based video rate selection and view-span adaptation algorithms using both model-based feedback control loops and model-free reinforcement learning.
3. Intelligent Field-of-View Prediction: We propose to develop effective algorithms for predicting user FoVs based on the past FoV trajectory and the audio-visual content, using novel deep learning architectures (a baseline prediction sketch follows this list). We further propose to study personalized FoV prediction based on other users’ view trajectories under the framework of recommender systems.
4. Prototype and Experimentation: We will develop fully functional 360 video streaming prototypes and conduct experiments with real users in controlled and real network environments to validate and improve the proposed designs. The live demo runs in a Chrome browser on a computer or an Android device.
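
To make the Thrust 1 bit-allocation idea concrete, here is a minimal sketch, assuming tile-based coding and a logarithmic quality model q(r) = log r; the function name, the per-tile floor, and the quality model are illustrative assumptions, not the optimized allocator from our papers. Maximizing the expected rendered quality sum_i p_i log(r_i) under a total budget sum_i r_i = R yields the proportional rule r_i = p_i R.

    # Hypothetical sketch: proportional tile bit allocation under a log
    # quality model. view_probs[i] is the predicted probability that tile i
    # falls inside the user's FoV; total_bits is the frame's bit budget
    # chosen by the rate-adaptation loop.
    def allocate_tile_bits(view_probs, total_bits, floor_frac=0.01):
        n = len(view_probs)
        assert n * floor_frac < 1.0, "floor cannot exhaust the budget"
        floor = floor_frac * total_bits      # minimum bits per tile, so a
        budget = total_bits - n * floor      # mispredicted FoV still decodes
        total_p = sum(view_probs)
        # Splitting the remainder proportionally to view probability is the
        # Lagrangian optimum of sum_i p_i * log(r_i) s.t. sum_i r_i = budget.
        return [floor + budget * p / total_p for p in view_probs]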
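Thrust 2’s two-tier delivery can likewise be summarized as a scheduling loop. The sketch below is a heuristic stand-in for the model-based feedback control and reinforcement learning policies we actually study; the function name, buffer thresholds, and rate ladder are invented for illustration.

    # Hypothetical two-tier chunk scheduler. Buffer levels are in seconds,
    # rates in Mbps; thresholds and the rate ladder are illustrative only.
    def next_request(base_buf_s, enh_buf_s, bw_est_mbps,
                     base_target_s=30.0, enh_target_s=4.0,
                     base_rate=1.0, enh_rates=(2.0, 4.0, 8.0)):
        # 1) Keep the long base-tier buffer full first: it covers the full
        #    360-degree view and protects against stalls when bandwidth
        #    drops or the FoV prediction misses.
        if base_buf_s < base_target_s:
            return ("base", base_rate)
        # 2) Then upgrade the predicted view window just in time, at the
        #    highest enhancement rate the spare bandwidth can sustain.
        if enh_buf_s < enh_target_s:
            spare = bw_est_mbps - base_rate
            feasible = [r for r in enh_rates if r <= spare]
            return ("enhancement", max(feasible) if feasible else enh_rates[0])
        return ("idle", 0.0)                 # both buffers healthy; wait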
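For Thrust 3, even a trajectory-only predictor illustrates the structure of the problem. The PyTorch sketch below is a bare-bones baseline, not one of the proposed architectures: it regresses the view direction at a future horizon from past head-orientation samples, with yaw encoded as (sin, cos) so the prediction is not confused by the wrap-around at +/-180 degrees.

    # Bare-bones FoV-prediction baseline (names and sizes are assumptions).
    import torch
    import torch.nn as nn

    class FoVPredictor(nn.Module):
        def __init__(self, hidden=64):
            super().__init__()
            # Each time step: (sin(yaw), cos(yaw), pitch) of the view center.
            self.lstm = nn.LSTM(input_size=3, hidden_size=hidden,
                                batch_first=True)
            self.head = nn.Linear(hidden, 3)

        def forward(self, past):              # past: (batch, T, 3)
            out, _ = self.lstm(past)
            return self.head(out[:, -1])      # view direction at the horizon

    # Usage: train with MSE against the actual future view direction.
    model = FoVPredictor()
    pred = model(torch.randn(8, 30, 3))       # 8 users, 30 past samples each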
 

PARTICIPANTS

Yao Wang, Principal Investigator
Yong Liu, Principal Investigator
Liyang Sun, Ph.D. student
Yixiang Mao, Ph.D. student
Tongyu Zong, Ph.D. student
Fanyi Duanmu, Previous Ph.D. student
Chenge Li, Previous Ph.D. student

SPONSOR

This material is based upon work supported by the National Science Foundation.

PUBLICATIONS

Liyang Sun, Yixiang Mao, Tongyu Zong, Yong Liu, and Yao Wang, “Flocking-based Live Streaming of 360-degree Video,” ACM Multimedia Systems Conference (MMSys), 2020.

Yixiang Mao, Liyang Sun, Yong Liu, and Yao Wang, “Low-latency FoV-adaptive Coding and Streaming for Interactive 360° Video Streaming,” ACM International Conference on Multimedia (MM), 2020.

Liyang Sun, Fanyi Duanmu, Yong Liu, Yao Wang, Yihua Ye, Hang Shi, and David Dai, “A Two-Tier System for On-Demand Streaming of 360 Degree Video over Dynamic Networks,” IEEE Journal on Emerging and Selected Topics in Circuits and Systems (JETCAS), 2019.

Chenge Li, Weixi Zhang, Yong Liu, and Yao Wang, “Very Long Term Field of View Prediction for 360-degree Video Streaming,” invited paper, IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), 2019.

Liyang Sun, Fanyi Duanmu, Yong Liu, Yao Wang, Yihua Ye, Hang Shi, and David Dai, “Multi-path Multi-tier 360-degree Video Streaming in 5G Networks,” ACM Multimedia Systems Conference (MMSys), Amsterdam, Netherlands, 2018.

Fanyi Duanmu, Yixiang Mao, Shuai Liu, Sumanth Srinivasan, and Yao Wang, “A Subjective Study of Viewer Navigation Behaviors When Watching 360-degree Videos on Computers,” IEEE International Conference on Multimedia and Expo (ICME), San Diego, California, USA, 2018.

Yuwen He, Xiaoyu Xiu, Philippe Hanhart, Yan Ye, Fanyi Duanmu, and Yao Wang, “Content-Adaptive 360-degree Video Coding Using Hybrid Cubemap Projection,” IEEE Picture Coding Symposium (PCS), San Francisco, California, USA, 2018.

Fanyi Duanmu, Yuwen He, Xiaoyu Xiu, Philippe Hanhart, Yan Ye, and Yao Wang, “Hybrid Cubemap Projection Format for 360-degree Video Coding,” IEEE Data Compression Conference (DCC), Snowbird, Utah, USA, 2018.

Fanyi Duanmu, Eymen Kurdoglu, S. Amir Hosseini, Yong Liu, and Yao Wang, “Prioritized Buffer Control in Two-tier 360 Video Streaming,” ACM SIGCOMM Workshop on VR/AR Network, 2017.

Fanyi Duanmu, Eymen Kurdoglu, Yong Liu, and Yao Wang, “View Direction and Bandwidth Adaptive 360 Degree Video Streaming Using a Two-Tier System,” IEEE International Symposium on Circuits and Systems (ISCAS), 2017.