Perceptual Quality Metrics Considering Effect of Spatial, Temporal and Amplitude Resolution

Development of objective quality metrics that can automatically and accurately measure perceptual video quality is becoming increasingly important as video applications become pervasive. Prior work in video quality assessment is mainly concerned with applications where the frame rate of the video is fixed: the objective quality metric compares each pair of corresponding frames to derive a similarity or distortion score between two videos with the same frame rate. In many emerging applications targeting heterogeneous users with different display devices and/or different communication links, the same video content may be accessed at varying frame rates, frame sizes or quantization levels (assuming the video is coded into a scalable stream with spatial/temporal/SNR scalability). In applications permitting only very low bit rate video, one often has to decide whether to code an original high frame-rate video at the same frame rate but with significant quantization, or to code it at a lower frame rate with less quantization. In all of the preceding scenarios, as well as many others, it is important to be able to objectively quantify the perceptual quality of a video that has been subjected to both quantization and frame rate reduction.

We conduct subjective tests to evaluate how frame rate and quantization artifacts influence perceived video quality. Based on these results, we propose a quality metric, a function of PSNR and frame rate, that is the product of a PSNR-based metric and a temporal correction factor (TCF). The first term, a sigmoidal function, assesses the quality of the video based on the average PSNR of the frames included in the video (excluding interpolated frames); the TCF, an inverted falling exponential, reduces the quality assigned by the first term according to the actual frame rate.
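For concreteness, below is a minimal sketch of this two-term structure, assuming a logistic sigmoid for the PSNR term and an inverted falling exponential for the TCF; the parameter values (slope, sigmoid midpoint, decay constant, maximum frame rate) are illustrative placeholders, not the fitted values reported in our papers.

    # Minimal sketch of the two-term model: a sigmoid in PSNR multiplied by an
    # inverted falling exponential temporal correction factor (TCF).
    # The parameter values (s, psnr_mid, a, max_frame_rate) are illustrative
    # assumptions, not the fitted values from the cited papers.
    import math

    def quantization_quality(psnr, s=0.34, psnr_mid=30.0):
        """Sigmoidal quality term driven by the average PSNR of the coded
        (non-interpolated) frames; models quality at the full frame rate."""
        return 1.0 / (1.0 + math.exp(-s * (psnr - psnr_mid)))

    def temporal_correction_factor(frame_rate, max_frame_rate=30.0, a=7.0):
        """Inverted falling exponential in frame rate: equals 1.0 at the full
        frame rate and decays toward 0 as the frame rate is reduced."""
        return (1.0 - math.exp(-a * frame_rate / max_frame_rate)) / (1.0 - math.exp(-a))

    def perceptual_quality(psnr, frame_rate, max_frame_rate=30.0):
        """Overall quality: product of the PSNR-based term and the TCF."""
        return quantization_quality(psnr) * temporal_correction_factor(frame_rate, max_frame_rate)

    # Example trade-off: a 15 fps version with higher PSNR vs. a 30 fps version
    # with coarser quantization (lower PSNR).
    print(perceptual_quality(psnr=38.0, frame_rate=15.0))
    print(perceptual_quality(psnr=33.0, frame_rate=30.0))

At the full frame rate the TCF equals one, so the overall score reduces to the PSNR-based term alone; lowering the frame rate scales that score down.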

Our model has only two parameters and correlates very well with the subjective ratings obtained in our tests. Each function has a single parameter that is video-content dependent, and we are currently studying the dependency of these parameters on content features that can be easily derived from the underlying video. The proposed model is shown to be highly accurate compared to the subjective ratings for a large set of test sequences. We note that it is possible to replace the sigmoidal function with other metrics that can more accurately assess the quality of a video at the full frame rate. Also, although the proposed metric is only validated on SVC video with temporal and quality scalability, we expect it to be applicable to any coded video.
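As a rough illustration of how the two content-dependent parameters might be fit for a given sequence, the hypothetical sketch below uses nonlinear least squares against normalized subjective ratings; the rating values, parameter names and starting guesses are invented for illustration and are not data or results from our tests.

    # Hypothetical fitting sketch (not our actual fitting code): estimate the two
    # content-dependent parameters (sigmoid slope s, TCF decay a) from normalized
    # mean opinion scores (MOS) using nonlinear least squares.
    import numpy as np
    from scipy.optimize import curve_fit

    def model(x, s, a, psnr_mid=30.0, max_fps=30.0):
        """Sigmoid in PSNR times inverted falling exponential in frame rate."""
        psnr, fps = x
        q_quant = 1.0 / (1.0 + np.exp(-s * (psnr - psnr_mid)))
        tcf = (1.0 - np.exp(-a * fps / max_fps)) / (1.0 - np.exp(-a))
        return q_quant * tcf

    # Invented example ratings for one sequence:
    # (average PSNR, frame rate) -> normalized MOS in [0, 1].
    psnr = np.array([40.0, 36.0, 32.0, 40.0, 36.0, 32.0])
    fps = np.array([30.0, 30.0, 30.0, 15.0, 15.0, 15.0])
    mos = np.array([0.95, 0.85, 0.60, 0.80, 0.70, 0.50])

    (s_hat, a_hat), _ = curve_fit(model, (psnr, fps), mos, p0=[0.3, 5.0])
    print(f"fitted s = {s_hat:.3f}, a = {a_hat:.3f}")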

This project is supported by NSF and CATT.

Related Reports and Publications

 
  1. Y.-F. Ou, W. Lin, H. Zeng and Y. Wang, “Perceptual Quality of Video with Frame Rate and Quantization Variation: A Subjective Study and Analytical Modeling”, submitted to IEEE Transactions on Multimedia, 2012.
     
  2. Y.-F. Ou, Y. Xue and Y. Wang, “Q-STAR: A Perceptual Video Quality Model for Mobile Platforms Considering Impact of Spatial, Temporal, and Amplitude Resolutions”, submitted to IEEE JSAC: QoE-Aware Wireless Multimedia Systems, 2012.
     
  3. Y.-F. Ou, Y. Xue and Y. Wang, “Q-STAR: A Perceptual Video Quality Model for Mobile Platforms Considering Impact of Spatial, Temporal, and Amplitude Resolutions”, Technical Report, Vision Lab, Polytechnic Institute of NYU, 2011.
     
  4. Y. Xue, Y.-F. Ou and Y. Wang, “Perceptual Video Quality Comparison: Single-Layer vs. Scalable Bitstream”, Technical Report, Vision Lab, Polytechnic Institute of NYU, 2011.
     
  5. Y.-F. Ou, Z. Ma and Y. Wang, “A Novel Quality Metric for Compressed Video Considering Both Frame Rate and Quantization Artifacts”, in Proceedings of VPQM, Scottsdale, AZ, 2009.
     
  6. Y.-F. Ou, T. Liu, Z. Zhao, Z. Ma and Y. Wang, “Modeling the Impact of Frame Rate on Perceptual Quality of Video”, in Proceedings of IEEE ICIP, San Diego, CA, October 2008.