Perceptual Quality Assessment of User-generated-content Images and Videos
Author: Xiangxu Yu
Release: 2022
OCLC: 1341251873
Because of the increasing ease of image and video capture, millions of consumers create and upload large volumes of User-Generated-Content (UGC) images and videos to social and streaming media sites over the Internet. UGC images and videos are commonly captured by naive users with limited skills and imperfect technique, and tend to be afflicted by mixtures of highly diverse in-capture distortions. They are then often uploaded for sharing to cloud servers, where they are further compressed for storage and transmission.

My Ph.D. research first tackles the highly practical problem of predicting the quality of compressed images and videos when only (possibly severely) distorted UGC references are available. To address this problem, we develop a novel two-step image quality prediction concept called 2stepQA and a novel Video Quality Assessment (VQA) framework called 1stepVQA. We construct a first-of-its-kind image quality database dedicated to the design and testing of two-step IQA models, as well as a new dedicated video database, created by applying a realistic VMAF-guided perceptual rate-distortion optimization (RDO) criterion to produce realistically compressed versions of UGC source videos that typically contain pre-existing distortions.

Furthermore, we study automatic quality prediction on a particular UGC category: gaming videos. To do this, we create a novel UGC gaming video resource, the LIVE-YouTube Gaming video quality (LIVE-YT-Gaming) database, comprising 600 real UGC gaming videos. We also create a new VQA model specifically designed to succeed on UGC gaming videos, called the Gaming Video Quality Predictor (GAME-VQP).
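The two-step concept addresses the fact that a UGC "reference" is itself imperfect: the quality of a compressed version depends both on how good the reference was to begin with (a no-reference judgment) and on how faithfully compression preserved it (a full-reference judgment). A minimal sketch of this idea, assuming a simple multiplicative fusion of two normalized scores; the function name and the specific fusion rule here are illustrative assumptions, not the published 2stepQA formulation:

```python
def two_step_quality(nr_ref_quality: float, fr_fidelity: float) -> float:
    """Illustrative two-step fusion (hypothetical, not the exact 2stepQA model).

    nr_ref_quality: no-reference quality of the UGC reference, in [0, 1]
                    (1.0 = pristine-looking reference).
    fr_fidelity:    full-reference fidelity of the compressed version
                    relative to that reference, in [0, 1].
    Returns a combined prediction in [0, 1]: compression of an already
    degraded reference cannot yield a better-looking result.
    """
    if not (0.0 <= nr_ref_quality <= 1.0 and 0.0 <= fr_fidelity <= 1.0):
        raise ValueError("both scores must lie in [0, 1]")
    # Multiplicative fusion: either a poor reference or poor compression
    # fidelity drags the overall predicted quality down.
    return nr_ref_quality * fr_fidelity


# Usage: a lightly distorted reference compressed at high fidelity
# scores higher than a clean reference compressed harshly.
print(two_step_quality(0.9, 0.95))  # good reference, gentle compression
print(two_step_quality(1.0, 0.40))  # clean reference, harsh compression
```

The multiplicative form is just one plausible monotone combination; the actual model learns or calibrates how the two steps interact.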
GAME-VQP successfully captures the unique statistical characteristics of gaming videos by drawing on features designed under modified natural scene statistics (NSS) models, combined with gaming-specific features learned by a Convolutional Neural Network. We study the performance of 2stepQA, 1stepVQA, and GAME-VQP on the three new databases, respectively, and find that each outperforms other mainstream IQA/VQA models.
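NSS-based quality models of the kind GAME-VQP builds on typically start from mean-subtracted contrast-normalized (MSCN) coefficients, whose empirical distribution is close to Gaussian for undistorted natural content but deviates measurably for distorted or synthetically rendered (e.g., gaming) content. A minimal sketch of MSCN computation, using a plain box-window local mean and standard deviation rather than the Gaussian weighting used in published NSS models; window size and stabilizing constant are illustrative choices:

```python
def mscn(image, win=3, c=1.0):
    """Compute MSCN coefficients of a 2-D grayscale image (list of rows).

    Each pixel is normalized by its local mean and local standard
    deviation over a win x win box window (clipped at the borders),
    with constant c stabilizing near-flat regions.
    """
    h, w = len(image), len(image[0])
    r = win // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # Gather the box neighborhood, clipped to image bounds.
            vals = [image[ii][jj]
                    for ii in range(max(0, i - r), min(h, i + r + 1))
                    for jj in range(max(0, j - r), min(w, j + r + 1))]
            mu = sum(vals) / len(vals)
            sigma = (sum((v - mu) ** 2 for v in vals) / len(vals)) ** 0.5
            out[i][j] = (image[i][j] - mu) / (sigma + c)
    return out


# Usage: a perfectly flat patch has zero MSCN response everywhere,
# while any local structure produces nonzero coefficients.
flat = [[5.0] * 4 for _ in range(4)]
print(mscn(flat))
```

In a full NSS pipeline, the histogram of these coefficients is then summarized (e.g., by fitting a generalized Gaussian) and the fit parameters serve as quality-aware features; GAME-VQP complements such handcrafted features with CNN-learned ones.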