Guntuku, Sharath Chandra; Scott, Michael; Huan, Yang; Ghinea, Gheorghita; Lin, Weisi (2015)
Publisher: IEEE
Languages: English
Types: Conference object
Subjects: cs, Human factors, Video signal processing, Image sequences, Multimedia communication
Perception of quality and affect are subjective, driven by a complex interplay between system and human factors. Is it, however, possible to model these factors to predict subjective perception? To pursue this question, broader collaboration is needed to sample all aspects of personality, culture, and other human factors, and an appropriate dataset is needed to integrate such efforts. Here, the CP-QAE-I is proposed: a video dataset containing 144 video sequences based on 12 short movie clips, which vary by frame rate, frame dimension, bit-rate, and affect. An evaluation by 76 participants drawn from the United Kingdom, Singapore, India, and China suggests adequate distinction between the video sequences in terms of perceived quality as well as positive and negative affect. Nationality also emerged as a significant predictor, supporting the rationale for further study. By sharing the dataset, this paper aims to promote work modeling human factors in multimedia perception.

A part of this work was carried out at the Rapid-Rich Object Search (ROSE) Lab at the Nanyang Technological University, Singapore. The ROSE Lab is supported by a grant from the Singapore National Research Foundation. This grant is administered by the Interactive & Digital Media Programme Office at the Media Development Authority, Singapore.