Impact of feedback on crowdsourced visual quality assessment with paired comparisons

For this study, we used a dataset of five source images from the JPEG AIC-3 database, compressed by two codecs, VVC Intra and JPEG XL. This resulted in 10 sequences of 11 cropped images each: the source at distortion level 0, followed by increasing distortion levels 1 to 10.
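For illustration, here is a minimal Python sketch of this layout. The source IDs and file-naming scheme are hypothetical; only the counts (five sources, two codecs, eleven levels per sequence) come from the description above.

# Hypothetical enumeration of the 10 sequences (5 sources x 2 codecs),
# each with 11 images: the source at level 0 plus distortion levels 1-10.
SOURCES = ["src1", "src2", "src3", "src4", "src5"]  # hypothetical IDs
CODECS = ["vvc_intra", "jpeg_xl"]
LEVELS = range(11)

sequences = {
    (src, codec): [f"{src}_{codec}_level{lvl:02d}.png" for lvl in LEVELS]
    for src in SOURCES
    for codec in CODECS
}

assert len(sequences) == 10                               # 10 sequences
assert all(len(seq) == 11 for seq in sequences.values())  # 11 images each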
We conducted our experiment on the MTurk platform. We posted one human intelligence task (HIT) with 200 assignments for 200 unique crowdworkers. In each assignment, besides the training questions, a worker had to answer 120 study and trap questions in random order, once with feedback and then again, in a different random order, without feedback.
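As a sketch of this schedule, the following Python snippet builds the two passes over the same question pool, assuming hypothetical question IDs; the training phase and the placement of trap questions are not modeled here.

import random

# 120 study and trap questions, shown twice per assignment:
# first in random order with feedback, then in a different
# random order without feedback.
questions = [f"q{i:03d}" for i in range(120)]  # hypothetical IDs

with_feedback = random.sample(questions, k=len(questions))
without_feedback = random.sample(questions, k=len(questions))

# Each entry: (question ID, whether feedback is shown)
schedule = [(q, True) for q in with_feedback] + \
           [(q, False) for q in without_feedback]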
Cite us

The Feedback IQA dataset is freely available to the research community. If you use our database in your research, please cite the following reference:
@inproceedings{jenadeleh2024impact,
  author={Jenadeleh, Mohsen and Heß, Alexander and Hviid Del Pin, Simon and Gamboa, Edwin and Hirth, Matthias and Saupe, Dietmar},
  booktitle={2024 16th International Conference on Quality of Multimedia Experience (QoMEX)},
  title={Impact of feedback on crowdsourced visual quality assessment with paired comparisons},
  year={2024},
  pages={ },
  doi={ }
}
Downloads
346.3 MB download
23.3 kB download
Subjective Data

The subjective data download includes three CSV files.
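A minimal loading sketch with pandas follows; the file names below are hypothetical placeholders, since the actual CSV names are not listed here, and should be replaced with the ones shipped in the download.

import pandas as pd

# Hypothetical file names -- substitute the actual three CSV files
# from the subjective data download.
csv_files = ["file1.csv", "file2.csv", "file3.csv"]

tables = {name: pd.read_csv(name) for name in csv_files}
for name, df in tables.items():
    print(name, df.shape)  # quick sanity check: rows x columns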