Image quality assessment datasets

KonIQ-10k

A subjectively annotated, ecologically valid IQA database of 10,073 images, on which we performed very large-scale crowdsourcing experiments to obtain reliable quality ratings from 1,467 crowd workers (1.2 million ratings in total).

KonIQ++

KonIQ++ extends KonIQ-10k by introducing distortion annotations for each image, enabling machine learning models to improve both quality score prediction (standard NR-IQA) and distortion identification.
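Crowdsourced ratings like these are typically aggregated into a mean opinion score (MOS) per image. A minimal sketch of that aggregation, using invented image IDs and rating values (not actual KonIQ-10k data):

```python
# Sketch: aggregating per-image crowd ratings into mean opinion scores (MOS).
# The image IDs and 5-point ratings below are made up for illustration.
from statistics import mean, stdev

# ratings[image_id] = list of ratings from different workers
ratings = {
    "img_0001": [4, 5, 3, 4, 4],
    "img_0002": [2, 1, 2, 3, 2],
}

def mos(scores):
    """Mean opinion score of one image."""
    return mean(scores)

def ci95(scores):
    """Half-width of a 95% confidence interval (normal approximation)."""
    return 1.96 * stdev(scores) / len(scores) ** 0.5

for image_id, scores in ratings.items():
    print(f"{image_id}: MOS = {mos(scores):.2f} +/- {ci95(scores):.2f}")
```

Large studies usually add worker screening and per-worker normalization on top of this; the sketch shows only the basic aggregation step.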
|
KADID-10k

The Konstanz Artificially Distorted Image quality Database (KADID-10k) is accompanied by the Konstanz Artificially Distorted Image Set (KADIS-700k). KADID-10k contains 81 pristine images, each degraded by 25 distortion types at 5 levels each. Each distorted image received 30 degradation category ratings (DCRs). KADIS-700k has 140,000 pristine images, each with 5 randomly chosen degradations.

IQA-Experts-300

Image database consisting of 300 images (200 natural + 100 artificially distorted) scored by 19 freelance experts (professional photographers). Includes quality scores as well as the presence and types of image degradations.
KonPatch-30k

Image quality assessment has been studied almost exclusively as a global image property. We extend the notion of quality to spatially restricted sub-regions of images by individually annotating image patches: 32,000 patches with 10 votes each.

StudyMB 2.0

StudyMB 2.0 consists of 8 sets of artifact-amplified images originating from the Middlebury interpolation benchmark. Subjective quality scores were collected via paired comparisons, with 20 ratings per image.
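Paired-comparison votes such as those collected for StudyMB 2.0 are commonly converted to an interval quality scale with the Bradley-Terry model. A minimal sketch using a simple fixed-point (MM) iteration and an invented win-count matrix (not the actual StudyMB data):

```python
# Sketch: Bradley-Terry scaling of paired-comparison counts into quality
# scores. The win-count matrix below is invented for illustration.
import math

# wins[i][j] = number of times condition i was preferred over condition j
wins = [
    [0, 8, 9],
    [2, 0, 7],
    [1, 3, 0],
]

def bradley_terry(wins, iters=200):
    """Estimate Bradley-Terry strengths by fixed-point (MM) iteration."""
    n = len(wins)
    p = [1.0] * n
    for _ in range(iters):
        for i in range(n):
            total_wins = sum(wins[i][j] for j in range(n) if j != i)
            den = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                      for j in range(n) if j != i)
            p[i] = total_wins / den
        s = sum(p)
        p = [x / s for x in p]  # normalize to fix the scale
    return p

scores = bradley_terry(wins)
quality = [math.log(x) for x in scores]  # log-strengths as quality scores
```

The log of the estimated strengths serves as an interval-scale quality score, so differences between conditions are directly comparable.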
KonFiG-IQA

We created the Konstanz Fine-Grained IQA dataset (KonFiG-IQA, Parts A and B), a subjectively annotated image quality dataset with 10 source images processed with 7 distortion types at 12 or 30 levels, evenly distributed over a span of 3 JND units. The dataset contains a large number of subjective responses to triplet comparisons and DCR ratings, obtained via crowdsourcing using the proposed artifact boosting techniques.

CogVQA

We compared a reference image containing 300 dots to 20 test images with a greater number of dots. We conducted a crowdsourcing study with 254 participants using a within-subject design, presenting paired comparisons in both AFC (two-alternative forced choice) and RFC (relaxed forced choice, with a "not sure" option) formats. Participants also completed the NASA-TLX questionnaire to evaluate the cognitive load of each testing condition.
KonX

KonX is a first-of-its-kind cross-resolution IQA database that makes it possible to disentangle and study the influence of image scaling on quality prediction models.

GFIQA-20k

VQA@Country
Video quality assessment datasets
KonVid-150k

A two-part subjectively annotated VQA database containing public-domain video sequences from YFCC100M with authentic 'in the wild' distortions, depicting a wide variety of content. KonVid-150k-A consists of over 150,000 videos annotated with 5 ratings each, while KonVid-150k-B is a set of 1,576 videos, each annotated at least 89 times.

KoNViD-1k

A subjectively annotated VQA database consisting of 1,200 public-domain video sequences, fairly sampled from the large public video dataset YFCC100M, aimed at authentic 'in the wild' distortions and depicting a wide variety of content.
KoSMo-1k
Just noticeable difference datasets
KonJND-1k

The Konstanz Just Noticeable Difference database (KonJND-1k) contains 1,008 source images compressed under two schemes, JPEG and BPG. A total of 503 unique workers participated in the study, yielding 61,030 PJND ratings, an average of 42 ratings per image.

Picturewise JND annotations

To estimate the picturewise just noticeable difference (PJND) efficiently, two subjective assessment methods, a slider-based method and a keystroke-based method, were introduced and compared with a traditional one, the relaxed binary search method. Using the flicker test, the PJNDs for 10 selected reference images from the MCL-JCI dataset were assessed at different distortion levels. A crowdsourcing study with side-by-side comparisons and forced choice was conducted as well.
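The binary-search idea behind PJND estimation can be sketched as follows: search over distortion levels, asking at each step whether the observer still sees a difference from the reference (e.g. in a flicker test). The observer below is simulated with a hypothetical threshold; a real "relaxed" search additionally re-asks near the boundary to tolerate response noise, which this single-pass sketch omits:

```python
# Sketch: binary search over distortion levels to locate a picturewise JND.
# The observer is simulated; real studies collect human responses.

def pjnd_binary_search(sees_difference, levels):
    """Return the weakest distortion level reported as visible.

    sees_difference(level) -> bool stands in for an observer's response.
    """
    lo, hi = 0, len(levels) - 1
    pjnd = levels[hi]  # assume the strongest distortion is visible
    while lo <= hi:
        mid = (lo + hi) // 2
        if sees_difference(levels[mid]):
            pjnd = levels[mid]
            hi = mid - 1   # look for a weaker, still-visible level
        else:
            lo = mid + 1   # too weak to notice; try stronger distortion
    return pjnd

# Simulated observer: differences become visible from level 17 upward
# (threshold chosen arbitrarily for illustration).
threshold = 17
levels = list(range(1, 52))
found = pjnd_binary_search(lambda q: q >= threshold, levels)
```

Each query costs one subjective judgment, so the search finds the threshold in O(log n) trials instead of rating every level.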
Get In Touch

Do you have questions about the databases, or are you interested in collaborating? Email us.