Guillaume Lavoué Homepage
Contact
glavoue@liris.cnrs.fr
Tel.: (33) 04 72 43 71 36

Subjective quality assessment of 3D models


We provide here two subjective quality assessment databases. Each package contains the 3D models of the corpus, the subjective opinion scores given by the observers, and the values of several objective metrics. These databases were used in the following work to compare existing perceptual metrics:

Lavoué G, Corsini M. A comparison of perceptually-based metrics for objective evaluation of geometry processing. IEEE Transactions on Multimedia. 2010;12(7):636-649.
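For readers who wish to reproduce this kind of comparison, here is a minimal Python sketch of the usual procedure: correlate the objective metric values with the Mean Opinion Scores (MOS) using rank and linear correlation coefficients. The arrays below are hypothetical placeholders; the real values come from the Excel files included in the archives.

import numpy as np
from scipy import stats

# Hypothetical example values; real ones come from the database spreadsheets.
mos = np.array([3.2, 1.5, 4.1, 2.8, 3.9])          # subjective quality scores
metric = np.array([0.41, 0.87, 0.12, 0.55, 0.20])  # objective distances

# Spearman rank correlation measures monotonic agreement with human judgment;
# Pearson measures linear agreement (often computed after a psychometric
# fitting step, omitted here for brevity).
rho, _ = stats.spearmanr(metric, mos)
r, _ = stats.pearsonr(metric, mos)
print("Spearman rho = %.3f, Pearson r = %.3f" % (rho, r))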

LIRIS/EPFL General-Purpose database


This database was created at EPFL, and the experiments were conducted at EPFL and at LIRIS, Université de Lyon. If you use it, please cite:

Lavoué G, Drelie Gelasca E, Dupont F, Baskurt A, Ebrahimi T. Perceptually driven 3D distance metrics with application to watermarking. In: Proceedings of SPIE, Vol. 6312. SPIE; 2006: 63120L–63120L-12.

  • 88 models, with between 40K and 50K vertices each, were generated from 4 reference objects. Two types of distortion (noise addition and smoothing) were applied with different strengths and at four locations: on the whole model, on smooth areas, on rough areas and on intermediate areas (a minimal sketch of these two distortion types is given after the download link below).
  • Subjective evaluations were conducted at a normal viewing distance, using an SSIS (Single Stimulus Impairment Scale) protocol with 12 observers.
  • A Microsoft Excel document giving all the subjective quality scores is included in the archive below. It also contains the Mean Opinion Scores after normalization and outlier removal. Another Microsoft Excel document provides the scores of several recent state-of-the-art perceptual metrics.
Download the LIRIS/EPFL General-Purpose database (90 MB zip file)
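As a rough illustration of the two distortion types mentioned above, the Python sketch below applies uniform random noise and simple Laplacian smoothing to an array of vertex positions. The exact distortion models and strengths used to build the database are those described in the SPIE 2006 paper; this is only a simplified, assumed version, and the neighbors structure (one sequence of neighbor indices per vertex) is a hypothetical input.

import numpy as np

def add_noise(vertices, strength, seed=None):
    # Displace each vertex by a uniform random offset scaled by 'strength'.
    rng = np.random.default_rng(seed)
    return vertices + strength * rng.uniform(-1.0, 1.0, size=vertices.shape)

def laplacian_smooth(vertices, neighbors, alpha=0.5, iterations=1):
    # Move each vertex a fraction 'alpha' toward the centroid of its
    # one-ring neighbors, repeated 'iterations' times.
    v = vertices.copy()
    for _ in range(iterations):
        centroids = np.array([v[list(nbrs)].mean(axis=0) for nbrs in neighbors])
        v = (1.0 - alpha) * v + alpha * centroids
    return v

Restricting either operation to smooth, rough or intermediate areas then amounts to applying it only to a subset of the vertex indices.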

LIRIS Masking database


The database was created at LIRIS, Université de Lyon. If you use it, please cite:

Lavoué G. A local roughness measure for 3D meshes and its application to visual masking. ACM Transactions on Applied Perception (TAP). 2009;5(4).

  • 26 models, with between 9K and 40K vertices each, were generated from 4 reference objects. The only distortion is noise addition, applied with three strengths on either smooth or rough regions (a minimal sketch of such region selection is given after the download link below).
  • Subjective evaluations were conducted at a normal viewing distance, using a Multiple Stimulus Impairment Scale protocol with 11 observers.
  • A Microsoft Excel document giving all the subjective quality scores is included in the archive below. It also contains the Mean Opinion Scores after normalization and outlier removal. Another Microsoft Excel document provides the scores of several recent state-of-the-art perceptual metrics.
Download the LIRIS Masking database (18 MB zip file)
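Selecting smooth versus rough regions requires some notion of per-vertex roughness (see the TAP 2009 paper cited above for a principled measure). The Python sketch below uses a much cruder stand-in, the distance of a vertex from the centroid of its one-ring neighbors, purely for illustration; the threshold is an assumed parameter.

import numpy as np

def vertex_roughness(vertices, neighbors):
    # Crude roughness proxy: how far each vertex sits from the centroid of
    # its one-ring neighbors (near zero on smooth areas, larger on rough
    # ones). This is NOT the roughness measure of the TAP 2009 paper.
    rough = np.empty(len(vertices))
    for i, nbrs in enumerate(neighbors):
        rough[i] = np.linalg.norm(vertices[i] - vertices[list(nbrs)].mean(axis=0))
    return rough

# Hypothetical use with the add_noise function sketched earlier:
# rough_mask = vertex_roughness(V, N) > threshold  # 'threshold' is assumed
# V[rough_mask] = add_noise(V[rough_mask], strength)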


Notice:
The Armadillo model from these databases is a manifold/simplified version of the original model that was created from scanning data by the Stanford Computer Graphics Laboratory.

The Dinosaur and Igea models are courtesy of Cyberware, Inc.
The Bimba, RockerArm and vaseLion models are courtesy of the AIM@SHAPE project.


3D Segmentation benchmark


The goal of this 3D-mesh segmentation benchmark is to provide an automatic tool to evaluate, analyse and compare automatic 3D-mesh segmentation algorithms. It provides a corpus of segmented 3D models, an easy-to-use online evaluation tool, and comparison results for several recent algorithms. It is available here.

Reference:  Halim Benhabiles, Jean-Philippe Vandeborre, Guillaume Lavoué and Mohamed Daoudi, A comparative study of existing metrics for 3D-mesh segmentation evaluation, The Visual Computer, Vol. 26, No. 12, pp. 1451–1466, 2010.
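Among the metrics commonly used for this kind of segmentation evaluation is the Rand Index, which scores two segmentations by the fraction of element pairs on which they agree. Below is a minimal, unoptimized (O(n²)) Python sketch, assuming segmentations are given as per-face label sequences; it is a generic example, not the evaluation code of the benchmark itself.

from itertools import combinations

def rand_index(labels_a, labels_b):
    # Fraction of face pairs on which the two segmentations agree:
    # either the pair lies in the same segment in both segmentations,
    # or in different segments in both.
    agree = 0
    total = 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        agree += (same_a == same_b)
        total += 1
    return agree / total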

3D Mesh Watermarking benchmark

The proposed 3D mesh watermarking benchmark has three components: a data set, a software tool and two evaluation protocols. The data set contains several "standard" mesh models on which we suggest testing watermarking algorithms. The software tool integrates both geometric and perceptual measurements of the distortion induced by watermark embedding, as well as implementations of a variety of attacks on watermarked meshes. In addition, two application-oriented evaluation protocols are proposed, which define the main steps to follow when conducting evaluation experiments. This benchmark is available here.

Reference:  Kai Wang, Guillaume Lavoué, Florence Denis, Atilla Baskurt and Xiyan He, A Benchmark for 3D Mesh Watermarking, IEEE Shape Modeling International (SMI), Avignon, France, June 2010.
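To give a flavour of the geometric side of such distortion measurement, the Python sketch below computes the root-mean-square vertex displacement between a mesh and its watermarked version, assuming identical connectivity and vertex ordering. This is a generic example only; the benchmark's software tool implements its own geometric and perceptual measures.

import numpy as np

def vertex_rmse(original, watermarked):
    # Root-mean-square vertex displacement; assumes the two meshes share
    # the same connectivity and vertex order.
    diff = np.asarray(original) - np.asarray(watermarked)
    return np.sqrt((diff ** 2).sum(axis=1).mean())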