Evaluation is essential for measuring the effectiveness and progress of information indexing and retrieval methods. We participate in many national and international evaluation campaigns such as TREC, TRECVID, CLEF, INEX, VOC and MediaEval. We also contribute to the organization of such campaigns by providing ancillary data and tools, by serving on advisory committees (TRECVID), or by directly organizing tasks (the TRECVID semantic indexing task and CLEF eHealth). These participations position the team within the international research community, as previously mentioned.
The production of annotated resources is also very important, both for system training and for system evaluation. In the context of the Corpus project of the Quaero programme, we produced over 30 million concept annotations on still images and video shots. Most of them were publicly released for the TRECVID 2010-2014 semantic indexing tasks. These annotations were produced using active learning and active cleaning approaches, ensuring that each annotation is both reliable and as useful as possible for system training.
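The core idea behind active learning for annotation can be illustrated with a minimal uncertainty-sampling sketch. This is not the pipeline actually used for the Quaero/TRECVID annotations; the 1-D toy feature, the `oracle` function (standing in for a human annotator), and all parameter values are illustrative assumptions, and the complementary active cleaning step (re-checking suspect existing labels) is omitted.

```python
import math
import random

random.seed(0)

# Hypothetical unlabeled pool: 1-D features; the "true" concept is feature > 0.6.
pool = [random.random() for _ in range(200)]
oracle = lambda x: int(x > 0.6)  # stands in for a human annotator

def predict(theta, x):
    # Simple 1-D logistic model p(concept present | x).
    z = theta[0] + theta[1] * x
    z = max(-30.0, min(30.0, z))  # clamp for numerical safety
    return 1.0 / (1.0 + math.exp(-z))

def fit(labeled, steps=500, lr=0.5):
    # Retrain from scratch with plain stochastic gradient updates.
    theta = [0.0, 0.0]
    for _ in range(steps):
        for x, y in labeled:
            p = predict(theta, x)
            theta[0] += lr * (y - p)
            theta[1] += lr * (y - p) * x
    return theta

# Seed the labeled set with two arbitrary items, then spend the
# annotation budget on the items the current model is least sure about.
labeled = [(x, oracle(x)) for x in (pool.pop(), pool.pop())]
theta = fit(labeled)
for _ in range(30):  # annotation budget
    x = min(pool, key=lambda x: abs(predict(theta, x) - 0.5))  # most uncertain
    pool.remove(x)
    labeled.append((x, oracle(x)))  # query the annotator
    theta = fit(labeled)

# Evaluate on the remaining (never-annotated) pool items.
accuracy = sum((predict(theta, x) > 0.5) == oracle(x) for x in pool) / len(pool)
```

Because each query is spent on the item closest to the current decision boundary, the labeled set concentrates where labels are most informative, which is why far fewer annotations are needed than with random labeling.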