Liming Chen

Highlights


  • Liris ECL achieved the best performance at the SHREC 2011 contest on 3D face recognition and retrieval
  • Liris at Ecole Centrale de Lyon took part in the 3D face retrieval and recognition track of the SHREC 2011 contest. Two runs submitted by Liris ranked first and second in terms of rank-one recognition rate among the 14 runs submitted by four research groups, and second and third in terms of recall and precision. Huibin Li and Liming Chen are the Liris ECL members who took part in this contest. The paper describing and comparing all the methods submitted to SHREC 2011 can be found here.
  • Liris ECL achieved the second-best performance at the ImageCLEF 2011 photo annotation challenge
  • The photo annotation challenge at ImageCLEF 2011 aims at the automatic annotation of a large number of consumer photos with multiple semantic concepts, including visual objects (car, animal, people, etc.), scenes (indoor, outdoor, city, etc.), events (travel, working, etc.), and even sentiments (happy, scary, etc.). This year, 18 groups from 11 countries participated with 79 runs. For their first participation, Liris achieved a 43.7% MiAP using a multimodal model and ranked second, behind TUBFI, a joint submission from TU Berlin and Fraunhofer FIRST, which achieved a 44.3% MiAP, also with a multimodal model (a toy sketch of how such an interpolated average precision score can be computed is given just after this list). The following people took part in this challenge for Liris: Ningning Liu (ningning.liu@ec-lyon.fr), Yu Zhang (yu.zhang@ec-lyon.fr), Emmanuel Dellandréa (emmanuel.dellandrea@ec-lyon.fr), Stéphane Brès (stephane.bres@insa-lyon.fr) and Liming Chen (liming.chen@ec-lyon.fr). The paper describing and comparing all the methods submitted to the ImageCLEF photo annotation task can be found here. The paper describing our methods submitted to this task can be found here.
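For readers not familiar with the figures quoted above, the sketch below illustrates, under simplifying assumptions, how an interpolated average precision can be computed for one concept from a ranked list of photos; MiAP is then simply the mean of these per-concept scores. This is a toy illustration in Python, not the official ImageCLEF evaluation code, and the function names and data are made up for the example.

    # Toy sketch (not the official ImageCLEF evaluation tool): interpolated
    # average precision for one concept, then MiAP as the mean over concepts.

    def interpolated_ap(ranked_relevance):
        """ranked_relevance: 0/1 flags for photos sorted by decreasing score."""
        precisions = []
        hits = 0
        for rank, relevant in enumerate(ranked_relevance, start=1):
            if relevant:
                hits += 1
                precisions.append(hits / rank)   # precision at each relevant photo
        if not precisions:
            return 0.0
        # Interpolation: precision at a recall point is the best precision
        # reachable at that recall level or beyond.
        for k in range(len(precisions) - 2, -1, -1):
            precisions[k] = max(precisions[k], precisions[k + 1])
        return sum(precisions) / len(precisions)

    def miap(per_concept_rankings):
        """Mean interpolated average precision over all concepts."""
        aps = [interpolated_ap(r) for r in per_concept_rankings]
        return sum(aps) / len(aps)

    if __name__ == "__main__":
        # Two toy concepts: 1 = the photo truly carries the concept, 0 = it does not.
        print(miap([[1, 0, 1, 0, 1], [0, 1, 1, 0, 0]]))

On this toy data the two per-concept scores are roughly 0.76 and 0.67, giving a MiAP of about 0.71.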

Short Biography


Prof. Liming Chen received a joint BSc degree in Mathematics and Computer Science from the University of Nantes in 1984. He obtained a Master's degree in 1986 and a PhD in computer science from the University of Paris 6 in 1989. He first served as an associate professor at the Université de Technologie de Compiègne, then joined Ecole Centrale de Lyon as a Professor in 1998, where he leads an advanced research team on multimedia computing and pattern recognition. From 2001 to 2003, he also served as Chief Scientific Officer at Avivias, a Paris-based company specialized in media asset management. In 2005, he served as a scientific expert on multimedia at France Telecom R&D China. He has been Head of the Department of Mathematics and Computer Science since 2007. Prof. Liming Chen holds 3 patents, has authored more than 100 publications, and has acted as chairman, PC member and reviewer for a number of high-profile journals and conferences since 1995. He has been a (co-)principal investigator on a number of research grants from the EU FP programme, French research funding bodies and local government departments. He has supervised more than 15 PhD theses. His current research spans 2D/3D face analysis and recognition, image and video analysis and categorization, and affect analysis in image, audio and video.

Open Source software


Workshop on 3D and 2D Face Analysis and Recognition at ECL, Jan. 28, 2011


The face plays a prominent role in human communication and is potentially the best biometric for people identification applications. Over the past three decades, face analysis and recognition have attracted tremendous research effort from various disciplines and have witnessed impressive progress in basic and applied research, product development and applications. This one-day workshop focuses on 2D and 3D face analysis and recognition. It aims to bring together scientists and practitioners from a wide range of theoretical and application areas whose work impacts 2D and 3D face analysis and recognition. Its goal is to provide a state-of-the-art overview of the paradigms and open challenges in this field.
For more information...

Teaching


  • Database systems
  • Design of information systems
  • Computer vision
  • Pattern recognition

Colleagues and Collaborators


PhD students


  • Chu Duc Nguyen Co-advised with Dr. Mohsen Ardabilian
  • Karima Ouji Co-advised with Dr. Mohsen Ardabilian and Prof. Faouzi Ghorbel at ENSI, Tunisia
  • Przemyslaw Szeptycki Co-advised with Dr. Mohsen Ardabilian [Web Page]
  • Wael Bensoltana Co-advised with Dr. Mohsen Ardabilian and Prof. Chokri Ben Amar at ENIS, Tunisia
  • Chao Zhu Co-advised with Dr. Charles-Edmond Bichot
  • Huibin Li Co-advised with Prof. Jean-Marie Morvan at ICJ, UCBL
  • Pierre Lemaire Co-advised with Prof. Mohamed Daoudi at LIFL, Telecom Lille 1
  • Di Huang Co-advised with Dr. Mohsen Ardabilian and Prof. Yunhong Wang at IRIP, Beihang University, China
  • Yu Zhang Co-advised with Dr. Stéphane Brès at Liris, Insa de Lyon

Selected projects


  • ANR 3D Face Analyzer

  • The 3D Face Analyzer project targets reliable recognition of facial attributes on 2.5D or 3D face models, making use of face shape, texture and landmarks at the same time. While developing 3D analysis-based techniques directly aimed at the recognition of facial attributes, we also want to advance knowledge on some underlying fundamental issues, e.g. the stability of discrete geometric measures and descriptors (curvature, distance, etc.) across variations in model resolution and precision, and 3D non-rigid surface registration and matching in the presence of noisy data. Another important aim of the project is the collection of representative datasets of 3D face models covering facial expressions, age and gender for training and testing purposes.
  • ANR Videosense

  • The VideoSense project aims at automatic video tagging with high-level concepts, including static concepts (e.g. objects, scenes, people), events, and emotions, while targeting two applications, namely video recommendation and ad monetization, on Ghanni's media asset management platform. The innovations targeted by the project include video content description by low-level features, emotional video content recognition, cross-concept detection, multimodal fusion, and the use of a pivot language for dealing with the multilingual textual resources associated with video data.
  • ANR Omnia

  • The Omnia project aims at filtering documents containing text and images, in a context of data profusion, as they are found on intranets and on the Internet, and at presenting them to users in a content processing tool such as DocuShare (Xerox). The originality of the project is to work along three dimensions (image, text, emotion) and in a multilingual context. Images and texts give rise to two categorizations, relating to informational aspects and to specific emotional aspects (coming directly from the images, or relating to their perception as expressed in the texts). These two types of content will be processed independently (annotation followed by indexing and categorization), using learning techniques, and will then be merged at the level of the filtering and query tool. Their "primitives" will be linked to an interlingual representation of word senses based on English (UNL), which will open the way to multilingualism at the level of "publishing" the document categories and of processing queries in natural languages equipped with UNL dictionaries.
  • ANR FAR3D

  • Face recognition from still images is an attractive biometric for a broad range of applications. Nevertheless, despite numerous and significant works in this domain, when dealing with still images only, this modality yields low authentication performance under difficult conditions (e.g. in the presence of facial expressions) compared with fingerprints, for example. In this project, we investigate the possible contribution of an additional dimension in face recognition, namely 3D, to improve authentication performance while keeping the existing advantages of face recognition from still images: no contact, little cooperation required from the user, and a well-accepted modality. Several points will be studied in order to cover the domain as well as possible: surface matching in 3D, asymmetric protocols (i.e. enrolment in 3D but authentication in 2D), and the combination of 3D (shape) with 2D (texture or appearance).
This industrial research is oriented towards new solutions in face biometrics, allowing a leap in performance compared with present solutions thanks to the use of 3D. These solutions must be realistic from an application point of view: usable in the field with "light" sensors such as video surveillance cameras, even if the enrolment phase is performed with more complex equipment (the "asymmetric" approach). They must offer acceptable performance (in the range of, or better than, fingerprints). Finally, they must be transferable to real commercial applications within a short time range (4-6 years).
International publications, as well as our experience in the domain, lead to the conclusion that such an ambitious objective cannot be reached by a single biometric "solution", but by the clever association of several processes: multimodality, of course, since 3D face sensors generally also deliver an appearance (texture) image, but also multiple 3D algorithms ("multi-matcher"). Over the years, our teams have developed different and complementary methods in 3D face recognition, methods that we want to combine so that the resulting score is better than the one achieved by the best individual method (a toy sketch of such score-level fusion is given just after this project list). Finally, it is also important in projects related to biometrics that all participants consider the possible impacts of such technologies on privacy. The consortium is composed of USTL, Eurécom, Ecole Centrale de Lyon and Thales. All partners already have expertise in 3D and/or face recognition, with different but complementary backgrounds and technical approaches, and they know each other fairly well thanks to past projects (e.g. Semantic 3D for USTL/LIFL and Eurécom, or Technovision IV2 for Thales, ECL and Eurécom) and miscellaneous collaborations (e.g. PhD boards, lectures, etc.).
The project is organized in five work packages. WP0 is classically dedicated to the coordination and management of the project. WP1 is dedicated to the study of asymmetric protocols, i.e. enrolment in 3D but verification from video or images. WP2 focuses on facial deformations and variability; it includes the following sub-items: geometrical approaches, region-based facial surface matching and comparison, and learning. WP3 is entitled "face recognition by the fusion of shape-based and texture-based matching"; it includes the following sub work packages: fusion strategies combining shape and texture, and multi-matchers. Finally, the last work package is dedicated to evaluation, which includes first the definition of the evaluation framework and criteria, and second the evaluation of algorithm performances.
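As a complement to the FAR3D description above, the following sketch shows one standard way of fusing several face matchers at score level: each matcher's raw scores are first brought to a common range with min-max normalization and then combined by a weighted sum. The matchers, weights and scores are purely illustrative assumptions and do not correspond to the actual FAR3D algorithms.

    # Toy sketch of score-level fusion of several face matchers:
    # min-max normalization followed by a weighted sum. The matchers,
    # weights and scores below are illustrative only.

    def min_max_normalize(scores):
        """Map raw matcher scores to the [0, 1] range."""
        lo, hi = min(scores), max(scores)
        if hi == lo:
            return [0.5 for _ in scores]
        return [(s - lo) / (hi - lo) for s in scores]

    def fuse_scores(score_lists, weights):
        """Weighted sum of normalized scores; one list per matcher,
        all aligned on the same gallery candidates."""
        normalized = [min_max_normalize(s) for s in score_lists]
        total_weight = sum(weights)
        return [
            sum(w * matcher[i] for w, matcher in zip(weights, normalized)) / total_weight
            for i in range(len(score_lists[0]))
        ]

    if __name__ == "__main__":
        shape_scores = [12.1, 35.6, 20.3]      # e.g. a 3D shape matcher
        texture_scores = [0.40, 0.90, 0.55]    # e.g. a 2D texture matcher
        fused = fuse_scores([shape_scores, texture_scores], weights=[0.6, 0.4])
        best = max(range(len(fused)), key=lambda i: fused[i])
        print(fused, "-> best candidate:", best)

Other normalizations (e.g. z-score) and fusion rules (e.g. max or product) are common alternatives; the weighted sum is shown here only because it is the simplest to read.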

Publications


Available on the Liris website