Pose estimation and gesture recognition

Scientific goals

In various projects we work on pose estimation (hand pose or full-body pose) and on the recognition of hand and/or full-body gestures from multiple modalities, in particular depth images, focusing on intentional gestures that bear a communicative function.

We are particularly interested in machine learning and deep learning methods that capture both appearance and structural information. The goal is to automatically learn invariant and discriminative hierarchical representations from labelled and unlabelled data, leveraging structural information extracted from the latter.

Hand and full body gesture recognition

We won the first prize of the 2014 ChaLearn Challenge "Looking at People: Gesture Recognition", which was organized in conjunction with ECCV 2014 (Results). The objective of the challenge was to detect, localize and classify Italian conversational gestures from a large database of 13,858 gestures. The multimodal data included color video, range maps and a skeleton stream. Our method, together with a new multimodal training algorithm called ModDrop, is described in the PAMI paper below.
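The core idea behind ModDrop-style training is to drop whole modality channels at random during training, so that the fused network remains robust when a modality is missing or corrupted at test time. A minimal sketch of this idea (the function name and API are illustrative, not the paper's implementation):

```python
import numpy as np

def moddrop(batch_modalities, p_drop=0.5, rng=None):
    """Zero out whole modality channels per sample, ModDrop-style (sketch).

    batch_modalities: list of arrays, one per modality, each of shape
                      (batch_size, feature_dim).
    p_drop:           probability of dropping a modality for a given sample.
    """
    rng = rng or np.random.default_rng(0)
    out = []
    for x in batch_modalities:
        # Independent per-sample decision for each modality channel.
        keep = rng.random(x.shape[0]) >= p_drop
        out.append(x * keep[:, None])  # dropped samples become all-zero
    return out
```

During training, the fusion layers then learn cross-modal correlations that degrade gracefully when one input stream is absent.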

Papers:

Hand pose estimation


Hand pose is estimated in a weakly supervised way with interwoven deep networks.

Papers:

Hand part segmentation is estimated through convolutional deep learning in a semi-supervised setting, leveraging structural information extracted from unlabelled real data and labelled synthetic data.
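A common way to combine labelled synthetic data with unlabelled real data in such a semi-supervised setting is to sum a supervised loss on the synthetic samples with an unsupervised regularizer on the real samples. The sketch below is a generic illustration of this pattern (here a cross-entropy term plus a prediction-entropy term), not the specific loss used in the paper:

```python
import numpy as np

def combined_loss(pred_syn, labels_syn, pred_real, lam=0.1):
    """Semi-supervised loss sketch: supervised cross-entropy on labelled
    synthetic data plus an unsupervised entropy penalty on unlabelled
    real data (illustrative choice of regularizer).

    pred_syn:  (n_syn, n_classes) softmax outputs for synthetic samples.
    labels_syn: (n_syn,) integer class labels for synthetic samples.
    pred_real: (n_real, n_classes) softmax outputs for real samples.
    lam:       weight of the unsupervised term.
    """
    eps = 1e-9
    # Supervised term: negative log-likelihood of the correct class.
    ce = -np.mean(np.log(pred_syn[np.arange(len(labels_syn)), labels_syn] + eps))
    # Unsupervised term: encourage confident predictions on real data.
    ent = -np.mean(np.sum(pred_real * np.log(pred_real + eps), axis=1))
    return ce + lam * ent
```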

Papers:

Learning human identity from motion patterns


We propose an optimized shift-invariant dense clockwork recurrent neural network (DCWRNN) to learn human kinematics and introduce the first method for active biometric authentication with mobile inertial sensors. Our results demonstrate that human kinematics convey important information about user identity and can serve as a valuable component of multimodal authentication.

Papers:

Full body pose estimation

Papers:

Misc links

A list of activity recognition and gesture recognition datasets that we try to keep up to date:

Contact

Christian Wolf
LIRIS UMR CNRS 5205
INSA-Lyon / Université de Lyon

Partners and Funding