Jean-Claude Martin

Also published as: J-C. Martin, J.-C. Martin, J.C. Martin


2014

A Database of Full Body Virtual Interactions Annotated with Expressivity Scores
Virginie Demulier | Elisabetta Bevacqua | Florian Focone | Tom Giraud | Pamela Carreno | Brice Isableu | Sylvie Gibet | Pierre De Loor | Jean-Claude Martin
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

Recent technologies enable the exploitation of full-body expressions in applications such as interactive arts, but they are still limited in terms of subtle dyadic interaction patterns. Our project aims at enabling full-body expressive interactions between a user and an autonomous virtual agent. Currently available databases do not cover full-body expressivity and interaction patterns mediated by avatars. In this paper, we describe a protocol defined to collect a database for studying expressive full-body dyadic interactions. We detail the coding scheme used for manually annotating the collected videos. Reliability measures for global annotations of expressivity and interaction are also provided.
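
The abstract does not name the reliability statistic used; as a hedged illustration only, the following Python sketch computes Cohen's kappa, one common choice for measuring agreement between two coders on global categorical scores. The label scale and data are invented, not taken from the database:

    # Cohen's kappa between two coders' global expressivity labels.
    # Illustrative only: the paper's actual reliability measure and
    # annotation scale are not specified in the abstract.
    from collections import Counter

    def cohens_kappa(coder_a, coder_b):
        n = len(coder_a)
        observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
        freq_a, freq_b = Counter(coder_a), Counter(coder_b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (observed - expected) / (1 - expected)

    # e.g. expressivity rated on a low/mid/high scale for four clips
    print(cohens_kappa(["low", "high", "mid", "high"],
                       ["low", "mid", "mid", "high"]))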

2010

Multimodal Annotation of Conversational Data
Philippe Blache | Roxane Bertrand | Emmanuel Bruno | Brigitte Bigi | Robert Espesser | Gaelle Ferré | Mathilde Guardiola | Daniel Hirst | Ning Tan | Edlira Cela | Jean-Claude Martin | Stéphane Rauzy | Mary-Annick Morel | Elisabeth Murisasco | Irina Nesterenko
Proceedings of the Fourth Linguistic Annotation Workshop

2008

Coding Emotional Events in Audiovisual Corpora
Laurence Devillers | Jean-Claude Martin
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

The modelling of realistic emotional behaviour is needed for various applications in multimodal human-machine interaction, such as the design of emotional conversational agents (Martin et al., 2005) or of emotion detection systems (Devillers and Vidrascu, 2007). Yet, building such models requires an appropriate definition of the various levels for representing not only the emotions themselves but also contextual information, such as the events that elicit these emotions. This paper presents a coding scheme that was defined following annotations of a corpus of TV interviews (EmoTV). Deciding which events triggered, or may trigger, which emotion is a challenge for building efficient emotion-eliciting protocols. In this paper, we present the protocol that we defined for collecting another corpus of spontaneous human-human interactions recorded under laboratory conditions (EmoTaboo). We discuss the events that we designed to elicit emotions. Part of this scheme for coding emotional events is being included in the specifications currently being defined by a working group of the W3C (the W3C Emotion Incubator Working Group). This group is investigating the feasibility of working towards a standard representation of emotions and related states in technological contexts.
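
As a purely hypothetical sketch of what one record in such an event coding scheme might look like (all field names and values below are invented for illustration, not taken from the EmoTV/EmoTaboo scheme or the W3C specification):

    # Hypothetical emotional-event annotation record; the real coding
    # scheme is richer and its field names differ.
    from dataclasses import dataclass

    @dataclass
    class EmotionalEvent:
        segment_id: str        # annotated video segment
        eliciting_event: str   # free-text description of the trigger
        emotion_labels: list   # one or more verbal emotion labels
        intensity: float       # e.g. normalised to [0, 1]

    ev = EmotionalEvent("emotv_042",
                        "interviewee recalls an accident",
                        ["sadness", "anger"], 0.7)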

2006

Real life emotions in French and English TV video clips: an integrated annotation protocol combining continuous and discrete approaches
L. Devillers | R. Cowie | J-C. Martin | E. Douglas-Cowie | S. Abrilian | M. McRorie
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

A major barrier to the development of accurate and realistic models of human emotions is the absence of multi-cultural/multilingual databases of real-life behaviours and of a unifying, reliable annotation protocol. The QUB and LIMSI teams are working towards the definition of an integrated coding scheme combining their complementary approaches. This multilevel integrated scheme combines the dimensions that appear to be useful for the study of real-life emotions: verbal labels, abstract dimensions, and contextual (appraisal-based) annotations. This paper describes this integrated coding scheme, the protocol that was set up for annotating French and English video clips of emotional interviews, and the results (e.g. inter-coder agreement measures and a subjective evaluation of the scheme).
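
A minimal sketch of one entry under such a multilevel scheme, assuming hypothetical field names for the three levels the abstract lists (verbal labels, abstract dimensions, and appraisal-based context); none of these names or values come from the paper:

    # One illustrative annotation entry; names and values are invented.
    annotation = {
        "clip": "fr_interview_17",
        "verbal_labels": ["anxiety", "irritation"],          # categorical level
        "dimensions": {"valence": -0.6, "activation": 0.4},  # continuous level
        "appraisal_context": {                               # contextual level
            "cause": "job loss",
            "goal_conduciveness": "obstructive",
        },
    }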

Manual Annotation and Automatic Image Processing of Multimodal Emotional Behaviours: Validating the Annotation of TV Interviews
J.-C. Martin | G. Caridakis | L. Devillers | K. Karpouzis | S. Abrilian
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

There has been a lot of psychological researches on emotion and nonverbal communication. Yet, these studies were based mostly on acted basic emotions. This paper explores how manual annotation and image processing can cooperate towards the representation of spontaneous emotional behaviour in low resolution videos from TV. We describe a corpus of TV interviews and the manual annotations that have been defined. We explain the image processing algorithms that have been designed for the automatic estimation of movement quantity. Finally, we explore how image processing can be used for the validation of manual annotations.
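
The abstract does not detail the algorithms used; a common baseline for estimating movement quantity is inter-frame differencing, sketched below under the assumption of grayscale frames held as NumPy arrays. This is an illustrative baseline, not the paper's method:

    # Movement quantity as the mean proportion of pixels that change
    # between consecutive frames; a baseline sketch only.
    import numpy as np

    def movement_quantity(frames, threshold=15):
        quantities = []
        for prev, curr in zip(frames, frames[1:]):
            diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
            quantities.append((diff > threshold).mean())
        return float(np.mean(quantities))

    # e.g. three synthetic 240x320 frames
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 256, (240, 320), dtype=np.uint8) for _ in range(3)]
    print(movement_quantity(frames))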

Annotation of Emotions in Real-Life Video Interviews: Variability between Coders
S. Abrilian | L. Devillers | J-C. Martin
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

Research on real-life emotional data has to tackle the problem of its annotation. The annotation of emotional corpora raises the issue of how different coders perceive the same multimodal emotional behaviour. The long-term goal of this work is to produce a guideline for the selection of annotators. The LIMSI team is working towards the definition of a coding scheme integrating emotion, context, and multimodal annotations. We present the currently defined coding scheme for emotion annotation and the use of soft vectors for representing a mixture of emotions. This paper describes a perceptive test of emotion annotations and the results obtained from 40 different coders on a subset of complex real-life emotional segments selected from the EmoTV corpus collected at LIMSI. The results of this first study validate previous annotations of emotion mixtures and highlight differences in annotation between male and female coders.
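
The soft-vector idea can be illustrated by normalising label frequencies across coders into a distribution over emotion labels; the label set and vote counts below are invented for illustration, not the paper's data:

    # Soft vector for an emotion mixture: normalised label frequencies
    # across coders; a minimal sketch of the representation.
    from collections import Counter

    def soft_vector(coder_labels, label_set):
        counts = Counter(coder_labels)
        total = sum(counts.values())
        return {lab: counts[lab] / total for lab in label_set}

    # e.g. 40 coders labelling one segment
    votes = ["anger"] * 22 + ["despair"] * 12 + ["sadness"] * 6
    print(soft_vector(votes, ["anger", "despair", "sadness", "fear"]))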

2002

Annotating and Measuring Multimodal Behaviour – Tycoon Metrics in the Anvil Tool
Jean-Claude Martin | Michael Kipp
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)

Multimodal and Adaptative Pedagogical Resources
Jean-Claude Martin | Jean-Hugues Réty | Nelly Bensimon
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)

1997

The CARTOON project: Towards Integration of Multimodal and Linguistic Analysis for Cartographic Applications
J.C. Martin | X. Briffault | M.R. Goncalves | J. Vapillon
Referring Phenomena in a Multimedia Context and their Computational Treatment