David Benyon


2010

Evaluating Human-Machine Conversation for Appropriateness
Nick Webb | David Benyon | Preben Hansen | Oli Mival
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

Evaluation of complex, collaborative dialogue systems is a difficult task. Traditionally, developers have relied upon subjective feedback from the user and parametrisation over observable metrics. However, both models place some reliance on the notion of a task; that is, the system is helping the user achieve some clearly defined goal, such as booking a flight or completing a banking transaction. It is not clear that such metrics are as useful when dealing with a system that has a more complex task, or even no definable task at all, beyond maintaining and performing a collaborative dialogue. Working within the EU-funded COMPANIONS program, we investigate the use of appropriateness as a measure of conversation quality, the hypothesis being that good companions need to be good conversational partners. We report initial work in the direction of annotating dialogue for indicators of good conversation, including the annotation and comparison of the output of two generations of the same dialogue system.

Wizard of Oz Experiments for a Companion Dialogue System: Eliciting Companionable Conversation
Nick Webb | David Benyon | Jay Bradley | Preben Hansen | Oli Mival
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

Within the EU-funded COMPANIONS project, we are working to evaluate new collaborative conversational models of dialogue. Such an evaluation requires us to benchmark approaches to companionable dialogue. In order to determine the impact of system strategies on our evaluation paradigm, we need to generate a range of companionable conversations, using dialogue strategies such as `empathy' and `positivity'. By companionable dialogue, we mean interactions that take user input about some scenario and respond in a manner appropriate to the emotional content of the user utterance. In this paper, we describe our working Wizard of Oz (WoZ) system for systematically creating dialogues that fulfil these potential strategies. The system enables us to deploy a range of potential techniques for selecting which parts of the user input to address in which order, informing the wizard's response to the user based on a manual, on-the-fly assessment of the polarity of the user input.

2008

Dialogue, Speech and Images: the Companions Project Data Set
Yorick Wilks | David Benyon | Christopher Brewster | Pavel Ircing | Oli Mival
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

This paper describes part of the corpus collection efforts underway in the EC-funded Companions project. The Companions project is collecting substantial quantities of dialogue, a large part of which focuses on reminiscing about photographs. The texts are in English and Czech. We describe the context and objectives for which this dialogue corpus is being collected and the methodology being used, and make observations on the resulting data. The corpora will be made available to the wider research community through the Companions Project web site.