Patrick Lehnen


2022

Towards Need-Based Spoken Language Understanding Model Updates: What Have We Learned?
Quynh Do | Judith Gaspers | Daniil Sorokin | Patrick Lehnen
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track

In productionized machine learning systems, online model performance is known to deteriorate over time when there is a distributional drift between offline training and online application data. As a remedy, models are typically retrained at fixed time intervals, incurring high computational and manual costs. This work aims at decreasing such costs in productionized, large-scale Spoken Language Understanding systems. In particular, we develop a need-based re-training strategy guided by an efficient drift detector and discuss the challenges that arise, including system complexity, overlapping model releases, limited observability, and the absence of annotated resources at runtime. We present empirical results on historical data and confirm the utility of our design decisions via an online A/B experiment.
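
The paper itself includes no code, but the need-based idea is easy to sketch: monitor how far the live prediction distribution has drifted from the distribution seen at training time, and trigger re-training only when a threshold is exceeded. The following minimal Python sketch is an assumption-laden illustration, not the paper's detector; the KL-divergence criterion, the threshold value, and all names are hypothetical.

```python
import math
from collections import Counter

def normalize(counts):
    """Turn a Counter of labels into a probability distribution."""
    total = sum(counts.values())
    return {label: c / total for label, c in counts.items()}

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) over the union of both label sets; eps avoids log(0)."""
    labels = set(p) | set(q)
    return sum(p.get(l, eps) * math.log(p.get(l, eps) / q.get(l, eps))
               for l in labels)

class DriftDetector:
    """Flags the need for re-training when the live label distribution
    drifts too far from the distribution seen at training time."""

    def __init__(self, reference_labels, threshold=0.05):
        self.reference = normalize(Counter(reference_labels))
        self.threshold = threshold

    def should_retrain(self, live_labels):
        live = normalize(Counter(live_labels))
        return kl_divergence(live, self.reference) > self.threshold

# A re-training job would run only when the detector fires,
# instead of at fixed time intervals.
detector = DriftDetector(["PlayMusic"] * 80 + ["GetWeather"] * 20)
print(detector.should_retrain(["PlayMusic"] * 40 + ["GetWeather"] * 60))  # True
```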

2021

Multilingual Paraphrase Generation For Bootstrapping New Features in Task-Oriented Dialog Systems
Subhadarshi Panda | Caglar Tirkaz | Tobias Falke | Patrick Lehnen
Proceedings of the 3rd Workshop on Natural Language Processing for Conversational AI

The lack of labeled training data for new features is a common problem in rapidly changing real-world dialog systems. As a solution, we propose a multilingual paraphrase generation model that can be used to generate novel utterances for a target feature and target language. The generated utterances can be used to augment existing training data to improve intent classification and slot labeling models. We evaluate the quality of the generated utterances with intrinsic evaluation metrics and through downstream evaluation experiments, with English as the source language and nine different target languages. Our method shows promise across languages, even in a zero-shot setting where no seed data is available.
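
As a rough illustration of how such a paraphraser is used for bootstrapping, the sketch below generates paraphrases with a generic Hugging Face seq2seq model and attaches the seed utterance's intent label to each paraphrase. The checkpoint name, the language-prefix convention, and the sampling settings are assumptions made for illustration; the paper's actual model is not public.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical multilingual paraphraser checkpoint (not the paper's model).
tokenizer = AutoTokenizer.from_pretrained("my-org/multilingual-paraphraser")
model = AutoModelForSeq2SeqLM.from_pretrained("my-org/multilingual-paraphraser")

def generate_paraphrases(utterance, target_lang, n=5):
    # Conditioning on the target language via a prefix token is one common scheme.
    inputs = tokenizer(f"<{target_lang}> {utterance}", return_tensors="pt")
    outputs = model.generate(**inputs, do_sample=True, top_p=0.9,
                             num_return_sequences=n, max_new_tokens=40)
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

# Augment seed data for a new feature before training IC/SL models.
seed = [("play my workout playlist", "PlayMusicIntent")]
augmented = [(p, intent) for utt, intent in seed
             for p in generate_paraphrases(utt, target_lang="de")]
```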

Feedback Attribution for Counterfactual Bandit Learning in Multi-Domain Spoken Language Understanding
Tobias Falke | Patrick Lehnen
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

With counterfactual bandit learning, models can be trained on positive and negative feedback received for historical predictions, with no labeled data needed. Such feedback is often available in real-world dialog systems; however, the modularized architecture commonly used in large-scale systems prevents the direct application of such algorithms. In this paper, we study the feedback attribution problem that arises when using counterfactual bandit learning for multi-domain spoken language understanding. We introduce an experimental setup to simulate the problem on small-scale public datasets, propose attribution methods inspired by multi-agent reinforcement learning, and evaluate them against multiple baselines. We find that while directly using the overall feedback leads to disastrous performance, our proposed attribution methods allow training competitive models from user feedback.
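
The counterfactual learning objective itself is standard; a common choice is clipped inverse propensity scoring (IPS). The PyTorch sketch below shows such a loss for a single SLU module. It is a generic formulation under stated assumptions, not the paper's attribution method, and it assumes the reward passed in has already been attributed to this module rather than being the raw end-to-end feedback.

```python
import torch
import torch.nn.functional as F

def ips_loss(logits, actions, rewards, logging_propensities, clip=10.0):
    """Clipped inverse-propensity-scored objective for bandit feedback.

    logits: (batch, num_actions) scores of the policy being trained
    actions: (batch,) actions chosen by the logged (old) policy
    rewards: (batch,) feedback already attributed to this module (assumption)
    logging_propensities: (batch,) probability the old policy gave each action
    """
    log_probs = F.log_softmax(logits, dim=-1)
    pi = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1).exp()
    # Clipping the importance weights bounds the variance of the estimator.
    weights = torch.clamp(pi / logging_propensities, max=clip)
    return -(weights * rewards).mean()
```

The attribution step is exactly what the paper studies: in a modularized system, naively handing the single overall reward to every module is what leads to the disastrous performance noted above.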

2020

Leveraging User Paraphrasing Behavior In Dialog Systems To Automatically Collect Annotations For Long-Tail Utterances
Tobias Falke | Markus Boese | Daniil Sorokin | Caglar Tirkaz | Patrick Lehnen
Proceedings of the 28th International Conference on Computational Linguistics: Industry Track

In large-scale commercial dialog systems, users express the same request in a wide variety of alternative ways, with a long tail of less frequent alternatives. Handling the full range of this distribution is challenging, in particular when relying on manual annotations. However, the same users also provide useful implicit feedback, as they often paraphrase an utterance if the dialog system fails to understand it. We propose MARUPA, a method to leverage this type of feedback by creating annotated training examples from it. MARUPA creates new data fully automatically, without manual intervention or effort from annotators, and specifically for currently failing utterances. By re-training the dialog system on this new data, accuracy and coverage for long-tail utterances can be improved. In experiments, we study the effectiveness of this approach in a commercial dialog system across various domains and three languages.
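
A minimal sketch of the core loop follows, assuming a hypothetical paraphrase classifier and a per-turn success signal; the paper's actual components are learned models, and slot-level label projection is more involved than the direct copy shown here.

```python
def collect_annotations(turns, is_paraphrase):
    """Create training examples from user paraphrasing behavior.

    turns: list of (utterance, interpretation, succeeded) per user turn,
    in dialog order. is_paraphrase is a hypothetical pair classifier.
    """
    new_examples = []
    for (prev_utt, _, prev_ok), (curr_utt, curr_interp, curr_ok) in zip(turns, turns[1:]):
        # A failed turn followed by a successful paraphrase: project the
        # successful interpretation back onto the utterance that failed.
        if not prev_ok and curr_ok and is_paraphrase(prev_utt, curr_utt):
            new_examples.append((prev_utt, curr_interp))
    return new_examples
```

Re-training on the collected examples targets exactly the long-tail utterances the system currently fails on, with no annotator involvement.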

2015

A Comparison of Update Strategies for Large-Scale Maximum Expected BLEU Training
Joern Wuebker | Sebastian Muehr | Patrick Lehnen | Stephan Peitz | Hermann Ney
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2013

(Hidden) Conditional Random Fields Using Intermediate Classes for Statistical Machine Translation
Patrick Lehnen | Joern Wuebker | Jan-Thorsten Peter | Stephan Peitz | Hermann Ney
Proceedings of Machine Translation Summit XIV: Papers

2010

A Comparison of Various Types of Extended Lexicon Models for Statistical Machine Translation
Matthias Huck | Martin Ratajczak | Patrick Lehnen | Hermann Ney
Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Research Papers

In this work, we give a detailed comparison of the impact of integrating discriminative and trigger-based lexicon models into state-of-the-art hierarchical and conventional phrase-based statistical machine translation systems. As both types of extended lexicon models can grow very large, we apply certain restrictions to discard some of the less useful information. We show how these restrictions facilitate the training of the extended lexicon models. We finally evaluate systems that incorporate both types of models with different restrictions on a large-scale translation task for the Arabic-English language pair. Our results suggest that extended lexicon models can be substantially reduced in size while still giving clear improvements in translation performance.
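
The restrictions mentioned above are size reductions of the lexicon tables. A minimal illustration is count- and rank-based pruning, sketched below; the criteria, thresholds, and entry structure are assumptions for illustration, not the paper's exact restrictions.

```python
from collections import Counter

def prune_lexicon(counts, min_count=3, top_k=50):
    """Keep only frequent (trigger, source, target) entries and, per
    (trigger, source) context, only the top_k target candidates.
    A generic size restriction; the paper's exact criteria differ."""
    by_context = {}
    for (trigger, src, tgt), c in counts.items():
        if c >= min_count:
            by_context.setdefault((trigger, src), Counter())[tgt] = c
    pruned = {}
    for context, tgt_counts in by_context.items():
        for tgt, c in tgt_counts.most_common(top_k):
            pruned[context + (tgt,)] = c
    return pruned
```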

2008

A Comparison of Various Methods for Concept Tagging for Spoken Language Understanding
Stefan Hahn | Patrick Lehnen | Christian Raymond | Hermann Ney
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

The extraction of flat concepts out of a given word sequence is usually one of the first steps in building a spoken language understanding (SLU) or dialogue system. This paper explores five different modelling approaches for this task and presents results on a state-of-the-art French corpus, MEDIA. Additionally, two log-linear modelling approaches could be further improved by adding morphological knowledge. Going beyond what has been reported in the literature, we applied all models to the same training and testing data and used the NIST scoring toolkit to evaluate the experimental results, ensuring identical conditions for each of the experiments and the comparability of the results. Using a model based on conditional random fields, we achieve a concept error rate of 11.8% on the MEDIA evaluation corpus.
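
The best-performing model is a linear-chain CRF over flat concept tags. Below is a tiny, self-contained illustration using the sklearn-crfsuite library; this is not the authors' original system (the library postdates the paper), and the feature set and MEDIA-style BIO labels are invented for the example. Suffix features stand in crudely for the morphological knowledge the abstract mentions.

```python
import sklearn_crfsuite

def word_features(sent, i):
    """Simple per-token features; suffixes act as crude morphological knowledge."""
    word = sent[i]
    return {
        "word.lower": word.lower(),
        "suffix3": word[-3:],
        "prev": sent[i - 1].lower() if i > 0 else "<BOS>",
        "next": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
    }

# Toy MEDIA-style example: words tagged with flat concepts in BIO format
# (labels invented for illustration).
sents = [["je", "veux", "une", "chambre", "double"]]
tags = [["null", "null", "B-nombre-chambre", "I-nombre-chambre", "B-chambre-type"]]

X_train = [[word_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, tags)
print(crf.predict(X_train))
```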