David Vandyke


2022

Prompting for a conversation: How to control a dialog model?
Josef Valvoda | Yimai Fang | David Vandyke
Proceedings of the Second Workshop on When Creative AI Meets Conversational AI

Dialog modelling faces a difficult trade-off. Models are trained on a large amount of text, yet their responses need to be limited to the desired scope and style of a dialog agent. Because the datasets used to achieve the former contain language that is not compatible with the latter, pre-trained dialog models are fine-tuned on smaller curated datasets. However, the fine-tuning process robs them of the ability to produce diverse responses, eventually reducing them to dull conversation partners. In this paper we investigate whether prompting can help mitigate this trade-off. Specifically, we experiment with conditioning the prompt on the query, rather than training a single prompt for all queries. Following the intuition that freezing the pre-trained language model will conserve its expressivity, we find that, compared to fine-tuning, prompting can achieve a higher BLEU score and substantially improve the diversity and novelty of the responses.
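
A minimal sketch of how such query-conditioned soft prompting could be wired up (an illustration under assumed shapes and module choices, not the authors' code): a small trainable network maps each query to a few prompt vectors, which are prepended to the frozen language model's input embeddings, so that only the prompt generator is trained.

import torch
import torch.nn as nn

class QueryConditionedPrompt(nn.Module):
    """Map a query's embeddings to a sequence of soft-prompt vectors."""

    def __init__(self, d_model: int, prompt_len: int):
        super().__init__()
        self.prompt_len = prompt_len
        # Hypothetical design choice: pool the query with a GRU, then project
        # the final hidden state to prompt_len soft-prompt vectors.
        self.encoder = nn.GRU(d_model, d_model, batch_first=True)
        self.to_prompt = nn.Linear(d_model, prompt_len * d_model)

    def forward(self, query_embeds: torch.Tensor) -> torch.Tensor:
        # query_embeds: (batch, query_len, d_model), taken from the frozen
        # LM's own embedding layer.
        _, h = self.encoder(query_embeds)        # h: (1, batch, d_model)
        prompt = self.to_prompt(h.squeeze(0))    # (batch, prompt_len * d_model)
        return prompt.view(-1, self.prompt_len, query_embeds.size(-1))

# Usage idea: freeze every LM parameter, train only the prompt network, and
# feed torch.cat([prompt, query_embeds], dim=1) to the LM as input embeddings.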

Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
Oliver Lemon | Dilek Hakkani-Tur | Junyi Jessy Li | Arash Ashrafzadeh | Daniel Hernández Garcia | Malihe Alikhani | David Vandyke | Ondřej Dušek

2021

Non-Autoregressive Text Generation with Pre-trained Language Models
Yixuan Su | Deng Cai | Yan Wang | David Vandyke | Simon Baker | Piji Li | Nigel Collier
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

Non-autoregressive generation (NAG) has recently attracted great attention for its fast inference speed. However, the generation quality of existing NAG models still lags behind their autoregressive counterparts. In this work, we show that BERT can be employed as the backbone of a NAG model for greatly improved performance. Additionally, we devise two mechanisms to alleviate two common problems of vanilla NAG models: the inflexibility of a pre-fixed output length and the conditional independence of individual token predictions. To further strengthen the speed advantage of the proposed model, we propose a new decoding strategy, ratio-first, for applications where the output length can be approximately estimated beforehand. For a comprehensive evaluation, we test the proposed model on three text generation tasks: text summarization, sentence compression and machine translation. Experimental results show that our model significantly outperforms existing non-autoregressive baselines and achieves competitive performance with many strong autoregressive models. We also conduct extensive analysis experiments to reveal the effect of each proposed component.
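
One way to picture the ratio-first strategy (a reading of the abstract, not the released implementation): if the output length can be estimated as a fixed ratio of the source length, a single non-autoregressive pass only needs to keep the first ceil(ratio * src_len) positions.

import math
import torch

def ratio_first_decode(token_logits: torch.Tensor, src_len: int, ratio: float) -> torch.Tensor:
    # token_logits: (max_len, vocab_size) per-position predictions produced by
    # one parallel forward pass of a NAG model.
    keep = min(token_logits.size(0), math.ceil(ratio * src_len))
    # Greedily pick a token for each of the kept positions only; the remaining
    # positions are never decoded, which is where the extra speed comes from.
    return token_logits[:keep].argmax(dim=-1)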

Keep the Primary, Rewrite the Secondary: A Two-Stage Approach for Paraphrase Generation
Yixuan Su | David Vandyke | Simon Baker | Yan Wang | Nigel Collier
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

Plan-then-Generate: Controlled Data-to-Text Generation via Planning
Yixuan Su | David Vandyke | Sihui Wang | Yimai Fang | Nigel Collier
Findings of the Association for Computational Linguistics: EMNLP 2021

Recent developments in neural networks have advanced data-to-text generation. However, the limited ability of neural models to control the structure of the generated output can be a drawback in certain real-world applications. In this study, we propose a novel Plan-then-Generate (PlanGen) framework to improve the controllability of neural data-to-text models. Extensive experiments and analyses are conducted on two benchmark datasets, ToTTo and WebNLG. The results show that our model is able to control both the intra-sentence and inter-sentence structure of the generated output. Furthermore, empirical comparisons against previous state-of-the-art methods show that our model improves both generation quality and output diversity, as judged by human and automatic evaluations.
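
The overall control flow of such a plan-then-generate pipeline can be outlined in a few lines (stand-in components for illustration; the actual PlanGen planner and generator are neural models): the planner first orders the input records into a content plan, and the generator then realises text conditioned on that plan.

from typing import Callable, List

def plan_then_generate(records: List[str],
                       planner: Callable[[List[str]], List[str]],
                       generator: Callable[[List[str]], str]) -> str:
    plan = planner(records)   # e.g. an ordered sequence of record keys
    return generator(plan)    # surface text that follows the plan's structure

# Toy usage with trivial stand-ins:
# plan_then_generate(["name: Alice", "team: Leeds"],
#                    planner=sorted,
#                    generator=" ; ".join)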

Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue
Haizhou Li | Gina-Anne Levow | Zhou Yu | Chitralekha Gupta | Berrak Sisman | Siqi Cai | David Vandyke | Nina Dethlefs | Yan Wu | Junyi Jessy Li

2020

Proceedings of the 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue
Olivier Pietquin | Smaranda Muresan | Vivian Chen | Casey Kennington | David Vandyke | Nina Dethlefs | Koji Inoue | Erik Ekstedt | Stefan Ultes

A Generative Model for Joint Natural Language Understanding and Generation
Bo-Hsiang Tseng | Jianpeng Cheng | Yimai Fang | David Vandyke
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Natural language understanding (NLU) and natural language generation (NLG) are two fundamental, related tasks in building task-oriented dialogue systems, with opposite objectives: NLU tackles the transformation from natural language to formal representations, whereas NLG does the reverse. A key to success in either task is parallel training data, which is expensive to obtain at large scale. In this work, we propose a generative model that couples NLU and NLG through a shared latent variable. This approach allows us to explore both the space of natural language and that of formal representations, and facilitates information sharing through the latent space, ultimately benefiting both NLU and NLG. Our model achieves state-of-the-art performance on two dialogue datasets with both flat and tree-structured formal representations. We also show that the model can be trained in a semi-supervised fashion by utilising unlabelled data to boost its performance.
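
A compact sketch of the shared-latent-variable idea (an assumed VAE-style parameterisation, not the paper's exact architecture): one encoder maps an input representation to a latent z, and two decoders map z back to the formal and the natural-language space, so supervision on either side shapes the shared latent space.

import torch
import torch.nn as nn

class SharedLatentNLUNLG(nn.Module):
    def __init__(self, d_in: int, d_z: int):
        super().__init__()
        self.enc_mu = nn.Linear(d_in, d_z)
        self.enc_logvar = nn.Linear(d_in, d_z)
        self.dec_formal = nn.Linear(d_z, d_in)    # NLU head: z -> formal repr.
        self.dec_natural = nn.Linear(d_z, d_in)   # NLG head: z -> natural lang.

    def forward(self, x: torch.Tensor):
        mu, logvar = self.enc_mu(x), self.enc_logvar(x)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterise
        # Both reconstruction losses backpropagate into the shared encoder,
        # which is what lets data on one side also help the other.
        return self.dec_formal(z), self.dec_natural(z), mu, logvar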

2017

PyDial: A Multi-domain Statistical Dialogue System Toolkit
Stefan Ultes | Lina M. Rojas-Barahona | Pei-Hao Su | David Vandyke | Dongho Kim | Iñigo Casanueva | Paweł Budzianowski | Nikola Mrkšić | Tsung-Hsien Wen | Milica Gašić | Steve Young
Proceedings of ACL 2017, System Demonstrations

A Network-based End-to-End Trainable Task-oriented Dialogue System
Tsung-Hsien Wen | David Vandyke | Nikola Mrkšić | Milica Gašić | Lina M. Rojas-Barahona | Pei-Hao Su | Stefan Ultes | Steve Young
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers

Teaching machines to accomplish tasks by conversing naturally with humans is challenging. Currently, developing a task-oriented dialogue system requires creating multiple components, and typically this involves either a large amount of handcrafting or acquiring costly labelled datasets to solve a statistical learning problem for each component. In this work we introduce a neural network-based, text-in text-out, end-to-end trainable goal-oriented dialogue system, along with a new way of collecting dialogue data based on a novel pipelined Wizard-of-Oz framework. This approach allows us to develop dialogue systems easily and without making too many assumptions about the task at hand. The results show that the model can converse with human subjects naturally whilst helping them accomplish tasks in a restaurant search domain.
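
The text-in, text-out turn-level flow the abstract describes can be outlined as below (the component names follow the paper's modular design; the callables are stand-ins for the real networks, not their implementations):

from typing import Any, Callable, Tuple

def dialogue_turn(user_text: str,
                  state: Any,
                  intent_net: Callable,
                  belief_tracker: Callable,
                  db_operator: Callable,
                  policy_net: Callable,
                  generator: Callable) -> Tuple[str, Any]:
    intent = intent_net(user_text)                # intent network
    state = belief_tracker(state, user_text)      # belief trackers (per slot)
    matches = db_operator(state)                  # database operator
    action = policy_net(intent, state, matches)   # policy network
    return generator(action), state               # generation network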

2016

Towards Using Conversations with Spoken Dialogue Systems in the Automated Assessment of Non-Native Speakers of English
Diane Litman | Steve Young | Mark Gales | Kate Knill | Karen Ottewell | Rogier van Dalen | David Vandyke
Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue

Multi-domain Neural Network Language Generation for Spoken Dialogue Systems
Tsung-Hsien Wen | Milica Gašić | Nikola Mrkšić | Lina M. Rojas-Barahona | Pei-Hao Su | David Vandyke | Steve Young
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Counter-fitting Word Vectors to Linguistic Constraints
Nikola Mrkšić | Diarmuid Ó Séaghdha | Blaise Thomson | Milica Gašić | Lina M. Rojas-Barahona | Pei-Hao Su | David Vandyke | Tsung-Hsien Wen | Steve Young
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

On-line Active Reward Learning for Policy Optimisation in Spoken Dialogue Systems
Pei-Hao Su | Milica Gašić | Nikola Mrkšić | Lina M. Rojas-Barahona | Stefan Ultes | David Vandyke | Tsung-Hsien Wen | Steve Young
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Conditional Generation and Snapshot Learning in Neural Dialogue Systems
Tsung-Hsien Wen | Milica Gašić | Nikola Mrkšić | Lina M. Rojas-Barahona | Pei-Hao Su | Stefan Ultes | David Vandyke | Steve Young
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

2015

Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems
Tsung-Hsien Wen | Milica Gašić | Nikola Mrkšić | Pei-Hao Su | David Vandyke | Steve Young
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

Multi-domain Dialog State Tracking using Recurrent Neural Networks
Nikola Mrkšić | Diarmuid Ó Séaghdha | Blaise Thomson | Milica Gašić | Pei-Hao Su | David Vandyke | Tsung-Hsien Wen | Steve Young
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

Stochastic Language Generation in Dialogue using Recurrent Neural Networks with Convolutional Sentence Reranking
Tsung-Hsien Wen | Milica Gašić | Dongho Kim | Nikola Mrkšić | Pei-Hao Su | David Vandyke | Steve Young
Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue

Reward Shaping with Recurrent Neural Networks for Speeding up On-Line Policy Learning in Spoken Dialogue Systems
Pei-Hao Su | David Vandyke | Milica Gašić | Nikola Mrkšić | Tsung-Hsien Wen | Steve Young
Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue