Amy Isard


2022

MY DGS – ANNIS: ANNIS and the Public DGS Corpus
Amy Isard | Reiner Konrad
Proceedings of the LREC2022 10th Workshop on the Representation and Processing of Sign Languages: Multilingual Sign Language Resources

In 2018 the DGS-Korpus project published the first full release of the Public DGS Corpus. The data have already been published in two different ways to fulfil the needs of different user groups, and we have now published the third portal MY DGS – ANNIS using the ANNIS browser-based corpus software. ANNIS is a corpus query tool for visualization and querying of multi-layer corpus data. It has its own query language, AQL, and is accessed from a web browser without requiring a login. It allows more complex queries and visualizations than those provided by the existing research portal. We introduce ANNIS and its query language AQL, describe the structure of MY DGS – ANNIS, and give some example queries. The use cases with queries over multiple annotation tiers and metadata illustrate the research potential of this powerful tool and show how students and researchers can explore the Public DGS Corpus.
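To give a flavour of the query language mentioned above, the lines below sketch a few AQL query strings of the kind a user might run, wrapped in a small Python script so they can be printed and inspected. The annotation-layer names (gloss, mouthing) are illustrative placeholders only, not the actual tier names used in MY DGS – ANNIS.

# Illustrative AQL query strings; the layer names "gloss" and "mouthing"
# are hypothetical placeholders, not the real DGS Corpus tier names.
example_queries = {
    # Any single token on any layer.
    "any token": 'tok',
    # Exact-match search on a (hypothetical) gloss annotation layer.
    "gloss value": 'gloss="HOUSE1"',
    # Regular-expression search: glosses beginning with HOUSE.
    "gloss regex": 'gloss=/HOUSE.*/',
    # Two annotations covering exactly the same span (identical coverage),
    # e.g. a gloss aligned with a mouthing annotation.
    "aligned tiers": 'gloss=/HOUSE.*/ & mouthing=/.*/ & #1 _=_ #2',
    # One annotation directly preceding another on the timeline.
    "precedence": 'gloss="HOUSE1" & gloss="GARDEN1" & #1 . #2',
}

for name, aql in example_queries.items():
    print(f"{name:14} {aql}")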

2020

Neural NLG for Methodius: From RST Meaning Representations to Texts
Symon Stevens-Guille | Aleksandre Maskharashvili | Amy Isard | Xintong Li | Michael White
Proceedings of the 13th International Conference on Natural Language Generation

While classic NLG systems typically made use of hierarchically structured content plans that included discourse relations as central components, more recent neural approaches have mostly mapped simple, flat inputs to texts without representing discourse relations explicitly. In this paper, we investigate whether it is beneficial to include discourse relations in the input to neural data-to-text generators for texts where discourse relations play an important role. To do so, we reimplement the sentence planning and realization components of a classic NLG system, Methodius, using LSTM sequence-to-sequence (seq2seq) models. We find that although seq2seq models can learn to generate fluent and grammatical texts remarkably well with sufficiently representative Methodius training data, they cannot learn to correctly express Methodius’s similarity and contrast comparisons unless the corresponding RST relations are included in the inputs. Additionally, we experiment with using self-training and reverse model reranking to better handle train/test data mismatches, and find that while these methods help reduce content errors, it remains essential to include discourse relations in the input to obtain optimal performance.
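As a purely illustrative sketch, not taken from the Methodius data, the snippet below contrasts a flat input with one in which an RST contrast relation is made explicit; the predicate and relation names are invented, but the distinction is the one the experiments above turn on.

# Hypothetical example of flat vs. RST-structured input to a data-to-text
# seq2seq generator; predicate names are invented for illustration.
flat_input = (
    'item(amphora-1) period(amphora-1, archaic) '
    'item(lekythos-2) period(lekythos-2, classical)'
)

# Same content, but the contrast relation between the two facts is encoded
# explicitly, so the model can learn to express the comparison.
rst_input = (
    'contrast( period(amphora-1, archaic), period(lekythos-2, classical) )'
)

target_text = (
    'This amphora dates from the archaic period, whereas the lekythos '
    'was created in the classical period.'
)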

Approaches to the Anonymisation of Sign Language Corpora
Amy Isard
Proceedings of the LREC2020 9th Workshop on the Representation and Processing of Sign Languages: Sign Language Resources in the Service of the Language Community, Technological Challenges and Application Perspectives

In this paper we survey the state of the art for the anonymisation of sign language corpora. We begin by exploring the motivations behind anonymisation and the close connection with the issue of ethics and informed consent for corpus participants. We detail how the names which should be anonymised can be identified. We then describe the processes which can be used to anonymise both the video and the annotations belonging to a corpus, and the variety of ways in which these can be carried out. We provide examples for all of these processes from three sign language corpora in which anonymisation of the data has been performed.

2018

Up-cycling Data for Natural Language Generation
Amy Isard | Jon Oberlander | Claire Grover
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

2016

The Methodius Corpus of Rhetorical Discourse Structures and Generated Texts
Amy Isard
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16)

Using the Methodius Natural Language Generation (NLG) System, we have created a corpus which consists of a collection of generated texts which describe ancient Greek artefacts. Each text is linked to two representations created as part of the NLG process. The first is a content plan, which uses rhetorical relations to describe the high-level discourse structure of the text, and the second is a logical form describing the syntactic structure, which is sent to the OpenCCG surface realization module to produce the final text output. In recent work, White and Howcroft (2015) have used the SPaRKy restaurant corpus, which contains a similar combination of texts and representations, for their research on the induction of rules for the combination of clauses. In the first instance this corpus will be used to test their algorithms on an additional domain, and to extend their work to include the learning of referring expression generation rules. As far as we know, the SPaRKy restaurant corpus is the only existing corpus of this type, and we hope that the creation of this new corpus in a different domain will provide a useful resource to the Natural Language Generation community.
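The following Python sketch illustrates, with invented field names and a made-up example rather than the corpus's actual schema, how each corpus entry links the three levels described above: content plan, logical form, and generated text.

# Schematic sketch of a corpus entry; field names and values are
# illustrative, not the real Methodius corpus schema.
corpus_entry = {
    # High-level discourse structure: a rhetorical relation over content facts.
    "content_plan": ("elaboration",
                     ("fact", "exhibit-12", "type", "amphora"),
                     ("fact", "exhibit-12", "period", "archaic")),
    # Simplified stand-in for the logical form passed to the OpenCCG
    # surface realization module.
    "logical_form": ("be", ("det", "exhibit-12"),
                     ("amphora", ("from", "archaic-period"))),
    # Final generated text.
    "text": "This exhibit is an amphora from the archaic period.",
}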

Proceedings of the 9th International Natural Language Generation conference
Amy Isard | Verena Rieser | Dimitra Gkatzia
Proceedings of the 9th International Natural Language Generation conference

Automatic Generation of Student Report Cards
Amy Isard | Jeremy Knox
Proceedings of the 9th International Natural Language Generation conference

2014

Using Ellipsis Detection and Word Similarity for Transformation of Spoken Language into Grammatically Valid Sentences
Manuel Giuliani | Thomas Marschall | Amy Isard
Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL)

2012

Evaluating language understanding accuracy with respect to objective outcomes in a dialogue system
Myroslava O. Dzikovska | Peter Bell | Amy Isard | Johanna D. Moore
Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics

2011

Beetle II: an adaptable tutorial dialogue system
Myroslava Dzikovska | Amy Isard | Peter Bell | Johanna Moore | Natalie Steinhauser | Gwendolyn Campbell
Proceedings of the SIGDIAL 2011 Conference

2010

Situated Reference in a Hybrid Human-Robot Interaction System
Manuel Giuliani | Mary Ellen Foster | Amy Isard | Colin Matheson | Jon Oberlander | Alois Knoll
Proceedings of the 6th International Natural Language Generation Conference

2008

Creation of a New Domain and Evaluation of Comparison Generation in a Natural Language Generation System
Matthew Marge | Amy Isard | Johanna Moore
Proceedings of the Fifth International Natural Language Generation Conference

2006

Individuality and Alignment in Generated Dialogues
Amy Isard | Carsten Brockmann | Jon Oberlander
Proceedings of the Fourth International Natural Language Generation Conference

2004

Multi-lingual Evaluation of a Natural Language Generation System
Athanasios Karasimos | Amy Isard
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

2002

Advanced Tools for the Study of Natural Interactivity
Claudia Soria | Niels Ole Bernsen | Niels Cadée | Jean Carletta | Laila Dybkjær | Stefan Evert | Ulrich Heid | Amy Isard | Mykola Kolodnytsky | Christoph Lauer | Wolfgang Lezius | Lucas P.J.J. Noldus | Vito Pirrelli | Norbert Reithinger | Andreas Vögele
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)

2000

The MATE Workbench Annotation Tool, a Technical Description
Amy Isard | David McKelvie | Andreas Mengel | Morten Baun Møller
Proceedings of the Second International Conference on Language Resources and Evaluation (LREC’00)

1999

The MATE Annotation Workbench: User Requirements
Jean Carletta | Amy Isard
Towards Standards and Tools for Discourse Tagging

1997

The Reliability of a Dialogue Structure Coding Scheme
Jean Carletta | Amy Isard | Stephen Isard | Jacqueline C. Kowtko | Gwyneth Doherty-Sneddon | Anne H. Anderson
Computational Linguistics, Volume 23, Number 1, March 1997