Ion Androutsopoulos


2024

Should I try multiple optimizers when fine-tuning a pre-trained Transformer for NLP tasks? Should I tune their hyperparameters?
Nefeli Gkouti | Prodromos Malakasiotis | Stavros Toumpis | Ion Androutsopoulos
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)

NLP research has explored different neural model architectures and sizes, datasets, training objectives, and transfer learning techniques. However, the choice of optimizer during training has not been explored as extensively. Typically, one of the numerous variants of Stochastic Gradient Descent (SGD) is employed, selected using unclear criteria, often with minimal or no tuning of the optimizer’s hyperparameters. Experimenting with five GLUE datasets, two models (DistilBERT and DistilRoBERTa), and seven popular optimizers (SGD, SGD with Momentum, Adam, AdaMax, Nadam, AdamW, and AdaBound), we find that when the hyperparameters of the optimizers are tuned, there is no substantial difference in test performance across the five more elaborate (adaptive) optimizers, despite differences in training loss. Furthermore, tuning just the learning rate is in most cases as good as tuning all the hyperparameters. Hence, we recommend picking any of the best-behaved adaptive optimizers (e.g., Adam) and tuning only its learning rate. When no hyperparameter can be tuned, SGD with Momentum is the best choice.
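
A minimal sketch of the recommendation above, assuming a PyTorch setup: the optimizer is fixed to Adam and only its learning rate is grid-searched on development data. The model factory and the train/evaluate helpers are placeholders, not the authors' code.

```python
import torch

def tune_lr_only(model_fn, train_fn, eval_fn, lrs=(1e-5, 3e-5, 5e-5, 1e-4)):
    """Fine-tune with Adam, tuning only the learning rate on a dev set."""
    best_score, best_model = float("-inf"), None
    for lr in lrs:
        model = model_fn()  # fresh copy of the pre-trained model per trial
        # All other Adam hyperparameters are left at their defaults.
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        train_fn(model, optimizer)  # standard fine-tuning loop (placeholder)
        score = eval_fn(model)      # development-set metric (placeholder)
        if score > best_score:
            best_score, best_model = score, model
    return best_model, best_score
```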

2023

Cache me if you Can: an Online Cost-aware Teacher-Student framework to Reduce the Calls to Large Language Models
Ilias Stogiannidis | Stavros Vassos | Prodromos Malakasiotis | Ion Androutsopoulos
Findings of the Association for Computational Linguistics: EMNLP 2023

Prompting Large Language Models (LLMs) performs impressively in zero- and few-shot settings. Hence, small and medium-sized enterprises (SMEs) that can afford neither the cost of creating large task-specific training datasets nor the cost of pretraining their own LLMs are increasingly turning to third-party services that allow them to prompt LLMs. However, such services currently require a payment per call, which becomes a significant operating expense (OpEx). Furthermore, customer inputs are often very similar over time, hence SMEs end up prompting LLMs with very similar instances. We propose a framework that reduces the calls to LLMs by caching previous LLM responses and using them to train an inexpensive local model on the SME side. The framework includes criteria for deciding when to trust the local model or call the LLM, and a methodology to tune the criteria and measure the tradeoff between performance and cost. For experimental purposes, we instantiate our framework with two LLMs, GPT-3.5 or GPT-4, and two inexpensive students, a k-NN classifier or a Multi-Layer Perceptron, using two common business tasks, intent recognition and sentiment analysis. Experimental results indicate that significant OpEx savings can be obtained with only slightly lower performance.
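
The core loop of such a framework might look as follows. This is a hedged sketch, not the paper's implementation: call_llm and embed are user-supplied placeholders, the student is a k-NN classifier (one of the two students used in the paper), and the fixed confidence threshold stands in for the paper's tuned criteria.

```python
from sklearn.neighbors import KNeighborsClassifier

class CachedClassifier:
    def __init__(self, call_llm, embed, threshold=0.8):
        self.call_llm, self.embed, self.threshold = call_llm, embed, threshold
        self.student = KNeighborsClassifier(n_neighbors=5)
        self.X, self.y = [], []  # cache of embedded inputs and LLM labels

    def predict(self, text):
        x = self.embed(text)
        if len(self.y) >= 5:  # enough cached examples to consult the student
            proba = self.student.predict_proba([x])[0]
            if proba.max() >= self.threshold:  # trust the cheap local model
                return self.student.classes_[proba.argmax()]
        label = self.call_llm(text)  # paid LLM call; the response is cached
        self.X.append(x); self.y.append(label)
        self.student.fit(self.X, self.y)  # retrain the inexpensive student
        return label
```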

Machine Learning for Ancient Languages: A Survey
Thea Sommerschield | Yannis Assael | John Pavlopoulos | Vanessa Stefanak | Andrew Senior | Chris Dyer | John Bodel | Jonathan Prag | Ion Androutsopoulos | Nando de Freitas
Computational Linguistics, Volume 49, Issue 3 - September 2023

Ancient languages preserve the cultures and histories of the past. However, their study is fraught with difficulties, and experts must tackle a range of challenging text-based tasks, from deciphering lost languages to restoring damaged inscriptions, to determining the authorship of works of literature. Technological aids have long supported the study of ancient texts, but in recent years advances in artificial intelligence and machine learning have enabled analyses on a scale and in a detail that are reshaping the field of humanities, similarly to how microscopes and telescopes have contributed to the realm of science. This article aims to provide a comprehensive survey of published research using machine learning for the study of ancient texts written in any language, script, and medium, spanning over three and a half millennia of civilizations around the ancient world. To analyze the relevant literature, we introduce a taxonomy of tasks inspired by the steps involved in the study of ancient documents: digitization, restoration, attribution, linguistic analysis, textual criticism, translation, and decipherment. This work offers three major contributions: first, mapping the interdisciplinary field carved out by the synergy between the humanities and machine learning; second, highlighting how active collaboration between specialists from both fields is key to producing impactful and compelling scholarship; third, highlighting promising directions for future work in this field. Thus, this work promotes and supports the continued collaborative impetus between the humanities and machine learning.

Harmful Language Datasets: An Assessment of Robustness
Katerina Korre | John Pavlopoulos | Jeffrey Sorensen | Léo Laugier | Ion Androutsopoulos | Lucas Dixon | Alberto Barrón-cedeño
The 7th Workshop on Online Abuse and Harms (WOAH)

The automated detection of harmful language has been of great importance for the online world, especially given the growing reach of social media and, consequently, polarisation. There are many open challenges to high-quality detection of harmful text, from dataset creation to generalisable application, thus calling for more systematic studies. In this paper, we explore re-annotation as a means of examining the robustness of already existing labelled datasets, showing that, despite using alternative definitions, the inter-annotator agreement remains very inconsistent, highlighting the intrinsically subjective and variable nature of the task. In addition, we build automatic toxicity detectors using the existing datasets, with their original labels, and we evaluate them on our multi-definition and multi-source datasets. Surprisingly, while other studies show that hate speech detection models perform better on data that are derived from the same distribution as the training set, our analysis demonstrates this is not necessarily true.

2022

From the Detection of Toxic Spans in Online Discussions to the Analysis of Toxic-to-Civil Transfer
John Pavlopoulos | Leo Laugier | Alexandros Xenos | Jeffrey Sorensen | Ion Androutsopoulos
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

We study the task of toxic spans detection, which concerns the detection of the spans that make a text toxic, when detecting such spans is possible. We introduce a dataset for this task, ToxicSpans, which we release publicly. By experimenting with several methods, we show that sequence labeling models perform best, but methods that add generic rationale extraction mechanisms on top of classifiers trained to predict if a post is toxic or not are also surprisingly promising. Finally, we use ToxicSpans and systems trained on it to provide further analysis of state-of-the-art toxic-to-non-toxic transfer systems, as well as of human performance on the latter task. Our work highlights challenges in finer toxicity detection and mitigation.

LexGLUE: A Benchmark Dataset for Legal Language Understanding in English
Ilias Chalkidis | Abhik Jana | Dirk Hartung | Michael Bommarito | Ion Androutsopoulos | Daniel Katz | Nikolaos Aletras
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Laws and their interpretations, legal arguments and agreements are typically expressed in writing, leading to the production of vast corpora of legal text. Their analysis, which is at the center of legal practice, becomes increasingly elaborate as these collections grow in size. Natural language understanding (NLU) technologies can be a valuable tool to support legal practitioners in these endeavors. Their usefulness, however, largely depends on whether current state-of-the-art models can generalize across various tasks in the legal domain. To answer this currently open question, we introduce the Legal General Language Understanding Evaluation (LexGLUE) benchmark, a collection of datasets for evaluating model performance across a diverse set of legal NLU tasks in a standardized way. We also provide an evaluation and analysis of several generic and legal-oriented models demonstrating that the latter consistently offer performance improvements across multiple tasks.

FiNER: Financial Numeric Entity Recognition for XBRL Tagging
Lefteris Loukas | Manos Fergadiotis | Ilias Chalkidis | Eirini Spyropoulou | Prodromos Malakasiotis | Ion Androutsopoulos | Georgios Paliouras
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Publicly traded companies are required to submit periodic reports with eXtensible Business Reporting Language (XBRL) word-level tags. Manually tagging the reports is tedious and costly. We, therefore, introduce XBRL tagging as a new entity extraction task for the financial domain and release FiNER-139, a dataset of 1.1M sentences with gold XBRL tags. Unlike typical entity extraction datasets, FiNER-139 uses a much larger label set of 139 entity types. Most annotated tokens are numeric, with the correct tag per token depending mostly on context, rather than the token itself. We show that subword fragmentation of numeric expressions harms BERT’s performance, allowing word-level BILSTMs to perform better. To improve BERT’s performance, we propose two simple and effective solutions that replace numeric expressions with pseudo-tokens reflecting original token shapes and numeric magnitudes. We also experiment with FIN-BERT, an existing BERT model for the financial domain, and release our own BERT (SEC-BERT), pre-trained on financial filings, which performs best. Through data and error analysis, we finally identify possible limitations to inspire future work on XBRL tagging.
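
The pseudo-token idea is easy to illustrate. In the sketch below, each numeric expression is replaced by a token that keeps only its shape; the exact token format ([XX.X] here) is an assumption for illustration, not necessarily the paper's.

```python
import re

def shape_pseudo_token(match):
    # Keep punctuation, replace every digit with X: "12.5" -> "[XX.X]".
    return "[" + re.sub(r"\d", "X", match.group(0)) + "]"

def mask_numbers(text):
    return re.sub(r"\d+(?:,\d{3})*(?:\.\d+)?", shape_pseudo_token, text)

print(mask_numbers("Revenue rose 12.5 percent to 1,234 million."))
# Revenue rose [XX.X] percent to [X,XXX] million.
```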

Processing Long Legal Documents with Pre-trained Transformers: Modding LegalBERT and Longformer
Dimitris Mamakas | Petros Tsotsi | Ion Androutsopoulos | Ilias Chalkidis
Proceedings of the Natural Legal Language Processing Workshop 2022

Pre-trained Transformers currently dominate most NLP tasks. They impose, however, limits on the maximum input length (512 sub-words in BERT), which are too restrictive in the legal domain. Even sparse-attention models, such as Longformer and BigBird, which increase the maximum input length to 4,096 sub-words, severely truncate texts in three of the six datasets of LexGLUE. Simpler linear classifiers with TF-IDF features can handle texts of any length and require far fewer resources to train and deploy, but are usually outperformed by pre-trained Transformers. We explore two directions to cope with long legal texts: (i) modifying a Longformer warm-started from LegalBERT to handle even longer texts (up to 8,192 sub-words), and (ii) modifying LegalBERT to use TF-IDF representations. The first approach is the best in terms of performance, surpassing a hierarchical version of LegalBERT, which was the previous state of the art in LexGLUE. The second approach leads to computationally more efficient models at the expense of lower performance, but the resulting models still outperform overall a linear SVM with TF-IDF features in long legal document classification.

Data Augmentation for Biomedical Factoid Question Answering
Dimitris Pappas | Prodromos Malakasiotis | Ion Androutsopoulos
Proceedings of the 21st Workshop on Biomedical Language Processing

We study the effect of seven data augmentation (DA) methods in factoid question answering, focusing on the biomedical domain, where obtaining training instances is particularly difficult. We experiment with data from the BIOASQ challenge, which we augment with training instances obtained from an artificial biomedical machine reading comprehension dataset, or via back-translation, information retrieval, word substitution based on WORD2VEC embeddings or masked language modeling, question generation, or extension of the given passage with additional context. We show that DA can lead to very significant performance gains, even when using large pre-trained Transformers, contributing to a broader discussion of if/when DA benefits large pre-trained models. One of the simplest DA methods, WORD2VEC-based word substitution, performed best and is recommended. We release our artificial training instances and code.
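
A minimal sketch of WORD2VEC-based word substitution, the best-performing DA method: some words are randomly replaced by nearest neighbours in embedding space. The embedding file path and the replacement rate are assumptions.

```python
import random
from gensim.models import KeyedVectors

def w2v_substitute(tokens, kv, rate=0.1, topn=5):
    out = []
    for tok in tokens:
        if tok in kv.key_to_index and random.random() < rate:
            neighbour, _ = random.choice(kv.most_similar(tok, topn=topn))
            out.append(neighbour)  # swap in a near-synonym
        else:
            out.append(tok)
    return out

# kv = KeyedVectors.load_word2vec_format("biomedical_w2v.bin", binary=True)  # hypothetical path
# print(w2v_substitute("the tumor suppressor gene".split(), kv))
```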

2021

Paragraph-level Rationale Extraction through Regularization: A case study on European Court of Human Rights Cases
Ilias Chalkidis | Manos Fergadiotis | Dimitrios Tsarapatsanis | Nikolaos Aletras | Ion Androutsopoulos | Prodromos Malakasiotis
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Interpretability or explainability is an emerging research field in NLP. From a user-centric point of view, the goal is to build models that provide proper justification for their decisions, similar to those of humans, by requiring the models to satisfy additional constraints. To this end, we introduce a new application on legal text where, contrary to mainstream literature targeting word-level rationales, we conceive rationales as selected paragraphs in multi-paragraph structured court cases. We also release a new dataset comprising European Court of Human Rights cases, including annotations for paragraph-level rationales. We use this dataset to study the effect of already proposed rationale constraints, i.e., sparsity, continuity, and comprehensiveness, formulated as regularizers. Our findings indicate that some of these constraints are not beneficial in paragraph-level rationale extraction, while others need re-formulation to better handle the multi-label nature of the task we consider. We also introduce a new constraint, singularity, which further improves the quality of rationales, even compared with noisy rationale supervision. Experimental results indicate that the newly introduced task is very challenging and there is a large scope for further research.
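
For concreteness, sparsity and continuity constraints of the kind studied here can be written as penalties on soft paragraph-selection scores; the sketch below is illustrative, with placeholder weights, and the paper's exact formulation may differ.

```python
import torch

def rationale_regularizers(a, sparsity_w=0.01, continuity_w=0.01):
    """a: tensor of shape (num_paragraphs,) with soft selection scores in [0, 1]."""
    sparsity = a.abs().sum()                   # select as few paragraphs as possible
    continuity = (a[1:] - a[:-1]).abs().sum()  # prefer contiguous selections
    return sparsity_w * sparsity + continuity_w * continuity

extra_loss = rationale_regularizers(torch.sigmoid(torch.randn(10)))
```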

EDGAR-CORPUS: Billions of Tokens Make The World Go Round
Lefteris Loukas | Manos Fergadiotis | Ion Androutsopoulos | Prodromos Malakasiotis
Proceedings of the Third Workshop on Economics and Natural Language Processing

We release EDGAR-CORPUS, a novel corpus comprising annual reports from all the publicly traded companies in the US spanning a period of more than 25 years. To the best of our knowledge, EDGAR-CORPUS is the largest financial NLP corpus available to date. All the reports are downloaded, split into their corresponding items (sections), and provided in a clean, easy-to-use JSON format. We use EDGAR-CORPUS to train and release EDGAR-W2V, which are WORD2VEC embeddings for the financial domain. We employ these embeddings in a battery of financial NLP tasks and showcase their superiority over generic GloVe embeddings and other existing financial word embeddings. We also open-source EDGAR-CRAWLER, a toolkit that facilitates downloading and extracting future annual reports.

SemEval-2021 Task 5: Toxic Spans Detection
John Pavlopoulos | Jeffrey Sorensen | Léo Laugier | Ion Androutsopoulos
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)

The Toxic Spans Detection task of SemEval-2021 required participants to predict the spans of toxic posts that were responsible for the toxic label of the posts. The task could be addressed as supervised sequence labeling, using training data with gold toxic spans provided by the organisers. It could also be treated as rationale extraction, using classifiers trained on potentially larger external datasets of posts manually annotated as toxic or not, without toxic span annotations. For the supervised sequence labeling approach and evaluation purposes, posts previously labeled as toxic were crowd-annotated for toxic spans. Participants submitted their predicted spans for a held-out test set and were scored using character-based F1. This overview summarises the work of the 36 teams that provided system descriptions.
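
A minimal sketch of the character-based F1 used for scoring, treating predicted and gold toxic spans as sets of character offsets and scoring one post at a time (with the usual convention that two empty sets score 1):

```python
def char_f1(pred_offsets, gold_offsets):
    pred, gold = set(pred_offsets), set(gold_offsets)
    if not pred and not gold:
        return 1.0  # nothing to find, nothing predicted
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

print(char_f1(range(10, 20), range(15, 25)))  # 0.5
```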

Context Sensitivity Estimation in Toxicity Detection
Alexandros Xenos | John Pavlopoulos | Ion Androutsopoulos
Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)

User posts whose perceived toxicity depends on the conversational context are rare in current toxicity detection datasets. Hence, toxicity detectors trained on current datasets will also disregard context, making the detection of context-sensitive toxicity a lot harder when it occurs. We constructed and publicly release a dataset of 10k posts with two kinds of toxicity labels per post, obtained from annotators who considered (i) both the current post and the previous one as context, or (ii) only the current post. We introduce a new task, context-sensitivity estimation, which aims to identify posts whose perceived toxicity changes if the context (previous post) is also considered. Using the new dataset, we show that systems can be developed for this task. Such systems could be used to enhance toxicity detection datasets with more context-dependent posts or to suggest when moderators should consider the parent posts, which may not always be necessary and may introduce additional costs.

A Neural Model for Joint Document and Snippet Ranking in Question Answering for Large Document Collections
Dimitris Pappas | Ion Androutsopoulos
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Question answering (QA) systems for large document collections typically use pipelines that (i) retrieve possibly relevant documents, (ii) re-rank them, (iii) rank paragraphs or other snippets of the top-ranked documents, and (iv) select spans of the top-ranked snippets as exact answers. Pipelines are conceptually simple, but errors propagate from one component to the next, without later components being able to revise earlier decisions. We present an architecture for joint document and snippet ranking, the two middle stages, which leverages the intuition that relevant documents have good snippets and good snippets come from relevant documents. The architecture is general and can be used with any neural text relevance ranker. We experiment with two main instantiations of the architecture, based on POSIT-DRMM (PDRMM) and a BERT-based ranker. Experiments on biomedical data from BIOASQ show that our joint models vastly outperform the pipelines in snippet retrieval, the main goal for QA, with fewer trainable parameters, also remaining competitive in document retrieval. Furthermore, our joint PDRMM-based model is competitive with BERT-based models, despite using orders of magnitude fewer parameters. These claims are also supported by human evaluation on two test batches of BIOASQ. To test our key findings on another dataset, we modified the Natural Questions dataset so that it can also be used for document and snippet retrieval. Our joint PDRMM-based model again outperforms the corresponding pipeline in snippet retrieval on the modified Natural Questions dataset, even though it performs worse than the pipeline in document retrieval. We make our code and the modified Natural Questions dataset publicly available.

MultiEURLEX - A multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer
Ilias Chalkidis | Manos Fergadiotis | Ion Androutsopoulos
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

We introduce MULTI-EURLEX, a new multilingual dataset for topic classification of legal documents. The dataset comprises 65k European Union (EU) laws, officially translated into 23 languages, annotated with multiple labels from the EUROVOC taxonomy. We highlight the effect of temporal concept drift and the importance of chronological, instead of random splits. We use the dataset as a testbed for zero-shot cross-lingual transfer, where we exploit annotated training documents in one language (source) to classify documents in another language (target). We find that fine-tuning a multilingually pretrained model (XLM-ROBERTA, MT5) in a single source language leads to catastrophic forgetting of multilingual knowledge and, consequently, poor zero-shot transfer to other languages. Adaptation strategies, namely partial fine-tuning, adapters, BITFIT, LNFIT, originally proposed to accelerate fine-tuning for new end-tasks, help retain multilingual knowledge from pretraining, substantially improving zero-shot cross-lingual transfer, but their impact also depends on the pretrained model used and the size of the label set.
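
Of the adaptation strategies mentioned, BITFIT is the simplest to illustrate: freeze every pre-trained weight and fine-tune only the bias terms, which helps retain the multilingual knowledge acquired during pretraining. A minimal sketch:

```python
import torch.nn as nn

def apply_bitfit(model: nn.Module) -> nn.Module:
    for name, param in model.named_parameters():
        param.requires_grad = name.endswith("bias")  # train biases only
    return model

# optimizer = torch.optim.Adam(p for p in model.parameters() if p.requires_grad)
```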

Proceedings of the Natural Legal Language Processing Workshop 2021
Nikolaos Aletras | Ion Androutsopoulos | Leslie Barrett | Catalina Goanta | Daniel Preotiuc-Pietro
Proceedings of the Natural Legal Language Processing Workshop 2021

2020

LEGAL-BERT: The Muppets straight out of Law School
Ilias Chalkidis | Manos Fergadiotis | Prodromos Malakasiotis | Nikolaos Aletras | Ion Androutsopoulos
Findings of the Association for Computational Linguistics: EMNLP 2020

BERT has achieved impressive performance in several NLP tasks. However, there has been limited investigation into guidelines for adapting it to specialised domains. Here we focus on the legal domain, where we explore several approaches for applying BERT models to downstream legal tasks, evaluating on multiple datasets. Our findings indicate that the previous guidelines for pre-training and fine-tuning, often blindly followed, do not always generalize well in the legal domain. Thus we propose a systematic investigation of the available strategies when applying BERT in specialised domains. These are: (a) use the original BERT out of the box, (b) adapt BERT by additional pre-training on domain-specific corpora, and (c) pre-train BERT from scratch on domain-specific corpora. We also propose a broader hyper-parameter search space when fine-tuning for downstream tasks and we release LEGAL-BERT, a family of BERT models intended to assist legal NLP research, computational law, and legal technology applications.

Domain Adversarial Fine-Tuning as an Effective Regularizer
Giorgos Vernikos | Katerina Margatina | Alexandra Chronopoulou | Ion Androutsopoulos
Findings of the Association for Computational Linguistics: EMNLP 2020

In Natural Language Processing (NLP), pretrained language models (LMs) that are transferred to downstream tasks have been recently shown to achieve state-of-the-art results. However, standard fine-tuning can degrade the general-domain representations captured during pretraining. To address this issue, we introduce a new regularization technique, AFTER: domain Adversarial Fine-Tuning as an Effective Regularizer. Specifically, we complement the task-specific loss used during fine-tuning with an adversarial objective. This additional loss term is tied to an adversarial classifier that aims to discriminate between in-domain and out-of-domain text representations. In-domain refers to the labeled dataset of the task at hand, while out-of-domain refers to unlabeled data from a different domain. Intuitively, the adversarial classifier acts as a regularizer which prevents the model from overfitting to the task-specific domain. Empirical results on various natural language understanding tasks show that AFTER leads to improved performance compared to standard fine-tuning.
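
A common way to realize such an adversarial objective is a gradient reversal layer in front of the domain discriminator; the hedged sketch below follows that pattern, with the loss weight lam as an assumed hyperparameter (the paper's exact formulation may differ).

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None  # reversed gradients reach the encoder

def after_loss(task_loss, features, domain_head, domain_labels, lam=0.1):
    reversed_feats = GradReverse.apply(features, lam)
    # The discriminator learns to tell domains apart, while the reversed
    # gradients push the encoder toward domain-invariant representations.
    adv_loss = F.cross_entropy(domain_head(reversed_feats), domain_labels)
    return task_loss + adv_loss
```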

Toxicity Detection: Does Context Really Matter?
John Pavlopoulos | Jeffrey Sorensen | Lucas Dixon | Nithum Thain | Ion Androutsopoulos
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Moderation is crucial to promoting healthy online discussions. Although several ‘toxicity’ detection datasets and models have been published, most of them ignore the context of the posts, implicitly assuming that comments may be judged independently. We investigate this assumption by focusing on two questions: (a) does context affect the human judgement, and (b) does conditioning on context improve the performance of toxicity detection systems? We experiment with Wikipedia conversations, limiting the notion of context to the previous post in the thread and the discussion title. We find that context can either amplify or mitigate the perceived toxicity of posts. Moreover, a small but significant subset of manually labeled posts (5% in one of our experiments) end up with the opposite toxicity labels if the annotators are not provided with context. Surprisingly, we also find no evidence that context actually improves the performance of toxicity classifiers, having tried a range of classifiers and mechanisms to make them context aware. This points to the need for larger datasets of comments annotated in context. We make our code and data publicly available.

BioMRC: A Dataset for Biomedical Machine Reading Comprehension
Dimitris Pappas | Petros Stavropoulos | Ion Androutsopoulos | Ryan McDonald
Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing

We introduce BIOMRC, a large-scale cloze-style biomedical MRC dataset. Care was taken to reduce noise, compared to the previous BIOREAD dataset of Pappas et al. (2018). Experiments show that simple heuristics do not perform well on the new dataset and that two neural MRC models that had been tested on BIOREAD perform much better on BIOMRC, indicating that the new dataset is indeed less noisy or at least that its task is more feasible. Non-expert human performance is also higher on the new dataset compared to BIOREAD, and biomedical experts perform even better. We also introduce a new BERT-based MRC model, the best version of which substantially outperforms all other methods tested, reaching or surpassing the accuracy of biomedical experts in some experiments. We make the new dataset available in three different sizes, also releasing our code, and providing a leaderboard.

An Empirical Study on Large-Scale Multi-Label Text Classification Including Few and Zero-Shot Labels
Ilias Chalkidis | Manos Fergadiotis | Sotiris Kotitsas | Prodromos Malakasiotis | Nikolaos Aletras | Ion Androutsopoulos
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Large-scale Multi-label Text Classification (LMTC) has a wide range of Natural Language Processing (NLP) applications and presents interesting challenges. First, not all labels are well represented in the training set, due to the very large label set and the skewed label distributions of datasets. Also, label hierarchies and differences in human labelling guidelines may affect graph-aware annotation proximity. Finally, the label hierarchies are periodically updated, requiring LMTC models capable of zero-shot generalization. Current state-of-the-art LMTC models employ Label-Wise Attention Networks (LWANs), which (1) typically treat LMTC as flat multi-label classification; (2) may use the label hierarchy to improve zero-shot learning, although this practice is vastly understudied; and (3) have not been combined with pre-trained Transformers (e.g. BERT), which have led to state-of-the-art results in several NLP benchmarks. Here, for the first time, we empirically evaluate a battery of LMTC methods from vanilla LWANs to hierarchical classification approaches and transfer learning, on frequent, few, and zero-shot learning on three datasets from different domains. We show that hierarchical methods based on Probabilistic Label Trees (PLTs) outperform LWANs. Furthermore, we show that Transformer-based approaches outperform the state-of-the-art in two of the datasets, and we propose a new state-of-the-art method which combines BERT with LWAN. Finally, we propose new models that leverage the label hierarchy to improve few and zero-shot learning, considering on each dataset a graph-aware annotation proximity measure that we introduce.
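
A label-wise attention layer of the kind referred to above can be sketched as follows: each label attends separately over the token representations and gets its own scoring vector. Dimensions and the label count below are illustrative.

```python
import torch
import torch.nn as nn

class LabelWiseAttention(nn.Module):
    def __init__(self, hidden, num_labels):
        super().__init__()
        self.label_queries = nn.Parameter(torch.randn(num_labels, hidden))
        self.label_outputs = nn.Parameter(torch.randn(num_labels, hidden))

    def forward(self, H):  # H: (batch, seq_len, hidden) token representations
        scores = torch.einsum("lh,bth->blt", self.label_queries, H)
        alpha = scores.softmax(dim=-1)                 # one attention per label
        docs = torch.einsum("blt,bth->blh", alpha, H)  # label-specific doc vectors
        return (docs * self.label_outputs).sum(-1)     # (batch, num_labels) logits

lwan = LabelWiseAttention(hidden=768, num_labels=1000)
logits = lwan(torch.randn(2, 512, 768))  # (2, 1000)
```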

2019

SEQ^3: Differentiable Sequence-to-Sequence-to-Sequence Autoencoder for Unsupervised Abstractive Sentence Compression
Christos Baziotis | Ion Androutsopoulos | Ioannis Konstas | Alexandros Potamianos
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

Neural sequence-to-sequence models are currently the dominant approach in several natural language processing tasks, but require large parallel corpora. We present a sequence-to-sequence-to-sequence autoencoder (SEQ^3), consisting of two chained encoder-decoder pairs, with words used as a sequence of discrete latent variables. We apply the proposed model to unsupervised abstractive sentence compression, where the first and last sequences are the input and reconstructed sentences, respectively, while the middle sequence is the compressed sentence. Constraining the length of the latent word sequences forces the model to distill important information from the input. A pretrained language model, acting as a prior over the latent sequences, encourages the compressed sentences to be human-readable. Continuous relaxations enable us to sample from categorical distributions, allowing gradient-based optimization, unlike alternatives that rely on reinforcement learning. The proposed model does not require parallel text-summary pairs, achieving promising results in unsupervised sentence compression on benchmark datasets.
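
A standard way to implement such continuous relaxations is Gumbel-softmax sampling over the vocabulary, which keeps the latent word sequence differentiable; the temperature and sizes below are illustrative.

```python
import torch
import torch.nn.functional as F

def soft_latent_word(logits, tau=0.5):
    """logits: (batch, vocab_size) scores for the next latent word."""
    return F.gumbel_softmax(logits, tau=tau, hard=False)  # soft one-hot over vocab

soft_word = soft_latent_word(torch.randn(2, 30000))
embeddings = torch.randn(30000, 300)
latent_word_vec = soft_word @ embeddings  # differentiable "word embedding"
```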

A Survey on Biomedical Image Captioning
John Pavlopoulos | Vasiliki Kougia | Ion Androutsopoulos
Proceedings of the Second Workshop on Shortcomings in Vision and Language

Image captioning applied to biomedical images can assist and accelerate the diagnosis process followed by clinicians. This article is the first survey of biomedical image captioning, discussing datasets, evaluation measures, and state of the art methods. Additionally, we suggest two baselines, a weak and a stronger one; the latter outperforms all current state of the art systems on one of the datasets.

Extreme Multi-Label Legal Text Classification: A Case Study in EU Legislation
Ilias Chalkidis | Emmanouil Fergadiotis | Prodromos Malakasiotis | Nikolaos Aletras | Ion Androutsopoulos
Proceedings of the Natural Legal Language Processing Workshop 2019

We consider the task of Extreme Multi-Label Text Classification (XMTC) in the legal domain. We release a new dataset of 57k legislative documents from EURLEX, the European Union’s public document database, annotated with concepts from EUROVOC, a multidisciplinary thesaurus. The dataset is substantially larger than previous EURLEX datasets and suitable for XMTC, few-shot and zero-shot learning. Experimenting with several neural classifiers, we show that BIGRUs with self-attention outperform the current multi-label state-of-the-art methods, which employ label-wise attention. Replacing CNNs with BIGRUs in label-wise attention networks leads to the best overall performance.

Transfer Learning for Causal Sentence Detection
Manolis Kyriakakis | Ion Androutsopoulos | Artur Saudabayev | Joan Ginés i Ametllé
Proceedings of the 18th BioNLP Workshop and Shared Task

We consider the task of detecting sentences that express causality, as a step towards mining causal relations from texts. To bypass the scarcity of causal instances in relation extraction datasets, we exploit transfer learning, namely ELMO and BERT, using a bidirectional GRU with self-attention (BIGRUATT) as a baseline. We experiment with both generic public relation extraction datasets and a new biomedical causal sentence detection dataset, a subset of which we make publicly available. We find that transfer learning helps only in very small datasets. With larger datasets, BIGRUATT reaches a performance plateau, beyond which more data and transfer learning do not help.

Embedding Biomedical Ontologies by Jointly Encoding Network Structure and Textual Node Descriptors
Sotiris Kotitsas | Dimitris Pappas | Ion Androutsopoulos | Ryan McDonald | Marianna Apidianaki
Proceedings of the 18th BioNLP Workshop and Shared Task

Network Embedding (NE) methods, which map network nodes to low-dimensional feature vectors, have wide applications in network analysis and bioinformatics. Many existing NE methods rely only on network structure, overlooking other information associated with the nodes, e.g., text describing the nodes. Recent attempts to combine the two sources of information only consider local network structure. We extend NODE2VEC, a well-known NE method that considers broader network structure, to also consider textual node descriptors using recurrent neural encoders. Our method is evaluated on link prediction in two networks derived from UMLS. Experimental results demonstrate the effectiveness of the proposed approach compared to previous work.

SUM-QE: a BERT-based Summary Quality Estimation Model
Stratos Xenouleas | Prodromos Malakasiotis | Marianna Apidianaki | Ion Androutsopoulos
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

We propose SUM-QE, a novel Quality Estimation model for summarization based on BERT. The model addresses linguistic quality aspects that are only indirectly captured by content-based approaches to summary evaluation, without involving comparison with human references. SUM-QE achieves very high correlations with human ratings, outperforming simpler models addressing these linguistic aspects. Predictions of the SUM-QE model can be used for system development, and to inform users of the quality of automatically produced summaries and other types of generated text.

Neural Legal Judgment Prediction in English
Ilias Chalkidis | Ion Androutsopoulos | Nikolaos Aletras
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Legal judgment prediction is the task of automatically predicting the outcome of a court case, given a text describing the case’s facts. Previous work on using neural models for this task has focused on Chinese; only feature-based models (e.g., using bags of words and topics) have been considered in English. We release a new English legal judgment prediction dataset, containing cases from the European Court of Human Rights. We evaluate a broad variety of neural models on the new dataset, establishing strong baselines that surpass previous feature-based models in three tasks: (1) binary violation classification; (2) multi-label classification; (3) case importance prediction. We also explore if models are biased towards demographic information via data anonymization. As a side-product, we propose a hierarchical version of BERT, which bypasses BERT’s length limitation.
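
The hierarchical idea can be sketched as follows, assuming a Hugging-Face-style encoder whose output exposes last_hidden_state: each paragraph of the case facts is encoded independently within BERT's 512-token limit, and the paragraph [CLS] vectors are then combined. The GRU aggregator is an illustrative choice, not necessarily the paper's exact head.

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    def __init__(self, bert, hidden=768):
        super().__init__()
        self.bert = bert  # any BERT-like encoder (assumed HF-style output)
        self.agg = nn.GRU(hidden, hidden, batch_first=True)

    def forward(self, paragraphs):  # list of (1, seq_len) token-id tensors
        cls_vecs = [self.bert(p).last_hidden_state[:, 0] for p in paragraphs]
        doc = torch.stack(cls_vecs, dim=1)  # (1, num_paragraphs, hidden)
        _, h = self.agg(doc)
        return h[-1]  # document representation for the judgment classifier
```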

Large-Scale Multi-Label Text Classification on EU Legislation
Ilias Chalkidis | Emmanouil Fergadiotis | Prodromos Malakasiotis | Ion Androutsopoulos
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

We consider Large-Scale Multi-Label Text Classification (LMTC) in the legal domain. We release a new dataset of 57k legislative documents from EUR-LEX, annotated with ∼4.3k EUROVOC labels, which is suitable for LMTC, few- and zero-shot learning. Experimenting with several neural classifiers, we show that BIGRUs with label-wise attention perform better than other current state of the art methods. Domain-specific WORD2VEC and context-sensitive ELMO embeddings further improve performance. We also find that considering only particular zones of the documents is sufficient. This allows us to bypass BERT’s maximum text length limit and fine-tune BERT, obtaining the best results in all but zero-shot learning cases.

ConvAI at SemEval-2019 Task 6: Offensive Language Identification and Categorization with Perspective and BERT
John Pavlopoulos | Nithum Thain | Lucas Dixon | Ion Androutsopoulos
Proceedings of the 13th International Workshop on Semantic Evaluation

This paper presents the application of two strong baseline systems for toxicity detection and evaluates their performance in identifying and categorizing offensive language in social media. PERSPECTIVE is an API that serves multiple machine learning models for the improvement of online conversations, as well as a toxicity detection system trained on a wide variety of comments from platforms across the Internet. BERT is a recently popular language representation model, fine-tuned per task and achieving state-of-the-art performance in multiple NLP tasks. PERSPECTIVE performed better than BERT in detecting toxicity, but BERT was much better in categorizing the offensive type. Both baselines were ranked surprisingly high in the SEMEVAL-2019 OFFENSEVAL competition: PERSPECTIVE 12th in detecting an offensive post and BERT 11th in categorizing it. The main contribution of this paper is the assessment of two strong baselines for the identification (PERSPECTIVE) and the categorization (BERT) of offensive language with little or no additional training data.

2018

AUEB at BioASQ 6: Document and Snippet Retrieval
George Brokos | Polyvios Liosis | Ryan McDonald | Dimitris Pappas | Ion Androutsopoulos
Proceedings of the 6th BioASQ Workshop: A challenge on large-scale biomedical semantic indexing and question answering

We present AUEB’s submissions to the BioASQ 6 document and snippet retrieval tasks (parts of Task 6b, Phase A). Our models use novel extensions to deep learning architectures that operate solely over the text of the query and candidate document/snippets. Our systems scored at the top or near the top for all batches of the challenge, highlighting the effectiveness of deep learning for these tasks.

Deep Relevance Ranking Using Enhanced Document-Query Interactions
Ryan McDonald | George Brokos | Ion Androutsopoulos
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

We explore several new models for document relevance ranking, building upon the Deep Relevance Matching Model (DRMM) of Guo et al. (2016). Unlike DRMM, which uses context-insensitive encodings of terms and query-document term interactions, we inject rich context-sensitive encodings throughout our models, inspired by PACRR’s (Hui et al., 2017) convolutional n-gram matching features, but extended in several ways including multiple views of query and document inputs. We test our models on datasets from the BIOASQ question answering challenge (Tsatsaronis et al., 2015) and TREC ROBUST 2004 (Voorhees, 2005), showing they outperform BM25-based baselines, DRMM, and PACRR.

Obligation and Prohibition Extraction Using Hierarchical RNNs
Ilias Chalkidis | Ion Androutsopoulos | Achilleas Michos
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

We consider the task of detecting contractual obligations and prohibitions. We show that a self-attention mechanism improves the performance of a BILSTM classifier, the previous state of the art for this task, by allowing it to focus on indicative tokens. We also introduce a hierarchical BILSTM, which converts each sentence to an embedding, and processes the sentence embeddings to classify each sentence. Apart from being faster to train, the hierarchical BILSTM outperforms the flat one, even when the latter considers surrounding sentences, because the hierarchical model has a broader discourse view.

BioRead: A New Dataset for Biomedical Reading Comprehension
Dimitris Pappas | Ion Androutsopoulos | Haris Papageorgiou
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

2017

Deeper Attention to Abusive User Content Moderation
John Pavlopoulos | Prodromos Malakasiotis | Ion Androutsopoulos
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

Experimenting with a new dataset of 1.6M user comments from a news portal and an existing dataset of 115K Wikipedia talk page comments, we show that an RNN operating on word embeddings outperforms the previous state of the art in moderation, which used logistic regression or an MLP classifier with character or word n-grams. We also compare against a CNN operating on word embeddings and a word-list baseline. A novel, deep, classification-specific attention mechanism improves the performance of the RNN further, and can also highlight suspicious words for free, without including highlighted words in the training data. We consider both fully automatic and semi-automatic moderation.

Deep Learning for User Comment Moderation
John Pavlopoulos | Prodromos Malakasiotis | Ion Androutsopoulos
Proceedings of the First Workshop on Abusive Language Online

Experimenting with a new dataset of 1.6M user comments from a Greek news portal and existing datasets of English Wikipedia comments, we show that an RNN outperforms the previous state of the art in moderation. A deep, classification-specific attention mechanism further improves the overall performance of the RNN. We also compare against a CNN and a word-list baseline, considering both fully automatic and semi-automatic moderation.

Improved Abusive Comment Moderation with User Embeddings
John Pavlopoulos | Prodromos Malakasiotis | Juli Bakagianni | Ion Androutsopoulos
Proceedings of the 2017 EMNLP Workshop: Natural Language Processing meets Journalism

Experimenting with a dataset of approximately 1.6M user comments from a Greek sports news portal, we explore how a state-of-the-art RNN-based moderation method can be improved by adding user embeddings, user type embeddings, user biases, or user type biases. We observe improvements in all cases, with user embeddings leading to the biggest performance gains.

2016

Using Centroids of Word Embeddings and Word Mover’s Distance for Biomedical Document Retrieval in Question Answering
Georgios-Ioannis Brokos | Prodromos Malakasiotis | Ion Androutsopoulos
Proceedings of the 15th Workshop on Biomedical Natural Language Processing

SemEval-2016 Task 5: Aspect Based Sentiment Analysis
Maria Pontiki | Dimitris Galanis | Haris Papageorgiou | Ion Androutsopoulos | Suresh Manandhar | Mohammad AL-Smadi | Mahmoud Al-Ayyoub | Yanyan Zhao | Bing Qin | Orphée De Clercq | Véronique Hoste | Marianna Apidianaki | Xavier Tannier | Natalia Loukachevitch | Evgeniy Kotelnikov | Nuria Bel | Salud María Jiménez-Zafra | Gülşen Eryiğit
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)

aueb.twitter.sentiment at SemEval-2016 Task 4: A Weighted Ensemble of SVMs for Twitter Sentiment Analysis
Stavros Giorgis | Apostolos Rousas | John Pavlopoulos | Prodromos Malakasiotis | Ion Androutsopoulos
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)

AUEB-ABSA at SemEval-2016 Task 5: Ensembles of Classifiers and Embeddings for Aspect Based Sentiment Analysis
Dionysios Xenos | Panagiotis Theodorakakos | John Pavlopoulos | Prodromos Malakasiotis | Ion Androutsopoulos
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)

2015

SemEval-2015 Task 12: Aspect Based Sentiment Analysis
Maria Pontiki | Dimitris Galanis | Haris Papageorgiou | Suresh Manandhar | Ion Androutsopoulos
Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015)

2014

Aspect Term Extraction for Sentiment Analysis: New Datasets, New Evaluation Measures and an Improved Unsupervised Method
John Pavlopoulos | Ion Androutsopoulos
Proceedings of the 5th Workshop on Language Analysis for Social Media (LASM)

SemEval-2014 Task 4: Aspect Based Sentiment Analysis
Maria Pontiki | Dimitris Galanis | John Pavlopoulos | Harris Papageorgiou | Ion Androutsopoulos | Suresh Manandhar
Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014)

Multi-Granular Aspect Aggregation in Aspect-Based Sentiment Analysis
John Pavlopoulos | Ion Androutsopoulos
Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics

2013

Using Integer Linear Programming in Concept-to-Text Generation to Produce More Compact Texts
Gerasimos Lampouras | Ion Androutsopoulos
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Using Integer Linear Programming for Content Selection, Lexicalization, and Aggregation to Produce Compact Texts from OWL Ontologies
Gerasimos Lampouras | Ion Androutsopoulos
Proceedings of the 14th European Workshop on Natural Language Generation

2012

Extractive Multi-Document Summarization with Integer Linear Programming and Support Vector Regression
Dimitrios Galanis | Gerasimos Lampouras | Ion Androutsopoulos
Proceedings of COLING 2012

2011

A New Sentence Compression Dataset and Its Use in an Abstractive Generate-and-Rank Sentence Compressor
Dimitrios Galanis | Ion Androutsopoulos
Proceedings of the UCNLG+Eval: Language Generation and Evaluation Workshop

A Generate and Rank Approach to Sentence Paraphrasing
Prodromos Malakasiotis | Ion Androutsopoulos
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

2010

An extractive supervised two-stage method for sentence compression
Dimitrios Galanis | Ion Androutsopoulos
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

2009

Finding Short Definitions of Terms on Web Pages
Gerasimos Lampouras | Ion Androutsopoulos
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

An Open-Source Natural Language Generator for OWL Ontologies and its Use in Protege and Second Life
Dimitrios Galanis | George Karakatsiotis | Gerasimos Lampouras | Ion Androutsopoulos
Proceedings of the Demonstrations Session at EACL 2009

Adaptive Natural Language Interaction
Stasinos Konstantopoulos | Athanasios Tegos | Dimitrios Bilidas | Ion Androutsopoulos | Gerasimos Lampouras | Colin Matheson | Olivier Deroo | Prodromos Malakasiotis
Proceedings of the Demonstrations Session at EACL 2009

2007

Learning Textual Entailment using SVMs and String Similarity Measures
Prodromos Malakasiotis | Ion Androutsopoulos
Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing

Generating Multilingual Descriptions from Linguistically Annotated OWL Ontologies: the NaturalOWL System
Dimitrios Galanis | Ion Androutsopoulos
Proceedings of the Eleventh European Workshop on Natural Language Generation (ENLG 07)

2005

Exploiting OWL Ontologies in the Multilingual Generation of Object Descriptions
Ion Androutsopoulos | Spyros Kallonis | Vangelis Karkaletsis
Proceedings of the Tenth European Workshop on Natural Language Generation (ENLG-05)

A Practically Unsupervised Learning Method to Identify Single-Snippet Answers to Definition Questions on the Web
Ion Androutsopoulos | Dimitrios Galanis
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing

2004

Learning to Identify Single-Snippet Answers to Definition Questions
Spyridoula Miliaraki | Ion Androutsopoulos
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics

2003

Learning to Order Facts for Discourse Planning in Natural Language Generation
Aggeliki Dimitromanolaki | Ion Androutsopoulos
Proceedings of the 9th European Workshop on Natural Language Generation (ENLG-2003) at EACL 2003

2002

Ellogon: A New Text Engineering Platform
Georgios Petasis | Vangelis Karkaletsis | Georgios Paliouras | Ion Androutsopoulos | Constantine D. Spyropoulos
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)

2001

Stacking Classifiers for Anti-Spam Filtering of E-Mail
Georgios Sakkis | Ion Androutsopoulos | Georgios Paliouras | Vangelis Karkaletsis | Constantine D. Spyropoulos | Panagiotis Stamatopoulos
Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing

2000

Selectional Restrictions in HPSG
Ion Androutsopoulos | Robert Dale
COLING 2000 Volume 1: The 18th International Conference on Computational Linguistics
