Alberto Abad


2022

A new European Portuguese corpus for the study of Psychosis through speech analysis
Maria Forjó | Daniel Neto | Alberto Abad | H. Sofia Pinto | Joaquim Gago
Proceedings of the Thirteenth Language Resources and Evaluation Conference

Psychosis is a clinical syndrome characterized by the presence of symptoms such as hallucinations, thought disorder and disorganized speech. Several studies have used machine learning, combined with speech and natural language processing methods, to aid in the diagnosis of this condition. This paper describes the creation of the first European Portuguese corpus for the identification of speech characteristics of psychosis, which contains samples from 92 participants: 56 controls and 36 medicated individuals diagnosed with psychosis. The corpus was used in a set of experiments that identified the most promising feature set for the classification task: the combination of acoustic and speech metric features. Several classifiers were implemented to determine which ones performed best for each task and feature set. The best results on the entire corpus were achieved when identifying individuals with a Multi-Layer Perceptron classifier, which reached 87.5% accuracy. In the gender-dependent setting, the overall best results were 90.9% and 82.9% accuracy for female and male subjects, respectively. Lastly, the experiments led us to conjecture that spontaneous speech presents more identifiable characteristics than read speech for differentiating healthy individuals from patients diagnosed with psychosis.
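
The abstract leaves the implementation unspecified; purely as an illustration, the following minimal Python sketch shows the general idea of feature-level fusion of acoustic and speech-metric features followed by a Multi-Layer Perceptron classifier. The feature dimensions, random placeholder data and hyperparameters are assumptions, not the authors' setup.

    # Minimal sketch (not the authors' code): psychosis vs. control classification
    # with an MLP over combined acoustic and speech-metric features.
    # Feature extraction is not shown; random arrays stand in for real features.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_subjects, n_acoustic, n_metrics = 92, 88, 12       # hypothetical feature sizes
    X_acoustic = rng.normal(size=(n_subjects, n_acoustic))  # e.g. prosodic/spectral features
    X_metrics = rng.normal(size=(n_subjects, n_metrics))    # e.g. speech rate, pause ratio
    y = rng.integers(0, 2, size=n_subjects)                 # 1 = psychosis, 0 = control

    # Early (feature-level) fusion of the two feature groups, then an MLP.
    X = np.hstack([X_acoustic, X_metrics])
    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0))
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())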

Multilingual Transfer Learning for Children Automatic Speech Recognition
Thomas Rolland | Alberto Abad | Catia Cucchiarini | Helmer Strik
Proceedings of the Thirteenth Language Resources and Evaluation Conference

Despite recent advances in automatic speech recognition (ASR), the recognition of children’s speech remains a significant challenge. This is mainly due to the high acoustic variability and the limited amount of available training data. The latter problem is particularly evident in languages other than English, which are usually less-resourced. In the current paper, we address children’s ASR in a number of less-resourced languages by combining several small children’s speech corpora from these languages. In particular, we address the following research question: Does a novel two-step training strategy, in which multilingual learning is followed by language-specific transfer learning, outperform conventional single language/task training for children’s speech, as well as multilingual and transfer learning alone? Based on previous experimental results with English, we hypothesize that multilingual learning provides a better generalization of the underlying characteristics of children’s speech. Our results provide a positive answer to our research question, showing that transfer learning on top of a multilingual model for an unseen language outperforms conventional single-language training.
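
As an illustration of the two-step strategy described above (not the paper's implementation), the sketch below first trains a single acoustic model on frames pooled from several languages and then fine-tunes it on the target language. The tiny model, the dummy data and the learning rates are hypothetical placeholders.

    # Step 1: multilingual training on pooled children's speech from several languages.
    # Step 2: language-specific transfer learning on the (smaller) target-language corpus.
    import torch
    import torch.nn as nn

    n_feats, n_phones = 40, 100              # e.g. filterbank features, phone-like targets
    model = nn.Sequential(nn.Linear(n_feats, 256), nn.ReLU(), nn.Linear(256, n_phones))
    loss_fn = nn.CrossEntropyLoss()

    def train(model, feats, targets, lr, steps):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = loss_fn(model(feats), targets)
            loss.backward()
            opt.step()

    # Step 1: frames pooled from the multilingual corpora (random placeholders here).
    pooled_feats = torch.randn(4096, n_feats)
    pooled_targets = torch.randint(0, n_phones, (4096,))
    train(model, pooled_feats, pooled_targets, lr=1e-3, steps=200)

    # Step 2: fine-tune on the target language only, typically with a lower learning
    # rate so that the multilingual initialization is largely preserved.
    target_feats = torch.randn(512, n_feats)
    target_targets = torch.randint(0, n_phones, (512,))
    train(model, target_feats, target_targets, lr=1e-4, steps=100)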

2017

Fusion of Simple Models for Native Language Identification
Fabio Kepler | Ramon F. Astudillo | Alberto Abad
Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications

In this paper we describe the approaches we explored for the 2017 Native Language Identification shared task. We focused on simple word and sub-word units, avoiding heavy use of hand-crafted features. Following recent trends, we explored linear and neural network models to compensate for the lack of rich features. Initial efforts yielded F1-scores of 82.39% and 83.77% on the development and test sets of the fusion track, and were officially submitted to the task as team L2F. After the task was closed, we carried out further experiments and relied on a late fusion strategy for combining our simple proposed approaches with modifications of the baselines provided by the task. As expected, the i-vector-based sub-system dominates the performance of the system combinations and is the major contributor to our final scores. Our best combined system achieves 90.1% and 90.2% F1-score on the development and test sets of the fusion track, respectively.
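
To make the late fusion idea concrete, here is a minimal sketch (not the submitted system) in which per-class scores from independently trained sub-systems are combined with a weighted sum. The sub-system names, the random placeholder scores and the fusion weights are assumptions for illustration only.

    import numpy as np

    n_docs, n_langs = 4, 11                  # the NLI task distinguishes 11 L1 classes
    rng = np.random.default_rng(0)

    def softmax(x):
        e = np.exp(x - x.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    # Placeholder per-document class posteriors from three hypothetical sub-systems.
    posteriors = {
        "word_linear": softmax(rng.normal(size=(n_docs, n_langs))),
        "subword_nn":  softmax(rng.normal(size=(n_docs, n_langs))),
        "ivector":     softmax(rng.normal(size=(n_docs, n_langs))),
    }
    weights = {"word_linear": 0.2, "subword_nn": 0.2, "ivector": 0.6}  # e.g. tuned on dev

    # Late fusion: weighted sum of posteriors, then pick the highest-scoring class.
    fused = sum(w * posteriors[name] for name, w in weights.items())
    print("Predicted L1 indices:", fused.argmax(axis=1))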

2016

The SpeDial datasets: datasets for Spoken Dialogue Systems analytics
José Lopes | Arodami Chorianopoulou | Elisavet Palogiannidi | Helena Moniz | Alberto Abad | Katerina Louka | Elias Iosif | Alexandros Potamianos
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

The SpeDial consortium is sharing two datasets that were used during the SpeDial project. By sharing them with the community, we are providing a resource that can shorten the development cycle of new Spoken Dialogue Systems (SDSs). The datasets include audio and several manual annotations, namely miscommunication, anger, satisfaction, repetition, gender and task success. The datasets were created with data from real users and cover two different languages: English and Greek. Detectors for miscommunication, anger and gender were trained for both systems. The detectors were particularly accurate in tasks where humans show high annotator agreement, such as miscommunication and gender. As expected, given the subjectivity of the task, the anger detector had a less satisfactory performance. Nevertheless, we showed that the automatic detection of situations that can lead to problems in SDSs is possible and is a promising direction for shortening the SDS development cycle.
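
Since the abstract links detector accuracy to inter-annotator agreement, a quick way to quantify that agreement is Cohen's kappa. The sketch below is an illustration only; the two label sequences are hypothetical annotator decisions for a binary label such as miscommunication, not data from the SpeDial corpora.

    from sklearn.metrics import cohen_kappa_score

    annotator_a = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]   # placeholder per-turn labels
    annotator_b = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]
    print("Cohen's kappa:", round(cohen_kappa_score(annotator_a, annotator_b), 2))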

SPA: Web-based Platform for easy Access to Speech Processing Modules
Fernando Batista | Pedro Curto | Isabel Trancoso | Alberto Abad | Jaime Ferreira | Eugénio Ribeiro | Helena Moniz | David Martins de Matos | Ricardo Ribeiro
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

This paper presents SPA, a web-based Speech Analytics platform that integrates several speech processing modules and makes it possible to use them through the web. It was developed with the aim of facilitating the use of the modules without the need to know about software dependencies and specific configurations. Apart from being accessed through a web browser, the platform also provides a REST API for easy integration with other applications. The platform is flexible, scalable, provides authentication for access restrictions, and was developed taking into consideration the time and effort of adding new services. The platform is still being improved, but it already integrates a considerable number of audio and text processing modules, including: automatic transcription, speech disfluency classification, emotion detection, dialog act recognition, age and gender classification, non-nativeness detection, hyper-articulation detection, and two external modules for feature extraction and DTMF detection. This paper describes the SPA architecture, presents the already integrated modules, and provides a detailed description of the most recently integrated ones.
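
As an illustration of how a client could use such a REST API, here is a minimal sketch. The base URL, route and field names below are hypothetical placeholders; the actual SPA endpoints are described in the paper.

    import requests

    SPA_URL = "http://spa.example.org/api"             # hypothetical base URL

    # Submit an audio file to a (hypothetical) transcription route.
    with open("utterance.wav", "rb") as audio:
        response = requests.post(f"{SPA_URL}/transcription",
                                 files={"audio": audio},
                                 data={"language": "pt"})
    response.raise_for_status()
    print(response.json())                             # e.g. transcript and timestamps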

The DIRHA Portuguese Corpus: A Comparison of Home Automation Command Detection and Recognition in Simulated and Real Data.
Miguel Matos | Alberto Abad | António Serralheiro
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

In this paper, we describe a new corpus, named DIRHA-L2F RealCorpus, composed of typical home automation speech interactions in European Portuguese that was recorded by INESC-ID’s Spoken Language Systems Laboratory (L2F) to support the activities of the Distant-speech Interaction for Robust Home Applications (DIRHA) EU-funded project. The corpus is a multi-microphone and multi-room database of real continuous audio sequences containing read phonetically rich sentences, read and spontaneous keyword activation sentences, and read and spontaneous home automation commands. The background noise conditions are controlled and randomly recreated with noises typically found in home environments. Experimental validation on this corpus is reported and compared with the results obtained on a simulated corpus, using a fully automated speech processing pipeline for two fundamental automatic speech recognition tasks of typical ‘always-listening’ home-automation scenarios: system activation and voice command recognition. Considering the results on both corpora, the presence of overlapping voice-like noise emerges as the main problem: the simulated sequences contain concurrent speakers, which in general makes that corpus more challenging, while performance on the real sequences drops drastically when the TV or radio is on.
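
For readers unfamiliar with the ‘always-listening’ setup, the sketch below shows only the basic activation logic, namely that the system acts on a command only after an activation keyword is spotted in the recognizer output. The keyword list and the example word sequences are hypothetical, not taken from the corpus.

    KEYWORDS = {"atende", "responde"}          # hypothetical activation keywords

    def is_activated(recognized_words):
        """Return True if any activation keyword occurs in the ASR hypothesis."""
        return any(word.lower() in KEYWORDS for word in recognized_words)

    print(is_activated(["por", "favor", "atende"]))    # True: pass on to command recognition
    print(is_activated(["bom", "dia"]))                # False: stay idle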

2015

Predicting disordered speech comprehensibility from Goodness of Pronunciation scores
Lionel Fontan | Thomas Pellegrini | Julia Olcoz | Alberto Abad
Proceedings of SLPAT 2015: 6th Workshop on Speech and Language Processing for Assistive Technologies

Speech and language technologies for the automatic monitoring and training of cognitive functions
Anna Pompili | Cristiana Amorim | Alberto Abad | Isabel Trancoso
Proceedings of SLPAT 2015: 6th Workshop on Speech and Language Processing for Assistive Technologies

2014

The DIRHA simulated corpus
Luca Cristoforetti | Mirco Ravanelli | Maurizio Omologo | Alessandro Sosi | Alberto Abad | Martin Hagmueller | Petros Maragos
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

This paper describes a multi-microphone, multi-language acoustic corpus being developed under the EC project Distant-speech Interaction for Robust Home Applications (DIRHA). The corpus is composed of several sequences obtained by convolving dry acoustic events with more than 9000 impulse responses measured in a real apartment equipped with 40 microphones. The acoustic events include in-domain sentences of different typologies uttered by native speakers in four different languages, as well as non-speech events representing typical domestic noises. To increase the realism of the resulting corpus, background noises were recorded in the real home environment and then added to the generated sequences. The purpose of this work is to describe the simulation procedure and the data sets that were created and used to derive the corpus. The corpus contains signals with different characteristics, making it suitable for various multi-microphone signal processing and distant speech recognition tasks.
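
The core simulation step described above, convolving a dry recording with a measured room impulse response and adding recorded background noise, can be sketched as follows. This is an illustration only, not the project's pipeline: the signals are random placeholders, the toy impulse response is synthetic, and the target SNR is an assumption.

    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(0)
    dry = rng.normal(size=16000)                # placeholder dry utterance (1 s @ 16 kHz)
    rir = rng.normal(size=4000) * np.exp(-np.linspace(0, 8, 4000))  # toy impulse response
    noise = rng.normal(size=16000 + 4000 - 1)   # placeholder home background noise

    # Convolution produces the distant-microphone (reverberant) version of the event.
    reverberant = fftconvolve(dry, rir)

    # Add background noise scaled to a chosen signal-to-noise ratio.
    target_snr_db = 10.0
    sig_power = np.mean(reverberant ** 2)
    noise_power = np.mean(noise ** 2)
    scale = np.sqrt(sig_power / (noise_power * 10 ** (target_snr_db / 10)))
    simulated = reverberant + scale * noise     # one channel of a simulated sequence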

2013

Meet EDGAR, a tutoring agent at MONSERRATE
Pedro Fialho | Luísa Coheur | Sérgio Curto | Pedro Cláudio | Ângela Costa | Alberto Abad | Hugo Meinedo | Isabel Trancoso
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations

2011

An on-line system for remote treatment of aphasia
Anna Pompili | Alberto Abad | Isabel Trancoso | José Fonseca | Isabel Pavão Martins | Gabriela Leal | Luisa Farrajota
Proceedings of the Second Workshop on Speech and Language Processing for Assistive Technologies