Junichi Yamagishi


2023

Revisiting Pathologies of Neural Models under Input Reduction
Canasai Kruengkrai | Junichi Yamagishi
Findings of the Association for Computational Linguistics: ACL 2023

We revisit the question of why neural models tend to produce high-confidence predictions on inputs that appear nonsensical to humans. Previous work has suggested that the models fail to assign low probabilities to such inputs due to model overconfidence. We evaluate various regularization methods on fact verification benchmarks and find that this problem persists even with well-calibrated or underconfident models, suggesting that overconfidence is not the only underlying cause. We also find that regularizing the models with reduced examples helps improve interpretability but comes at the cost of miscalibration. We show that although these reduced examples are incomprehensible to humans, they can contain valid statistical patterns in the dataset that the model exploits.
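The reduction procedure at issue is iterative input reduction (Feng et al., 2018): the least important token is removed repeatedly for as long as the model's prediction survives. A minimal sketch, assuming a hypothetical predict(tokens) classifier that returns a label and per-label probabilities; the leave-one-out importance score is one common choice, not necessarily the paper's exact setup:

    def reduce_input(tokens, predict):
        # Iteratively drop the least important token while the original
        # prediction survives. `predict` is a hypothetical stand-in for a
        # real classifier: it returns (label, per-label probabilities).
        label, probs = predict(tokens)
        while len(tokens) > 1:
            # leave-one-out importance: drop in the original label's probability
            drops = [probs[label] - predict(tokens[:j] + tokens[j + 1:])[1][label]
                     for j in range(len(tokens))]
            j = min(range(len(tokens)), key=drops.__getitem__)
            candidate = tokens[:j] + tokens[j + 1:]
            new_label, new_probs = predict(candidate)
            if new_label != label:  # stop once the prediction would flip
                break
            tokens, probs = candidate, new_probs
        return tokens  # often unreadable to humans, yet confidently classified

The abstract's point is that calibrating predict (e.g., with regularization that makes it well-calibrated or even underconfident) does not stop such reduced inputs from receiving high confidence.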

XFEVER: Exploring Fact Verification across Languages
Yi-Chen Chang | Canasai Kruengkrai | Junichi Yamagishi
Proceedings of the 35th Conference on Computational Linguistics and Speech Processing (ROCLING 2023)

2022

Outlier-Aware Training for Improving Group Accuracy Disparities
Li-Kuang Chen | Canasai Kruengkrai | Junichi Yamagishi
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing: Student Research Workshop

Methods addressing spurious correlations such as Just Train Twice (JTT, Liu et al. 2021) involve reweighting a subset of the training set to maximize the worst-group accuracy. However, the reweighted set of examples may contain unlearnable examples that hamper the model’s learning. We propose mitigating this by detecting outliers in the training set and removing them before reweighting. Our experiments show that our method achieves accuracy competitive with or better than JTT and can detect and remove annotation errors in the subset that JTT reweights.
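JTT trains a first-stage identification model, then retrains with the examples that model got wrong upweighted; the modification proposed here removes suspected outliers from that error set first. A rough sketch of the reweighting step, with a simple loss-percentile outlier rule standing in for whatever detector the paper actually uses (names and thresholds are illustrative):

    import numpy as np

    def jtt_weights_with_outlier_removal(losses, errors,
                                         upweight=6.0, outlier_pct=98.0):
        # losses : per-example losses from the first-stage identification model
        # errors : True where that model misclassified the example
        losses = np.asarray(losses, dtype=float)
        errors = np.asarray(errors, dtype=bool)
        outlier = losses > np.percentile(losses, outlier_pct)
        weights = np.ones(len(losses))
        weights[errors & ~outlier] = upweight  # upweight the cleaned error set
        weights[outlier] = 0.0                 # drop likely unlearnable examples
        return weights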

Mitigating the Diminishing Effect of Elastic Weight Consolidation
Canasai Kruengkrai | Junichi Yamagishi
Proceedings of the 29th International Conference on Computational Linguistics

Elastic weight consolidation (EWC, Kirkpatrick et al. 2017) is a promising approach to addressing catastrophic forgetting in sequential training. We find that the effect of EWC can diminish when fine-tuning large-scale pre-trained language models on different datasets. We present two simple objective functions to mitigate this problem by rescaling the components of EWC. Experiments on natural language inference and fact-checking tasks indicate that our methods require much smaller values for the trade-off parameters to achieve results comparable to EWC.
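For reference, the standard EWC objective adds a quadratic penalty that anchors each parameter to its previous-task value, weighted by its estimated Fisher information. A PyTorch-style sketch of that baseline penalty (the paper's rescaled variants are not reproduced here):

    import torch

    def ewc_penalty(model, fisher, theta_star, lam=1.0):
        # (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2
        # `fisher` and `theta_star` are dicts of tensors saved after training
        # on the previous task, keyed by parameter name.
        penalty = torch.zeros(())
        for name, p in model.named_parameters():
            if name in fisher:
                penalty = penalty + (fisher[name] * (p - theta_star[name]).pow(2)).sum()
        return 0.5 * lam * penalty

    # during fine-tuning on the new dataset:
    # loss = task_loss + ewc_penalty(model, fisher, theta_star, lam)

The paper's observation is that with large pre-trained language models this penalty's effect can diminish, and that rescaling its components lets much smaller trade-off values lam match EWC's results.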

2021

A Multi-Level Attention Model for Evidence-Based Fact Checking
Canasai Kruengkrai | Junichi Yamagishi | Xin Wang
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

2020

Viable Threat on News Reading: Generating Biased News Using Natural Language Models
Saurabh Gupta | Hong Huy Nguyen | Junichi Yamagishi | Isao Echizen
Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science

Recent advancements in natural language generation have raised serious concerns. High-performance language models are widely used for language generation tasks because they are able to produce fluent and meaningful sentences. These models are already being used to create fake news. They can also be exploited to generate biased news, which can then be used to attack news aggregators, change readers’ behavior, and influence their bias. In this paper, we use a threat model to demonstrate that publicly available language models can reliably generate biased news content from an original news article given as input. We also show that a large number of high-quality biased news articles can be generated using controllable text generation. A subjective evaluation with 80 participants demonstrated that the generated biased news is generally fluent, and a bias evaluation with 24 participants demonstrated that the bias (left or right) is usually evident in the generated articles and can be easily identified.
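The threat assumes nothing beyond off-the-shelf generation. As an illustration only, here is prefix-conditioned sampling with a public GPT-2 checkpoint via Hugging Face Transformers; the "<bias=left>" control token is a hypothetical stand-in for the paper's actual conditioning scheme:

    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    # Hypothetical control prefix plus the opening of an original article.
    prompt = "<bias=left> City council votes on the new housing plan."
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    output = model.generate(input_ids, max_length=120, do_sample=True,
                            top_p=0.9, pad_token_id=tokenizer.eos_token_id)
    print(tokenizer.decode(output[0], skip_special_tokens=True))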

2018

Identifying Computer-Translated Paragraphs using Coherence Features
Hoang-Quoc Nguyen-Son | Huy H. Nguyen | Ngoc-Dung T. Tieu | Junichi Yamagishi | Isao Echizen
Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation

2016

Continuous Expressive Speaking Styles Synthesis based on CVSM and MR-HMM
Jaime Lorenzo-Trueba | Roberto Barra-Chicote | Ascension Gallardo-Antolin | Junichi Yamagishi | Juan M. Montero
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

This paper introduces a continuous system capable of automatically producing the most adequate speaking style for synthesizing a desired target text. This is achieved by jointly modeling the acoustic and lexical parameters of the speaker models, adapting the CVSM projection of the training texts with MR-HMM techniques. We therefore expect that, as long as sufficient variety is available in the training data, a continuous lexical space can be mapped onto a continuous acoustic space. The proposed continuous text-to-speech system was assessed in a perceptual evaluation against traditional approaches to the task. The system proved capable of conveying the correct expressiveness (average adequacy of 3.6) with an expressive strength comparable to oracle traditional expressive speech synthesis (average of 3.6), although with a drop in speech quality, mainly due to the semi-continuous nature of the data (average quality of 2.9). This means that the proposed system can improve on traditional neutral systems without requiring any additional user interaction.

2015

A Comparison of Manual and Automatic Voice Repair for Individual with Vocal Disabilities
Christophe Veaux | Junichi Yamagishi | Simon King
Proceedings of SLPAT 2015: 6th Workshop on Speech and Language Processing for Assistive Technologies

2013

Towards Personalised Synthesised Voices for Individuals with Vocal Disabilities: Voice Banking and Reconstruction
Christophe Veaux | Junichi Yamagishi | Simon King
Proceedings of the Fourth Workshop on Speech and Language Processing for Assistive Technologies