Gábor Berend


2023

Masked Latent Semantic Modeling: an Efficient Pre-training Alternative to Masked Language Modeling
Gábor Berend
Findings of the Association for Computational Linguistics: ACL 2023

In this paper, we propose an alternative to the classic masked language modeling (MLM) pre-training paradigm, in which the objective is altered from reconstructing the exact identity of randomly selected masked subwords to predicting their latent semantic properties. We coin the proposed pre-training technique masked latent semantic modeling (MLSM for short). In order to make the contextualized determination of the latent semantic properties of the masked subwords possible, we rely on an unsupervised technique based on sparse coding. Our experimental results reveal that models pre-trained via MLSM consistently and significantly outperform vanilla MLM pre-training and other strong baselines after fine-tuning.
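
A minimal sketch of an MLSM-style loss in PyTorch may help illustrate the objective described above; the tensor shapes, the dictionary, and the way the sparse targets are obtained are assumptions for the sake of the example, not the exact published implementation.

import torch
import torch.nn.functional as F

def mlsm_loss(hidden_states, mask_positions, dictionary, sparse_targets):
    """hidden_states: (batch, seq, dim) encoder outputs;
    mask_positions: boolean tensor marking the masked positions;
    dictionary: (n_atoms, dim) tensor of semantic basis vectors;
    sparse_targets: (n_masked, n_atoms) row-normalized sparse codes of the
    unmasked tokens, precomputed with a sparse coding step."""
    # project the masked positions' hidden states onto the latent atoms
    logits = hidden_states[mask_positions] @ dictionary.T   # (n_masked, n_atoms)
    log_probs = F.log_softmax(logits, dim=-1)
    # match the predicted distribution to the sparse-coding-derived targets
    # instead of predicting the exact subword identity
    return F.kl_div(log_probs, sparse_targets, reduction="batchmean")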

Better Together: Jointly Using Masked Latent Semantic Modeling and Masked Language Modeling for Sample Efficient Pre-training
Gábor Berend
Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning

SzegedAI at SemEval-2023 Task 1: Applying Quasi-Symbolic Representations in Visual Word Sense Disambiguation
Gábor Berend
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)

In this paper, we introduce our submission to the visual word sense disambiguation (vWSD) task. Our proposed solution operates by deriving quasi-symbolic semantic categories from the hidden representations of multi-modal text-image encoders. Our results are mixed: we achieve a substantial boost in performance on a validation set, but we experienced detrimental effects when evaluating on the actual test set. Our positive results on the validation set confirm the validity of the quasi-symbolic features, whereas our results on the test set reveal that the proposed technique could not cope with the markedly different distribution of the test data.
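
A hedged sketch of the kind of pipeline the abstract outlines: encoding text with a multi-modal encoder and reading off quasi-symbolic categories as the nonzero coordinates of a sparse code. The model name, the dictionary file, and the hyperparameters are illustrative assumptions, not the submitted system.

import numpy as np
from sklearn.decomposition import sparse_encode
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

inputs = processor(text=["a bank of a river"], return_tensors="pt", padding=True)
text_emb = model.get_text_features(**inputs).detach().numpy()   # (1, 512)

# hypothetical artifact: an overcomplete dictionary learned beforehand
# on a large sample of the encoder's embeddings
D = np.load("clip_dictionary.npy")                              # (n_atoms, 512)
codes = sparse_encode(text_emb, D, algorithm="lasso_lars", alpha=0.05)
quasi_symbolic_categories = np.flatnonzero(codes[0])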

2022

Combating the Curse of Multilinguality in Cross-Lingual WSD by Aligning Sparse Contextualized Word Representations
Gábor Berend
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

In this paper, we advocate for using large pre-trained monolingual language models in cross-lingual zero-shot word sense disambiguation (WSD), coupled with a contextualized mapping mechanism. We also report rigorous experiments that illustrate the effectiveness of employing sparse contextualized word representations obtained via a dictionary learning procedure. Our experimental results demonstrate that the above modifications yield a significant improvement of nearly 6.5 points in average F-score (from 62.0 to 68.5) over a set of 17 typologically diverse target languages. We release our source code for replicating our experiments at https://github.com/begab/sparsity_makes_sense.
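
The sparse contextualized word representations mentioned above can be illustrated with a short dictionary learning sketch; the scikit-learn calls stand in for whichever sparse coding implementation the paper actually uses, and the sizes are placeholders.

import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

X = np.random.randn(500, 64)    # placeholder for real contextual embeddings

# an overcomplete dictionary: more atoms than embedding dimensions
D = DictionaryLearning(n_components=256, alpha=1.0, max_iter=20).fit(X).components_

# sparse codes: most of the 256 coordinates of each token are exactly zero
S = sparse_encode(X, D, algorithm="lasso_lars", alpha=1.0)
active_atoms = [np.flatnonzero(row) for row in S]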

Codenames as a Game of Co-occurrence Counting
Réka Cserháti | Istvan Kollath | András Kicsi | Gábor Berend
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics

Codenames is a popular board game in which knowledge and cooperation between players play an important role. The task of the player acting as spymaster is to find words (clues) that a teammate finds related to as many of some given words as possible, but not to other specified words. This is a hard challenge even with today’s advanced language technology methods. In our study, we create spymaster agents using four types of relatedness measures that require only a raw text corpus to produce. These include newly introduced measures based on co-occurrences, which outperform FastText cosine similarity on gold-standard relatedness data. To generate clues in Codenames, we combine the relatedness measures with four different scoring functions, for two languages, English and Hungarian. For testing, we collect the decisions of human guessers in an online game, and our configurations outperform previous agents among methods using only raw corpora.
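
Since the relatedness measures require only a raw corpus, a PMI-style co-occurrence score of the kind alluded to can be sketched in a few lines; the window size and scoring details here are illustrative assumptions, not the paper's exact measures.

import math
from collections import Counter

def pmi_table(sentences, window=10):
    """sentences: iterable of token lists; returns a PMI scorer for word pairs."""
    word_counts, pair_counts, total = Counter(), Counter(), 0
    for tokens in sentences:
        total += len(tokens)
        word_counts.update(tokens)
        for i, w in enumerate(tokens):
            for v in tokens[i + 1 : i + 1 + window]:
                pair_counts[frozenset((w, v))] += 1

    def pmi(w, v):
        joint = pair_counts.get(frozenset((w, v)), 0)
        if joint == 0 or w not in word_counts or v not in word_counts:
            return float("-inf")
        return math.log(joint * total / (word_counts[w] * word_counts[v]))

    return pmi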

2021

SzegedAI at SemEval-2021 Task 2: Zero-shot Approach for Multilingual and Cross-lingual Word-in-Context Disambiguation
Gábor Berend
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)

In this paper, we introduce the system with which we participated in the SemEval 2021 shared task on multilingual and cross-lingual word-in-context disambiguation. In our experiments, we investigated the possibility of using an all-words fine-grained word sense disambiguation system trained purely on sense-annotated English data, and of predicting the semantic equivalence of words in context based on the similarity of the ranked lists of (English) WordNet synsets returned for the target words. We handled the multi- and cross-lingual aspects of the shared task by applying a multilingual transformer for encoding texts written in Arabic, English, French, Russian, or Chinese. While our results lag behind the top-scoring submissions, our approach has the benefit of providing not only a binary flag for whether two words have the same meaning in their contexts, but also a more tangible output in the form of a ranked list of (English) WordNet synsets, irrespective of the language of the input texts. As our framework is designed to be as generic as possible, it can serve as a baseline for essentially any language supported by the multilingual transformer architecture employed, even in the absence of any additional language-specific training data.
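
A minimal sketch of the comparison step described above, assuming a WSD system has already produced a ranked synset list for the target word in each of the two contexts; the cutoff and threshold values are illustrative.

def ranked_overlap(synsets_a, synsets_b, k=5):
    """synsets_a/b: synset identifiers ranked by a WSD system for the target
    word in each of the two contexts."""
    top_a, top_b = set(synsets_a[:k]), set(synsets_b[:k])
    return len(top_a & top_b) / k

def same_meaning(synsets_a, synsets_b, threshold=0.4):
    # the binary flag required by the shared task; threshold is a guess
    return ranked_overlap(synsets_a, synsets_b) >= threshold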

Identifying the Importance of Content Overlap for Better Cross-lingual Embedding Mappings
Réka Cserháti | Gábor Berend
Proceedings of the 1st Workshop on Multilingual Representation Learning

In this work, we analyze the performance and properties of cross-lingual word embedding models created by mapping-based alignment methods. We use several measures of corpus and embedding similarity to predict BLI scores of cross-lingual embedding mappings over three types of corpora, three embedding methods, and 55 language pairs. Our experimental results corroborate that, rather than mere size, it is the amount of common content in the training corpora that is essential. This phenomenon manifests in two ways: i) despite the smaller corpus sizes, using only the comparable parts of Wikipedia for training the monolingual embedding spaces to be mapped is often more efficient than relying on the entire contents of Wikipedia, and ii) the smaller, and hence less diversified, Spanish Wikipedia almost always works much better as a training corpus for bilingual mappings than the ubiquitously used English Wikipedia.
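
One simple instance of a corpus-similarity measure of the kind used for the prediction, sketched under the assumption that common content can be proxied by shared frequent vocabulary (e.g., named entities occurring in both Wikipedias); this is illustrative, not the paper's exact feature set.

from collections import Counter

def vocab_overlap(corpus_a, corpus_b, top_n=10000):
    """corpus_a/b: iterables of token lists; compares the top-n vocabularies."""
    freq_a = Counter(t for sent in corpus_a for t in sent)
    freq_b = Counter(t for sent in corpus_b for t in sent)
    top_a = {w for w, _ in freq_a.most_common(top_n)}
    top_b = {w for w, _ in freq_b.most_common(top_n)}
    return len(top_a & top_b) / top_n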

Changing the Basis of Contextual Representations with Explicit Semantics
Tamás Ficsor | Gábor Berend
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop

The application of transformer-based contextual representations has become a de facto solution for solving complex NLP tasks. Despite their successes, such representations are arguably opaque, as their latent dimensions are not directly interpretable. To alleviate this limitation, we devise an algorithm whose output representation expresses human-interpretable information in each dimension. We achieve this by constructing a transformation matrix, based on the semantic content of the embedding space and predefined semantic categories, using the Hellinger distance. We evaluate our inferred representations on the supersense prediction task. Our experiments reveal that the interpretable nature of the transformed contextual representations makes it possible to accurately predict the supersense category of a word simply by looking for its transformed coordinate with the largest coefficient. We quantify the effects of our proposed transformation when applied to traditional dense contextual embeddings, and we additionally investigate and report consistent improvements from integrating sparse contextual word representations into our proposed algorithm.
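
A hedged sketch of the transformation described above: associating each embedding dimension with the predefined semantic categories through the Hellinger distance of two discrete distributions. How the per-dimension and per-category distributions are estimated is abstracted away here.

import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def category_basis(P, Q):
    """P: (n_dims, n_bins) per-dimension distributions; Q: (n_categories,
    n_bins) per-category distributions; rows of both sum to one. Returns a
    (n_dims, n_categories) transformation matrix."""
    W = np.empty((P.shape[0], Q.shape[0]))
    for i, p in enumerate(P):
        for j, q in enumerate(Q):
            # smaller distance means stronger association with the category
            W[i, j] = 1.0 - hellinger(p, q)
    return W

Multiplying a contextual embedding by W then yields coordinates aligned with the predefined categories, so the coordinate with the largest coefficient indicates the predicted supersense.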

2020

ProsperAMnet at FinCausal 2020, Task 1 & 2: Modeling causality in financial texts using multi-headed transformers
Zsolt Szántó | Gábor Berend
Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation

This paper introduces our efforts at the FinCausal shared task on modeling causality in financial utterances. Our approach uses the commonly and successfully applied strategy of fine-tuning a transformer-based language model, with a twist: we modified the training and inference mechanism such that our model produces multiple predictions for the same instance. By designing a model that returns k>1 predictions at the same time, we not only obtain more resource-efficient training (as opposed to fine-tuning a pre-trained language model k independent times), but our results indicate that we are also capable of obtaining comparable or even better evaluation scores this way. We compare multiple strategies for combining the k predictions of our model. Our submissions ranked third on both subtasks of the shared task.
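
The strategies for combining the k predictions can be illustrated as follows; the two options shown (probability averaging and majority voting) are plausible instances, not necessarily the exact set compared in the paper.

import torch

def combine_heads(logits, strategy="average"):
    """logits: (k, batch, n_classes) stacked outputs of the k heads."""
    if strategy == "average":
        # average the heads' probability estimates, then pick the best class
        return logits.softmax(dim=-1).mean(dim=0).argmax(dim=-1)
    if strategy == "vote":
        # majority voting over the heads' hard decisions
        return logits.argmax(dim=-1).mode(dim=0).values
    raise ValueError(strategy)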

Quasi-Multitask Learning: an Efficient Surrogate for Obtaining Model Ensembles
Norbert Kis-Szabó | Gábor Berend
Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing

We propose the technique of quasi-multitask learning (Q-MTL), a simple and easy-to-implement modification of standard multitask learning in which the tasks to be modeled are identical. With this simple modification of a standard neural classifier, we can get benefits similar to an ensemble of classifiers at a fraction of the resources required. We illustrate through a series of sequence labeling experiments over a diverse set of languages that applying Q-MTL consistently increases the generalization ability of the applied models. The proposed architecture can be regarded as a new regularization technique that encourages the model to develop an internal representation of the problem at hand that is beneficial to multiple output units of the classifier at the same time. Our experiments corroborate that, by relying on the proposed algorithm, we can approximate the quality of an ensemble of classifiers at a fraction of the computational resources required. Additionally, our results suggest that Q-MTL handles the presence of noisy training labels better than ensembles.
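
A minimal sketch of a Q-MTL-style model, assuming a generic encoder and k identical linear heads trained on the same labels; names and hyperparameters are illustrative.

import torch
import torch.nn as nn

class QMTLTagger(nn.Module):
    def __init__(self, encoder, hidden_dim, n_labels, k=3):
        super().__init__()
        self.encoder = encoder   # any module mapping inputs to hidden states
        self.heads = nn.ModuleList([nn.Linear(hidden_dim, n_labels) for _ in range(k)])

    def forward(self, inputs):
        h = self.encoder(inputs)                              # (batch, seq, hidden_dim)
        return torch.stack([head(h) for head in self.heads])  # (k, batch, seq, n_labels)

def qmtl_loss(logits, labels, criterion=nn.CrossEntropyLoss()):
    # every head is trained on the very same labels, i.e. identical tasks
    return sum(criterion(l.flatten(0, 1), labels.flatten()) for l in logits)

At inference time the heads can be combined exactly as an ensemble would be, e.g., by averaging their softmax outputs.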

ProsperAMnet at the FinSim Task: Detecting hypernyms of financial concepts via measuring the information stored in sparse word representations
Gábor Berend | Norbert Kis-Szabó | Zsolt Szántó
Proceedings of the Second Workshop on Financial Technology and Natural Language Processing

Sparsity Makes Sense: Word Sense Disambiguation Using Sparse Contextualized Word Representations
Gábor Berend
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

In this paper, we demonstrate that by utilizing sparse word representations, it becomes possible to surpass the results of more complex task-specific models on the task of fine-grained all-words word sense disambiguation. Our proposed algorithm relies on an overcomplete set of semantic basis vectors that allows us to obtain sparse contextualized word representations. We introduce an information theory-inspired synset representation, based on the co-occurrence of word senses and non-zero coordinates of word forms, which allows us to achieve an aggregated F-score of 78.8 over a combination of five standard word sense disambiguation benchmark datasets. We also demonstrate the general applicability of our proposed framework by evaluating it on part-of-speech tagging over four different treebanks. Our results indicate a significant improvement over the application of dense word representations.
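
The co-occurrence statistic behind the synset representation can be sketched as a PMI-style association between senses and the nonzero coordinates of sparse codes, computed on sense-annotated data; this is a simplified reading of the information-theoretic formulation, with placeholder names.

import numpy as np

def sense_coordinate_association(S, senses, n_senses):
    """S: (n_tokens, n_atoms) sparse codes of sense-annotated tokens;
    senses: (n_tokens,) integer sense labels."""
    active = (S != 0).astype(float)            # nonzero coordinate indicators
    counts = np.zeros((n_senses, S.shape[1]))
    for row, sense in zip(active, senses):
        counts[sense] += row
    joint = counts / counts.sum()
    p_sense = joint.sum(axis=1, keepdims=True)
    p_atom = joint.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(joint / (p_sense * p_atom))
    return np.where(np.isfinite(pmi), pmi, 0.0)  # (n_senses, n_atoms) scores

At prediction time, a token's candidate senses can then be ranked by summing the association scores of its nonzero coordinates.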

2018

300-sparsans at SemEval-2018 Task 9: Hypernymy as interaction of sparse attributes
Gábor Berend | Márton Makrai | Péter Földiák
Proceedings of the 12th International Workshop on Semantic Evaluation

This paper describes the participation of 300-sparsans in SemEval-2018 Task 9: Hypernym Discovery, with a system based on sparse coding and a formal concept hierarchy obtained from word embeddings. Our system took first place in subtasks (1B) Italian (all and entities), (1C) Spanish entities, and (2B) music entities.
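
One illustrative reading of hypernymy as an interaction of sparse attributes is to treat the nonzero coordinates of sparse word codes as formal-concept attributes and score candidates by attribute containment; the scoring below is a hypothetical simplification, not the submitted system.

import numpy as np

def hypernym_score(query_code, candidate_code):
    """query_code/candidate_code: 1-d sparse codes of the two words."""
    q = set(np.flatnonzero(query_code))
    c = set(np.flatnonzero(candidate_code))
    if not q or not c:
        return 0.0
    # a hypernym's attributes tend to be shared by its hyponyms, so a
    # candidate whose attributes are contained in the query's scores high
    return len(q & c) / len(c)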

2017

SZTE-NLP at SemEval-2017 Task 10: A High Precision Sequence Model for Keyphrase Extraction Utilizing Sparse Coding for Feature Generation
Gábor Berend
Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)

In this paper we introduce our system participating in the 2017 SemEval shared task on keyphrase extraction from scientific documents. We aimed at creating a keyphrase extraction approach that relies on as few external resources as possible. Without applying any hand-crafted external resources, and utilizing only a transformed version of word embeddings trained on Wikipedia, our proposed system performs among the best participating systems in terms of precision.

Sparse Coding of Neural Word Embeddings for Multilingual Sequence Labeling
Gábor Berend
Transactions of the Association for Computational Linguistics, Volume 5

In this paper we propose and carefully evaluate a sequence labeling framework which solely utilizes sparse indicator features derived from dense distributed word representations. The proposed model obtains (near) state-of-the-art performance for both part-of-speech tagging and named entity recognition for a variety of languages. Our model relies only on a few thousand sparse coding-derived features, without any modification of the word representations employed for the different tasks. The proposed model has favorable generalization properties, retaining over 89.8% of its average POS tagging accuracy when trained on 1.2% of the total available training data, i.e., 150 sentences per language.
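
The feature extraction the abstract describes can be sketched as follows, assuming a dictionary has already been learned from the dense embeddings; the sign-aware binarization shown is one plausible way to turn sparse codes into indicator features.

import numpy as np
from sklearn.decomposition import sparse_encode

def indicator_features(X, D, alpha=0.1):
    """X: (n_tokens, dim) dense embeddings; D: (n_atoms, dim) dictionary
    learned beforehand. Returns one binary feature per (atom, sign) pair."""
    S = sparse_encode(X, D, algorithm="lasso_lars", alpha=alpha)
    pos, neg = S > 0, S < 0
    return np.hstack([pos, neg]).astype(np.int8)   # (n_tokens, 2 * n_atoms)

The resulting binary matrix can be fed directly to a linear sequence labeler in place of hand-crafted features.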

2015

USZEGED: Correction Type-sensitive Normalization of English Tweets Using Efficiently Indexed n-gram Statistics
Gábor Berend | Ervin Tasnádi
Proceedings of the Workshop on Noisy User-generated Text

2014

SZTE-NLP: Aspect level opinion mining exploiting syntactic cues
Viktor Hangya | Gábor Berend | István Varga | Richárd Farkas
Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014)

2013

SZTE-NLP: Sentiment Detection on Twitter Messages
Viktor Hangya | Gábor Berend | Richárd Farkas
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013)

Keyphrase-Driven Document Visualization Tool
Gábor Berend | Richárd Farkas
The Companion Volume of the Proceedings of IJCNLP 2013: System Demonstrations

LFG-based Features for Noun Number and Article Grammatical Errors
Gábor Berend | Veronika Vincze | Sina Zarrieß | Richárd Farkas
Proceedings of the Seventeenth Conference on Computational Natural Language Learning: Shared Task

2012

How to Evaluate Opinionated Keyphrase Extraction?
Gábor Berend | Veronika Vincze
Proceedings of the 3rd Workshop in Computational Approaches to Subjectivity and Sentiment Analysis

2011

Opinion Expression Mining by Exploiting Keyphrase Extraction
Gábor Berend
Proceedings of 5th International Joint Conference on Natural Language Processing

Detecting Noun Compounds and Light Verb Constructions: a Contrastive Study
Veronika Vincze | István Nagy T. | Gábor Berend
Proceedings of the Workshop on Multiword Expressions: from Parsing and Generation to the Real World

Noun Compound and Named Entity Recognition and their Usability in Keyphrase Extraction
István Nagy T. | Gábor Berend | Veronika Vincze
Proceedings of the International Conference Recent Advances in Natural Language Processing 2011

Multiword Expressions and Named Entities in the Wiki50 Corpus
Veronika Vincze | István Nagy T. | Gábor Berend
Proceedings of the International Conference Recent Advances in Natural Language Processing 2011

Domain-Dependent Identification of Multiword Expressions
István Nagy T. | Veronika Vincze | Gábor Berend
Proceedings of the International Conference Recent Advances in Natural Language Processing 2011

Domain-Dependent Detection of Light Verb Constructions
István T. Nagy | Gábor Berend | György Móra | Veronika Vincze
Proceedings of the Second Student Research Workshop associated with RANLP 2011

Inter-domain Opinion Phrase Extraction Based on Feature Augmentation
Gábor Berend | István T. Nagy | György Móra | Veronika Vincze
Proceedings of the Second Student Research Workshop associated with RANLP 2011

2010

SZTERGAK : Feature Engineering for Keyphrase Extraction
Gábor Berend | Richárd Farkas
Proceedings of the 5th International Workshop on Semantic Evaluation