Maria Janicka


2019

TMLab SRPOL at SemEval-2019 Task 8: Fact Checking in Community Question Answering Forums
Piotr Niewiński | Aleksander Wawer | Maria Pszona | Maria Janicka
Proceedings of the 13th International Workshop on Semantic Evaluation

The article describes our submission to SemEval 2019 Task 8 on Fact-Checking in Community Forums. The systems under discussion participated in Subtask A: deciding whether a question asks for factual information, asks for an opinion or advice, or is merely socializing. Our primary submission ranked second among all participants in the official evaluation phase. The article presents our primary solution, a Deeply Regularized Residual Neural Network (DRR NN) with Universal Sentence Encoder embeddings, followed by a description of two contrastive solutions based on ensemble methods.
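
As a rough illustration of this kind of pipeline (not the authors' released code), the sketch below classifies a question into the three Subtask A classes by feeding Universal Sentence Encoder embeddings into a small residual network. The layer sizes, dropout rate, and L2 strength are illustrative assumptions standing in for the paper's "deep regularization" scheme.

import tensorflow as tf
import tensorflow_hub as hub

# Universal Sentence Encoder: maps each question to a 512-dim vector.
use = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

def residual_block(x, units=512, drop=0.5):
    # Two dense layers with dropout and L2 weight decay, plus a skip
    # connection; the regularization settings here are illustrative.
    h = tf.keras.layers.Dense(units, activation="relu",
            kernel_regularizer=tf.keras.regularizers.l2(1e-4))(x)
    h = tf.keras.layers.Dropout(drop)(h)
    h = tf.keras.layers.Dense(units,
            kernel_regularizer=tf.keras.regularizers.l2(1e-4))(h)
    return tf.keras.layers.Activation("relu")(tf.keras.layers.Add()([x, h]))

inputs = tf.keras.Input(shape=(512,))   # USE embedding of one question
x = residual_block(residual_block(inputs))
outputs = tf.keras.layers.Dense(3, activation="softmax")(x)  # factual / opinion / socializing
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Example: model.predict(use(["Is there a direct bus from the airport to Doha?"]))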

GEM: Generative Enhanced Model for adversarial attacks
Piotr Niewinski | Maria Pszona | Maria Janicka
Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER)

We present our Generative Enhanced Model (GEM), which we used to create the samples awarded first prize in the FEVER 2.0 Breakers Task. GEM is an extended language model built on the GPT-2 architecture. Adding a novel target-vocabulary input alongside the existing context input enabled controlled text generation. The training procedure yielded a model that inherited the knowledge of the pretrained GPT-2 and could therefore generate natural-sounding English sentences in the task domain with additional control. As a result, GEM generated malicious claims that mixed facts from various articles, making their truthfulness difficult to classify.
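
The core idea can be pictured as injecting a representation of the target vocabulary into GPT-2's input. The sketch below, written against the Hugging Face transformers API, is only an approximation of that idea: mean-pooling the target-vocabulary embeddings and adding them to every context position are our illustrative assumptions, not the paper's confirmed mechanism.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def gem_next_token_logits(context_ids, target_vocab_ids):
    # Look up embeddings for the context and for the target-vocabulary
    # tokens, mean-pool the latter into a single control vector, and add
    # it to every context position before running the transformer.
    wte = model.transformer.wte                 # GPT-2 token embedding table
    ctx = wte(context_ids)                      # (1, T, 768)
    control = wte(target_vocab_ids).mean(dim=1, keepdim=True)  # (1, 1, 768)
    out = model(inputs_embeds=ctx + control)
    return out.logits[:, -1, :]                 # distribution over the next token

# Example: steer the continuation of a context toward a chosen vocabulary.
ctx = tok("The committee announced that", return_tensors="pt").input_ids
vocab = tok(" fraud investigation resignation", return_tensors="pt").input_ids
with torch.no_grad():
    print(tok.decode(gem_next_token_logits(ctx, vocab).argmax(dim=-1)))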

2018

Multi-Module Recurrent Neural Networks with Transfer Learning
Filip Skurniak | Maria Janicka | Aleksander Wawer
Proceedings of the Workshop on Figurative Language Processing

This paper describes multiple solutions designed and tested for the problem of word-level metaphor detection. The proposed systems are all based on variants of recurrent neural network architectures. Specifically, we explore multiple sources of information: pre-trained word embeddings (GloVe), a dictionary of word concreteness, and a transfer-learning scenario based on the states of an encoder network from a neural machine translation system. One of the architectures combines all three components: (1) a neural CRF (Conditional Random Fields) trained directly on the metaphor data set; (2) the neural machine translation encoder from the transfer-learning scenario; and (3) a neural network that predicts the final labels, trained directly on the metaphor data set. Our results vary between test sets: the standalone neural CRF performs best on the submission data, while the combined system scores highest on a test subset randomly selected from the training data.
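
To make the word-level tagging setup concrete, here is a minimal BiLSTM-CRF sketch in PyTorch (using the third-party pytorch-crf package) that combines GloVe vectors with a per-word concreteness score. It omits the NMT-encoder transfer-learning module, and the dimensions and two-tag scheme are illustrative assumptions rather than the authors' configuration.

import torch
import torch.nn as nn
from torchcrf import CRF                  # pip install pytorch-crf

class MetaphorTagger(nn.Module):
    # BiLSTM-CRF over pre-computed GloVe vectors concatenated with a
    # scalar word-concreteness score; tags: literal vs. metaphorical.
    def __init__(self, emb_dim=300, hidden=128, num_tags=2):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim + 1, hidden,
                            batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, glove, concreteness, tags=None):
        x = torch.cat([glove, concreteness.unsqueeze(-1)], dim=-1)
        emissions = self.proj(self.lstm(x)[0])
        if tags is not None:
            return -self.crf(emissions, tags)  # negative log-likelihood loss
        return self.crf.decode(emissions)      # best tag sequence per sentence

# Example with random features: one 7-word sentence.
model = MetaphorTagger()
print(model(torch.randn(1, 7, 300), torch.rand(1, 7)))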