Alon Rozental


2020

Amobee at SemEval-2020 Task 7: Regularization of Language Model Based Classifiers
Alon Rozental | Dadi Biton | Ido Blank
Proceedings of the Fourteenth Workshop on Semantic Evaluation

This paper describes Amobee’s participation in SemEval-2020 Task 7: “Assessing Humor in Edited News Headlines”, sub-tasks 1 and 2. The goal of this task was to estimate the funniness of human-modified news headlines. In this paper we present methods to fine-tune and ensemble various language model (LM) based classifiers for this task. This technique was used for both sub-tasks and reached second place (out of 49) in sub-task 1 with an RMSE score of 0.5, and second place (out of 32) in sub-task 2 with an accuracy of 66%, without using any additional data beyond the official training set.
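The paper does not reproduce code here; as a minimal sketch only, the snippet below shows the general pattern of fine-tuning pretrained LMs as regressors and averaging an ensemble of their scores with the Hugging Face transformers library (the checkpoint names, the single-output regression head and the plain averaging are assumptions, not the authors' exact setup).

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sketch only, not the authors' code: average funniness scores from several
# LM-based regressors (each would first be fine-tuned on the task's training set).
MODEL_NAMES = ["bert-base-uncased", "roberta-base"]  # assumed checkpoints

def predict_funniness(headline: str) -> float:
    scores = []
    for name in MODEL_NAMES:
        tokenizer = AutoTokenizer.from_pretrained(name)
        # num_labels=1 gives a single-output (regression) head
        model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=1)
        model.eval()
        inputs = tokenizer(headline, return_tensors="pt", truncation=True)
        with torch.no_grad():
            scores.append(model(**inputs).logits.squeeze().item())
    return sum(scores) / len(scores)  # simple ensemble: mean of the model outputs

print(predict_funniness("Police arrest man for stealing all the clouds"))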

2019

Amobee at SemEval-2019 Tasks 5 and 6: Multiple Choice CNN Over Contextual Embedding
Alon Rozental | Dadi Biton
Proceedings of the 13th International Workshop on Semantic Evaluation

This article describes Amobee’s participation in “HatEval: Multilingual detection of hate speech against immigrants and women in Twitter” (task 5) and “OffensEval: Identifying and Categorizing Offensive Language in Social Media” (task 6). The goal of task 5 was to detect hate speech targeted at women and immigrants. The goal of task 6 was to identify and categorize offensive language in social media and to identify the offense target. We present a novel type of convolutional neural network called “Multiple Choice CNN” (MC-CNN), which we used over our newly developed contextual embeddings (Rozental et al., 2019). We used this architecture for both tasks and achieved 4th place out of 69 participants with an F1 score of 0.53 in task 5, and 2nd place (out of 75) in sub-task B of task 6 - automatic categorization of offense types (our model reached places 18/2/7 out of 103/75/65 for sub-tasks A, B and C, respectively, in task 6).
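The MC-CNN architecture itself is defined in the paper; the following is only an illustrative sketch of the general idea of running a small CNN head over token-level contextual embeddings (dimensions, pooling and the classification head are assumptions and do not reproduce the MC-CNN design).

import torch
import torch.nn as nn

class CNNOverContextualEmbeddings(nn.Module):
    """Toy CNN head over precomputed contextual token embeddings (sketch only)."""
    def __init__(self, emb_dim: int = 768, num_classes: int = 2, kernel: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, 128, kernel_size=kernel, padding=1)
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, emb_dim) from a contextual encoder
        x = self.conv(token_embeddings.transpose(1, 2))   # (batch, 128, seq_len)
        x = torch.relu(x).max(dim=2).values               # max-pool over time
        return self.classifier(x)                         # class logits

logits = CNNOverContextualEmbeddings()(torch.randn(4, 32, 768))
print(logits.shape)  # torch.Size([4, 2])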

2018

Amobee at IEST 2018: Transfer Learning from Language Models
Alon Rozental | Daniel Fleischer | Zohar Kelrich
Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis

This paper describes the system developed at Amobee for the WASSA 2018 implicit emotions shared task (IEST). The goal of this task was to predict the emotion expressed by missing words in tweets without an explicit mention of those words. We developed an ensemble system consisting of language models together with LSTM-based networks containing a CNN attention mechanism. Our approach represents a novel use of language models—specifically trained on a large Twitter dataset—to predict and classify emotions. Our system reached 1st place with a macro F1 score of 0.7145.
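For illustration only (this is not the system described above), one simple way to use a pretrained LM for such a cloze-style emotion task is to score candidate emotion words at the masked position; the checkpoint, the placeholder token and the candidate word list below are all assumptions.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

EMOTIONS = ["sad", "happy", "angry", "afraid", "ashamed", "surprised"]  # assumed label words

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def guess_emotion(tweet_with_blank: str) -> str:
    # Replace the removed emotion word with the mask token and score candidates there.
    text = tweet_with_blank.replace("[TRIGGERWORD]", tokenizer.mask_token)
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    candidate_scores = [logits[tokenizer.convert_tokens_to_ids(w)] for w in EMOTIONS]
    return EMOTIONS[int(torch.stack(candidate_scores).argmax())]

print(guess_emotion("I was so [TRIGGERWORD] when my flight got cancelled."))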

Amobee at SemEval-2018 Task 1: GRU Neural Network with a CNN Attention Mechanism for Sentiment Classification
Alon Rozental | Daniel Fleischer
Proceedings of the 12th International Workshop on Semantic Evaluation

This paper describes the participation of Amobee in the shared sentiment analysis task at SemEval 2018. We participated in all the English sub-tasks and the Spanish valence tasks. Our system consists of three parts: training task-specific word embeddings, training a model consisting of gated recurrent units (GRU) with a convolutional neural network (CNN) attention mechanism, and training stacking-based ensembles for each of the sub-tasks. Our algorithm reached the 3rd and 1st places in the valence ordinal classification sub-tasks in English and Spanish, respectively.
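Purely as an illustrative sketch (layer sizes and the exact attention formulation are assumptions, not the paper's configuration), the snippet below shows one way to combine a GRU encoder with a CNN-based attention layer over its hidden states.

import torch
import torch.nn as nn

class GRUWithCNNAttention(nn.Module):
    """Toy GRU encoder with a CNN-based attention layer (illustrative sketch only)."""
    def __init__(self, emb_dim: int = 300, hidden: int = 128, num_classes: int = 7):
        super().__init__()
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        # a 1-D convolution over the GRU states yields one attention score per time step
        self.attn_conv = nn.Conv1d(2 * hidden, 1, kernel_size=3, padding=1)
        self.out = nn.Linear(2 * hidden, num_classes)

    def forward(self, embedded: torch.Tensor) -> torch.Tensor:
        # embedded: (batch, seq_len, emb_dim) task-specific word embeddings
        states, _ = self.gru(embedded)                    # (batch, seq_len, 2*hidden)
        scores = self.attn_conv(states.transpose(1, 2))   # (batch, 1, seq_len)
        weights = torch.softmax(scores, dim=2)
        pooled = torch.bmm(weights, states).squeeze(1)    # attention-weighted sum
        return self.out(pooled)

print(GRUWithCNNAttention()(torch.randn(8, 40, 300)).shape)  # torch.Size([8, 7])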

2017

Amobee at SemEval-2017 Task 4: Deep Learning System for Sentiment Detection on Twitter
Alon Rozental | Daniel Fleischer
Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)

This paper describes the Amobee sentiment analysis system, adapted to compete in SemEval 2017 Task 4. The system consists of two parts: supervised training of RNN models based on a Twitter sentiment treebank, and the use of feedforward NN, Naive Bayes and logistic regression classifiers to produce predictions for the different sub-tasks. The algorithm reached 3rd place on the 5-label classification task (sub-task C).
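As a minimal sketch only (feature dimensionality and hyperparameters are assumptions, and random vectors stand in for the RNN-derived sentence representations), the snippet below shows the general pattern of training feedforward NN, Naive Bayes and logistic regression classifiers on top of fixed sentence features with scikit-learn.

# Illustrative sketch, not the authors' pipeline: classical classifiers trained on
# sentence features; random vectors stand in for the RNN-derived representations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 64)), rng.integers(0, 5, size=200)  # 5 sentiment labels
X_test = rng.normal(size=(10, 64))

classifiers = {
    "feedforward NN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),
    "Naive Bayes": GaussianNB(),
    "logistic regression": LogisticRegression(max_iter=1000),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(name, clf.predict(X_test)[:3])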