Jetsons at the FinNLP-2022 ERAI Task: BERT-Chinese for mining high MPP posts

Alolika Gon, Sihan Zha, Sai Krishna Rallabandi, Parag Pravin Dakle, Preethi Raghavan


Abstract
In this paper, we discuss the various approaches taken by the Jetsons team for the "Pairwise Comparison" sub-task of the ERAI shared task, which compares financial opinions for profitability and loss. Our BERT-Chinese model takes a pair of opinions and predicts the one with the higher maximal potential profit (MPP) with 62.07% accuracy. We analyze the performance of our approaches on both the MPP and maximal loss (ML) problems and dive deeply into why BERT-Chinese outperforms the other models.
Anthology ID:
2022.finnlp-1.19
Volume:
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates (Hybrid)
Editors:
Chung-Chi Chen, Hen-Hsen Huang, Hiroya Takamura, Hsin-Hsi Chen
Venue:
FinNLP
Publisher:
Association for Computational Linguistics
Pages:
141–146
URL:
https://aclanthology.org/2022.finnlp-1.19
DOI:
10.18653/v1/2022.finnlp-1.19
Cite (ACL):
Alolika Gon, Sihan Zha, Sai Krishna Rallabandi, Parag Pravin Dakle, and Preethi Raghavan. 2022. Jetsons at the FinNLP-2022 ERAI Task: BERT-Chinese for mining high MPP posts. In Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP), pages 141–146, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Cite (Informal):
Jetsons at the FinNLP-2022 ERAI Task: BERT-Chinese for mining high MPP posts (Gon et al., FinNLP 2022)
PDF:
https://aclanthology.org/2022.finnlp-1.19.pdf
Video:
https://aclanthology.org/2022.finnlp-1.19.mp4