VTUBGM@LT-EDI-2023: Hope Speech Identification using Layered Differential Training of ULMFit

Sanjana M. Kavatagi, Rashmi R. Rachh, Shankar S. Biradar


Abstract
Hope speech embodies optimistic and uplifting sentiments, aiming to inspire individuals to maintain faith in positive progress and actively contribute to a better future. In this article, we outline the model presented by our team, VTUBGM, for the shared task “Hope Speech Detection for Equality, Diversity, and Inclusion” at LT-EDI-RANLP 2023. The task entails comment-level classification of YouTube comments and was conducted in four different languages: Bulgarian, English, Hindi, and Spanish. VTUBGM submitted a model developed through layered differential training of ULMFit. The model obtained a macro F1 score of 0.48 and ranked 3rd in the competition.
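
The layered differential training mentioned above corresponds to ULMFiT-style fine-tuning with gradual unfreezing and discriminative (layer-wise) learning rates: later layer groups are trained first with larger learning rates, and earlier groups are progressively unfrozen with smaller ones. Below is a minimal sketch of such a schedule using fastai; the file name, column names, learning rates, and epoch counts are illustrative assumptions, not the authors' exact configuration.

from fastai.text.all import *
import pandas as pd

# Hypothetical comment-level dataset: one text column, one hope/non-hope label column.
df = pd.read_csv("hope_speech_train.csv")  # assumed file name
dls = TextDataLoaders.from_df(df, text_col="text", label_col="label", valid_pct=0.2)

# Classifier built on an AWD-LSTM language-model backbone, as in ULMFiT.
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5,
                                metrics=F1Score(average="macro"))

# Layered differential training: unfreeze one layer group at a time and
# fine-tune with discriminative learning rates (smaller for lower layers).
learn.fit_one_cycle(1, 2e-2)                              # classifier head only
learn.freeze_to(-2)
learn.fit_one_cycle(1, slice(1e-2 / (2.6 ** 4), 1e-2))    # last two layer groups
learn.freeze_to(-3)
learn.fit_one_cycle(1, slice(5e-3 / (2.6 ** 4), 5e-3))    # last three layer groups
learn.unfreeze()
learn.fit_one_cycle(2, slice(1e-3 / (2.6 ** 4), 1e-3))    # all layers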
Anthology ID:
2023.ltedi-1.32
Volume:
Proceedings of the Third Workshop on Language Technology for Equality, Diversity and Inclusion
Month:
September
Year:
2023
Address:
Varna, Bulgaria
Editors:
Bharathi R. Chakravarthi, B. Bharathi, Josephine Griffith, Kalika Bali, Paul Buitelaar
Venues:
LTEDI | WS
Publisher:
INCOMA Ltd., Shoumen, Bulgaria
Pages:
209–213
URL:
https://aclanthology.org/2023.ltedi-1.32
Cite (ACL):
Sanjana M. Kavatagi, Rashmi R. Rachh, and Shankar S. Biradar. 2023. VTUBGM@LT-EDI-2023: Hope Speech Identification using Layered Differential Training of ULMFit. In Proceedings of the Third Workshop on Language Technology for Equality, Diversity and Inclusion, pages 209–213, Varna, Bulgaria. INCOMA Ltd., Shoumen, Bulgaria.
Cite (Informal):
VTUBGM@LT-EDI-2023: Hope Speech Identification using Layered Differential Training of ULMFit (Kavatagi et al., LTEDI-WS 2023)
PDF:
https://aclanthology.org/2023.ltedi-1.32.pdf