Nádia Félix Felipe da Silva

Also published as: Nadia Félix Felipe da Silva


2024

Enhancing Stance Detection in Low-Resource Brazilian Portuguese Using Corpus Expansion generated by GPT-3.5
Dyonnatan Maia | Nádia Félix Felipe da Silva
Proceedings of the 16th International Conference on Computational Processing of Portuguese

Natural Language Processing Application in Legislative Activity: a Case Study of Similar Amendments in the Brazilian Senate
Diany Pressato | Pedro Lucas Castro de Andrade | Flávio Rocha Junior | Felipe Alves Siqueira | Ellen Polliana Ramos Souza | Nádia Félix Felipe da Silva | Márcio de Souza Dias | André Carlos Ponce de Leon Ferreira de Carvalho
Proceedings of the 16th International Conference on Computational Processing of Portuguese

2023

DeepLearningBrasil@LT-EDI-2023: Exploring Deep Learning Techniques for Detecting Depression in Social Media Text
Eduardo Garcia | Juliana Gomes | Adalberto Ferreira Barbosa Junior | Cardeque Henrique Bittes de Alvarenga Borges | Nadia Félix Felipe da Silva
Proceedings of the Third Workshop on Language Technology for Equality, Diversity and Inclusion

In this paper, we describe the strategy employed by our team, DeepLearningBrasil, which secured first place in the shared task DepSign-LT-EDI@RANLP-2023 by a margin of 2.4%. The task was to classify social media texts into three levels of depression: “not depressed,” “moderately depressed,” and “severely depressed.” Leveraging the RoBERTa and DeBERTa models, we further pre-trained them on a collected Reddit dataset, specifically curated from mental health-related Reddit communities (subreddits), leading to an enhanced understanding of nuanced mental health discourse. To address lengthy textual data, we introduced truncation techniques that retained the essence of the content by focusing on its beginnings and endings. To make the model robust to unbalanced data, we incorporated sample weights into the loss. Cross-validation and ensemble techniques were then employed to combine our k-fold trained models, delivering an optimal solution. The accompanying code is made available for transparency and further development.
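
As an illustration only (not the authors' released code), the sketch below shows two of the ideas mentioned in the abstract: head-and-tail truncation that keeps the beginning and end of a long post, and class weights in the loss to counter label imbalance. The checkpoint name, token budgets, and class counts are assumptions for the example.

```python
# Illustrative sketch of head+tail truncation and a class-weighted loss.
# Checkpoint, lengths, and counts below are assumptions, not values from the paper.
import torch
from torch import nn
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")  # assumed checkpoint

def head_tail_truncate(text, max_len=512, head_len=256):
    """Keep the first `head_len` tokens and the last `max_len - head_len` tokens."""
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    if len(ids) > max_len:
        tail_len = max_len - head_len
        ids = ids[:head_len] + ids[-tail_len:]
    # Add special tokens and enforce the final length budget.
    return tokenizer.prepare_for_model(ids, truncation=True, max_length=max_len)

# Inverse-frequency class weights for the three depression levels (illustrative counts).
counts = torch.tensor([3000.0, 1500.0, 500.0])  # not depressed, moderate, severe
weights = counts.sum() / (len(counts) * counts)
loss_fn = nn.CrossEntropyLoss(weight=weights)
```
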