Representation and Generation of Machine Learning Test Functions

Souha Hassine, Steven Wilson


Abstract
Writing tests for machine learning (ML) code is a crucial step towards ensuring the correctness and reliability of ML software. At the same time, Large Language Models (LLMs) have been adopted at a rapid pace for various code generation tasks, making them a natural choice for many developers who need to write ML tests. However, the implications of using these models, and how LLM-generated tests differ from human-written ones, are relatively unexplored. In this work, we examine the use of LLMs to extract representations of ML source code and tests in order to understand the semantic relationships between human-written test functions and LLM-generated ones, and we annotate a set of LLM-generated tests for several important qualities, including usefulness, documentation, and correctness. We find that programmers prefer LLM-generated tests to those selected using retrieval-based methods, and in some cases, to those written by other humans.
Anthology ID:
2024.eacl-srw.18
Volume:
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Neele Falk, Sara Papi, Mike Zhang
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
238–247
URL:
https://aclanthology.org/2024.eacl-srw.18
Cite (ACL):
Souha Hassine and Steven Wilson. 2024. Representation and Generation of Machine Learning Test Functions. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 238–247, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
Representation and Generation of Machine Learning Test Functions (Hassine & Wilson, EACL 2024)
PDF:
https://aclanthology.org/2024.eacl-srw.18.pdf