How Predictable Are Large Language Model Capabilities? A Case Study on BIG-bench

Qinyuan Ye, Harvey Fu, Xiang Ren, Robin Jia


Abstract
We investigate the predictability of large language model (LLM) capabilities: given records of past experiments using different model families, numbers of parameters, tasks, and numbers of in-context examples, can we accurately predict LLM performance on new experiment configurations? Answering this question has practical implications for LLM users (e.g., deciding which models to try), developers (e.g., prioritizing evaluation on representative tasks), and the research community (e.g., identifying hard-to-predict capabilities that warrant further investigation). We study the performance prediction problem on experiment records from BIG-bench. On a random train-test split, an MLP-based predictor achieves an R² score greater than 95%, indicating the presence of learnable patterns within the experiment records. We then formulate the problem of searching for “small-bench,” an informative subset of BIG-bench tasks from which the performance on the full set can be maximally recovered. We find a subset as informative as BIG-bench Hard for evaluating new model families, while being smaller. Additionally, we find competitive subsets by clustering task representations learned by our MLP-based predictor and selecting tasks close to cluster centroids, highlighting the importance of task diversity in constructing “small-bench.”
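The centroid-based selection strategy mentioned in the abstract can be illustrated with a minimal sketch: cluster the learned task representations with k-means, then pick the task nearest each centroid. This is an assumption-laden illustration, not the authors' implementation; the function name, the plain NumPy k-means, and the Euclidean distance choice are all hypothetical stand-ins.

```python
import numpy as np


def select_representative_tasks(task_embeddings, k, n_iters=50, seed=0):
    """Pick up to k tasks whose embeddings lie closest to k-means centroids.

    task_embeddings: (n_tasks, dim) array of learned task representations
    (e.g., embeddings from a trained performance predictor).
    Returns sorted indices of the selected tasks.
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(task_embeddings, dtype=float)
    n = X.shape[0]
    # Initialize centroids from k distinct randomly chosen tasks.
    centroids = X[rng.choice(n, size=k, replace=False)]
    for _ in range(n_iters):
        # Assign each task to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster empties.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    # For each non-empty cluster, select the member closest to its centroid.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    selected = []
    for j in range(k):
        members = np.where(labels == j)[0]
        if members.size:
            selected.append(int(members[dists[members, j].argmin()]))
    return sorted(set(selected))
```

Selecting tasks near centroids, rather than the centroids themselves, keeps the chosen subset made of real benchmark tasks while still spreading picks across distinct regions of the representation space, which matches the abstract's emphasis on task diversity.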
Anthology ID:
2023.findings-emnlp.503
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7493–7517
URL:
https://aclanthology.org/2023.findings-emnlp.503
DOI:
10.18653/v1/2023.findings-emnlp.503
Cite (ACL):
Qinyuan Ye, Harvey Fu, Xiang Ren, and Robin Jia. 2023. How Predictable Are Large Language Model Capabilities? A Case Study on BIG-bench. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 7493–7517, Singapore. Association for Computational Linguistics.
Cite (Informal):
How Predictable Are Large Language Model Capabilities? A Case Study on BIG-bench (Ye et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.503.pdf