Dan McFarland

Also published as: Daniel A. McFarland, Daniel McFarland

2023

Causal Matching with Text Embeddings: A Case Study in Estimating the Causal Effects of Peer Review Policies
Raymond Zhang | Neha Nayak Kennard | Daniel Smith | Daniel McFarland | Andrew McCallum | Katherine Keith
Findings of the Association for Computational Linguistics: ACL 2023

A promising approach to estimate the causal effects of peer review policies is to analyze data from publication venues that shift policies from single-blind to double-blind from one year to the next. However, in these settings the content of the manuscript is a confounding variable—each year has a different distribution of scientific content which may naturally affect the distribution of reviewer scores. To address this textual confounding, we extend variable-ratio nearest-neighbor matching to incorporate text embeddings. We compare this matching method to a widely used causal method of stratified propensity score matching and a baseline of randomly selected matches. For our case study of the ICLR conference shifting from single- to double-blind review from 2017 to 2018, we find human judges prefer manuscript matches from our method in 70% of cases. While the unadjusted estimate of the average causal effect of reviewers’ scores is -0.25, our method shifts the estimate to -0.17, a slightly smaller difference between the outcomes of single- and double-blind policies. We hope this case study enables exploration of additional text-based causal estimation methods and domains in the future.
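The core matching step the abstract describes can be sketched in a few lines: embed each manuscript, match each treated (double-blind) paper to its nearest single-blind neighbors in embedding space, and compare review scores. The sketch below is a simplified illustration rather than the paper's exact variable-ratio procedure; it assumes precomputed embeddings, uses a fixed number of matches, and the function name and synthetic data are hypothetical.

```python
# Minimal sketch of nearest-neighbor matching on text embeddings.
# Assumes manuscript embeddings and review scores are given as NumPy arrays.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def matched_effect_estimate(emb_treated, y_treated, emb_control, y_control, k=3):
    """Estimate the average effect on the treated by matching each treated
    manuscript to its k nearest control manuscripts in embedding space."""
    nn = NearestNeighbors(n_neighbors=k, metric="cosine").fit(emb_control)
    _, idx = nn.kneighbors(emb_treated)          # indices of matched controls
    control_means = y_control[idx].mean(axis=1)  # mean outcome of each match set
    return float(np.mean(y_treated - control_means))

# Hypothetical usage with random data standing in for real embeddings/scores.
rng = np.random.default_rng(0)
emb_2018, emb_2017 = rng.normal(size=(500, 384)), rng.normal(size=(700, 384))
scores_2018, scores_2017 = rng.normal(5.0, 1.5, 500), rng.normal(5.2, 1.5, 700)
print(matched_effect_estimate(emb_2018, scores_2018, emb_2017, scores_2017))
```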

2020

Will This Idea Spread Beyond Academia? Understanding Knowledge Transfer of Scientific Concepts across Text Corpora
Hancheng Cao | Mengjie Cheng | Zhepeng Cen | Daniel McFarland | Xiang Ren
Findings of the Association for Computational Linguistics: EMNLP 2020

What kinds of basic research ideas are more likely to be applied in practice? There is a long line of research investigating patterns of knowledge transfer, but it generally treats documents as the unit of analysis and follows their transfer into practice within a specific scientific domain. Here we study translational research at the level of scientific concepts across all scientific fields. We do this through text mining and predictive modeling using three corpora: 38.6 million paper abstracts, 4 million patent documents, and 0.28 million clinical trials. We extract scientific concepts (i.e., phrases) from the corpora as instantiations of “research ideas”, create concept-level features motivated by the literature, and then follow the trajectories of over 450,000 new concepts (which emerged between 1995 and 2014) to identify the factors that lead only a small proportion of these ideas to be used in inventions and drug trials. Results from our analysis suggest several mechanisms that distinguish which scientific concepts are adopted in practice and which are not. We also demonstrate that our derived features can be used to explain and predict knowledge transfer with high accuracy. Our work provides a greater understanding of knowledge transfer for researchers, practitioners, and government agencies interested in encouraging translational research.
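The prediction setup the abstract outlines, concept-level features fed into a classifier that predicts later appearance in patents or clinical trials, can be illustrated with a small sketch. The feature names, synthetic labels, and model choice below are assumptions for illustration only, not the authors' actual pipeline.

```python
# Illustrative sketch of concept-level knowledge-transfer prediction.
# Each row describes one scientific concept; the label records whether the
# concept later appears in patent or clinical-trial text (synthetic here).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_concepts = 10_000
X = np.column_stack([
    rng.poisson(5, n_concepts),      # e.g. number of papers using the concept
    rng.normal(0, 1, n_concepts),    # e.g. citation-based prominence score
    rng.uniform(0, 1, n_concepts),   # e.g. diversity of fields using the concept
])
y = rng.binomial(1, 0.05, n_concepts)  # only a small share transfers to practice

clf = LogisticRegression(max_iter=1000, class_weight="balanced")
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```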

2018

Measuring the Evolution of a Scientific Field through Citation Frames
David Jurgens | Srijan Kumar | Raine Hoover | Dan McFarland | Dan Jurafsky
Transactions of the Association for Computational Linguistics, Volume 6

Citations have long been used to characterize the state of a scientific field and to identify influential works. However, writers use citations for different purposes, and this varied purpose influences uptake by future scholars. Unfortunately, our understanding of how scholars use and frame citations has been limited to small-scale manual citation analysis of individual papers. We perform the largest behavioral study of citations to date, analyzing how scientific works frame their contributions through different types of citations and how this framing affects the field as a whole. We introduce a new dataset of nearly 2,000 citations annotated for their function, and use it to develop a state-of-the-art classifier and label the papers of an entire field: Natural Language Processing. We then show how differences in framing affect scientific uptake and reveal the evolution of the publication venues and the field as a whole. We demonstrate that authors are sensitive to discourse structure and publication venue when citing, and that how a paper frames its work through citations is predictive of the citation count it will receive. Finally, we use changes in citation framing to show that the field of NLP is undergoing a significant increase in consensus.
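The classification step described above, labeling each citation with its rhetorical function, can be sketched as a simple text classifier over citation contexts. The toy example below uses a bag-of-words model with invented sentences and an invented three-way label set; the paper's classifier and annotation scheme are richer.

```python
# Toy sketch of a citation-function classifier over citation contexts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

contexts = [
    "We follow the approach of [CIT] for sequence labeling.",
    "Unlike [CIT], our model requires no hand-crafted features.",
    "Early work on parsing includes [CIT].",
    "We use the dataset released by [CIT].",
]
functions = ["uses", "comparison", "background", "uses"]  # invented label set

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(contexts, functions)
print(clf.predict(["Our results improve over [CIT] by a wide margin."]))
```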

2016

Predicting the Rise and Fall of Scientific Topics from Trends in their Rhetorical Framing
Vinodkumar Prabhakaran | William L. Hamilton | Dan McFarland | Dan Jurafsky
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2012

Towards a Computational History of the ACL: 1980-2008
Ashton Anderson | Dan Jurafsky | Daniel A. McFarland
Proceedings of the ACL-2012 Special Workshop on Rediscovering 50 Years of Discoveries

2011

A Study of Academic Collaborations in Computational Linguistics using a Latent Mixture of Authors Model
Nikhil Johri | Daniel Ramage | Daniel McFarland | Daniel Jurafsky
Proceedings of the 5th ACL-HLT Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities

2009

Extracting Social Meaning: Identifying Interactional Style in Spoken Conversation
Dan Jurafsky | Rajesh Ranganath | Dan McFarland
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics

It’s Not You, it’s Me: Detecting Flirting and its Misperception in Speed-Dates
Rajesh Ranganath | Dan Jurafsky | Dan McFarland
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing