Vineet Gupta

Also published as: V. Gupta


2022

Large-Scale Differentially Private BERT
Rohan Anil | Badih Ghazi | Vineet Gupta | Ravi Kumar | Pasin Manurangsi
Findings of the Association for Computational Linguistics: EMNLP 2022

In this work, we study the large-scale pretraining of BERT-Large (Devlin et al., 2019) with differentially private SGD (DP-SGD). We show that, combined with a careful implementation, scaling up the batch size to millions (i.e., mega-batches) improves the utility of the DP-SGD step for BERT; we also enhance the training efficiency by using an increasing batch size schedule. Our implementation builds on the recent work of Subramani et al. (2020), who demonstrated that the overhead of a DP-SGD step is minimized with effective use of JAX (Bradbury et al., 2018; Frostig et al., 2018) primitives in conjunction with the XLA compiler (XLA team and collaborators, 2017). Our implementation achieves a masked language model accuracy of 60.5% at a batch size of 2M, for epsilon=5, which is a reasonable privacy setting. To put this number in perspective, non-private BERT models achieve an accuracy of ∼70%.
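To make the DP-SGD step concrete, the following is a minimal sketch in JAX of the general recipe the abstract describes: per-example gradients computed with vmap, clipped to an L2 bound, summed, perturbed with Gaussian noise, and averaged over the batch. It is not the authors' BERT training code; the toy linear model and the names dp_sgd_step, clip_norm, and noise_mult are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def loss_fn(params, example):
    # Stand-in linear model; the paper trains BERT-Large instead.
    x, y = example
    pred = jnp.dot(x, params)
    return (pred - y) ** 2

@jax.jit
def dp_sgd_step(params, batch, key, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    # Per-example gradients via vmap: one gradient per row of the batch.
    per_example_grads = jax.vmap(jax.grad(loss_fn), in_axes=(None, 0))(params, batch)
    # Clip each per-example gradient to L2 norm <= clip_norm.
    norms = jnp.linalg.norm(per_example_grads, axis=-1, keepdims=True)
    clipped = per_example_grads * jnp.minimum(1.0, clip_norm / (norms + 1e-12))
    # Sum, add Gaussian noise scaled by clip_norm * noise_mult, then average.
    # The averaged noise shrinks as the batch grows, which is one intuition for
    # why mega-batches improve utility at a fixed privacy budget.
    batch_size = batch[0].shape[0]
    noise = noise_mult * clip_norm * jax.random.normal(key, clipped.sum(axis=0).shape)
    noisy_mean = (clipped.sum(axis=0) + noise) / batch_size
    return params - lr * noisy_mean

# Example usage on synthetic data (illustrative only).
key = jax.random.PRNGKey(0)
params = jnp.zeros(16)
x = jax.random.normal(key, (1024, 16))
y = jnp.ones(1024)
params = dp_sgd_step(params, (x, y), key)
```

Under jit and XLA, the clipping and noise addition fuse with the gradient computation, which is the efficiency point the abstract attributes to the JAX-based implementation.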

1998

Efficient Linear Logic Meaning Assembly
Vineet Gupta | John Lamping
COLING 1998 Volume 1: The 17th International Conference on Computational Linguistics

Efficient Linear Logic Meaning Assembly
Vineet Gupta | John Lamping
36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 1

1990

An 86,000-Word Recognizer Based on Phonemic Models
M. Lennig | V. Gupta | P. Kenny | P. Mermelstein | D. O’Shaughnessy
Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990