Proceedings of the First Workshop on Federated Learning for Natural Language Processing (FL4NLP 2022)

Bill Yuchen Lin, Chaoyang He, Chulin Xie, Fatemehsadat Mireshghallah, Ninareh Mehrabi, Tian Li, Mahdi Soltanolkotabi, Xiang Ren (Editors)


Anthology ID: 2022.fl4nlp-1
Month: May
Year: 2022
Address: Dublin, Ireland
Venue: FL4NLP
Publisher: Association for Computational Linguistics
URL: https://aclanthology.org/2022.fl4nlp-1
PDF: https://aclanthology.org/2022.fl4nlp-1.pdf

Proceedings of the First Workshop on Federated Learning for Natural Language Processing (FL4NLP 2022)
Bill Yuchen Lin | Chaoyang He | Chulin Xie | Fatemehsadat Mireshghallah | Ninareh Mehrabi | Tian Li | Mahdi Soltanolkotabi | Xiang Ren

ActPerFL: Active Personalized Federated Learning
Huili Chen | Jie Ding | Eric Tramel | Shuang Wu | Anit Kumar Sahu | Salman Avestimehr | Tao Zhang

In personalized federated learning (FL), the critical challenge is to balance local model improvement against global model tuning when the personal and global objectives are not exactly aligned. Inspired by Bayesian hierarchical models, we develop ActPerFL, a self-aware personalized FL method in which each client automatically balances the training of its local personal model and of the global model that implicitly contributes to other clients’ training. This balance is derived from inter-client and intra-client uncertainty quantification. Consequently, ActPerFL can adapt to the underlying client heterogeneity with uncertainty-driven local training and model aggregation. Experiments on Sent140 and Amazon Alexa audio data show that ActPerFL achieves superior personalization performance compared with existing counterparts.
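
The uncertainty-driven balance between local and global training described in this abstract can be illustrated with a small sketch. The snippet below is not the authors’ implementation; it assumes a simple precision-weighted blend in which intra-client variance down-weights the local model and inter-client variance down-weights the global one. All names and the weighting rule are illustrative assumptions.

import numpy as np

def personalize(local_params, global_params, intra_var, inter_var):
    # Blend local and global parameters by inverse-variance (precision) weights:
    # low intra-client uncertainty -> trust the local model more,
    # low inter-client disagreement -> trust the global model more.
    w_local = 1.0 / (intra_var + 1e-8)
    w_global = 1.0 / (inter_var + 1e-8)
    alpha = w_local / (w_local + w_global)
    return alpha * local_params + (1.0 - alpha) * global_params

def aggregate(client_params):
    # Server-side averaging; the variance across clients doubles as an
    # inter-client uncertainty estimate.
    stacked = np.stack(client_params)          # shape: (num_clients, num_params)
    return stacked.mean(axis=0), stacked.var(axis=0)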

Scaling Language Model Size in Cross-Device Federated Learning
Jae Ro | Theresa Breiner | Lara McConnaughey | Mingqing Chen | Ananda Suresh | Shankar Kumar | Rajiv Mathews

Most studies in cross-device federated learning focus on small models, owing to server-client communication and on-device computation bottlenecks. In this work, we leverage a range of techniques for mitigating these bottlenecks in order to train larger language models in cross-device federated learning. Through systematic application of partial model training, quantization, efficient transfer learning, and communication-efficient optimizers, we train a 21M-parameter Transformer that matches the perplexity of a similarly sized LSTM while requiring ∼10× less client-to-server communication, and that achieves 11% lower perplexity than the smaller LSTMs commonly studied in the literature.
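
Two of the techniques listed in this abstract, partial model training and update quantization, can be sketched briefly. The code below is an assumption-laden illustration rather than the paper’s system: the layer names are hypothetical and the quantizer is a plain uniform 8-bit scheme.

import torch

def mark_trainable(model, trainable_prefixes=("lm_head",)):
    # Partial model training: freeze everything except a small set of
    # submodules (hypothetical names), so only their updates are communicated.
    for name, param in model.named_parameters():
        param.requires_grad = any(name.startswith(p) for p in trainable_prefixes)

def quantize_update(update, num_bits=8):
    # Uniformly quantize a client update before upload to cut client-to-server
    # bandwidth; the server reconstructs the update as q.float() * scale.
    qmax = 2 ** (num_bits - 1) - 1
    scale = update.abs().max() / qmax + 1e-12
    q = torch.clamp(torch.round(update / scale), -qmax - 1, qmax)
    return q.to(torch.int8), scale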

Adaptive Differential Privacy for Language Model Training
Xinwei Wu | Li Gong | Deyi Xiong

Although differential privacy (DP) can protect language models from leaking private information, its indiscriminate protection of all data points reduces its practical utility. Previous works improve DP training by distinguishing private from non-private data, but they rely on datasets annotated with prior privacy information, which is unavailable in real-world scenarios. In this paper, we propose an Adaptive Differential Privacy (ADP) framework for language modeling that does not resort to prior privacy information. We estimate the probability that a linguistic item contains private content using a language model, and we further propose a new Adam algorithm that adjusts the amount of differential-privacy noise injected into the language model according to the estimated privacy probabilities. Experiments demonstrate that our ADP improves differentially private language modeling and achieves good protection against canary attackers.
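
The core idea, scaling DP noise by an estimated privacy probability, can be sketched as follows. This is a hedged illustration and not the paper’s ADP-Adam: the surprisal-based scoring heuristic, the calibration offset, and the per-example noise rule are all assumptions.

import torch

def privacy_score(token_logprobs):
    # Heuristic: sequences the language model finds surprising are treated
    # as more likely to contain private content.
    surprisal = -token_logprobs.mean()
    return torch.sigmoid(surprisal - 5.0).item()   # 5.0 is an arbitrary offset

def adaptive_noisy_grad(per_example_grads, scores, clip_norm=1.0, base_sigma=1.0):
    # Clip each example's gradient, then add Gaussian noise whose scale grows
    # with the example's estimated privacy probability; the averaged result
    # would be fed to an Adam-style optimizer.
    noisy = []
    for g, s in zip(per_example_grads, scores):
        g = g * min(1.0, clip_norm / (g.norm().item() + 1e-12))
        noisy.append(g + torch.randn_like(g) * base_sigma * s * clip_norm)
    return torch.stack(noisy).mean(dim=0)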

Intrinsic Gradient Compression for Scalable and Efficient Federated Learning
Luke Melas-Kyriazi | Franklyn Wang

Federated learning is a rapidly growing area of research, holding the promise of privacy-preserving distributed training on edge devices. The largest barrier to wider adoption of federated learning is the communication cost of model updates, which is accentuated by the fact that many edge devices are bandwidth-constrained. At the same time, within the machine learning theory community, a separate line of research has emerged around optimizing networks within a subspace of the full space of all parameters. The dimension of the smallest subspace for which these methods still yield strong results is called the intrinsic dimension. In this work, we prove a general correspondence between the notions of intrinsic dimension and gradient compressibility, and we show that a family of low-bandwidth federated learning algorithms, which we call intrinsic gradient compression algorithms, naturally emerges from this correspondence. Finally, we conduct large-scale NLP experiments using transformer models with over 100M parameters (GPT-2 and BERT), and show that our method significantly outperforms the state-of-the-art in gradient compression.
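
The correspondence between intrinsic dimension and gradient compressibility suggests a simple communication pattern: clients and server share a random projection (for example, generated from a common seed) and exchange only low-dimensional coordinates. The snippet below is a minimal sketch under that assumption, with illustrative sizes; it is not the paper’s algorithm family.

import torch

D, d = 10_000, 128                        # full and intrinsic dimensions (illustrative)
torch.manual_seed(0)                      # shared seed so all parties build the same projection
P = torch.randn(D, d) / d ** 0.5          # approximately norm-preserving random subspace

def compress(full_grad):                  # client: project the D-dim gradient to d coordinates
    return P.t() @ full_grad

def decompress(coords):                   # server: lift the d coordinates back to a D-dim update
    return P @ coords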