Research Article

Communication Efficient and Provable Federated Unlearning

Published: 01 January 2024

Abstract

We study federated unlearning, a novel problem of eliminating the impact of specific clients or data points on the global model learned via federated learning (FL). This problem is driven by the right to be forgotten and the privacy challenges in FL. We introduce a new framework for exact federated unlearning that meets two essential criteria: communication efficiency and exact unlearning provability. To our knowledge, this is the first work to tackle both aspects coherently. We start by giving a rigorous definition of exact federated unlearning, which guarantees that the unlearned model is statistically indistinguishable from the one trained without the deleted data. We then pinpoint the key property that enables fast exact federated unlearning: total variation (TV) stability, which measures the sensitivity of the model parameters to slight changes in the dataset. Leveraging this insight, we develop a TV-stable FL algorithm called FATS, which modifies the classical FedAvg algorithm for TV Stability and employs local SGD with periodic averaging to reduce the number of communication rounds. We also design efficient unlearning algorithms for FATS under two settings: client-level and sample-level unlearning. We provide theoretical guarantees for our learning and unlearning algorithms, proving that they achieve exact federated unlearning with reasonable convergence rates for both the original and unlearned models. We empirically validate our framework on six benchmark datasets and show its superiority over state-of-the-art methods in terms of accuracy, communication cost, computation cost, and unlearning efficacy.
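The communication pattern the abstract builds on (local SGD with periodic averaging, as in FedAvg) can be sketched on a toy problem. This is not the paper's FATS algorithm: the synthetic least-squares objective, the client data, and every hyperparameter below are invented purely to illustrate the round structure in which clients train locally and a server periodically averages their models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each client holds a few (x, y) pairs from one shared linear model.
num_clients, dim, samples_per_client = 4, 3, 8
w_true = rng.normal(size=dim)
clients = []
for _ in range(num_clients):
    X = rng.normal(size=(samples_per_client, dim))
    y = X @ w_true + 0.01 * rng.normal(size=samples_per_client)
    clients.append((X, y))

def local_sgd(w, X, y, steps, lr):
    """Run `steps` of plain SGD on one client's least-squares loss."""
    for _ in range(steps):
        i = rng.integers(len(y))
        grad = (X[i] @ w - y[i]) * X[i]
        w = w - lr * grad
    return w

w_global = np.zeros(dim)
rounds, local_steps, lr = 50, 5, 0.05
for _ in range(rounds):
    # One communication round: every client runs local SGD starting from the
    # current global model, then the server averages the resulting models.
    local_models = [local_sgd(w_global.copy(), X, y, local_steps, lr)
                    for X, y in clients]
    w_global = np.mean(local_models, axis=0)

error = np.linalg.norm(w_global - w_true)
print(error)
```

Communication cost here scales with `rounds`, not with the total number of SGD steps, which is why running several local steps per round (rather than synchronizing after every gradient) lowers the number of communication rounds.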


Cited By

  • (2024) A Survey on Federated Unlearning: Challenges, Methods, and Future Directions. ACM Computing Surveys 57(1), 1-38. DOI: 10.1145/3679014. Online publication date: 19-Jul-2024.
  • (2024) QuickDrop: Efficient Federated Unlearning via Synthetic Data Generation. Proceedings of the 25th International Middleware Conference, 266-278. DOI: 10.1145/3652892.3700764. Online publication date: 2-Dec-2024.

Published In

Proceedings of the VLDB Endowment  Volume 17, Issue 5
January 2024
233 pages

Publisher

VLDB Endowment
