Membership Inference Attack Detection and Privacy Disclosure Quantification in the Federated Learning Gradient Exchange Process

Authors

  • ChenWei Gong, Henry Samueli School of Engineering, University of California, Los Angeles, CA 90024, United States
  • TongYue Sun, Olin Business School, Washington University in St. Louis, St. Louis, MO 63130, United States
  • SongYun Zhang, Interdisciplinary Data Science, Gross Hall, Duke University, Durham, NC 27708, United States

DOI:

https://doi.org/10.63313/JCSFT.9055

Keywords:

Federated Learning, Membership Inference Attack, Gradient Leakage, Attack Detection, Privacy Quantification

Abstract

Federated learning aims to protect user data privacy by training models locally and exchanging only model updates. However, the exchanged gradients themselves can leak information, allowing malicious actors to launch membership inference attacks that determine whether a particular data sample is present in a client's training set, which poses a serious privacy threat. In this paper, we propose a complete detection and quantification scheme for membership inference attacks in federated learning gradient exchange. First, we design an attack detection framework based on gradient feature analysis, which extracts high-dimensional statistical features and distribution anomalies from client-uploaded gradients and feeds them to an ensemble learning model to identify potential attackers. Second, to evaluate the severity of privacy leakage when an attack succeeds, we propose a privacy leakage quantification method: by measuring the difference in gradient sensitivity between member and non-member samples, we construct a computable leakage scoring system and classify leakage into distinct risk levels. Experimental results show that the proposed detection method effectively identifies membership inference attacks across various federated learning scenarios, and that the quantitative leakage index accurately reflects the privacy risk level under different attack configurations and model states. This study provides theoretical and technical references for the privacy security assessment and dynamic protection of federated learning systems.
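The abstract does not specify the paper's actual scoring formula or risk thresholds. As a rough illustration of the underlying idea, measuring the gap in gradient sensitivity between member and non-member samples, the following sketch (all function names, the toy linear model, and the threshold values are hypothetical) fits a model on a "member" set and scores leakage by the normalized gap between mean per-sample gradient norms:

```python
import numpy as np

def per_sample_grad_norms(w, X, y):
    """L2 norm of each sample's squared-error gradient, 2*(w@x - y)*x."""
    residuals = X @ w - y
    return np.abs(2.0 * residuals) * np.linalg.norm(X, axis=1)

def leakage_score(member_norms, nonmember_norms, eps=1e-12):
    """Normalized sensitivity gap in [0, 1): 0 means members and
    non-members are indistinguishable from their gradients."""
    m, n = member_norms.mean(), nonmember_norms.mean()
    return abs(n - m) / (n + m + eps)

def risk_level(score, low=0.3, high=0.6):
    """Map a leakage score to a risk band (thresholds are illustrative)."""
    if score < low:
        return "low"
    if score < high:
        return "medium"
    return "high"

# Toy setup: a linear model fitted on the "member" data only.
rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
X_mem = rng.normal(size=(200, 5))
y_mem = X_mem @ w_true + 0.1 * rng.normal(size=200)   # members: low noise
X_non = rng.normal(size=(200, 5))
y_non = X_non @ w_true + 2.0 * rng.normal(size=200)   # non-members: off-distribution labels

w_fit, *_ = np.linalg.lstsq(X_mem, y_mem, rcond=None)
score = leakage_score(per_sample_grad_norms(w_fit, X_mem, y_mem),
                      per_sample_grad_norms(w_fit, X_non, y_non))
```

Because the model was fit on the member set, member gradients are near zero while non-member gradients are not, so the score lands high; a well-regularized or differentially private model would shrink the gap and the score with it.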

Published

2026-03-20

Section

Articles

How to Cite

Membership Inference Attack Detection and Privacy Disclosure Quantification in the Federated Learning Gradient Exchange Process. (2026). Journal of Computer Science and Frontier Technologies, 3(1), 1-8. https://doi.org/10.63313/JCSFT.9055