Memo: Papers to Read

1. The concept of federated learning, first proposed by Google.

KONEČNÝ J, MCMAHAN H B, RAMAGE D, et al. Federated optimization: distributed machine learning for on-device intelligence[J]. arXiv preprint, 2016, arXiv:1610.02527.

2. A privacy-leakage method that reconstructs the original data from a small fraction of the raw gradient information.

PHONG L T, AONO Y, HAYASHI T, et al. Privacy-preserving deep learning via additively homomorphic encryption[J]. IEEE Transactions on Information Forensics and Security, 2018, 13(5): 1333-1345.

3. An attack in which a malicious adversary steals the original data from partially updated gradient information; a toy sketch of the idea follows the reference below.

ZHU L, LIU Z, HAN S. Deep leakage from gradients[C]//Advances in Neural Information Processing Systems. 2019: 14774-14784.
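
Items 2 and 3 both describe gradient-inversion attacks. As a rough illustration of the deep-leakage-from-gradients idea (not the papers' exact setup), the sketch below assumes the attacker holds the model and one observed gradient, and optimizes dummy inputs and soft labels until their gradient matches; the toy model, dimensions, and optimizer settings are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))

def xent(logits, target_probs):
    # cross-entropy against soft labels, so the labels stay differentiable
    return -(target_probs * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()

# Victim computes one gradient on private data; the attacker sees only the gradient.
x_true = torch.randn(1, 8)
y_true = F.one_hot(torch.tensor([1]), num_classes=2).float()
true_grads = [g.detach() for g in
              torch.autograd.grad(xent(model(x_true), y_true), model.parameters())]

# Attacker: optimize dummy data and labels so their gradient matches the observed one.
x_dummy = torch.randn(1, 8, requires_grad=True)
y_dummy = torch.randn(1, 2, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    loss = xent(model(x_dummy), F.softmax(y_dummy, dim=-1))
    dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    match = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    match.backward()
    return match

for _ in range(30):
    opt.step(closure)
print("recovered input:", x_dummy.detach())
```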

4. Bagdasaryan E et al. and Bhagoji A N et al. describe attacks in which a malicious participant attacks the model, causing a sharp drop in classification accuracy.

BHAGOJI A N, CHAKRABORTY S, MITTAL P, et al. Analyzing federated learning through an adversarial lens[C]//International Conference on Machine Learning. [S.l.:s.n.], 2019.
BAGDASARYAN E, VEIT A, HUA Y, et al. How to backdoor federated learning[C]//International Conference on Artificial Intelligence and Statistics. [S.l.:s.n.], 2020.

5. Byzantine attacks (the reference proposes DRACO, a Byzantine-resilient defense based on redundant gradients).

CHEN L J, WANG H Y, CHARLES Z, et al. DRACO: byzantine-resilient distributed training via redundant gradients[J]. arXiv preprint, 2018, arXiv:1803.09877.

6. Sybil attacks: the attacker masquerades as participants and attacks the federated learning model, significantly degrading model performance.

FUNG C, YOON C J M, BESCHASTNIKH I. Mitigating sybils in federated learning poisoning[J]. arXiv preprint, 2018, arXiv:1808.04866.

7. Designs a data-sanitization method aimed specifically at poisoning attacks, removing poisoned or otherwise anomalous data from the model's training set.

BHOWMICK A, DUCHI J, FREUDIGER J, et al. Protection against reconstruction and its applications in private federated learning[J]. arXiv preprint, 2018, arXiv:1812.00984.

8. Building on 7, uses a data-sanitization method based on robust statistics to resist data poisoning, and proves that the method remains robust in the presence of a small number of outliers.

CARLINI N, LIU C, KOS J, et al. The secret sharer: measuring unintended neural network memorization & extracting secrets[J]. arXiv preprint, 2018, arXiv:1802.08232.

9. Attacks a federated learning system via model poisoning, by injecting a backdoor.

BAGDASARYAN E, VEIT A, HUA Y, et al. How to backdoor federated learning[C]//International Conference on Artificial Intelligence and Statistics. [S.l.:s.n.], 2020.

10. Studies model-poisoning attacks on federated learning and tunes the different attack strategies to achieve stronger attacks (i.e., making model poisoning more effective at the strategy level).

BHAGOJI A N, CHAKRABORTY S, MITTAL P, et al. Analyzing federated learning through an adversarial lens[C]//International Conference on Machine Learning. [S.l.:s.n.], 2019.

11. A defense called FoolsGold against sybil-based poisoning attacks in federated learning. (Its limitation is that it only mitigates sybil attacks and is effective only under a number of assumptions about the attack, e.g., when the attack uses a label-flipping or backdoor strategy.) A simplified sketch of its similarity heuristic follows the reference below.

FUNG C, YOON C J M, BESCHASTNIKH I. Mitigating sybils in federated learning poisoning[J]. arXiv preprint, 2018, arXiv:1808.04866.
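
A simplified sketch of FoolsGold's core heuristic, assuming only that colluding sybils pushing the same objective submit unusually similar updates: clients whose accumulated update histories have high pairwise cosine similarity are down-weighted. The paper's pardoning and logit-rescaling refinements are omitted, and `foolsgold_weights` is a hypothetical helper name.

```python
import numpy as np

def foolsgold_weights(histories: np.ndarray) -> np.ndarray:
    """histories: (n_clients, dim) running sum of each client's past updates."""
    norms = np.linalg.norm(histories, axis=1, keepdims=True)
    unit = histories / np.maximum(norms, 1e-12)
    cs = unit @ unit.T                 # pairwise cosine similarity
    np.fill_diagonal(cs, -np.inf)      # ignore self-similarity
    max_cs = cs.max(axis=1)            # each client's closest neighbour
    return np.clip(1.0 - max_cs, 0.0, 1.0)   # similar clients -> weight near 0

# Two colluding sybils (near-identical updates) and one honest client.
hist = np.array([[1.00, 1.00, 0.0],
                 [0.99, 1.01, 0.0],
                 [-0.50, 0.20, 0.8]])
print(foolsgold_weights(hist))  # sybils get weights near 0, honest client ~1
```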

12. Inference attacks, also known as exploratory attacks (intrusion attacks).

BARRENO M, NELSON B, SEARS R, et al. Can machine learning be secure?[C]//The 2006 ACM Symposium on Information, Computer and Communications Security. New York: ACM Press, 2006.

13. Homomorphic encryption offers strong privacy protection, but its efficiency is hard to improve; a minimal additively homomorphic sketch follows the reference below.

HALEVI S, SHOUP V. Design and implementation of a homomorphic-encryption library[J]. IBM Research (Manuscript), 2013, 6: 12-15
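
To make item 13's trade-off concrete, here is a minimal sketch of additively homomorphic gradient aggregation using the third-party python-paillier library (`pip install phe`). The toy gradient values are illustrative, and in a real deployment the private key would not be held by the aggregator; the big-integer operation per gradient entry also hints at why efficiency is hard to improve.

```python
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Two clients encrypt their local gradient entries under the shared public key.
grads_a = [0.12, -0.05, 0.30]
grads_b = [0.08, 0.02, -0.10]
enc_a = [public_key.encrypt(g) for g in grads_a]
enc_b = [public_key.encrypt(g) for g in grads_b]

# The aggregator adds ciphertexts without decrypting anything:
# E(x) + E(y) decrypts to x + y under an additively homomorphic scheme.
enc_sum = [ca + cb for ca, cb in zip(enc_a, enc_b)]

# Only the private-key holder recovers the aggregate, never individual gradients.
print([private_key.decrypt(c) for c in enc_sum])   # ~[0.20, -0.03, 0.20]
```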

14. Provides a secure, efficient, and accurate "doubly homomorphic" encryption algorithm that supports flexible operations on ciphertexts, used for numerical analysis of e-commerce data.

YUAN J W, YU S C. Privacy preserving back-propagation neural network learning made practical with cloud computing[J]. IEEE Transactions on Parallel and Distributed Systems, 2014, 25(1): 212-221.

15. Implements a privacy-preserving protocol for horizontal linear regression using homomorphic encryption.

HO Q R, CIPAR J, CUI H G, et al. More effective distributed ML via a stale synchronous parallel parameter server[C]//Advances in Neural Information Processing Systems. 2013: 1223-1231.

16. Privacy-preserving federated learning on vertically partitioned data via entity resolution and homomorphic encryption.

HARDY S, HENECKA W, IVEY-LAW H, et al. Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption[J]. arXiv preprint, 2017, arXiv:1711.10677.

17. Differential privacy.

DWORK C. A firm foundation for private data analysis[J]. Communications of the ACM, 2011, 54(1): 86-95.

18. Gradient-based federated learning methods often apply differential privacy by randomly perturbing the intermediate outputs at each iteration (that is, the federated learning process does not reveal whether any particular sample was used).

① DWORK C, ROTH A. The algorithmic foundations of differential privacy[J]. Foundations and Trends in Theoretical Computer Science, 2014, 9(3-4): 211-407.
② BASSILY R, SMITH A, THAKURTA A. Private empirical risk minimization: Efficient algorithms and tight error bounds[C]//2014 IEEE 55th Annual Symposium on Foundations of Computer Science. Piscataway: IEEE Press, 2014: 464-473.
③ PAPERNOT N, SONG S, MIRONOV I, et al. Scalable private learning with PATE[J]. arXiv preprint, 2018, arXiv:1802.08908.

19. Following 18, a great many perturbation schemes are now in use, e.g., adding Gaussian noise to the gradient data or applying Laplace noise; a minimal sketch of gradient perturbation follows the references below.

① WU X, LI F G, KUMAR A, et al. Bolt-on differential privacy for scalable stochastic gradient descent-based analytics[C]//The 2017 ACM International Conference on Management of Data. New York: ACM Press, 2017: 1307-1322.
② MELIS L, DANEZIS G, DE CRISTOFARO E. Efficient private statistics with succinct sketches[J]. arXiv preprint, 2015, arXiv:1508.06110.
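
As a minimal sketch of the perturbation idea in items 18-19 (not any single paper's algorithm): clip each gradient contribution to bound its sensitivity, then add Gaussian or Laplace noise scaled to that bound. `clip_norm` and `noise_multiplier` are illustrative placeholders; a real deployment calibrates them to a target (ε, δ).

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_perturb(grad, clip_norm=1.0, noise_multiplier=1.1, mechanism="gaussian"):
    # 1) Clip: cap the contribution's L2 norm, bounding the query's sensitivity.
    scale = min(1.0, clip_norm / max(np.linalg.norm(grad), 1e-12))
    clipped = grad * scale
    # 2) Perturb: noise scale grows with the sensitivity bound.
    if mechanism == "gaussian":
        noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    else:  # Laplace noise, as used in pure epsilon-DP mechanisms
        noise = rng.laplace(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

g = np.array([3.0, -4.0])        # raw gradient with L2 norm 5, clipped to norm 1
print(dp_perturb(g))
print(dp_perturb(g, mechanism="laplace"))
```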

20. Choudhury O et al. successfully deploy differential privacy within a federated learning framework to analyze health-related data, but their experiments show that differential privacy may incur noticeably larger loss values.

CHOUDHURY O, GKOULALAS-DIVANIS A, SALONIDIS T, et al. Differential privacy-enabled federated learning for sensitive health data[J]. arXiv preprint, 2019, arXiv:1910.02578.

21. Geyer R C et al. demonstrate the effectiveness of differential privacy for protecting data holders' privacy, and argue that a large number of data holders makes differentially private federated learning more stable and more accurate.

GEYER R C, KLEIN T, NABI M. Differentially private federated learning: a client level perspective[J]. arXiv preprint, 2017, arXiv:1712.07557.

22. Pettai M et al. combine secure multi-party computation with differential privacy to protect data coming from different data holders.

PETTAI M, LAUD P. Combining differential privacy and secure multiparty computation[C]//The 31st Annual Computer Security Applications Conference. New York: ACM Press, 2015.

23. Jeong E et al. also design a privacy-preserving federated learning system that combines secure multi-party computation with differential privacy; by coupling noise-reduced differential privacy with additive homomorphic encryption, it effectively protects the privacy of the federated learning system.

JEONG E, OH S, KIM H, et al. Communication-efficient on-device machine learning: federated distillation and augmentation under non-iid private data[J]. arXiv preprint, 2018, arXiv: 1811.11479.

24. Bonawitz K et al. combine secret-sharing techniques from failure-robust protocols with authenticated encryption to securely aggregate high-dimensional data; a toy sketch of the pairwise-masking idea follows the reference below.

BONAWITZ K, IVANOV V, KREUTER B, et al. Practical secure aggregation for privacy-preserving machine learning[C]//The 2017 ACM SIGSAC Conference on Computer and Communications Security. New York: ACM Press, 2017: 1175-1191.
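
A toy sketch of the pairwise-masking idea behind Bonawitz et al.'s secure aggregation: each pair of clients derives a shared random mask that one adds and the other subtracts, so all masks cancel in the server's sum and only the total is revealed. Key agreement, secret sharing for dropped-out clients, and authenticated channels are omitted; `pairwise_mask` stands in for a PRG seeded by a shared secret.

```python
import numpy as np

DIM, N = 4, 3
rng = np.random.default_rng(42)
updates = [rng.normal(size=DIM) for _ in range(N)]  # private model updates

def pairwise_mask(i, j, dim):
    # Stand-in for a PRG seeded by a key-agreement secret between clients i and j.
    seed = hash((min(i, j), max(i, j))) % 2**32
    return np.random.default_rng(seed).normal(size=dim)

masked = []
for i in range(N):
    y = updates[i].copy()
    for j in range(N):
        if j == i:
            continue
        m = pairwise_mask(i, j, DIM)
        y += m if i < j else -m      # one side adds the mask, the other subtracts
    masked.append(y)

# The server sums the masked vectors; every pairwise mask cancels out.
print(np.allclose(sum(masked), sum(updates)))  # True
```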

25. Xu R H et al. propose a new encryption approach, HybridAlpha, which combines differential privacy with secure multi-party computation based on functional encryption, and is shown to have good communication efficiency.

XU R H, BARACALDO N, ZHOU Y, et al. HybridAlpha: an efficient approach for privacy-preserving federated learning[C]//The 12th ACM Workshop on Artificial Intelligence and Security. New York: ACM Press, 2019.
