DeepSpaceHarbor/Awesome-AI-Security

A curated list of AI security resources, collecting classic papers, open-source tools, and references on adversarial examples, evasion attacks, poisoning attacks, and related topics.

Stars: 1616 | Forks: 217

# Awesome AI Security ![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)

A curated list of AI security resources, inspired by [awesome-adversarial-machine-learning](https://github.com/yenchenlin/awesome-adversarial-machine-learning) and [awesome-ml-for-cybersecurity](https://github.com/jivoi/awesome-ml-for-cybersecurity).

#### Legend:

|Type|Icon|
|---|---|
| Research | ![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png) |
| Slides | ![](https://cdn3.iconfinder.com/data/icons/tango-icon-library/48/x-office-presentation-32.png) |
| Video | ![](https://cdn2.iconfinder.com/data/icons/snipicons/500/video-32.png) |
| Website / blog post | ![](https://cdn3.iconfinder.com/data/icons/tango-icon-library/48/internet-web-browser-32.png) |
| Code | ![](https://cdn2.iconfinder.com/data/icons/snipicons/500/application-code-32.png) |
| Other | ![](https://cdn3.iconfinder.com/data/icons/tango-icon-library/48/emblem-symbolic-link-32.png) |

## Keywords:

- [Adversarial examples](#-adversarial-examples)
- [Evasion](#-evasion)
- [Poisoning](#-poisoning)
- [Feature selection](#-feature-selection)
- Tutorials
- [Misc](#-misc)
- [Code](#-code)
- [Links](#-links)

## [▲](#keywords) Adversarial examples

|Type|Title|
|---|:---|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Explaining and Harnessing Adversarial Examples](https://arxiv.org/abs/1412.6572)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples](https://arxiv.org/abs/1605.07277)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Delving into Transferable Adversarial Examples and Black-box Attacks](https://arxiv.org/abs/1611.02770)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [On the (Statistical) Detection of Adversarial Examples](https://arxiv.org/abs/1702.06280)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [The Space of Transferable Adversarial Examples](https://arxiv.org/abs/1704.03453)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Adversarial Attacks on Neural Network Policies](http://rll.berkeley.edu/adversarial/)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Adversarial Perturbations Against Deep Neural Networks for Malware Classification](https://arxiv.org/abs/1606.04435)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Crafting Adversarial Input Sequences for Recurrent Neural Networks](https://arxiv.org/abs/1604.08275)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Practical Black-Box Attacks against Machine Learning](https://arxiv.org/abs/1602.02697)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Adversarial Examples in the Physical World](https://arxiv.org/abs/1607.02533)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Robust Physical-World Attacks on Deep Learning Models](https://arxiv.org/abs/1707.08945)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Can You Fool AI with Adversarial Examples on a Visual Turing Test?](https://arxiv.org/abs/1709.08693)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Synthesizing Robust Adversarial Examples](https://arxiv.org/abs/1707.07397)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Defensive Distillation is Not Robust to Adversarial Examples](http://nicholas.carlini.com/papers/2016_defensivedistillation.pdf)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Vulnerability of Machine Learning Models to Adversarial Examples](http://ceur-ws.org/Vol-1649/187.pdf)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Adversarial Examples for Evaluating Reading Comprehension Systems](https://nlp.stanford.edu/pubs/jia2017adversarial.pdf)|
|![](https://cdn2.iconfinder.com/data/icons/snipicons/500/video-32.png)| [Adversarial Examples and Adversarial Training - lecture by Ian Goodfellow at Stanford](https://www.youtube.com/watch?v=CIfsB_EYsVI)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Tactics of Adversarial Attack on Deep Reinforcement Learning Agents](http://yclin.me/adversarial_attack_RL/)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey](https://arxiv.org/abs/1801.00553)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Did You Hear That? Adversarial Examples Against Automatic Speech Recognition](https://arxiv.org/abs/1801.00554)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Adversarial Manipulation of Deep Representations](https://arxiv.org/abs/1511.05122)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Exploring the Space of Adversarial Images](https://arxiv.org/abs/1510.05328)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Note on Attacking Object Detectors with Adversarial Stickers](https://arxiv.org/abs/1712.08062)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Adversarial Patch](https://arxiv.org/abs/1712.09665)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [LOTS about Attacking Deep Features](https://arxiv.org/abs/1611.06179)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN](https://arxiv.org/abs/1702.05983)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Adversarial Images for Variational Autoencoders](https://arxiv.org/abs/1612.00155)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Delving into Adversarial Attacks on Deep Policies](https://arxiv.org/abs/1705.06452)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Simple Black-Box Adversarial Perturbations for Deep Networks](https://arxiv.org/abs/1612.06299)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Moosavi-Dezfooli_DeepFool_A_Simple_CVPR_2016_paper.pdf)|

## [▲](#keywords) Evasion

|Type|Title|
|---|:---|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)|[Query Strategies for Evading Convex-Inducing Classifiers](https://people.eecs.berkeley.edu/~adj/publications/paper-files/1007-0484v1.pdf)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)|[Evasion Attacks Against Machine Learning at Test Time](https://pralab.diee.unica.it/sites/default/files/Biggio13-ecml.pdf)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)|[Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers](http://evademl.org/docs/evademl.pdf)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)|[Looking at the Bag is Not Enough to Find the Bomb: An Evasion of Structural Methods for Malicious PDF Files Detection](https://pralab.diee.unica.it/sites/default/files/maiorca_ASIACCS13.pdf)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)|[Generic Black-Box End-to-End Attack Against RNNs and Other API Calls Based Malware Classifiers](https://arxiv.org/abs/1707.05970)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)|[Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition](https://www.cs.cmu.edu/~sbhagava/papers/face-rec-ccs16.pdf)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Fast Feature Fool: A Data Independent Approach to Universal Adversarial Perturbations](https://arxiv.org/abs/1707.05572v1)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [One Pixel Attack for Fooling Deep Neural Networks](https://arxiv.org/abs/1710.08864v1)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition](https://arxiv.org/abs/1801.00349)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [RHMD: Evasion-Resilient Hardware Malware Detectors](http://www.cs.ucr.edu/~kkhas001/pubs/micro17-rhmd.pdf)|

## [▲](#keywords) Poisoning

|Type|Title|
|---|:---|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png) ![](https://cdn3.iconfinder.com/data/icons/tango-icon-library/48/x-office-presentation-32.png)|[Poisoning Behavioral Malware Clustering](http://pralab.diee.unica.it/en/node/1121)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)|[Efficient Label Contamination Attacks Against Black-Box Learning Models](https://www.ijcai.org/proceedings/2017/0551.pdf)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)|[Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization](https://arxiv.org/abs/1708.08689)|

## [▲](#keywords) Feature selection

|Type|Title|
|---|:---|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png) ![](https://cdn3.iconfinder.com/data/icons/tango-icon-library/48/x-office-presentation-32.png)|[Is Feature Selection Secure against Training Data Poisoning?](https://pralab.diee.unica.it/en/node/1191)|

## [▲](#keywords) Misc

|Type|Title|
|---|:---|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Breaking the Agent Backbone: Evaluating the Security of Backbone LLMs in AI Agents](https://arxiv.org/abs/2510.22620)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)|[Can Machine Learning Be Secure?](https://people.eecs.berkeley.edu/~adj/publications/paper-files/asiaccs06.pdf)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)|[On the Integrity of Deep Learning Systems in Adversarial Settings](https://etda.libraries.psu.edu/catalog/28680)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)|[Stealing Machine Learning Models via Prediction APIs](https://arxiv.org/abs/1609.02943)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)|[Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains](https://arxiv.org/abs/1703.07909)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)|[Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures](https://www.cs.cmu.edu/~mfredrik/papers/fjr2015ccs.pdf)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)|[A Methodology for Formalizing Model-Inversion Attacks](https://andrewxiwu.github.io/public/papers/2016/WFJN16-a-methodology-for-modeling-model-inversion-attacks.pdf)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)|[Adversarial Attacks Against Intrusion Detection Systems: Taxonomy, Solutions and Open Issues](https://pdfs.semanticscholar.org/d4e8/aed54dc4c6bed41651254a49d47885648142.pdf)|
|![](https://cdn3.iconfinder.com/data/icons/tango-icon-library/48/x-office-presentation-32.png)|[Adversarial Data Mining for Cyber Security](https://www.utdallas.edu/~muratk/CCS-tutorial.pdf)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [High Dimensional Spaces, Deep Learning and Adversarial Examples](https://arxiv.org/abs/1801.00634)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Neural Networks in Adversarial Setting and Ill-Conditioned Weight Space](https://arxiv.org/abs/1801.00905)|
|![](https://cdn3.iconfinder.com/data/icons/tango-icon-library/48/internet-web-browser-32.png)| [Adversarial Machines](https://medium.com/@samim/adversarial-machines-998d8362e996)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Adversarial Task Allocation](https://arxiv.org/abs/1709.00358)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks](https://arxiv.org/abs/1701.04143)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning](https://arxiv.org/abs/1712.03141)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Adversarial Robustness: Softmax versus Openmax](https://arxiv.org/abs/1708.01697)|
|![](https://cdn2.iconfinder.com/data/icons/snipicons/500/video-32.png)| [DEF CON 25 - Hyrum Anderson - Evading Next Gen AV Using AI](https://youtu.be/FGCle6T0Jpc)|
|![](https://cdn3.iconfinder.com/data/icons/tango-icon-library/48/internet-web-browser-32.png)| [Adversarial Learning for Good: My Talk at #34c3 on Deep Learning Blindspots](http://blog.kjamistan.com/adversarial-learning-for-good-my-talk-at-34c3-on-deep-learning-blindspots/)|
|![](https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png)| [Universal Adversarial Perturbations](https://arxiv.org/abs/1610.08401)|
|![](https://cdn3.iconfinder.com/data/icons/tango-icon-library/48/emblem-symbolic-link-32.png)| [Camouflage from face detection - CV Dazzle](https://www.cvdazzle.com/)|

## [▲](#keywords) Code

|Type|Title|
|---|:---|
|![](https://cdn2.iconfinder.com/data/icons/snipicons/500/application-code-32.png)|[CleverHans - A Python library to benchmark machine learning systems' vulnerability to adversarial examples](https://github.com/tensorflow/cleverhans)|
|![](https://cdn2.iconfinder.com/data/icons/snipicons/500/application-code-32.png)|[Model extraction attacks on Machine-Learning-as-a-Service platforms](https://github.com/ftramer/Steal-ML)|
|![](https://cdn2.iconfinder.com/data/icons/snipicons/500/application-code-32.png)|[Foolbox - Python toolbox to create adversarial examples](https://github.com/bethgelab/foolbox)|
|![](https://cdn2.iconfinder.com/data/icons/snipicons/500/application-code-32.png)|[The Adversarial Machine Learning Library (Ad-lib)](https://github.com/vu-aml/adlib)|
|![](https://cdn2.iconfinder.com/data/icons/snipicons/500/application-code-32.png)|[Deep-pwning](https://github.com/cchio/deep-pwning)|
|![](https://cdn2.iconfinder.com/data/icons/snipicons/500/application-code-32.png)|[DeepFool](https://github.com/lts4/deepfool)|
|![](https://cdn2.iconfinder.com/data/icons/snipicons/500/application-code-32.png)|[Universal adversarial perturbations](https://github.com/LTS4/universal)|
|![](https://cdn2.iconfinder.com/data/icons/snipicons/500/application-code-32.png)|[Malware environment for OpenAI Gym](https://github.com/endgameinc/gym-malware)|
|![](https://cdn2.iconfinder.com/data/icons/snipicons/500/application-code-32.png)|[Exploring the Space of Adversarial Images](https://github.com/tabacof/adversarial)|
|![](https://cdn2.iconfinder.com/data/icons/snipicons/500/application-code-32.png)|[StringSifter - A machine learning tool that ranks strings based on their relevance for malware analysis](https://github.com/fireeye/stringsifter)|
|![](https://cdn2.iconfinder.com/data/icons/snipicons/500/application-code-32.png)|[CAI - Cybersecurity AI framework for autonomous security testing](https://github.com/aliasrobotics/CAI)|
|![](https://cdn2.iconfinder.com/data/icons/snipicons/500/application-code-32.png)|[dstack - Confidential AI framework for secure ML/LLM deployments with hardware-enforced isolation and data privacy](https://github.com/Dstack-TEE/dstack)|
|![](https://cdn2.iconfinder.com/data/icons/snipicons/500/application-code-32.png)|[ClawMoat - Open-source runtime security scanner for AI agents that detects prompt injection, jailbreaks, PII leaks, memory poisoning, and tool misuse](https://github.com/darfaz/clawmoat)|
|![](https://cdn2.iconfinder.com/data/icons/snipicons/500/application-code-32.png)|[SkillFortify - Formal analysis and supply-chain security for agentic AI skills: sound static analysis, SAT-based dependency resolution, trust scoring, CycloneDX ASBOM; 5 theorems, F1=96.95%, 0% FP rate](https://github.com/varun369/skillfortify)|

## [▲](#keywords) Links

|Type|Title|
|---|:---|
|![](https://cdn3.iconfinder.com/data/icons/tango-icon-library/48/internet-web-browser-32.png)|[EvadeML - Machine Learning in the Presence of Adversaries](http://evademl.org/)|
|![](https://cdn3.iconfinder.com/data/icons/tango-icon-library/48/internet-web-browser-32.png)|[Adversarial Machine Learning - PRA Lab](https://pralab.diee.unica.it/en/AdversarialMachineLearning)|
|![](https://cdn3.iconfinder.com/data/icons/tango-icon-library/48/internet-web-browser-32.png)|[Adversarial examples and their implications](https://hackernoon.com/the-implications-of-adversarial-examples-deep-learning-bits-3-4086108287c7)|
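Many of the attacks collected above build on the Fast Gradient Sign Method (FGSM) introduced in "Explaining and Harnessing Adversarial Examples" (arXiv:1412.6572): perturb the input by a small step in the direction of the sign of the loss gradient. A minimal sketch on a toy logistic-regression classifier, where the gradient has a closed form (all weights and inputs below are made up for illustration; the papers apply the same idea to deep models):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, epsilon):
    """FGSM step: move x by epsilon along the sign of the gradient of
    the binary cross-entropy loss with respect to the input x."""
    p = sigmoid(w @ x + b)   # model's predicted probability of class 1
    grad_x = (p - y) * w     # d(loss)/dx for logistic regression
    return x + epsilon * np.sign(grad_x)

# Toy classifier and an input it classifies correctly.
w = np.array([2.0, -3.0, 1.0])
b = 0.5
x = np.array([0.3, -0.2, 0.4])
y = 1.0  # true label

p_clean = sigmoid(w @ x + b)          # confident in the true class
x_adv = fgsm(x, y, w, b, epsilon=0.5)
p_adv = sigmoid(w @ x_adv + b)        # pushed toward the wrong class
print(f"clean: {p_clean:.3f}, adversarial: {p_adv:.3f}")
```

With a large enough `epsilon`, the perturbed input flips the prediction even though each coordinate moved by at most `epsilon`; the deep-learning attacks in the list differ mainly in how they estimate or approximate this gradient.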