depalmar/ai_for_the_win

A hands-on AI training course for security practitioners, with 50+ labs and CTF challenges that teach you to build phishing detection, log analysis, threat hunting, and other security tools using machine learning and large language models.

Stars: 117 | Forks: 18

AI for the Win - Security AI Training Platform Logo

# AI for the Win

### Build AI-Powered Security Tools | Learn by Doing

[![CI](https://static.pigsec.cn/wp-content/uploads/repos/2026/03/3fd3f19989070349.svg)](https://github.com/depalmar/ai_for_the_win/actions/workflows/ci.yml) [![OpenSSF Scorecard](https://api.scorecard.dev/projects/github.com/depalmar/ai_for_the_win/badge)](https://scorecard.dev/viewer/?uri=github.com/depalmar/ai_for_the_win) [![Python 3.10-3.12](https://img.shields.io/badge/python-3.10--3.12-blue.svg)](https://www.python.org/downloads/) [![License: Dual](https://img.shields.io/badge/License-Dual%20(MIT%20%2B%20CC%20BY--NC--SA)-blue.svg)](./LICENSE) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/depalmar/ai_for_the_win/blob/main/notebooks/lab10_phishing_classifier.ipynb) [![Docker](https://img.shields.io/badge/Docker-Ready-blue?logo=docker)](./Dockerfile)

A hands-on training program for security practitioners, focused on building AI-powered tools for threat detection, incident response, and security automation. It includes **50+ labs** (among them 9 beginner and 12 advanced labs), **4 capstone projects**, and **18 CTF challenges**, along with **sample datasets**, **walkthrough solutions**, and a **Docker lab environment**. Designed for **vibe coding** with AI assistants such as Cursor, Claude Code, and Copilot.

## What You'll Build

**Lab 10 - Phishing Classifier** that catches what rules miss:

```
$ python labs/lab10-phishing-classifier/solution/main.py
[+] Training on 1,000 labeled emails...
[+] Model: Random Forest + TF-IDF (847 features)
[+] Accuracy: 96.2% | Precision: 94.1% | Recall: 97.8%

Scanning inbox (4 new emails)...

From: security@amaz0n-verify.com
Subj: "Your account will be suspended in 24 hours"
--> PHISHING (98.2%)  [urgency + spoofed domain]

From: sarah.jones@company.com
Subj: "Q3 budget report attached"
--> LEGIT (94.6%)

From: helpdesk@paypa1.com
Subj: "Click here to verify your identity"
--> PHISHING (96.7%)  [link mismatch + typosquat]

From: it-dept@company.com
Subj: "Password expires in 7 days - reset here"
--> SUSPICIOUS (67.3%)  [needs review]

Top features that caught phishing:
  urgency_words: +0.34  (suspend, verify, immediately)
  url_mismatch:  +0.28  (display != actual link)
  domain_spoof:  +0.22  (amaz0n, paypa1)
```

**Lab 35 - LLM Log Analysis** that finds the attack in the noise:

```
+------------------------------------------------------+
| Lab 35: LLM-Powered Security Log Analysis - SOLUTION |
+------------------------------------------------------+

Security Log Analysis Pipeline

Step 1: Initializing LLM...
  LLM initialized: READY
Step 2: Parsing log entries...
  Parsing entry 1/5... Done
  Parsing entry 2/5... Done
  Parsing entry 3/5... Done
  Parsing entry 4/5... Done
  Parsing entry 5/5... Done
  Parsed 5 log entries
Step 3: Analyzing for threats...
  Found 2 threats
  Severity: 8/10
Step 4: Extracting IOCs...
  Extracted 12 IOCs
Step 5: Generating incident report...
  Report generated

================================================================
                        INCIDENT REPORT
================================================================

+--------------------------------------------------------------+
| Executive Summary                                            |
+--------------------------------------------------------------+
A critical security incident involving multi-stage attack
behavior was detected on WORKSTATION01 involving user 'jsmith'.
The attack progression includes initial PowerShell execution
downloading a payload from a suspicious external domain,
followed by system discovery commands, and culminating in
persistence establishment via Registry Run keys and Scheduled
Tasks.

+--------------------------------------------------------------+
| Timeline                                                     |
+--------------------------------------------------------------+
1  2025-01-15 03:22:10 - PowerShell downloaded payload from
   hxxp://evil-c2[.]com/payload.ps1
2  2025-01-15 03:22:15 - Discovery commands executed
   (whoami, hostname, ipconfig)
3  2025-01-15 03:22:18 - Network connection to evil-c2[.]com
   (185[.]143[.]223[.]47:443)
4  2025-01-15 03:23:00 - Registry persistence: HKCU Run keys
5  2025-01-15 03:25:00 - Scheduled Task: SecurityUpdate created

+--------------------------------------------------------------+
| MITRE ATT&CK Mapping                                         |
+--------------------------------------------------------------+
Technique ID   Technique Name                 Evidence
-------------------------------------------------------------
T1059.001      PowerShell                     DownloadString, IEX
T1082          System Information Discovery   whoami, hostname
T1547.001      Registry Run Keys              HKCU\...\Run
T1053.005      Scheduled Task                 SecurityUpdate
T1105          Ingress Tool Transfer          DownloadString

+--------------------------------------------------------------+
| Attribution Analysis                                         |
+--------------------------------------------------------------+
High Confidence: FIN7/Carbanak
  * Tooling matches known campaigns (PowerShell obfuscation)
  * Infrastructure historically associated with FIN7
  * TTP sequence is signature behavior pattern
```

## 60-Second Quick Start

**No installation required** -- run in your browser with one click:

**Beginner (no API key):**

[![Lab 02](https://img.shields.io/badge/Lab_02-Open_in_Colab-F9AB00?logo=googlecolab&logoColor=white)](https://colab.research.google.com/github/depalmar/ai_for_the_win/blob/main/notebooks/lab02_prompt_engineering.ipynb) Prompt engineering basics
[![Lab 07](https://img.shields.io/badge/Lab_07-Open_in_Colab-F9AB00?logo=googlecolab&logoColor=white)](https://colab.research.google.com/github/depalmar/ai_for_the_win/blob/main/notebooks/lab07_hello_world_ml.ipynb) Your first ML model

**Intermediate (no API key):**

[![Lab 10](https://img.shields.io/badge/Lab_10-Open_in_Colab-F9AB00?logo=googlecolab&logoColor=white)](https://colab.research.google.com/github/depalmar/ai_for_the_win/blob/main/notebooks/lab10_phishing_classifier.ipynb) ML phishing detection

**Advanced (API key required):**

[![Lab 15](https://img.shields.io/badge/Lab_15-Open_in_Colab-F9AB00?logo=googlecolab&logoColor=white)](https://colab.research.google.com/github/depalmar/ai_for_the_win/blob/main/notebooks/lab15_llm_log_analysis.ipynb) LLM-powered log analysis

## Choose Your Starting Point

| Your background | Start here | Then |
|-----------------|------------|------------|
| **New to AI?** | [Lab 02: Prompt Engineering](./labs/lab02-intro-prompt-engineering/) | -> Lab 07 -> Lab 10 |
| **New to AI/ML?** | [Lab 10: Phishing Classifier](./labs/lab10-phishing-classifier/) | -> Lab 11 -> Lab 12 |
| **Know Python, want LLM tools?** | [Lab 15: LLM Log Analysis](./labs/lab15-llm-log-analysis/) | -> Lab 16 -> Lab 18 |
| **Want a DFIR focus?** | [Lab 31: Ransomware Detection](./labs/lab31-ransomware-detection/) | -> Lab 33 -> Lab 34 |

**Tip**: Labs 00-13 are free (no API keys). The LLM labs (14+) require an API key (roughly $5-25 total).

## Lab Navigator

**Click any lab to explore** -- a learning journey from beginner to expert:
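To make the Lab 10 approach concrete, here is a minimal sketch of a TF-IDF + Random Forest phishing classifier in the style the lab output describes. This is not the lab's actual solution code, and the tiny training set below is made-up toy data, not the lab's 1,000-email dataset:

```python
# Minimal sketch of the Lab 10 idea: TF-IDF features + Random Forest.
# Toy data for illustration only -- not the lab's dataset or solution.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline

emails = [
    "Your account will be suspended in 24 hours, verify immediately",
    "Click here to verify your identity or lose access",
    "Urgent: confirm your password now to avoid suspension",
    "Q3 budget report attached, let me know if you have questions",
    "Lunch meeting moved to 1pm, see you in the usual room",
    "Here are the slides from yesterday's design review",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing, 0 = legit

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # unigram + bigram features
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
clf.fit(emails, labels)

msg = "Verify your account immediately or it will be suspended"
proba = clf.predict_proba([msg])[0][1]  # probability of the phishing class
print(f"PHISHING probability: {proba:.2f}")
```

The real lab adds engineered features on top of the text (urgency words, URL/display-text mismatch, typosquatted domains), which is where the feature-importance readout in the sample output comes from.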
Lab 00 Lab 01 Lab 02 Lab 03 Lab 04
Lab 05 Lab 06 Lab 07 Lab 08 Lab 09
Lab 10 Lab 11 Lab 12 Lab 13 Lab 14
Lab 15 Lab 16 Lab 17 Lab 18 Lab 19
Lab 20 Lab 21 Lab 22 Lab 23 Lab 24
Lab 25 Lab 26 Lab 27 Lab 28 Lab 29
Lab 30 Lab 31 Lab 32 Lab 33 Lab 34
Lab 35 Lab 36 Lab 37 Lab 38 Lab 39
Lab 40 Lab 41 Lab 42 Lab 43 Lab 44
Lab 45 Lab 46 Lab 47 Lab 48 Lab 49
Lab 50
**Legend:** Gray = Foundations (00-09, free) | Green = ML Basics (10-13, free) | Purple = LLM Basics (14-18) | Orange = Detection/DFIR (19-29) | Red = Advanced/Cloud (30-50)
Detailed lab descriptions

### Foundations (00-09) -- Environment setup and basics, no API keys required

| Lab | Topic | Description |
|-----|-------|-------------|
| [00](./labs/lab00-environment-setup/) | Setup | Environment configuration |
| [01](./labs/lab01-python-security-fundamentals/) | Python | Python fundamentals for security |
| [02](./labs/lab02-intro-prompt-engineering/) | Prompts | LLM basics using free platforms |
| [03](./labs/lab03-vibe-coding-with-ai/) | Vibe Coding | Accelerating learning with AI assistants |
| [04](./labs/lab04-ml-concepts-primer/) | ML Intro | Supervised/unsupervised learning, features, evaluation |
| [05](./labs/lab05-ai-in-security-operations/) | AI in SOC | Where AI fits, human-in-the-loop |
| [06](./labs/lab06-visualization-stats/) | Stats | Matplotlib and Seaborn for dashboards |
| [07](./labs/lab07-hello-world-ml/) | Hello ML | Build your first ML model, end to end |
| [08](./labs/lab08-working-with-apis/) | APIs | REST APIs, authentication, rate limiting |
| [09](./labs/lab09-ctf-fundamentals/) | CTF Basics | CTF mindset, encodings, flag hunting |

### ML Labs (10-13) -- Machine learning, no API keys required

| Lab | Topic | Description |
|-----|-------|-------------|
| [10](./labs/lab10-phishing-classifier/) | Phishing | TF-IDF, Random Forest, classification |
| [11](./labs/lab11-malware-clustering/) | Malware | K-Means, DBSCAN, clustering binaries |
| [12](./labs/lab12-anomaly-detection/) | Anomaly | Isolation Forest, LOF, baselining |
| [13](./labs/lab13-ml-vs-llm/) | ML vs LLM | When to use which, cost trade-offs |

### LLM Labs (14-18) -- Large language models and agents

| Lab | Topic | Description |
|-----|-------|-------------|
| [14](./labs/lab14-first-ai-agent/) | Agent | ReAct pattern, tool-calling basics |
| [15](./labs/lab15-llm-log-analysis/) | Logs | Prompt engineering, IOC extraction |
| [16](./labs/lab16-threat-intel-agent/) | Intel | LangChain, autonomous investigation |
| [17](./labs/lab17-embeddings-vectors/) | Vectors | Embeddings, similarity search |
| [18](./labs/lab18-security-rag/) | RAG | ChromaDB, retrieval-augmented Q&A |

### Detection & DFIR Labs (19-35) -- Pipelines, automation, and forensics

| Lab | Topic | Description |
|-----|-------|-------------|
| [19](./labs/lab19-binary-basics/) | Binary | PE structure, entropy analysis |
| [20](./labs/lab20-sigma-fundamentals/) | Sigma | Log-based detection rules |
| [21](./labs/lab21-yara-generator/) | YARA | AI-assisted rule generation |
| [22](./labs/lab22-vuln-scanner-ai/) | Vuln | CVSS, risk prioritization |
| [23](./labs/lab23-detection-pipeline/) | Pipeline | ML filtering + LLM enrichment |
| [24](./labs/lab24-monitoring-ai-systems/) | Monitoring | Observability, cost tracking |
| [25](./labs/lab25-dfir-fundamentals/) | DFIR | Forensics fundamentals, evidence collection |
| [26](./labs/lab26-windows-event-log-analysis/) | Windows Logs | Event log parsing, detections |
| [27](./labs/lab27-windows-registry-forensics/) | Registry | Registry forensics, persistence |
| [28](./labs/lab28-live-response/) | Live IR | Live response, triage procedures |
| [29](./labs/lab29-ir-copilot/) | IR Bot | Conversational IR, playbook execution |
| [30](./labs/lab30-ransomware-fundamentals/) | Ransom Families | Ransomware families, attack lifecycle |
| [31](./labs/lab31-ransomware-detection/) | Ransomware | Entropy, behavioral detection |
| [32](./labs/lab32-ransomware-simulation/) | Purple Team | Safe adversary emulation |
| [33](./labs/lab33-memory-forensics-ai/) | Memory | Volatility3, process injection |
| [34](./labs/lab34-c2-traffic-analysis/) | C2 | Beaconing, DNS tunneling, JA3 |
| [35](./labs/lab35-lateral-movement-detection/) | Lateral Movement | Auth anomalies, graph paths |

### Expert Labs (36-50) -- Adversarial ML, cloud, and advanced topics

| Lab | Topic | Description |
|-----|-------|-------------|
| [36](./labs/lab36-threat-actor-profiling/) | Actors | TTP extraction, attribution |
| [37](./labs/lab37-ai-powered-threat-actors/) | AI Threats | Deepfakes, AI-generated phishing |
| [38](./labs/lab38-ml-security-intro/) | MLSec | Data poisoning, model security |
| [39](./labs/lab39-adversarial-ml/) | Adversarial ML | Evasion attacks, robust defenses |
| [40](./labs/lab40-llm-security-testing/) | LLM Security | Prompt injection testing, jailbreaks |
| [41](./labs/lab41-model-monitoring/) | Model Monitoring | Drift detection, adversarial inputs |
| [42](./labs/lab42-fine-tuning-security/) | Fine-tuning | LoRA, custom embeddings |
| [43](./labs/lab43-rag-security/) | RAG Security | Knowledge-base poisoning, context sanitization |
| [44](./labs/lab44-cloud-security-fundamentals/) | Cloud Basics | Shared responsibility, IAM |
| [45](./labs/lab45-cloud-security-ai/) | Cloud | AWS/Azure/GCP, CloudTrail |
| [46](./labs/lab46-container-security/) | Containers | Kubernetes, runtime detection |
| [47](./labs/lab47-serverless-security/) | Serverless | Lambda, event injection |
| [48](./labs/lab48-cloud-ir-automation/) | Cloud IR | Automated containment, evidence |
| [49](./labs/lab49-llm-red-teaming/) | Red Team | Prompt injection, jailbreaking |
| [50](./labs/lab50-purple-team-ai/) | Purple Team AI | Automated attack simulation |
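Labs 19 and 31 both lean on entropy analysis: encrypted or packed data approaches the maximum of 8 bits of entropy per byte, while plain text and typical file headers sit far lower. A minimal sketch of the idea (not the labs' actual code):

```python
# Minimal sketch of the entropy heuristic behind Labs 19 and 31.
# Encrypted/packed bytes approach 8 bits/byte; text sits much lower.
# Illustration only -- not the labs' actual code.
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    total = len(data)
    counts = Counter(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

text = b"The quick brown fox jumps over the lazy dog. " * 100
random_blob = os.urandom(4096)  # stand-in for ciphertext/packed data

print(f"plain text:  {shannon_entropy(text):.2f} bits/byte")
print(f"random data: {shannon_entropy(random_blob):.2f} bits/byte")
```

A common ransomware-detection heuristic flags files whose entropy climbs above roughly 7.5 bits/byte, especially when many files cross that threshold in a short window.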
## Capstone Projects

| Project | Difficulty | Focus |
|---------|------------|-------|
| **Security Analyst Copilot** | Advanced | LLM agents, IR automation |
| **Automated Threat Hunter** | Advanced | ML detection, pipelines |
| **Malware Analysis Assistant** | Intermediate | Static analysis, YARA |
| **Vulnerability Intel Platform** | Intermediate | RAG, prioritization |

Each project includes starter code, requirements, and grading criteria. See [`capstone-projects/`](./capstone-projects/).

## Local Setup

### System Requirements

| Requirement | Minimum | Recommended |
|-------------|---------|-------------|
| **Python** | 3.10 | 3.10-3.12 (PyTorch does not yet support 3.13+) |
| **RAM** | 8GB | 16GB (for local LLMs) |
| **OS** | Windows, macOS, Linux | Any |
| **Editor** | Any | VS Code, Cursor, PyCharm |
| **Git** | Required | - |
| **Docker** | Optional | For containerized labs |
| **API Key** | Labs 14+ only | Free tiers available |

### Option 1: Docker (Easiest!)

```
git clone https://github.com/depalmar/ai_for_the_win.git
cd ai_for_the_win/docker

# Start Jupyter only (fastest -- enough for most labs)
docker compose up -d jupyter

# Or start everything (needs 8GB+ RAM)
docker compose up -d

# Open Jupyter Lab: http://localhost:8888 (token: aiforthewin)
```

Available services: Jupyter Lab, Elasticsearch, Kibana, PostgreSQL, Redis, MinIO, Ollama (local LLMs), ChromaDB (vectors). Start only what you need -- see [`docker/README.md`](./docker/README.md) for service configuration.

### Option 2: Local Python Install

```
# 1. Clone repository
git clone https://github.com/depalmar/ai_for_the_win.git
cd ai_for_the_win

# 2. Create virtual environment (use python3.12 if you have multiple versions)
python -m venv venv
source venv/bin/activate   # Windows: .\venv\Scripts\activate

# 3. Install dependencies (pick one)
pip install -r requirements.txt   # Everything (all providers)
pip install -e ".[anthropic]"     # Core + Claude (recommended)
pip install -e ".[ollama]"        # Core + Ollama (free, local)
pip install -e "."                # Core only (Labs 00-13, no LLM)

# 4. Verify setup
python scripts/verify_setup.py

# 5. Start with Lab 00 (no API key needed)
cd labs/lab00-environment-setup
```
Dependency install failing? If `pip install` hangs or fails with `resolution-too-deep`:

1. **Check your Python version** -- it must be 3.10-3.12 (`python --version`)
2. **Try `uv`** (a faster resolver): `pip install uv && uv pip install -r requirements.txt`
3. **Install selectively**: use `pip install -e ".[anthropic]"` instead of the full `requirements.txt`

See the [troubleshooting guide](./docs/guides/troubleshooting-guide.md) for details.
### API Keys (for Labs 14+)

```
# Copy the example env
cp .env.example .env

# Edit .env in your preferred editor and add an API key
# Important: do not paste keys into the terminal (they end up in shell history)
# Example: ANTHROPIC_API_KEY=your-key-here

# Verify setup
python scripts/verify_setup.py
```

| Variable | Description | Required? |
|----------|-------------|----------|
| `ANTHROPIC_API_KEY` | Claude API | One LLM key needed |
| `OPENAI_API_KEY` | GPT-4/5 API | One LLM key needed |
| `GOOGLE_API_KEY` | Gemini API | One LLM key needed |
| `VIRUSTOTAL_API_KEY` | VirusTotal | Optional |

### Running Tests

```
pytest tests/ -v                 # All tests
pytest tests/test_lab01*.py -v   # Single lab
pytest tests/ --cov=labs         # With coverage
docker compose run test          # In Docker
```

## Resources

| Resource | Description |
|----------|-------------|
| [Environment Setup]( | |

Ready to build AI-powered security tools?
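For a sense of what a key check like `scripts/verify_setup.py` might involve, here is a hypothetical, standard-library-only sketch that reads a `.env` file and reports which LLM keys are present. The `load_env` helper and its exact parsing rules are illustrative assumptions; the repo's own script is the authoritative check:

```python
# Hypothetical minimal API-key sanity check (illustration only --
# the repo's scripts/verify_setup.py is the authoritative version).
# load_env is a tiny hand-rolled .env parser: KEY=value lines,
# blank lines and '#' comments ignored.
import os
from pathlib import Path

LLM_KEYS = ("ANTHROPIC_API_KEY", "OPENAI_API_KEY", "GOOGLE_API_KEY")

def load_env(path: str = ".env") -> dict[str, str]:
    """Parse a simple KEY=value .env file into a dict."""
    env: dict[str, str] = {}
    p = Path(path)
    if p.exists():
        for line in p.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                env[key.strip()] = value.strip()
    return env

env = {**load_env(), **os.environ}  # real environment wins over .env values
found = [k for k in LLM_KEYS if env.get(k)]
if found:
    print(f"OK: LLM key(s) present: {', '.join(found)}")
else:
    print("No LLM key found -- Labs 00-13 still work, Labs 14+ will not.")
```

Editing `.env` in a text editor (rather than exporting keys on the command line) keeps secrets out of your shell history, which is why the instructions above warn against pasting keys into the terminal.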
Start in Colab | Local Setup | Full Curriculum
