MonnAmi/homelab-ai-redteam

A private AI red-team lab built on VMware ESXi and an RTX 3060, running DeepSeek R1 fully offline, with a RAG knowledge-base pipeline planned.

# 🏠 Homelab AI Red Team Platform

## 🚧 Build Status

| Component | Status | Notes |
|---|---|---|
| 🖥️ ESXi Hypervisor | ![Done](https://img.shields.io/badge/status-DONE-brightgreen) | Running on a Supermicro H12SSL-CT |
| 🎮 RTX 3060 GPU Passthrough | ![Done](https://img.shields.io/badge/status-DONE-brightgreen) | Driver 580 + CUDA 13.0 confirmed |
| 🧠 VM3 — LLM Brain | ![Done](https://img.shields.io/badge/status-DONE-brightgreen) | DeepSeek R1 14B at ~45 tok/s |
| 🗄️ VM2 — Vector Database | ![In Progress](https://img.shields.io/badge/status-IN%20PROGRESS-orange) | Qdrant setup pending |
| 📥 VM1 — Ingestor | ![In Progress](https://img.shields.io/badge/status-IN%20PROGRESS-orange) | RAG pipeline pending |
| 🌐 VM4 — Agent / Open WebUI | ![In Progress](https://img.shields.io/badge/status-IN%20PROGRESS-orange) | Chat interface pending |
| 🔒 Air-Gap VLAN | ![Planned](https://img.shields.io/badge/status-PLANNED-blue) | Network isolation config |
| 📚 Book Ingestion | ![Planned](https://img.shields.io/badge/status-PLANNED-blue) | 10TB security library |

## ✅ Working Today

```
nvidia-smi output (confirmed working):
────────────────────────────────────────────────────────────────
NVIDIA-SMI 580.126.09   Driver Version: 580.126.09   CUDA Version: 13.0
GPU:    NVIDIA GeForce RTX 3060
VRAM:   9495MiB / 12288MiB  ← DeepSeek R1 14B loaded
Speed:  ~45 tokens/second
Proc:   /usr/local/bin/ollama   PID 23677   GPU Memory: 9486MiB
Temp:   43C   Power: 9W idle / 170W max
────────────────────────────────────────────────────────────────
```

**VM3 (ubkleinai) is fully operational:**

- ✅ Ubuntu 22.04 LTS
- ✅ NVIDIA driver 580 + CUDA 13.0
- ✅ Ollama installed and running
- ✅ DeepSeek R1 14B — 9.4GB of VRAM loaded on the GPU
- ✅ RTX 3060 PCIe passthrough confirmed

## 🗺️ Roadmap

```
Phase 1 — AI Brain          ████████████████████ 100%  ✅ DONE
Phase 2 — Vector DB         ░░░░░░░░░░░░░░░░░░░░   0%  ⏳ Tonight
Phase 3 — Book Ingestor     ░░░░░░░░░░░░░░░░░░░░   0%  ⏳ Tonight
Phase 4 — Agent / WebUI     ░░░░░░░░░░░░░░░░░░░░   0%  ⏳ Tonight
Phase 5 — Air-Gap Network   ░░░░░░░░░░░░░░░░░░░░   0%  📅 Planned
Phase 6 — Book Library      ░░░░░░░░░░░░░░░░░░░░   0%  📅 Planned
```

## 📋 Table of Contents

- [Hardware](#hardware)
- [Storage Layout](#storage-layout)
- [VM Architecture](#vm-architecture)
- [GPU Passthrough](#gpu-passthrough-rtx-3060)
- [Network Config](#network-config)
- [VM Setup Guides](#vm-setup-guides)
- [AI Stack](#ai-stack)
- [Scripts](#scripts)

## 🖥️ Hardware

| Component | Spec |
|---|---|
| **CPU** | AMD EPYC 7282 — 16c/32t, 2.8→3.2GHz, 120W |
| **Motherboard** | Supermicro H12SSL-CT ATX |
| **RAM** | 2 × Samsung 64GB DDR4-3200 ECC RDIMM = 128GB |
| **GPU** | ASUS RTX 3060 Dual OC 12GB GDDR6 PCIe 4.0 |
| **NVMe** | 2 × Samsung 990 PRO 4TB PCIe 4.0 = 8TB |
| **HDD** | 3 × Toshiba X300 14TB 7200RPM SATA |
| **RAID** | RAID 5 → 28TB usable |
| **Hypervisor** | VMware ESXi |

## 💾 Storage Layout

```
NVMe (8TB total — fast, AI workloads):
├── NVMe Disk 1 (4TB)
│   ├── VM-AI-BRAIN datastore   2TB  ✅ Active
│   └── VM-AI-STACK datastore   2TB  ⏳ Pending
└── NVMe Disk 2 (4TB)
    ├── AI Model Store          4TB  ✅ Active (DeepSeek R1 stored here)
    └── Vector DB storage       2TB  ⏳ Pending

RAID 5 HDD (28TB usable — bulk/cold storage):
├── AI-Books-Store    10TB  📅 Planned
├── VM-OS-Disks        3TB  ✅ Active
└── Backup-Archive    15TB  📅 Planned
```

## 🏗️ VM Architecture

```
┌─────────────────────────────────────────────────────────┐
│                    VMware ESXi Host                     │
│         AMD EPYC 7282 | 128GB RAM | RTX 3060            │
│                                                         │
│   ┌──────────────┐        ┌──────────────┐              │
│   │     VM1      │        │     VM2      │              │
│   │   INGESTOR   │        │  VECTOR DB   │              │
│   │  ⏳ Pending  │        │  ⏳ Pending  │              │
│   │ 192.168.1.11 │        │ 192.168.1.12 │              │
│   └──────────────┘        └──────────────┘              │
│                                                         │
│   ┌──────────────┐        ┌──────────────┐              │
│   │     VM3      │        │     VM4      │              │
│   │  LLM BRAIN   │        │   AGENT/UI   │              │
│   │  ✅ RUNNING  │        │  ⏳ Pending  │              │
│   │ 192.168.1.13 │        │ 192.168.1.14 │              │
│   │ RTX 3060 ✅  │        │              │              │
│   └──────────────┘        └──────────────┘              │
└─────────────────────────────────────────────────────────┘
```

### VM Specs

| | VM1 Ingestor | VM2 VectorDB | VM3 LLM Brain | VM4 Agent UI |
|---|---|---|---|---|
| **Hostname** | ingestor-vm | vectordb-vm | ubkleinai | agent-vm |
| **OS** | Ubuntu 22.04 | Ubuntu 22.04 | Ubuntu 22.04 | Ubuntu 22.04 |
| **vCPU** | 6 | 4 | 16 | 6 |
| **RAM** | 16GB | 16GB | 40GB | 8GB |
| **Data Disk** | 10TB RAID | 2TB NVMe | 4TB NVMe | 2TB NVMe |
| **GPU** | ❌ | ❌ | ✅ RTX 3060 | ❌ |
| **Status** | ⏳ Pending | ⏳ Pending | ✅ Running | ⏳ Pending |

## 🎮 GPU Passthrough (RTX 3060)

### Steps Performed

**Step 1 — BIOS (Supermicro H12SSL-CT):**

```
Advanced → AMD CBS → NBIO → IOMMU → Enabled
```

**Step 2 — ESXi Passthrough:**

```
ESXi UI → Host → Manage → Hardware → PCI Devices
RTX 3060 → Toggle Passthrough = ON → Reboot ESXi
VM3 Settings → Add PCI Device → RTX 3060
Memory Reservation = ALL (40GB)
```

**Step 3 — Ubuntu Driver:**

```
sudo apt install nvidia-driver-545 -y
sudo reboot
nvidia-smi   # confirmed ✅
```

## 🌐 Network Config

```
ESXi Host:     192.168.1.10
VM1 Ingestor:  192.168.1.11  ⏳
VM2 VectorDB:  192.168.1.12  ⏳
VM3 LLM:       192.168.1.13  ✅ Running
VM4 Agent:     192.168.1.14  ⏳
```

## 🔧 VM Setup Guides

### ✅ VM3 — LLM Brain (done)

```
# Ollama installed and running
curl -fsSL https://ollama.ai/install.sh | sh

# DeepSeek R1 14B pulled
ollama pull deepseek-r1:14b

# Verify GPU usage
nvidia-smi   # process: /usr/local/bin/ollama  9486MiB ✅
```

### ⏳ VM2 — Vector Database (pending)

```
# Install Docker
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER

# Run Qdrant on NVMe
docker run -d \
  --name qdrant \
  --restart always \
  -p 6333:6333 \
  -v /mnt/nvme/qdrant:/qdrant/storage \
  qdrant/qdrant
```

### ⏳ VM1 — Ingestor (pending)

```
# Install dependencies
pip install pymupdf sentence-transformers \
  qdrant-client llama-index watchdog tqdm \
  --break-system-packages

# Run the ingestion pipeline
python3 ingest.py --books-path /mnt/books \
  --qdrant-host 192.168.1.12
```

### ⏳ VM4 — Open WebUI (pending)

```
# Run Open WebUI — a private ChatGPT-style interface
docker run -d \
  --name open-webui \
  --restart always \
  -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://192.168.1.13:11434 \
  -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:main

# Access at: http://192.168.1.14:3000
```

## 🤖 AI Stack

| Component | Details | Status |
|---|---|---|
| **Ollama** | Model manager | ✅ Running |
| **DeepSeek R1 14B** | Primary LLM | ✅ GPU accelerated |
| **NVIDIA Driver** | 580.126.09 | ✅ Installed |
| **CUDA** | 13.0 | ✅ Ready |
| **Qdrant** | Vector database | ⏳ Pending |
| **Open WebUI** | Chat interface | ⏳ Pending |
| **FastAPI** | Private API | ⏳ Pending |
| **Book Pipeline** | 10TB RAG library | 📅 Planned |

### Pipeline (when complete)

```
RAID (PDFs/Books 10TB) 📅
        ↓
VM1 Ingestor ⏳
        ↓
VM2 VectorDB (Qdrant) ⏳
        ↓
VM3 LLM Brain ✅  ←── Only this part is running today
        ↓
VM4 Agent/UI ⏳
```

## 📜 Scripts

| Script | Purpose | Status |
|---|---|---|
| `scripts/lab-manager.sh` | Start/stop the AI VMs via ESXi | ✅ Ready |
| `ai-security-pipeline/ingest.py` | PDF → Qdrant pipeline | ⏳ Pending test |
| `ai-security-pipeline/query.py` | CLI RAG query interface | ⏳ Pending test |

## 🔐 Privacy & Security

```
✅ 100% offline — no internet required after setup
✅ No cloud API keys — all models run locally
⏳ Air-gapped VLAN — coming soon
⏳ UFW firewall on all VMs — coming soon
✅ ECC RAM — data integrity on all operations
✅ RAID 5 — fault tolerant storage
```

## 📄 Full Documentation

`./Homelab_AI_RedTeam_Documentation mein.docx`

Covers: hardware specs, GPU passthrough steps, VM setup guides, AI stack configuration, network layout, GitHub setup, knowledge-base strategy, and the complete build checklist.

## ⚠️ Known Risks

- RAID 5 across 3 × 14TB HDDs carries elevated URE risk during a rebuild. A fourth drive is planned to upgrade to RAID 6.
- GPU passthrough requires Memory Reservation = ALL in the ESXi VM settings.
- Mixed RAM brands (Samsung + Hynix + Micron) confirmed working — never mix RDIMM + LRDIMM.

*Built on: Ubuntu 22.04 LTS | VMware ESXi | Ollama | DeepSeek R1 | Qdrant | Docker*

*Last updated: March 2026*
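
---

### Appendix — ingestion flow sketch

The VM1 ingestion step (PDF → chunks → embeddings → Qdrant) can be sketched in Python. This is a minimal illustration, not the repository's `ingest.py`: the chunk sizes, the `all-MiniLM-L6-v2` embedding model, the `books` collection name, and both function names are assumptions chosen for the example.

```python
# Minimal sketch of the planned VM1 → VM2 ingestion flow.
# NOT the repository's ingest.py: chunk sizes, the embedding model,
# and the "books" collection name are illustrative assumptions.

CHUNK_SIZE = 800     # characters per chunk (assumed)
CHUNK_OVERLAP = 200  # characters shared between neighbouring chunks (assumed)

def chunk_text(text: str, size: int = CHUNK_SIZE,
               overlap: int = CHUNK_OVERLAP) -> list[str]:
    """Split extracted text into overlapping chunks for embedding."""
    if size <= overlap:
        raise ValueError("chunk size must exceed overlap")
    step = size - overlap
    # Walk the text in fixed strides, carrying `overlap` chars of context.
    return [text[i:i + size] for i in range(0, len(text), step)
            if text[i:i + size]]

def ingest_pdf(path: str, qdrant_host: str = "192.168.1.12") -> int:
    """Extract → chunk → embed → upsert one PDF into Qdrant."""
    import fitz                                      # PyMuPDF
    from sentence_transformers import SentenceTransformer
    from qdrant_client import QdrantClient
    from qdrant_client.models import PointStruct

    text = "".join(page.get_text() for page in fitz.open(path))
    chunks = chunk_text(text)

    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
    vectors = model.encode(chunks)

    # Assumes the "books" collection already exists with a vector size
    # matching the embedding model (384 for MiniLM).
    client = QdrantClient(host=qdrant_host, port=6333)
    client.upsert(
        collection_name="books",
        points=[
            PointStruct(id=i, vector=vec.tolist(),
                        payload={"text": chunk, "source": path})
            for i, (vec, chunk) in enumerate(zip(vectors, chunks))
        ],
    )
    return len(chunks)
```

The real pipeline would presumably also watch `/mnt/books` for new files (`watchdog` is in the dependency list) and batch uploads; this sketch handles a single file.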