talha104anwar-eng/MSc-CTI-Robustness-Project


An adversarial robustness and explainability machine learning framework for phishing detection that exposes model vulnerabilities and validates defense and interpretation methods.


# MSc CTI Robustness and Explainability Framework

**Robust and Explainable Machine Learning for Predictive Cyber Threat Intelligence**

Implementation of an integrated framework for adversarially robust and explainable cyber threat intelligence systems.

## 👥 Authors

- Muhammad Sajid Iqbal (Component 1: Baseline Models)
- Aneela Shafique (Component 2: Adversarial Attacks)
- Muhammad Arfan (Component 3: Defense Mechanisms)
- Muhammad Talha Anwar (Component 4: Explainability)

**Institution:** University of the West of Scotland
**Programme:** MSc Information Technology
**Supervisor:** Dr. Haider Ali

## 📋 Project Overview

This project addresses critical vulnerabilities in machine-learning-based cyber threat intelligence systems by:

1. **Building baseline ML models** for phishing URL detection
2. **Assessing adversarial vulnerabilities** through FGSM and PGD attacks
3. **Implementing defense mechanisms** via adversarial training
4. **Providing explainability** through SHAP and LIME analysis

## 🎯 Research Objectives

- Develop high-accuracy baseline models for CTI threat detection
- Quantify adversarial vulnerability across different model architectures
- Implement and evaluate defense mechanisms to close robustness gaps
- Ensure model interpretability through explainable AI techniques

## 📊 Dataset

**Source:** Kaggle Phishing Websites Dataset
**Samples:** 22,110 URLs
**Features:** 30 URL-based characteristics
**Classes:** Binary (Phishing / Legitimate)

The dataset includes structural, lexical, and content-based URL features for comprehensive phishing detection.
## 🛠️ Installation

### Prerequisites

- Python 3.9.7 or higher
- pip package manager
- 4GB RAM minimum (8GB recommended)
- 2GB free disk space

### Step 1: Clone the Repository

```
git clone https://github.com/yourusername/MSc-CTI-Robustness-Project.git
cd MSc-CTI-Robustness-Project
```

### Step 2: Install Dependencies

```
pip install -r requirements.txt
```

This installs:

- pandas, numpy (data handling)
- scikit-learn, imbalanced-learn (machine learning)
- adversarial-robustness-toolbox (attacks/defenses)
- shap, lime (explainability)
- matplotlib, seaborn (visualization)

## 🚀 Usage

### Option 1: Run All Components (Recommended)

```
python run_all.py
```

Executes all four components sequentially (~20-30 minutes).

### Option 2: Run Individual Components

```
# Component 1: Train baseline models
python component1_baseline_models.py

# Component 2: Generate adversarial attacks
python component2_adversarial_attacks.py

# Component 3: Apply defense mechanisms
python component3_defense_mechanisms.py

# Component 4: Explainability analysis
python component4_explainability.py
```

## 📁 Project Structure

```
MSc-CTI-Project/
│
├── Dataset_Phising_Website.csv        # Input data
│
├── component1_baseline_models.py      # Baseline model training
├── component2_adversarial_attacks.py  # FGSM/PGD attacks
├── component3_defense_mechanisms.py   # Adversarial training
├── component4_explainability.py       # SHAP/LIME analysis
│
├── run_all.py                         # Master execution script
├── requirements.txt                   # Python dependencies
├── README.md                          # This file
│
├── results/                           # Generated results
│   ├── *.csv                          # Performance metrics
│   └── *.png                          # Visualizations
│
└── models/                            # Trained models
    └── *.pkl                          # Serialized models
```

## 📊 Components

### Component 1: Baseline Predictive Models

**Responsibility:** Muhammad Sajid Iqbal

**Implementation:**
- Logistic Regression (linear baseline)
- Random Forest (ensemble method)
- Support Vector Machine with RBF kernel

**Outputs:**
- Confusion matrices
- ROC curves
- Performance metrics (accuracy, precision, recall, F1)
- Cross-validation results

**Key Results:**
- Random Forest: 98.46% accuracy
- SVM: 96.29% accuracy
- Logistic Regression: 92.58% accuracy

### Component 2: Adversarial Attack Assessment

**Responsibility:** Aneela Shafique

**Implementation:**
- FGSM attacks (ε = 0.01, 0.05, 0.1, 0.2)
- PGD attacks (iterations: 5, 10, 20)
- Transferability analysis

**Outputs:**
- Attack success rates
- Accuracy degradation curves
- Transferability matrices

**Key Results:**
- Maximum attack success rate: 47% (FGSM ε = 0.2 on Random Forest)
- PGD more effective than FGSM
- Moderate cross-model transferability

### Component 3: Defense Mechanisms

**Responsibility:** Muhammad Arfan

**Implementation:**
- Adversarial training (ratios: 20/80, 30/70, 40/60)
- Isolation Forest anomaly detection
- Input sanitization

**Outputs:**
- Robustness gap reduction
- Defense effectiveness comparisons
- Anomaly detection rates

**Key Results:**
- Adversarial accuracy improved from 52% to 99.89%
- Optimal ratio: 30/70 adversarial/clean
- Isolation Forest: 73% adversarial detection rate

### Component 4: Explainable AI

**Responsibility:** Muhammad Talha Anwar

**Implementation:**
- SHAP (TreeExplainer for Random Forest)
- LIME (local perturbation-based)
- Consistency analysis

**Outputs:**
- Feature importance rankings
- SHAP-LIME correlation
- Baseline vs. defended comparison

**Key Results:**
- SHAP-LIME correlation: ρ = 0.78-0.81
- Top features: SSL certificate, URL anchors, web traffic
- Feature-importance shift after defense: ρ = 0.49

## 📈 Results

All results are saved in the `results/` folder.

### CSV Files

- `baseline_results.csv` - Model performance metrics
- `fgsm_attack_results.csv` - FGSM attack data
- `pgd_attack_results.csv` - PGD attack data
- `defense_results.csv` - Defense effectiveness
- `explainability_results.csv` - SHAP-LIME consistency

### Visualizations

- Confusion matrices for all models
- ROC curve comparison
- Attack effectiveness plots
- Defense comparison charts
- Feature importance rankings
- SHAP summary plots

## 🔬 Methodology Highlights

### Data Preprocessing

- 80/20 stratified train-test split
- SMOTE for class balancing (training set only)
- StandardScaler normalization
- Feature extraction from URL characteristics

### Model Training

- GridSearchCV hyperparameter optimization
- 5-fold stratified cross-validation
- Fixed random seed (42) for reproducibility
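The FGSM attack evaluated in Component 2 is conceptually simple: each feature is perturbed by ε in the direction of the sign of the loss gradient, which bounds the perturbation in the L∞ norm. The project uses the ART library for this; as an illustration only, here is a hand-rolled FGSM against a logistic-regression surrogate, where the input gradient is available in closed form.

```python
# Hand-rolled FGSM against a logistic-regression surrogate (an illustration,
# not the project's ART-based implementation). FGSM adds an L-infinity-bounded
# perturbation eps * sign(d loss / d x); for logistic regression that gradient
# is (sigmoid(w.x + b) - y) * w per sample.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=30, random_state=42)
clf = LogisticRegression(max_iter=1000).fit(X, y)

def fgsm(clf, X, y, eps):
    p = clf.predict_proba(X)[:, 1]       # sigmoid(w.x + b)
    grad = (p - y)[:, None] * clf.coef_  # d loss / d x, one row per sample
    return X + eps * np.sign(grad)

print(f"clean accuracy: {clf.score(X, y):.3f}")
for eps in (0.01, 0.05, 0.1, 0.2):
    adv = fgsm(clf, X, y, eps)
    print(f"eps={eps}: adversarial accuracy {clf.score(adv, y):.3f}")
```

Accuracy degrades as ε grows, matching the pattern reported in the attack results above.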
### Attack Generation

- Adversarial Robustness Toolbox (ART) library
- White-box threat model assumption
- L∞ perturbation budget constraints

### Defense Implementation

- Adversarial training with mixed datasets
- Isolation Forest (contamination = 0.1)
- Feature constraint validation

### Explainability

- SHAP TreeExplainer (exact Shapley values)
- LIME with 1,000-5,000 perturbations
- Spearman rank correlation for consistency

## 📚 Key Findings

1. **Baseline performance:** Random Forest achieves 98.46% accuracy, outperforming SVM (96.29%) and Logistic Regression (92.58%)
2. **Adversarial vulnerability:** Even high-performing models are significantly vulnerable, with accuracy dropping to 52% under FGSM attacks
3. **Defense effectiveness:** Adversarial training restores adversarial accuracy to 99.89% with minimal clean-accuracy degradation
4. **Explainability:** SHAP and LIME show strong agreement (ρ > 0.78), validating interpretation consistency
5. **Feature importance:** SSL certificate status, URL anchors, and web traffic emerge as critical phishing indicators

## 📄 License

This project is submitted as part of the MSc IT dissertation requirements at the University of the West of Scotland. All code implements standard machine learning and adversarial robustness techniques using open-source libraries.

## 🙏 Acknowledgements

- **Supervisor:** Dr. Haider Ali (University of the West of Scotland)
- **Dataset:** Kaggle Phishing Websites Dataset contributors
- **Libraries:** scikit-learn, ART, SHAP, LIME development teams

## 📞 Contact

For academic inquiries related to this research:

**Programme:** MSc Information Technology
**Institution:** University of the West of Scotland
**Academic Year:** 2025-2026

## ⚙️ Technical Specifications

**Development Environment:**
- Python 3.9.7
- scikit-learn 1.0.2
- adversarial-robustness-toolbox 1.10.1
- SHAP 0.41.0
- LIME 0.2.0.1

**Hardware Requirements:**
- Minimum: 4GB RAM, dual-core CPU
- Recommended: 8GB RAM, quad-core CPU
- Execution time: 20-30 minutes on standard hardware

**Last Updated:** April 2026
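The SHAP-LIME consistency analysis reduces to a Spearman rank correlation between two per-feature importance vectors. A minimal sketch follows; the feature names and importance scores are illustrative placeholders, not the project's actual SHAP/LIME outputs.

```python
# Spearman rank correlation between two feature-importance vectors, as used
# for the SHAP-LIME consistency check. Names and scores below are made-up
# placeholders for illustration only.
from scipy.stats import spearmanr

features = ["SSL_state", "URL_anchor", "web_traffic", "prefix_suffix"]
shap_importance = [0.31, 0.24, 0.18, 0.05]  # hypothetical mean |SHAP| values
lime_importance = [0.28, 0.25, 0.07, 0.14]  # hypothetical mean |LIME| weights

rho, p_value = spearmanr(shap_importance, lime_importance)
print(f"SHAP-LIME Spearman rho = {rho:.2f}")
```

With these placeholder scores the ranking disagrees on only the bottom two features, giving ρ = 0.80, in the range the project reports (ρ = 0.78-0.81).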