oumi-ai/oumi

An open-source platform covering the full lifecycle of large language models, providing a one-stop solution from data preparation and training/fine-tuning to evaluation and deployment.

Stars: 9128 | Forks: 738

![Oumi Logo](https://static.pigsec.cn/wp-content/uploads/repos/2026/04/9404c30e9a075651.png)

[![Documentation](https://img.shields.io/badge/Documentation-oumi-blue.svg)](https://oumi.ai/docs/en/latest/index.html) [![Blog](https://img.shields.io/badge/Blog-oumi-blue.svg)](https://oumi.ai/blog) [![Twitter](https://img.shields.io/twitter/follow/Oumi_PBC)](https://x.com/Oumi_PBC) [![Discord](https://img.shields.io/discord/1286348126797430814?label=Discord)](https://discord.gg/oumi) [![PyPI version](https://badge.fury.io/py/oumi.svg)](https://badge.fury.io/py/oumi) [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Tests](https://static.pigsec.cn/wp-content/uploads/repos/2026/04/a3c3c7ff5c075652.svg)](https://github.com/oumi-ai/oumi/actions/workflows/pretest.yaml) [![GPU Tests](https://static.pigsec.cn/wp-content/uploads/repos/2026/04/fd3cd382e3075653.svg)](https://github.com/oumi-ai/oumi/actions/workflows/gpu_tests.yaml) [![GitHub Repo stars](https://img.shields.io/github/stars/oumi-ai/oumi)](https://github.com/oumi-ai/oumi/stargazers) [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) [![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit)](https://github.com/pre-commit/pre-commit) [![About](https://img.shields.io/badge/About-oumi-blue.svg)](https://oumi.ai)

### Everything you need to build state-of-the-art foundation models, end-to-end

## 🔥 News

- [2026/03] Upgraded compatibility to Transformers v5, TRL v0.30, vLLM v0.19, and veRL v0.7
- [2026/03] [MCP integration phase 1](https://github.com/oumi-ai/oumi/pull/2234): package scaffolding and dependencies for MCP server support
- [2026/03] New: `oumi deploy` command for deploying dedicated inference endpoints for Oumi models on fireworks.ai and parasail
- [2026/03] Added support for the Qwen3.5 model family
- [2026/03] Several inference-engine improvements: list_models API, better error reporting
- [2026/02] [Preview: fine-tune and deploy a 4B model for user-intent classification with the Oumi platform and Lambda](https://youtu.be/0XpfYRpd_FA)
- [2026/02] [Lambda and Oumi partner for end-to-end custom model development](https://blog.oumi.ai/p/lambda-and-oumi-partner-for-end-to)
- [2025/12] [Oumi v0.6.0 release](https://github.com/oumi-ai/oumi/releases/tag/v0.6.0), with Python 3.13 support, the `oumi analyze` CLI command, TRL 0.26+ support, and more
- [2025/12] [WeMakeDevs AI Agents Assemble hackathon: Oumi webinar on text-to-SQL fine-tuning](https://www.youtube.com/watch?v=6wPikqRZ7bQ&t=3203s)
- [2025/12] [Oumi co-sponsored the WeMakeDevs AI Agents Assemble hackathon, with over 2,000 project submissions](https://www.wemakedevs.org/hackathons/assemblehack25)
- [2025/11] [Oumi v0.5.0 release](https://github.com/oumi-ai/oumi/releases/tag/v0.5), with advanced data synthesis, hyperparameter-tuning automation, OpenEnv support, and more
- [2025/11] [Example notebook for RLVF fine-tuning with OpenEnv](https://github.com/oumi-ai/oumi/blob/main/notebooks/Oumi%20-%20OpenEnv%20GRPO%20with%20trl.ipynb), an open-source library from Meta's PyTorch team for creating, deploying, and distributing agentic RL environments
- [2025/10] [Oumi v0.4.1](https://github.com/oumi-ai/oumi/releases/tag/v0.4.1) and [v0.4.2](https://github.com/oumi-ai/oumi/releases/tag/v0.4.2) released, with Qwen3-VL and Transformers v4.56 support, data-synthesis docs and examples, and several bug fixes
Older updates

- [2025/09] [Oumi v0.4.0 release](https://github.com/oumi-ai/oumi/releases/tag/v0.4.0), with DeepSpeed support, Hugging Face Hub cache-management tools, and KTO/Vision DPO trainer support
- [2025/08] Training and inference support for OpenAI's `gpt-oss-20b` and `gpt-oss-120b`: [recipes here](https://github.com/oumi-ai/oumi/tree/main/configs/recipes/gpt_oss)
- [2025/08] August 14 webinar - [OpenAI's gpt-oss: cutting through the hype](https://youtu.be/g1PkAV7fXn0)
- [2025/08] [Oumi v0.3.0 release](https://github.com/oumi-ai/oumi/releases/tag/v0.3.0), with model quantization (AWQ), an improved LLM-as-a-Judge API, and adaptive inference
- [2025/07] Recipe for [Qwen3 235B](https://github.com/oumi-ai/oumi/blob/main/configs/recipes/qwen3/inference/235b_a22b_together_infer.yaml)
- [2025/07] July 24 webinar: ["Training state-of-the-art agent LLMs with Oumi + Lambda"](https://youtu.be/f3SU_heBP54)
- [2025/06] [Oumi v0.2.0 release](https://github.com/oumi-ai/oumi/releases/tag/v0.2.0), with GRPO fine-tuning support, many new models, and more
- [2025/06] Announced the [Data Curation for Vision-Language Models (DCVLR) competition](https://oumi.ai/blog/posts/announcing-dcvlr) at NeurIPS 2025
- [2025/06] Recipes for training, inference, and evaluation with the newly released [Falcon-H1](https://github.com/oumi-ai/oumi/tree/main/configs/recipes/falcon_h1) and [Falcon-E](https://github.com/oumi-ai/oumi/tree/main/configs/recipes/falcon_e) models
- [2025/05] Support and recipes for [InternVL3 1B](https://github.com/oumi-ai/oumi/tree/main/configs/recipes/vision/internvl3)
- [2025/04] Added training and inference support for Llama 4 models: Scout (17B active, 109B total) and Maverick (17B active, 400B total) variants, including full fine-tuning, LoRA, and QLoRA configs
- [2025/04] Recipes for the [Qwen3 model family](https://github.com/oumi-ai/oumi/tree/main/configs/recipes/qwen3)
- [2025/04] Introducing HallOumi, a state-of-the-art claim-verification model [(technical overview)](https://oumi.ai/blog/posts/introducing-halloumi)
- [2025/04] Oumi now supports two new vision-language models: [Phi4](https://github.com/oumi-ai/oumi/tree/main/configs/recipes/vision/phi4) and [Qwen 2.5](https://github.com/oumi-ai/oumi/tree/main/configs/recipes/vision/qwen2_5_vl_3b)
## 🔎 About

Oumi is a fully open-source platform that streamlines the entire lifecycle of foundation models - from data preparation and training to evaluation and deployment. Whether you're developing on a laptop, launching large-scale experiments on a cluster, or deploying models in production, Oumi provides the tools and workflows you need.

With Oumi, you can:

- 🚀 Train and fine-tune models from 10M to 405B parameters using state-of-the-art techniques (SFT, LoRA, QLoRA, GRPO, and more)
- 🤖 Work with both text and multimodal models (Llama, DeepSeek, Qwen, Phi, and others)
- 🔄 Synthesize and curate training data with LLM judges
- ⚡️ Deploy models efficiently with popular inference engines (vLLM, SGLang)
- 📊 Evaluate models comprehensively across standard benchmarks
- 🌎 Run anywhere - from laptops to clusters to clouds (AWS, Azure, GCP, Lambda, and more)
- 🔌 Integrate with both open models and commercial APIs (OpenAI, Anthropic, Vertex AI, Together, Parasail, ...)

All of this through one consistent API, with production-grade reliability and all the flexibility you need for research.

To learn more, visit [oumi.ai](https://oumi.ai/docs), or jump straight to the [quickstart guide](https://oumi.ai/docs/en/latest/get_started/quickstart.html).

## 🚀 Quickstart

| **Notebook** | **Try in Colab** | **Goal** |
|----------|--------------|-------------|
| **🎯 Getting started: Overview** | Open In Colab | Quick tour of core features: training, evaluation, inference, and job management |
| **🔧 Model Finetuning Guide** | Open In Colab | End-to-end guide to LoRA tuning, including data preparation, training, and evaluation |
| **📚 Model Distillation** | Open In Colab | Guide to distilling large models into smaller, more efficient ones |
| **📋 Model Evaluation** | Open In Colab | Comprehensive model evaluation with Oumi's evaluation framework |
| **☁️ Remote Training** | Open In Colab | Launch and monitor training jobs on cloud platforms (AWS, Azure, GCP, Lambda, etc.) |
| **📈 LLM-as-a-Judge** | Open In Colab | Filter and curate training data with built-in judges |

## 🔧 Usage

### Installation

Choose the installation method that works best for you:
Using pip (recommended)

```
# Basic installation
uv pip install oumi

# With GPU support
uv pip install 'oumi[gpu]'

# Latest development version
uv pip install git+https://github.com/oumi-ai/oumi.git
```

Don't have uv? [Install it](https://docs.astral.sh/uv/getting-started/installation/) or use `pip` instead.
Using Docker

```
# Pull the latest image
docker pull ghcr.io/oumi-ai/oumi:latest

# Run an Oumi command
docker run --gpus all -it ghcr.io/oumi-ai/oumi:latest oumi --help

# Train with a mounted config
docker run --gpus all -v $(pwd):/workspace -it ghcr.io/oumi-ai/oumi:latest \
  oumi train --config /workspace/my_config.yaml
```
Quick install script (experimental)

Try Oumi without setting up a Python environment. This installs Oumi in an isolated environment:

```
curl -LsSf https://oumi.ai/install.sh | bash
```
For more advanced installation options, see the [installation guide](https://oumi.ai/docs/en/latest/get_started/installation.html).

### Oumi CLI

You can quickly use the `oumi` command to train, evaluate, and run inference on models with one of the existing [recipes](/configs/recipes):

```
# Training
oumi train -c configs/recipes/smollm/sft/135m/quickstart_train.yaml

# Evaluation
oumi evaluate -c configs/recipes/smollm/evaluation/135m/quickstart_eval.yaml

# Inference
oumi infer -c configs/recipes/smollm/inference/135m_infer.yaml --interactive
```

For more advanced options, see the [training](https://oumi.ai/docs/en/latest/user_guides/train/train.html), [evaluation](https://oumi.ai/docs/en/latest/user_guides/evaluate/evaluate.html), [inference](https://oumi.ai/docs/en/latest/user_guides/infer/infer.html), and [llm-as-a-judge](https://oumi.ai/docs/en/latest/user_guides/judge/judge.html) guides.

### Running jobs remotely

You can run jobs remotely on cloud platforms (AWS, Azure, GCP, Lambda, etc.) with the `oumi launch` command:

```
# GCP
oumi launch up -c configs/recipes/smollm/sft/135m/quickstart_gcp_job.yaml

# AWS
oumi launch up -c configs/recipes/smollm/sft/135m/quickstart_gcp_job.yaml --resources.cloud aws

# Azure
oumi launch up -c configs/recipes/smollm/sft/135m/quickstart_gcp_job.yaml --resources.cloud azure

# Lambda
oumi launch up -c configs/recipes/smollm/sft/135m/quickstart_gcp_job.yaml --resources.cloud lambda
```

**Note:** Oumi is in beta and under active development. The core features are stable, but some advanced features may change as the platform improves.

## 💻 Why use Oumi?
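The `--resources.cloud aws` flags in the launch examples above override a nested field of the YAML job config from the command line. A minimal sketch of that dotted-key override pattern (our illustration of the general technique, not Oumi's actual implementation):

```python
def apply_override(config: dict, dotted_key: str, value) -> dict:
    """Set a nested config value addressed by a dotted key, e.g.
    dotted_key='resources.cloud' sets config['resources']['cloud'],
    creating intermediate dicts along the way if they are missing."""
    node = config
    *parents, leaf = dotted_key.split(".")
    for key in parents:
        node = node.setdefault(key, {})
    node[leaf] = value
    return config

# A job config as it might look after loading the YAML file
job = {"resources": {"cloud": "gcp", "accelerators": "A100"}}
apply_override(job, "resources.cloud", "aws")
print(job["resources"])  # {'cloud': 'aws', 'accelerators': 'A100'}
```

This is why a single `quickstart_gcp_job.yaml` can be reused for AWS, Azure, or Lambda: only the one overridden field changes, while everything else in the config is kept.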
If you need a comprehensive platform for training, evaluating, or deploying models, Oumi is a great choice.

Here are some of Oumi's key features:

- 🔧 **Zero boilerplate**: Get started in minutes with ready-made recipes for popular models and workflows. No need to write training loops or data pipelines.
- 🏢 **Enterprise-grade**: Built and validated by teams training models at scale
- 🎯 **Research-ready**: Perfect for ML research, with easily reproducible experiments and flexible interfaces for customizing each component.
- 🌐 **Broad model support**: Works with most popular model architectures - from the tiniest models to the largest, from text-only to multimodal.
- 🚀 **SOTA performance**: Native support for distributed training techniques (FSDP, DeepSpeed, DDP) and optimized inference engines (vLLM, SGLang).
- 🤝 **Community first**: 100% open source with an active community. No vendor lock-in, no strings attached.

## 📚 Examples & Recipes

Explore our growing collection of ready-to-use configurations for state-of-the-art models and training workflows:

**Note:** These configs are not an exhaustive list of what's supported; they're simply examples to get you started. More exhaustive lists of supported [models](https://oumi.ai/docs/en/latest/resources/models/supported_models.html) and datasets ([supervised fine-tuning](https://oumi.ai/docs/en/latest/resources/datasets/sft_datasets.html), [pre-training](https://oumi.ai/docs/en/latest/resources/datasets/pretraining_datasets.html), [preference tuning](https://oumi.ai/docs/en/latest/resources/datasets/preference_datasets.html), and [vision-language fine-tuning](https://oumi.ai/docs/en/latest/resources/datasets/vl_sft_datasets.html)) are available in the Oumi documentation.

### Qwen Family

| Model | Example configurations |
|-------|------------------------|
| Qwen3-Next 80B A3B | [LoRA](/configs/recipes/qwen3_next/sft/80b_a3b_lora/train.yaml) • [Inference](/configs/recipes/qwen3_next/inference/80b_a3b_infer.yaml) • [Inference (Instruct)](/configs/recipes/qwen3_next/inference/80b_a3b_instruct_infer.yaml) • [Evaluation](/configs/recipes/qwen3_next/evaluation/80b_a3b_eval.yaml) |
| Qwen3 30B A3B | [LoRA](/configs/recipes/qwen3/sft/30b_a3b_lora/train.yaml) • [Inference](/configs/recipes/qwen3/inference/30b_a3b_infer.yaml) • [Evaluation](/configs/recipes/qwen3/evaluation/30b_a3b_eval.yaml) |
| Qwen3 32B | [LoRA](/configs/recipes/qwen3/sft/32b_lora/train.yaml) • [Inference](/configs/recipes/qwen3/inference/32b_infer.yaml) • [Evaluation](/configs/recipes/qwen3/evaluation/32b_eval.yaml) |
| Qwen3 14B | [LoRA](/configs/recipes/qwen3/sft/14b_lora/train.yaml) • [Inference](/configs/recipes/qwen3/inference/14b_infer.yaml) • [Evaluation](/configs/recipes/qwen3/evaluation/14b_eval.yaml) |
| Qwen3 8B | [FFT](/configs/recipes/qwen3/sft/8b_full/train.yaml) • [Inference](/configs/recipes/qwen3/inference/8b_infer.yaml) • [Evaluation](/configs/recipes/qwen3/evaluation/8b_eval.yaml) |
| Qwen3 4B | [FFT](/configs/recipes/qwen3/sft/4b_full/train.yaml) • [Inference](/configs/recipes/qwen3/inference/4b_infer.yaml) • [Evaluation](/configs/recipes/qwen3/evaluation/4b_eval.yaml) |
| Qwen3 1.7B | [FFT](/configs/recipes/qwen3/sft/1.7b_full/train.yaml) • [Inference](/configs/recipes/qwen3/inference/1.7b_infer.yaml) • [Evaluation](/configs/recipes/qwen3/evaluation/1.7b_eval.yaml) |
| Qwen3 0.6B | [FFT](/configs/recipes/qwen3/sft/0.6b_full/train.yaml) • [Inference](/configs/recipes/qwen3/inference/0.6b_infer.yaml) • [Evaluation](/configs/recipes/qwen3/evaluation/0.6b_eval.yaml) |
| QwQ 32B | [FFT](/configs/recipes/qwq/sft/full_train.yaml) • [LoRA](/configs/recipes/qwq/sft/lora_train.yaml) • [QLoRA](/configs/recipes/qwq/sft/qlora_train.yaml) • [Inference](/configs/recipes/qwq/inference/infer.yaml) • [Evaluation](/configs/recipes/qwq/evaluation/eval.yaml) |
| Qwen2.5-VL 3B | [SFT](/configs/recipes/vision/qwen2_5_vl_3b/sft/full/train.yaml) • [LoRA](/configs/recipes/vision/qwen2_5_vl_3b/sft/lora/train.yaml) • [Inference (vLLM)](configs/recipes/vision/qwen2_5_vl_3b/inference/vllm_infer.yaml) • [Inference](configs/recipes/vision/qwen2_5_vl_3b/inference/infer.yaml) |
| Qwen2-VL 2B | [SFT](/configs/recipes/vision/qwen2_vl_2b/sft/full/train.yaml) • [LoRA](/configs/recipes/vision/qwen2_vl_2b/sft/lora/train.yaml) • [Inference (vLLM)](configs/recipes/vision/qwen2_vl_2b/inference/vllm_infer.yaml) • [Inference (SGLang)](configs/recipes/vision/qwen2_vl_2b/inference/sglang_infer.yaml) • [Inference](configs/recipes/vision/qwen2_vl_2b/inference/infer.yaml) • [Evaluation](configs/recipes/vision/qwen2_vl_2b/evaluation/eval.yaml) |

### 🐋 DeepSeek R1 Family

| Model | Example configurations |
|-------|------------------------|
| DeepSeek R1 671B | [Inference (Together AI)](configs/recipes/deepseek_r1/inference/671b_together_infer.yaml) |
| Distilled Llama 8B | [FFT](/configs/recipes/deepseek_r1/sft/distill_llama_8b/full_train.yaml) • [LoRA](/configs/recipes/deepseek_r1/sft/distill_llama_8b/lora_train.yaml) • [QLoRA](/configs/recipes/deepseek_r1/sft/distill_llama_8b/qlora_train.yaml) • [Inference](configs/recipes/deepseek_r1/inference/distill_llama_8b_infer.yaml) • [Evaluation](/configs/recipes/deepseek_r1/evaluation/distill_llama_8b/eval.yaml) |
| Distilled Llama 70B | [FFT](/configs/recipes/deepseek_r1/sft/distill_llama_70b/full_train.yaml) • [LoRA](/configs/recipes/deepseek_r1/sft/distill_llama_70b/lora_train.yaml) • [QLoRA](/configs/recipes/deepseek_r1/sft/distill_llama_70b/qlora_train.yaml) • [Inference](configs/recipes/deepseek_r1/inference/distill_llama_70b_infer.yaml) • [Evaluation](/configs/recipes/deepseek_r1/evaluation/distill_llama_70b/eval.yaml) |
| Distilled Qwen 1.5B | [FFT](/configs/recipes/deepseek_r1/sft/distill_qwen_1_5b/full_train.yaml) • [LoRA](/configs/recipes/deepseek_r1/sft/distill_qwen_1_5b/lora_train.yaml) • [Inference](configs/recipes/deepseek_r1/inference/distill_qwen_1_5b_infer.yaml) • [Evaluation](/configs/recipes/deepseek_r1/evaluation/distill_qwen_1_5b/eval.yaml) |
| Distilled Qwen 32B | [LoRA](/configs/recipes/deepseek_r1/sft/distill_qwen_32b/lora_train.yaml) • [Inference](configs/recipes/deepseek_r1/inference/distill_qwen_32b_infer.yaml) • [Evaluation](/configs/recipes/deepseek_r1/evaluation/distill_qwen_32b/eval.yaml) |

### 🦙 Llama Family

| Model | Example configurations |
|-------|------------------------|
| Llama 4 Scout Instruct 17B | [FFT](/configs/recipes/llama4/sft/scout_instruct_full/train.yaml) • [LoRA](/configs/recipes/llama4/sft/scout_instruct_lora/train.yaml) • [QLoRA](/configs/recipes/llama4/sft/scout_instruct_qlora/train.yaml) • [Inference (vLLM)](/configs/recipes/llama4/inference/scout_instruct_vllm_infer.yaml) • [Inference](/configs/recipes/llama4/inference/scout_instruct_infer.yaml) • [Inference (Together.ai)](/configs/recipes/llama4/inference/scout_instruct_together_infer.yaml) |
| Llama 4 Scout 17B | [FFT](/configs/recipes/llama4/sft/scout_base_full/train.yaml) |
| Llama 3.1 8B | [FFT](/configs/recipes/llama3_1/sft/8b_full/train.yaml) • [LoRA](/configs/recipes/llama3_1/sft/8b_lora/train.yaml) • [QLoRA](/configs/recipes/llama3_1/sft/8b_qlora/train.yaml) • [Pre-training](/configs/recipes/llama3_1/pretraining/8b/train.yaml) • [Inference (vLLM)](configs/recipes/llama3_1/inference/8b_rvllm_infer.yaml) • [Inference](/configs/recipes/llama3_1/inference/8b_infer.yaml) • [Evaluation](/configs/recipes/llama3_1/evaluation/8b_eval.yaml) |
| Llama 3.1 70B | [FFT](/configs/recipes/llama3_1/sft/70b_full/train.yaml) • [LoRA](/configs/recipes/llama3_1/sft/70b_lora/train.yaml) • [QLoRA](/configs/recipes/llama3_1/sft/70b_qlora/train.yaml) • [Inference](/configs/recipes/llama3_1/inference/70b_infer.yaml) • [Evaluation](/configs/recipes/llama3_1/evaluation/70b_eval.yaml) |
| Llama 3.1 405B | [FFT](/configs/recipes/llama3_1/sft/405b_full/train.yaml) • [LoRA](/configs/recipes/llama3_1/sft/405b_lora/train.yaml) • [QLoRA](/configs/recipes/llama3_1/sft/405b_qlora/train.yaml) |
| Llama 3.2 1B | [FFT](/configs/recipes/llama3_2/sft/1b_full/train.yaml) • [LoRA](/configs/recipes/llama3_2/sft/1b_lora/train.yaml) • [QLoRA](/configs/recipes/llama3_2/sft/1b_qlora/train.yaml) • [Inference (vLLM)](/configs/recipes/llama3_2/inference/1b_vllm_infer.yaml) • [Inference (SGLang)](/configs/recipes/llama3_2/inference/1b_sglang_infer.yaml) • [Inference](/configs/recipes/llama3_2/inference/1b_infer.yaml) • [Evaluation](/configs/recipes/llama3_2/evaluation/1b_eval.yaml) |
| Llama 3.2 3B | [FFT](/configs/recipes/llama3_2/sft/3b_full/train.yaml) • [LoRA](/configs/recipes/llama3_2/sft/3b_lora/train.yaml) • [QLoRA](/configs/recipes/llama3_2/sft/3b_qlora/train.yaml) • [Inference (vLLM)](/configs/recipes/llama3_2/inference/3b_vllm_infer.yaml) • [Inference (SGLang)](/configs/recipes/llama3_2/inference/3b_sglang_infer.yaml) • [Inference](/configs/recipes/llama3_2/inference/3b_infer.yaml) • [Evaluation](/configs/recipes/llama3_2/evaluation/3b_eval.yaml) |
| Llama 3.3 70B | [FFT](/configs/recipes/llama3_3/sft/70b_full/train.yaml) • [LoRA](/configs/recipes/llama3_3/sft/70b_lora/train.yaml) • [QLoRA](/configs/recipes/llama3_3/sft/70b_qlora/train.yaml) • [Inference (vLLM)](/configs/recipes/llama3_3/inference/70b_vllm_infer.yaml) • [Inference](/configs/recipes/llama3_3/inference/70b_infer.yaml) • [Evaluation](/configs/recipes/llama3_3/evaluation/70b_eval.yaml) |
| Llama 3.2 Vision 11B | [SFT](/configs/recipes/vision/llama3_2_vision/sft/11b_full/train.yaml) • [Inference (vLLM)](/configs/recipes/vision/llama3_2_vision/inference/11b_rvllm_infer.yaml) • [Inference (SGLang)](/configs/recipes/vision/llama3_2_vision/inference/11b_sglang_infer.yaml) • [Evaluation](/configs/recipes/vision/llama3_2_vision/evaluation/11b_eval.yaml) |

### 🦅 Falcon Family

| Model | Example configurations |
|-------|------------------------|
| [Falcon-H1](https://huggingface.co/collections/tiiuae/falcon-h1-6819f2795bc406da60fab8df) | [FFT](/configs/recipes/falcon_h1/sft/) • [Inference](/configs/recipes/falcon_h1/inference/) • [Evaluation](/configs/recipes/falcon_h1/evaluation/) |
| [Falcon-E (BitNet)](https://huggingface.co/collections/tiiuae/falcon-edge-series-6804fd13344d6d8a8fa71130) | [FFT](/configs/recipes/falcon_e/sft/) • [DPO](/configs/recipes/falcon_e/dpo/) • [Evaluation](/configs/recipes/falcon_e/evaluation/) |

### 💎 Gemma 3 Family

| Model | Example configurations |
|-------|------------------------|
| Gemma 3 4B Instruct | [FFT](/configs/recipes/gemma3/sft/4b_full/train.yaml) • [Inference](/configs/recipes/gemma3/inference/4b_instruct_infer.yaml) • [Evaluation](/configs/recipes/gemma3/evaluation/4b/eval.yaml) |
| Gemma 3 12B Instruct | [LoRA](/configs/recipes/gemma3/sft/12b_lora/train.yaml) • [Inference](/configs/recipes/gemma3/inference/12b_instruct_infer.yaml) • [Evaluation](/configs/recipes/gemma3/evaluation/12b/eval.yaml) |
| Gemma 3 27B Instruct | [LoRA](/configs/recipes/gemma3/sft/27b_lora/train.yaml) • [Inference](/configs/recipes/gemma3/inference/27b_instruct_infer.yaml) • [Evaluation](/configs/recipes/gemma3/evaluation/27b/eval.yaml) |

### 🦉 OLMo 3 Family

| Model | Example configurations |
|-------|------------------------|
| OLMo 3 7B Instruct | [FFT](/configs/recipes/olmo3/sft/7b_full/train.yaml) • [Inference](/configs/recipes/olmo3/inference/7b_infer.yaml) • [Evaluation](/configs/recipes/olmo3/evaluation/7b/eval.yaml) |
| OLMo 3 32B Instruct | [LoRA](/configs/recipes/olmo3/sft/32b_lora/train.yaml) • [Inference](/configs/recipes/olmo3/inference/32b_infer.yaml) • [Evaluation](/configs/recipes/olmo3/evaluation/32b/eval.yaml) |

### 🎨 Vision Models

| Model | Example configurations |
|-------|------------------------|
| Llama 3.2 Vision 11B | [SFT](/configs/recipes/vision/llama3_2_vision/sft/11b_full/train.yaml) • [LoRA](/configs/recipes/vision/llama3_2_vision/sft/11b_lora/train.yaml) • [Inference (vLLM)](/configs/recipes/vision/llama3_2_vision/inference/11b_rvllm_infer.yaml) • [Inference (SGLang)](/configs/recipes/vision/llama3_2_vision/inference/11b_sglang_infer.yaml) • [Evaluation](/configs/recipes/vision/llama3_2_vision/evaluation/11b_eval.yaml) |
| LLaVA 7B | [SFT](/configs/recipes/vision/llava_7b/sft/train.yaml) • [Inference (vLLM)](configs/recipes/vision/llava_7b/inference/vllm_infer.yaml) • [Inference](/configs/recipes/vision/llava_7b/inference/infer.yaml) |
| Phi3 Vision 4.2B | [SFT](/configs/recipes/vision/phi3/sft/full/train.yaml) • [LoRA](/configs/recipes/vision/phi3/sft/lora/train.yaml) • [Inference (vLLM)](configs/recipes/vision/phi3/inference/vllm_infer.yaml) |
| Phi4 Vision 5.6B | [SFT](/configs/recipes/vision/phi4/sft/full/train.yaml) • [LoRA](/configs/recipes/vision/phi4/sft/lora/train.yaml) • [Inference (vLLM)](configs/recipes/vision/phi4/inference/vllm_infer.yaml) • [Inference](/configs/recipes/vision/phi4/inference/infer.yaml) |
| Qwen2-VL 2B | [SFT](/configs/recipes/vision/qwen2_vl_2b/sft/full/train.yaml) • [LoRA](/configs/recipes/vision/qwen2_vl_2b/sft/lora/train.yaml) • [Inference (vLLM)](configs/recipes/vision/qwen2_vl_2b/inference/vllm_infer.yaml) • [Inference (SGLang)](configs/recipes/vision/qwen2_vl_2b/inference/sglang_infer.yaml) • [Inference](configs/recipes/vision/qwen2_vl_2b/inference/infer.yaml) • [Evaluation](configs/recipes/vision/qwen2_vl_2b/evaluation/eval.yaml) |
| Qwen3-VL 2B | [Inference](/configs/recipes/vision/qwen3_vl_2b/inference/infer.yaml) |
| Qwen3-VL 4B | [Inference](/configs/recipes/vision/qwen3_vl_4b/inference/infer.yaml) |
| Qwen3-VL 8B | [Inference](/configs/recipes/vision/qwen3_vl_8b/inference/infer.yaml) |
| Qwen2.5-VL 3B | [SFT](/configs/recipes/vision/qwen2_5_vl_3b/sft/full/train.yaml) • [LoRA](/configs/recipes/vision/qwen2_5_vl_3b/sft/lora/train.yaml) • [Inference (vLLM)](configs/recipes/vision/qwen2_5_vl_3b/inference/vllm_infer.yaml) • [Inference](configs/recipes/vision/qwen2_5_vl_3b/inference/infer.yaml) |
| SmolVLM-Instruct 2B | [SFT](/configs/recipes/vision/smolvlm/sft/full/train.yaml) • [LoRA](/configs/recipes/vision/smolvlm/sft/lora/train.yaml) |

### 🔍 More options

This section lists additional language models that can be used with Oumi. Thanks to the integration with the [🤗 Transformers](https://github.com/huggingface/transformers) library, any of these models can easily be used for training, evaluation, or inference.

Models prefixed with a check mark (✅) have been thoroughly tested and validated by the Oumi community, with ready-to-use recipes available in the [configs/recipes](configs/recipes) directory.
📋 Click to see more supported models

#### Instruct models

| Model | Size | Paper | HF Hub | License | Open source [^1] |
|-------|------|-------|---------|----------|------|
| ✅ SmolLM-Instruct | 135M/360M/1.7B | [Blog](https://huggingface.co/blog/smollm) | [Hub](https://huggingface.co/HuggingFaceTB/SmolLM-135M-Instruct) | Apache 2.0 | ✅ |
| ✅ DeepSeek R1 family | 1.5B/8B/32B/70B/671B | [Blog](https://api-docs.deepseek.com/news/news250120) | [Hub](https://huggingface.co/deepseek-ai/DeepSeek-R1) | MIT | ❌ |
| ✅ Llama 3.1 Instruct | 8B/70B/405B | [Paper](https://arxiv.org/abs/2407.21783) | [Hub](https://huggingface.co/meta-llama/Llama-3.1-70b-instruct) | [License](https://llama.meta.com/llama3/license/) | ❌ |
| ✅ Llama 3.2 Instruct | 1B/3B | [Paper](https://arxiv.org/abs/2407.21783) | [Hub](https://huggingface.co/meta-llama/Llama-3.2-3b-instruct) | [License](https://llama.meta.com/llama3/license/) | ❌ |
| ✅ Llama 3.3 Instruct | 70B | [Paper](https://arxiv.org/abs/2407.21783) | [Hub](https://huggingface.co/meta-llama/Llama-3.3-70b-instruct) | [License](https://llama.meta.com/llama3/license/) | ❌ |
| ✅ Phi-3.5-Instruct | 4B/14B | [Paper](https://arxiv.org/abs/2404.14219) | [Hub](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) | [License](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/LICENSE) | ❌ |
| ✅ Qwen3 | 0.6B-32B | [Paper](https://arxiv.org/abs/2505.09388) | [Hub](https://huggingface.co/Qwen/Qwen3-32B) | [License](https://github.com/QwenLM/Qwen/blob/main/LICENSE) | ❌ |
| Qwen2.5-Instruct | 0.5B-70B | [Paper](https://arxiv.org/abs/2309.16609) | [Hub](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) | [License]( |

## 📖 Documentation

To learn more about the platform's capabilities, see the [Oumi documentation](https://oumi.ai/docs).

## 🤝 Join the Community

Oumi is a community-first effort. Whether you're a developer, a researcher, or a non-technical user, all contributions are very welcome!

- To contribute to the `oumi` repository, see [`CONTRIBUTING.md`](https://github.com/oumi-ai/oumi/blob/main/CONTRIBUTING.md) for guidance on how to contribute and send your first pull request.
- Make sure to join our [Discord community](https://discord.gg/oumi) to get help, share your experiences, and contribute to the project!
- If you're interested in joining one of the community's open-science efforts, check out our [open collaboration](https://oumi.ai/community) page.

## 🙏 Acknowledgements

Oumi makes use of [several libraries](https://oumi.ai/docs/en/latest/about/acknowledgements.html) and tools from the open-source community. We are deeply grateful to the contributors of these projects! ✨ 🌟 💫

## 📝 Citation

If you find Oumi useful in your research, please consider citing it:

```
@software{oumi2025,
  author = {Oumi Community},
  title = {Oumi: an Open, End-to-end Platform for Building Large Foundation Models},
  month = {January},
  year = {2025},
  url = {https://github.com/oumi-ai/oumi}
}
```

## 📜 License

This project is licensed under the Apache License 2.0. See the [LICENSE](LICENSE) file for details.

[^1]: Models are defined as open source if they have fully open weights, training code, and data, along with a permissive license. See the [Open Source definition](https://opensource.org/ai) for more information.