# promptloom
**Weave production-grade LLM prompts with cache boundaries, tool injection, and token budgets.**
Reverse-engineered from [Claude Code](https://claude.ai/code)'s 7-layer prompt architecture: the same patterns Anthropic uses internally to assemble the system prompt for its 500k+ line CLI tool.
[npm](https://www.npmjs.com/package/promptloom) · [License](./LICENSE) · [TypeScript](https://www.typescriptlang.org/) · [package.json](./package.json)
[Quick Start](#quick-start) | [Docs](#core-concepts) | [API Reference](#api-reference) | [LLM Docs](https://raw.githubusercontent.com/PeanutSplash/promptloom/main/llms.txt)
## Why
Every LLM app is built by stitching prompts together. Most do it with string concatenation. Claude Code does it with a **compiler**: multi-zone cache scopes, conditional sections, per-tool prompt injection, deferred tool loading, and token budget tracking.
**promptloom** extracts those battle-tested patterns into a zero-dependency library.
### Highlights
- **Multi-zone cache scopes** — each zone carries its own cache scope (`global`, `org`, or `null`), so changing one section never busts the cache for the others
- **Tool registry** — session-level prompt caching with stable ordering and deferred loading, built for configurations of 40+ tools
- **Conditional sections** — `when` predicates gate inclusion by model, environment, or user type
- **Token estimation and budget planning** — built into every `compile()` call, with diminishing-returns detection for agent loops
- **5 provider formatters** — `toAnthropic()` / `toOpenAI()` / `toOpenAIResponses()` / `toBedrock()` / `toGemini()`, plus any OpenAI-compatible provider (Groq, Together, DeepSeek, Mistral, Fireworks...)
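The cache-scope idea behind the first bullet can be sketched standalone. This is an illustration of the concept, not promptloom's internals; `Zone`, `cacheKey`, and the toy hash are hypothetical names invented for the sketch:

```typescript
// Each zone carries a cache scope; a scope's cache key is derived only from
// the zones in that scope, so editing an uncached (null-scoped) zone leaves
// the cacheable prefix's key untouched.
type Scope = 'global' | 'org' | null

interface Zone {
  scope: Scope
  sections: string[]
}

// Stable, order-preserving key over all zones sharing a cache scope.
function cacheKey(zones: Zone[], scope: Exclude<Scope, null>): string {
  const text = zones
    .filter((z) => z.scope === scope)
    .flatMap((z) => z.sections)
    .join('\n')
  // Toy rolling hash, standing in for a real content digest.
  let h = 0
  for (const c of text) h = (h * 31 + c.charCodeAt(0)) | 0
  return `${scope}:${h}`
}

const zones: Zone[] = [
  { scope: null, sections: ['x-billing-org: org-123'] },
  { scope: 'global', sections: ['You are a code review bot.'] },
  { scope: null, sections: ['Review this diff: ...'] },
]

const before = cacheKey(zones, 'global')
zones[2].sections = ['Review this diff: (different session)'] // edit an uncached zone
const after = cacheKey(zones, 'global')
// before === after: the global prefix's cache key survives per-session edits
```

Because the key ignores `null`-scoped zones, dynamic per-request content can change freely without invalidating the provider-side cache for the static prefix.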
## Install
```
# npm
npm install promptloom
# bun
bun add promptloom
# pnpm
pnpm add promptloom
# yarn
yarn add promptloom
```
## Quick Start
### For agents
Feed this to your AI assistant and start building:
```
curl -s https://raw.githubusercontent.com/PeanutSplash/promptloom/main/llms.txt
```
Or paste the URL directly into your AI chat:
```
https://raw.githubusercontent.com/PeanutSplash/promptloom/main/llms.txt
```
### For humans
```
import { PromptCompiler, toAnthropic } from 'promptloom'

const pc = new PromptCompiler()

// Zone 1: Attribution header (no cache)
pc.zone(null)
pc.static('attribution', 'x-billing-org: org-123')

// Zone 2: Static rules (globally cacheable)
pc.zone('global')
pc.static('identity', 'You are a code review bot.')
pc.static('rules', 'Only comment on bugs, not style.')

// Zone 3: Dynamic context (session-specific)
pc.zone(null)
pc.dynamic('diff', async () => {
  const diff = await getCurrentDiff()
  return `Review this diff:\n${diff}`
})

// Conditional section — only included for Opus models
pc.static('thinking', 'Use extended thinking for complex reviews.', {
  when: (ctx) => ctx.model?.includes('opus') ?? false,
})

// Tools (inline + deferred)
pc.tool({
  name: 'post_comment',
  prompt: 'Post a review comment on a specific line of code.',
  inputSchema: {
    type: 'object',
    properties: {
      file: { type: 'string' },
      line: { type: 'number' },
      body: { type: 'string' },
    },
    required: ['file', 'line', 'body'],
  },
  order: 1,
})

pc.tool({
  name: 'web_search',
  prompt: 'Search the web for context.',
  inputSchema: { type: 'object', properties: { query: { type: 'string' } } },
  deferred: true, // excluded from prompt, loaded on demand
})

// Compile
const result = await pc.compile({ model: 'claude-opus-4-6' })

result.blocks // CacheBlock[] — one per zone, with scope annotations
result.tools // CompiledTool[] — inline tools only
result.deferredTools // CompiledTool[] — deferred tools
result.tokens // { systemPrompt, tools, deferredTools, total }
result.text // Full prompt as a single string
```
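The `result.tokens` report shown above comes from promptloom's built-in estimator, whose exact algorithm isn't documented here. The sketch below shows the general shape of that kind of accounting, using the common rough heuristic of ~4 characters per token; `estimateTokens` and `budgetReport` are hypothetical names for this illustration, not promptloom APIs:

```typescript
// Rough character-based token estimate (~4 chars/token for English text).
// Real estimators use the model's tokenizer; this is a budgeting heuristic.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4)
}

interface TokenReport {
  systemPrompt: number
  tools: number
  total: number
}

// Sum the system prompt and per-tool prompt estimates into one report,
// mirroring the { systemPrompt, tools, total } shape compile() returns.
function budgetReport(system: string, toolPrompts: string[]): TokenReport {
  const systemPrompt = estimateTokens(system)
  const tools = toolPrompts.reduce((sum, p) => sum + estimateTokens(p), 0)
  return { systemPrompt, tools, total: systemPrompt + tools }
}

const report = budgetReport('You are a code review bot.', [
  'Post a review comment on a specific line of code.',
])
```

A per-category breakdown like this is what makes budget planning possible: you can see at compile time whether tool prompts, rather than instructions, are eating the context window.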
## Using with APIs
### Anthropic
```
import Anthropic from '@anthropic-ai/sdk'
import { PromptCompiler, toAnthropic } from 'promptloom'

const pc = new PromptCompiler()
// ... add zones, sections, and tools ...

const result = await pc.compile({ model: 'claude-sonnet-4-6' })
const { system, tools } = toAnthropic(result)

const response = await new Anthropic().messages.create({
  model: 'claude-sonnet-4-6',
  max_tokens: 4096,
  system, // TextBlockParam[] with cache_control
  tools, // includes deferred tools with defer_loading: true
  messages: [{ role: 'user', content: 'Review this PR' }],
})
```
### OpenAI
```
import OpenAI from 'openai'
import { PromptCompiler, toOpenAI } from 'promptloom'

const pc = new PromptCompiler()
// ... add zones, sections, and tools ...

const result = await pc.compile()
const { system, tools } = toOpenAI(result)

const response = await new OpenAI().chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: system },
    { role: 'user', content: 'Review this PR' },
  ],
  tools,
})
```
### OpenAI Responses API
```
import OpenAI from 'openai'
import { PromptCompiler, toOpenAIResponses } from 'promptloom'

const pc = new PromptCompiler()
// ... add zones, sections, and tools ...

const result = await pc.compile()
const { instructions, tools } = toOpenAIResponses(result)

const response = await new OpenAI().responses.create({
  model: 'gpt-4o',
  instructions,
  input: 'Review this PR',
  tools,
})
```
### AWS Bedrock
```
import { PromptCompiler, toBedrock } from 'promptloom'

const result = await pc.compile()
const { system, toolConfig } = toBedrock(result)

// Use with @aws-sdk/client-bedrock-runtime ConverseCommand
```
### Google Gemini / Vertex AI
```
import { GoogleGenAI } from '@google/genai'
import { PromptCompiler, toGemini } from 'promptloom'

const pc = new PromptCompiler()
// ... add zones, sections, and tools ...

const result = await pc.compile()
const { systemInstruction, tools } = toGemini(result)

const response = await new GoogleGenAI({ apiKey: '...' }).models.generateContent({
  model: 'gemini-2.5-pro',
  contents: [{ role: 'user', parts: [{ text: 'Review this PR' }] }],
  config: { systemInstruction, tools },
})
```
### OpenAI-compatible providers (Groq, Together, DeepSeek, Mistral, Fireworks)
`toOpenAI()` works with any OpenAI-compatible API; just swap the `baseURL`:
```
import OpenAI from 'openai'
import { PromptCompiler, toOpenAI } from 'promptloom'

const result = await pc.compile()
const { system, tools } = toOpenAI(result)

// Groq
const groq = new OpenAI({ baseURL: 'https://api.groq.com/openai/v1', apiKey: '...' })

// Together AI
const together = new OpenAI({ baseURL: 'https://api.together.xyz/v1', apiKey: '...' })

// DeepSeek
const deepseek = new OpenAI({ baseURL: 'https://api.deepseek.com', apiKey: '...' })

// Mistral
const mistral = new OpenAI({ baseURL: 'https://api.mistral.ai/v1', apiKey: '...' })

// Fireworks AI
const fireworks = new OpenAI({ baseURL: 'https://api.fireworks.ai/inference/v1', apiKey: '...' })
```
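As a rough illustration of what a formatter like `toAnthropic()` plausibly does at zone boundaries (a sketch under assumptions, not the library's actual source): emit one text block per compiled zone, and attach a `cache_control` breakpoint only to zones with a non-null cache scope, following the block shape Anthropic's prompt caching expects. The `CacheBlock` and `toSystemBlocks` names here are hypothetical:

```typescript
// One compiled zone from the compiler, annotated with its cache scope.
interface CacheBlock {
  text: string
  scope: 'global' | 'org' | null
}

// Anthropic-style system text block; cache_control marks a caching breakpoint.
interface AnthropicTextBlock {
  type: 'text'
  text: string
  cache_control?: { type: 'ephemeral' }
}

// Map zones to system blocks: cacheable zones get a breakpoint, null-scoped
// (dynamic) zones are emitted as plain text and never cached.
function toSystemBlocks(blocks: CacheBlock[]): AnthropicTextBlock[] {
  return blocks.map((b) => {
    const out: AnthropicTextBlock = { type: 'text', text: b.text }
    if (b.scope !== null) out.cache_control = { type: 'ephemeral' }
    return out
  })
}

const system = toSystemBlocks([
  { text: 'x-billing-org: org-123', scope: null },
  { text: 'You are a code review bot.', scope: 'global' },
])
```

Keeping the breakpoint decision on the zone, rather than on the whole prompt, is what lets each provider formatter translate the same compiled result into that provider's own caching primitives.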
**[GitHub](https://github.com/PeanutSplash/promptloom)** | **[npm](https://www.npmjs.com/package/promptloom)** | **[LLM Docs](https://raw.githubusercontent.com/PeanutSplash/promptloom/main/llms.txt)**