strands-agents/sdk-python
GitHub: strands-agents/sdk-python
A minimal, model-driven AI agent framework supporting multiple LLMs and the MCP protocol; build anything from a simple chat assistant to complex multi-agent applications in a few lines of Python.
Stars: 5860 | Forks: 837
Strands Agents is a simple yet powerful SDK that takes a model-driven approach to building and running AI agents. From simple conversational assistants to complex autonomous workflows, and from local development to production deployment, Strands Agents scales with your needs.
## Feature Overview
- **Lightweight & Flexible**: a simple agent loop that works out of the box and is fully customizable
- **Model Agnostic**: supports Amazon Bedrock, Anthropic, Gemini, LiteLLM, Llama, Ollama, OpenAI, Writer, and custom providers
- **Advanced Capabilities**: multi-agent systems, autonomous agents, and streaming support
- **Built-in MCP**: native support for Model Context Protocol (MCP) servers, giving access to thousands of pre-built tools
## Quick Start
```shell
# Install Strands Agents
pip install strands-agents strands-agents-tools
```
```python
from strands import Agent
from strands_tools import calculator

agent = Agent(tools=[calculator])
agent("What is the square root of 1764")
```
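As a quick sanity check on that prompt, the answer the calculator tool should land on can be verified with plain Python:

```python
import math

# The quickstart prompt asks for the square root of 1764.
# 42 * 42 == 1764, so the agent's calculator tool should answer 42.
print(math.isqrt(1764))  # prints 42
```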
## Installation
Make sure you have Python 3.10+ installed, then run:
```shell
# Create and activate virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows use: .venv\Scripts\activate

# Install Strands and tools
pip install strands-agents strands-agents-tools
```
## Features at a Glance
### Python-Based Tools
Easily build tools using Python decorators:
```python
from strands import Agent, tool

@tool
def word_count(text: str) -> int:
    """Count words in text.

    This docstring is used by the LLM to understand the tool's purpose.
    """
    return len(text.split())

agent = Agent(tools=[word_count])
response = agent("How many words are in this sentence?")
```
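For illustration, here is a second tool written in the same style. The try/except fallback below is only an assumption for standalone testing (it is not part of the SDK); with Strands installed, `@tool` registers the function just like `word_count` above:

```python
# Hedged sketch: a second custom tool following the same @tool pattern.
# If strands is not installed, fall back to a no-op decorator so the
# function can still be called directly.
try:
    from strands import tool
except ImportError:
    def tool(fn):  # fallback for standalone use, not the real decorator
        return fn

@tool
def char_count(text: str) -> int:
    """Count non-whitespace characters in text.

    The docstring is what the LLM reads to decide when to call the tool.
    """
    return sum(1 for c in text if not c.isspace())

print(char_count("hello world"))  # direct call still works: 10
```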
**Hot reloading from a directory:**
Enable automatic loading and reloading of tools from the `./tools/` directory:
```python
from strands import Agent

# Agent will watch ./tools/ directory for changes
agent = Agent(load_tools_from_directory=True)
response = agent("Use any tools you find in the tools directory")
```
### MCP Support
Seamlessly integrate Model Context Protocol (MCP) servers:
```python
from mcp import stdio_client, StdioServerParameters

from strands import Agent
from strands.tools.mcp import MCPClient

aws_docs_client = MCPClient(
    lambda: stdio_client(StdioServerParameters(command="uvx", args=["awslabs.aws-documentation-mcp-server@latest"]))
)

with aws_docs_client:
    agent = Agent(tools=aws_docs_client.list_tools_sync())
    response = agent("Tell me about Amazon Bedrock and how to use it with Python")
```
### Multiple Model Providers
Support for a wide range of model providers:
```python
from strands import Agent
from strands.models import BedrockModel
from strands.models.gemini import GeminiModel
from strands.models.llamaapi import LlamaAPIModel
from strands.models.ollama import OllamaModel

# Bedrock
bedrock_model = BedrockModel(
    model_id="us.amazon.nova-pro-v1:0",
    temperature=0.3,
    streaming=True,  # Enable/disable streaming
)
agent = Agent(model=bedrock_model)
agent("Tell me about Agentic AI")

# Google Gemini
gemini_model = GeminiModel(
    client_args={
        "api_key": "your_gemini_api_key",
    },
    model_id="gemini-2.5-flash",
    params={"temperature": 0.7}
)
agent = Agent(model=gemini_model)
agent("Tell me about Agentic AI")

# Ollama
ollama_model = OllamaModel(
    host="http://localhost:11434",
    model_id="llama3"
)
agent = Agent(model=ollama_model)
agent("Tell me about Agentic AI")

# Llama API
llama_model = LlamaAPIModel(
    model_id="Llama-4-Maverick-17B-128E-Instruct-FP8",
)
agent = Agent(model=llama_model)
response = agent("Tell me about Agentic AI")
```
Built-in providers:
- [Amazon Bedrock](https://strandsagents.com/docs/user-guide/concepts/model-providers/amazon-bedrock/)
- [Anthropic](https://strandsagents.com/docs/user-guide/concepts/model-providers/anthropic/)
- [Gemini](https://strandsagents.com/docs/user-guide/concepts/model-providers/gemini/)
- [Cohere](https://strandsagents.com/docs/user-guide/concepts/model-providers/cohere/)
- [LiteLLM](https://strandsagents.com/docs/user-guide/concepts/model-providers/litellm/)
- [llama.cpp](https://strandsagents.com/docs/user-guide/concepts/model-providers/llamacpp/)
- [LlamaAPI](https://strandsagents.com/docs/user-guide/concepts/model-providers/llamaapi/)
- [MistralAI](https://strandsagents.com/docs/user-guide/concepts/model-providers/mistral/)
- [Ollama](https://strandsagents.com/docs/user-guide/concepts/model-providers/ollama/)
- [OpenAI](https://strandsagents.com/docs/user-guide/concepts/model-providers/openai/)
- [OpenAI Responses API](https://strandsagents.com/docs/user-guide/concepts/model-providers/openai/)
- [SageMaker](https://strandsagents.com/docs/user-guide/concepts/model-providers/sagemaker/)
- [Writer](https://strandsagents.com/docs/user-guide/concepts/model-providers/writer/)
You can also implement your own via [Custom Model Providers](https://strandsagents.com/docs/user-guide/concepts/model-providers/custom_model_provider/).
### Example Tools
Strands offers an optional strands-agents-tools package with pre-built tools for quick experimentation:
```python
from strands import Agent
from strands_tools import calculator

agent = Agent(tools=[calculator])
agent("What is the square root of 1764")
```
It is also available on GitHub at [strands-agents/tools](https://github.com/strands-agents/tools).
### Bidirectional Streaming
Build real-time voice and audio conversations with persistent streaming connections. Unlike the traditional request-response pattern, bidirectional streaming maintains a long-running conversation in which users can interrupt, provide continuous input, and receive real-time audio responses. Follow the [Quickstart](https://strandsagents.com/docs/user-guide/concepts/bidirectional-streaming/quickstart/) guide to build your first BidiAgent.
**Supported model providers:**
- Amazon Nova Sonic (v1, v2)
- Google Gemini Live
- OpenAI Realtime API
**Installation:**
```shell
# Server-side only (no audio I/O dependencies)
pip install strands-agents[bidi]

# With audio I/O support (includes PyAudio dependency)
pip install strands-agents[bidi,bidi-io]
```
**Quick example:**
```python
import asyncio

from strands.experimental.bidi import BidiAgent
from strands.experimental.bidi.io import BidiAudioIO, BidiTextIO
from strands.experimental.bidi.models import BidiNovaSonicModel
from strands.experimental.bidi.tools import stop_conversation
from strands_tools import calculator

async def main():
    # Create bidirectional agent with Nova Sonic v2
    model = BidiNovaSonicModel()
    agent = BidiAgent(model=model, tools=[calculator, stop_conversation])

    # Setup audio and text I/O (requires bidi-io extra)
    audio_io = BidiAudioIO()
    text_io = BidiTextIO()

    # Run with real-time audio streaming
    # Say "stop conversation" to gracefully end the conversation
    await agent.run(
        inputs=[audio_io.input()],
        outputs=[audio_io.output(), text_io.output()]
    )

if __name__ == "__main__":
    asyncio.run(main())
```
**Configuration options:**
```python
from strands.experimental.bidi.io import BidiAudioIO, BidiTextIO
from strands.experimental.bidi.models import BidiNovaSonicModel

# Configure audio settings and turn detection (v2 only)
model = BidiNovaSonicModel(
    provider_config={
        "audio": {
            "input_rate": 16000,
            "output_rate": 16000,
            "voice": "matthew"
        },
        "turn_detection": {
            "endpointingSensitivity": "MEDIUM"  # HIGH, MEDIUM, or LOW
        },
        "inference": {
            "max_tokens": 2048,
            "temperature": 0.7
        }
    }
)

# Configure I/O devices
audio_io = BidiAudioIO(
    input_device_index=0,   # Specific microphone
    output_device_index=1,  # Specific speaker
    input_buffer_size=10,
    output_buffer_size=10
)

# Text input mode (type messages instead of speaking)
text_io = BidiTextIO()

# (run within an async function, as in the quick example above)
await agent.run(
    inputs=[text_io.input()],  # Use text input
    outputs=[audio_io.output(), text_io.output()]
)

# Multi-modal: Both audio and text input
await agent.run(
    inputs=[audio_io.input(), text_io.input()],  # Speak OR type
    outputs=[audio_io.output(), text_io.output()]
)
```
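As a back-of-envelope aid (not part of the Strands API), the rates configured above imply the following raw data rate per stream, assuming 16-bit mono PCM, which is typical for speech models:

```python
# Rough data-rate check for the 16 kHz audio streams configured above.
# 16-bit mono PCM is an assumption for this estimate.
SAMPLE_RATE = 16_000   # matches input_rate / output_rate
BYTES_PER_SAMPLE = 2   # 16-bit PCM
CHANNELS = 1           # mono

bytes_per_second = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS
print(bytes_per_second)  # 32000 bytes/s, i.e. ~31.25 KiB/s per direction
```

This is useful when sizing the `input_buffer_size`/`output_buffer_size` values or estimating bandwidth for a long-lived session.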
## Documentation
For detailed guides and examples, explore our documentation:
- [User Guide](https://strandsagents.com/)
- [Quickstart Guide](https://strandsagents.com/docs/user-guide/quickstart/)
- [Agent Loop](https://strandsagents.com/docs/user-guide/concepts/agents/agent-loop/)
- [Examples](https://strandsagents.com/docs/examples/)
- [API Reference](https://strandsagents.com/docs/api/python/strands.agent.agent/)
- [Production & Deployment Guide](https://strandsagents.com/docs/user-guide/deploy/operating-agents-in-production/)
## Contributing ❤️
We welcome contributions of all kinds! See our [Contributing Guide](CONTRIBUTING.md) for details on:
- Reporting bugs and feature requests
- Development environment setup
- Contributing via pull requests
- Code of conduct
- Reporting security issues
## Connect with the Team
Join us on [**Discord**](https://discord.com/invite/strands) to meet the Strands team and other users.
## License
This project is licensed under the Apache License 2.0; see the [LICENSE](LICENSE) file for details.
## Security
See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.