NLP (117) Using MCP with the OpenAI Agents SDK

This post shows how to use MCP within OpenAI Agents, OpenAI's open-source agent framework, to perform file operations, Redis operations, and music playback.

Preface

In NLP (115) Getting Started with MCP and NLP (116) Using MCP for Web Scraping and Automated Sudoku Solving, I showed how to use MCP in Cursor for file operations, Bilibili video search, web scraping, and automated Sudoku solving. As introductory pieces, those two posts give readers a concrete feel for how powerful and convenient MCP is.

In early March this year, OpenAI announced its own Agent SDK (see https://openai.com/index/new-tools-for-building-agents/); the corresponding GitHub project is openai-agents-python, at https://github.com/openai/openai-agents-python.

Just a week ago (March 27), OpenAI followed up by adding support for the industry-standard MCP to the Agent SDK, with the ChatGPT desktop app next in line, aiming to reshape personal AI workflows.

This is an encouraging signal: when a player as strong as OpenAI embraces a competitor's MCP protocol, it shows that MCP fits where agents are headed.

Building on the official examples in the openai-agents-python project, and drawing on my earlier posts and projects I plan to try next, this post walks through several working examples, traced and visualized with Langfuse, to demonstrate how to make good use of MCP in openai-agents-python.

For an introduction to Langfuse, see NLP (113) Improving the Observability of LLMs and Agents with Langfuse.

Let's go!

Preparation

  • To use OpenAI's large language models (LLMs), register an OpenAI account and obtain an API key.

  • Before using openai-agents-python, install the corresponding Python package:

pip install openai-agents
  • To use MCP, I have NPM and the Python environment manager uv installed on my machine. To observe the agents with Langfuse, register a Langfuse account and install the langfuse package (a sample .env file is sketched after this list).

  • For the Redis demo, a Redis database must be installed locally.

  • For the local music demo, install the sox tool; on a Mac:

brew install sox
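
The scripts in this post load credentials from a .env file via python-dotenv (pip install python-dotenv), and use the logfire package (pip install logfire) for OTLP instrumentation. A minimal sketch of the .env file, with placeholder values; the variable names match what the scripts below read, and LANGFUSE_HOST points at Langfuse Cloud by default:

OPENAI_API_KEY=sk-...
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
LANGFUSE_HOST=https://cloud.langfuse.com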

File Operations

We use the filesystem MCP server for local file operations (official page: https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem), run locally via NPX, with the working path set to /Users/admin/papers, a folder containing paper PDFs I have downloaded.

My input question:

帮我找下这个路径下面的所有关于LLAMA的论文 (Find all the LLAMA-related papers under this path.)

Below is the script that performs the file-system operations with the openai-agents package; the Python code is as follows:

import os
import base64
import logfire
import shutil
import asyncio
from agents import Agent, Runner
from agents.mcp import MCPServer, MCPServerStdio

from dotenv import load_dotenv

load_dotenv()

# Build Basic Auth header.
LANGFUSE_AUTH = base64.b64encode(
    f"{os.environ.get('LANGFUSE_PUBLIC_KEY')}:{os.environ.get('LANGFUSE_SECRET_KEY')}".encode()
).decode()

# Configure OpenTelemetry endpoint & headers
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = os.environ.get("LANGFUSE_HOST") + "/api/public/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = f"Authorization=Basic {LANGFUSE_AUTH}"

# Configure logfire instrumentation.
logfire.configure(
    service_name='my_agent_service',
    send_to_logfire=False
)
# This method automatically patches the OpenAI Agents SDK to send logs via OTLP to Langfuse.
logfire.instrument_openai_agents()


async def run(mcp_server: MCPServer):
    agent = Agent(
        name="File System Assistant",
        model="gpt-4o",
        instructions="Use the tools to read, write, and search the filesystem and answer questions based on those files.",
        mcp_servers=[mcp_server],
    )

    message = "帮我找下这个路径下面的所有关于LLAMA的论文"
    print(f"Running: {message}")
    result = await Runner.run(starting_agent=agent, input=message)
    print(result.final_output)


async def main():
    samples_dir = "/Users/admin/papers"

    async with MCPServerStdio(
        name="Filesystem Server, via npx",
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", samples_dir],
        },
    ) as server:
        await run(server)


if __name__ == "__main__":
    # Let's make sure the user has npx installed
    if not shutil.which("npx"):
        raise RuntimeError("npx is not installed. Please install it with `npm install -g npx`.")

    asyncio.run(main())

The script's output is as follows:

Secure MCP Filesystem Server running on stdio
Allowed directories: [ '/Users/admin/papers' ]
Running: 帮我找下这个路径下面的所有关于LLAMA的论文
15:46:36.985 OpenAI Agents trace: Agent workflow
15:46:36.986 Agent run: 'File System Assistant'
15:46:36.986 OpenAI agents: mcp_tools span
15:46:37.007 Responses API with 'gpt-4o'
15:46:40.497 Function: directory_tree
15:46:40.500 Responses API with 'gpt-4o'
15:46:43.545 Function: list_allowed_directories
15:46:43.549 Responses API with 'gpt-4o'
15:46:46.215 Function: search_files
15:46:46.226 Responses API with 'gpt-4o'
我在指定路径下找到了以下关于LLAMA的论文:

1. LLaMA.pdf
2. The Llama 3 Herd of Models.pdf
3. llama-2-70b.pdf

Observing this workflow in Langfuse gives the following view:

(Figure: file operations with an agent and MCP, traced in Langfuse)

As this shows, the agent first interprets the user's question, then calls the MCP server tools directory_tree, list_allowed_directories, and search_files, and finally answers. The flow matches the terminal output above, but Langfuse reveals far more detail: each tool call's inputs and outputs, the LLM call's parameters and cost, and the latency of every stage as well as of the whole run.

Tips

The openai-agents package provides two kinds of MCP servers:

  • stdio servers run as subprocesses of your application; you can think of them as running locally.
  • HTTP over SSE servers run remotely; you connect to them via a URL.

These two kinds of servers correspond to the classes MCPServerStdio and MCPServerSse in the package, respectively.
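
A minimal construction sketch for each kind, with parameters lifted from the examples in this post (the path and URL are placeholders); both classes are used as async context managers:

from agents.mcp import MCPServerSse, MCPServerStdio

# stdio: the MCP server is spawned as a subprocess of this application.
stdio_server = MCPServerStdio(
    name="Filesystem Server, via npx",
    params={
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/admin/papers"],
    },
)

# SSE: the MCP server runs elsewhere; we connect to it by URL.
sse_server = MCPServerSse(
    name="SSE Python Server",
    params={"url": "http://localhost:8000/sse"},
)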

Redis Operations

We use the Redis MCP Server (official page: https://glama.ai/mcp/servers/@farhankaz/redis-mcp), run via NPX, to operate on a local Redis database.

My input question:

将所有这个路径下的PDF论文的绝对路径,都保存至redis中db为0, key=papers中的set中。 (Save the absolute paths of all PDF papers under this path into the set at key=papers in Redis db 0.)

Here we use two MCP servers at the same time, filesystem and Redis MCP Server, so that the PDF paths under the configured directory can be saved into Redis.

Below is the script that combines file-system and Redis operations with the openai-agents package; the Python code is as follows:

import os
import base64
import logfire
import shutil
import asyncio
from agents import Agent, Runner
from agents.mcp import MCPServer, MCPServerStdio

from dotenv import load_dotenv

load_dotenv()

# Build Basic Auth header.
LANGFUSE_AUTH = base64.b64encode(
    f"{os.environ.get('LANGFUSE_PUBLIC_KEY')}:{os.environ.get('LANGFUSE_SECRET_KEY')}".encode()
).decode()

# Configure OpenTelemetry endpoint & headers
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = os.environ.get("LANGFUSE_HOST") + "/api/public/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = f"Authorization=Basic {LANGFUSE_AUTH}"

# Configure logfire instrumentation.
logfire.configure(
    service_name='my_agent_service',
    send_to_logfire=False
)
# This method automatically patches the OpenAI Agents SDK to send logs via OTLP to Langfuse.
logfire.instrument_openai_agents()


async def run(mcp_server_1: MCPServer, mcp_server_2: MCPServer):
    agent = Agent(
        name="File System Assistant",
        model="gpt-4o",
        instructions="Use the tools to read, write, and search the filesystem and answer questions based on those files.",
        mcp_servers=[mcp_server_1, mcp_server_2],
    )

    message = "将所有这个路径下的PDF论文的绝对路径,都保存至redis中db为0, key=papers中的set中。"
    print(f"Running: {message}")
    result = await Runner.run(starting_agent=agent, input=message)
    print(result.final_output)


async def main():
    samples_dir = "/Users/admin/papers"

    file_server = MCPServerStdio(
        name="Filesystem Server, via npx",
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", samples_dir],
        },
    )

    redis_server = MCPServerStdio(
        name="Redis Server, via npx",
        params={
            "command": "npx",
            "args": ["redis-mcp", "--redis-host", "localhost", "--redis-port", "6379"],
            "disabled": False
        }
    )

    async with file_server as file_server, redis_server as redis_server:
        await run(file_server, redis_server)


if __name__ == "__main__":
    # Let's make sure the user has npx installed
    if not shutil.which("npx"):
        raise RuntimeError("npx is not installed. Please install it with `npm install -g npx`.")

    asyncio.run(main())

Running the script produces the following output:

Secure MCP Filesystem Server running on stdio
Allowed directories: [ '/Users/admin/papers' ]
npm WARN exec The following package was not found and will be installed: redis-mcp@0.0.4
Running: 将所有这个路径下的PDF论文的绝对路径,都保存至redis中db为0, key=papers中的set中。
16:00:56.948 OpenAI Agents trace: Agent workflow
16:00:56.950 Agent run: 'File System Assistant'
16:00:56.950 OpenAI agents: mcp_tools span
16:00:56.954 OpenAI agents: mcp_tools span
16:00:56.973 Responses API with 'gpt-4o'
16:00:59.962 Function: list_allowed_directories
16:00:59.964 Responses API with 'gpt-4o'
16:01:01.559 Function: search_files
16:01:01.578 Responses API with 'gpt-4o'
16:01:11.254 Function: sadd
16:01:11.294 Responses API with 'gpt-4o'
已将路径下所有PDF论文的绝对路径保存至 Redis 中,键为 `papers` 的集合中。

The Langfuse trace is shown below:

(Figure: Redis operations with an agent and MCP, traced in Langfuse)

Note that the agent used the sadd function to store the data as a Redis set.

Checking the data in Redis gives the following:

(Figure: the data stored in Redis)
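
The same check can be run from the command line; a sketch, assuming a default local Redis instance (-n 0 selects db 0):

redis-cli -n 0 SMEMBERS papers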

The Official SSE Example

The MCP server implemented in the official openai-agents SSE example is as follows:

import random

import requests
from mcp.server.fastmcp import FastMCP

# Create server
mcp = FastMCP("Echo Server")


@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    print(f"[debug-server] add({a}, {b})")
    return a + b


@mcp.tool()
def get_secret_word() -> str:
    print("[debug-server] get_secret_word()")
    return random.choice(["apple", "banana", "cherry"])


@mcp.tool()
def get_current_weather(city: str) -> str:
    print(f"[debug-server] get_current_weather({city})")

    endpoint = "https://wttr.in"
    response = requests.get(f"{endpoint}/{city}")
    return response.text


if __name__ == "__main__":
    mcp.run(transport="sse")

This self-built MCP server exposes three callable tools (a note on its SSE endpoint follows the list):

  • add: add two numbers
  • get_secret_word: return a secret word
  • get_current_weather: return the current weather for the given city
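
With transport="sse", FastMCP serves the MCP endpoint over HTTP; in this example it listens on port 8000, so clients connect to http://localhost:8000/sse. The client script below starts the server automatically as a subprocess, but it can also be started by hand; assuming the server code above is saved as openai_agents_local_mcp_server.py (the file name the client script uses):

uv run openai_agents_local_mcp_server.py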

On the MCP client side, we connect with the MCPServerSse class; the Python code is as follows:

import asyncio
import os
import shutil
import base64
import logfire
import subprocess
import time
from typing import Any

from agents import Agent, Runner
from agents.mcp import MCPServer, MCPServerSse
from agents.model_settings import ModelSettings

from dotenv import load_dotenv

load_dotenv()

# Build Basic Auth header.
LANGFUSE_AUTH = base64.b64encode(
    f"{os.environ.get('LANGFUSE_PUBLIC_KEY')}:{os.environ.get('LANGFUSE_SECRET_KEY')}".encode()
).decode()

# Configure OpenTelemetry endpoint & headers
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = os.environ.get("LANGFUSE_HOST") + "/api/public/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = f"Authorization=Basic {LANGFUSE_AUTH}"

# Configure logfire instrumentation.
logfire.configure(
    service_name='my_agent_service',
    send_to_logfire=False
)
# This method automatically patches the OpenAI Agents SDK to send logs via OTLP to Langfuse.
logfire.instrument_openai_agents()


async def run(mcp_server: MCPServer):
    agent = Agent(
        name="Assistant",
        instructions="Use the tools to answer the questions.",
        mcp_servers=[mcp_server],
        model_settings=ModelSettings(tool_choice="required"),
    )

    # Use the `add` tool to add two numbers
    message = "Add these numbers: 7 and 22."
    print(f"Running: {message}")
    result = await Runner.run(starting_agent=agent, input=message)
    print(result.final_output)

    # Run the `get_current_weather` tool
    message = "What's the weather in Shanghai?"
    print(f"\n\nRunning: {message}")
    result = await Runner.run(starting_agent=agent, input=message)
    print(result.final_output)

    # Run the `get_secret_word` tool
    message = "What's the secret word?"
    print(f"\n\nRunning: {message}")
    result = await Runner.run(starting_agent=agent, input=message)
    print(result.final_output)


async def main():
    async with MCPServerSse(
        name="SSE Python Server",
        params={
            "url": "http://localhost:8000/sse",
        },
    ) as server:
        await run(server)


if __name__ == "__main__":
    # Let's make sure the user has uv installed
    if not shutil.which("uv"):
        raise RuntimeError(
            "uv is not installed. Please install it: https://docs.astral.sh/uv/getting-started/installation/"
        )

    # We'll run the SSE server in a subprocess. Usually this would be a remote server, but for this
    # demo, we'll run it locally at http://localhost:8000/sse
    process: subprocess.Popen[Any] | None = None
    try:
        this_dir = os.path.dirname(os.path.abspath(__file__))
        server_file = os.path.join(this_dir, "openai_agents_local_mcp_server.py")

        print("Starting SSE server at http://localhost:8000/sse ...")

        # Run `uv run server.py` to start the SSE server
        process = subprocess.Popen(["uv", "run", server_file])
        # Give it 3 seconds to start
        time.sleep(3)

        print("SSE server started. Running example...\n\n")
    except Exception as e:
        print(f"Error starting SSE server: {e}")
        exit(1)

    try:
        asyncio.run(main())
    finally:
        if process:
            process.terminate()

This script launches the MCP server as a subprocess (run via uv) and connects to it with the MCPServerSse class.

Running the script produces the following output:

Starting SSE server at http://localhost:8000/sse ...
INFO: Started server process [97601]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
SSE server started. Running example...


INFO: 127.0.0.1:59992 - "GET /sse HTTP/1.1" 200 OK
INFO: 127.0.0.1:59994 - "POST /messages/?session_id=8ef9d738e0b249cf81e70ba029c0fb77 HTTP/1.1" 202 Accepted
Running: Add these numbers: 7 and 22.
03:06:38.062 OpenAI Agents trace: Agent workflow
03:06:38.063 Agent run: 'Assistant'
03:06:38.063 OpenAI agents: mcp_tools span
INFO: 127.0.0.1:59994 - "POST /messages/?session_id=8ef9d738e0b249cf81e70ba029c0fb77 HTTP/1.1" 202 Accepted
INFO: 127.0.0.1:59994 - "POST /messages/?session_id=8ef9d738e0b249cf81e70ba029c0fb77 HTTP/1.1" 202 Accepted
[04/03/25 11:06:38] INFO Processing request of type ListToolsRequest server.py:534
03:06:38.093 Responses API with 'gpt-4o'
03:06:41.730 Function: add
INFO: 127.0.0.1:59994 - "POST /messages/?session_id=8ef9d738e0b249cf81e70ba029c0fb77 HTTP/1.1" 202 Accepted
[04/03/25 11:06:41] INFO Processing request of type CallToolRequest server.py:534
[debug-server] add(7, 22)
03:06:41.735 Responses API with 'gpt-4o'
The sum of 7 and 22 is 29.


Running: What's the weather in Shanghai?
03:06:42.987 OpenAI Agents trace: Agent workflow
03:06:42.987 Agent run: 'Assistant'
03:06:42.988 OpenAI agents: mcp_tools span
INFO: 127.0.0.1:59994 - "POST /messages/?session_id=8ef9d738e0b249cf81e70ba029c0fb77 HTTP/1.1" 202 Accepted
[04/03/25 11:06:42] INFO Processing request of type ListToolsRequest server.py:534
03:06:42.995 Responses API with 'gpt-4o'
03:06:43.936 Function: get_current_weather
INFO: 127.0.0.1:59994 - "POST /messages/?session_id=8ef9d738e0b249cf81e70ba029c0fb77 HTTP/1.1" 202 Accepted
[04/03/25 11:06:43] INFO Processing request of type CallToolRequest server.py:534
[debug-server] get_current_weather(Shanghai)
03:06:45.572 Responses API with 'gpt-4o'
### Current Weather in Shanghai:
- **Condition:** Sunny
- **Temperature:** 14°C (feels like 13°C)
- **Wind:** 13 km/h ←
- **Visibility:** 10 km
- **Precipitation:** 0.0 mm

### Forecast:
- **Morning:** Partly Cloudy, 14°C
- **Noon:** Sunny, 17°C
- **Evening:** Sunny, 14°C
- **Night:** Clear, 12°C

No precipitation expected throughout the day.


Running: What's the secret word?
03:06:48.980 OpenAI Agents trace: Agent workflow
03:06:48.981 Agent run: 'Assistant'
03:06:48.982 OpenAI agents: mcp_tools span
INFO: 127.0.0.1:60084 - "POST /messages/?session_id=8ef9d738e0b249cf81e70ba029c0fb77 HTTP/1.1" 202 Accepted
[04/03/25 11:06:48] INFO Processing request of type ListToolsRequest server.py:534
03:06:48.991 Responses API with 'gpt-4o'
03:06:49.755 Function: [Scrubbed due to 'secret']
INFO: 127.0.0.1:60084 - "POST /messages/?session_id=8ef9d738e0b249cf81e70ba029c0fb77 HTTP/1.1" 202 Accepted
[04/03/25 11:06:49] INFO Processing request of type CallToolRequest server.py:534
[debug-server] get_secret_word()
03:06:49.762 Responses API with 'gpt-4o'
The secret word is "apple."
INFO: Shutting down
INFO: Waiting for background tasks to complete.

The output above shows the interaction between the MCP server and the MCP client, as well as the client's overall flow.

Taking the question Add these numbers: 7 and 22. as an example, the corresponding run can be inspected in Langfuse in the same way.

Playing Music

I have sox installed locally; it can play a local music file from the command line:

play xxx.mp3

Now let's implement a music-playing MCP server ourselves; the Python code is as follows:

import os

from mcp.server.fastmcp import FastMCP

# Create server
mcp = FastMCP("Music Play Server")


@mcp.tool()
def play_music(music_file_path: str):
    """
    Play a music file.

    Args:
        music_file_path: The absolute path to the music file to play.

    Returns:
        A message indicating that the music is playing.
    """
    print(f"[debug-server] play_music({music_file_path})")
    os.system(f"play {music_file_path}")
    return f"Playing {music_file_path}"


if __name__ == "__main__":
    mcp.run(transport="sse")

This MCP server exposes just one callable tool, which plays the music file at the given path.
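
One caveat: os.system passes the path through a shell, so a path containing spaces or quotes can break the command. A safer variant, sketched here under the same assumption (sox's play command on the PATH) but not what the demo above uses, passes the arguments as a list:

import subprocess

def play_music_safe(music_file_path: str) -> str:
    # List-form arguments bypass the shell, so spaces or quotes in the path are safe.
    subprocess.run(["play", music_file_path], check=True)
    return f"Playing {music_file_path}"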

My input question:

给我播放这个目录下的小情歌这首歌曲。 (Play the song 小情歌 from this directory.)

On the MCP client side, we use filesystem for file operations, pointed at the directory containing the music files, and the MCP server above to play the song. The Python code is as follows:

import asyncio
import os
import shutil
import base64
import logfire
import subprocess
import time
from typing import Any

from agents import Agent, Runner
from agents.mcp import MCPServer, MCPServerSse, MCPServerStdio
from agents.model_settings import ModelSettings

from dotenv import load_dotenv

load_dotenv()

# Build Basic Auth header.
LANGFUSE_AUTH = base64.b64encode(
    f"{os.environ.get('LANGFUSE_PUBLIC_KEY')}:{os.environ.get('LANGFUSE_SECRET_KEY')}".encode()
).decode()

# Configure OpenTelemetry endpoint & headers
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = os.environ.get("LANGFUSE_HOST") + "/api/public/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = f"Authorization=Basic {LANGFUSE_AUTH}"

# Configure logfire instrumentation.
logfire.configure(
    service_name='my_agent_service',
    send_to_logfire=False
)
# This method automatically patches the OpenAI Agents SDK to send logs via OTLP to Langfuse.
logfire.instrument_openai_agents()


async def run(mcp_server1: MCPServer, mcp_server2: MCPServer):
    agent = Agent(
        name="Assistant",
        instructions="Use the tools to help me create a music operation.",
        mcp_servers=[mcp_server1, mcp_server2],
        model_settings=ModelSettings(tool_choice="required"),
    )

    # Ask the agent to locate the song in the directory and play it.
    message = "给我播放这个目录下的小情歌这首歌曲。"
    print(f"Running: {message}")
    result = await Runner.run(starting_agent=agent, input=message)
    print(result.final_output)


async def main():
    music_server = MCPServerSse(
        name="Music Play Server",
        params={
            "url": "http://localhost:8000/sse",
        },
    )
    file_system_server = MCPServerStdio(
        name="Filesystem Server, via npx",
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/admin/Music/my_music"],
        }
    )

    async with music_server as mcp_server1, file_system_server as mcp_server2:
        await run(mcp_server1=mcp_server1, mcp_server2=mcp_server2)


if __name__ == "__main__":
    # Let's make sure the user has uv installed
    if not shutil.which("uv"):
        raise RuntimeError(
            "uv is not installed. Please install it: https://docs.astral.sh/uv/getting-started/installation/"
        )

    # We'll run the SSE server in a subprocess. Usually this would be a remote server, but for this
    # demo, we'll run it locally at http://localhost:8000/sse
    process: subprocess.Popen[Any] | None = None
    try:
        this_dir = os.path.dirname(os.path.abspath(__file__))
        server_file = os.path.join(this_dir, "openai_agents_music_server.py")

        print("Starting SSE server at http://localhost:8000/sse ...")

        # Run `uv run server.py` to start the SSE server
        process = subprocess.Popen(["uv", "run", server_file])
        # Give it 3 seconds to start
        time.sleep(3)

        print("SSE server started. Running example...\n\n")
    except Exception as e:
        print(f"Error starting SSE server: {e}")
        exit(1)

    try:
        asyncio.run(main())
    finally:
        if process:
            process.terminate()

The output of the run is as follows:

Starting SSE server at http://localhost:8000/sse ...
INFO: Started server process [28954]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
SSE server started. Running example...


INFO: 127.0.0.1:53841 - "GET /sse HTTP/1.1" 200 OK
INFO: 127.0.0.1:53843 - "POST /messages/?session_id=2fcc60b6348a4b60aee082a79dba8faa HTTP/1.1" 202 Accepted
INFO: 127.0.0.1:53843 - "POST /messages/?session_id=2fcc60b6348a4b60aee082a79dba8faa HTTP/1.1" 202 Accepted
Secure MCP Filesystem Server running on stdio
Allowed directories: [ '/Users/admin/Music/my_music' ]
Running: 给我播放这个目录下的小情歌这首歌曲。
12:01:24.716 OpenAI Agents trace: Agent workflow
12:01:24.717 Agent run: 'Assistant'
12:01:24.718 OpenAI agents: mcp_tools span
INFO: 127.0.0.1:53843 - "POST /messages/?session_id=2fcc60b6348a4b60aee082a79dba8faa HTTP/1.1" 202 Accepted
[04/03/25 20:01:24] INFO Processing request of type ListToolsRequest server.py:534
12:01:24.724 OpenAI agents: mcp_tools span
12:01:24.745 Responses API with 'gpt-4o'
12:01:28.638 Function: list_allowed_directories
12:01:28.640 Responses API with 'gpt-4o'
12:01:29.854 Function: search_files
12:01:29.861 Responses API with 'gpt-4o'
12:01:31.216 Function: play_music
INFO: 127.0.0.1:53855 - "POST /messages/?session_id=2fcc60b6348a4b60aee082a79dba8faa HTTP/1.1" 202 Accepted
[04/03/25 20:01:31] INFO Processing request of type CallToolRequest server.py:534
[debug-server] play_music(/Users/admin/Music/my_music/小情歌_苏打绿.mp3)

/Users/admin/Music/my_music/小情歌_苏打绿.mp3:

File Size: 4.38M Bit Rate: 128k
Encoding: MPEG audio
Channels: 2 @ 16-bit
Samplerate: 44100Hz
Replaygain: off
Duration: 00:04:33.67

In:100% 00:04:33.63 [00:00:00.03] Out:13.1M [ | ] Clip:6
play WARN rate: rate clipped 3 samples; decrease volume?
play WARN sox: `coreaudio' output clipped 3 samples; decrease volume?
Done.
12:06:05.160 Responses API with 'gpt-4o'
正在播放《小情歌》 by 苏打绿。享受音乐吧!🎵
INFO: Shutting down
INFO: Waiting for background tasks to complete.

The program successfully played 小情歌 and, after the song finished, returned its answer to the question.

Summary

This post covered how to use MCP within OpenAI's Agent SDK to improve the efficiency and observability of AI workflows. It opened with OpenAI's support for the MCP protocol and what that says about where agents are heading.

It then worked through several practical examples built on the openai-agents-python project: file operations, Redis operations, and music playback. These examples showed how to combine MCP servers, such as filesystem and Redis MCP Server, to complete more involved tasks.

It also showed how to build a custom MCP server, such as the music-playing server, and call it remotely over SSE.

Through these examples, readers can pick up the basics of using MCP in openai-agents-python and see how MCP enables a range of automation tasks.

In short, MCP provides a powerful and flexible way to extend what AI agents can do, letting them interact with all manner of tools and services to accomplish more complex tasks.

References

  1. New tools for building agents: https://openai.com/index/new-tools-for-building-agents/
  2. OpenAI Agents SDK: https://github.com/openai/openai-agents-python
  3. Filesystem MCP Server: https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem
  4. Redis MCP Server: https://glama.ai/mcp/servers/@farhankaz/redis-mcp
  5. Model context protocol (MCP) in OpenAI Agents SDK: https://openai.github.io/openai-agents-python/mcp/
  6. Playing music from the command line: https://www.cnblogs.com/pinganzi/p/7353074.html


