Configure Prompt Decorators
When generating specialized content with large language models (LLMs), prompts are often designed and configured in advance to act as "behavioral rules" for subsequent interactions, constraining the model to operate within established guidelines and safety standards.
This guide shows how to use the ai-prompt-decorator plugin in APISIX to configure prompt decorators that prepend and append additional content around user-defined messages. OpenAI is used as the example upstream service, but the same approach applies to other LLM providers.
Prerequisites
Obtain an OpenAI API Key
Before proceeding, create an OpenAI account and generate an API key. Optionally, save the key as an environment variable:
export OPENAI_API_KEY=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 # replace with your API key
Create a Route
In this example, you will:
- prepend a system prompt that asks the model to answer briefly and conceptually;
- append a user prompt that asks the model to end its answer with a simple analogy.
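Conceptually, the plugin rewrites the request's messages array before forwarding it upstream: configured prepend entries go before the user's messages, and append entries go after them. A minimal Python sketch of this decoration logic (a simplified approximation for illustration, not the plugin's actual implementation):

```python
# Simplified sketch of how ai-prompt-decorator transforms the messages array.
def decorate(messages, prepend=None, append=None):
    """Return a new messages list with prepend items before and
    append items after the user's original messages."""
    return list(prepend or []) + list(messages) + list(append or [])

original = [{"role": "user", "content": "What is mTLS authentication?"}]
decorated = decorate(
    original,
    prepend=[{"role": "system", "content": "Answer briefly and conceptually."}],
    append=[{"role": "user", "content": "End the answer with a simple analogy."}],
)
# The upstream LLM now receives three messages in order: system, user, user.
print([m["role"] for m in decorated])  # ['system', 'user', 'user']
```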
Create a route to the Chat Completion endpoint and pre-configure the prompt templates:
- Admin API
- Ingress Controller
curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
-H "X-API-KEY: ${ADMIN_API_KEY}" \
-d '{
"id": "ai-prompt-decorator-route",
"uri": "/anything",
"plugins": {
"ai-proxy": {
"provider": "openai",
"auth": {
"header": {
"Authorization": "Bearer '"$OPENAI_API_KEY"'"
}
},
"options": {
"model": "gpt-4"
}
},
"ai-prompt-decorator": {
"prepend":[
{
"role": "system",
"content": "Answer briefly and conceptually."
}
],
"append":[
{
"role": "user",
"content": "End the answer with a simple analogy."
}
]
}
}
}'
The system message configured under prepend is inserted before the user's message to set the assistant's behavior.
The user message configured under append is added after the user's original prompt.
- Gateway API
- APISIX CRD
prompt-decorator-route.yaml
apiVersion: apisix.apache.org/v1alpha1
kind: PluginConfig
metadata:
namespace: ingress-apisix
name: ai-prompt-decor-plugin-config
spec:
plugins:
- name: ai-proxy
config:
provider: openai
auth:
header:
Authorization: "Bearer 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"
options:
model: gpt-4
- name: ai-prompt-decorator
config:
prepend:
- role: system
content: Answer briefly and conceptually.
append:
- role: user
content: End the answer with a simple analogy.
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
namespace: ingress-apisix
name: ai-prompt-decorator-route
spec:
parentRefs:
- name: apisix
rules:
- matches:
- path:
type: Exact
value: /anything
filters:
- type: ExtensionRef
extensionRef:
group: apisix.apache.org
kind: PluginConfig
name: ai-prompt-decor-plugin-config
prompt-decorator-route.yaml
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
namespace: ingress-apisix
name: ai-prompt-decorator-route
spec:
ingressClassName: apisix
http:
- name: ai-prompt-decorator-route
match:
paths:
- /anything
plugins:
- name: ai-proxy
enable: true
config:
provider: openai
auth:
header:
Authorization: "Bearer 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"
options:
model: gpt-4
- name: ai-prompt-decorator
enable: true
config:
prepend:
- role: system
content: Answer briefly and conceptually.
append:
- role: user
content: End the answer with a simple analogy.
The prepend block inserts a system message before the user's message to define how the assistant should behave.
The append block adds an additional user message after the user's own prompt.
Apply the configuration to your cluster:
kubectl apply -f prompt-decorator-route.yaml
Verify
Send a POST request to the route with a sample message:
curl "http://127.0.0.1:9080/anything" -X POST \
-H "Content-Type: application/json" \
-d '{
"messages": [{ "role": "user", "content": "What is mTLS authentication?" }]
}'
You should receive a response similar to the following:
{
"choices": [
{
"finish_reason": "stop",
"index": 0,
"message": {
"content": "Mutual TLS (mTLS) authentication is a security protocol that ensures both the client and server authenticate each other's identity before establishing a connection. ... Think of mTLS as a secret handshake between two friends meeting at a club. Both must know the handshake to get in, ensuring they recognize and trust each other before entering.",
"role": "assistant"
}
}
],
"created": 1723193502,
"id": "chatcmpl-9uFdWDlwKif6biCt9DpG0xgedEamg",
"model": "gpt-4o-2024-05-13",
"object": "chat.completion",
"system_fingerprint": "fp_abc28019ad",
"usage": {
"completion_tokens": 124,
"prompt_tokens": 31,
"total_tokens": 155
}
}
As shown, the model's answer follows the "brief and conceptual" instruction and ends with a simple analogy.
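To consume the response programmatically, you can extract the assistant's reply and token usage from the Chat Completion JSON. A minimal sketch, parsing an abridged copy of the response shown above:

```python
import json

# Abridged Chat Completion response as returned through the route.
raw = '''{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Mutual TLS (mTLS) authentication is a security protocol... Think of mTLS as a secret handshake between two friends."
      }
    }
  ],
  "model": "gpt-4o-2024-05-13",
  "object": "chat.completion",
  "usage": {"completion_tokens": 124, "prompt_tokens": 31, "total_tokens": 155}
}'''

resp = json.loads(raw)
# The assistant's reply lives in choices[0].message.content.
answer = resp["choices"][0]["message"]["content"]
# Token accounting is reported under usage.
total_tokens = resp["usage"]["total_tokens"]
print(total_tokens)  # 155
```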
Next Steps
You have now learned how to configure prompt decorators when integrating APISIX with LLM services.
If you want to integrate with OpenAI's streaming API, you can use the proxy-buffering plugin to disable NGINX's proxy_buffering directive, which prevents server-sent events (SSE) from being buffered.