
error-log-logger

The error-log-logger plugin pushes APISIX error logs (error.log) in batches to TCP, Apache SkyWalking, Apache Kafka, or ClickHouse servers. You can specify the severity level of the logs the plugin sends.

The plugin is disabled by default. Once enabled, it automatically starts pushing error logs to the remote server. You should configure the remote server details only in the plugin metadata, not on other resources such as routes.

Examples

The examples below show how to configure the error-log-logger plugin in different scenarios.

If you are using API7 Enterprise, the plugin is enabled by default. If you are using APISIX, the error-log-logger plugin is disabled by default. To enable it, add the plugin to the configuration file as follows:

config.yaml
plugins:
  - ...
  - error-log-logger

Reload APISIX for the change to take effect.
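The reload can be done with the APISIX CLI, for instance (assuming a standard installation where the `apisix` command is on your PATH):

```shell
# Reload APISIX so the updated plugin list takes effect.
# This signals the running workers to reload without downtime.
apisix reload
```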

Send Logs to a TCP Server

The following example shows how to configure the error-log-logger plugin to send error logs to a TCP server.

Start a netcat listener on port 19000 as a sample TCP server:

nc -l 19000

Configure the plugin metadata for error-log-logger:

curl "http://127.0.0.1:9180/apisix/admin/plugin_metadata/error-log-logger" -X PUT \
  -H "X-API-KEY: ${ADMIN_API_KEY}" \
  -d '{
    "tcp": {
      "host": "192.168.2.103",
      "port": 19000
    },
    "level": "INFO"
  }'

❶ Replace with your internal IP address.

❷ Set to the port your TCP server is listening on.

❸ Set the severity level to INFO so that most logs are sent, making verification easier.
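To confirm the metadata was stored, you can read it back from the Admin API (same address and API key as above):

```shell
# Fetch the stored plugin metadata; the response should echo back
# the tcp host/port and level configured above.
curl "http://127.0.0.1:9180/apisix/admin/plugin_metadata/error-log-logger" \
  -H "X-API-KEY: ${ADMIN_API_KEY}"
```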

To verify, you can manually generate a warn-level log by reloading APISIX.

In the terminal session where netcat is listening, you should see a log entry similar to the following:

2025/01/26 20:15:29 [warn] 211#211: *35552 [lua] plugin.lua:205: load(): new plugins: {"cas-auth":true,"real-ip":true,"ai":true,"client-control":true,"proxy-control":true,"request-id":true,"zipkin":true,"ext-plugin-pre-req":true,"fault-injection":true,"mocking":true,"serverless-pre-function":true,"cors":true,"ip-restriction":true,"ua-restriction":true,"referer-restriction":true,"csrf":true,"uri-blocker":true,"request-validation":true,"chaitin-waf":true,"multi-auth":true,"openid-connect":true,"authz-casbin":true,"authz-casdoor":true,"wolf-rbac":true,"ldap-auth":true,"hmac-auth":true,"basic-auth":true,"jwt-auth":true,"redirect":true,"key-auth":true,"consumer-restriction":true,"attach-consumer-label":true,"authz-keycloak":true,"proxy-cache":true,"body-transformer":true,"ai-prompt-template":true,"ai-prompt-decorator":true,"proxy-mirror":true,"proxy-rewrite":true,"workflow":true,"api-breaker":true,"ai-proxy":true,"limit-conn":true,"limit-count":true,"limit-req":true,"gzip":true,"server-info":true,"traffic-split":true,"response-rewrite":true,"degraphql":true,"kafka-proxy":true,"grpc-transcode":true,"grpc-web":true,"http-dubbo":true,"public-api":true,"prometheus":true,"datadog":true,"loki-logger":true,"elasticsearch-logger":true,"echo":true,"loggly":true,"http-logger":true,"splunk-hec-logging":true,"skywalking-logger":true,"google-cloud-logging":true,"sls-logger":true,"tcp-logger":true,"kafka-logger":true,"rocketmq-logger":true,"syslog":true,"udp-logger":true,"file-logger":true,"clickhouse-logger":true,"tencent-cloud-cls":true,"inspect":true,"example-plugin":true,"aws-lambda":true,"azure-functions":true,"openwhisk":true,"openfunction":true,"error-log-logger":true,"ext-plugin-post-req":true,"ext-plugin-post-resp":true,"serverless-post-function":true,"opa":true,"forward-auth":true,"jwe-decrypt":true}, context: init_worker_by_lua*

Send Logs to SkyWalking

The following example shows how to configure the error-log-logger plugin to send error logs to SkyWalking.

Start SkyWalking storage, OAP, and Booster UI with Docker Compose, following the SkyWalking documentation. Once set up, the OAP server should be listening on port 12800, and you should be able to access the UI at http://localhost:8080.
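As a sketch, bringing up the stack typically amounts to running Docker Compose against the compose file from the SkyWalking documentation (the file's name and contents come from that documentation, not from this guide):

```shell
# Start SkyWalking storage, OAP, and UI in the background,
# using the compose file from the SkyWalking docs.
docker compose up -d

# Once the containers are healthy, the UI should respond on port 8080.
curl -sI http://localhost:8080
```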

Configure the plugin metadata for error-log-logger:

curl "http://127.0.0.1:9180/apisix/admin/plugin_metadata/error-log-logger" -X PUT \
  -H "X-API-KEY: ${ADMIN_API_KEY}" \
  -d '{
    "skywalking": {
      "endpoint_addr": "http://192.168.2.103:12800/v3/logs"
    },
    "level": "INFO"
  }'

❶ Replace with your SkyWalking server address.

❷ Set the severity level to INFO so that most logs are sent, making verification easier.

To verify, you can manually generate a warn-level log by reloading APISIX.

In the SkyWalking UI, navigate to General Service > Services. You should see a service named APISIX with a log entry similar to the following:

2025/01/27 07:40:06 [warn] 211#211: *35552 [lua] plugin.lua:205: load(): new plugins: {"cas-auth":true,"real-ip":true,"ai":true,"client-control":true,"proxy-control":true,"request-id":true,"zipkin":true,"ext-plugin-pre-req":true,"fault-injection":true,"mocking":true,"serverless-pre-function":true,"cors":true,"ip-restriction":true,"ua-restriction":true,"referer-restriction":true,"csrf":true,"uri-blocker":true,"request-validation":true,"chaitin-waf":true,"multi-auth":true,"openid-connect":true,"authz-casbin":true,"authz-casdoor":true,"wolf-rbac":true,"ldap-auth":true,"hmac-auth":true,"basic-auth":true,"jwt-auth":true,"redirect":true,"key-auth":true,"consumer-restriction":true,"attach-consumer-label":true,"authz-keycloak":true,"proxy-cache":true,"body-transformer":true,"ai-prompt-template":true,"ai-prompt-decorator":true,"proxy-mirror":true,"proxy-rewrite":true,"workflow":true,"api-breaker":true,"ai-proxy":true,"limit-conn":true,"limit-count":true,"limit-req":true,"gzip":true,"server-info":true,"traffic-split":true,"response-rewrite":true,"degraphql":true,"kafka-proxy":true,"grpc-transcode":true,"grpc-web":true,"http-dubbo":true,"public-api":true,"prometheus":true,"datadog":true,"loki-logger":true,"elasticsearch-logger":true,"echo":true,"loggly":true,"http-logger":true,"splunk-hec-logging":true,"skywalking-logger":true,"google-cloud-logging":true,"sls-logger":true,"tcp-logger":true,"kafka-logger":true,"rocketmq-logger":true,"syslog":true,"udp-logger":true,"file-logger":true,"clickhouse-logger":true,"tencent-cloud-cls":true,"inspect":true,"example-plugin":true,"aws-lambda":true,"azure-functions":true,"openwhisk":true,"openfunction":true,"error-log-logger":true,"ext-plugin-post-req":true,"ext-plugin-post-resp":true,"serverless-post-function":true,"opa":true,"forward-auth":true,"jwe-decrypt":true}, context: init_worker_by_lua*

You should also observe logs of other severity levels, such as error, emerg, and info, as they are generated.

Send Logs to ClickHouse

The following example shows how to configure the error-log-logger plugin to send error logs to ClickHouse.

Start a sample ClickHouse server with the user default and an empty password:

docker run -d -p 8123:8123 -p 9000:9000 -p 9009:9009 --name clickhouse-server clickhouse/clickhouse-server
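Once the container is up, you can confirm the HTTP interface is reachable via ClickHouse's built-in ping endpoint:

```shell
# ClickHouse's HTTP interface answers /ping with "Ok." when ready.
curl "http://127.0.0.1:8123/ping"
```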

In the ClickHouse database default, create a table named default_logs with a data column. Note that the plugin expects to push logs into the data column.

curl "http://127.0.0.1:8123" -X POST -d '
  CREATE TABLE default.default_logs (
    data String,
    PRIMARY KEY(`data`)
  )
  ENGINE = MergeTree()
' --user default:
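To confirm the table was created as expected, you can describe it over the same HTTP interface:

```shell
# Should list the single "data" column of type String.
curl "http://127.0.0.1:8123" -d 'DESCRIBE TABLE default.default_logs' --user default:
```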

Configure the plugin metadata for error-log-logger:

curl "http://127.0.0.1:9180/apisix/admin/plugin_metadata/error-log-logger" -X PUT \
  -H "X-API-KEY: ${ADMIN_API_KEY}" \
  -d '{
    "clickhouse": {
      "endpoint_addr": "http://192.168.2.103:8123",
      "user": "default",
      "password": "",
      "database": "default",
      "logtable": "default_logs"
    },
    "level": "INFO"
  }'

❶ Replace with your ClickHouse server address.

❷ Set the user to default.

❸ Set the password to empty.

❹ Set the database to default.

❺ Set the table to default_logs.

❻ Set the severity level to INFO so that most logs are sent, making verification easier.

To verify, you can manually generate a warn-level log by reloading APISIX.

Send a request to ClickHouse to view the log entries:

echo 'SELECT * FROM default.default_logs FORMAT Pretty' | curl "http://127.0.0.1:8123/" -d @-

You should see a log entry similar to the following:

2025/01/27 08:21:13 [warn] 353#353: *106572 [lua] plugin.lua:205: load(): new plugins: {"client-control":true,"proxy-control":true,"request-id":true,"zipkin":true,"ext-plugin-pre-req":true,"fault-injection":true,"mocking":true,"serverless-pre-function":true,"cors":true,"ip-restriction":true,"ua-restriction":true,"referer-restriction":true,"csrf":true,"uri-blocker":true,"request-validation":true,"chaitin-waf":true,"multi-auth":true,"openid-connect":true,"authz-casbin":true,"authz-casdoor":true,"wolf-rbac":true,"ldap-auth":true,"hmac-auth":true,"basic-auth":true,"jwt-auth":true,"jwe-decrypt":true,"key-auth":true,"consumer-restriction":true,"attach-consumer-label":true,"forward-auth":true,"opa":true,"authz-keycloak":true,"proxy-cache":true,"body-transformer":true,"ai-prompt-template":true,"ai-prompt-decorator":true,"proxy-mirror":true,"proxy-rewrite":true,"workflow":true,"api-breaker":true,"ai-proxy":true,"limit-conn":true,"limit-count":true,"limit-req":true,"gzip":true,"server-info":true,"traffic-split":true,"response-rewrite":true,"degraphql":true,"kafka-proxy":true,"grpc-transcode":true,"grpc-web":true,"http-dubbo":true,"public-api":true,"error-log-logger":true,"google-cloud-logging":true,"sls-logger":true,"tcp-logger":true,"kafka-logger":true,"rocketmq-logger":true,"syslog":true,"udp-logger":true,"file-logger":true,"clickhouse-logger":true,"tencent-cloud-cls":true,"inspect":true,"example-plugin":true,"aws-lambda":true,"azure-functions":true,"openwhisk":true,"openfunction":true,"serverless-post-function":true,"ext-plugin-post-req":true,"ext-plugin-post-resp":true,"redirect":true,"skywalking-logger":true,"splunk-hec-logging":true,"http-logger":true,"loggly":true,"echo":true,"elasticsearch-logger":true,"cas-auth":true,"prometheus":true,"datadog":true,"loki-logger":true,"real-ip":true,"ai":true}, context: init_worker_by_lua*