feat(log): absolute-index optimization and enforced Reset safety contract (v1.1.13)

AI Engineer 2026-05-09 14:44:41 +08:00
parent 78d6addf4c
commit 03267710dc
19 changed files with 691 additions and 463 deletions


@@ -1,6 +1,18 @@
# Changelog
## [1.1.13] - 2026-05-09
- **Absolute-index optimization, zero gaps**:
  - Eliminated the index gaps between `BaseLog` and business fields. The layout is now: `BaseLog` occupies positions 0-5, while the standard message fields (`Info`, `Error`, etc.) and business log fields start at `pos: 6`.
  - **Smart shift mapping**: improved the handling of `pos >= 1000`. These fields (e.g. `Extra`, `CallStacks`) no longer produce sparse zero placeholders; instead they are shifted, in declared `pos` order, to immediately follow the type's largest absolute index. They stay "always at the end" while the array remains compact.
- **Enforced Reset safety contract**:
  - `RegisterType` now performs a strict check: if a custom log type declares fields but does not explicitly override `Reset()` (i.e. it only inherits `BaseLog.Reset`), registration triggers a **panic**. Developers must clear business data explicitly, eliminating stale-data hazards when objects are reused from the pool.
- **Better application-name detection**:
  - `GetDefaultName` now uses `runtime/debug.ReadBuildInfo()`, which identifies the application name defined by the Go module more reliably than reading the directory path.
- **Tooling**:
  - `serializer_test.go` now prints the raw log output, so developers can inspect the JSON array structure directly via `go test -v`.
## [1.1.12] - 2026-05-09
...
- **Stability**:
  - Fixed an intermittent failure of `TestSplitTag` at second-rollover boundaries. The test now checks both candidate time slots around log emission, closing the race condition caused by environment jitter.

README.md

@@ -9,7 +9,7 @@
* **Extreme performance**: built on a **Meta-Driven Positional Array** architecture. Logs land on disk as single-line JSON arrays (`[...]`), eliminating key redundancy and boxing overhead for a severalfold speedup.
* **Decoupled architecture**: metadata lives in an external `.log.meta.json`. The log package only does high-speed serialization; visualization is rendered dynamically by external tools or the `Viewable` interface from the metadata.
* **Zero-friction entry**: automatically detects environment context (application name, IP, etc.); no manual setup required.
* **Semantic masking**: built-in automatic masking of sensitive information (e.g. phone numbers, secrets).
* **Highly extensible**: multiple write channels (file splitting, Elasticsearch bulk transfer).
## 📦 Installation
@@ -23,14 +23,62 @@ go get apigo.cc/go/log
```go
import "apigo.cc/go/log"

// default logger (configured via log.json or environment variables)
func main() {
	log.Info("service started", "port", 8080)
	log.Error("database connection failed", "db", "mysql")

	// create a new logger instance carrying a traceId
	logger := log.New("trace-xyz-123")
	logger.Info("user login", "userId", 10086)
}
```
## ⚙️ Configuration
This package integrates deeply with `@go/config` and supports several configuration sources, from highest to lowest priority:
1. **Environment variables** (highest priority)
2. **Environment-specific files** (`env.json` / `env.yml`, nested under a `log:` key)
3. **Base configuration files** (`log.json` / `log.yml`)
### 1. Configuration file (`log.json`)
Create `log.json` or `log.yml` in the project root:
```json
{
  "name": "my-cool-app",
  "level": "info",
  "file": "logs/app.log",
  "splitTag": ".2006-01-02",
  "sensitive": "phone,password,secret,token,key"
}
```
### 2. Environment variables (highest priority)
Any setting can be overridden via an environment variable named `LOG_` + the field name:
```bash
# override the log level and output target
export LOG_LEVEL=debug
export LOG_FILE=console
```
### Configuration reference
* `name`: application name (defaults to `DISCOVER_APP`, or auto-detected from `go.mod`).
* `level`: log level (`debug`, `info`, `warning`, `error`).
* `file`: output target.
  * `console`: write to the console (default).
  * `path/to/file.log`: write to the given file.
  * `es://...` or `ess://...`: write to Elasticsearch.
* `splitTag`: file-splitting pattern; only effective when `file` is a file path.
  * Uses Go's standard `time.Format` layout, e.g. `".2006-01-02"` (split daily) or `".2006-01-02-15"` (split hourly).
* `truncations`: stack-trace truncation prefixes (comma-separated; defaults to `github.com/`, `golang.org/`, `/apigo.cc/`).
* `sensitive`: field names to mask automatically (comma-separated, case-insensitive); defaults to `phone,password,secret,token,key`.
## 🛠 API Guide
### Core features
@@ -39,54 +87,85 @@ logger.Error("数据库连接失败", "db", "mysql", "err", err)
* `Debug`, `Info`, `Warning`, `Error`: standard log methods taking a `message` plus variadic `extra` arguments.
2. **Generic logging (`Log`)**
* `Log(LogEntry)`: records a log with a custom structure.
3. **Standalone visualization tool (`logv`)**
* **Install**: `go install apigo.cc/go/log/logv@latest`
* **Use**: `tail -f app.log | logv` or `tail -f app.log | logv -json`
### Custom log types (contract)
To guarantee performance and memory safety, custom log types must follow these rules:
1. **Define the struct**
   * Must embed `log.BaseLog` (or a subtype such as `log.ErrorLog`).
   * **Index (`pos`) rules**:
     * Positions `0`-`5` belong to `BaseLog`.
     * Business fields are numbered compactly from `6` upward (`pos:6`, `pos:7`, ...). If a field is removed, leave its `pos` vacant to stay forward compatible.
     * When embedding `ErrorLog` or similar, start business fields at `7` (the parent's largest index + 1).
     * `Extra` always uses `pos:1000` and `CallStacks` `pos:1001`; both are automatically shifted to the end of the array.
2. **Implement `Reset()` (mandatory)**
   * You **must** override `Reset()` to initialize/clear business data.
   * `Reset()` must call the parent's `Reset()` first (e.g. `l.BaseLog.Reset()`).
   * **Safety net**: if `Reset` is not overridden, `RegisterType` **panics** at startup, preventing stale data during pool reuse.
   * **Tip**: allocate map/slice fields once with an initial capacity, then clear them with `clear()` to avoid repeated allocations.
3. **Register the model**
   * Call `log.RegisterType("my-type", MyLog{})` in `init()`.
#### Example: `DBErrorLog`
```go
package main

import "apigo.cc/go/log"

// 1. Define the struct (embedding ErrorLog, so business fields start at pos:7)
type DBErrorLog struct {
	log.ErrorLog // embedding ErrorLog provides the Error and CallStacks fields
	DB       string  `log:"pos:7,color:blue"`
	SQL      string  `log:"pos:8"`
	Args     []any   `log:"pos:9"`
	UsedTime float32 `log:"pos:10,color:cyan"`
}

// 2. Implement Reset() (mandatory)
func (l *DBErrorLog) Reset() {
	l.ErrorLog.Reset() // the parent's Reset must be called first
	l.DB = ""
	l.SQL = ""
	if l.Args == nil {
		l.Args = make([]any, 0, 10)
	} else {
		clear(l.Args)       // zero the contents
		l.Args = l.Args[:0] // drop the length
	}
	l.UsedTime = 0
}

// 3. Register
func init() {
	log.RegisterType("dbError", &DBErrorLog{})
}

// 4. Usage
func LogDBError(logger *log.Logger, db, sql string, args []any, err error, usedTime float32) {
	entry := log.GetEntry[DBErrorLog]()

	// fill the base and ErrorLog fields automatically
	logger.FillError(&entry.ErrorLog, err.Error())

	// fill the custom fields
	entry.DB = db
	entry.SQL = sql
	entry.Args = append(entry.Args, args...)
	entry.UsedTime = usedTime

	logger.Log(entry)
}
```
## 🧪 Verification status
All tests pass; asynchronous writes and performance meet targets.

TEST.md

@@ -9,23 +9,22 @@
| Test case | Iterations | Time (ns/op) | Memory (B/op) | Allocations (allocs/op) |
| :--- | :--- | :--- | :--- | :--- |
| `BenchmarkLogger_RequestLog_Realistic` | 344,300 | 3,338 | 1,331 | 19 |
| `BenchmarkLoggerInfo` | 291,952 | 4,083 | - | - |
| `BenchmarkLoggerAsyncConcurrent` | 784,453 | 1,466 | - | - |
## Version comparison
| Version | Mechanism | Storage format | Visualization | Performance (Async) |
| :--- | :--- | :--- | :--- | :--- |
| **v1.0.3** | Map serialization | JSON Object | Built-in | ~8,773 ns/op |
| **v1.1.7** | Dead Code Removal | JSON Array | Standalone tool/Meta | ~1,059 ns/op |
| **v1.1.10** | Stability & Infrastructure | JSON Array | Standalone tool/Meta | ~919 ns/op |
| **v1.1.11** | **Absolute Indexing (Schema)** | **Fixed Array** | **LogType Opt** | **~1,466 ns/op** |
## Summary
- **Schema compatibility**: v1.1.11 implements absolute `pos` indexing. Serialization cost rises slightly (~1.4 µs) due to array sparsification (zero padding), in exchange for strong schema stability that suits data-warehouse ingestion.
- **Observability**: added `droppedLogs` monitoring, removing the "black box" around dropped logs under high concurrency.
- **Robustness**: switched to the UDP-dial method for IP detection, eliminating friction in complex networks such as K8s.


@@ -77,9 +77,6 @@ func NewESWriter(conf *Config) Writer {
}

func (w *ESWriter) Log(entry LogEntry, data []byte) {
	objBytes, err := cast.ToJSONBytes(entry)
	if err != nil || len(objBytes) == 0 {
		return

extra.go

@@ -1,157 +0,0 @@
package log
// import (
// "apigo.cc/go/cast"
// )
// type RequestLog struct {
// BaseLog
// ServerId string
// App string
// Node string
// ClientIp string
// FromApp string
// FromNode string
// UserId string
// DeviceId string
// ClientAppName string
// ClientAppVersion string
// SessionId string
// RequestId string
// Host string
// Scheme string
// Proto string
// AuthLevel int
// Priority int
// Method string
// Path string
// RequestHeaders map[string]string
// RequestData map[string]any
// UsedTime float32
// ResponseCode int
// ResponseHeaders map[string]string
// ResponseDataLength uint
// ResponseData string
// }
// func (logger *Logger) Request(
// method, path, host, scheme, proto string,
// clientIp, serverId, app, node string,
// fromApp, fromNode string,
// userId, deviceId, sessionId, requestId string,
// clientAppName, clientAppVersion string,
// authLevel, priority int,
// reqHeaders map[string]string,
// reqData map[string]any,
// responseCode int,
// usedTime float32,
// respHeaders map[string]string,
// responseData string,
// responseDataLength uint,
// extra ...any,
// ) {
// if !logger.CheckLevel(INFO) {
// return
// }
// entry := GetEntry[RequestLog]()
// logger.FillBase(&entry.BaseLog, LogTypeRequest)
// // flat field assignment; extremely fast
// entry.Method = method
// entry.Path = path
// entry.Host = host
// entry.Scheme = scheme
// entry.Proto = proto
// entry.ClientIp = clientIp
// entry.ServerId = serverId
// entry.App = app
// entry.Node = node
// entry.FromApp = fromApp
// entry.FromNode = fromNode
// entry.UserId = userId
// entry.DeviceId = deviceId
// entry.SessionId = sessionId
// entry.RequestId = requestId
// entry.ClientAppName = clientAppName
// entry.ClientAppVersion = clientAppVersion
// entry.AuthLevel = authLevel
// entry.Priority = priority
// entry.RequestHeaders = reqHeaders
// entry.RequestData = reqData
// entry.ResponseCode = responseCode
// entry.UsedTime = usedTime
// entry.ResponseHeaders = respHeaders
// entry.ResponseData = responseData
// entry.ResponseDataLength = responseDataLength
// if len(extra) > 0 {
// cast.FillMap(&entry.Extra, extra)
// }
// logger.Log(entry)
// }
// type TaskLog struct {
// BaseLog
// Task string
// UsedTime float32
// Success bool
// Message string
// }
// type MonitorLog struct {
// BaseLog
// Target string
// Status int
// Message string
// }
// type StatisticLog struct {
// BaseLog
// Category string
// Item string
// Value float64
// }
// func (logger *Logger) Task(taskName string, usedTime float32, success bool, message string, extra ...any) {
// if logger.CheckLevel(INFO) {
// entry := GetEntry[TaskLog]()
// logger.FillBase(&entry.BaseLog, LogTypeTask)
// entry.Task = taskName
// entry.UsedTime = usedTime
// entry.Success = success
// entry.Message = message
// if len(extra) > 0 {
// cast.FillMap(&entry.Extra, extra)
// }
// logger.Log(entry)
// }
// }
// func (logger *Logger) Monitor(target string, status int, message string, extra ...any) {
// if logger.CheckLevel(INFO) {
// entry := GetEntry[MonitorLog]()
// logger.FillBase(&entry.BaseLog, LogTypeMonitor)
// entry.Target = target
// entry.Status = status
// entry.Message = message
// if len(extra) > 0 {
// cast.FillMap(&entry.Extra, extra)
// }
// logger.Log(entry)
// }
// }
// func (logger *Logger) Statistic(category, item string, value float64, extra ...any) {
// if logger.CheckLevel(INFO) {
// entry := GetEntry[StatisticLog]()
// logger.FillBase(&entry.BaseLog, LogTypeStatistic)
// entry.Category = category
// entry.Item = item
// entry.Value = value
// if len(extra) > 0 {
// cast.FillMap(&entry.Extra, extra)
// }
// logger.Log(entry)
// }
// }

extra_example.go

@@ -0,0 +1,224 @@
package log
// type RequestLog struct {
// BaseLog
// ServerId string `log:"pos:6"`
// App string `log:"pos:7"`
// Node string `log:"pos:8"`
// ClientIp string `log:"pos:9"`
// FromApp string `log:"pos:10"`
// FromNode string `log:"pos:11"`
// UserId string `log:"pos:12"`
// DeviceId string `log:"pos:13"`
// ClientAppName string `log:"pos:14"`
// ClientAppVersion string `log:"pos:15"`
// SessionId string `log:"pos:16"`
// RequestId string `log:"pos:17"`
// Host string `log:"pos:18"`
// Scheme string `log:"pos:19"`
// Proto string `log:"pos:20"`
// AuthLevel int `log:"pos:21"`
// Priority int `log:"pos:22"`
// Method string `log:"pos:23"`
// Path string `log:"pos:24"`
// RequestHeaders map[string]string `log:"pos:25"`
// RequestData map[string]any `log:"pos:26"`
// UsedTime float32 `log:"pos:27"`
// ResponseCode int `log:"pos:28"`
// ResponseHeaders map[string]string `log:"pos:29"`
// ResponseDataLength uint `log:"pos:30"`
// ResponseData string `log:"pos:31"`
// }
// func (l *RequestLog) Reset() {
// l.BaseLog.Reset()
// l.ServerId = ""
// l.App = ""
// l.Node = ""
// l.ClientIp = ""
// l.FromApp = ""
// l.FromNode = ""
// l.UserId = ""
// l.DeviceId = ""
// l.ClientAppName = ""
// l.ClientAppVersion = ""
// l.SessionId = ""
// l.RequestId = ""
// l.Host = ""
// l.Scheme = ""
// l.Proto = ""
// l.AuthLevel = 0
// l.Priority = 0
// l.Method = ""
// l.Path = ""
// if l.RequestHeaders == nil {
// l.RequestHeaders = make(map[string]string, 8)
// } else {
// clear(l.RequestHeaders)
// }
// if l.RequestData == nil {
// l.RequestData = make(map[string]any, 8)
// } else {
// clear(l.RequestData)
// }
// l.UsedTime = 0
// l.ResponseCode = 0
// if l.ResponseHeaders == nil {
// l.ResponseHeaders = make(map[string]string, 8)
// } else {
// clear(l.ResponseHeaders)
// }
// l.ResponseDataLength = 0
// l.ResponseData = ""
// }
// func (logger *Logger) Request(
// method, path, host, scheme, proto string,
// clientIp, serverId, app, node string,
// fromApp, fromNode string,
// userId, deviceId, sessionId, requestId string,
// clientAppName, clientAppVersion string,
// authLevel, priority int,
// reqHeaders map[string]string,
// reqData map[string]any,
// responseCode int,
// usedTime float32,
// respHeaders map[string]string,
// responseData string,
// responseDataLength uint,
// extra ...any,
// ) {
// if !logger.CheckLevel(INFO) {
// return
// }
// entry := GetEntry[RequestLog]()
// logger.FillBase(&entry.BaseLog, LogTypeRequest)
// // flat field assignment; extremely fast
// entry.Method = method
// entry.Path = path
// entry.Host = host
// entry.Scheme = scheme
// entry.Proto = proto
// entry.ClientIp = clientIp
// entry.ServerId = serverId
// entry.App = app
// entry.Node = node
// entry.FromApp = fromApp
// entry.FromNode = fromNode
// entry.UserId = userId
// entry.DeviceId = deviceId
// entry.SessionId = sessionId
// entry.RequestId = requestId
// entry.ClientAppName = clientAppName
// entry.ClientAppVersion = clientAppVersion
// entry.AuthLevel = authLevel
// entry.Priority = priority
// entry.RequestHeaders = reqHeaders
// entry.RequestData = reqData
// entry.ResponseCode = responseCode
// entry.UsedTime = usedTime
// entry.ResponseHeaders = respHeaders
// entry.ResponseData = responseData
// entry.ResponseDataLength = responseDataLength
// if len(extra) > 0 {
// cast.FillMap(&entry.Extra, extra)
// }
// logger.Log(entry)
// }
// type TaskLog struct {
// BaseLog
// Task string `log:"pos:6"`
// UsedTime float32 `log:"pos:7"`
// Success bool `log:"pos:8"`
// Message string `log:"pos:9"`
// }
// func (l *TaskLog) Reset() {
// l.BaseLog.Reset()
// l.Task = ""
// l.UsedTime = 0
// l.Success = false
// l.Message = ""
// }
// type MonitorLog struct {
// BaseLog
// Target string `log:"pos:6"`
// Status int `log:"pos:7"`
// Message string `log:"pos:8"`
// }
// func (l *MonitorLog) Reset() {
// l.BaseLog.Reset()
// l.Target = ""
// l.Status = 0
// l.Message = ""
// }
// type StatisticLog struct {
// BaseLog
// Category string `log:"pos:6"`
// Item string `log:"pos:7"`
// Value float64 `log:"pos:8"`
// }
// func (l *StatisticLog) Reset() {
// l.BaseLog.Reset()
// l.Category = ""
// l.Item = ""
// l.Value = 0
// }
// func (logger *Logger) Task(taskName string, usedTime float32, success bool, message string, extra ...any) {
// if logger.CheckLevel(INFO) {
// entry := GetEntry[TaskLog]()
// logger.FillBase(&entry.BaseLog, LogTypeTask)
// entry.Task = taskName
// entry.UsedTime = usedTime
// entry.Success = success
// entry.Message = message
// if len(extra) > 0 {
// cast.FillMap(&entry.Extra, extra)
// }
// logger.Log(entry)
// }
// }
// func (logger *Logger) Monitor(target string, status int, message string, extra ...any) {
// if logger.CheckLevel(INFO) {
// entry := GetEntry[MonitorLog]()
// logger.FillBase(&entry.BaseLog, LogTypeMonitor)
// entry.Target = target
// entry.Status = status
// entry.Message = message
// if len(extra) > 0 {
// cast.FillMap(&entry.Extra, extra)
// }
// logger.Log(entry)
// }
// }
// func (logger *Logger) Statistic(category, item string, value float64, extra ...any) {
// if logger.CheckLevel(INFO) {
// entry := GetEntry[StatisticLog]()
// logger.FillBase(&entry.BaseLog, LogTypeStatistic)
// entry.Category = category
// entry.Item = item
// entry.Value = value
// if len(extra) > 0 {
// cast.FillMap(&entry.Extra, extra)
// }
// logger.Log(entry)
// }
// }
// func init() {
// RegisterType(LogTypeRequest, &RequestLog{})
// RegisterType(LogTypeTask, &TaskLog{})
// RegisterType(LogTypeMonitor, &MonitorLog{})
// RegisterType(LogTypeStatistic, &StatisticLog{})
// }

go.mod

@@ -6,6 +6,7 @@ require (
	apigo.cc/go/cast v1.2.8
	apigo.cc/go/config v1.0.6
	apigo.cc/go/file v1.0.6
	apigo.cc/go/id v1.0.5
	apigo.cc/go/shell v1.0.5
)
@@ -17,17 +18,3 @@ require (
	golang.org/x/sys v0.43.0 // indirect
	gopkg.in/yaml.v3 v3.0.1 // indirect
)
replace apigo.cc/go/cast => ../cast
replace apigo.cc/go/config => ../config
replace apigo.cc/go/shell => ../shell
replace apigo.cc/go/file => ../file
replace apigo.cc/go/encoding => ../encoding
replace apigo.cc/go/safe => ../safe
replace apigo.cc/go/rand => ../rand

go.sum

@@ -1,3 +1,19 @@
apigo.cc/go/cast v1.2.8 h1:plb676DH2TjYljzf8OEMGT6lIhmZ/xaxEFfs0kDOiSI=
apigo.cc/go/cast v1.2.8/go.mod h1:lGlwImiOvHxG7buyMWhFzcdvQzmSaoKbmr7bcDfUpHk=
apigo.cc/go/config v1.0.6 h1:32nOCr+8AkGFnKuythCjHPOjxilg6SOlSWXKTkNtx6I=
apigo.cc/go/config v1.0.6/go.mod h1:nX+nLKZTP6Xton9Gt/9XsTh0d1sQ+Qkwysgyjq/k4R0=
apigo.cc/go/encoding v1.0.5 h1:a2XbXyd8D2gKo1ekXn/pt5adltWbIfdJCMhaF2uvzF0=
apigo.cc/go/encoding v1.0.5/go.mod h1:V5CgT7rBbCxy+uCU20q0ptcNNRSgMtpA8cNOs6r8IeI=
apigo.cc/go/file v1.0.6 h1:kyrPJ+oqC0DtYubX2aI+3QIVoDAPkRiYyBwd1F0cBlA=
apigo.cc/go/file v1.0.6/go.mod h1:AOw8+3q1fmCZpBWpBfUSSb+Q6Li3W9jH1EktQXmFhVg=
apigo.cc/go/id v1.0.5 h1:23YkR7oklSA69gthYlu8zl/kpIkeIoEYxi1f1Sz5l3A=
apigo.cc/go/id v1.0.5/go.mod h1:ZaYLIyrJvkf3j7J8a0lnKywSAHljaczWxU0x2HmQDzg=
apigo.cc/go/rand v1.0.5 h1:AkUoWr0SELgeDmRjLEDjOIp29nXdzqQQvmGRIHpTN7U=
apigo.cc/go/rand v1.0.5/go.mod h1:mZ/4Soa3bk+XvDaqPWJuUe1bfEi4eThBj1XmEAuYxsk=
apigo.cc/go/safe v1.0.5 h1:yZJLhpMntJrtqU/ev0UlyOoHu/cLrnnGUO4aHyIZcwE=
apigo.cc/go/safe v1.0.5/go.mod h1:i9xnh7reJIFPauLnlzuIDgvrQvhjxpFlpVh3O6ulWd0=
apigo.cc/go/shell v1.0.5 h1:bmvUTJGe1GwsHAy42v3iaoK40PoBC7Xq1aMCYxUZmtg=
apigo.cc/go/shell v1.0.5/go.mod h1:sx/nYw5CihHWmo5JHkaZUbmMYXNHx8swzArbQCUGHjc=
github.com/kr/pretty v0.3.0 h1:WgNl7dwNpEZ6jJ9k1snq4pZsg7DOEN8hP9Xw0Tsjwk0=
github.com/kr/pretty v0.3.0/go.mod h1:640gp4NfQd8pI5XOwp5fnNeVWj67G7CFk/SaSQn7NBk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=


@@ -8,6 +8,7 @@ import (
	"time"

	"apigo.cc/go/cast"
	"apigo.cc/go/id"
)

type Logger struct {
@@ -47,6 +48,7 @@ func NewLogger(conf Config) *Logger {
	logger := Logger{
		truncations: cast.Split(conf.Truncations, ","),
		traceId:     id.MakeID(10),
	}
	if len(conf.Sensitive) > 0 {

meta.go

@@ -36,6 +36,26 @@ func LoadMeta(path string) error {
// RegisterType registers a log model's metadata into the global registry.
// logType is the string identifier (e.g. "info", "error").
func RegisterType(logType string, model any) {
	t := reflect.TypeOf(model)
	if t.Kind() == reflect.Ptr {
		t = t.Elem()
	}
	// Enforce an explicitly implemented Reset method (guards against
	// embedding BaseLog and forgetting to reset business fields).
	if t.Kind() == reflect.Struct {
		ptrType := reflect.PointerTo(t)
		method, ok := ptrType.MethodByName("Reset")
		if !ok {
			panic("log model must implement Reset() method: " + t.Name())
		}
		// Check that the method belongs to this type rather than being
		// inherited, unoverridden, from BaseLog.
		baseResetMethod, _ := reflect.PointerTo(reflect.TypeOf(BaseLog{})).MethodByName("Reset")
		if method.Func.Pointer() == baseResetMethod.Func.Pointer() {
			panic("log model must override Reset() method to clear its own fields: " + t.Name())
		}
	}
	fields := extractMetaFields(model)
	metaLock.Lock()
@@ -52,7 +72,7 @@ func GetMeta(logType string) []MetaField {
	return metaRegistry[logType]
}
// fieldInfo is used internally for storing fields with their absolute position.
type fieldInfo struct {
	field reflect.StructField
	pos   int
@@ -68,46 +88,57 @@ func extractMetaFields(model any) []MetaField {
		return nil
	}
	var flatFields []fieldInfo
	flattenStructFields(t, &flatFields, nil)

	// Determine final indices
	maxLiteralPos := -1
	var highPosFields []fieldInfo
	for _, f := range flatFields {
		if f.pos < 1000 {
			if f.pos > maxLiteralPos {
				maxLiteralPos = f.pos
			}
		} else {
			highPosFields = append(highPosFields, f)
		}
	}

	// Sort high pos fields by their pos
	sort.Slice(highPosFields, func(i, j int) bool {
		return highPosFields[i].pos < highPosFields[j].pos
	})

	// Assign real indices to high pos fields
	finalPosMap := make(map[string]int)
	for _, f := range flatFields {
		if f.pos < 1000 {
			finalPosMap[f.field.Name] = f.pos
		}
	}
	nextPos := maxLiteralPos + 1
	for _, f := range highPosFields {
		finalPosMap[f.field.Name] = nextPos
		nextPos++
	}

	maxPos := nextPos - 1
	metaFields := make([]MetaField, maxPos+1)
	// Initialize with empty MetaFields having Index set
	for i := range metaFields {
		metaFields[i] = MetaField{Index: i}
	}
	for _, f := range flatFields {
		tag := f.field.Tag.Get("log")
		if tag == "-" {
			continue
		}
		realPos := finalPosMap[f.field.Name]
		meta := MetaField{
			Index: realPos,
			Name:  f.field.Name,
		}
		if tag != "" {
@@ -132,38 +163,31 @@
		}
		// Apply some default visual rules if not specified
		// LogType shouldn't show the key in standard console
		if f.field.Name == "LogType" && meta.Color == "" {
			meta.WithoutKey = true
		}
		metaFields[realPos] = meta
	}
	return metaFields
}

func flattenStructFields(t reflect.Type, result *[]fieldInfo, parentIndex []int) {
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		if !f.IsExported() && !f.Anonymous {
			continue
		}
		pos := 10 + i // default position if not specified
		tag := f.Tag.Get("log")
		if tag != "" {
			parts := strings.Split(tag, ",")
			for _, part := range parts {
				kv := strings.SplitN(part, ":", 2)
				if len(kv) == 2 && strings.TrimSpace(kv[0]) == "pos" {
					if p := cast.To[int](strings.TrimSpace(kv[1])); p >= 0 {
						pos = p
					}
				}
			}
@@ -176,24 +200,14 @@ func flattenStructFields(t reflect.Type, result *[]reflect.StructField, parentIn
		fullIndex = append(fullIndex, i)
		f.Index = fullIndex
		if f.Anonymous && f.Type.Kind() == reflect.Struct {
			flattenStructFields(f.Type, result, f.Index)
		} else {
			*result = append(*result, fieldInfo{
				field: f,
				pos:   pos,
			})
		}
	}
}


@@ -7,21 +7,42 @@
)

type MockBaseLog struct {
	BaseField1 string `log:"pos:0,color:red"`
	BaseField2 int    `log:"pos:1,withoutkey:true"`
}

func (b *MockBaseLog) Reset() {
	b.BaseField1 = ""
	b.BaseField2 = 0
}

func (b *MockBaseLog) IsLogEntry() bool     { return true }
func (b *MockBaseLog) GetBaseLog() *BaseLog { return &BaseLog{} }

type MockInfoLog struct {
	MockBaseLog
	Message string         `log:"pos:2"`
	Extra   map[string]any `log:"pos:1000"`
}

func (l *MockInfoLog) Reset() {
	l.MockBaseLog.Reset()
	l.Message = ""
	clear(l.Extra)
}

type MockErrorLog struct {
	MockBaseLog
	Error      string         `log:"pos:2,color:red"`
	CallStacks []string       `log:"pos:1001"`
	Extra      map[string]any `log:"pos:1000"`
}

func (l *MockErrorLog) Reset() {
	l.MockBaseLog.Reset()
	l.Error = ""
	l.CallStacks = l.CallStacks[:0]
	clear(l.Extra)
}

func TestMetaExtraction(t *testing.T) {
@@ -33,36 +54,38 @@ func TestMetaExtraction(t *testing.T) {
	RegisterType("mock_error", MockErrorLog{})

	infoMeta := GetMeta("mock_info")
	// Index 0, 1, 2 are used; Extra gets max(2)+1 = 3. Total size 4.
	if len(infoMeta) != 4 {
		t.Fatalf("expected 4 fields for mock_info, got %d", len(infoMeta))
	}
	if infoMeta[0].Name != "BaseField1" || infoMeta[0].Color != "red" {
		t.Errorf("unexpected meta for BaseField1 at index 0: %+v", infoMeta[0])
	}
	if infoMeta[1].Name != "BaseField2" || infoMeta[1].WithoutKey != true {
		t.Errorf("unexpected meta for BaseField2 at index 1: %+v", infoMeta[1])
	}
	if infoMeta[2].Name != "Message" {
		t.Errorf("unexpected meta for Message at index 2: %+v", infoMeta[2])
	}
	if infoMeta[3].Name != "Extra" {
		t.Errorf("unexpected meta for Extra at index 3: %+v", infoMeta[3])
	}

	errorMeta := GetMeta("mock_error")
	// Indices: 0, 1, 2, Extra(3), CallStacks(4). Total size 5.
	if len(errorMeta) != 5 {
		t.Fatalf("expected 5 fields for mock_error, got %d", len(errorMeta))
	}
	if errorMeta[2].Name != "Error" || errorMeta[2].Color != "red" {
		t.Errorf("unexpected meta for Error at index 2: %+v", errorMeta[2])
	}
	if errorMeta[3].Name != "Extra" {
		t.Errorf("unexpected meta for Extra at index 3: %+v", errorMeta[3])
	}
	if errorMeta[4].Name != "CallStacks" {
		t.Errorf("unexpected meta for CallStacks at index 4: %+v", errorMeta[4])
	}

	// Verify file was created and contains correct data

pool.go

@@ -27,86 +27,12 @@ func GetEntry[T any]() *T {
		})
	}
	entry := p.(*sync.Pool).Get().(*T)
	if le, ok := any(entry).(LogEntry); ok {
		le.Reset()
	}
	return entry
}
// ResetLogEntry resets all fields of a log object via reflection,
// in particular initializing maps and slices to length 0, capacity 8
func ResetLogEntry(v any) {
rv := reflect.ValueOf(v)
if rv.Kind() != reflect.Ptr || rv.IsNil() {
return
}
t := rv.Type()
resetFunc, ok := resetCache.Load(t)
if !ok {
resetFunc = buildResetFunc(t.Elem())
resetCache.Store(t, resetFunc)
}
resetFunc.(func(reflect.Value))(rv.Elem())
}
func buildResetFunc(t reflect.Type) func(reflect.Value) {
var funcs []func(reflect.Value)
for i := 0; i < t.NumField(); i++ {
field := t.Field(i)
fieldIdx := i
switch field.Type.Kind() {
case reflect.String:
funcs = append(funcs, func(rv reflect.Value) { rv.Field(fieldIdx).SetString("") })
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
funcs = append(funcs, func(rv reflect.Value) { rv.Field(fieldIdx).SetInt(0) })
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
funcs = append(funcs, func(rv reflect.Value) { rv.Field(fieldIdx).SetUint(0) })
case reflect.Float32, reflect.Float64:
funcs = append(funcs, func(rv reflect.Value) { rv.Field(fieldIdx).SetFloat(0) })
case reflect.Bool:
funcs = append(funcs, func(rv reflect.Value) { rv.Field(fieldIdx).SetBool(false) })
case reflect.Map:
funcs = append(funcs, func(rv reflect.Value) {
f := rv.Field(fieldIdx)
if f.IsNil() {
f.Set(reflect.MakeMapWithSize(f.Type(), 8))
} else {
f.Clear()
}
})
case reflect.Slice:
funcs = append(funcs, func(rv reflect.Value) {
f := rv.Field(fieldIdx)
if f.Cap() < 8 {
f.Set(reflect.MakeSlice(f.Type(), 0, 8))
} else {
f.SetLen(0)
}
})
case reflect.Struct:
subReset := buildResetFunc(field.Type)
funcs = append(funcs, func(rv reflect.Value) {
subReset(rv.Field(fieldIdx))
})
case reflect.Ptr, reflect.Interface:
zero := reflect.Zero(field.Type)
funcs = append(funcs, func(rv reflect.Value) {
rv.Field(fieldIdx).Set(zero)
})
}
}
return func(rv reflect.Value) {
for _, f := range funcs {
f(rv)
}
}
}
func resetStruct(rv reflect.Value) {
// 已经不再直接调用,保留 buildResetFunc 逻辑即可
}
// PutEntry 将日志对象归还到池中 // PutEntry 将日志对象归还到池中
func PutEntry(entry any) { func PutEntry(entry any) {
t := reflect.TypeOf(entry) t := reflect.TypeOf(entry)

View File

@ -11,6 +11,12 @@ type MockRequestLog struct {
	UsedTime  float32
}

func (l *MockRequestLog) Reset() {
	l.BaseLog.Reset()
	l.RequestId = ""
	l.UsedTime = 0
}

func TestWithEntry(t *testing.T) {
	WithEntry(func(entry *MockRequestLog) {
		entry.RequestId = "with-entry-id"

View File

@ -3,6 +3,7 @@ package log
import (
	"bytes"
	"reflect"
	"sort"
	"strconv"

	"apigo.cc/go/cast"
@ -39,43 +40,50 @@ func getAccessors(logType string, model any) []fieldAccessor {
		t = t.Elem()
	}
	var flatFields []fieldInfo
	flattenStructFields(t, &flatFields, nil)
	// Determine final indices (must match meta.go)
	maxLiteralPos := -1
	var highPosFields []fieldInfo
	for _, f := range flatFields {
		if f.pos < 1000 {
			if f.pos > maxLiteralPos {
				maxLiteralPos = f.pos
			}
		} else {
			highPosFields = append(highPosFields, f)
		}
	}
	// Sort high-pos fields by their declared pos
	sort.Slice(highPosFields, func(i, j int) bool {
		return highPosFields[i].pos < highPosFields[j].pos
	})
	finalPosMap := make(map[string]int)
	for _, f := range flatFields {
		if f.pos < 1000 {
			finalPosMap[f.field.Name] = f.pos
		}
	}
	nextPos := maxLiteralPos + 1
	for _, f := range highPosFields {
		finalPosMap[f.field.Name] = nextPos
		nextPos++
	}
	maxPos := nextPos - 1
	accessors := make([]fieldAccessor, maxPos+1)
	for _, f := range flatFields {
		if f.field.Tag.Get("log") == "-" {
			continue
		}
		realPos := finalPosMap[f.field.Name]
		accessors[realPos] = fieldAccessor{
			indexPath: f.field.Index,
			name:      f.field.Name,
		}
	}
	accessorsCache[logType] = accessors
@ -109,6 +117,11 @@ func ToArrayBytes(entry LogEntry, sensitiveKeys []string) []byte {
			buf.WriteByte(',')
		}
		if acc.indexPath == nil {
			buf.WriteByte('0')
			continue
		}
		fv := v.FieldByIndex(acc.indexPath)
		writeValue(&buf, fv, acc.name, sensitiveKeys)
	}

View File

@ -6,10 +6,10 @@ import (
)

type SerializerMockBaseLog struct {
	LogName string `log:"pos:0"`
	LogType string `log:"pos:1"`
	LogTime int64  `log:"pos:2"`
	TraceId string `log:"pos:3"`
}
func (b *SerializerMockBaseLog) IsLogEntry() bool {
@ -17,15 +17,30 @@ func (b *SerializerMockBaseLog) IsLogEntry() bool {
}

func (b *SerializerMockBaseLog) GetBaseLog() *BaseLog {
	return &BaseLog{LogType: b.LogType}
}

func (b *SerializerMockBaseLog) Reset() {
	b.LogName = ""
	b.LogType = ""
	b.LogTime = 0
	b.TraceId = ""
}

type SerializerMockInfoLog struct {
	SerializerMockBaseLog
	Message string         `log:"pos:4"`
	Extra   map[string]any `log:"pos:1000"`
}

func (l *SerializerMockInfoLog) Reset() {
	l.SerializerMockBaseLog.Reset()
	l.Message = ""
	if l.Extra == nil {
		l.Extra = make(map[string]any, 8)
	} else {
		clear(l.Extra)
	}
}

func TestToArrayBytes(t *testing.T) {
@ -46,14 +61,15 @@ func TestToArrayBytes(t *testing.T) {
	bytes := ToArrayBytes(entry, nil)
	str := string(bytes)
	t.Logf("Raw log: %s", str)
	// Expect format: ["test-app","mock_info_test",1620000000,"abc-123","Hello, World!",{"user_id":42}]
	var arr []any
	err := json.Unmarshal(bytes, &arr)
	if err != nil {
		t.Fatalf("failed to unmarshal generated array: %v, raw: %s", err, str)
	}
	// Indices: 0, 1, 2, 3, 4, 1000 (mapped to 5). Total size 6.
	if len(arr) != 6 {
		t.Fatalf("expected 6 elements, got %d. raw: %s", len(arr), str)
	}
@ -77,7 +93,7 @@ func TestToArrayBytes(t *testing.T) {
	extraMap, ok := arr[5].(map[string]any)
	if !ok {
		t.Fatalf("expected arr[5] to be map[string]any, got %T (value: %v)", arr[5], arr[5])
	}
	if extraMap["user_id"] != float64(42) {
		t.Errorf("expected extraMap['user_id'] == 42, got %v", extraMap["user_id"])

View File

@ -23,16 +23,17 @@ const LogEnvSensitive = "LOG_SENSITIVE"
type LogEntry interface {
	IsLogEntry() bool
	GetBaseLog() *BaseLog
	Reset()
}

type BaseLog struct {
	LogName string         `log:"pos:0,color:cyan,hide:true"`
	LogType string         `log:"pos:1,color:magenta,hide:true"`
	LogTime int64          `log:"pos:2,format:time"`
	TraceId string         `log:"pos:3,color:blue"`
	Image   string         `log:"pos:4,color:darkGray,hide:true"`
	Server  string         `log:"pos:5,color:darkGray,hide:true"`
	Extra   map[string]any `log:"pos:1000"`
}
func (b *BaseLog) IsLogEntry() bool {
@ -43,31 +44,67 @@ func (b *BaseLog) GetBaseLog() *BaseLog {
	return b
}

func (b *BaseLog) Reset() {
	b.LogName = ""
	b.LogType = ""
	b.LogTime = 0
	b.TraceId = ""
	b.Image = ""
	b.Server = ""
	if b.Extra == nil {
		b.Extra = make(map[string]any, 8)
	} else {
		clear(b.Extra)
	}
}
type DebugLog struct {
	BaseLog
	Debug string `log:"pos:6,withoutkey:true"` // white
}

func (l *DebugLog) Reset() {
	l.BaseLog.Reset()
	l.Debug = ""
}

type InfoLog struct {
	BaseLog
	Info string `log:"pos:6,color:cyan,withoutkey:true"`
}

func (l *InfoLog) Reset() {
	l.BaseLog.Reset()
	l.Info = ""
}

type WarningLog struct {
	BaseLog
	Warning    string   `log:"pos:6,color:yellow,withoutkey:true"`
	CallStacks []string `log:"pos:1001"`
}

func (l *WarningLog) Reset() {
	l.BaseLog.Reset()
	l.Warning = ""
	l.CallStacks = l.CallStacks[:0]
}

type ErrorLog struct {
	BaseLog
	Error      string   `log:"pos:6,color:red,withoutkey:true"`
	CallStacks []string `log:"pos:1001"`
}

func (l *ErrorLog) Reset() {
	l.BaseLog.Reset()
	l.Error = ""
	l.CallStacks = l.CallStacks[:0]
}

func init() {
	RegisterType(LogTypeDebug, &DebugLog{})
	RegisterType(LogTypeInfo, &InfoLog{})
	RegisterType(LogTypeWarning, &WarningLog{})
	RegisterType(LogTypeError, &ErrorLog{})
}

View File

@ -7,6 +7,7 @@ import (
"os" "os"
"path" "path"
"runtime" "runtime"
"runtime/debug"
"strings" "strings"
"time" "time"
@ -24,12 +25,21 @@ func init() {
	dockerImageName = os.Getenv("DOCKER_IMAGE_NAME")
	dockerImageTag = os.Getenv("DOCKER_IMAGE_TAG")
	serverName, _ = os.Hostname()
	// Resolve the real LAN IP via a UDP pseudo-dial to 8.8.8.8 (no packet is sent)
	conn, err := net.Dial("udp", "8.8.8.8:80")
	if err == nil {
		localAddr := conn.LocalAddr().(*net.UDPAddr)
		serverIp = localAddr.IP.String()
		_ = conn.Close()
	}
	if serverIp == "" {
		addrs, err := net.InterfaceAddrs()
		if err == nil {
			for _, a := range addrs {
				if an, ok := a.(*net.IPNet); ok {
					if an.IP.IsGlobalUnicast() {
						serverIp = an.IP.To4().String()
						break
					}
				}
@ -37,6 +47,7 @@ func init() {
			}
		}
	}
}
// MakeTime parses a nanosecond timestamp or an RFC3339 string
func MakeTime(v any) time.Time {
@ -188,13 +199,8 @@ func GetDefaultName() string {
		name = os.Getenv("discover_app")
	}
	if name == "" {
		if info, ok := debug.ReadBuildInfo(); ok && info.Path != "" && info.Path != "command-line-arguments" {
			name = path.Base(info.Path)
		}
	}
	if name == "" {

View File

@ -34,16 +34,20 @@ func Viewable(line string) string {
		return line
	}
	if len(arr) < 2 {
		return line
	}
	logType := ""
	if len(arr) > 2 {
		logType = cast.String(arr[2])
	}
	meta := GetMeta(logType)
	if len(meta) == 0 {
		logType = cast.String(arr[1])
		meta = GetMeta(logType)
	}
	if len(meta) == 0 {
		// Fallback rendering
		return fallbackRenderArray(arr)
@ -52,7 +56,7 @@ func Viewable(line string) string {
	var builder strings.Builder
	for i, v := range arr {
		if v == nil || cast.String(v) == "0" { // 0 is a gap placeholder
			continue
		}
		if i >= len(meta) {
@ -65,7 +69,7 @@ func Viewable(line string) string {
		m := meta[i]
		if m.Hide || m.Name == "" {
			continue
		}
@ -155,12 +159,20 @@ func ToJSON(line string) string {
		return line
	}
	if len(arr) < 2 {
		return line
	}
	logType := ""
	if len(arr) > 2 {
		logType = cast.String(arr[2])
	}
	meta := GetMeta(logType)
	if len(meta) == 0 {
		logType = cast.String(arr[1])
		meta = GetMeta(logType)
	}
	if len(meta) == 0 {
		return line
	}
@ -169,6 +181,9 @@ func ToJSON(line string) string {
	for i, v := range arr {
		if i < len(meta) {
			m := meta[i]
			if m.Name == "" {
				continue
			}
			if m.Name == "Extra" {
				if extraMap, ok := v.(map[string]any); ok {
					for k, ev := range extraMap {
@ -178,7 +193,7 @@ func ToJSON(line string) string {
			} else {
				result[m.Name] = v
			}
		} else if cast.String(v) != "0" {
			result[fmt.Sprintf("Extra%d", i)] = v
		}
	}

View File

@ -27,6 +27,7 @@ var (
	writerStopChan chan bool
	writers        atomic.Value // holds []Writer
	logChannel     chan logPayload
	droppedLogs    atomic.Uint64
)

// ConsoleWriter writes log lines to the console
@ -59,9 +60,21 @@ func WriteAsync(payload logPayload) {
	select {
	case logChannel <- payload:
	default:
		// Overloaded: count the drop instead of blocking
		dropped := droppedLogs.Add(1)
		if dropped%1000 == 1 {
			if DefaultLogger != nil {
				// Note: this can recurse into logging, but the select default guarantees no deadlock
				DefaultLogger.Error(fmt.Sprintf("log channel full, dropped %d logs", dropped))
			}
		}
	}
}
// GetDroppedLogs returns the number of logs dropped due to a full channel
func GetDroppedLogs() uint64 {
	return droppedLogs.Load()
}
// Start starts the writers
func Start() {