Compare commits


2 Commits
v1.0.0 ... main

11 changed files with 418 additions and 243 deletions

View File

@ -1,5 +1,18 @@
# Changelog
## [1.0.2] - 2026-05-04
- **Design optimization**: introduced the `ResetLogEntry` automated reset mechanism, which uses reflection plus caching to automatically initialize and clear log object fields (Maps/Slices get a default capacity of 8).
- **Interface simplification**: reduced the `LogEntry` interface to a marker interface, removing the redundant hand-written `Base()` and `Reset()` implementations.
- **Extensibility**: the `Task`, `Monitor`, `Statistic`, and `DB` shortcut methods all accept variadic `extra ...any` parameters, with automatic conversion via `cast.ToMap`.
- **Build fix**: fixed the `convert` module's compatibility with the new `cast` API.
## [1.0.1] - 2026-05-04
- **Struct enhancement**: added `Error` and `CallStacks` fields to the `DBLog` struct, improving database error diagnostics.
- **DB method refactor**: `Logger.DB` now accepts an optional error argument, automatically switching to the `dbError` log type and recording the call stack.
- **Extended log support**: added the standard `TaskLog`, `MonitorLog`, and `StatisticLog` structs and their `Logger` shortcut methods, placed in `extra.go`.
- **RequestLog wrapper**: added a `Request` method on `Logger` to simplify request logging.
- **Call-stack fix**: improved the `getCallStacks` logic to correctly capture call sites in business and test code while filtering out the logging library's internal frames.
## [1.0.0] - 2026-05-02
- **Initial release**: migrated from `ssgo/log` and rebuilt on the `apigo.cc/go` standard.
- **High-performance engine**: introduced `LogEntry` pooling with `sync.Pool` reuse, supporting zero-allocation log objects.
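As background for the pooling claim above, the `sync.Pool` reuse pattern looks roughly like this. This is a minimal sketch with a hypothetical `infoEntry` type, not the library's actual structs:

```go
package main

import (
	"fmt"
	"sync"
)

// infoEntry is a hypothetical minimal log entry used only to
// illustrate the pooling idea; the real library pools richer structs.
type infoEntry struct {
	Message string
	Extra   map[string]any
}

var entryPool = sync.Pool{New: func() any {
	return &infoEntry{Extra: make(map[string]any, 8)}
}}

// getEntry borrows an entry from the pool and resets it, mirroring
// the library's "always hand out a clean object" guarantee.
func getEntry() *infoEntry {
	e := entryPool.Get().(*infoEntry)
	e.Message = ""
	clear(e.Extra) // Go 1.21+: empties the map while keeping its capacity
	return e
}

func main() {
	e := getEntry()
	e.Message = "server started"
	e.Extra["port"] = 8080
	fmt.Println(e.Message, e.Extra["port"]) // server started 8080
	entryPool.Put(e) // return for reuse instead of letting the GC collect it
}
```

In steady state, `Get` returns recycled objects whose maps are already allocated, which is how the log path can avoid new allocations.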

View File

@ -4,31 +4,62 @@
## Features
- **Zero friction**: automatically picks up the app name, IP, and similar info from environment variables.
- **High performance**: asynchronous writes with object pooling and batched flushing.
- **Automation**: custom log types only need to embed `BaseLog`; no manual reset logic required.
- **Desensitization**: built-in sensitive-field filtering and regex-based masking.
- **Multi-channel**: console output, rotated local files, and batched Elasticsearch writes.
- **Modern**: deeply integrated with the `apigo.cc/go` base library.
## Installation
```bash
go get apigo.cc/go/log
```
## Quick Start
```go
import "apigo.cc/go/log"

func main() {
	// Use the default Logger
	log.Info("server started", "port", 8080)

	// Create a child Logger carrying a traceId
	logger := log.New("unique-trace-id")
	logger.Info("request processed")
}
```
## Basic API
All log methods accept variadic extra arguments, which are automatically converted into key/value pairs in the `Extra` field via `cast.ToMap`.
```go
logger.Info("user login", "userId", 10086, "ip", "1.2.3.4")
logger.Error("database connection failed", "db", "mysql", "err", err)
```
## Extended Log APIs
### Database Logs (DB)
Automatically handles duration calculation, desensitization, and error stack capture.
```go
// Log a normal SQL statement
logger.DB("mysql", dsn, "SELECT * FROM users WHERE id=?", []any{1}, 10.5, nil)
// Error log with call stack
logger.Error("database failed", "db", "mysql")
// Log SQL with an error (the call stack is captured automatically and the log type is set to dbError)
logger.DB("mysql", dsn, "SELECT...", args, usedTime, err, "k1", "v1")
```
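The variadic `extra ...any` convention used throughout these APIs can be sketched as follows; `toMap` here is a simplified stand-in for `cast.ToMap`, whose exact behavior may differ:

```go
package main

import "fmt"

// toMap sketches the key/value convention of the `extra ...any`
// parameters: even positions are keys, odd positions are values.
// A trailing key without a value is dropped.
func toMap(extra ...any) map[string]any {
	m := make(map[string]any, len(extra)/2)
	for i := 0; i+1 < len(extra); i += 2 {
		m[fmt.Sprint(extra[i])] = extra[i+1]
	}
	return m
}

func main() {
	fmt.Println(toMap("userId", 10086, "ip", "1.2.3.4"))
	// map[ip:1.2.3.4 userId:10086]
}
```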
### Tasks & Monitoring (Task / Monitor / Statistic)
```go
// Task execution log (task name, duration in ms, success flag, message, extra args...)
logger.Task("CleanCache", 150.2, true, "Success", "deleted", 100)
// Monitoring/alert log (target, status code, message, extra args...)
logger.Monitor("CPU", 1, "Load too high", "usage", "95%")
// Business metric log (category, item, value, extra args...)
logger.Statistic("Business", "OrderCount", 100, "region", "cn")
```
### Custom Log Types
Embedding `BaseLog` is all it takes to get object pooling and automatic resets.
```go
type MyBusinessLog struct {
	log.BaseLog
	OrderId string
	Amount  float64
}
// Usage
entry := log.GetEntry(reflect.TypeOf(&MyBusinessLog{})).(*MyBusinessLog)
logger.fillBase(entry, "business")
entry.OrderId = "O123"
entry.Amount = 99.8
logger.Log(entry)
```
## Configuration (JSON/YAML)

TEST.md
View File

@ -0,0 +1,27 @@
# Log Performance Test Report
## Test Environment
- OS: darwin
- Architecture: amd64
- CPU: Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz
## Benchmark Results (v1.0.2)
| Test Case | Iterations | Time (ns/op) | Memory (B/op) | Allocations (allocs/op) |
| :--- | :--- | :--- | :--- | :--- |
| `BenchmarkLogger_RequestLog_Realistic` | 2,434,633 | 475.7 | 72 | 2 |
| `BenchmarkLoggerInfo` | 113,421 | 9,857 | - | - |
| `BenchmarkLoggerAsyncConcurrent` | 124,932 | 8,262 | - | - |
## Version Comparison
| Version | Mechanism | Time (ns/op) | Ergonomics |
| :--- | :--- | :--- | :--- |
| **v1.0.1** | Manual Reset | ~270 | Low (requires a lot of boilerplate) |
| **v1.0.2** | Automated Reset | ~475 | Very high (just embed `BaseLog`) |
## Summary
- **Performance**: the automated reset mechanism adds roughly 200 ns per log operation, mostly from reflection probing and cached-function dispatch. Sub-microsecond (< 1 µs) latency remains excellent for a high-performance production environment.
- **Memory efficiency**: allocations stay very low (72 B, 2 allocs per op), showing that the object pool and the `reflect.Value.Clear()` mechanism effectively keep GC pressure under control.
- **Developer experience**: developers now create custom log types simply by embedding `BaseLog`, with no more hand-written `Reset()` and `Base()` methods.
- **Optimization**: field-reset functions are cached per type, avoiding a full reflection walk on every log call.

View File

@ -33,7 +33,7 @@ func BenchmarkLogger_RequestLog_Realistic(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
WithEntry(typ, func(e LogEntry) {
WithEntry(typ, func(e any) {
entry := e.(*RequestLog)
entry.RequestId = "req-1234567890"
entry.UsedTime = 45.67
@ -41,15 +41,9 @@ func BenchmarkLogger_RequestLog_Realistic(b *testing.B) {
entry.Method = "POST"
entry.ResponseCode = 200
if entry.RequestHeaders == nil {
entry.RequestHeaders = make(map[string]string)
}
entry.RequestHeaders["Content-Type"] = "application/json"
entry.RequestHeaders["Authorization"] = "Bearer token-value"
if entry.RequestData == nil {
entry.RequestData = make(map[string]any)
}
entry.RequestData["userId"] = 10086
entry.RequestData["action"] = "update_profile"

extra.go Normal file
View File

@ -0,0 +1,147 @@
package log
import (
"reflect"
"apigo.cc/go/cast"
)
type RequestLog struct {
BaseLog
ServerId string
App string
Node string
ClientIp string
FromApp string
FromNode string
UserId string
DeviceId string
ClientAppName string
ClientAppVersion string
SessionId string
RequestId string
Host string
Scheme string
Proto string
AuthLevel int
Priority int
Method string
Path string
RequestHeaders map[string]string
RequestData map[string]any
UsedTime float32
ResponseCode int
ResponseHeaders map[string]string
ResponseDataLength uint
ResponseData string
}
func (logger *Logger) Request(entry *RequestLog) {
logger.fillBase(entry, LogTypeRequest)
logger.Log(entry)
}
type TaskLog struct {
BaseLog
Task string
UsedTime float32
Success bool
Message string
}
type MonitorLog struct {
BaseLog
Target string
Status int
Message string
}
type StatisticLog struct {
BaseLog
Category string
Item string
Value float64
}
func (logger *Logger) Task(taskName string, usedTime float32, success bool, message string, extra ...any) {
if logger.CheckLevel(INFO) {
entry := GetEntry(reflect.TypeOf(&TaskLog{})).(*TaskLog)
logger.fillBase(entry, LogTypeTask)
entry.Task = taskName
entry.UsedTime = usedTime
entry.Success = success
entry.Message = message
if len(extra) > 0 {
cast.ToMap(entry.Extra, extra)
}
logger.Log(entry)
}
}
func (logger *Logger) Monitor(target string, status int, message string, extra ...any) {
if logger.CheckLevel(INFO) {
entry := GetEntry(reflect.TypeOf(&MonitorLog{})).(*MonitorLog)
logger.fillBase(entry, LogTypeMonitor)
entry.Target = target
entry.Status = status
entry.Message = message
if len(extra) > 0 {
cast.ToMap(entry.Extra, extra)
}
logger.Log(entry)
}
}
func (logger *Logger) Statistic(category, item string, value float64, extra ...any) {
if logger.CheckLevel(INFO) {
entry := GetEntry(reflect.TypeOf(&StatisticLog{})).(*StatisticLog)
logger.fillBase(entry, LogTypeStatistic)
entry.Category = category
entry.Item = item
entry.Value = value
if len(extra) > 0 {
cast.ToMap(entry.Extra, extra)
}
logger.Log(entry)
}
}
type DBLog struct {
BaseLog
DbType string
Dsn string
Query string
QueryArgs string
UsedTime float32
Error string
CallStacks []string
}
func (logger *Logger) DB(dbType, dsn, query string, args []any, usedTime float32, err error, extra ...any) {
logType := LogTypeDb
level := INFO
var e string
if err != nil {
logType = LogTypeDbError
level = ERROR
e = err.Error()
}
if logger.CheckLevel(level) {
entry := GetEntry(reflect.TypeOf(&DBLog{})).(*DBLog)
logger.fillBase(entry, logType)
entry.DbType = dbType
entry.Dsn = dsn
entry.Query = query
entry.QueryArgs = cast.MustToJSON(args)
entry.UsedTime = usedTime
if e != "" {
entry.Error = e
entry.CallStacks = getCallStacks(logger.truncations)
}
if len(extra) > 0 {
cast.ToMap(entry.Extra, extra)
}
logger.Log(entry)
}
}

View File

@ -1,6 +1,7 @@
package log
import (
"fmt"
"testing"
)
@ -25,3 +26,40 @@ func TestDesensitization(t *testing.T) {
}
logger.Log(data) // should be desensitized in the output
}
func TestDBLog(t *testing.T) {
logger := NewLogger(Config{
Level: "debug",
})
// A plain DB log
logger.DB("mysql", "dsn...", "SELECT * FROM users", []any{1}, 10.5, nil)
// A DB error log (by passing an error object)
logger.DB("mysql", "dsn...", "SELECT * FROM users", []any{1}, 10.5, fmt.Errorf("connection lost"))
// A DB log with extra arguments
logger.DB("mysql", "dsn...", "SELECT * FROM users", []any{1}, 10.5, nil, "k1", "v1")
}
func TestRequestLog(t *testing.T) {
logger := NewLogger(Config{
Level: "debug",
})
req := &RequestLog{
Method: "GET",
Path: "/api/user",
}
logger.Request(req)
}
func TestExtraLogs(t *testing.T) {
logger := NewLogger(Config{
Level: "debug",
})
logger.Task("CleanCache", 150.2, true, "Success clean")
logger.Monitor("CPU", 1, "Normal")
logger.Statistic("Business", "OrderCount", 100)
}

View File

@ -162,8 +162,8 @@ func NewLogger(conf Config) *Logger {
}
func (logger *Logger) Log(data any) {
if entry, ok := data.(LogEntry); ok {
logger.asyncWrite(entry)
if entry, ok := data.(LogEntry); ok && entry.IsLogEntry() {
logger.asyncWrite(data)
return
}
@ -171,16 +171,16 @@ func (logger *Logger) Log(data any) {
if err != nil {
buf, _ = logger.formatter.Format(map[string]any{
"logType": LogTypeUndefined,
"traceId": logger.traceId,
"undefined": fmt.Sprint(data),
"logType": LogTypeUndefined,
"traceId": logger.traceId,
"message": cast.String(data),
}, nil)
}
logger.writeBuf(buf)
}
func (logger *Logger) asyncWrite(entry LogEntry) {
func (logger *Logger) asyncWrite(entry any) {
buf, err := logger.formatter.Format(entry, logger.sensitiveKeys)
if err == nil {
@ -206,17 +206,42 @@ func (logger *Logger) writeBuf(buf []byte) {
}
}
func (logger *Logger) fillBase(entry any, logType string) {
var base *BaseLog
rv := reflect.ValueOf(entry)
if rv.Kind() == reflect.Ptr {
rv = rv.Elem()
}
if rv.Kind() == reflect.Struct {
f := rv.FieldByName("BaseLog")
if f.IsValid() && f.CanAddr() {
if b, ok := f.Addr().Interface().(*BaseLog); ok {
base = b
}
}
}
if base == nil {
return
}
base.LogName = logger.config.Name
base.LogType = logType
base.LogTime = MakeLogTime(time.Now())
base.TraceId = logger.traceId
base.ImageName = dockerImageName
base.ImageTag = dockerImageTag
base.ServerName = serverName
base.ServerIp = serverIp
}
func (logger *Logger) Debug(message string, extra ...any) {
if logger.CheckLevel(DEBUG) {
entry := GetEntry(reflect.TypeOf(&DebugLog{})).(*DebugLog)
logger.fillBase(entry.Base(), LogTypeDebug)
logger.fillBase(entry, LogTypeDebug)
entry.Debug = message
if len(extra) > 0 {
for i := 0; i < len(extra); i += 2 {
if i+1 < len(extra) {
entry.Extra[cast.String(extra[i])] = extra[i+1]
}
}
cast.ToMap(entry.Extra, extra)
}
logger.Log(entry)
}
@ -225,14 +250,10 @@ func (logger *Logger) Debug(message string, extra ...any) {
func (logger *Logger) Info(message string, extra ...any) {
if logger.CheckLevel(INFO) {
entry := GetEntry(reflect.TypeOf(&InfoLog{})).(*InfoLog)
logger.fillBase(entry.Base(), LogTypeInfo)
logger.fillBase(entry, LogTypeInfo)
entry.Info = message
if len(extra) > 0 {
for i := 0; i < len(extra); i += 2 {
if i+1 < len(extra) {
entry.Extra[cast.String(extra[i])] = extra[i+1]
}
}
cast.ToMap(entry.Extra, extra)
}
logger.Log(entry)
}
@ -241,15 +262,11 @@ func (logger *Logger) Info(message string, extra ...any) {
func (logger *Logger) Warning(message string, extra ...any) {
if logger.CheckLevel(WARNING) {
entry := GetEntry(reflect.TypeOf(&WarningLog{})).(*WarningLog)
logger.fillBase(entry.Base(), LogTypeWarning)
logger.fillBase(entry, LogTypeWarning)
entry.Warning = message
entry.CallStacks = getCallStacks(logger.truncations)
if len(extra) > 0 {
for i := 0; i < len(extra); i += 2 {
if i+1 < len(extra) {
entry.Extra[cast.String(extra[i])] = extra[i+1]
}
}
cast.ToMap(entry.Extra, extra)
}
logger.Log(entry)
}
@ -258,15 +275,11 @@ func (logger *Logger) Warning(message string, extra ...any) {
func (logger *Logger) Error(message string, extra ...any) {
if logger.CheckLevel(ERROR) {
entry := GetEntry(reflect.TypeOf(&ErrorLog{})).(*ErrorLog)
logger.fillBase(entry.Base(), LogTypeError)
logger.fillBase(entry, LogTypeError)
entry.Error = message
entry.CallStacks = getCallStacks(logger.truncations)
if len(extra) > 0 {
for i := 0; i < len(extra); i += 2 {
if i+1 < len(extra) {
entry.Extra[cast.String(extra[i])] = extra[i+1]
}
}
cast.ToMap(entry.Extra, extra)
}
logger.Log(entry)
}
@ -297,14 +310,3 @@ func (logger *Logger) CheckLevel(logLevel LevelType) bool {
}
return logLevel >= settedLevel
}
func (logger *Logger) fillBase(base *BaseLog, logType string) {
base.LogName = logger.config.Name
base.LogType = logType
base.LogTime = MakeLogTime(time.Now())
base.TraceId = logger.traceId
base.ImageName = dockerImageName
base.ImageTag = dockerImageTag
base.ServerName = serverName
base.ServerIp = serverIp
}

pool.go
View File

@ -5,33 +5,106 @@ import (
"sync"
)
// LogEntry defines the interface that high-performance log objects must implement
type LogEntry interface {
Reset()
Base() *BaseLog
}
// PoolManager manages the object pools for the different log types
type PoolManager struct {
pools sync.Map // map[reflect.Type]*sync.Pool
}
var globalPools = &PoolManager{}
var (
globalPools = &PoolManager{}
resetCache sync.Map // map[reflect.Type]func(reflect.Value)
)
// GetEntry fetches a log object of the given type from the pool and guarantees it is in a clean, reset state
func GetEntry(t reflect.Type) LogEntry {
func GetEntry(t reflect.Type) any {
pool, _ := globalPools.pools.LoadOrStore(t, &sync.Pool{
New: func() any {
return reflect.New(t.Elem()).Interface()
},
})
entry := pool.(*sync.Pool).Get().(LogEntry)
entry.Reset() // ensure the object handed out is always clean and pre-allocated
entry := pool.(*sync.Pool).Get()
ResetLogEntry(entry) // automatically resets every field; subtypes need no Reset implementation
return entry
}
// PutEntry returns a log object to the pool without resetting it
func PutEntry(entry LogEntry) {
// ResetLogEntry uses reflection to automatically reset every field of a log object.
// Maps and slices in particular are initialized to length 0, capacity 8.
func ResetLogEntry(v any) {
rv := reflect.ValueOf(v)
if rv.Kind() != reflect.Ptr || rv.IsNil() {
return
}
t := rv.Type()
resetFunc, ok := resetCache.Load(t)
if !ok {
resetFunc = buildResetFunc(t.Elem())
resetCache.Store(t, resetFunc)
}
resetFunc.(func(reflect.Value))(rv.Elem())
}
func buildResetFunc(t reflect.Type) func(reflect.Value) {
var funcs []func(reflect.Value)
for i := 0; i < t.NumField(); i++ {
field := t.Field(i)
fieldIdx := i
switch field.Type.Kind() {
case reflect.String:
funcs = append(funcs, func(rv reflect.Value) { rv.Field(fieldIdx).SetString("") })
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
funcs = append(funcs, func(rv reflect.Value) { rv.Field(fieldIdx).SetInt(0) })
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
funcs = append(funcs, func(rv reflect.Value) { rv.Field(fieldIdx).SetUint(0) })
case reflect.Float32, reflect.Float64:
funcs = append(funcs, func(rv reflect.Value) { rv.Field(fieldIdx).SetFloat(0) })
case reflect.Bool:
funcs = append(funcs, func(rv reflect.Value) { rv.Field(fieldIdx).SetBool(false) })
case reflect.Map:
funcs = append(funcs, func(rv reflect.Value) {
f := rv.Field(fieldIdx)
if f.IsNil() {
f.Set(reflect.MakeMapWithSize(f.Type(), 8))
} else {
f.Clear()
}
})
case reflect.Slice:
funcs = append(funcs, func(rv reflect.Value) {
f := rv.Field(fieldIdx)
if f.Cap() < 8 {
f.Set(reflect.MakeSlice(f.Type(), 0, 8))
} else {
f.SetLen(0)
}
})
case reflect.Struct:
subReset := buildResetFunc(field.Type)
funcs = append(funcs, func(rv reflect.Value) {
subReset(rv.Field(fieldIdx))
})
case reflect.Ptr, reflect.Interface:
zero := reflect.Zero(field.Type)
funcs = append(funcs, func(rv reflect.Value) {
rv.Field(fieldIdx).Set(zero)
})
}
}
return func(rv reflect.Value) {
for _, f := range funcs {
f(rv)
}
}
}
func resetStruct(rv reflect.Value) {
// no longer called directly; buildResetFunc now carries this logic
}
// PutEntry returns a log object to the pool
func PutEntry(entry any) {
t := reflect.TypeOf(entry)
if pool, ok := globalPools.pools.Load(t); ok {
pool.(*sync.Pool).Put(entry)
@ -39,8 +112,13 @@ func PutEntry(entry LogEntry) {
}
// WithEntry runs the closure and automatically recycles the object afterwards
func WithEntry(t reflect.Type, fn func(LogEntry)) {
func WithEntry(t reflect.Type, fn func(any)) {
entry := GetEntry(t)
defer PutEntry(entry)
fn(entry)
}
// LogEntry is a marker interface identifying log objects managed by the pool
type LogEntry interface {
IsLogEntry() bool
}
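To illustrate the marker-interface design: embedding a base type that implements the single method is enough for any custom type to satisfy it. A self-contained sketch mirroring (not reproducing) the shapes above:

```go
package main

import "fmt"

// LogEntry is a marker interface: one trivial method identifies
// pool-managed log objects without forcing per-type boilerplate.
type LogEntry interface {
	IsLogEntry() bool
}

type BaseLog struct {
	LogType string
}

func (b *BaseLog) IsLogEntry() bool { return true }

// Embedding BaseLog is enough to satisfy the marker interface;
// MyLog needs no methods of its own.
type MyLog struct {
	BaseLog
	Detail string
}

func main() {
	var e any = &MyLog{Detail: "x"}
	if le, ok := e.(LogEntry); ok {
		fmt.Println("pool-managed:", le.IsLogEntry()) // pool-managed: true
	}
}
```

This is the same type assertion `Logger.Log` performs in this change set to decide whether a value goes down the pooled fast path.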

View File

@ -12,20 +12,10 @@ type MockRequestLog struct {
UsedTime float32
}
func (m *MockRequestLog) Reset() {
m.BaseLog.Reset()
m.RequestId = ""
m.UsedTime = 0
}
func (m *MockRequestLog) Base() *BaseLog {
return &m.BaseLog
}
func TestWithEntry(t *testing.T) {
typ := reflect.TypeOf(&MockRequestLog{})
WithEntry(typ, func(e LogEntry) {
WithEntry(typ, func(e any) {
entry := e.(*MockRequestLog)
entry.RequestId = "with-entry-id"
})

View File

@ -29,23 +29,11 @@ type BaseLog struct {
ImageTag string
ServerName string
ServerIp string
Extra map[string]interface{}
Extra map[string]any
}
func (b *BaseLog) Reset() {
b.LogName = ""
b.LogType = ""
b.LogTime = ""
b.TraceId = ""
if b.Extra == nil {
b.Extra = make(map[string]interface{}, 8)
} else {
clear(b.Extra)
}
}
func (b *BaseLog) Base() *BaseLog {
return b
func (b *BaseLog) IsLogEntry() bool {
return true
}
type DebugLog struct {
@ -53,160 +41,19 @@ type DebugLog struct {
Debug string
}
func (d *DebugLog) Reset() {
d.BaseLog.Reset()
d.Debug = ""
}
func (d *DebugLog) Base() *BaseLog {
return &d.BaseLog
}
type InfoLog struct {
BaseLog
Info string
}
func (i *InfoLog) Reset() {
i.BaseLog.Reset()
i.Info = ""
}
func (i *InfoLog) Base() *BaseLog {
return &i.BaseLog
}
type WarningLog struct {
BaseLog
Warning string
CallStacks []string
}
func (w *WarningLog) Reset() {
w.BaseLog.Reset()
w.Warning = ""
w.CallStacks = w.CallStacks[:0]
}
func (w *WarningLog) Base() *BaseLog {
return &w.BaseLog
}
type ErrorLog struct {
BaseLog
Error string
CallStacks []string
}
func (e *ErrorLog) Reset() {
e.BaseLog.Reset()
e.Error = ""
e.CallStacks = e.CallStacks[:0]
}
func (e *ErrorLog) Base() *BaseLog {
return &e.BaseLog
}
type DBLog struct {
BaseLog
DbType string
Dsn string
Query string
QueryArgs string
UsedTime float32
}
func (d *DBLog) Reset() {
d.BaseLog.Reset()
d.DbType = ""
d.Dsn = ""
d.Query = ""
d.QueryArgs = ""
d.UsedTime = 0
}
func (d *DBLog) Base() *BaseLog {
return &d.BaseLog
}
type RequestLog struct {
BaseLog
ServerId string
App string
Node string
ClientIp string
FromApp string
FromNode string
UserId string
DeviceId string
ClientAppName string
ClientAppVersion string
SessionId string
RequestId string
Host string
Scheme string
Proto string
AuthLevel int
Priority int
Method string
Path string
RequestHeaders map[string]string
RequestData map[string]any
UsedTime float32
ResponseCode int
ResponseHeaders map[string]string
ResponseDataLength uint
ResponseData string
}
func (r *RequestLog) Reset() {
r.BaseLog.Reset()
r.ServerId = ""
r.App = ""
r.Node = ""
r.ClientIp = ""
r.FromApp = ""
r.FromNode = ""
r.UserId = ""
r.DeviceId = ""
r.ClientAppName = ""
r.ClientAppVersion = ""
r.SessionId = ""
r.RequestId = ""
r.Host = ""
r.Scheme = ""
r.Proto = ""
r.AuthLevel = 0
r.Priority = 0
r.Method = ""
r.Path = ""
if r.RequestHeaders == nil {
r.RequestHeaders = make(map[string]string, 8)
} else {
clear(r.RequestHeaders)
}
if r.RequestData == nil {
r.RequestData = make(map[string]any, 8)
} else {
clear(r.RequestData)
}
r.UsedTime = 0
r.ResponseCode = 0
if r.ResponseHeaders == nil {
r.ResponseHeaders = make(map[string]string, 8)
} else {
clear(r.ResponseHeaders)
}
r.ResponseDataLength = 0
r.ResponseData = ""
}
func (r *RequestLog) Base() *BaseLog {
return &r.BaseLog
}

View File

@ -138,13 +138,21 @@ func getCallStacks(truncations []string) []string {
if strings.Contains(file, "/go/src/") {
continue
}
if strings.Contains(file, "/log/") { // note this path match; after the migration it is /log/
// Only frames inside the core implementation files (logger.go, extra.go, etc.) are treated as "inLogger",
// so that call stacks from test files (xxx_test.go) are preserved
isLogInternal := (strings.Contains(file, "/log/logger.go") ||
strings.Contains(file, "/log/utility.go") ||
strings.Contains(file, "/log/standard.go") ||
strings.Contains(file, "/log/extra.go"))
if isLogInternal {
if inLogger {
continue
}
} else {
inLogger = false
}
if truncations != nil {
for _, truncation := range truncations {
if pos := strings.Index(file, truncation); pos != -1 {