
WireGuard outbound throughput regression: ~200 Mbps → ~100-120 Mbps since v25.12.8 #5878

@almatv54


Completeness requirements

  • I have read all the notes in the issue template and made sure my submission meets the requirements.
  • I have read the documentation and understand the meaning of every configuration option I wrote, rather than piling up seemingly useful options or defaults.
  • I have provided the complete configuration files and logs, not excerpts selected at my own discretion.
  • I have searched the issues and found no similar problem already reported.
  • The problem can be reproduced on the latest Release version.

Description

WireGuard outbound throughput dropped from ~200 Mbps (v25.12.8) to ~100-120 Mbps (v26.3.27) for single-peer outbound configurations.

Root cause: commits d8a8629 and 67a71ad changed the read path in proxy/wireguard/bind.go.

Old behavior (v25.12.8): The receive function sent an empty buffer via channel to the read goroutine, which read directly into it (zero-copy). One allocation, one copy.

New behavior (v26.3.27): The read goroutine allocates a new []byte of device.MaxMessageSize (~65KB) for every UDP packet, reads into it, sends via channel, then the receive function copies it again via copy(bufs[0], r.buff).
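A minimal, self-contained sketch of the two handoff patterns described above (channel and function names are illustrative, not the actual identifiers in proxy/wireguard/bind.go, and a local const stands in for device.MaxMessageSize):

```go
package main

import "fmt"

const maxMessageSize = 65535 // stands in for device.MaxMessageSize

// oldPath: the receiver lends its destination buffer to the read goroutine,
// which fills it directly (one allocation, one copy).
func oldPath(dst []byte) int {
	bufCh := make(chan []byte)
	nCh := make(chan int)
	go func() {
		buf := <-bufCh
		nCh <- copy(buf, "packet payload") // stands in for conn.ReadFrom(buf)
	}()
	bufCh <- dst
	return <-nCh
}

// newPath: the read goroutine allocates a fresh ~65 KB slice per packet and
// the receiver copies out of it (one extra allocation, one extra copy).
func newPath(dst []byte) int {
	ch := make(chan []byte)
	go func() {
		buf := make([]byte, maxMessageSize) // per-packet heap allocation
		n := copy(buf, "packet payload")    // stands in for conn.ReadFrom(buf)
		ch <- buf[:n]
	}()
	return copy(dst, <-ch) // second copy
}

func main() {
	dst := make([]byte, maxMessageSize)
	n := oldPath(dst)
	fmt.Println(string(dst[:n]))
	n = newPath(dst)
	fmt.Println(string(dst[:n]))
}
```

Both paths deliver the same bytes; the difference is purely in allocation and copy count per packet.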

This causes:

  1. Per-packet heap allocation of ~65KB (make([]byte, device.MaxMessageSize)) — heavy GC pressure at high throughput
  2. Double data copy (read into temp buffer → copy into WireGuard buffer)

At 200 Mbps with ~1400-byte packets, this is roughly 17,900 allocations/sec of 65 KB each, i.e. about 1.1 GB/sec of allocation churn.
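The arithmetic can be checked directly (65,535 bytes used as a stand-in for device.MaxMessageSize):

```go
package main

import "fmt"

func main() {
	const linkBits = 200e6    // 200 Mbps link
	const pktBytes = 1400.0   // typical MTU-sized packet
	const allocBytes = 65535.0 // stands in for device.MaxMessageSize

	pps := linkBits / 8 / pktBytes      // packets per second
	allocRate := pps * allocBytes / 1e9 // GB of allocations per second

	fmt.Printf("%.0f packets/s, %.2f GB/s allocated\n", pps, allocRate)
}
```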

Suggested fix: Use sync.Pool for buffer reuse instead of make() per packet, and consider reading directly into the destination buffer to eliminate the double copy.
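A sketch of the suggested sync.Pool approach (buffer size and names are placeholders, not the actual bind.go code):

```go
package main

import (
	"fmt"
	"sync"
)

const maxMessageSize = 65535 // stands in for device.MaxMessageSize

// bufPool hands out reusable ~65 KB read buffers, so steady-state reads
// allocate nothing instead of generating ~65 KB of garbage per packet.
var bufPool = sync.Pool{
	New: func() any { return make([]byte, maxMessageSize) },
}

func main() {
	buf := bufPool.Get().([]byte)
	n := copy(buf, "packet payload") // stands in for conn.ReadFrom(buf)

	// Hand buf[:n] to the consumer; once the data has been consumed (or
	// copied into WireGuard's buffer), return the slice to the pool.
	fmt.Println(string(buf[:n]))
	bufPool.Put(buf)
}
```

The pool only addresses the allocation churn; reading directly into the destination buffer, as the old path did, would also remove the second copy.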

Reproduction steps

  1. Configure a WireGuard outbound with a single peer
  2. Run a speed test on v25.12.8 → ~200 Mbps download
  3. Run a speed test on v26.3.27 with the same config → ~100-120 Mbps download

Environment: Linux amd64 server (VPS in Germany)

Client configuration

N/A — issue is on server side only

Server configuration

{
  "outbounds": [
    {
      "protocol": "wireguard",
      "settings": {
        "secretKey": "secretKey",
        "address": ["10.1.1.1/32"],
        "peers": [
          {
            "endpoint": "endpoint",
            "publicKey": "publicKey",
            "allowedIPs": ["0.0.0.0/0", "::/0"]
          }
        ],
        "domainStrategy": "ForceIPv4"
      },
      "tag": "tag"
    }
  ]
}

Client log

N/A

Server log

No errors in logs. The issue is a performance degradation, not a functional bug.
