Pool hot-path ARQ buffers to reduce per-packet allocations#131

Open
Isusami wants to merge 1 commit into masterking32:main from Isusami:feature/arq-buffer-pooling
Conversation


Isusami commented Apr 13, 2026

Summary

Introduce sync.Pool-backed buffer pooling for ARQ data, control, and rx payload buffers to reduce per-packet heap allocations on hot paths.

Changes

  • Add arqDataPool with getARQBuffer/putARQBuffer for reusable []byte buffers
  • Add pool-pointer fields (dataPoolBuf, payloadPoolBuf, poolBuf) to arqDataItem, arqControlItem, and rxPayload for deterministic release
  • Update all send/receive/control/release paths to use pooled buffers
  • Fix pooled buffer leak on duplicate control packet early return in SendControlPacketWithTTL
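
The getARQBuffer/putARQBuffer pair described above can be sketched roughly like this. The pool and helper names mirror the PR, but the buffer size, the undersized-buffer guard, and all other details are assumptions, not the actual implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// arqBufSize is an assumed maximum ARQ payload size; the real value
// in this PR may differ.
const arqBufSize = 2048

// Pool *[]byte rather than []byte so that Put does not allocate a new
// interface value for the slice header (staticcheck SA6002).
var arqDataPool = sync.Pool{
	New: func() any {
		b := make([]byte, arqBufSize)
		return &b
	},
}

// getARQBuffer hands out a full-size reusable buffer from the pool.
func getARQBuffer() []byte {
	return *arqDataPool.Get().(*[]byte)
}

// putARQBuffer returns a buffer to the pool once the packet is done.
func putARQBuffer(b []byte) {
	if cap(b) < arqBufSize {
		return // never pool undersized buffers
	}
	b = b[:arqBufSize]
	arqDataPool.Put(&b)
}

func main() {
	buf := getARQBuffer()
	copy(buf, []byte("payload"))
	fmt.Println(len(buf)) // 2048
	putARQBuffer(buf)
}
```

Note that `sync.Pool` may drop pooled buffers at any GC cycle, so this only reduces allocation churn; it never guarantees reuse.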

Why

Every packet currently allocates fresh []byte slices, which adds GC pressure under load. Pooling reuses buffers across packets, reducing allocation churn on the critical data plane.
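
The pool-pointer fields from the change list enable deterministic release: each item records exactly which pooled buffer it borrowed, so the release path can return that buffer and nil the pointer to guard against double release. Only arqDataItem and dataPoolBuf are named in the PR; the rest of this sketch is assumed:

```go
package main

import (
	"fmt"
	"sync"
)

const bufSize = 2048 // assumed maximum payload size

var pool = sync.Pool{
	New: func() any {
		b := make([]byte, bufSize)
		return &b
	},
}

type arqDataItem struct {
	data        []byte  // the slice actually transmitted (may be a sub-slice)
	dataPoolBuf *[]byte // non-nil only while data is borrowed from the pool
}

// newDataItem (hypothetical) borrows a pooled buffer, copies the
// payload in, and remembers the borrowed buffer for later release.
func newDataItem(payload []byte) *arqDataItem {
	pb := pool.Get().(*[]byte)
	n := copy(*pb, payload)
	return &arqDataItem{data: (*pb)[:n], dataPoolBuf: pb}
}

// release returns the borrowed buffer to the pool; nilling the pool
// pointer makes a second release a harmless no-op.
func (it *arqDataItem) release() {
	if it.dataPoolBuf != nil {
		pool.Put(it.dataPoolBuf)
		it.dataPoolBuf = nil
	}
	it.data = nil
}

func main() {
	it := newDataItem([]byte("hello"))
	fmt.Println(string(it.data)) // hello
	it.release()
	it.release() // safe: pool pointer already cleared
}
```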

Test plan

  • go test ./internal/arq/... -- all 50+ tests pass
  • Pre-existing race in TestARQ_FinHandshakeWaitsForInboundWriteDrain (test logger, not production code) -- same on main

Introduce sync.Pool-backed arqDataPool with getARQBuffer/putARQBuffer
for data, control, and rx payload buffers. Each struct gains a pool
pointer (dataPoolBuf, payloadPoolBuf, poolBuf) for deterministic
release. Fixes a pooled buffer leak on duplicate control packet early
return in SendControlPacketWithTTL.
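
The leak fixed here is the classic early-return pattern: a pooled buffer is acquired at the top of the function, then a duplicate-packet check returns before the normal release point. This toy model uses a bounded channel as the pool so a leak is observable; everything except the SendControlPacketWithTTL name is a made-up stand-in for the real code:

```go
package main

import "fmt"

type sender struct {
	seen map[uint32]bool // sequence numbers already sent
	pool chan []byte     // stand-in for the real buffer pool
}

func (s *sender) get() []byte  { return <-s.pool }
func (s *sender) put(b []byte) { s.pool <- b }

// SendControlPacketWithTTL returns false for duplicates. The put on the
// early return is the fix: without it the borrowed buffer never goes
// back to the pool.
func (s *sender) SendControlPacketWithTTL(seq uint32, ttl int) bool {
	buf := s.get()
	if s.seen[seq] {
		s.put(buf) // previously missing: buf leaked on this path
		return false
	}
	s.seen[seq] = true
	_ = buf // ...serialize the control packet into buf and send it...
	s.put(buf)
	return true
}

func main() {
	s := &sender{seen: map[uint32]bool{}, pool: make(chan []byte, 1)}
	s.pool <- make([]byte, 64)
	fmt.Println(s.SendControlPacketWithTTL(7, 3)) // true
	fmt.Println(s.SendControlPacketWithTTL(7, 3)) // false (duplicate, buffer returned)
}
```

With the real `sync.Pool` a leak like this is silent (the buffer is simply garbage-collected and the pooling benefit is lost), which is why it is easy to miss in review.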