Overview
Buffer Pool for High-Concurrency HTTP Processing
================================================
This buffer pool manages reusable byte buffers to optimize memory allocation for high-throughput HTTP response processing. When handling thousands of concurrent HTTP requests with large response bodies (blockchain data often exceeds 1MB), naive allocation patterns create significant performance issues.
Memory Allocation Patterns:
- Without pooling: Each request allocates new []byte buffers
- With pooling: Buffers are reused across requests via sync.Pool
Benefits:
- Reduces garbage collection pressure
- Provides predictable memory usage under load
- Maintains consistent performance during traffic spikes
- Size limits prevent memory bloat
The pool grows buffer capacity automatically as needed, and oversized buffers are discarded rather than returned to the pool to avoid memory waste.
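The pooling pattern described above can be sketched with Go's sync.Pool. This is a minimal illustration of the general technique, not this package's actual implementation; the `getBuffer`/`putBuffer` helpers and the size constants are assumptions mirroring the documented defaults.

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// Hypothetical sketch of the pooling pattern; names are illustrative.
const (
	initialSize = 256 * 1024      // 256KB starting capacity
	maxSize     = 4 * 1024 * 1024 // 4MB cap on pooled buffers
)

var pool = sync.Pool{
	New: func() any {
		// Allocate only when the pool is empty.
		return bytes.NewBuffer(make([]byte, 0, initialSize))
	},
}

func getBuffer() *bytes.Buffer {
	return pool.Get().(*bytes.Buffer)
}

func putBuffer(b *bytes.Buffer) {
	// Drop oversized buffers instead of pooling them to avoid memory bloat.
	if b.Cap() > maxSize {
		return
	}
	b.Reset()
	pool.Put(b)
}

func main() {
	b := getBuffer()
	b.WriteString("response body")
	fmt.Println(b.Len())
	putBuffer(b) // buffer is recycled, not garbage collected
}
```

Because sync.Pool hands out reset buffers across goroutines, steady-state request handling reuses the same backing arrays instead of allocating fresh ones, which is where the GC-pressure reduction comes from.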
Index
Constants
const (
	// DefaultInitialBufferSize is the initial size of a pooled buffer.
	// Start with 256KB buffers - they can grow as needed.
	// TODO_IMPROVE: Make this configurable via YAML settings
	DefaultInitialBufferSize = 256 * 1024

	// DefaultMaxBufferSize is the maximum size of a pooled buffer.
	// Buffers larger than 4MB are not returned to the pool, avoiding memory bloat.
	DefaultMaxBufferSize = 4 * 1024 * 1024
)
Variables
This section is empty.
Functions
This section is empty.
Types
type BufferPool
type BufferPool struct {
// contains filtered or unexported fields
}
BufferPool manages reusable byte buffers to reduce GC pressure. Uses sync.Pool for efficient buffer recycling with size limits.
func NewBufferPool
func NewBufferPool(maxReaderSize int64) *BufferPool
func (*BufferPool) ReadWithBuffer
func (bp *BufferPool) ReadWithBuffer(r io.Reader) ([]byte, error)
ReadWithBuffer reads from an io.Reader using a pooled buffer.