Documentation
Overview
Buffer Pool for High-Concurrency HTTP Processing
================================================
This buffer pool manages reusable byte buffers to optimize memory allocation for high-throughput HTTP response processing. When handling thousands of concurrent HTTP requests with large response bodies (blockchain data often exceeds 1MB), naive allocation patterns create significant performance issues.
Memory Allocation Patterns:
- Without pooling: Each request allocates new []byte buffers
- With pooling: Buffers are reused across requests via sync.Pool
Benefits:
- Reduces garbage collection pressure
- Provides predictable memory usage under load
- Maintains consistent performance during traffic spikes
- Size limits prevent memory bloat
The pool automatically grows buffer capacity as needed, and oversized buffers are discarded rather than returned to the pool, so a handful of very large responses cannot inflate steady-state memory use.
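A minimal sketch of this pooling pattern, built on sync.Pool as described above; the helper names, package name, and constants here are illustrative assumptions, not this package's internals:

package bufpool

import (
	"bytes"
	"sync"
)

const (
	initialBufferSize = 256 * 1024      // start with 256KB buffers
	maxBufferSize     = 4 * 1024 * 1024 // never pool buffers larger than 4MB
)

// pool hands out reusable *bytes.Buffer values; New runs only when the
// pool is empty, so steady-state traffic reuses existing buffers.
var pool = sync.Pool{
	New: func() any {
		return bytes.NewBuffer(make([]byte, 0, initialBufferSize))
	},
}

// getBuffer fetches a buffer from the pool (hypothetical helper).
func getBuffer() *bytes.Buffer {
	return pool.Get().(*bytes.Buffer)
}

// putBuffer resets a buffer and returns it to the pool, dropping
// oversized buffers so large responses do not pin memory indefinitely.
func putBuffer(buf *bytes.Buffer) {
	if buf.Cap() > maxBufferSize {
		return // let the GC reclaim it instead of pooling it
	}
	buf.Reset()
	pool.Put(buf)
}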
Concurrency Limiter for Resource Management
===========================================
This concurrency limiter implements a semaphore pattern to bound the number of concurrent HTTP operations, preventing resource exhaustion under high load.
When processing thousands of simultaneous HTTP requests, unlimited concurrency can overwhelm system resources (memory, file descriptors, network connections).
Resource Protection Mechanisms:
- Semaphore-based admission control using buffered channels
- Context-aware blocking with cancellation support
- Real-time tracking of active request counts
- Graceful degradation when limits are exceeded
Operational Characteristics:
- Blocks new requests when limit is reached
- Respects context cancellation for timeout handling
- Integrates with metrics for observability
- Thread-safe for concurrent access
The limiter prevents cascading failures by ensuring system resources remain available even during traffic spikes or slow downstream services.
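A minimal sketch of the semaphore pattern described above, using a buffered channel with context-aware acquisition; this is illustrative, not the package's actual implementation:

package limiter

import "context"

// semaphore bounds concurrency with a buffered channel: each in-flight
// operation holds one slot until it finishes.
type semaphore chan struct{}

func newSemaphore(maxConcurrent int) semaphore {
	return make(semaphore, maxConcurrent)
}

// acquire blocks until a slot is free or ctx is canceled; it returns
// false if the context was canceled before a slot became available.
func (s semaphore) acquire(ctx context.Context) bool {
	select {
	case s <- struct{}{}:
		return true
	case <-ctx.Done():
		return false
	}
}

// release frees a slot for the next waiting caller.
func (s semaphore) release() {
	<-s
}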
Index
Constants
const (
	// DefaultInitialBufferSize is the initial size of the buffer pool.
	// Start with 256KB buffers - can grow as needed
	DefaultInitialBufferSize = 256 * 1024 // TODO_IMPROVE: Make this configurable via YAML settings

	// DefaultMaxBufferSize is the maximum size of the buffer pool.
	// Set the max buffer size to 4MB to avoid memory bloat.
	DefaultMaxBufferSize = 4 * 1024 * 1024
)
Variables
This section is empty.
Functions
This section is empty.
Types
type BufferPool
type BufferPool struct {
// contains filtered or unexported fields
}
BufferPool manages reusable byte buffers to reduce GC pressure. It uses sync.Pool for efficient buffer recycling with size limits.
func NewBufferPool
func NewBufferPool(maxReaderSize int64) *BufferPool
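NewBufferPool creates a BufferPool with the given maximum reader size.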
func (*BufferPool) ReadWithBuffer
func (bp *BufferPool) ReadWithBuffer(r io.Reader) ([]byte, error)
ReadWithBuffer reads from an io.Reader using a pooled buffer.
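A hypothetical usage sketch for ReadWithBuffer: a shared pool drains an HTTP response body so concurrent requests reuse pooled buffers instead of allocating fresh slices. The function name and URL are illustrative, and the import of this package is omitted since its path is not shown here.

package example

import "net/http"

// fetchBody reads an HTTP response body through a shared BufferPool.
func fetchBody(pool *BufferPool, url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	// ReadWithBuffer reads the entire body via a pooled buffer.
	return pool.ReadWithBuffer(resp.Body)
}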
type ConcurrencyLimiter
type ConcurrencyLimiter struct {
// contains filtered or unexported fields
}
ConcurrencyLimiter bounds concurrent operations via a semaphore pattern. It prevents resource exhaustion and tracks active request counts.
func NewConcurrencyLimiter
func NewConcurrencyLimiter(maxConcurrent int) *ConcurrencyLimiter
NewConcurrencyLimiter creates a limiter that bounds concurrent operations.
func (*ConcurrencyLimiter) Acquire
func (cl *ConcurrencyLimiter) Acquire(ctx context.Context) bool
Acquire blocks until a slot is available or the context is canceled. It returns true if a slot was acquired, false if the context was canceled first.
func (*ConcurrencyLimiter) Release
func (cl *ConcurrencyLimiter) Release()
Release returns a slot to the pool.
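A hypothetical usage sketch pairing Acquire and Release around a bounded unit of work. The timeout, function names, and work callback are illustrative assumptions, and the import of this package is omitted since its path is not shown here.

package example

import (
	"context"
	"time"
)

// handle runs work under the limiter, waiting at most five seconds for a slot.
func handle(ctx context.Context, cl *ConcurrencyLimiter, work func() error) error {
	// Bound how long we wait for a free slot (the timeout value is illustrative).
	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()

	// Acquire blocks until a slot frees up; false means the context was
	// canceled (or timed out) before one became available.
	if !cl.Acquire(ctx) {
		return ctx.Err()
	}
	defer cl.Release() // always give the slot back, even on error paths

	return work()
}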