tcp

package v1.19.2

Published: Feb 4, 2026 License: MIT Imports: 13 Imported by: 0

README

TCP Server Package

Production-ready TCP server implementation with TLS support, graceful shutdown, connection lifecycle management, and comprehensive monitoring capabilities.


Overview

The tcp package provides a high-performance, production-ready TCP server with first-class support for TLS encryption, graceful shutdown, and connection lifecycle monitoring. It implements a goroutine-per-connection model optimized for hundreds to thousands of concurrent connections.

Design Philosophy
  1. Simplicity First: Minimal API surface with sensible defaults
  2. Production Ready: Built-in monitoring, error handling, and graceful shutdown
  3. Security by Default: TLS 1.2/1.3 support with secure configuration
  4. Observable: Real-time connection tracking and lifecycle callbacks
  5. Context-Aware: Full integration with Go's context for cancellation and timeouts
Key Features
  • TCP Server: Pure TCP with optional TLS/SSL encryption
  • TLS Support: TLS 1.2/1.3 with configurable cipher suites and mutual TLS
  • Graceful Shutdown: Connection draining with configurable timeouts
  • Connection Tracking: Real-time connection counting and monitoring
  • Idle Timeout: Automatic cleanup of inactive connections
  • Lifecycle Callbacks: Hook into connection events (new, read, write, close)
  • Thread-Safe: Lock-free atomic operations for state management
  • Context Integration: Full context support for cancellation and deadlines
  • Minimal Dependencies: Only the standard library and golib packages

Architecture

Component Diagram
┌─────────────────────────────────────────────────────┐
│                    TCP Server                       │
├─────────────────────────────────────────────────────┤
│                                                     │
│  ┌──────────────┐       ┌───────────────────┐       │
│  │   Listener   │       │  Context Manager  │       │
│  │  (net.TCP)   │       │  (cancellation)   │       │
│  └──────┬───────┘       └─────────┬─────────┘       │
│         │                         │                 │
│         ▼                         ▼                 │
│  ┌──────────────────────────────────────────┐       │
│  │       Connection Accept Loop             │       │
│  │     (with optional TLS handshake)        │       │
│  └──────────────┬───────────────────────────┘       │
│                 │                                   │
│                 ▼                                   │
│         Per-Connection Goroutine                    │
│         ┌─────────────────────┐                     │
│         │  sCtx (I/O wrapper) │                     │
│         │   - Read/Write      │                     │
│         │   - Idle timeout    │                     │
│         │   - State tracking  │                     │
│         └──────────┬──────────┘                     │
│                    │                                │
│                    ▼                                │
│         ┌─────────────────────┐                     │
│         │   User Handler      │                     │
│         │   (custom logic)    │                     │
│         └─────────────────────┘                     │
│                                                     │
│  Optional Callbacks:                                │
│   - UpdateConn: TCP connection tuning               │
│   - FuncError: Error reporting                      │
│   - FuncInfo: Connection events                     │
│   - FuncInfoSrv: Server lifecycle                   │
│                                                     │
└─────────────────────────────────────────────────────┘
Data Flow
  1. Server Start: Listen() creates TCP listener (with optional TLS)
  2. Accept Loop: Continuously accepts new connections
  3. Connection Setup:
    • TLS handshake (if enabled)
    • Connection counter incremented
    • UpdateConn callback invoked
    • Connection wrapped in sCtx context
    • Handler goroutine spawned
    • Idle timeout monitoring started
  4. Handler Execution: User handler processes the connection
  5. Connection Close:
    • Connection closed
    • Context cancelled
    • Counter decremented
    • Goroutine cleaned up
Lifecycle States
┌─────────────┐
│  Created    │  IsRunning=false, IsGone=false
└──────┬──────┘
       │ Listen()
       ▼
┌─────────────┐
│  Running    │  IsRunning=true, IsGone=false
└──────┬──────┘  (Accepting connections)
       │ Shutdown()
       ▼
┌─────────────┐
│  Draining   │  IsRunning=false, IsGone=true
└──────┬──────┘  (Waiting for connections to close)
       │ All connections closed
       ▼
┌─────────────┐
│  Stopped    │  IsRunning=false, IsGone=true
└─────────────┘  (All resources released)
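The two flags driving these transitions can be modeled with atomic booleans. This is an illustrative sketch of the state machine only; the package's actual field names and transitions may differ.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// lifecycle models the IsRunning/IsGone flag transitions shown above.
type lifecycle struct {
	run  atomic.Bool // true while accepting connections
	gone atomic.Bool // true once draining has started
}

// Listen moves Created -> Running.
func (l *lifecycle) Listen() { l.run.Store(true) }

// Shutdown moves Running -> Draining.
func (l *lifecycle) Shutdown() {
	l.run.Store(false)
	l.gone.Store(true)
}

// State reports the current lifecycle state from the two flags.
func (l *lifecycle) State() string {
	switch {
	case l.run.Load():
		return "Running"
	case l.gone.Load():
		return "Draining/Stopped"
	default:
		return "Created"
	}
}

func main() {
	var l lifecycle
	fmt.Println(l.State()) // Created
	l.Listen()
	fmt.Println(l.State()) // Running
	l.Shutdown()
	fmt.Println(l.State()) // Draining/Stopped
}
```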

Performance

Throughput

Based on benchmarks with echo server on localhost:

┌───────────────┬─────────────┬──────────────┬───────────────┐
│ Configuration │ Connections │  Throughput  │ Latency (P50) │
├───────────────┼─────────────┼──────────────┼───────────────┤
│ Plain TCP     │  100        │  ~500K req/s │  <1 ms        │
│ Plain TCP     │  1000       │  ~450K req/s │  <2 ms        │
│ TLS 1.3       │  100        │  ~350K req/s │  2-3 ms       │
│ TLS 1.3       │  1000       │  ~300K req/s │  3-5 ms       │
└───────────────┴─────────────┴──────────────┴───────────────┘

Actual throughput depends on handler complexity and network conditions

Memory Usage

Per-connection memory footprint:

Goroutine stack:      ~8 KB
sCtx structure:       ~1 KB
Application buffers:  Variable (e.g., 4 KB)
────────────────────────────
Total per connection: ~10-15 KB

Memory scaling examples:

  • 100 connections: ~1-2 MB
  • 1,000 connections: ~10-15 MB
  • 10,000 connections: ~100-150 MB
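The scaling examples above are simple multiplication; a quick sketch makes the arithmetic explicit (the ~12 KB per-connection figure is the README's approximation, not a measured value):

```go
package main

import "fmt"

// estimateMemKB returns the approximate memory footprint in KB for n
// connections, given a per-connection cost in KB (goroutine stack +
// sCtx structure + application buffers).
func estimateMemKB(n, perConnKB int) int {
	return n * perConnKB
}

func main() {
	// ~12 KB per connection: midpoint of the 10-15 KB range above.
	for _, n := range []int{100, 1000, 10000} {
		kb := estimateMemKB(n, 12)
		fmt.Printf("%6d connections: ~%d MB\n", n, kb/1024)
	}
}
```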
Scalability

Recommended connection limits:

┌──────────────┬──────────────┬────────────────────────────────┐
│ Connections  │ Performance  │ Notes                          │
├──────────────┼──────────────┼────────────────────────────────┤
│ 1-1,000      │ Excellent    │ Ideal range                    │
│ 1,000-5,000  │ Good         │ Monitor memory                 │
│ 5,000-10,000 │ Fair         │ Consider profiling             │
│ 10,000+      │ Not advised  │ Event-driven model recommended │
└──────────────┴──────────────┴────────────────────────────────┘

Use Cases

1. Custom Protocol Server

Problem: Implement a proprietary binary or text protocol over TCP.

handler := func(ctx libsck.Context) {
    defer ctx.Close()
    
    // Read length-prefixed messages
    lenBuf := make([]byte, 4)
    if _, err := io.ReadFull(ctx, lenBuf); err != nil {
        return
    }
    
    msgLen := binary.BigEndian.Uint32(lenBuf)
    msg := make([]byte, msgLen)
    if _, err := io.ReadFull(ctx, msg); err != nil {
        return
    }
    
    // Process and respond
    response := processMessage(msg)
    ctx.Write(response)
}

Real-world: IoT device communication, game servers, financial data feeds.

2. Secure API Gateway

Problem: TLS-encrypted gateway for backend services.

cfg := sckcfg.Server{
    Network: libptc.NetworkTCP,
    Address: ":8443",
    TLS: sckcfg.TLS{
        Enable: true,
        Config: tlsConfig,  // Mutual TLS with client certs
    },
    ConIdleTimeout: 5 * time.Minute,
}

srv, _ := tcp.New(nil, gatewayHandler, cfg)

Real-world: Microservice mesh, secure API endpoints.

3. Connection Pooling Proxy

Problem: Maintain persistent connections to backend servers.

var backendPool sync.Pool // must be initialized with a New func that dials the backend

handler := func(ctx libsck.Context) {
    defer ctx.Close()
    
    // Get backend connection from pool; Get returns nil when the pool
    // is empty and no New func is set, so check the type assertion
    // instead of letting it panic.
    backend, ok := backendPool.Get().(net.Conn)
    if !ok {
        return
    }
    defer backendPool.Put(backend)
    
    // Bidirectional copy
    go io.Copy(backend, ctx)
    io.Copy(ctx, backend)
}

Real-world: Database proxy, load balancer, connection multiplexer.

4. Real-Time Monitoring Server

Problem: Stream real-time metrics to monitoring clients.

srv.RegisterFuncInfo(func(local, remote net.Addr, state libsck.ConnState) {
    switch state {
    case libsck.ConnectionNew:
        metricsCollector.IncCounter("connections_total")
        metricsCollector.IncGauge("connections_active")
    case libsck.ConnectionClose:
        metricsCollector.DecGauge("connections_active")
    }
})

Real-world: Telemetry collection, log aggregation.

5. WebSocket-like Protocol

Problem: Implement frame-based messaging without HTTP.

handler := func(ctx libsck.Context) {
    defer ctx.Close()
    
    for {
        // Read frame header
        header := make([]byte, 2)
        if _, err := io.ReadFull(ctx, header); err != nil {
            return
        }
        
        opcode := header[0]
        payloadLen := header[1]
        
        // Read payload
        payload := make([]byte, payloadLen)
        if _, err := io.ReadFull(ctx, payload); err != nil {
            return
        }
        
        processFrame(opcode, payload)
    }
}

Real-world: Game protocols, streaming applications.


Quick Start

Installation
go get github.com/nabbar/golib/socket/server/tcp
Basic Echo Server
package main

import (
    "context"
    "io"
    
    libptc "github.com/nabbar/golib/network/protocol"
    libsck "github.com/nabbar/golib/socket"
    sckcfg "github.com/nabbar/golib/socket/config"
    tcp "github.com/nabbar/golib/socket/server/tcp"
)

func main() {
    // Define echo handler
    handler := func(ctx libsck.Context) {
        defer ctx.Close()
        io.Copy(ctx, ctx)  // Echo
    }
    
    // Create configuration
    cfg := sckcfg.Server{
        Network: libptc.NetworkTCP,
        Address: ":8080",
    }
    
    // Create and start server
    srv, _ := tcp.New(nil, handler, cfg)
    srv.Listen(context.Background())
}
Server with TLS
import (
    libtls "github.com/nabbar/golib/certificates"
    tlscrt "github.com/nabbar/golib/certificates/certs"
    // ... other imports, including the package aliased tlsvrs,
    // which provides the TLS version constants used below
)

func main() {
    // Load TLS certificate
    cert, _ := tlscrt.LoadPair("server.key", "server.crt")
    
    tlsConfig := libtls.Config{
        Certs:      []tlscrt.Certif{cert.Model()},
        VersionMin: tlsvrs.VersionTLS12,
        VersionMax: tlsvrs.VersionTLS13,
    }
    
    // Configure server with TLS
    cfg := sckcfg.Server{
        Network: libptc.NetworkTCP,
        Address: ":8443",
        TLS: sckcfg.TLS{
            Enable: true,
            Config: tlsConfig,
        },
    }
    
    srv, _ := tcp.New(nil, handler, cfg)
    srv.Listen(context.Background())
}
Production Server
func main() {
    // Handler with error handling
    handler := func(ctx libsck.Context) {
        defer ctx.Close()
        
        buf := make([]byte, 4096)
        for ctx.IsConnected() {
            n, err := ctx.Read(buf)
            if err != nil {
                log.Printf("Read error: %v", err)
                return
            }
            
            if _, err := ctx.Write(buf[:n]); err != nil {
                log.Printf("Write error: %v", err)
                return
            }
        }
    }
    
    // Configuration with idle timeout
    cfg := sckcfg.Server{
        Network:        libptc.NetworkTCP,
        Address:        ":8080",
        ConIdleTimeout: 5 * time.Minute,
    }
    
    srv, _ := tcp.New(nil, handler, cfg)
    
    // Register monitoring callbacks
    srv.RegisterFuncError(func(errs ...error) {
        for _, err := range errs {
            log.Printf("Server error: %v", err)
        }
    })
    
    srv.RegisterFuncInfo(func(local, remote net.Addr, state libsck.ConnState) {
        log.Printf("[%s] %s -> %s", state, remote, local)
    })
    
    // Start server
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()
    
    go func() {
        if err := srv.Listen(ctx); err != nil {
            log.Fatalf("Server error: %v", err)
        }
    }()
    
    // Graceful shutdown on signal
    sigChan := make(chan os.Signal, 1)
    signal.Notify(sigChan, os.Interrupt, syscall.SIGTERM)
    
    <-sigChan
    log.Println("Shutting down...")
    
    shutdownCtx, shutdownCancel := context.WithTimeout(
        context.Background(), 30*time.Second)
    defer shutdownCancel()
    
    if err := srv.Shutdown(shutdownCtx); err != nil {
        log.Printf("Shutdown error: %v", err)
    }
    
    log.Println("Server stopped")
}

Best Practices

✅ DO

Always close connections:

handler := func(ctx libsck.Context) {
    defer ctx.Close()  // Ensures cleanup
    // Handler logic...
}

Implement graceful shutdown:

shutdownCtx, cancel := context.WithTimeout(
    context.Background(), 30*time.Second)
defer cancel()

if err := srv.Shutdown(shutdownCtx); err != nil {
    log.Printf("Shutdown timeout: %v", err)
}

Monitor connection count:

go func() {
    ticker := time.NewTicker(10 * time.Second)
    defer ticker.Stop()
    
    for range ticker.C {
        count := srv.OpenConnections()
        if count > 1000 {
            log.Printf("WARNING: High connection count: %d", count)
        }
    }
}()

Handle errors properly:

n, err := ctx.Read(buf)
if err != nil {
    if err != io.EOF {
        log.Printf("Read error: %v", err)
    }
    return  // Exit handler
}
❌ DON'T

Don't ignore graceful shutdown:

// ❌ BAD: Abrupt shutdown loses data
srv.Close()

// ✅ GOOD: Wait for connections to finish
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
srv.Shutdown(ctx)

Don't leak goroutines:

// ❌ BAD: Forgot to close connection
handler := func(ctx libsck.Context) {
    io.Copy(ctx, ctx)  // Connection never closed!
}

// ✅ GOOD: Always defer Close
handler := func(ctx libsck.Context) {
    defer ctx.Close()
    io.Copy(ctx, ctx)
}

Don't use in ultra-high concurrency:

// ❌ BAD: 100K+ connections on goroutine-per-connection
// This will consume excessive memory and goroutines

// ✅ GOOD: For >10K connections, use event-driven model
// Consider alternatives like netpoll, epoll, or io_uring
Testing

The package includes a comprehensive test suite with 79.1% code coverage and 58 test specifications using BDD methodology (Ginkgo v2 + Gomega).

Key test coverage:

  • ✅ All public APIs and lifecycle operations
  • ✅ Concurrent access with race detector (zero races detected)
  • ✅ Performance benchmarks (throughput, latency, scalability)
  • ✅ Error handling and edge cases
  • ✅ TLS handshake and encryption
  • ✅ Context integration and cancellation

For detailed test documentation, see TESTING.md.


API Reference

ServerTcp Interface
type ServerTcp interface {
    // Start accepting connections
    Listen(ctx context.Context) error
    
    // Stop accepting, wait for connections to close
    Shutdown(ctx context.Context) error
    
    // Stop accepting, close all connections immediately
    Close() error
    
    // Check if server is accepting connections
    IsRunning() bool
    
    // Check if server is draining connections
    IsGone() bool
    
    // Get current connection count
    OpenConnections() int64
    
    // Configure TLS
    SetTLS(enable bool, config libtls.TLSConfig) error
    
    // Register address
    RegisterServer(address string) error
    
    // Register callbacks
    RegisterFuncError(f libsck.FuncError)
    RegisterFuncInfo(f libsck.FuncInfo)
    RegisterFuncInfoServer(f libsck.FuncInfoSrv)
}
Configuration
type Server struct {
    Network        libptc.NetworkType  // Protocol (TCP)
    Address        string              // Listen address ":8080"
    ConIdleTimeout time.Duration       // Idle timeout (0=disabled)
    TLS            TLS                 // TLS configuration
}

type TLS struct {
    Enable bool           // Enable TLS
    Config libtls.Config  // TLS certificates and settings
}
Error Codes
var (
    ErrInvalidAddress   = "invalid listen address"
    ErrInvalidHandler   = "invalid handler"
    ErrInvalidInstance  = "invalid socket instance"
    ErrServerClosed     = "server closed"
    ErrContextClosed    = "context closed"
    ErrShutdownTimeout  = "timeout on stopping socket"
    ErrGoneTimeout      = "timeout on closing connections"
    ErrIdleTimeout      = "timeout on idle connections"
)

Contributing

Contributions are welcome! Please follow these guidelines:

Reporting Bugs

If you find a bug, please open an issue on GitHub with:

  1. Description: Clear and concise description of the bug
  2. Reproduction Steps: Minimal code example to reproduce the issue
  3. Expected Behavior: What you expected to happen
  4. Actual Behavior: What actually happened
  5. Environment: Go version, OS, and relevant system information
  6. Logs/Errors: Any error messages or stack traces

Submit issues at: https://github.com/nabbar/golib/issues

Code Contributions
  1. Code Quality

    • Follow Go best practices and idioms
    • Maintain or improve code coverage (target: >80%)
    • Pass all tests including race detector
    • Use gofmt and golint
  2. AI Usage Policy

    • AI must NEVER be used to generate package code or core functionality
    • AI assistance is limited to:
      • Testing (writing and improving tests)
      • Debugging (troubleshooting and bug resolution)
      • Documentation (comments, README, TESTING.md)
    • All AI-assisted work must be reviewed and validated by humans
  3. Testing

    • Add tests for new features
    • Use Ginkgo v2 / Gomega for test framework
    • Ensure zero race conditions with go test -race
  4. Documentation

    • Update GoDoc comments for public APIs
    • Add examples for new features
    • Update README.md and TESTING.md if needed

Improvements & Security

Current Status

The package is production-ready with no urgent improvements or security vulnerabilities identified.

Code Quality Metrics
  • 79.1% test coverage (target: >80%)
  • Zero race conditions detected with -race flag
  • Thread-safe implementation using atomic operations
  • TLS 1.2/1.3 support with secure defaults
  • Graceful shutdown with connection draining
Known Limitations

Architectural Constraints:

  1. Scalability: Goroutine-per-connection model is optimal for 1-10K connections. For >10K connections, consider event-driven alternatives (epoll, io_uring)
  2. No Protocol Framing: Applications must implement their own message framing layer
  3. No Connection Pooling: Each connection is independent - implement pooling at application level if needed
  4. No Built-in Rate Limiting: Application must implement rate limiting for connection/request throttling
  5. No Metrics Export: No built-in Prometheus or OpenTelemetry integration - use callbacks for custom metrics

Not Suitable For:

  • Ultra-high concurrency scenarios (>50K simultaneous connections)
  • Low-latency high-frequency trading (<10µs response time requirements)
  • Short-lived connections at extreme rates (>100K connections/second)
  • Protocol multiplexing scenarios (use HTTP/2, gRPC, or QUIC instead)
Future Enhancements (Non-urgent)

The following enhancements could be considered for future versions:

  1. Connection Pooling: Built-in connection pool management for backend proxies
  2. Rate Limiting: Configurable per-IP and global rate limiting
  3. Metrics Integration: Optional Prometheus/OpenTelemetry exporters
  4. Protocol Helpers: Common framing protocols (length-prefixed, delimited, chunked)
  5. Load Balancing: Built-in connection distribution strategies
  6. Circuit Breaker: Automatic failure detection and recovery

These are optional improvements and not required for production use. The current implementation is stable and performant for its intended use cases.

Security Considerations

Security Best Practices Applied:

  • TLS 1.2/1.3 with configurable cipher suites
  • Mutual TLS (mTLS) support for client authentication
  • Idle timeout to prevent resource exhaustion
  • Graceful shutdown prevents data loss
  • Context cancellation for timeouts and deadlines

No Known Vulnerabilities:

  • Regular security audits performed
  • Dependencies limited to Go stdlib and internal golib packages
  • No CVEs reported
Comparison with Alternatives
┌──────────────────┬────────────────────┬───────────────────┬──────────────────┐
│ Feature          │ tcp (this package) │ net/http          │ gRPC             │
├──────────────────┼────────────────────┼───────────────────┼──────────────────┤
│ Protocol         │ Raw TCP            │ HTTP/1.1, HTTP/2  │ HTTP/2           │
│ Framing          │ Manual             │ Built-in          │ Built-in         │
│ TLS              │ Optional           │ Optional          │ Optional         │
│ Concurrency      │ Per-connection     │ Per-request       │ Per-stream       │
│ Best For         │ Custom protocols   │ REST APIs         │ RPC services     │
│ Max Connections  │ ~10K               │ ~10K              │ ~10K per server  │
│ Learning Curve   │ Low                │ Medium            │ High             │
└──────────────────┴────────────────────┴───────────────────┴──────────────────┘

Resources

Package Documentation
  • GoDoc - Complete API reference
  • doc.go - In-depth package documentation with architecture details
  • TESTING.md - Comprehensive testing documentation
AI Transparency

In compliance with EU AI Act Article 50.4: AI assistance was used for testing, documentation, and bug resolution under human supervision. All core functionality is human-designed and validated.


License

MIT License - See LICENSE file for details.

Copyright (c) 2022 Nicolas JUHEL


Maintained by: Nicolas JUHEL
Package: github.com/nabbar/golib/socket/server/tcp
Version: See releases for versioning

Documentation

Overview

Package tcp provides a robust, production-ready TCP server implementation with support for TLS, connection management, and comprehensive monitoring capabilities.

Overview

This package implements a high-performance TCP server that supports:

  • TLS/SSL encryption with configurable cipher suites and protocols (TLS 1.2/1.3)
  • Graceful shutdown with connection draining and timeout management
  • Connection lifecycle monitoring with state callbacks
  • Context-aware operations with cancellation propagation
  • Configurable idle timeouts for inactive connections
  • Thread-safe concurrent connection handling (goroutine-per-connection)
  • Connection counting and tracking
  • Customizable connection configuration via UpdateConn callback

Architecture

## Component Diagram

The server follows a layered architecture with clear separation of concerns:

┌─────────────────────────────────────────────────────┐
│                    TCP Server                       │
├─────────────────────────────────────────────────────┤
│                                                     │
│  ┌──────────────┐       ┌───────────────────┐       │
│  │   Listener   │       │  Context Manager  │       │
│  │  (Listen)    │       │  (ctx tracking)   │       │
│  └──────┬───────┘       └─────────┬─────────┘       │
│         │                         │                 │
│         ▼                         ▼                 │
│  ┌──────────────────────────────────────────┐       │
│  │          Connection Acceptor             │       │
│  │   (Accept loop + TLS handshake)          │       │
│  └──────────────┬───────────────────────────┘       │
│                 │                                   │
│                 ▼                                   │
│         Per-Connection Goroutine                    │
│         ┌─────────────────────┐                     │
│         │  Connection Context │                     │
│         │   - sCtx (I/O wrap) │                     │
│         │   - Idle timeout    │                     │
│         │   - State tracking  │                     │
│         └──────────┬──────────┘                     │
│                    │                                │
│                    ▼                                │
│         ┌─────────────────────┐                     │
│         │   User Handler Func │                     │
│         │   (custom logic)    │                     │
│         └─────────────────────┘                     │
│                                                     │
│  Optional Callbacks:                                │
│   - UpdateConn: Connection configuration            │
│   - FuncError: Error reporting                      │
│   - FuncInfo: Connection state changes              │
│   - FuncInfoSrv: Server lifecycle events            │
│                                                     │
└─────────────────────────────────────────────────────┘

## Data Flow

  1. Server.Listen() starts the accept loop
  2. For each new connection:
     a. net.Listener.Accept() receives the connection
     b. Optional TLS handshake (if configured)
     c. Connection counter incremented atomically
     d. UpdateConn callback invoked (if registered)
     e. Connection wrapped in sCtx (context + I/O)
     f. Handler goroutine spawned
     g. Idle timeout monitoring started (if configured)
  3. Handler processes the connection
  4. On close:
     a. Connection closed
     b. Context cancelled
     c. Counter decremented
     d. Goroutine terminates

## Lifecycle States

The server maintains two atomic state flags:

  • IsRunning: Server is accepting new connections
    • false → true: Listen() called successfully
    • true → false: Shutdown/Close initiated

  • IsGone: Server is draining existing connections
    • false → true: Shutdown() called
    • Used to signal accept loop to stop

## Thread Safety Model

Synchronization primitives used:

  • atomic.Bool: run, gon (server state)
  • atomic.Int64: nc (connection counter)
  • libatm.Value: ssl, fe, fi, fs, ad (atomic storage)
  • No mutexes: All state changes are lock-free

Concurrency guarantees:

  • All exported methods are safe for concurrent use
  • Connection handlers run in isolated goroutines
  • No shared mutable state between connections
  • Atomic counters prevent race conditions
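The lock-free counter technique can be illustrated with sync/atomic. This is a sketch of the approach, not the package's internal field names:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// connCounter tracks open connections without a mutex, the way the
// server backs OpenConnections().
type connCounter struct{ n atomic.Int64 }

func (c *connCounter) onOpen()  { c.n.Add(1) }  // on accept
func (c *connCounter) onClose() { c.n.Add(-1) } // on connection close

// Open returns the current count; safe for concurrent use.
func (c *connCounter) Open() int64 { return c.n.Load() }

func main() {
	var c connCounter
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ { // 100 concurrent open/close pairs
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.onOpen()
			c.onClose()
		}()
	}
	wg.Wait()
	fmt.Println(c.Open()) // 0
}
```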

Features

## Security

  • TLS 1.2/1.3 support with configurable cipher suites and curves
  • Mutual TLS (mTLS) support for client authentication
  • Secure defaults for TLS configuration (minimum TLS 1.2)
  • Certificate validation and chain verification
  • Integration with github.com/nabbar/golib/certificates for TLS management

## Reliability

  • Graceful shutdown with configurable timeouts
  • Connection draining during shutdown (wait for active connections)
  • Automatic reclamation of resources (goroutines, memory, file descriptors)
  • Idle connection timeout with automatic cleanup
  • Context-aware operations with deadline and cancellation support
  • Error recovery and propagation

## Monitoring & Observability

  • Connection state change callbacks (new, read, write, close)
  • Error reporting through callback functions
  • Server lifecycle notifications
  • Real-time connection counting (OpenConnections)
  • Server state queries (IsRunning, IsGone)

## Performance

  • Goroutine-per-connection model (suitable for 100s-1000s of connections)
  • Lock-free atomic operations for state management
  • Zero-copy I/O where possible
  • Minimal memory overhead per connection (~10KB)
  • Efficient connection tracking without locks

Usage Example

Basic echo server:

import (
	"context"
	"io"
	"github.com/nabbar/golib/socket"
	tcp "github.com/nabbar/golib/socket/server/tcp"
)

func main() {
	handler := func(r socket.Reader, w socket.Writer) {
		defer r.Close()
		defer w.Close()
		io.Copy(w, r) // Echo back received data
	}

	srv, err := tcp.New(nil, handler, socket.DefaultServerConfig(":8080"))
	if err != nil {
		panic(err)
	}

	// Start the server
	if err := srv.Listen(context.Background()); err != nil {
		panic(err)
	}
}

Concurrency Model

## Goroutine-Per-Connection

The server uses a goroutine-per-connection model, where each accepted connection spawns a dedicated goroutine to handle it. This model is well-suited for:

  • Low to medium concurrent connections (100s to low 1000s)
  • Long-lived connections (WebSockets, persistent HTTP, SSH-like protocols)
  • Applications requiring per-connection state and context
  • Connections with varying processing times
  • Connections requiring blocking I/O operations

## Scalability Characteristics

Typical performance profile:

	┌─────────────────┬──────────────┬────────────────┬──────────────┐
	│  Connections    │  Goroutines  │  Memory Usage  │  Throughput  │
	├─────────────────┼──────────────┼────────────────┼──────────────┤
	│  10             │  ~12         │  ~100 KB       │  Excellent   │
	│  100            │  ~102        │  ~1 MB         │  Excellent   │
	│  1,000          │  ~1,002      │  ~10 MB        │  Good        │
	│  10,000         │  ~10,002     │  ~100 MB       │  Fair*       │
	│  100,000+       │  ~100,002+   │  ~1 GB+        │  Not advised │
	└─────────────────┴──────────────┴────────────────┴──────────────┘

  * At 10K+ connections, consider profiling and potentially switching to
    an event-driven model or worker pool architecture.

## Memory Overhead

Per-connection memory allocation:

Base overhead:           ~8 KB  (goroutine stack)
Connection context:      ~1 KB  (sCtx structure)
Buffers (handler):       Variable (depends on implementation)
─────────────────────────────────
Total minimum:           ~10 KB per connection

Example calculation for 1000 connections:

1000 connections × 10 KB = ~10 MB base
+ application buffers (e.g., 4KB read buffer × 1000 = 4 MB)
= ~14 MB total for connections

## Alternative Patterns for High Concurrency

For scenarios with >10,000 concurrent connections, consider:

  1. Worker Pool Pattern: Fixed number of worker goroutines processing connections from a queue. Trades connection isolation for better resource control.

  2. Event-Driven Model: Single-threaded or few-threaded event loop (epoll/kqueue). Requires careful state machine design but scales to millions.

  3. Connection Multiplexing: Use protocols that support multiplexing (HTTP/2, gRPC, QUIC). Reduces OS-level connection overhead.

  4. Rate Limiting: Limit concurrent connections with a semaphore or connection pool. Prevents resource exhaustion under load spikes.

This package is optimized for the common case of hundreds to low thousands of connections with good developer ergonomics and code simplicity.
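The worker-pool alternative (pattern 1 above) can be sketched with a buffered channel as the queue. This is illustrative only, using generic integer jobs rather than net.Conn values, to show how a fixed worker count bounds goroutine growth:

```go
package main

import (
	"fmt"
	"sync"
)

// runPool processes jobs with a fixed number of workers instead of one
// goroutine per job, trading connection isolation for bounded resources.
func runPool(workers int, jobs []int, handle func(int) int) []int {
	in := make(chan int)
	out := make(chan int, len(jobs)) // buffered so workers never block on send
	var wg sync.WaitGroup

	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range in { // each worker drains the shared queue
				out <- handle(j)
			}
		}()
	}

	for _, j := range jobs {
		in <- j
	}
	close(in)  // no more jobs: workers exit their range loops
	wg.Wait()  // all workers done
	close(out) // safe to drain results

	var results []int
	for r := range out {
		results = append(results, r)
	}
	return results
}

func main() {
	results := runPool(4, []int{1, 2, 3, 4, 5}, func(n int) int { return n * n })
	fmt.Println(len(results)) // 5
}
```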

Performance Considerations

## Throughput

The server's throughput is primarily limited by:

  1. Handler function complexity (CPU-bound operations)
  2. Network bandwidth and latency
  3. TLS overhead (if enabled): ~10-30% CPU cost for encryption
  4. System limits: File descriptors, port exhaustion, kernel tuning

Typical throughput (echo handler on localhost):

  • Without TLS: ~500K requests/sec (small payloads)
  • With TLS: ~350K requests/sec (small payloads)
  • Network I/O: Limited by bandwidth, not server

## Latency

Expected latency profile:

┌──────────────────────┬─────────────────┐
│  Operation           │  Typical Time   │
├──────────────────────┼─────────────────┤
│  Connection accept   │  <1 ms          │
│  TLS handshake       │  1-5 ms         │
│  Handler spawn       │  <100 µs        │
│  Context creation    │  <10 µs         │
│  Read/Write syscall  │  <100 µs        │
│  Graceful shutdown   │  100 ms - 1 s   │
└──────────────────────┴─────────────────┘

## Resource Limits

System-level limits to consider:

  1. File Descriptors:
     - Each connection uses 1 file descriptor
     - Check: ulimit -n (default often 1024 on Linux)
     - Increase: ulimit -n 65536 or via /etc/security/limits.conf

  2. Ephemeral Ports (client-side):
     - Default range: ~28,000 ports (varies by OS)
     - Tune: sysctl net.ipv4.ip_local_port_range

  3. TCP Buffer Memory:
     - Per-connection send/receive buffers
     - Default: 87380 bytes (varies)
     - Tune: sysctl net.ipv4.tcp_rmem and tcp_wmem

  4. Connection Tracking (firewall):
     - Conntrack table size limits active connections
     - Check: sysctl net.netfilter.nf_conntrack_max
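The file-descriptor ceiling can be checked from Go at startup. This is a Unix-only sketch using the stdlib syscall package; `fdLimit` is a hypothetical helper:

```go
package main

import (
	"fmt"
	"syscall"
)

// fdLimit returns the soft limit on open file descriptors, which caps
// how many concurrent connections one process can hold.
func fdLimit() (uint64, error) {
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		return 0, err
	}
	return rl.Cur, nil
}

func main() {
	limit, err := fdLimit()
	if err != nil {
		panic(err)
	}
	fmt.Printf("soft fd limit: %d\n", limit)
	if limit < 65536 {
		fmt.Println("consider raising ulimit -n for high connection counts")
	}
}
```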

Limitations

## Known Limitations

  1. No built-in rate limiting or connection throttling
  2. No support for connection pooling or multiplexing
  3. Goroutine-per-connection model limits scalability >10K connections
  4. No built-in protocol framing (implement in handler)
  5. TLS session resumption not explicitly managed
  6. No built-in metrics export (Prometheus, etc.)

## Not Suitable For

  • Ultra-high concurrency scenarios (>50K simultaneous connections)
  • Low-latency HFT applications (<10µs response time)
  • Systems requiring protocol multiplexing (use HTTP/2 or gRPC)
  • Short-lived connections at very high rates (>100K conn/sec)

## Comparison with Alternatives

┌──────────────────┬────────────────┬──────────────────┬──────────────┐
│  Feature         │  This Package  │  net/http        │  gRPC        │
├──────────────────┼────────────────┼──────────────────┼──────────────┤
│  Protocol        │  Raw TCP       │  HTTP/1.1, HTTP/2│  HTTP/2      │
│  Framing         │  Manual        │  Built-in        │  Built-in    │
│  TLS             │  Optional      │  Optional        │  Optional    │
│  Concurrency     │  Per-conn      │  Per-request     │  Per-stream  │
│  Complexity      │  Low           │  Medium          │  High        │
│  Best For        │  Custom proto  │  REST APIs       │  RPC         │
│  Max Connections │  ~1-10K        │  ~10K+           │  ~10K+       │
└──────────────────┴────────────────┴──────────────────┴──────────────┘

Best Practices

## Error Handling

  1. Always register error callbacks:

    srv.RegisterFuncError(func(errs ...error) {
      for _, err := range errs {
        log.Printf("Server error: %v", err)
      }
    })

  2. Handle all errors in your handler:

    handler := func(ctx libsck.Context) {
      defer ctx.Close() // Always close

      buf := make([]byte, 4096)
      n, err := ctx.Read(buf)
      if err != nil {
        if err != io.EOF {
          log.Printf("Read error: %v", err)
        }
        return
      }
      // Process buf[:n]...
    }

## Resource Management

  1. Always use defer for cleanup:

    defer srv.Close() // Server
    defer ctx.Close() // Connection (in handler)

  2. Implement graceful shutdown:

    sigChan := make(chan os.Signal, 1)
    signal.Notify(sigChan, os.Interrupt, syscall.SIGTERM)

    <-sigChan
    log.Println("Shutting down...")

    shutdownCtx, cancel := context.WithTimeout(
      context.Background(), 30*time.Second)
    defer cancel()

    if err := srv.Shutdown(shutdownCtx); err != nil {
      log.Printf("Shutdown error: %v", err)
    }

  3. Monitor connection count:

    go func() {
      ticker := time.NewTicker(10 * time.Second)
      defer ticker.Stop()

      for range ticker.C {
        count := srv.OpenConnections()
        if count > warnThreshold {
          log.Printf("WARNING: High connection count: %d", count)
        }
      }
    }()

## Security

  1. Always use TLS in production:

    cfg.TLS.Enable = true
    cfg.TLS.Config = tlsConfig // From certificates package

  2. Configure idle timeouts to prevent resource exhaustion:

    cfg.ConIdleTimeout = 5 * time.Minute

  3. Validate input in handlers (prevent injection, DoS, etc.)

  4. Consider implementing rate limiting at the application level

## Testing

  1. Test with concurrent connections:

    for i := 0; i < numClients; i++ {
      go func() {
        conn, _ := net.Dial("tcp", serverAddr)
        defer conn.Close()
        // Test logic...
      }()
    }

  2. Test graceful shutdown under load

  3. Test with slow/misbehaving clients

  4. Run with race detector: go test -race

Related Packages

  • github.com/nabbar/golib/socket: Base interfaces and types
  • github.com/nabbar/golib/socket/config: Server configuration
  • github.com/nabbar/golib/certificates: TLS certificate management
  • github.com/nabbar/golib/network/protocol: Protocol constants

See the example_test.go file for runnable examples covering common use cases.

Package tcp provides a TCP server implementation with support for TLS, connection management, and various callback hooks for monitoring and error handling.

This package implements the github.com/nabbar/golib/socket.Server interface and provides a robust TCP server with features including:

  • TLS/SSL support with certificate management
  • Graceful shutdown with connection draining
  • Connection lifecycle callbacks (new, read, write, close)
  • Error and informational logging callbacks
  • Atomic connection counting and state management
  • Context-aware operations

See github.com/nabbar/golib/socket for the Server interface definition.

Example

Example demonstrates a basic echo server. This is the simplest possible TCP server implementation.

package main

import (
	"context"
	"time"

	libptc "github.com/nabbar/golib/network/protocol"

	libsck "github.com/nabbar/golib/socket"

	sckcfg "github.com/nabbar/golib/socket/config"

	scksrt "github.com/nabbar/golib/socket/server/tcp"
)

func main() {
	// Create handler function that echoes back received data
	handler := func(c libsck.Context) {
		defer func() { _ = c.Close() }()
		buf := make([]byte, 1024)
		for {
			n, err := c.Read(buf)
			if err != nil {
				return
			}
			if n > 0 {
				_, _ = c.Write(buf[:n])
			}
		}
	}

	// Create server configuration
	cfg := sckcfg.Server{
		Network: libptc.NetworkTCP,
		Address: ":8080",
	}

	// Create server
	srv, err := scksrt.New(nil, handler, cfg)
	if err != nil {
		panic(err)
	}

	// Start server
	ctx := context.Background()
	go func() {
		_ = srv.Listen(ctx)
	}()

	// Wait for server to start
	time.Sleep(100 * time.Millisecond)

	// Shutdown after demonstration
	_ = srv.Shutdown(ctx)
}
Example (Complete)

Example_complete demonstrates a production-ready server with all features. This example shows error handling, monitoring, graceful shutdown, and logging.

package main

import (
	"context"
	"fmt"
	"net"
	"time"

	libdur "github.com/nabbar/golib/duration"
	libptc "github.com/nabbar/golib/network/protocol"

	libsck "github.com/nabbar/golib/socket"

	sckcfg "github.com/nabbar/golib/socket/config"

	scksrt "github.com/nabbar/golib/socket/server/tcp"
)

func main() {
	// Handler with proper error handling
	handler := func(c libsck.Context) {
		defer func() { _ = c.Close() }()

		buf := make([]byte, 4096)
		for c.IsConnected() {
			n, err := c.Read(buf)
			if err != nil {
				return
			}

			if n > 0 {
				if _, err := c.Write(buf[:n]); err != nil {
					return
				}
			}
		}
	}

	// Create configuration with idle timeout
	cfg := sckcfg.Server{
		Network:        libptc.NetworkTCP,
		Address:        ":8081",
		ConIdleTimeout: libdur.Minutes(5),
	}

	// Create server
	srv, err := scksrt.New(nil, handler, cfg)
	if err != nil {
		fmt.Printf("Failed to create server: %v\n", err)
		return
	}

	// Register monitoring callbacks
	srv.RegisterFuncError(func(errs ...error) {
		for _, e := range errs {
			fmt.Printf("Server error: %v\n", e)
		}
	})

	srv.RegisterFuncInfo(func(local, remote net.Addr, state libsck.ConnState) {
		fmt.Printf("Connection %s from %s\n", state, remote)
	})

	// Start server
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	go func() {
		if err := srv.Listen(ctx); err != nil {
			fmt.Printf("Server stopped: %v\n", err)
		}
	}()

	// Wait for server to be ready
	time.Sleep(50 * time.Millisecond)
	fmt.Printf("Server running with %d connections\n", srv.OpenConnections())

	// Graceful shutdown
	shutdownCtx, shutdownCancel := context.WithTimeout(
		context.Background(), 10*time.Second)
	defer shutdownCancel()

	if err := srv.Shutdown(shutdownCtx); err != nil {
		fmt.Printf("Shutdown error: %v\n", err)
	}

	fmt.Println("Server stopped gracefully")
}
Output:

Server running with 0 connections
Server stopped gracefully

Index

Examples

Constants

This section is empty.

Variables

View Source
var (
	// ErrInvalidAddress is returned when the server address is empty or malformed.
	// The address must be in the format "host:port" or ":port" for all interfaces.
	//
	// Example of valid addresses:
	//   - "localhost:8080" - Listen on localhost port 8080
	//   - ":8080" - Listen on all interfaces port 8080
	//   - "0.0.0.0:8080" - Explicitly listen on all IPv4 interfaces
	ErrInvalidAddress = fmt.Errorf("invalid listen address")

	// ErrInvalidHandler is returned when attempting to start a server without a valid handler function.
	// A handler must be provided via the New() constructor and must not be nil.
	//
	// Example of valid usage:
	//   handler := func(c socket.Context) { /* ... */ }
	//   srv, err := tcp.New(nil, handler, config)
	ErrInvalidHandler = fmt.Errorf("invalid handler")

	// ErrShutdownTimeout is returned when the server shutdown exceeds the context timeout.
	// This typically happens when StopListen() takes longer than expected to complete.
	// The server will attempt to close all active connections before returning this error.
	//
	// To handle this error, you may want to implement a fallback strategy or log the event.
	ErrShutdownTimeout = fmt.Errorf("timeout on stopping socket")

	// ErrInvalidInstance is returned when operating on a nil server instance.
	// This typically occurs if the server was not properly initialized or has been set to nil.
	// Always check for this error when working with server instances that might be nil.
	ErrInvalidInstance = fmt.Errorf("invalid socket instance")
)

Functions

This section is empty.

Types

type ServerTcp

type ServerTcp interface {
	libsck.Server

	// RegisterServer sets the TCP address for the server to listen on.
	// The address must be in the format "host:port" or ":port" to bind to all interfaces.
	//
	// Example addresses:
	//   - "127.0.0.1:8080" - Listen on localhost port 8080
	//   - ":8080" - Listen on all interfaces port 8080
	//   - "0.0.0.0:8080" - Explicitly listen on all IPv4 interfaces
	//
	// This method must be called before Listen(). Returns ErrInvalidAddress
	// if the address is empty or malformed.
	RegisterServer(address string) error
}

ServerTcp defines the interface for a TCP server implementation. It extends the base github.com/nabbar/golib/socket.Server interface with TCP-specific functionality.

Features

## Connection Handling

  • Concurrent connection handling with goroutine per connection
  • Configurable idle timeout for connections
  • Graceful connection draining during shutdown
  • Connection state tracking and monitoring

## Security

  • TLS/SSL encryption with configurable cipher suites
  • Support for mutual TLS (mTLS) with client certificate verification
  • Secure defaults for TLS configuration

## Monitoring & Observability

  • Connection lifecycle callbacks (new, read, write, close)
  • Error reporting through configurable callbacks
  • Server status notifications
  • Atomic counters for active connections

## Thread Safety

  • All exported methods are safe for concurrent use
  • Atomic operations for state management
  • No shared state between connections

Lifecycle

A typical server lifecycle follows these steps:

  1. Create: server := tcp.New(updateFunc, handler, config)
  2. Configure: server.RegisterServer(":8080")
  3. Start: server.Listen(ctx)
  4. (Run until shutdown signal)
  5. Shutdown: server.Shutdown(ctx)

Error Handling

The server provides multiple ways to handle errors:

  • Return values from methods (e.g., Listen(), Shutdown())
  • Error callback function (RegisterFuncError)
  • Context cancellation for timeouts

See github.com/nabbar/golib/socket.Server for inherited methods:

  • Listen(context.Context) error - Start accepting connections
  • Shutdown(context.Context) error - Graceful shutdown
  • Close() error - Immediate shutdown
  • IsRunning() bool - Check if server is accepting connections
  • IsGone() bool - Check if all connections are closed
  • OpenConnections() int64 - Get current connection count
  • Done() <-chan struct{} - Channel closed when server stops listening
  • SetTLS(bool, TLSConfig) error - Configure TLS
  • RegisterFuncError(FuncError) - Register error callback
  • RegisterFuncInfo(FuncInfo) - Register connection info callback
  • RegisterFuncInfoServer(FuncInfoSrv) - Register server info callback
Example (ContextValues)

ExampleServerTcp_contextValues demonstrates using context values

package main

import (
	"context"
	"fmt"

	libptc "github.com/nabbar/golib/network/protocol"

	libsck "github.com/nabbar/golib/socket"

	sckcfg "github.com/nabbar/golib/socket/config"

	scksrt "github.com/nabbar/golib/socket/server/tcp"
)

func main() {
	type contextKey string
	const userIDKey contextKey = "userID"

	handler := func(c libsck.Context) {
		defer func() { _ = c.Close() }()

		// Access context value
		if userID := c.Value(userIDKey); userID != nil {
			fmt.Printf("Processing request for user: %v\n", userID)
		}
	}

	cfg := sckcfg.Server{
		Network: libptc.NetworkTCP,
		Address: ":9011",
	}

	srv, _ := scksrt.New(nil, handler, cfg)
	fmt.Println("Server with context values ready")
	_ = srv.Shutdown(context.Background())
}
Output:

Server with context values ready
Example (IdleTimeout)

ExampleServerTcp_idleTimeout demonstrates idle connection timeout

package main

import (
	"context"
	"fmt"
	"time"

	libdur "github.com/nabbar/golib/duration"
	libptc "github.com/nabbar/golib/network/protocol"

	libsck "github.com/nabbar/golib/socket"

	sckcfg "github.com/nabbar/golib/socket/config"

	scksrt "github.com/nabbar/golib/socket/server/tcp"
)

func main() {
	handler := func(c libsck.Context) {
		defer func() { _ = c.Close() }()
		// Handler that doesn't read/write (connection will idle)
		time.Sleep(200 * time.Millisecond)
	}

	cfg := sckcfg.Server{
		Network:        libptc.NetworkTCP,
		Address:        ":9007",
		ConIdleTimeout: libdur.ParseDuration(100 * time.Millisecond),
	}
	srv, _ := scksrt.New(nil, handler, cfg)

	ctx := context.Background()
	go func() {
		_ = srv.Listen(ctx)
	}()

	time.Sleep(50 * time.Millisecond)
	fmt.Println("Server with idle timeout running")

	_ = srv.Shutdown(ctx)
}
Output:

Server with idle timeout running
Example (Monitoring)

ExampleServerTcp_monitoring demonstrates complete monitoring setup

package main

import (
	"context"
	"fmt"
	"net"

	libptc "github.com/nabbar/golib/network/protocol"

	libsck "github.com/nabbar/golib/socket"

	sckcfg "github.com/nabbar/golib/socket/config"

	scksrt "github.com/nabbar/golib/socket/server/tcp"
)

func main() {
	handler := func(c libsck.Context) {
		defer func() { _ = c.Close() }()
	}

	cfg := sckcfg.Server{
		Network: libptc.NetworkTCP,
		Address: ":9009",
	}
	srv, _ := scksrt.New(nil, handler, cfg)

	// Register all callbacks
	srv.RegisterFuncError(func(errs ...error) {
		fmt.Println("Error callback registered")
	})

	srv.RegisterFuncInfo(func(local, remote net.Addr, state libsck.ConnState) {
		fmt.Println("Connection callback registered")
	})

	srv.RegisterFuncInfoServer(func(msg string) {
		fmt.Println("Server info callback registered")
	})

	fmt.Println("All monitoring callbacks configured")
	_ = srv.Shutdown(context.Background())
}
Output:

All monitoring callbacks configured

func New

New creates and initializes a new TCP server instance with the provided configuration.

Parameters

  • upd: Optional UpdateConn callback that's invoked when a new connection is accepted, before the handler is called. Use this to configure connection-specific settings:

      - TCP keepalive
      - Read/write timeouts
      - Buffer sizes
      - Other TCP options

    Example:

      upd := func(conn net.Conn) {
        if tcpConn, ok := conn.(*net.TCPConn); ok {
          _ = tcpConn.SetKeepAlive(true)
        }
      }

  • hdl: Required HandlerFunc that processes each client connection. This function runs in its own goroutine per connection. The handler receives a socket.Context for reading from and writing to the client.

    Example echo server handler:

      hdl := func(c socket.Context) {
        defer func() { _ = c.Close() }()
        if _, err := io.Copy(c, c); err != nil {
          log.Printf("Error in handler: %v", err)
        }
      }

  • cfg: Server configuration including address, timeouts, and TLS settings. Use socket.DefaultServerConfig() for default values.

Return Value

Returns a new ServerTcp instance that implements the Server interface. The server is not started until Listen() is called.

Example Usage

Basic echo server:

func main() {
  // Create a simple echo handler
  // Create a simple echo handler
  handler := func(c socket.Context) {
    defer func() { _ = c.Close() }()
    io.Copy(c, c) // Echo back received data
  }

  // Create server with default config
  cfg := socket.DefaultServerConfig(":8080")
  srv, err := tcp.New(nil, handler, cfg)
  if err != nil {
    log.Fatalf("Failed to create server: %v", err)
  }

  // Start the server
  ctx := context.Background()
  if err := srv.Listen(ctx); err != nil {
    log.Fatalf("Server error: %v", err)
  }
}

Error Handling

The following errors may be returned:

  • ErrInvalidHandler: if hdl is nil
  • Any error returned by the configuration validation

Concurrency

The returned server instance is safe for concurrent use by multiple goroutines. The handler function (hdl) may be called concurrently for different connections.

Memory Management

The server manages the lifecycle of connections and associated resources. Ensure that all resources are properly closed by calling Shutdown() or Close() when the server is no longer needed.

See also:

  • github.com/nabbar/golib/socket.HandlerFunc
  • github.com/nabbar/golib/socket.UpdateConn
  • github.com/nabbar/golib/socket/config.ServerConfig
Example

ExampleNew demonstrates creating a TCP server

package main

import (
	"fmt"
	"io"

	libptc "github.com/nabbar/golib/network/protocol"

	libsck "github.com/nabbar/golib/socket"

	sckcfg "github.com/nabbar/golib/socket/config"

	scksrt "github.com/nabbar/golib/socket/server/tcp"
)

func main() {
	// Define connection handler
	handler := func(c libsck.Context) {
		defer func() { _ = c.Close() }()
		_, _ = io.Copy(c, c) // Echo
	}

	// Create configuration
	cfg := sckcfg.Server{
		Network: libptc.NetworkTCP,
		Address: ":9000",
	}

	// Create server
	srv, err := scksrt.New(nil, handler, cfg)
	if err != nil {
		fmt.Printf("Failed to create server: %v\n", err)
		return
	}

	fmt.Printf("Server created successfully\n")
	_ = srv
}
Output:

Server created successfully
Example (SimpleProtocol)

ExampleNew_simpleProtocol demonstrates a simple line-based protocol

package main

import (
	"context"
	"fmt"

	libptc "github.com/nabbar/golib/network/protocol"

	libsck "github.com/nabbar/golib/socket"

	sckcfg "github.com/nabbar/golib/socket/config"

	scksrt "github.com/nabbar/golib/socket/server/tcp"
)

func main() {
	// Handler for a simple line-based protocol (newline-delimited)
	handler := func(c libsck.Context) {
		defer func() { _ = c.Close() }()

		buf := make([]byte, 1024)
		for {
			n, err := c.Read(buf)
			if err != nil {
				return
			}

			// Echo each line back
			if n > 0 {
				_, _ = c.Write(buf[:n])
			}
		}
	}

	cfg := sckcfg.Server{
		Network: libptc.NetworkTCP,
		Address: ":9010",
	}

	srv, err := scksrt.New(nil, handler, cfg)
	if err != nil {
		fmt.Printf("Error: %v\n", err)
		return
	}

	fmt.Println("Line-based protocol server created")
	_ = srv.Shutdown(context.Background())
}
Output:

Line-based protocol server created
Example (WithUpdateConn)

ExampleNew_withUpdateConn demonstrates custom connection configuration

package main

import (
	"context"
	"fmt"
	"net"
	"time"

	libptc "github.com/nabbar/golib/network/protocol"

	libsck "github.com/nabbar/golib/socket"

	sckcfg "github.com/nabbar/golib/socket/config"

	scksrt "github.com/nabbar/golib/socket/server/tcp"
)

func main() {
	// UpdateConn callback to configure TCP keepalive
	upd := func(c net.Conn) {
		if tcpConn, ok := c.(*net.TCPConn); ok {
			_ = tcpConn.SetKeepAlive(true)
			_ = tcpConn.SetKeepAlivePeriod(30 * time.Second)
		}
	}

	handler := func(c libsck.Context) {
		defer func() { _ = c.Close() }()
	}

	cfg := sckcfg.Server{
		Network: libptc.NetworkTCP,
		Address: ":9008",
	}

	srv, err := scksrt.New(upd, handler, cfg)
	if err != nil {
		fmt.Printf("Error: %v\n", err)
		return
	}

	fmt.Println("Server with custom connection config created")
	_ = srv.Shutdown(context.Background())
}
Output:

Server with custom connection config created
