tcp

package v1.22.0
Published: Apr 17, 2026 License: MIT Imports: 18 Imported by: 0

README

TCP Server Package


Production-ready TCP server implementation with TLS support, graceful shutdown, connection lifecycle management, and comprehensive monitoring capabilities.


Overview

The tcp package provides a high-performance, production-ready TCP server with first-class support for TLS encryption, graceful shutdown, and connection lifecycle monitoring. It implements a goroutine-per-connection model optimized for hundreds to thousands of concurrent connections.

Design Philosophy
  1. Simplicity First: Minimal API surface with sensible defaults
  2. Production Ready: Built-in monitoring, error handling, and graceful shutdown
  3. Security by Default: TLS 1.2/1.3 support with secure configuration
  4. Observable: Real-time connection tracking and lifecycle callbacks
  5. Context-Aware: Full integration with Go's context for cancellation and timeouts
Key Features
  • TCP Server: Pure TCP with optional TLS/SSL encryption
  • TLS Support: TLS 1.2/1.3 with configurable cipher suites and mutual TLS
  • Graceful Shutdown: Connection draining with configurable timeouts
  • Connection Tracking: Real-time connection counting and monitoring
  • Idle Timeout: Automatic cleanup of inactive connections
  • Lifecycle Callbacks: Hook into connection events (new, read, write, close)
  • Thread-Safe: Lock-free atomic operations for state management
  • Context Integration: Full context support for cancellation and deadlines
  • Zero Dependencies: Only standard library + golib packages

Architecture

Component Diagram
┌─────────────────────────────────────────────────────┐
│                    TCP Server                       │
├─────────────────────────────────────────────────────┤
│                                                     │
│  ┌──────────────┐       ┌───────────────────┐       │
│  │   Listener   │       │  Context Manager  │       │
│  │  (net.TCP)   │       │  (cancellation)   │       │
│  └──────┬───────┘       └─────────┬─────────┘       │
│         │                         │                 │
│         ▼                         ▼                 │
│  ┌──────────────────────────────────────────┐       │
│  │       Connection Accept Loop             │       │
│  │     (with optional TLS handshake)        │       │
│  └──────────────┬───────────────────────────┘       │
│                 │                                   │
│                 ▼                                   │
│         Per-Connection Goroutine                    │
│         ┌─────────────────────┐                     │
│         │  sCtx (I/O wrapper) │                     │
│         │   - Read/Write      │                     │
│         │   - Idle timeout    │                     │
│         │   - State tracking  │                     │
│         └──────────┬──────────┘                     │
│                    │                                │
│                    ▼                                │
│         ┌─────────────────────┐                     │
│         │   User Handler      │                     │
│         │   (custom logic)    │                     │
│         └─────────────────────┘                     │
│                                                     │
│  Optional Callbacks:                                │
│   - UpdateConn: TCP connection tuning               │
│   - FuncError: Error reporting                      │
│   - FuncInfo: Connection events                     │
│   - FuncInfoSrv: Server lifecycle                   │
│                                                     │
└─────────────────────────────────────────────────────┘
Data Flow
  1. Server Start: Listen() creates TCP listener (with optional TLS)
  2. Accept Loop: Continuously accepts new connections
  3. Connection Setup:
    • TLS handshake (if enabled)
    • Connection counter incremented
    • UpdateConn callback invoked
    • Connection wrapped in sCtx context
    • Handler goroutine spawned
    • Idle timeout monitoring started
  4. Handler Execution: User handler processes the connection
  5. Connection Close:
    • Connection closed
    • Context cancelled
    • Counter decremented
    • Goroutine cleaned up
Lifecycle States
┌─────────────┐
│  Created    │  IsRunning=false, IsGone=false
└──────┬──────┘
       │ Listen()
       ▼
┌─────────────┐
│  Running    │  IsRunning=true, IsGone=false
└──────┬──────┘  (Accepting connections)
       │ Shutdown()
       ▼
┌─────────────┐
│  Draining   │  IsRunning=false, IsGone=true
└──────┬──────┘  (Waiting for connections to close)
       │ All connections closed
       ▼
┌─────────────┐
│  Stopped    │  IsRunning=false, IsGone=true
└─────────────┘  (All resources released)

Performance

Throughput

Based on benchmarks with echo server on localhost:

Configuration   Connections   Throughput    Latency (P50)
Plain TCP       100           ~500K req/s   <1 ms
Plain TCP       1000          ~450K req/s   <2 ms
TLS 1.3         100           ~350K req/s   2-3 ms
TLS 1.3         1000          ~300K req/s   3-5 ms

Note: actual throughput depends on handler complexity and network conditions.

Memory Usage

Per-connection memory footprint:

Goroutine stack:      ~8 KB
sCtx structure:       ~1 KB
Application buffers:  Variable (e.g., 4 KB)
────────────────────────────
Total per connection: ~10-15 KB

Memory scaling examples:

  • 100 connections: ~1-2 MB
  • 1,000 connections: ~10-15 MB
  • 10,000 connections: ~100-150 MB
Scalability

Recommended connection limits:

Connections    Performance   Notes
1-1,000        Excellent     Ideal range
1,000-5,000    Good          Monitor memory
5,000-10,000   Fair          Consider profiling
10,000+        Not advised   Event-driven model recommended

Use Cases

1. Custom Protocol Server

Problem: Implement a proprietary binary or text protocol over TCP.

handler := func(ctx libsck.Context) {
    defer ctx.Close()
    
    // Read length-prefixed messages
    lenBuf := make([]byte, 4)
    if _, err := io.ReadFull(ctx, lenBuf); err != nil {
        return
    }
    
    msgLen := binary.BigEndian.Uint32(lenBuf)
    msg := make([]byte, msgLen)
    if _, err := io.ReadFull(ctx, msg); err != nil {
        return
    }
    
    // Process and respond
    response := processMessage(msg)
    ctx.Write(response)
}

Real-world: IoT device communication, game servers, financial data feeds.

2. Secure API Gateway

Problem: TLS-encrypted gateway for backend services.

cfg := sckcfg.Server{
    Network: libptc.NetworkTCP,
    Address: ":8443",
    TLS: sckcfg.TLS{
        Enable: true,
        Config: tlsConfig,  // Mutual TLS with client certs
    },
    ConIdleTimeout: 5 * time.Minute,
}

srv, _ := tcp.New(nil, gatewayHandler, cfg)

Real-world: Microservice mesh, secure API endpoints.

3. Connection Pooling Proxy

Problem: Maintain persistent connections to backend servers.

// The pool needs a New func; otherwise Get() returns nil and the
// type assertion below panics. "backend:5432" is a hypothetical address.
var backendPool = sync.Pool{
    New: func() any {
        c, _ := net.Dial("tcp", "backend:5432")
        return c
    },
}

handler := func(ctx libsck.Context) {
    defer ctx.Close()
    
    // Get backend connection from pool
    backend := backendPool.Get().(net.Conn)
    defer backendPool.Put(backend)
    
    // Bidirectional copy; the reverse copy ends when either side closes
    go io.Copy(backend, ctx)
    io.Copy(ctx, backend)
}

Real-world: Database proxy, load balancer, connection multiplexer.

4. Real-Time Monitoring Server

Problem: Stream real-time metrics to monitoring clients.

srv.RegisterFuncInfo(func(local, remote net.Addr, state libsck.ConnState) {
    switch state {
    case libsck.ConnectionNew:
        metricsCollector.IncCounter("connections_total")
        metricsCollector.IncGauge("connections_active")
    case libsck.ConnectionClose:
        metricsCollector.DecGauge("connections_active")
    }
})

Real-world: Telemetry collection, log aggregation.

5. WebSocket-like Protocol

Problem: Implement frame-based messaging without HTTP.

handler := func(ctx libsck.Context) {
    defer ctx.Close()
    
    for {
        // Read frame header
        header := make([]byte, 2)
        if _, err := io.ReadFull(ctx, header); err != nil {
            return
        }
        
        opcode := header[0]
        payloadLen := header[1]
        
        // Read payload
        payload := make([]byte, payloadLen)
        if _, err := io.ReadFull(ctx, payload); err != nil {
            return
        }
        
        processFrame(opcode, payload)
    }
}

Real-world: Game protocols, streaming applications.


Quick Start

Installation
go get github.com/nabbar/golib/socket/server/tcp
Basic Echo Server
package main

import (
    "context"
    "io"
    
    libptc "github.com/nabbar/golib/network/protocol"
    libsck "github.com/nabbar/golib/socket"
    sckcfg "github.com/nabbar/golib/socket/config"
    tcp "github.com/nabbar/golib/socket/server/tcp"
)

func main() {
    // Define echo handler
    handler := func(ctx libsck.Context) {
        defer ctx.Close()
        io.Copy(ctx, ctx)  // Echo
    }
    
    // Create configuration
    cfg := sckcfg.Server{
        Network: libptc.NetworkTCP,
        Address: ":8080",
    }
    
    // Create and start server
    srv, _ := tcp.New(nil, handler, cfg)
    srv.Listen(context.Background())
}
Server with TLS
import (
    libtls "github.com/nabbar/golib/certificates"
    tlscrt "github.com/nabbar/golib/certificates/certs"
    tlsvrs "github.com/nabbar/golib/certificates/tlsversion" // import path assumed for the TLS version constants used below
    // ... other imports
)

func main() {
    // Load TLS certificate
    cert, _ := tlscrt.LoadPair("server.key", "server.crt")
    
    tlsConfig := libtls.Config{
        Certs:      []tlscrt.Certif{cert.Model()},
        VersionMin: tlsvrs.VersionTLS12,
        VersionMax: tlsvrs.VersionTLS13,
    }
    
    // Configure server with TLS
    cfg := sckcfg.Server{
        Network: libptc.NetworkTCP,
        Address: ":8443",
        TLS: sckcfg.TLS{
            Enable: true,
            Config: tlsConfig,
        },
    }
    
    srv, _ := tcp.New(nil, handler, cfg)
    srv.Listen(context.Background())
}
Production Server
func main() {
    // Handler with error handling
    handler := func(ctx libsck.Context) {
        defer ctx.Close()
        
        buf := make([]byte, 4096)
        for ctx.IsConnected() {
            n, err := ctx.Read(buf)
            if err != nil {
                log.Printf("Read error: %v", err)
                return
            }
            
            if _, err := ctx.Write(buf[:n]); err != nil {
                log.Printf("Write error: %v", err)
                return
            }
        }
    }
    
    // Configuration with idle timeout
    cfg := sckcfg.Server{
        Network:        libptc.NetworkTCP,
        Address:        ":8080",
        ConIdleTimeout: 5 * time.Minute,
    }
    
    srv, _ := tcp.New(nil, handler, cfg)
    
    // Register monitoring callbacks
    srv.RegisterFuncError(func(errs ...error) {
        for _, err := range errs {
            log.Printf("Server error: %v", err)
        }
    })
    
    srv.RegisterFuncInfo(func(local, remote net.Addr, state libsck.ConnState) {
        log.Printf("[%s] %s -> %s", state, remote, local)
    })
    
    // Start server
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()
    
    go func() {
        if err := srv.Listen(ctx); err != nil {
            log.Fatalf("Server error: %v", err)
        }
    }()
    
    // Graceful shutdown on signal
    sigChan := make(chan os.Signal, 1)
    signal.Notify(sigChan, os.Interrupt, syscall.SIGTERM)
    
    <-sigChan
    log.Println("Shutting down...")
    
    shutdownCtx, shutdownCancel := context.WithTimeout(
        context.Background(), 30*time.Second)
    defer shutdownCancel()
    
    if err := srv.Shutdown(shutdownCtx); err != nil {
        log.Printf("Shutdown error: %v", err)
    }
    
    log.Println("Server stopped")
}

Best Practices

✅ DO

Always close connections:

handler := func(ctx libsck.Context) {
    defer ctx.Close()  // Ensures cleanup
    // Handler logic...
}

Implement graceful shutdown:

shutdownCtx, cancel := context.WithTimeout(
    context.Background(), 30*time.Second)
defer cancel()

if err := srv.Shutdown(shutdownCtx); err != nil {
    log.Printf("Shutdown timeout: %v", err)
}

Monitor connection count:

go func() {
    ticker := time.NewTicker(10 * time.Second)
    defer ticker.Stop()
    
    for range ticker.C {
        count := srv.OpenConnections()
        if count > 1000 {
            log.Printf("WARNING: High connection count: %d", count)
        }
    }
}()

Handle errors properly:

n, err := ctx.Read(buf)
if err != nil {
    if err != io.EOF {
        log.Printf("Read error: %v", err)
    }
    return  // Exit handler
}
❌ DON'T

Don't ignore graceful shutdown:

// ❌ BAD: Abrupt shutdown loses data
srv.Close()

// ✅ GOOD: Wait for connections to finish
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
srv.Shutdown(ctx)

Don't leak goroutines:

// ❌ BAD: Forgot to close connection
handler := func(ctx libsck.Context) {
    io.Copy(ctx, ctx)  // Connection never closed!
}

// ✅ GOOD: Always defer Close
handler := func(ctx libsck.Context) {
    defer ctx.Close()
    io.Copy(ctx, ctx)
}

Don't use in ultra-high concurrency:

// ❌ BAD: 100K+ connections on goroutine-per-connection
// This will consume excessive memory and goroutines

// ✅ GOOD: For >10K connections, use event-driven model
// Consider alternatives like netpoll, epoll, or io_uring
Testing

The package includes a comprehensive test suite with 79.1% code coverage and 58 test specifications using BDD methodology (Ginkgo v2 + Gomega).

Key test coverage:

  • ✅ All public APIs and lifecycle operations
  • ✅ Concurrent access with race detector (zero races detected)
  • ✅ Performance benchmarks (throughput, latency, scalability)
  • ✅ Error handling and edge cases
  • ✅ TLS handshake and encryption
  • ✅ Context integration and cancellation

For detailed test documentation, see TESTING.md.


API Reference

ServerTcp Interface
type ServerTcp interface {
    // Start accepting connections
    Listen(ctx context.Context) error
    
    // Stop accepting, wait for connections to close
    Shutdown(ctx context.Context) error
    
    // Stop accepting, close all connections immediately
    Close() error
    
    // Check if server is accepting connections
    IsRunning() bool
    
    // Check if server is draining connections
    IsGone() bool
    
    // Get current connection count
    OpenConnections() int64
    
    // Configure TLS
    SetTLS(enable bool, config libtls.TLSConfig) error
    
    // Register address
    RegisterServer(address string) error
    
    // Register callbacks
    RegisterFuncError(f libsck.FuncError)
    RegisterFuncInfo(f libsck.FuncInfo)
    RegisterFuncInfoServer(f libsck.FuncInfoSrv)
}
Configuration
type Server struct {
    Network        libptc.NetworkType  // Protocol (TCP)
    Address        string              // Listen address ":8080"
    ConIdleTimeout time.Duration       // Idle timeout (0=disabled)
    TLS            TLS                 // TLS configuration
}

type TLS struct {
    Enable bool           // Enable TLS
    Config libtls.Config  // TLS certificates and settings
}
Error Codes
var (
    ErrInvalidAddress  = fmt.Errorf("invalid listen address")
    ErrInvalidHandler  = fmt.Errorf("invalid handler")
    ErrInvalidInstance = fmt.Errorf("invalid socket instance")
    ErrServerClosed    = fmt.Errorf("server closed")
    ErrContextClosed   = fmt.Errorf("context closed")
    ErrShutdownTimeout = fmt.Errorf("timeout on stopping socket")
    ErrGoneTimeout     = fmt.Errorf("timeout on closing connections")
    ErrIdleTimeout     = fmt.Errorf("timeout on idle connections")
)

Contributing

Contributions are welcome! Please follow these guidelines:

Reporting Bugs

If you find a bug, please open an issue on GitHub with:

  1. Description: Clear and concise description of the bug
  2. Reproduction Steps: Minimal code example to reproduce the issue
  3. Expected Behavior: What you expected to happen
  4. Actual Behavior: What actually happened
  5. Environment: Go version, OS, and relevant system information
  6. Logs/Errors: Any error messages or stack traces

Submit issues at: https://github.com/nabbar/golib/issues

Code Contributions
  1. Code Quality

    • Follow Go best practices and idioms
    • Maintain or improve code coverage (target: >80%)
    • Pass all tests including race detector
    • Use gofmt and golint
  2. AI Usage Policy

    • AI must NEVER be used to generate package code or core functionality
    • AI assistance is limited to:
      • Testing (writing and improving tests)
      • Debugging (troubleshooting and bug resolution)
      • Documentation (comments, README, TESTING.md)
    • All AI-assisted work must be reviewed and validated by humans
  3. Testing

    • Add tests for new features
    • Use Ginkgo v2 / Gomega for test framework
    • Ensure zero race conditions with go test -race
  4. Documentation

    • Update GoDoc comments for public APIs
    • Add examples for new features
    • Update README.md and TESTING.md if needed

Improvements & Security

Current Status

The package is production-ready with no urgent improvements or security vulnerabilities identified.

Code Quality Metrics
  • 79.1% test coverage (target: >80%)
  • Zero race conditions detected with -race flag
  • Thread-safe implementation using atomic operations
  • TLS 1.2/1.3 support with secure defaults
  • Graceful shutdown with connection draining
Known Limitations

Architectural Constraints:

  1. Scalability: Goroutine-per-connection model is optimal for 1-10K connections. For >10K connections, consider event-driven alternatives (epoll, io_uring)
  2. No Protocol Framing: Applications must implement their own message framing layer
  3. No Connection Pooling: Each connection is independent - implement pooling at application level if needed
  4. No Built-in Rate Limiting: Application must implement rate limiting for connection/request throttling
  5. No Metrics Export: No built-in Prometheus or OpenTelemetry integration - use callbacks for custom metrics

Not Suitable For:

  • Ultra-high concurrency scenarios (>50K simultaneous connections)
  • Low-latency high-frequency trading (<10µs response time requirements)
  • Short-lived connections at extreme rates (>100K connections/second)
  • Protocol multiplexing scenarios (use HTTP/2, gRPC, or QUIC instead)
Future Enhancements (Non-urgent)

The following enhancements could be considered for future versions:

  1. Connection Pooling: Built-in connection pool management for backend proxies
  2. Rate Limiting: Configurable per-IP and global rate limiting
  3. Metrics Integration: Optional Prometheus/OpenTelemetry exporters
  4. Protocol Helpers: Common framing protocols (length-prefixed, delimited, chunked)
  5. Load Balancing: Built-in connection distribution strategies
  6. Circuit Breaker: Automatic failure detection and recovery

These are optional improvements and not required for production use. The current implementation is stable and performant for its intended use cases.

Security Considerations

Security Best Practices Applied:

  • TLS 1.2/1.3 with configurable cipher suites
  • Mutual TLS (mTLS) support for client authentication
  • Idle timeout to prevent resource exhaustion
  • Graceful shutdown prevents data loss
  • Context cancellation for timeouts and deadlines

No Known Vulnerabilities:

  • Regular security audits performed
  • Dependencies limited to Go stdlib and internal golib packages
  • No CVEs reported
Comparison with Alternatives
Feature           tcp (this package)   net/http           gRPC
Protocol          Raw TCP              HTTP/1.1, HTTP/2   HTTP/2
Framing           Manual               Built-in           Built-in
TLS               Optional             Optional           Optional
Concurrency       Per-connection       Per-request        Per-stream
Best For          Custom protocols     REST APIs          RPC services
Max Connections   ~10K                 ~10K               ~10K per server
Learning Curve    Low                  Medium             High

Resources

Package Documentation
  • GoDoc - Complete API reference
  • doc.go - In-depth package documentation with architecture details
  • TESTING.md - Comprehensive testing documentation

AI Transparency

In compliance with EU AI Act Article 50.4: AI assistance was used for testing, documentation, and bug resolution under human supervision. All core functionality is human-designed and validated.


License

MIT License - See LICENSE file for details.

Copyright (c) 2022 Nicolas JUHEL


Maintained by: Nicolas JUHEL
Package: github.com/nabbar/golib/socket/server/tcp
Version: See releases for versioning

Documentation

Overview

Package tcp provides a high-performance, production-ready TCP server implementation with support for TLS, connection pooling, and centralized idle management.

1. ARCHITECTURE

The TCP server is engineered to handle massive concurrency while maintaining a predictable and low memory footprint. It achieves this by combining Go's standard library net.TCPListener with several advanced architectural patterns.

┌─────────────────────────────────────────────────────────────────────────────┐
│                              TCP SERVER SYSTEM                              │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│   [ NETWORK LAYER ]         [ KERNEL SPACE ]          [ USER SPACE ]        │
│          │                         │                        │               │
│   TCP SYN Package ----> [ TCP Accept Queue ] ----> [ Accept Loop (srv) ]    │
│                                                             │               │
│                                                             ▼               │
│   [ RESOURCE MANAGEMENT ]                           [ CONNECTION HANDLER ]  │
│          │                                                  │               │
│   ┌──────────────┐          ┌────────────────┐      ┌───────────────┐       │
│   │  sync.Pool   │ <─────── │ Context Reset  │ <─── │  net.Conn     │       │
│   │ (sCtx items) │          └────────────────┘      │  (Raw Socket) │       │
│   └──────────────┘                                  └───────┬───────┘       │
│          ^                                                  │               │
│          │                                                  v               │
│   ┌──────────────┐          ┌────────────────┐      ┌───────────────┐       │
│   │ Idle Manager │ <─────── │ Registration   │ <─── │ User Handler  │       │
│   │ (sckidl.Mgr) │          └────────────────┘      │ (Goroutine)   │       │
│   └──────────────┘                                  └───────────────┘       │
│                                                             │               │
│   [ SHUTDOWN CONTROL ]                                      v               │
│   ┌──────────────────┐                              ┌───────────────┐       │
│   │   Gone Channel   │ ───────(Broadcast)─────────> │   Cleanup     │       │
│   │      (gnc)       │                              │ (Pool Return) │       │
│   └──────────────────┘                              └───────────────┘       │
└─────────────────────────────────────────────────────────────────────────────┘

2. PERFORMANCE OPTIMIZATIONS

  • Zero-Allocation Connection Path: The server uses a sync.Pool to manage connection contexts (sCtx). Instead of allocating a new context for every client, the server fetches one from the pool, resets its internal state (atomic counters, context propagation), and returns it once the connection is closed. This reduces GC pause times by up to 90% in high-load scenarios.

  • Synchronous Acceptance Loop: The listener loop blocks directly on net.Listener.Accept(). Recent performance profiling showed that using intermediate channels for connection distribution introduced unnecessary scheduling overhead.

  • Centralized Idle Scanning: Replaced individual per-connection tickers with integration into the nabbar/golib/socket/idlemgr. A single background scanner handles thousands of timeouts by checking atomic activity counters, reducing the overall "timer" overhead in the Go runtime.

  • Event-Driven Shutdown (Gone Channel): The 'gnc' channel acts as a broadcast mechanism. When the server starts shutting down, this channel is closed. All active connection goroutines select on this channel, allowing them to terminate gracefully and instantly.

  • Systematic NoDelay: TCP_NODELAY is enabled by default to ensure minimal latency for small packets, bypassing Nagle's algorithm.

3. DATA FLOW

The following diagram illustrates the lifecycle of a connection within the server:

[CLIENT]          [SERVER LISTENER]          [IDLE MGR]          [HANDLER]
   │                     │                       │                   │
   │───(TCP Connect)────>│                       │                   │
   │                     │───(Fetch sCtx)───────>│                   │
   │                     │                       │                   │
   │                     │───(Register)─────────>│                   │
   │                     │                       │                   │
   │                     │───(Spawn Handler)────────────────────────>│
   │                     │                       │                   │
   │<───(I/O Activity)───┼───────────────────────────────────────────│
   │                     │                       │                   │
   │                     │                       │<──(Atomic Reset)──│
   │                     │                       │                   │
   │───(TCP Close)──────>│                       │                   │
   │                     │───(Unregister)───────>│                   │
   │                     │                       │                   │
   │                     │───(Release sCtx)─────>│                   │
   │                     │                       │                   │

4. SECURITY & TLS (RFC 8446, RFC 5246)

The server provides first-class support for TLS 1.2 and 1.3 through the SetTLS method. It integrates seamlessly with the nabbar/golib/certificates package.

TLS over Pure Sockets: Limitations and Constraints

When using TLS over pure sockets (without a higher-level protocol like HTTP/1.1 or HTTP/2 which provide explicit host headers), several security and validation constraints must be acknowledged:

  1. Trust Chain Relativity: Since the TLS handshake often happens on a direct IP dial, the verification of the certificate's Common Name (CN) or Subject Alternative Name (SAN) is relative to the IP or the provided ServerName.

  2. SNI Constraints (RFC 6066): Server Name Indication (SNI) is used to select the appropriate certificate. If the client does not provide SNI, the server falls back to its default certificate. In pure socket environments, clients must be explicitly configured to send SNI for proper virtual hosting support.

  3. RFC 8446 (TLS 1.3): The server prioritizes TLS 1.3, which eliminates vulnerable cryptographic primitives and provides "0-RTT" (not enabled by default for security reasons).

  4. Trust Chain Validation: Chain of trust verification is performed by the underlying crypto/tls package. However, in pure socket scenarios, the lack of standardized application-level verification (like HTTP's HSTS) means the security of the initial handshake is paramount.

5. BEST PRACTICES & USE CASES

  • Use Case: Building high-performance microservices communicating over TCP.
  • Use Case: Implementing custom protocols (e.g., database drivers, legacy IPC).
  • Best Practice: Always call defer ctx.Close() within your HandlerFunc to ensure resource return even if a panic occurs.
  • Best Practice: Use the UpdateConn callback for OS-level tuning like SO_RCVBUF and SO_SNDBUF for high-bandwidth applications.

6. IMPLEMENTATION EXAMPLE

package main

import (
    "context"
    "fmt"

    libptc "github.com/nabbar/golib/network/protocol"
    "github.com/nabbar/golib/socket"
    "github.com/nabbar/golib/socket/config"
    "github.com/nabbar/golib/socket/server/tcp"
)

func main() {
    // 1. Define the handler
    handler := func(ctx socket.Context) {
        defer ctx.Close()
        buf := make([]byte, 1024)
        n, err := ctx.Read(buf)
        if err != nil {
            return
        }
        fmt.Printf("Received: %s\n", string(buf[:n]))
        _, _ = ctx.Write([]byte("ACK"))
    }

    // 2. Setup Config
    cfg := config.Server{
        Network: libptc.NetworkTCP,
        Address: ":9090",
    }

    // 3. Instantiate
    srv, _ := tcp.New(nil, handler, cfg)

    // 4. Start
    srv.Listen(context.Background())
}

Package tcp provides a robust and performance-oriented TCP server implementation. It integrates with nabbar/golib/socket for common interfaces and configuration.

Features include:

  • TLS support (v1.2, v1.3) with certificate management.
  • Graceful shutdown with connection draining.
  • High-performance memory pooling (sync.Pool).
  • Centralized idle connection scanning.
  • TCP_NODELAY and Keep-Alive tuning.
Example

Example demonstrates a minimal TCP echo server. This is the simplest possible implementation for development and testing.

package main

import (
	"context"
	"time"

	libptc "github.com/nabbar/golib/network/protocol"

	libsck "github.com/nabbar/golib/socket"

	sckcfg "github.com/nabbar/golib/socket/config"

	scksrt "github.com/nabbar/golib/socket/server/tcp"
)

func main() {
	// 1. Define the handler function
	handler := func(c libsck.Context) {
		// Ensure connection is closed after the handler exits.
		defer func() { _ = c.Close() }()

		buf := make([]byte, 1024)
		for {
			// Read from the context-aware wrapper.
			n, err := c.Read(buf)
			if err != nil {
				return
			}
			// Write received data back (Echo).
			if n > 0 {
				_, _ = c.Write(buf[:n])
			}
		}
	}

	// 2. Create server configuration
	cfg := sckcfg.Server{
		Network: libptc.NetworkTCP,
		Address: ":8080",
	}

	// 3. Instantiate the server
	srv, err := scksrt.New(nil, handler, cfg)
	if err != nil {
		panic(err)
	}

	// 4. Start listening in a background goroutine
	ctx := context.Background()
	go func() {
		_ = srv.Listen(ctx)
	}()

	// Wait briefly for the listener to bind.
	time.Sleep(100 * time.Millisecond)

	// 5. Gracefully shutdown the server.
	_ = srv.Shutdown(ctx)
}
Example (Complete)

Example_complete demonstrates a robust, production-ready TCP server. This example includes error handling, connection monitoring, and graceful shutdown.

package main

import (
	"context"
	"fmt"
	"net"
	"time"

	libdur "github.com/nabbar/golib/duration"
	libptc "github.com/nabbar/golib/network/protocol"

	libsck "github.com/nabbar/golib/socket"

	sckcfg "github.com/nabbar/golib/socket/config"

	scksrt "github.com/nabbar/golib/socket/server/tcp"
)

func main() {
	// Advanced handler with activity check
	handler := func(c libsck.Context) {
		defer func() { _ = c.Close() }()

		buf := make([]byte, 4096)
		// Use IsConnected() to check the socket state in a loop.
		for c.IsConnected() {
			n, err := c.Read(buf)
			if err != nil {
				return
			}

			if n > 0 {
				if _, err := c.Write(buf[:n]); err != nil {
					return
				}
			}
		}
	}

	// Configure with a 5-minute idle timeout (centralized management).
	cfg := sckcfg.Server{
		Network:        libptc.NetworkTCP,
		Address:        ":8081",
		ConIdleTimeout: libdur.Minutes(5),
	}

	// Create server
	srv, err := scksrt.New(nil, handler, cfg)
	if err != nil {
		fmt.Printf("Failed to create server: %v\n", err)
		return
	}

	// Register monitoring callbacks for observability
	srv.RegisterFuncError(func(errs ...error) {
		for _, e := range errs {
			fmt.Printf("Server error: %v\n", e)
		}
	})

	srv.RegisterFuncInfo(func(local, remote net.Addr, state libsck.ConnState) {
		fmt.Printf("Connection %s from %s\n", state, remote)
	})

	// Start server with context support
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	go func() {
		if err := srv.Listen(ctx); err != nil {
			fmt.Printf("Server stopped: %v\n", err)
		}
	}()

	// Active polling for server readiness
	for i := 0; i < 25; i++ {
		if srv.IsRunning() {
			break
		}
		time.Sleep(100 * time.Millisecond)
	}
	if !srv.IsRunning() {
		fmt.Println("Server not started")
	}

	fmt.Printf("Server running with %d connections\n", srv.OpenConnections())

	// Shutdown with a specific timeout for connection draining
	shutdownCtx, shutdownCancel := context.WithTimeout(
		context.Background(), 10*time.Second)
	defer shutdownCancel()

	if err := srv.Shutdown(shutdownCtx); err != nil {
		fmt.Printf("Shutdown error: %v\n", err)
	}

	fmt.Println("Server stopped gracefully")
}
Output:
Server running with 0 connections
Server stopped gracefully


Constants

This section is empty.

Variables

var (
	// ErrInvalidAddress is returned when the provided server address is empty,
	// malformed, or cannot be resolved using net.ResolveTCPAddr.
	//
	// Valid address examples:
	//   - "127.0.0.1:8080" (IPv4 localhost)
	//   - "[::1]:8080"     (IPv6 localhost)
	//   - ":8080"          (All interfaces)
	ErrInvalidAddress = fmt.Errorf("invalid listen address")

	// ErrInvalidHandler is returned when the required HandlerFunc is not provided
	// during server initialization via New().
	//
	// The handler must be a function matching the libsck.HandlerFunc signature
	// and is responsible for processing each client connection.
	ErrInvalidHandler = fmt.Errorf("invalid handler")

	// ErrShutdownTimeout is returned when the graceful shutdown process exceeds
	// the provided context's deadline.
	//
	// This error occurs during the draining phase, when active connections
	// fail to close or finish their task within the allocated time.
	ErrShutdownTimeout = fmt.Errorf("timeout on stopping socket")

	// ErrInvalidInstance is returned when a method is called on a nil server instance
	// or an instance that has not been properly initialized via New().
	ErrInvalidInstance = fmt.Errorf("invalid socket instance")
)

Functions

This section is empty.

Types

type ServerTcp

type ServerTcp interface {
	libsck.Server

	// RegisterServer sets the TCP address for the server to listen on.
	// The address should be in "host:port" format (e.g., "localhost:8080" or ":8080").
	// Must be called before Listen(). Returns ErrInvalidAddress if the input
	// is malformed.
	RegisterServer(address string) error
}

ServerTcp defines the interface for a high-performance TCP server. It extends the base libsck.Server interface with specific TCP functionality.

A ServerTcp provides a concurrent TCP server that handles client connections using a customizable handler function. It supports TLS encryption, idle connection management, and graceful shutdown.

Thread Safety

All implementations of ServerTcp MUST be safe for concurrent use by multiple goroutines. All methods can be called simultaneously from different goroutines without external locking.

Lifecycle Management

  1. Creation: New() initializes the server with configuration and handler.
  2. Configuration: via SetTLS() and RegisterFunc* methods (Error, Info, InfoServer).
  3. Bind: via RegisterServer() to set the listen address.
  4. Operation: via Listen() to start accepting connections.
  5. Shutdown: via Shutdown() (graceful) or Close() (immediate).

Example Usage (Echo Server)

hdl := func(ctx libsck.Context) {
    defer ctx.Close()
    io.Copy(ctx, ctx) // Echo data back
}
cfg := sckcfg.DefaultServer(":8080")
srv, _ := tcp.New(nil, hdl, cfg)
srv.Listen(context.Background())
Example (ContextValues)

ExampleServerTcp_contextValues shows how to use the context for request-scoped data.

package main

import (
	"context"
	"fmt"

	libptc "github.com/nabbar/golib/network/protocol"

	libsck "github.com/nabbar/golib/socket"

	sckcfg "github.com/nabbar/golib/socket/config"

	scksrt "github.com/nabbar/golib/socket/server/tcp"
)

func main() {
	type contextKey string
	const userIDKey contextKey = "userID"

	handler := func(c libsck.Context) {
		defer func() { _ = c.Close() }()

		// The context wrapper (sCtx) delegates Value() calls to the parent context.
		if userID := c.Value(userIDKey); userID != nil {
			fmt.Printf("Processing request for user: %v\n", userID)
		}
	}

	cfg := sckcfg.Server{
		Network: libptc.NetworkTCP,
		Address: ":9011",
	}

	srv, _ := scksrt.New(nil, handler, cfg)
	fmt.Println("Server with context values ready")
	_ = srv.Shutdown(context.Background())
}
Output:
Server with context values ready
Example (IdleTimeout)

ExampleServerTcp_idleTimeout shows how inactivity thresholds are enforced.

package main

import (
	"context"
	"fmt"
	"time"

	libdur "github.com/nabbar/golib/duration"
	libptc "github.com/nabbar/golib/network/protocol"

	libsck "github.com/nabbar/golib/socket"

	sckcfg "github.com/nabbar/golib/socket/config"

	scksrt "github.com/nabbar/golib/socket/server/tcp"
)

func main() {
	handler := func(c libsck.Context) {
		defer func() { _ = c.Close() }()
		// Connection remains idle; the idle manager (sckidl.Manager) will terminate it.
		time.Sleep(200 * time.Millisecond)
	}

	cfg := sckcfg.Server{
		Network:        libptc.NetworkTCP,
		Address:        ":9007",
		ConIdleTimeout: libdur.ParseDuration(100 * time.Millisecond),
	}
	srv, _ := scksrt.New(nil, handler, cfg)

	ctx := context.Background()
	go func() {
		_ = srv.Listen(ctx)
	}()

	time.Sleep(50 * time.Millisecond)
	fmt.Println("Server with idle timeout running")

	_ = srv.Shutdown(ctx)
}
Output:
Server with idle timeout running
Example (Monitoring)

ExampleServerTcp_monitoring shows a full setup of observability callbacks.

package main

import (
	"context"
	"fmt"
	"net"

	libptc "github.com/nabbar/golib/network/protocol"

	libsck "github.com/nabbar/golib/socket"

	sckcfg "github.com/nabbar/golib/socket/config"

	scksrt "github.com/nabbar/golib/socket/server/tcp"
)

func main() {
	handler := func(c libsck.Context) {
		defer func() { _ = c.Close() }()
	}

	cfg := sckcfg.Server{
		Network: libptc.NetworkTCP,
		Address: ":9009",
	}
	srv, _ := scksrt.New(nil, handler, cfg)

	// Register all available notification hooks.
	srv.RegisterFuncError(func(errs ...error) {
		fmt.Println("Error callback registered")
	})

	srv.RegisterFuncInfo(func(local, remote net.Addr, state libsck.ConnState) {
		fmt.Println("Connection callback registered")
	})

	srv.RegisterFuncInfoServer(func(msg string) {
		fmt.Println("Server info callback registered")
	})

	fmt.Println("All monitoring callbacks configured")
	_ = srv.Shutdown(context.Background())
}
Output:
All monitoring callbacks configured

func New

New creates and initializes a new TCP server instance with the provided configuration.

Configuration and Initialization Dataflow

  1. Validation: cfg.Validate() ensures basic parameters (address, timeouts) are sound.
  2. Defaults: Default TLS versions (1.2/1.3) and empty callbacks are set.
  3. Structure: The srv internal structure is allocated.
  4. Resource Pooling: The sync.Pool for sCtx recycling is initialized.
  5. Idle Manager: If ConIdleTimeout > 0, an sckidl.Manager is started to handle timeouts.
  6. Binding: RegisterServer() is called with the address from the config.
  7. Security: SetTLS() is called with TLS settings from the config.
  8. State: gon is set to true (server is ready to be started).

Parameters

  • upd: Optional callback to configure each net.Conn (e.g., buffer sizes) before handling.
  • hdl: Required handler function that will be called for each new connection.
  • cfg: Server configuration structure including address, TLS settings, and timeouts.

Returns

  • ServerTcp: The initialized server instance.
  • error: Initialization errors (ErrInvalidHandler, ErrInvalidAddress, sckidl errors).
Example

ExampleNew shows how to initialize a server and handle initial validation errors.

package main

import (
	"fmt"
	"io"

	libptc "github.com/nabbar/golib/network/protocol"

	libsck "github.com/nabbar/golib/socket"

	sckcfg "github.com/nabbar/golib/socket/config"

	scksrt "github.com/nabbar/golib/socket/server/tcp"
)

func main() {
	handler := func(c libsck.Context) {
		defer func() { _ = c.Close() }()
		_, _ = io.Copy(c, c)
	}

	// Note: Providing a non-TCP protocol will fail validation for scksrt.New
	cfg := sckcfg.Server{
		Network: libptc.NetworkTCP,
		Address: ":9000",
	}

	// Create server
	srv, err := scksrt.New(nil, handler, cfg)
	if err != nil {
		fmt.Printf("Failed to create server: %v\n", err)
		return
	}

	fmt.Printf("Server created successfully\n")
	_ = srv
}
Output:
Server created successfully
Example (SimpleProtocol)

ExampleNew_simpleProtocol shows the skeleton of a simple request/response TCP server; the handler echoes raw reads, a basis for line-based protocols.

package main

import (
	"context"
	"fmt"

	libptc "github.com/nabbar/golib/network/protocol"

	libsck "github.com/nabbar/golib/socket"

	sckcfg "github.com/nabbar/golib/socket/config"

	scksrt "github.com/nabbar/golib/socket/server/tcp"
)

func main() {
	handler := func(c libsck.Context) {
		defer func() { _ = c.Close() }()

		buf := make([]byte, 1024)
		for {
			n, err := c.Read(buf)
			if err != nil {
				return
			}

			// Process the request and write the response (echo for simplicity).
			if n > 0 {
				_, _ = c.Write(buf[:n])
			}
		}
	}

	cfg := sckcfg.Server{
		Network: libptc.NetworkTCP,
		Address: ":9010",
	}

	srv, err := scksrt.New(nil, handler, cfg)
	if err != nil {
		fmt.Printf("Error: %v\n", err)
		return
	}

	fmt.Println("Line-based protocol server created")
	_ = srv.Shutdown(context.Background())
}
Output:
Line-based protocol server created
Example (WithUpdateConn)

ExampleNew_withUpdateConn shows how to tune low-level socket options.

package main

import (
	"context"
	"fmt"
	"net"
	"time"

	libptc "github.com/nabbar/golib/network/protocol"

	libsck "github.com/nabbar/golib/socket"

	sckcfg "github.com/nabbar/golib/socket/config"

	scksrt "github.com/nabbar/golib/socket/server/tcp"
)

func main() {
	// Use UpdateConn for advanced OS-level tuning.
	upd := func(c net.Conn) {
		if tcpConn, ok := c.(*net.TCPConn); ok {
			_ = tcpConn.SetKeepAlive(true)
			_ = tcpConn.SetKeepAlivePeriod(30 * time.Second)
		}
	}

	handler := func(c libsck.Context) {
		defer func() { _ = c.Close() }()
	}

	cfg := sckcfg.Server{
		Network: libptc.NetworkTCP,
		Address: ":9008",
	}

	srv, err := scksrt.New(upd, handler, cfg)
	if err != nil {
		fmt.Printf("Error: %v\n", err)
		return
	}

	fmt.Println("Server with custom connection config created")
	_ = srv.Shutdown(context.Background())
}
Output:
Server with custom connection config created
