Published: Apr 29, 2026 License: MIT Imports: 38 Imported by: 0


paycloudhelper

Go shared library — common utilities for all PayCloud Hub microservices.


Version: 2.1.0 Go Version: 1.25.0 (toolchain: go1.25.9) Last Updated: April 25, 2026

Module: github.com/PayCloud-ID/paycloudhelper
Go: 1.25 (toolchain pinned via go.mod)



Overview

paycloudhelper is a shared library imported by ~30 PayCloud microservices. It is not a standalone service. On import, init() runs automatically and bootstraps the logger and app identity. Consumer services then explicitly opt into Redis, RabbitMQ, and Sentry.

Auto-Initialization Flow
import paycloudhelper → init() runs:
  AddValidatorLibs() → InitializeLogger() → InitializeApp()

Consumer must explicitly call:
  InitializeRedisWithRetry(opts)   → Redis pool + RedSync
  SetUpRabbitMq(...)               → Audit trail
  InitSentry(options)              → Error tracking (optional)
  ConfigureLogForwarding(cfg)      → Log → Sentry forwarding (optional)

Architecture

Service Flow
flowchart TD
    Consumer[Consumer Service] -->|import| Init[paycloudhelper init]
    Init --> AddValidator[AddValidatorLibs]
    Init --> InitLogger[InitializeLogger]
    Init --> InitApp[InitializeApp]
    Consumer -->|explicit call| InitRedis[InitializeRedisWithRetry]
    Consumer -->|explicit call| InitRabbit[SetUpRabbitMq]
    Consumer -->|optional call| InitSentry[InitSentry]
    Consumer -->|runtime usage| RedisOps[Store/Get/Lock helpers]
    Consumer -->|runtime usage| Middleware[CSRF/Idempotency/Revoke middleware]
    Consumer -->|runtime usage| AuditTrail[Audit trail publish]
Integration Map
flowchart LR
    subgraph paycloudhelper
        Core[Root package]
        Logger[phlogger]
        Sentry[phsentry]
        Helper[phhelper]
    end

    Service[Consumer service] --> Core
    Core --> Redis[(Redis)]
    Core --> Rabbit[(RabbitMQ)]
    Core --> SentrySDK[(Sentry)]
    Core --> Logger
    Core --> Helper

Quick Start

import pch "github.com/PayCloud-ID/paycloudhelper"

// In main() — after godotenv.Load()
pch.InitializeRedisWithRetry(pch.RedisInitOptions{...})
pch.SetUpRabbitMq(...)
pch.InitSentry(pch.SentryOptions{Dsn: os.Getenv("SENTRY_DSN")})

// Optional: forward Fatal logs to Sentry automatically
pch.ConfigureLogForwarding(pch.LogForwardConfig{
    ForwardFatal: true, // default true when Sentry is enabled
})

Package Structure

Package Path Purpose
Root . Public API — all below re-exported here
phlogger phlogger/ Logger wrapper (kataras/golog) + sampler + context logger + metrics hooks + KeyedLimiter + forwarding hooks
phsentry phsentry/ Sentry error tracking, log receiver
phhelper phhelper/ Global state (APP_NAME, APP_ENV), JSON/string helpers
phaudittrailv0 phaudittrailv0/ Legacy v0 audit trail (RabbitMQ)
phjson phjson/ Sonic JSON wrapper for high-throughput consumers
sdk/services/s3minio sdk/services/s3minio/ Service-scoped S3MinIO SDK (helper, grpc, http bridge, pb, proto, facade)
sdk/shared sdk/shared/ Shared runtime placeholders for transport, observability, and error normalization across future SDKs
Service-Scoped SDK Foundation

sdk/services/s3minio is the active runtime path and reference layout for future service SDKs.

  • All S3MinIO helper/grpc/http/pb logic now lives under sdk/services/s3minio/*.
  • New services should follow the same sdk/services/<service> structure.
  • Governance scripts under scripts/proto/ and scripts/check-*.sh enforce drift and transport boundaries.

API Reference

Logging

Import the root package (e.g. import pch "github.com/PayCloud-ID/paycloudhelper"). Do not import phlogger directly in consumer services. Every log line must include the caller in square brackets: use [Type.MethodName] for methods (e.g. [Server.Start]) and [FuncName] for plain functions. Prefer key=value style after the prefix.

pch.LogI("[FuncName] started id=%s", id)    // Info — or [Server.Start] for methods
pch.LogE("[FuncName] error: %v", err)        // Error
pch.LogW("[FuncName] warn: %s", msg)         // Warning
pch.LogD("[FuncName] debug key=%s", key)     // Debug (off in production)
pch.LogF("[FuncName] fatal: %v", err)        // Fatal — process exits
pch.LogJ(obj)                                // JSON (compact)
pch.LogJI(obj)                               // JSON (indented)
pch.LogErr(err)                              // Error shorthand (no format string)
Sampled Logging (Default Behavior)

All log functions (LogI, LogE, LogW, LogD, LogF) are sampled by default using the format string as key. The sampler uses an Initial/Thereafter pattern per time period:

Environment Initial Thereafter Period Behavior
production / prod 5 50 1s First 5/sec per key, then every 50th
staging / stg 10 10 1s First 10/sec, then every 10th
develop / "" (default) 0 (disabled) All logs pass through

The sampler is initialized automatically from APP_ENV. Override at startup:

pch.InitializeSampler(pch.SamplerConfig{
    Initial:    3,
    Thereafter: 100,
    Period:     time.Second,
})

When suppressed logs are emitted, the message includes [+N suppressed].

Sampled Logging with Custom Key
// Custom key isolates sampling from the format string.
// Uses the global sampler config (env-aware):
pch.LogIRated("cache.miss", "[FuncName] cache miss key=%s", key)
pch.LogERated("db.error", "[FuncName] db error: %v", err)

// Explicit time-window override (bypasses sampler, uses simple dedup):
pch.LogIRatedW("cache.miss", 5*time.Second, "[FuncName] cache miss key=%s", key)
pch.LogERatedW("db.error", 500*time.Millisecond, "[FuncName] db error: %v", err)
Context Logger (Request-Scoped Fields)

LogContext is a child logger that prepends key-value fields to every message. Useful for request-scoped or operation-scoped logging:

ctx := pch.NewLogContext("req_id", "abc-123", "merchant", "M001")
ctx.LogI("processing payment amount=%d", 5000)
// output: [req_id=abc-123 merchant=M001] processing payment amount=5000

ctx.LogE("payment failed: %v", err)
// output: [req_id=abc-123 merchant=M001] payment failed: timeout

// Add more fields without losing parent context:
child := ctx.With("step", "validate")
child.LogI("checking input")
// output: [req_id=abc-123 merchant=M001 step=validate] checking input
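Under the hood this kind of child logger just carries an accumulated prefix. A self-contained sketch of the idea (the names and output format follow the examples above; this is illustrative, not the library's internals):

```go
package main

import (
	"fmt"
	"strings"
)

// logContext mimics the prefix-carrying behavior shown above: each logger
// prepends its accumulated key=value pairs to every message.
type logContext struct {
	fields []string // "key=value" pairs, in insertion order
}

func newLogContext(kv ...string) *logContext {
	lc := &logContext{}
	for i := 0; i+1 < len(kv); i += 2 {
		lc.fields = append(lc.fields, kv[i]+"="+kv[i+1])
	}
	return lc
}

// with returns a child that copies the parent's fields and appends new ones,
// so the parent is never mutated.
func (lc *logContext) with(kv ...string) *logContext {
	child := &logContext{fields: append([]string(nil), lc.fields...)}
	for i := 0; i+1 < len(kv); i += 2 {
		child.fields = append(child.fields, kv[i]+"="+kv[i+1])
	}
	return child
}

func (lc *logContext) logI(format string, args ...interface{}) string {
	line := "[" + strings.Join(lc.fields, " ") + "] " + fmt.Sprintf(format, args...)
	fmt.Println(line)
	return line
}

func main() {
	ctx := newLogContext("req_id", "abc-123", "merchant", "M001")
	ctx.logI("processing payment amount=%d", 5000)
	// [req_id=abc-123 merchant=M001] processing payment amount=5000
	child := ctx.with("step", "validate")
	child.logI("checking input")
	// [req_id=abc-123 merchant=M001 step=validate] checking input
}
```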
Metrics Hooks (High-Frequency Events)

For events that happen thousands of times per second, measure instead of logging. Register a hook once at startup — no prometheus dependency in the library:

// Wire your own counter backend (prometheus, statsd, etc.)
pch.RegisterMetricsHook(func(event string, count int64) {
    myPromCounter.WithLabelValues(event).Add(float64(count))
})

// Then in hot paths:
pch.IncrementMetric("cache.miss")
pch.IncrementMetricBy("batch.processed", int64(batchSize))

If no hook is registered, calls are no-ops (zero overhead).

KeyedLimiter (Token Bucket)

For precise per-key rate control (max N events/sec), use KeyedLimiter instead of the sampler:

limiter := pch.NewKeyedLimiter(10, 1)  // 10 events/sec, burst 1

if limiter.Allow("db.timeout") {
    pch.LogE("[Handler] database timeout host=%s", host)
}

Each key gets an independent token bucket. Tokens refill at the configured rate.

Sentry Structured Logging

Paycloudhelper integrates structured logging with Sentry (SDK v0.33.0+) to forward all logs to Sentry for centralized error tracking and observability.

Simple Setup (Recommended):

// 1. Initialize Sentry
pch.InitSentry(pch.SentryOptions{
    Dsn:         os.Getenv("SENTRY_DSN"),
    Environment: os.Getenv("APP_ENV"),
    Release:     os.Getenv("SENTRY_RELEASE"),
})

// 2. Enable structured logging via environment variable (one-liner)
pch.ConfigureSentryLogging(pch.SentryLoggingFromEnv())

// 3. Logs are now automatically forwarded to Sentry
pch.LogE("[Main.start] error: %v", err)     // → Sentry exception event
pch.LogI("[Main.start] listening on :8080") // → Sentry breadcrumb

Environment Variables:

Env Var Default Effect
SENTRY_LOGGING false Enable/disable structured logging to Sentry. Accepts: true, 1, t, T, false, 0, f, F (case-insensitive). Invalid values default to false.
SENTRY_DSN empty Sentry ingestion endpoint
SENTRY_RELEASE empty Application version
SENTRY_DEBUG false Verbose SDK diagnostics (local/staging only)

How It Works:

  • All logs via LogI(), LogE(), LogW(), LogD(), LogF() are forwarded to Sentry
  • Error/fatal logs → exception events (appear as issues in Sentry)
  • Info/warn/debug logs → breadcrumbs (appear as context in related issues)
  • [FunctionName] prefix is extracted for issue grouping
  • Thread-safe: hooks are registered synchronously once
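Extracting the [FunctionName] prefix for grouping can be done with a small regexp. This sketch assumes only the bracket convention shown in the logging examples above; the library's actual grouping logic is not reproduced here:

```go
package main

import (
	"fmt"
	"regexp"
)

// prefixRe matches a leading "[Something]" caller prefix, e.g. "[Server.Start]".
var prefixRe = regexp.MustCompile(`^\[([^\]]+)\]`)

// callerPrefix returns the caller prefix a grouping step could key on,
// or "" when the message has none.
func callerPrefix(msg string) string {
	m := prefixRe.FindStringSubmatch(msg)
	if m == nil {
		return ""
	}
	return m[1]
}

func main() {
	fmt.Println(callerPrefix("[Main.start] error: timeout")) // Main.start
	fmt.Println(callerPrefix("no prefix here"))              // (empty)
}
```

This is one reason the [Type.MethodName] convention matters: without it, every message becomes its own group.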

Advanced: Granular Control

For more granular per-level configuration (legacy):

pch.ConfigureLogForwarding(pch.LogForwardConfig{
    ForwardFatal: true,  // default: true
    ForwardError: true,  // default: false
    ForwardWarn:  false, // default: false
    ForwardInfo:  false, // default: false
    // OR autoload: pch.LogForwardConfigFromEnv()
})

Before Process Exit:

pch.FlushSentry(2 * time.Second) // Ensure events are delivered
Response
var resp pch.ResponseApi
resp.Success("ok", data)            // 200
resp.Accepted(data)                 // 202
resp.BadRequest("msg", "ERR_CODE")  // 400
resp.Unauthorized("msg", "")        // 401
resp.InternalServerError(err)       // 500
return c.JSON(resp.Code, resp)
Redis
pch.StoreRedis(key, value, duration)
pch.GetRedis(key)
pch.StoreRedisWithLock(key, value, duration)
pch.AcquireLockWithRetry(key, ttl, retries, delay)
pch.ReleaseLockWithRetry(mutex, retries)
Sentry Error Tracking

Initialize Sentry for error tracking. A non-empty Dsn is required; an empty DSN skips initialization.

For structured logging integration, see Sentry Structured Logging above.

Debug and SENTRY_DEBUG: This only controls SDK internal diagnostics verbosity, not structured logging. When Debug is true, the sentry-go SDK prints verbose diagnostics to the configured DebugWriter (default: application logs with [pchelper.Sentry] prefix).

Deploy context Typical Debug value
Local / active troubleshooting true only while fixing DSN, network, or “events not arriving”
Staging Usually false; true briefly if you are debugging the SDK
Production false (less noise and log volume)
pch.InitSentry(pch.SentryOptions{
    Dsn:         os.Getenv("SENTRY_DSN"),
    Environment: os.Getenv("APP_ENV"),
    Release:     os.Getenv("SENTRY_RELEASE"),
    Debug:       os.Getenv("SENTRY_DEBUG") == "true",
})
pch.SendSentryError(err)
pch.SendSentryMessage("something happened")
pch.FlushSentry(2 * time.Second) // before process exit
S3MinIO Service SDK (sdk/services/s3minio/helper)

The service-scoped helper centralizes repeated request-building and response-validation logic. It is transport-neutral and used by both gRPC and HTTP bridge adapters in the same SDK namespace.

import s3helper "github.com/PayCloud-ID/paycloudhelper/sdk/services/s3minio/helper"

// Adapter implements s3helper.Downloader by mapping to local gRPC client code.
type Adapter struct{}

func (a Adapter) Download(ctx context.Context, req *s3helper.DownloadRequest) (*s3helper.DownloadResponse, error) {
    // map req to service-specific protobuf request and call downstream client
    return &s3helper.DownloadResponse{Code: s3helper.CodeOK, Data: "https://..."}, nil
}

url, err := s3helper.GetPresignedURL(ctx, Adapter{}, "file.pdf", userID, merchantID, "path", "bucket", 300)

Available helpers:

  • BuildDownloadRequest
  • BuildUploadRequestForMultipart
  • BuildUploadRequestForFile
  • GetPresignedURL
  • UploadByMultipart
  • UploadByFile
  • UploadByRequest
Audit Trail

V1 — goroutine-per-call (legacy, still supported):

client := pch.SetUpRabbitMq(host, port, vhost, user, pass, queue, appName)
pch.LogAudittrailData(funcName, desc, source, commType, &keys, &reqResp)
pch.LogAudittrailProcess(funcName, desc, info, &keys)
  • Push() retries up to PushMaxRetries (3) with PushTimeout (15s).
  • Nil client / not-ready → early exit with rate-limited warning.
  • Atomic counter IDs prevent collision under high throughput.

V2 — worker pool with circuit breaker (recommended for new services):

pub := pch.SetUpAuditTrailPublisher(host, port, vhost, user, pass, queue, appName,
    pch.WithWorkerCount(10),
    pch.WithBufferSize(1000),
    pch.WithMessageTTL("60000"),
)
pch.LogAudittrailDataV2(funcName, desc, source, commType, &keys, &reqResp)
pch.LogAudittrailProcessV2(funcName, desc, info, &keys)

// Lifecycle
pub.Stop() // graceful drain on shutdown
  • Bounded worker pool (default 10 workers, 1000 buffer).
  • Circuit breaker: trips after 10 consecutive failures, 30s cooldown.
  • Falls back to V1 goroutine-per-call when publisher is nil.
  • Functional options: WithWorkerCount, WithBufferSize, WithMaxRetries, WithPublishTimeout, WithMessageTTL, WithCircuitBreakerThreshold, WithCircuitBreakerCooldown.
Middleware (Echo)
e.Use(pch.VerifCsrf)       // X-Xsrf-Token validation
e.Use(pch.VerifIdemKey)    // Idempotency-Key deduplication
e.Use(pch.RevokeToken)     // JWT + Redis revocation check

Integrations

Redis
  • Purpose: caching, idempotency, token revoke checks, distributed locks.
  • Connection: provided by consumer service config and initialized through InitializeRedisWithRetry.
  • Key operations: StoreRedis, GetRedis, DeleteRedis, AcquireLockWithRetry, ReleaseLockWithRetry.
RabbitMQ
  • Purpose: publish audit trail payloads (v1 and v2 publisher modes).
  • Connection: set through SetUpRabbitMq or v2 publisher setup APIs.
  • Key operations: process/data audit log publishing with retry and backpressure controls.
Sentry
  • Purpose: exception capture and optional structured log forwarding.
  • Connection: InitSentry with DSN/environment/release options.
  • Key operations: panic/error forwarding, breadcrumb stream from logger hooks.

Configuration

All configuration is loaded from environment variables in InitializeApp():

Var Required Default Purpose
APP_NAME Yes "" Service name (used in Sentry, logs)
APP_ENV Yes "" develop / staging / production
REDIS_HOST For Redis "" Redis server
REDIS_PORT For Redis 6379 Redis port
REDIS_PASSWORD No "" Redis auth
SENTRY_DSN For Sentry "" Sentry project DSN (validated in InitializeApp; empty disables Sentry)
SENTRY_LOGGING No false Enable structured logging to Sentry (all log levels)
SENTRY_DEBUG No false Not read by this library. Services pass Debug: os.Getenv("SENTRY_DEBUG") == "true" into InitSentry if desired; controls SDK diagnostics verbosity.
LOG_FORWARD_FATAL No true Forward Fatal → Sentry (legacy; use SENTRY_LOGGING instead)
LOG_FORWARD_ERROR No true Forward Error → Sentry (legacy; use SENTRY_LOGGING instead)
LOG_FORWARD_WARN No false Forward Warn → Sentry (legacy; use SENTRY_LOGGING instead)
LOG_FORWARD_INFO No false Forward Info → Sentry (legacy; use SENTRY_LOGGING instead)
TRANSACTION_REDIS_LOCK_TIMEOUT No 2000 (ms) Distributed lock TTL
TRANSACTION_REDIS_BACKOFF No 10 (ms) Lock retry backoff
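A minimal .env sketch to pair with the table above. The values below are illustrative placeholders, not defaults shipped by the library:

```shell
# Identity (required; read in InitializeApp)
APP_NAME=payment-gateway
APP_ENV=develop

# Redis (needed only if the service calls InitializeRedisWithRetry)
REDIS_HOST=redis.internal
REDIS_PORT=6379
REDIS_PASSWORD=

# Sentry (an empty DSN disables Sentry entirely)
SENTRY_DSN=
SENTRY_LOGGING=false

# Distributed locks (milliseconds)
TRANSACTION_REDIS_LOCK_TIMEOUT=2000
TRANSACTION_REDIS_BACKOFF=10
```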

Testing

Unit tests cover helpers, headers, configuration, response handling, Redis options/mutex/LockError, init/app env, validator rules, and subpackages (phhelper, phjson, phlogger, phsentry). Integration tests for Redis and middleware are skipped by default (require Redis/Echo).

Run all tests (from repo root)
./scripts/run_tests.sh

Options:

Option Description
-v, --verbose Verbose test output
-race Run with race detector (required for concurrency-related changes)
-cover Print coverage per package
-coverprofile Write coverage.out and print go tool cover -func summary
-short Skip long-running tests
-h, --help Show usage

Examples:

./scripts/run_tests.sh -v
./scripts/run_tests.sh -race
./scripts/run_tests.sh -coverprofile
go tool cover -html=coverage.out   # open HTML report (after -coverprofile)

Makefile (short tests + merged coverage):

make test-go                 # go test -short ./...
make test-coverage           # merged -coverpkg=$(COVERAGE_PKGS)
make coverage-inventory      # coverage.out + coverage-func.txt + summary (same -coverpkg defaults)
make test-coverage-check     # fail if merged total < COVERAGE_MIN (default 65; goal 90%)
make test-coverage-check COVERAGE_MIN=90   # enforce 90% when the suite is ready
make test-coverage-integration   # optional: same merged -coverpkg without -short

COVERAGE_PKGS defaults to all packages from go list ./... except phaudittrailv0 (legacy dial-heavy) and sdk/shared/* (doc-only placeholder packages). Use COVERAGE_PKGS=./... to include everything in the merged profile.

Without the script:

go test ./...
go test ./... -race
go test ./... -cover
go test ./... -coverprofile=coverage.out -covermode=atomic

Makefile / run.sh: targets mirror CI (build, vet, test) plus test-race, test-cover, and deps. For this repo (library, no main), ./run.sh runs go test -race ./.... Regenerate or adapt for other layouts:

make help
./scripts/generate-makefile.sh [--service-path DIR] [--dry-run]
Code quality
  • Lint: go vet ./...
  • Build: go build ./...
  • Tests follow Go testing conventions, table-driven where appropriate, with clear names and edge-case coverage. Integration-heavy code (AMQP, audit trail, health checks, Echo middleware) is covered by integration tests when Redis/services are available.

Verifying the library

To confirm the library is working correctly and all tested behaviour passes:

  1. Build — compiles without errors:
 go build ./...
  2. Vet — no suspicious constructs:
 go vet ./...
  3. Tests — all unit tests pass:
 go test ./...
  4. Race detector (recommended for concurrency-related changes):
 go test -race ./...

One-liner from repo root:

go build ./... && go vet ./... && go test ./...

Or use the script: ./scripts/run_tests.sh (add -race for race detection).


Consumer Migration (v2.0.0 / Redis v9)

paycloudhelper v2.0.0 introduces a major dependency alignment on github.com/redis/go-redis/v9. Consumer services should treat this as a coordinated migration, not only a module bump.

Migration Checklist for Consumer Services
  1. Update module dependency:
    go get github.com/PayCloud-ID/paycloudhelper@v2.0.0
    go mod tidy
    
  2. Replace direct Redis imports from github.com/go-redis/redis/v8 to github.com/redis/go-redis/v9.
  3. Keep startup initialization through InitializeRedisWithRetry and preserve key naming conventions.
  4. Re-run service validation gates:
    • go build ./...
    • go vet ./...
    • go test ./...
    • go test -race ./...
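Step 2 of the checklist (the import path swap) is mechanical and can be scripted. The find/sed one-liner below is a sketch, demonstrated here on a throwaway file; in a real service run it from the repo root, review the diff, then run go mod tidy (note: the -i flag shown is GNU sed syntax):

```shell
# Sketch: rewrite go-redis v8 import paths to v9 in every .go file.
mkdir -p /tmp/redis-v9-demo && cd /tmp/redis-v9-demo
cat > client.go <<'EOF'
package demo

import "github.com/go-redis/redis/v8"

var _ = redis.Nil
EOF

find . -name '*.go' -exec \
  sed -i 's|github.com/go-redis/redis/v8|github.com/redis/go-redis/v9|g' {} +

grep 'import' client.go   # the file now imports github.com/redis/go-redis/v9
```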
Migration Skills for Services

Use these skill packs from this repository as migration playbooks:

  • .agents/skills/redis-v9-consumer-migration-core/
  • .agents/skills/redis-v9-consumer-migration-echo-api/
  • .agents/skills/redis-v9-consumer-migration-worker/
  • .agents/skills/redis-v9-consumer-migration-scheduler/

CI (Bitbucket Pipelines)

Every push to develop and main runs a pipeline that:

  • Builds the module (go build ./...)
  • Runs the linter (go vet ./...)
  • Runs all unit tests (go test ./...)

If any step fails, the pipeline fails. Fix the code and push again.

Note: Pipelines run after the push. The push itself is not blocked. To keep main (or develop) from accepting broken code:

  1. In Bitbucket: Repository settings → Branch restrictions.
  2. Add a restriction for main (and optionally develop): Require passing pipelines (and/or require pull requests). Then merges to that branch only succeed when the pipeline is green.

Pipeline config: bitbucket-pipelines.yml in the repo root.


Versioning

Bump When
PATCH Bug fixes, zero behavior change
MINOR New backward-compatible features
MAJOR Breaking changes — requires coordinating all consumer updates

S3MinIO Shared SDK Workflow

Use a single canonical provider proto and regenerate shared helper SDK packages in this repository.

  1. Edit canonical proto in paycloud-be-s3minio-manager/proto/s3minio.proto.
  2. Run ./scripts/proto/update-s3minio-proto.sh from paycloudhelper.
  3. Run ./scripts/proto/gen-s3minio-client.sh to regenerate and test helper packages.
  4. Optionally run ./scripts/proto/check-stub-drift.sh in CI or before release.
  5. Release paycloudhelper and bump consumer service dependencies.

Governance check for direct internal HTTP usage:

./scripts/check-no-direct-s3minio-http.sh

Design and rollout references are stored in docs/sdk/.

Scaffold a future service SDK with:

make proto.service.scaffold SERVICE=clientpg

Automation Prompts

  • prompt-migrate-bitbucket-pipelines-to-github-actions.md
    • Reads bitbucket-pipelines.yml and generates an equivalent GitHub Actions workflow.
    • Preserves build/vet/test/coverage gates and branch triggers.

Contributing

  1. git checkout -b feat/your-feature
  2. Write failing test first (TDD)
  3. Implement minimal code
  4. go test -race ./... — must pass
  5. go build ./... — must pass
  6. git tag vX.Y.Z when ready to release

See .agents/rules/ and AGENTS.md for full development rules.

Documentation

Overview

This middleware function revokes the JWT when the user's status is blocked or suspended.


Constants

const (
	CmdAuditTrailProcess = "audit-trail-process"
	CmdAuditTrailData    = "audit-trail-data"
)
const (
	AuditTrxStateRequestReceived     = "request_received"
	AuditTrxStateRequestValidated    = "request_validated"
	AuditTrxStateOrderCreated        = "order_created"
	AuditTrxStateChannelSelected     = "channel_selected"
	AuditTrxStateChannelProcessed    = "channel_processed"
	AuditTrxStateVendorRequestSent   = "vendor_request_sent"
	AuditTrxStateVendorTokenAcquired = "vendor_token_acquired"
	AuditTrxStateQrGenerated         = "qr_generated"
	AuditTrxStateVendorRequestFailed = "vendor_request_failed"
	AuditTrxStateTransactionUpdated  = "transaction_updated"
	AuditTrxStatePaymentNotified     = "payment_notified"
	AuditTrxStateResponseReturned    = "response_returned"
	AuditTrxStateOrderExpired        = "order_expired"
	AuditTrxStatePaymentReceived     = "payment_received"
	AuditTrxStateStatusChecked       = "status_checked"
)
const (
	AuditTrxStatusProcessing = "processing"
	AuditTrxStatusSuccess    = "success"
	AuditTrxStatusFailed     = "failed"
	AuditTrxStatusExpired    = "expired"
)
const (
	FieldTraceID    = phtrace.FieldTraceID
	FieldSpanID     = phtrace.FieldSpanID
	FieldTicketID   = phtrace.FieldTicketID
	FieldReffNo     = phtrace.FieldReffNo
	FieldMerchantID = phtrace.FieldMerchantID
	FieldOrderID    = phtrace.FieldOrderID
	FieldTrxID      = phtrace.FieldTrxID
	FieldTrxNo      = phtrace.FieldTrxNo
	FieldService    = phtrace.FieldService
	FieldRoute      = phtrace.FieldRoute
	FieldVendor     = phtrace.FieldVendor
)

Standard log/span field keys re-exported from phtrace so callers can use pchelper.FieldTicketID instead of importing phtrace directly. Promoting these constants to the top-level package is part of Phase 2 Task 2.9 so the field names can't drift across services.

const (
	Numeric string = "^-?[0-9]+$"
	Key     string = "^[-a-zA-Z0-9_-]+$"
)
const CmdAuditTrailTrx = "audit-trail-trx"

Variables

var (
	PushMaxRetries = 3                // max retry attempts for Push()
	PushTimeout    = 15 * time.Second // total timeout for a single Push() call
)

Push retry and timeout configuration. Package-level vars allow consumer services to override before calling SetUpRabbitMq.

var (
	Log                  = phlogger.Log
	Logf                 = Log.Logf
	GinLevel golog.Level = phlogger.GinLevel
)
var (
	DefaultRedisTimeout = 1000 * time.Millisecond
)

Functions

func AcquireLock

func AcquireLock(key string, ttl time.Duration) (bool, error)

AcquireLock acquires a distributed lock using RedSync

func AcquireLockWithRetry

func AcquireLockWithRetry(key string, ttl time.Duration, maxRetries int, retryDelay time.Duration) (*redsync.Mutex, bool, error)

AcquireLockWithRetry attempts to acquire a distributed lock with retries.

key: the lock key
ttl: lock time-to-live
maxRetries: maximum number of retry attempts
retryDelay: delay between retries

Returns:
  - mutex: the lock mutex (nil if not acquired)
  - acquired: whether the lock was acquired
  - err: any error that occurred

func AddValidatorLibs

func AddValidatorLibs()

func ConfigureLogForwarding

func ConfigureLogForwarding(cfg phlogger.LogForwardConfig)

ConfigureLogForwarding registers Sentry forwarding hooks based on cfg. Call once at startup AFTER InitSentry(). Safe to call multiple times — each call adds hooks cumulatively; use phlogger.ClearLogHooks() if you need a reset.

Example (startup):

pch.InitSentry(pch.SentryOptions{...})
pch.ConfigureLogForwarding(pch.LogForwardConfigFromEnv())

func ConfigureSentryLogging

func ConfigureSentryLogging(enable bool)

ConfigureSentryLogging enables or disables forwarding of all log levels to Sentry. When enabled (true), all logs emitted via LogI, LogE, LogW, LogD, LogF are forwarded to Sentry as structured events and breadcrumbs.

Call once at startup AFTER InitSentry() has completed. This is the recommended simpler API compared to ConfigureLogForwarding for basic setup.

Example (startup):

pch.InitSentry(pch.SentryOptions{...})
pch.ConfigureSentryLogging(true)  // enable all log levels to Sentry
pch.LogE("[Main] startup error: %v", err)  // forwarded to Sentry

To enable via environment variable (default):

// In your service
enableSentryLogging := os.Getenv("SENTRY_LOGGING") == "true"
pch.ConfigureSentryLogging(enableSentryLogging)

func DeleteRedis

func DeleteRedis(id string) error

DeleteRedis deletes data from Redis (backward compatible wrapper)

func DeleteRedisWithContext

func DeleteRedisWithContext(ctx context.Context, id string) error

DeleteRedisWithContext deletes data from Redis with a custom context

func FlushSentry

func FlushSentry(timeout time.Duration)

FlushSentry waits up to timeout for buffered Sentry events to drain. Call before process shutdown to avoid losing queued errors.

func GetAppEnv

func GetAppEnv() string

func GetAppName

func GetAppName() string

func GetConfigurationStatus

func GetConfigurationStatus() map[string]interface{}

GetConfigurationStatus returns a summary of configuration validation. Useful for health check endpoints.

func GetMutex

func GetMutex(key string) *redsync.Mutex

GetMutex retrieves a mutex from the map

func GetOrGenerateRequestID

func GetOrGenerateRequestID(headerValue string) string

GetOrGenerateRequestID extracts X-Request-ID header or generates a new one

func GetRedis

func GetRedis(id string) (string, error)

GetRedis retrieves data from Redis (backward compatible wrapper)

func GetRedisClient

func GetRedisClient(redisHost, redisPort, redisPassword string, redisDb int) error

func GetRedisOptions

func GetRedisOptions() *redis.Options

func GetRedisPoolClient

func GetRedisPoolClient() (*redis.Client, error)

func GetRedisWithContext

func GetRedisWithContext(ctx context.Context, id string) (string, error)

GetRedisWithContext retrieves data from Redis with a custom context

func GetSentryClient

func GetSentryClient() *sentry.Client

func GetSentryClientOptions

func GetSentryClientOptions() *sentry.ClientOptions

func GetSentryData

func GetSentryData() *phsentry.SentryData

func GetTrxRedisBackoff

func GetTrxRedisBackoff() int

func GetTrxRedisLockTimeout

func GetTrxRedisLockTimeout() time.Duration

func IncrementMetric

func IncrementMetric(event string)

IncrementMetric records one occurrence of a named event via the registered hook.

func IncrementMetricBy

func IncrementMetricBy(event string, n int64)

IncrementMetricBy records `n` occurrences of a named event via the registered hook.

func InitRedSyncOnce

func InitRedSyncOnce() error

InitRedSyncOnce initializes the redSync instance once

func InitRedisOptions

func InitRedisOptions(rawOpt redis.Options) *redis.Options

func InitSentry

func InitSentry(options phsentry.SentryOptions) *sentry.Client

InitSentry initializes the global Sentry client from options; nil when Dsn is empty. Debug enables sentry-go SDK diagnostic logs (not the same as SendSentryDebug events).

func InitSentryOptions

func InitSentryOptions(options phsentry.SentryOptions)

InitSentryOptions prepares merged Sentry client options without instantiating the client. See phsentry.SentryOptions (including Debug / service-level SENTRY_DEBUG wiring).

func InitializeApp

func InitializeApp()

func InitializeLogger

func InitializeLogger()

func InitializeRedis

func InitializeRedis(opt redis.Options)

InitializeRedis initializes Redis with default retry behavior (backward compatible wrapper). For advanced retry configuration, use InitializeRedisWithRetry instead.

func InitializeRedisWithRetry

func InitializeRedisWithRetry(opts RedisInitOptions) error

InitializeRedisWithRetry initializes the Redis connection with configurable retry logic. This provides better resilience against transient connection failures during startup.

func InitializeSampler

func InitializeSampler(cfg SamplerConfig)

InitializeSampler sets the global sampler config. Called automatically by InitializeLogger() with env-aware defaults. Use to override defaults at startup.

func IsAuditTrailTrxEnabled

func IsAuditTrailTrxEnabled() bool

func JSONEncode

func JSONEncode(obj interface{}) string

func JsonMinify

func JsonMinify(jsonB []byte) ([]byte, error)

func LogAuditTrailTrx

func LogAuditTrailTrx(data AuditTrailTrx)

LogAuditTrailTrx publishes one transaction audit event to the dedicated queue.

func LogAudittrailData

func LogAudittrailData(funcName, desc, source, commType string, key *[]string, data *RequestAndResponse)

LogAudittrailData adds an audit trail data event.

func LogAudittrailDataV2

func LogAudittrailDataV2(funcName, desc, source, commType string, key *[]string, data *RequestAndResponse)

LogAudittrailDataV2 publishes an audit trail data event via the worker pool. Falls back to the legacy LogAudittrailData if the publisher is not set up.

func LogAudittrailProcess

func LogAudittrailProcess(funcName, desc, info string, key *[]string)

LogAudittrailProcess adds an audit trail process event.

func LogAudittrailProcessV2

func LogAudittrailProcessV2(funcName, desc, info string, key *[]string)

LogAudittrailProcessV2 publishes an audit trail process event via the worker pool. Falls back to the legacy LogAudittrailProcess if the publisher is not set up.

func LogConfigurationWarnings

func LogConfigurationWarnings()

LogConfigurationWarnings logs all configuration validation warnings. This is a convenience function to log validation results at startup.

func LogCtx

func LogCtx(ctx context.Context, fields ...string) *phtrace.LogContextCtx

LogCtx returns a ctx-bound logger that lets you add standard fields (ticket_id, reff_no, merchant_id, order_id, trx_no, etc.) once and have them appear on every log line. See phtrace.WithFields.

func LogD

func LogD(format string, args ...interface{})

LogD logs at Debug level (delegates to phlogger wrapper with hooks).

func LogDCtx

func LogDCtx(ctx context.Context, format string, args ...interface{})

LogDCtx logs at Debug level with trace context prefix.

func LogDRated

func LogDRated(key string, format string, args ...interface{})

LogDRated logs at Debug level with rate limiting using key and the default 50ms window.

func LogDRatedW

func LogDRatedW(key string, window time.Duration, format string, args ...interface{})

LogDRatedW logs at Debug level with rate limiting using key and an explicit window.

func LogE

func LogE(format string, args ...interface{})

LogE logs at Error level (delegates to phlogger wrapper with hooks).

func LogECtx

func LogECtx(ctx context.Context, format string, args ...interface{})

LogECtx logs at Error level with trace context prefix and records an exception event on the active span (if any).

func LogERated

func LogERated(key string, format string, args ...interface{})

LogERated logs at Error level with rate limiting using key and the default 50ms window.

func LogERatedW

func LogERatedW(key string, window time.Duration, format string, args ...interface{})

LogERatedW logs at Error level with rate limiting using key and an explicit window.

func LogErr

func LogErr(err error)

LogErr logs an error value.

func LogF

func LogF(format string, args ...interface{})

LogF logs at Fatal level (process exits after hook execution).

func LogForwardConfigFromEnv

func LogForwardConfigFromEnv() phlogger.LogForwardConfig

LogForwardConfigFromEnv returns a LogForwardConfig loaded from environment variables. See phlogger.LogForwardConfigFromEnv for variable names and defaults.

func LogI

func LogI(format string, args ...interface{})

LogI logs at Info level (delegates to phlogger wrapper with hooks).

func LogICtx

func LogICtx(ctx context.Context, format string, args ...interface{})

LogICtx logs at Info level with trace context prefix.

func LogIRated

func LogIRated(key string, format string, args ...interface{})

LogIRated logs at Info level with rate limiting using key and the default 50ms window.

func LogIRatedW

func LogIRatedW(key string, window time.Duration, format string, args ...interface{})

LogIRatedW logs at Info level with rate limiting using key and an explicit window.

func LogJ

func LogJ(arg interface{})

LogJ logs arg as compact JSON.

func LogJI

func LogJI(arg interface{})

LogJI logs arg as indented JSON.

func LogSetLevel

func LogSetLevel(levelName string)

func LogW

func LogW(format string, args ...interface{})

LogW logs at Warning level (delegates to phlogger wrapper with hooks).

func LogWCtx

func LogWCtx(ctx context.Context, format string, args ...interface{})

LogWCtx logs at Warning level with trace context prefix.

func LogWRated

func LogWRated(key string, format string, args ...interface{})

LogWRated logs at Warning level with rate limiting using key and the default 50ms window.

func LogWRatedW

func LogWRatedW(key string, window time.Duration, format string, args ...interface{})

LogWRatedW logs at Warning level with rate limiting using key and an explicit window.

func LoggerErrorHub

func LoggerErrorHub(err interface{}, args ...interface{})

func NewAmqp

func NewAmqp(addr string, c *AmqpClient)

NewAmqp creates a new consumer state instance, and automatically attempts to connect to the server.

func NewSentryData

func NewSentryData(dt *phsentry.SentryData)

func ReadBody

func ReadBody(c echo.Context, idem string) (map[string]interface{}, string, error)

ReadBody reads the body payload and validates it against the key.

func RegisterMetricsHook

func RegisterMetricsHook(hook MetricsHook)

RegisterMetricsHook sets the global metrics callback for high-frequency events. Consumers wire their own backend (prometheus, statsd, etc.).

func ReleaseLock

func ReleaseLock(key string) error

func ReleaseLockWithRetry

func ReleaseLockWithRetry(mutex *redsync.Mutex, maxRetries int) error

ReleaseLockWithRetry releases a previously acquired lock with a retry mechanism.

func RemoveMutex

func RemoveMutex(key string)

RemoveMutex removes a mutex from the map

func RevokeToken

func RevokeToken(next echo.HandlerFunc) echo.HandlerFunc

func SendSentryDebug

func SendSentryDebug(err error, args ...string)

func SendSentryError

func SendSentryError(err error, args ...string)

func SendSentryErrorWithContext

func SendSentryErrorWithContext(ctx context.Context, err error, args ...string)

SendSentryErrorWithContext captures an error with request context (e.g. from Echo). Extra string args are passed through to phsentry for breadcrumb metadata.

func SendSentryEvent

func SendSentryEvent(event *sentry.Event, args ...string)

func SendSentryMessage

func SendSentryMessage(msg string, args ...string)

func SendSentryWarning

func SendSentryWarning(err error, args ...string)

func SendToSentryDebug

func SendToSentryDebug(err error, service, module, function string)

func SendToSentryError

func SendToSentryError(err error, service, module, function string)

func SendToSentryEvent

func SendToSentryEvent(event *sentry.Event, service, module, function string)

func SendToSentryMessage

func SendToSentryMessage(message string, service, module, function string)

func SendToSentryWarning

func SendToSentryWarning(err error, service, module, function string)

func SentryEnabled

func SentryEnabled() bool

SentryEnabled returns true if Sentry has been initialized.

func SentryLoggingFromEnv

func SentryLoggingFromEnv() bool

SentryLoggingFromEnv loads the SENTRY_LOGGING environment variable and returns its boolean value. Parses common boolean formats: "1", "t", "T", "true", "TRUE", "True", "0", "f", "F", "false", "FALSE", "False". Returns false for invalid or unset values (default behavior).

Call once at startup AFTER InitSentry() has completed, then pass the result to ConfigureSentryLogging. This is the recommended pattern for environment-driven Sentry logging control.

Example (typical service startup):

pch.InitSentry(pch.SentryOptions{Dsn: os.Getenv("SENTRY_DSN"), ...})
pch.ConfigureSentryLogging(pch.SentryLoggingFromEnv())

Environment variable examples:

SENTRY_LOGGING=true   → enable
SENTRY_LOGGING=1      → enable
SENTRY_LOGGING=false  → disable (default)
SENTRY_LOGGING=0      → disable (default)
(unset)               → disable (default)

func SetAppEnv

func SetAppEnv(v string)

func SetAppName

func SetAppName(v string)

func StoreMutex

func StoreMutex(key string, mutex *redsync.Mutex)

StoreMutex stores a mutex in the map for later release

func StoreRedis

func StoreRedis(id string, data interface{}, duration time.Duration) error

StoreRedis stores data to Redis (backward-compatible wrapper).

func StoreRedisWithContext

func StoreRedisWithContext(ctx context.Context, id string, data interface{}, duration time.Duration) error

StoreRedisWithContext stores data to Redis with a custom context, allowing the caller to control cancellation and timeout behavior.

func StoreRedisWithLock

func StoreRedisWithLock(id string, data interface{}, duration time.Duration) (err error)

func ToJson

func ToJson(data interface{}) string

ToJson encodes an object to a JSON string.

func ToJsonIndent

func ToJsonIndent(data interface{}) string

ToJsonIndent encodes an object to an indented (beautified) JSON string.

func VerifCsrf

func VerifCsrf(next echo.HandlerFunc) echo.HandlerFunc

func VerifIdemKey

func VerifIdemKey(next echo.HandlerFunc) echo.HandlerFunc

func VerifyMD5

func VerifyMD5(idemKey string, request []byte) (string, error)

VerifyMD5 generates an MD5 hash and compares the result with the submitted key.
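The check amounts to hashing the request body and comparing hex digests. A self-contained sketch of that comparison (md5Key and matchesKey are illustrative helpers, not the library API):

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

// md5Key computes the hex MD5 digest of a request body, the kind of value
// an idempotency-key check like VerifyMD5 would compare against.
func md5Key(request []byte) string {
	sum := md5.Sum(request)
	return hex.EncodeToString(sum[:])
}

// matchesKey reports whether the submitted key equals the body's digest.
func matchesKey(idemKey string, request []byte) bool {
	return idemKey == md5Key(request)
}

func main() {
	body := []byte(`{"amount":1000}`)
	key := md5Key(body)
	fmt.Println(matchesKey(key, body))                    // true
	fmt.Println(matchesKey(key, []byte(`{"amount":2000}`))) // false: body changed
}
```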

Types

type AmqpClient

type AmqpClient struct {
	// contains filtered or unexported fields
}

AmqpClient is the base struct for handling connection recovery, consumption, and publishing. Note that this struct has an internal mutex to safeguard against data races. As you develop and iterate over this example, you may need to add further locks or safeguards to keep your application safe from data races.

func NewAmqpClient

func NewAmqpClient(queueName, connName, addr string, config *amqp.Config) *AmqpClient

NewAmqpClient creates a new consumer state instance, and automatically attempts to connect to the server.

func SetUpRabbitMq

func SetUpRabbitMq(host, port, vhost, username, password, auditTrailQue, appName string) *AmqpClient

SetUpRabbitMq must be called by the service in its main function. NOTE: this is for audit trail purposes.

func (*AmqpClient) AmqpConfig

func (c *AmqpClient) AmqpConfig() amqp.Config

func (*AmqpClient) Cc

func (c *AmqpClient) Cc() error

Cc is for debug purposes. TODO: debugging close channel.

func (*AmqpClient) Channel

func (c *AmqpClient) Channel() *amqp.Channel

func (*AmqpClient) Close

func (c *AmqpClient) Close() error

Close will cleanly shut down the channel and connection.

func (*AmqpClient) ConnName

func (c *AmqpClient) ConnName() string

func (*AmqpClient) Consume

func (c *AmqpClient) Consume() (<-chan amqp.Delivery, error)

Consume will continuously put queue items on the channel. It is required to call delivery.Ack when it has been successfully processed, or delivery.Nack when it fails. Ignoring this will cause data to build up on the server.

func (*AmqpClient) ErrLog

func (c *AmqpClient) ErrLog() *log.Logger

func (*AmqpClient) InfoLog

func (c *AmqpClient) InfoLog() *log.Logger

func (*AmqpClient) IsReady

func (c *AmqpClient) IsReady() bool

IsReady returns true if the AMQP client has an active connection and channel.

func (*AmqpClient) Push

func (c *AmqpClient) Push(data []byte) error

Push will push data onto the queue, and wait for a confirmation. Retries up to PushMaxRetries times with a total timeout of PushTimeout. Returns an error if all retries are exhausted or the timeout is reached.

func (*AmqpClient) PushWithTTL

func (c *AmqpClient) PushWithTTL(data []byte, ttl string) error

PushWithTTL pushes data to the queue with a configurable TTL. An empty ttl means the message never expires in the queue.

func (*AmqpClient) SetAmqpConfig

func (c *AmqpClient) SetAmqpConfig(amqpConfig *amqp.Config)

func (*AmqpClient) UnsafePush

func (c *AmqpClient) UnsafePush(data []byte) error

UnsafePush will push to the queue without checking for confirmation. It returns an error if it fails to connect. No guarantees are provided for whether the server will receive the message.

func (*AmqpClient) WaitForReady

func (c *AmqpClient) WaitForReady(timeout time.Duration) bool

WaitForReady blocks until the client is ready or the timeout expires. Returns true if ready, false if timed out.

type AuditPublisher

type AuditPublisher struct {
	// contains filtered or unexported fields
}

AuditPublisher provides production-grade audit message publishing with bounded concurrency via a worker pool, backpressure via a buffered channel, and a circuit breaker to avoid wasting resources when RabbitMQ is down.

func GetAuditPublisher

func GetAuditPublisher() *AuditPublisher

GetAuditPublisher returns the package-level AuditPublisher, or nil if not initialized.

func GetAuditTrailTrxPublisher

func GetAuditTrailTrxPublisher() *AuditPublisher

func NewAuditPublisher

func NewAuditPublisher(client *AmqpClient, opts ...AuditPublisherOption) *AuditPublisher

NewAuditPublisher creates an AuditPublisher with the given AMQP client and options. Call Start() to launch worker goroutines.

func SetUpAuditTrailPublisher

func SetUpAuditTrailPublisher(host, port, vhost, username, password, queue, appName string, opts ...AuditPublisherOption) *AuditPublisher

SetUpAuditTrailPublisher creates an AmqpClient and AuditPublisher with a production-grade worker pool. Returns the publisher for lifecycle management. Existing SetUpRabbitMq is unchanged — services can migrate at their own pace.

func SetUpAuditTrailTrxPublisher

func SetUpAuditTrailTrxPublisher(
	enabled bool,
	host, port, vhost, username, password, queue, appName string,
	opts ...AuditPublisherOption,
) *AuditPublisher

SetUpAuditTrailTrxPublisher initializes dedicated transaction-audit publishing.

func (*AuditPublisher) Start

func (p *AuditPublisher) Start()

Start launches the worker goroutines. Call Stop() to shut down gracefully.

func (*AuditPublisher) Stop

func (p *AuditPublisher) Stop()

Stop gracefully shuts down the publisher: closes the channel, waits for workers to drain remaining messages, and returns.

func (*AuditPublisher) Submit

func (p *AuditPublisher) Submit(payload MessagePayloadAudit)

Submit adds a message to the worker pool. Non-blocking: if the buffer is full or the circuit breaker is open, the message is dropped with a warning log.
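The non-blocking enqueue described above is the classic select-with-default pattern on a buffered channel. An illustrative sketch (submit is a stand-in, not the AuditPublisher internals):

```go
package main

import "fmt"

// submit mimics the documented non-blocking Submit: try to enqueue on the
// buffered channel; if the buffer is full, drop and report it.
func submit(queue chan string, payload string) bool {
	select {
	case queue <- payload:
		return true
	default:
		return false // buffer full: message dropped (a real impl logs a warning)
	}
}

func main() {
	queue := make(chan string, 2)       // tiny buffer to demonstrate backpressure
	fmt.Println(submit(queue, "evt-1")) // true
	fmt.Println(submit(queue, "evt-2")) // true
	fmt.Println(submit(queue, "evt-3")) // false, buffer full
}
```

The drop-on-full choice trades durability for latency: audit publishing never blocks the request path, which matches the backpressure behavior the type documents.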

type AuditPublisherOption

type AuditPublisherOption func(*AuditPublisher)

AuditPublisherOption configures an AuditPublisher via functional options.

func WithBufferSize

func WithBufferSize(n int) AuditPublisherOption

WithBufferSize sets the channel buffer size (default 1000).

func WithCircuitBreakerCooldown

func WithCircuitBreakerCooldown(d time.Duration) AuditPublisherOption

WithCircuitBreakerCooldown sets the cooldown duration when circuit is open (default 30s).

func WithCircuitBreakerThreshold

func WithCircuitBreakerThreshold(n int) AuditPublisherOption

WithCircuitBreakerThreshold sets consecutive failures before circuit opens (default 10).

func WithMaxRetries

func WithMaxRetries(n int) AuditPublisherOption

WithMaxRetries sets the maximum push retry count per message (default 3).

func WithMessageTTL

func WithMessageTTL(ttl string) AuditPublisherOption

WithMessageTTL sets the message TTL for published audit messages. Empty string means no expiration (recommended for audit data).

func WithPublishTimeout

func WithPublishTimeout(d time.Duration) AuditPublisherOption

WithPublishTimeout sets the total timeout for a single push call (default 15s).

func WithWorkerCount

func WithWorkerCount(n int) AuditPublisherOption

WithWorkerCount sets the number of worker goroutines (default 10).

type AuditTrailData

type AuditTrailData struct {
	Subject           string              `json:"Subject,omitempty"`
	Function          string              `json:"Function,omitempty"`
	Description       string              `json:"Description,omitempty"`
	Key               []string            `json:"Key"`    //
	Source            string              `json:"Source"` // internal or external
	CommunicationType string              `json:"CommunicationType"`
	Data              *RequestAndResponse `json:"Data"`
}

type AuditTrailProcess

type AuditTrailProcess struct {
	Subject     string                `json:"Subject,omitempty"`
	Function    string                `json:"Function,omitempty"`
	Description string                `json:"Description,omitempty"`
	Key         []string              `json:"Key"`
	Data        DataAuditTrailProcess `json:"Data"`
}

type AuditTrailTrx

type AuditTrailTrx struct {
	ReffNo   string `json:"reffNo"`
	OrderNo  string `json:"orderNo"`
	TicketId string `json:"ticketId,omitempty"`

	Status  string `json:"status"`
	State   string `json:"state"`
	Message string `json:"message"`

	Service           string `json:"service"`
	Function          string `json:"function"`
	Description       string `json:"description"`
	CommunicationType string `json:"communicationType"`

	EventTime  string `json:"eventTime"`
	DurationMs int64  `json:"durationMs,omitempty"`

	Amount      string `json:"amount,omitempty"`
	Currency    string `json:"currency,omitempty"`
	MerchantNo  string `json:"merchantNo,omitempty"`
	PaymentCode string `json:"paymentCode,omitempty"`
	QrValue     string `json:"qrValue,omitempty"`
	Rrn         string `json:"rrn,omitempty"`
	ErrorCode   string `json:"errorCode,omitempty"`
	VendorName  string `json:"vendorName,omitempty"`

	Request  interface{} `json:"request,omitempty"`
	Response interface{} `json:"response,omitempty"`

	Metadata map[string]interface{} `json:"metadata,omitempty"`

	CreatedAt time.Time `json:"createdAt,omitempty"`
}

AuditTrailTrx represents one transaction lifecycle event.

type ConfigError

type ConfigError struct {
	Field   string `json:"field"`
	Message string `json:"message"`
	Level   string `json:"level"` // "warning" or "error"
}

ConfigError represents a configuration validation error

func ValidateConfiguration

func ValidateConfiguration() []ConfigError

ValidateConfiguration validates the runtime configuration and returns a slice of ConfigError for any issues found.

type DataAuditTrailProcess

type DataAuditTrailProcess struct {
	Time string `json:"Time"` // time is handled by the library
	Info string `json:"Info"` // message the service/app wants printed in the log
}

type Detail

type Detail struct {
	StatusCode int         `json:"StatusCode"`
	Message    string      `json:"Message"`
	Data       interface{} `json:"Data,omitempty"`
}

type Headers

type Headers struct {
	IdempotencyKey string `json:"idem_key"`
	Session        string `json:"session"`
	Csrf           string `json:"csrf"`
	RequestID      string `json:"request_id"` // Request ID for tracing
}

func (*Headers) ValiadateHeaderCsrf

func (h *Headers) ValiadateHeaderCsrf() interface{}

func (*Headers) ValiadateHeaderIdem

func (h *Headers) ValiadateHeaderIdem() interface{}

type HealthCheck

type HealthCheck struct {
	AppName   string         `json:"app_name"`
	AppEnv    string         `json:"app_env"`
	Timestamp time.Time      `json:"timestamp"`
	Overall   string         `json:"overall_status"`
	Checks    []HealthStatus `json:"checks"`
}

HealthCheck represents the overall health check result

func CheckHealth

func CheckHealth() *HealthCheck

CheckHealth performs health checks on all initialized components and returns a comprehensive health status for Redis, RabbitMQ, and other components. It is backward compatible: safe to call even if components aren't initialized.

type HealthStatus

type HealthStatus struct {
	Component string `json:"component"`
	Status    string `json:"status"` // "healthy", "degraded", "unhealthy"
	Message   string `json:"message,omitempty"`
	Latency   int64  `json:"latency_ms,omitempty"`
}

HealthStatus represents the health status of a component

type IRqAutoConnect

type IRqAutoConnect interface {
	StartConnection(username, password, host, port, vhost string) (c *amqp.Connection, err error)
	DeclareQueues(queues ...string) (err error)
	GetRqChannel() *amqp.Channel
	Stop()
	// contains filtered or unexported methods
}

IRqAutoConnect is the interface defining the RabbitMQ auto-connect methods.

type KeyedLimiter

type KeyedLimiter = phlogger.KeyedLimiter

KeyedLimiter provides per-key token bucket rate limiting. See phlogger.KeyedLimiter for full documentation.

func NewKeyedLimiter

func NewKeyedLimiter(r float64, burst int) *KeyedLimiter

NewKeyedLimiter creates a per-key token bucket limiter. r is events/second, burst is max burst per key.

type LockError

type LockError struct {
	Key    string // The lock key
	Op     string // Operation: "acquire" or "release"
	Reason string // Human-readable reason for the error
	Err    error  // Underlying error (if any)
}

LockError represents a distributed lock operation error with context

func (*LockError) Error

func (e *LockError) Error() string

func (*LockError) Unwrap

func (e *LockError) Unwrap() error

type LogContext

type LogContext = phlogger.LogContext

LogContext is a child logger with key-value context prefix. See phlogger.LogContext for full documentation.

func NewLogContext

func NewLogContext(fields ...string) *LogContext

NewLogContext creates a child logger with key-value context fields. Fields are prepended as [key=value key2=value2] to every message.

type MessagePayloadAudit

type MessagePayloadAudit struct {
	Id       int         `json:"Id"`
	Command  string      `json:"Command"`
	Time     string      `json:"Time"`
	ModuleId string      `json:"ModuleId"`
	Data     interface{} `json:"Data"`
}

type MetricsHook

type MetricsHook = phlogger.MetricsHook

MetricsHook is a callback for high-frequency event counting. See phlogger.MetricsHook for full documentation.

type RabbitMQConnection

type RabbitMQConnection struct {
	Host, Port, Username, Password, VirtualHost, QueueName string
}

func (*RabbitMQConnection) GlobalConn

func (connection *RabbitMQConnection) GlobalConn()

func (*RabbitMQConnection) QueueConn

func (connection *RabbitMQConnection) QueueConn(RQ_QUEUE string)

type RabbitMQDefaultPayload

type RabbitMQDefaultPayload struct {
	Route string      `json:"command"`
	Param interface{} `json:"param"`
	Data  interface{} `json:"data"`
}

type RedisInitOptions

type RedisInitOptions struct {
	Options    redis.Options
	MaxRetries int           // Maximum number of retry attempts (default: 3)
	RetryDelay time.Duration // Base delay between retries (default: 1s, uses exponential backoff)
	FailFast   bool          // If true, return error on failure; if false, log but continue (default: false for backward compat)
}

RedisInitOptions provides advanced configuration for Redis initialization with retry logic

type RedisMetricsDetailed

type RedisMetricsDetailed struct {
	PoolStats    *RedisPoolStats `json:"pool_stats"`
	HitRate      float64         `json:"hit_rate_percent"`      // Cache hit rate percentage
	ActiveConns  int             `json:"active_conns"`          // TotalConns - IdleConns
	PoolUtilized float64         `json:"pool_utilized_percent"` // Percentage of pool in use
}

RedisMetricsDetailed provides extended Redis metrics including calculated ratios

func GetRedisMetricsDetailed

func GetRedisMetricsDetailed() *RedisMetricsDetailed

GetRedisMetricsDetailed returns enhanced metrics with calculated statistics, useful for monitoring and capacity planning.
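The derived metrics follow directly from the pool stats: the hit rate is Hits / (Hits + Misses) * 100, and active connections are TotalConns - IdleConns. A self-contained sketch of that arithmetic (poolStats mirrors the documented fields; the helpers are illustrative):

```go
package main

import "fmt"

// poolStats mirrors the RedisPoolStats fields used in the derived-metric
// calculations.
type poolStats struct {
	TotalConns, IdleConns int
	Hits, Misses          uint32
}

// hitRate returns the cache hit rate percentage: Hits / (Hits + Misses) * 100.
func hitRate(s poolStats) float64 {
	total := s.Hits + s.Misses
	if total == 0 {
		return 0 // no traffic yet: avoid division by zero
	}
	return float64(s.Hits) / float64(total) * 100
}

// activeConns is TotalConns - IdleConns, as documented for ActiveConns.
func activeConns(s poolStats) int { return s.TotalConns - s.IdleConns }

func main() {
	s := poolStats{TotalConns: 10, IdleConns: 6, Hits: 80, Misses: 20}
	fmt.Println(hitRate(s), activeConns(s)) // 80 4
}
```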

type RedisPoolStats

type RedisPoolStats struct {
	TotalConns int    `json:"total_conns"`
	IdleConns  int    `json:"idle_conns"`
	StaleConns int    `json:"stale_conns"`
	Hits       uint32 `json:"hits"`
	Misses     uint32 `json:"misses"`
	Timeouts   uint32 `json:"timeouts"`
}

RedisPoolStats represents Redis connection pool statistics

func GetRedisMetrics

func GetRedisMetrics() *RedisPoolStats

GetRedisMetrics is an alias for GetRedisPoolStats for API consistency. It returns comprehensive Redis connection pool metrics.

func GetRedisPoolStats

func GetRedisPoolStats() *RedisPoolStats

GetRedisPoolStats returns Redis connection pool statistics. Safe to call: returns nil if Redis is not initialized.

type Request

type Request struct {
	Time        string      `json:"Time"`
	Path        string      `json:"Path,omitempty"`
	QueryString interface{} `json:"QueryString,omitempty"`
	Header      interface{} `json:"Header,omitempty"`
	Param       interface{} `json:"Param,omitempty"`
	Body        interface{} `json:"Body,omitempty"`
	IpAddress   string      `json:"IpAddress,omitempty"`
	BrowserId   int         `json:"BrowserId,omitempty"`
	Latitude    string      `json:"Latitude,omitempty"`
	Longitude   string      `json:"Longitude,omitempty"`
}

type RequestAndResponse

type RequestAndResponse struct {
	Request  Request       `json:"Request"`
	Response ResponseAudit `json:"Response"`
}

type ResponseApi

type ResponseApi struct {
	Code         int         `json:"code"`
	Status       string      `json:"status"`
	Message      string      `json:"message"`
	InternalCode string      `json:"internal_code,omitempty"`
	Data         interface{} `json:"data,omitempty"`
}

func (*ResponseApi) Accepted

func (r *ResponseApi) Accepted(data interface{})

Accepted sends an in-process response.

func (*ResponseApi) BadRequest

func (r *ResponseApi) BadRequest(message string, intenalCode string)

BadRequest is the method for a bad-request response.

func (*ResponseApi) InternalServerError

func (r *ResponseApi) InternalServerError(err error)

InternalServerError is the method for an internal-server-error response.

func (*ResponseApi) Out

func (r *ResponseApi) Out(code int, message, internalCode string, status string, data interface{})

func (*ResponseApi) Success

func (r *ResponseApi) Success(message string, data interface{})

func (*ResponseApi) Unauthorized

func (r *ResponseApi) Unauthorized(message string, intenalCode string)

Unauthorized sends a response for an unauthorized user.

type ResponseAudit

type ResponseAudit struct {
	Time   string `json:"Time"`
	Detail Detail `json:"Detail,omitempty"`
}

type SamplerConfig

type SamplerConfig = phlogger.SamplerConfig

SamplerConfig controls log sampling behavior per key per period. See phlogger.SamplerConfig for full documentation.

func SamplerConfigForEnv

func SamplerConfigForEnv(env string) SamplerConfig

SamplerConfigForEnv returns production-tuned sampler defaults for the given environment.

func SamplerConfigFromAppEnv

func SamplerConfigFromAppEnv() SamplerConfig

SamplerConfigFromAppEnv returns a SamplerConfig based on APP_ENV.

Directories

Path Synopsis
Package phtrace provides OpenTelemetry tracing and metrics helpers for PayCloud services.
sdk
services/s3minio/facade
Package facade provides stable constructors for service-scoped S3MinIO SDK usage.
services/s3minio/grpc
Package grpc exposes the service-scoped S3MinIO gRPC transport adapter.
services/s3minio/helper
Package helper exposes the service-scoped S3MinIO SDK contracts.
services/s3minio/http
Package http exposes the service-scoped S3MinIO HTTP bridge adapter.
services/s3minio/pb
Package pb exposes the service-scoped S3MinIO protobuf compatibility surface.
shared/errors
Package errors holds shared SDK error normalization helpers for future service packages.
shared/observability
Package observability holds shared SDK instrumentation helpers for future service packages.
shared/transport
Package transport holds cross-service transport primitives for future SDK migrations.
