Documentation ¶
Overview ¶
Package db provides database client implementations for BaseX, DragonflyDB, CouchDB, PoolParty, and other database systems. This package offers consistent interfaces for XML databases, key-value stores, document stores, and semantic repositories.
Package db provides comprehensive CouchDB integration for document-based data storage and flow processing. This package implements a complete CouchDB client with specialized support for flow process management, document lifecycle operations, and bulk data export capabilities using the Kivik CouchDB driver.
CouchDB Integration:
CouchDB is a document-oriented NoSQL database that provides:
- JSON document storage with schema flexibility
- ACID semantics for single-document operations with eventual consistency across replicas
- Multi-Version Concurrency Control (MVCC) for conflict resolution
- MapReduce views for complex queries and aggregation
- HTTP RESTful API for language-agnostic access
Flow Processing Support:
The package provides specialized functionality for workflow management:
- Process state tracking with complete audit trails
- Document versioning and history management
- State-based queries and filtering capabilities
- Integration with RabbitMQ messaging for distributed workflows
Document Operations:
Supports complete document lifecycle management:
- CRUD operations with revision management
- Bulk operations for high-performance scenarios
- Conflict resolution through MVCC
- Selective querying with Mango query language
- Database export and backup capabilities
Service Architecture:
Implements service-oriented patterns with:
- Connection pooling and resource management
- Error handling and graceful degradation
- Configuration-driven database setup
- Clean separation of concerns between data and business logic
Package db provides comprehensive GraphDB integration for RDF graph database operations. This package implements a complete client library for interacting with GraphDB servers, supporting repository management, RDF graph operations, data import/export, and secure connectivity through Ziti zero-trust networking.
GraphDB Integration:
GraphDB is an enterprise RDF graph database that provides:
- SPARQL 1.1 query and update support
- Named graph management for data organization
- High-performance RDF storage and retrieval
- RESTful API for administration and data operations
- Binary RDF format (BRF) for efficient data transfer
Graph Database Operations:
The package supports the complete graph database lifecycle:
- Repository discovery and configuration management
- Named graph creation, import, and deletion
- RDF data export in multiple formats (RDF/XML, Turtle, BRF)
- SPARQL query execution and result processing
- Backup and restore operations for data persistence
Security and Connectivity:
Integrates with Ziti zero-trust networking for secure database access:
- HTTP Basic Authentication for traditional security
- Ziti overlay networking for network invisibility
- Configurable HTTP client with timeout management
- Support for both public and private network deployments
Data Format Support:
- RDF/XML for W3C standard compatibility
- Turtle (TTL) for human-readable configuration
- Binary RDF (BRF) for high-performance data transfer
- JSON for API responses and metadata
- SPARQL Update for graph modifications
Package db provides PostgreSQL database integration with GORM ORM for RabbitMQ message logging. This package implements a complete database layer for storing and managing RabbitMQ message logs with PostgreSQL as the persistent storage backend, supporting various output formats and connection management patterns.
Database Integration:
The package uses GORM (Go Object-Relational Mapping) library for database operations, providing a high-level abstraction over raw SQL while maintaining performance and flexibility for complex queries and database management tasks.
RabbitMQ Logging System:
Designed to capture and persist RabbitMQ message processing logs including:
- Document processing states and transitions
- Version tracking for document changes
- Binary log data with efficient bytea storage
- Timestamp tracking with GORM's automatic timestamps
Connection Management:
Implements proper PostgreSQL connection pooling with configurable parameters:
- Maximum idle connections for resource efficiency
- Maximum open connections for load management
- Connection lifetime management for stability
- Automatic reconnection and error handling
Data Persistence Strategy:
- Structured logging with relational database benefits
- ACID transactions for data consistency
- Indexing support for query performance
- Backup and recovery through PostgreSQL tooling
Output Format Support:
Multiple data serialization formats for different consumption patterns:
- JSON serialization for API responses
- Go struct format for internal processing
- Raw database records for administrative access
Package db provides comprehensive RDF4J server integration for semantic data management. This package implements a complete client library for interacting with RDF4J triple stores, supporting repository management, RDF data import/export, and SPARQL query operations.
RDF4J Integration:
RDF4J is a Java framework for processing RDF data, providing:
- Multiple storage backends (memory, native, LMDB)
- SPARQL 1.1 query and update support
- RDF serialization format support (RDF/XML, Turtle, JSON-LD, N-Triples)
- Repository management and configuration
- Transaction support and concurrent access
Semantic Data Management:
The package enables working with semantic web technologies:
- RDF triple storage and retrieval
- Ontology and schema management
- Knowledge graph construction and querying
- Linked data publishing and consumption
- Reasoning and inference capabilities
Repository Types:
Supports multiple RDF4J repository configurations:
- Memory stores for fast, temporary data processing
- Native stores for persistent, file-based storage
- LMDB stores for high-performance persistent storage
- Remote repository connections for distributed setups
Authentication and Security:
All operations support HTTP Basic Authentication for secure access to RDF4J servers, enabling integration with enterprise authentication systems and access control policies.
Data Format Support:
- RDF/XML for W3C standard compatibility
- Turtle for human-readable RDF serialization
- JSON-LD for web-friendly linked data
- N-Triples for simple triple streaming
- SPARQL Results JSON for query responses
Index ¶
- Variables
- func BaseXCreateDB(dbName string) error
- func BaseXQuery(db, doc, query string) ([]byte, error)
- func BaseXSaveDocument(dbName, docID string, xmlData []byte) error
- func BulkGet[T any](c *CouchDBService, ids []string) (map[string]*T, map[string]error, error)
- func BulkUpdate[T any](c *CouchDBService, selector map[string]interface{}, updateFunc func(*T) error) (int, error)
- func CompactJSONLD(doc interface{}, context string) (map[string]interface{}, error)
- func CouchDBAnimals(url string)
- func CouchDBDocGet(url, db, docId string) *kivik.Document
- func CouchDBDocNew(url, db string, doc interface{}) (string, string)
- func CreateDatabaseFromURL(url, dbName string) error
- func CreateLMDBRepository(serverURL, repositoryID, username, password string) error
- func CreateRepository(serverURL, repositoryID, username, password string) error
- func DatabaseExistsFromURL(url, dbName string) (bool, error)
- func DeleteDatabaseFromURL(url, dbName string) error
- func DeleteGenericDocument(c *CouchDBService, id, rev string) error
- func DeleteRepository(serverURL, repositoryID, username, password string) error
- func DownloadAllDocuments(url, db, outputDir string) error
- func DragonflyDBGetKey(key string) ([]byte, error)
- func DragonflyDBGetKeyWithDialer(key string, ...) ([]byte, error)
- func DragonflyDBSaveKeyValue(key string, value []byte) error
- func DragonflyDBSaveKeyValueWithDialer(key string, value []byte, ...) error
- func ExpandJSONLD(doc interface{}) (map[string]interface{}, error)
- func ExportRDFXml(...) error
- func ExtractJSONLDType(doc interface{}) (string, error)
- func FindTyped[T any](c *CouchDBService, query MangoQuery) ([]T, error)
- func GetAllDocuments[T any](c *CouchDBService, docType string) ([]T, error)
- func GetDocument[T any](c *CouchDBService, id string) (*T, error)
- func GetDocumentsByType[T any](c *CouchDBService, docType string) ([]T, error)
- func GraphDBDeleteGraph(URL, user, pass, repo, graph string) error
- func GraphDBDeleteRepository(URL, user, pass, repo string) error
- func GraphDBExportGraphRdf(url, user, pass, repo, graph, exportFile string) error
- func GraphDBImportGraphRdf(url, user, pass, repo, graph, restoreFile string) error
- func GraphDBRepositoryBrf(url string, user string, pass string, repo string) (string, error)
- func GraphDBRepositoryConf(url string, user string, pass string, repo string) (string, error)
- func GraphDBRestoreBrf(url string, user string, pass string, restoreFile string) error
- func GraphDBRestoreConf(url string, user string, pass string, restoreFile string) error
- func GraphDBZitiClient(identityFile, serviceName string) (*http.Client, error)
- func ImportRDF(serverURL, repositoryID, username, password, rdfFilePath, contentType string) ([]byte, error)
- func NormalizeJSONLD(doc interface{}) (string, error)
- func PGInfo(pgUrl string)
- func PGMigrations(pgUrl string)
- func PGRabbitLogFormatList(pgUrl string, format string) interface{}
- func PGRabbitLogList(pgUrl string)
- func PGRabbitLogNew(pgUrl, documentId, state, version string)
- func PGRabbitLogUpdate(pgUrl, documentId, state string, logText []byte)
- func PoolPartyProjects(baseURL, username, password, templateDir string)
- func PrintProjects(projects []Project)
- func RunSparQLFromFile(...) ([]byte, error)
- func SetJSONLDContext(doc interface{}, context string) map[string]interface{}
- func TraverseTyped[T any](c *CouchDBService, opts TraversalOptions) ([]T, error)
- func ValidateJSONLD(doc interface{}, context string) error
- type Binding
- type BulkDeleteDoc
- type BulkResult
- type Change
- type ChangeRev
- type ChangesFeedOptions
- type Command
- type Commands
- type ContextID
- type CouchDBConfig
- type CouchDBError
- type CouchDBResponse
- type CouchDBService
- func (c *CouchDBService) BulkDeleteDocuments(docs []BulkDeleteDoc) ([]BulkResult, error)
- func (c *CouchDBService) BulkSaveDocuments(docs []interface{}) ([]BulkResult, error)
- func (c *CouchDBService) Close() error
- func (c *CouchDBService) CompactDatabase() error
- func (c *CouchDBService) Count(selector map[string]interface{}) (int, error)
- func (c *CouchDBService) CreateDesignDoc(designDoc DesignDoc) error
- func (c *CouchDBService) CreateIndex(index Index) error
- func (c *CouchDBService) DeleteDesignDoc(designName string) error
- func (c *CouchDBService) DeleteDocument(id, rev string) error
- func (c *CouchDBService) DeleteIndex(designDoc, indexName string) error
- func (c *CouchDBService) EnsureIndex(index Index) (bool, error)
- func (c *CouchDBService) Find(query MangoQuery) ([]json.RawMessage, error)
- func (c *CouchDBService) GetAllDocuments() ([]eve.FlowProcessDocument, error)
- func (c *CouchDBService) GetAllGenericDocuments(docType string, result interface{}) error
- func (c *CouchDBService) GetChanges(opts ChangesFeedOptions) ([]Change, string, error)
- func (c *CouchDBService) GetDatabaseInfo() (*DatabaseInfo, error)
- func (c *CouchDBService) GetDependencies(id string, relationFields []string) (map[string]json.RawMessage, error)
- func (c *CouchDBService) GetDependents(id string, relationField string) ([]json.RawMessage, error)
- func (c *CouchDBService) GetDesignDoc(designName string) (*DesignDoc, error)
- func (c *CouchDBService) GetDocument(id string) (*eve.FlowProcessDocument, error)
- func (c *CouchDBService) GetDocumentsByState(state eve.FlowProcessState) ([]eve.FlowProcessDocument, error)
- func (c *CouchDBService) GetGenericDocument(id string, result interface{}) error
- func (c *CouchDBService) GetLastSequence() (string, error)
- func (c *CouchDBService) GetRelationshipGraph(startID string, relationField string, maxDepth int) (*RelationshipGraph, error)
- func (c *CouchDBService) ListDesignDocs() ([]string, error)
- func (c *CouchDBService) ListIndexes() ([]IndexInfo, error)
- func (c *CouchDBService) ListenChanges(opts ChangesFeedOptions, handler func(Change)) error
- func (c *CouchDBService) QueryView(designName, viewName string, opts ViewOptions) (*ViewResult, error)
- func (c *CouchDBService) SaveDocument(doc eve.FlowProcessDocument) (*eve.FlowCouchDBResponse, error)
- func (c *CouchDBService) SaveGenericDocument(doc interface{}) (*CouchDBResponse, error)
- func (c *CouchDBService) Traverse(opts TraversalOptions) ([]json.RawMessage, error)
- func (c *CouchDBService) WatchChanges(opts ChangesFeedOptions) (<-chan Change, <-chan error, func())
- type DatabaseInfo
- type Description
- type DesignDoc
- type DocumentStore
- type GraphDBBinding
- type GraphDBResponse
- type GraphDBResults
- type Head
- type Index
- type IndexInfo
- type MangoQuery
- type PoolPartyClient
- func (c *PoolPartyClient) ExecuteSPARQL(projectID, query, contentType string) ([]byte, error)
- func (c *PoolPartyClient) ExecuteSPARQLFromTemplate(projectID, templateName, contentType string, params interface{}) ([]byte, error)
- func (c *PoolPartyClient) GetProjectDetails(projectID string) (*Project, error)
- func (c *PoolPartyClient) ListProjects() ([]Project, error)
- func (c *PoolPartyClient) LoadTemplate(templateName string) (*template.Template, error)
- type PostgresDB
- func (db *PostgresDB) Close()
- func (db *PostgresDB) Exec(ctx context.Context, sql string, args ...interface{}) error
- func (db *PostgresDB) Pool() *pgxpool.Pool
- func (db *PostgresDB) Query(ctx context.Context, sql string, args ...interface{}) (pgx.Rows, error)
- func (db *PostgresDB) QueryRow(ctx context.Context, sql string, args ...interface{}) pgx.Row
- type PrefLabel
- type Project
- type QueryBuilder
- func (qb *QueryBuilder) And() *QueryBuilder
- func (qb *QueryBuilder) Build() MangoQuery
- func (qb *QueryBuilder) Limit(n int) *QueryBuilder
- func (qb *QueryBuilder) Or() *QueryBuilder
- func (qb *QueryBuilder) Select(fields ...string) *QueryBuilder
- func (qb *QueryBuilder) Skip(n int) *QueryBuilder
- func (qb *QueryBuilder) Sort(field, direction string) *QueryBuilder
- func (qb *QueryBuilder) UseIndex(indexName string) *QueryBuilder
- func (qb *QueryBuilder) Where(field string, operator string, value interface{}) *QueryBuilder
- type RDF
- type RabbitLog
- type RelationshipEdge
- type RelationshipGraph
- type Repository
- type Resource
- type Restriction
- type Result
- type Results
- type SPARQLBindings
- type SPARQLHead
- type SPARQLResult
- type SPARQLValue
- type Sparql
- type TLSConfig
- type TraversalOptions
- type UploadResult
- func BaseXUploadBinaryFile(dbName, localFilePath, remotePath string) (*UploadResult, error)
- func BaseXUploadFile(dbName, localFilePath, remotePath string) (*UploadResult, error)
- func BaseXUploadFileAuto(dbName, localFilePath, remotePath string) (*UploadResult, error)
- func BaseXUploadToFilesystem(localFilePath, remotePath string) (*UploadResult, error)
- func BaseXUploadXMLFile(dbName, localFilePath, remotePath string) (*UploadResult, error)
- type Variable
- type View
- type ViewOptions
- type ViewResult
- type ViewRow
Constants ¶
This section is empty.
Variables ¶
var (
    HttpClient *http.Client = http.DefaultClient
)
HttpClient provides the global HTTP client for GraphDB operations. This client can be customized for different connectivity patterns including Ziti zero-trust networking, custom timeouts, and proxy configurations.
Default Configuration:
Uses http.DefaultClient with standard Go HTTP client settings. Can be replaced with custom clients for specific networking requirements such as Ziti overlay networks or enterprise proxy configurations.
Customization Examples:
- Ziti client via GraphDBZitiClient() function
- Custom timeouts for long-running operations
- Proxy configuration for corporate networks
- Certificate-based authentication for secure environments
Functions ¶
func BaseXCreateDB ¶ added in v0.0.6
BaseXCreateDB creates a new database in BaseX with the specified name. It sends a batch of commands to the BaseX REST endpoint including database creation and info retrieval.
The function requires the following environment variables:
- BASEX_URL: The BaseX server URL (e.g., "http://localhost:8984")
- BASEX_USERNAME: Authentication username
- BASEX_PASSWORD: Authentication password
Parameters:
- dbName: The name of the database to create
Returns:
- error: Any error encountered during database creation
Example:
    err := BaseXCreateDB("mydb")
    if err != nil {
        log.Fatal(err)
    }
func BaseXQuery ¶ added in v0.0.6
BaseXQuery executes an XQuery query against a specific document in a BaseX database. The query is sent as a POST request to the document endpoint with Content-Type "application/query+xml".
The function requires the following environment variables:
- BASEX_URL: The BaseX server URL
- BASEX_USERNAME: Authentication username
- BASEX_PASSWORD: Authentication password
Parameters:
- db: The database name
- doc: The document identifier (without .xml extension)
- query: The XQuery expression to execute
Returns:
- []byte: The query results as bytes
- error: Any error encountered during query execution
Example:
    query := "//book[@year > 2020]"
    results, err := BaseXQuery("mydb", "book1", query)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(results))
func BaseXSaveDocument ¶ added in v0.0.6
BaseXSaveDocument saves an XML document to a BaseX database. The document is stored with the specified ID and ".xml" extension.
The function requires the following environment variables:
- BASEX_URL: The BaseX server URL
- BASEX_USERNAME: Authentication username
- BASEX_PASSWORD: Authentication password
Parameters:
- dbName: The name of the target database
- docID: The document identifier (without .xml extension)
- xmlData: The XML document content as bytes
Returns:
- error: Any error encountered during document save
Example:
    xmlContent := []byte("<book><title>Go Programming</title></book>")
    err := BaseXSaveDocument("mydb", "book1", xmlContent)
func BulkGet ¶ added in v0.0.7
BulkGet retrieves multiple documents in a single database operation. This is more efficient than individual get operations for batch retrieval.
Type Parameter:
- T: Expected document type
Parameters:
- ids: Slice of document IDs to retrieve
Returns:
- map[string]*T: Map of document ID to document pointer
- map[string]error: Map of document ID to error (for failures)
- error: Request execution errors
Result Handling:
- Successfully retrieved documents appear in the first map
- Failed retrievals (not found, etc.) appear in the error map
- Request-level errors returned as error parameter
Example Usage:
    ids := []string{"c1", "c2", "c3", "missing"}
    docs, errors, err := BulkGet[Container](service, ids)
    if err != nil {
        log.Printf("Bulk get failed: %v", err)
        return
    }
    fmt.Printf("Retrieved %d documents\n", len(docs))
    for id, doc := range docs {
        fmt.Printf(" %s: %s (%s)\n", id, doc.Name, doc.Status)
    }
    if len(errors) > 0 {
        fmt.Println("Errors:")
        for id, err := range errors {
            fmt.Printf(" %s: %v\n", id, err)
        }
    }
func BulkUpdate ¶ added in v0.0.7
func BulkUpdate[T any](c *CouchDBService, selector map[string]interface{}, updateFunc func(*T) error) (int, error)
BulkUpdate performs a bulk update operation on documents matching a selector. This applies an update function to all documents matching the criteria.
Type Parameter:
- T: Document type to update
Parameters:
- selector: Mango selector for finding documents to update
- updateFunc: Function to apply to each document
Returns:
- int: Number of documents successfully updated
- error: Query or update errors
Update Process:
- Query documents matching selector
- Apply updateFunc to each document
- Bulk save all modified documents
- Return count of successful updates
Example Usage:
    // Stop all running containers in us-east
    selector := map[string]interface{}{
        "status":   "running",
        "location": map[string]interface{}{"$regex": "^us-east"},
    }
    count, err := BulkUpdate[Container](service, selector, func(doc *Container) error {
        doc.Status = "stopped"
        return nil
    })
    if err != nil {
        log.Printf("Bulk update failed: %v", err)
        return
    }
    fmt.Printf("Stopped %d containers\n", count)
func CompactJSONLD ¶ added in v0.0.7
CompactJSONLD performs basic JSON-LD compaction. This converts expanded JSON-LD to a more compact form using a context.
Parameters:
- doc: Expanded document to compact
- context: Context to use for compaction
Returns:
- map[string]interface{}: Compacted document
- error: Compaction errors
Compaction Process:
Converts expanded form back to compact:
- Replaces full IRIs with short terms
- Uses context for term mapping
- Simplifies value objects to simple values
- Converts arrays to single values where appropriate
Example Usage:
    expanded := map[string]interface{}{
        "@type": []interface{}{"https://schema.org/SoftwareApplication"},
        "https://schema.org/name": "nginx",
        "https://schema.org/applicationCategory": "web-server",
    }
    compacted, err := CompactJSONLD(expanded, "https://schema.org")
    if err != nil {
        log.Printf("Compaction failed: %v", err)
        return
    }
    // compacted uses short terms:
    // {"@context": "https://schema.org", "@type": "SoftwareApplication", "name": "nginx", ...}
func CouchDBAnimals ¶
func CouchDBAnimals(url string)
CouchDBAnimals demonstrates basic CouchDB operations with a simple animal document. This function serves as an example of fundamental CouchDB operations including database creation, document insertion, and revision management.
Operation Workflow:
- Establishes connection to CouchDB server
- Creates "animals" database if it doesn't exist
- Inserts a sample document with predefined ID
- Reports successful insertion with revision information
Parameters:
- url: CouchDB server URL (e.g., "http://admin:password@localhost:5984/")
Example Document:
The function creates a document with:
- _id: "cow" (document identifier)
- feet: 4 (integer field)
- greeting: "moo" (string field)
Error Handling:
- Connection failures cause panic for immediate feedback
- Database creation errors are logged but don't halt execution
- Document insertion failures cause panic to indicate critical errors
Educational Value:
This function demonstrates:
- Basic CouchDB connection establishment
- Database existence checking and creation
- Document insertion with explicit ID
- Revision handling and success confirmation
Example Usage:
    CouchDBAnimals("http://admin:password@localhost:5984/")
Production Considerations:
This function is intended for demonstration and testing purposes. Production code should implement proper error handling instead of panic.
func CouchDBDocGet ¶
CouchDBDocGet retrieves a document from the specified database by document ID. This function provides direct document access with automatic database creation and returns a Kivik document handle for flexible data extraction.
Document Retrieval Process:
- Establishes connection to CouchDB server
- Creates database if it doesn't exist (for development convenience)
- Retrieves document by ID from the specified database
- Returns Kivik document handle for data access
Parameters:
- url: CouchDB server URL with authentication
- db: Database name containing the document
- docId: Document identifier to retrieve
Returns:
- *kivik.Document: Document handle for data extraction and metadata access
Document Handle Usage:
The returned kivik.Document provides methods for:
- ScanDoc(): Extract document data into Go structures
- Rev(): Access document revision information
- Err(): Check for retrieval errors
Error Handling:
- Connection failures cause panic for immediate feedback
- Database creation errors are logged but don't prevent retrieval
- Document not found errors are returned via the document's Err() method
Example Usage:
    doc := CouchDBDocGet("http://admin:pass@localhost:5984/", "users", "user123")
    if doc.Err() != nil {
        if kivik.HTTPStatus(doc.Err()) == 404 {
            fmt.Println("Document not found")
        } else {
            fmt.Printf("Error: %v\n", doc.Err())
        }
        return
    }
    var userData map[string]interface{}
    if err := doc.ScanDoc(&userData); err != nil {
        fmt.Printf("Scan error: %v\n", err)
        return
    }
    fmt.Printf("Retrieved user: %v\n", userData)
Database Auto-Creation:
The function creates the database if it doesn't exist, which is convenient for development but may not be desired in production environments where database creation should be explicit and controlled.
Document Metadata:
The document handle provides access to CouchDB metadata including revision information, which is essential for subsequent update operations.
func CouchDBDocNew ¶
CouchDBDocNew creates a new document in the specified database with automatic ID generation. This function provides a simplified interface for document creation with automatic database setup and unique document ID assignment by CouchDB.
Document Creation Process:
- Establishes connection to CouchDB server
- Creates database if it doesn't exist
- Inserts document with CouchDB-generated unique ID
- Returns both document ID and initial revision
Parameters:
- url: CouchDB server URL with authentication
- db: Database name for document storage
- doc: Document data as interface{} (typically map[string]interface{})
Returns:
- string: Document ID assigned by CouchDB (UUID format)
- string: Initial revision ID for subsequent updates
Automatic ID Generation:
CouchDB generates UUIDs for document IDs when not explicitly provided, ensuring uniqueness across the database and enabling distributed scenarios without coordination overhead.
Database Auto-Creation:
The function automatically creates the target database if it doesn't exist, enabling rapid development and deployment without manual database setup.
Error Handling:
- Connection failures cause panic for immediate feedback
- Database creation errors are logged but don't prevent document creation
- Document creation failures cause panic to indicate data operation issues
Example Usage:
    docData := map[string]interface{}{
        "name":    "John Doe",
        "email":   "john@example.com",
        "created": time.Now(),
    }
    docId, revId := CouchDBDocNew("http://admin:pass@localhost:5984/", "users", docData)
    fmt.Printf("Created document %s with revision %s\n", docId, revId)
Document Structure:
The doc parameter should be JSON-serializable data, typically represented as map[string]interface{} for maximum flexibility with CouchDB's schema-free nature.
Revision Management:
The returned revision ID is essential for subsequent document updates and should be stored with application state for conflict resolution.
func CreateDatabaseFromURL ¶ added in v0.0.7
CreateDatabaseFromURL creates a new CouchDB database with the given name. This is a standalone function that doesn't require a service instance.
Parameters:
- url: CouchDB server URL with authentication
- dbName: Name of the database to create
Returns:
- error: Database creation or connection errors
Error Handling:
- Returns error if database already exists
- Returns error if insufficient permissions
- Returns error on connection failures
Example Usage:
    err := CreateDatabaseFromURL(
        "http://admin:password@localhost:5984",
        "my_new_database")
    if err != nil {
        log.Printf("Failed to create database: %v", err)
    }
func CreateLMDBRepository ¶ added in v0.0.3
CreateLMDBRepository creates a new LMDB-based RDF4J repository. This function creates a high-performance persistent repository using Lightning Memory-Mapped Database (LMDB) as the storage backend.
LMDB Storage Benefits:
- High-performance read and write operations
- Memory-mapped file access for efficiency
- ACID transaction support with durability
- Crash-safe persistence with automatic recovery
- Excellent scalability for large datasets
Repository Configuration:
Creates an LMDB repository with the following characteristics:
- File-based persistent storage
- Standard query evaluation mode
- Automatic indexing and optimization
- Full SPARQL 1.1 support
Parameters:
- serverURL: Base URL of the RDF4J server
- repositoryID: Unique identifier for the new repository
- username: Username for HTTP Basic Authentication
- password: Password for HTTP Basic Authentication
Returns:
- error: Repository creation, authentication, or configuration errors
LMDB Characteristics:
- Memory-mapped files for efficient access
- Copy-on-write semantics for consistent reads
- No write-ahead logging overhead
- Automatic file growth and management
- Operating system page cache integration
Performance Profile:
- Excellent read performance through memory mapping
- Good write performance with batch operations
- Efficient range queries and indexing
- Low memory overhead for large datasets
- Scales well with available system memory
Success Conditions:
Repository creation is successful when the server returns:
- HTTP 200 OK (creation successful)
- HTTP 201 Created (alternative success response)
- HTTP 204 No Content (creation without response body)
Error Conditions:
- Repository ID already exists on the server
- Authentication failures or insufficient permissions
- Invalid LMDB configuration parameters
- Insufficient disk space for database files
- File system permission errors
- Server-side LMDB initialization failures
Example Usage:
    err := CreateLMDBRepository(
        "http://localhost:8080/rdf4j-server",
        "production-lmdb-repo",
        "admin",
        "password",
    )
    if err != nil {
        log.Fatal("LMDB repository creation failed:", err)
    }
    log.Println("LMDB repository created successfully")
Use Cases:
LMDB repositories are ideal for:
- Production systems requiring persistence
- Large-scale data processing and analytics
- High-throughput read-heavy workloads
- Systems requiring fast startup times
- Applications with strict consistency requirements
Configuration Details:
The function uses RDF4J's modern configuration vocabulary (tag:rdf4j.org,2023:config/) rather than legacy OpenRDF namespaces for future compatibility and enhanced functionality.
Storage Considerations:
- Database files created in RDF4J server data directory
- Automatic file size management and growth
- Backup requires file system level copying
- Consider disk I/O performance for optimal results
func CreateRepository ¶ added in v0.0.3
CreateRepository creates a new in-memory RDF4J repository. This function dynamically creates a repository configuration using Turtle syntax and deploys it to the RDF4J server for immediate use.
Repository Configuration:
Creates an in-memory repository with the following characteristics:
- Memory-based storage (non-persistent)
- Fast read/write operations
- Automatic cleanup on server restart
- Suitable for temporary data processing and testing
Parameters:
- serverURL: Base URL of the RDF4J server
- repositoryID: Unique identifier for the new repository
- username: Username for HTTP Basic Authentication
- password: Password for HTTP Basic Authentication
Returns:
- error: Repository creation, authentication, or configuration errors
Configuration Format:
The function generates a Turtle configuration that defines:
- Repository type as SailRepository with MemoryStore
- Repository ID and human-readable label
- Storage backend configuration and parameters
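The exact Turtle this function emits is not shown here; one plausible shape, following RDF4J's modern configuration vocabulary (tag:rdf4j.org,2023:config/, as mentioned for the LMDB variant), is:

```
@prefix config: <tag:rdf4j.org,2023:config/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

[] a config:Repository ;
   config:rep.id "test-memory-repo" ;
   rdfs:label "Test memory repository" ;
   config:rep.impl [
      config:rep.type "openrdf:SailRepository" ;
      config:sail.impl [
         config:sail.type "openrdf:MemoryStore"
      ]
   ] .
```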
Memory Store Characteristics:
- All data stored in server memory (RAM)
- No persistence across server restarts
- Excellent performance for read/write operations
- Limited by available server memory
- Immediate data loss on server failure
Success Conditions:
Repository creation is successful when the server returns:
- HTTP 204 No Content (creation successful)
- HTTP 200 OK (alternative success response)
Error Conditions:
- Repository ID already exists on the server
- Authentication failures or insufficient permissions
- Invalid repository configuration syntax
- Server-side errors during repository initialization
- Network connectivity issues
Example Usage:
    err := CreateRepository(
        "http://localhost:8080/rdf4j-server",
        "test-memory-repo",
        "admin",
        "password",
    )
    if err != nil {
        log.Fatal("Repository creation failed:", err)
    }
    log.Println("Memory repository created successfully")
Use Cases:
Memory repositories are ideal for: - Unit testing and integration testing - Temporary data processing and analysis - Development and prototyping - Cache-like storage for computed results - Session-based data storage
Repository Lifecycle:
- Created empty and ready for data import
- Destroyed automatically on server restart
- Can be explicitly deleted using DeleteRepository
- Performance degrades as data size approaches memory limits
func DatabaseExistsFromURL ¶ added in v0.0.7
DatabaseExistsFromURL checks if a database exists. This is a standalone function that doesn't require a service instance.
Parameters:
- url: CouchDB server URL with authentication
- dbName: Name of the database to check
Returns:
- bool: true if database exists, false otherwise
- error: Connection or query errors
Example Usage:
exists, err := DatabaseExistsFromURL(
    "http://admin:password@localhost:5984",
    "my_database")
if err != nil {
    log.Printf("Error checking database: %v", err)
    return
}
if exists {
    fmt.Println("Database exists")
} else {
    fmt.Println("Database does not exist")
}
func DeleteDatabaseFromURL ¶ added in v0.0.7
DeleteDatabaseFromURL deletes a CouchDB database. This permanently removes the database and all its documents.
Parameters:
- url: CouchDB server URL with authentication
- dbName: Name of the database to delete
Returns:
- error: Database deletion or connection errors
WARNING:
This operation is irreversible and deletes all data in the database. Use with extreme caution in production environments.
Example Usage:
err := DeleteDatabaseFromURL(
    "http://admin:password@localhost:5984",
    "old_database")
if err != nil {
    log.Printf("Failed to delete database: %v", err)
}
func DeleteGenericDocument ¶ added in v0.0.7
func DeleteGenericDocument(c *CouchDBService, id, rev string) error
DeleteGenericDocument deletes a generic document from CouchDB. Requires both document ID and current revision for conflict detection.
Parameters:
- c: CouchDBService instance
- id: Document identifier to delete
- rev: Current document revision (for MVCC conflict detection)
Returns:
- error: Deletion failures or conflicts
MVCC Conflict Handling:
If the revision doesn't match the current document: - Returns CouchDBError with status 409 - The application should retrieve the latest revision and retry
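The retrieve-and-retry pattern can be sketched with the fetch and delete steps passed in as closures. Here deleteWithRetry and errConflict are hypothetical stand-ins for calls against a real CouchDBService and for a 409 CouchDBError; they are not part of this package:

```go
package main

import (
	"errors"
	"fmt"
)

// errConflict stands in for a CouchDBError with status 409.
var errConflict = errors.New("conflict (409)")

// deleteWithRetry sketches the MVCC retry pattern: on a conflict, fetch
// the latest revision and retry the delete. The fetchRev and del closures
// stand in for GetDocument / DeleteGenericDocument calls; this helper is
// illustrative, not part of the package.
func deleteWithRetry(rev string, fetchRev func() (string, error),
	del func(rev string) error, maxAttempts int) error {
	for attempt := 0; attempt < maxAttempts; attempt++ {
		err := del(rev)
		if err == nil {
			return nil
		}
		if !errors.Is(err, errConflict) {
			return err
		}
		// Conflict: someone updated the document; refresh the revision.
		rev, err = fetchRev()
		if err != nil {
			return err
		}
	}
	return fmt.Errorf("gave up after %d attempts", maxAttempts)
}

func main() {
	current := "3-def456" // revision stored on the server
	del := func(rev string) error {
		if rev != current {
			return errConflict
		}
		return nil
	}
	fetchRev := func() (string, error) { return current, nil }

	// The first attempt uses a stale revision, so one retry is needed.
	err := deleteWithRetry("2-abc123", fetchRev, del, 3)
	fmt.Println("deleted:", err == nil)
}
```

Bounding the attempt count keeps a hot document from causing an unbounded retry loop.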
Example Usage:
err := DeleteGenericDocument(service, "container-123", "2-abc123")
if err != nil {
    if couchErr, ok := err.(*CouchDBError); ok && couchErr.IsConflict() {
        fmt.Println("Conflict - document was modified")
        // Retrieve latest and retry
        return
    }
    log.Printf("Delete failed: %v", err)
}
func DeleteRepository ¶ added in v0.0.2
DeleteRepository removes a repository and all its data from an RDF4J server. This function permanently deletes a repository configuration and all stored RDF data, providing a clean removal mechanism for repository management.
Deletion Process:
- Sends HTTP DELETE request to the repository endpoint
- Authenticates using provided credentials
- Verifies successful deletion via HTTP status codes
- Returns error information for failed deletions
Parameters:
- serverURL: Base URL of the RDF4J server
- repositoryID: Identifier of the repository to delete
- username: Username for HTTP Basic Authentication
- password: Password for HTTP Basic Authentication
Returns:
- error: Authentication, permission, or deletion errors
Data Loss Warning:
This operation is irreversible and will permanently destroy: - All RDF triples stored in the repository - Repository configuration and metadata - Any custom indexes or optimization data - Transaction logs and backup information
Success Conditions:
The function considers deletion successful when the server returns: - HTTP 204 No Content (standard success response) - HTTP 200 OK (alternative success response)
Security Considerations:
- Requires appropriate authentication credentials
- User must have repository deletion permissions
- Consider implementing confirmation mechanisms in applications
- Audit logging recommended for deletion operations
Error Conditions:
- Repository not found (may already be deleted)
- Authentication failures or insufficient permissions
- Server-side errors during deletion process
- Network connectivity issues
- Repository currently in use by other operations
Example Usage:
err := DeleteRepository(
    "http://localhost:8080/rdf4j-server",
    "temporary-repository",
    "admin",
    "password",
)
if err != nil {
    log.Fatal("Deletion failed:", err)
}
log.Println("Repository deleted successfully")
Repository Management:
Use this function as part of: - Automated testing cleanup procedures - Repository lifecycle management - Data migration and reorganization - Administrative maintenance tasks
Best Practices:
- Always backup important data before deletion
- Verify repository ID to prevent accidental deletions
- Implement proper access controls and audit logs
- Consider soft deletion patterns for critical systems
func DownloadAllDocuments ¶ added in v0.0.3
DownloadAllDocuments exports all documents from a CouchDB database to the filesystem. This function provides comprehensive database backup and export capabilities with organized file structure and progress monitoring for large datasets.
Export Process:
- Connects to CouchDB server with provided credentials
- Creates organized directory structure for exported data
- Iterates through all documents in the specified database
- Saves each document as individual JSON file
- Provides progress feedback for large datasets
Parameters:
- url: CouchDB server URL with authentication
- db: Database name to export documents from
- outputDir: Base directory for exported document files
Returns:
- error: Connection, permission, or file system errors
File Organization:
Creates directory structure: outputDir/database/document_id.json - Each document saved as separate JSON file - Document IDs sanitized for filesystem compatibility - Pretty-printed JSON for human readability - Preserves all document fields and metadata
Progress Monitoring:
- Reports progress every 100 documents for large datasets
- Displays total document count upon completion
- Provides feedback for long-running export operations
- Logs errors for individual document processing failures
Error Handling:
- Individual document errors don't halt the entire export
- Connection errors are reported and terminate the operation
- File system errors are logged with appropriate context
- Design documents (_design/) are automatically skipped
Example Usage:
err := DownloadAllDocuments(
    "http://admin:password@localhost:5984",
    "flow_processes",
    "/backup/couchdb_export")
if err != nil {
    log.Printf("Export failed: %v", err)
    return
}
fmt.Println("Database export completed successfully")
File Naming:
Document IDs are sanitized for filesystem compatibility: - Invalid characters replaced with underscores - Length limited to prevent filesystem issues - Maintains uniqueness while ensuring compatibility
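A minimal sketch of such a sanitizer follows; the exact character set and length limit used by DownloadAllDocuments are assumptions here, as is the sanitizeDocID helper itself:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// sanitizeDocID illustrates the sanitization described above: characters
// that are unsafe in filenames are replaced with underscores and overly
// long IDs are truncated. The real rules may differ.
func sanitizeDocID(id string) string {
	const maxLen = 200 // assumed limit, well under common filesystem caps
	out := make([]rune, 0, len(id))
	for _, r := range id {
		switch r {
		case '/', '\\', ':', '*', '?', '"', '<', '>', '|':
			out = append(out, '_')
		default:
			out = append(out, r)
		}
	}
	if len(out) > maxLen {
		out = out[:maxLen]
	}
	return string(out)
}

func main() {
	// outputDir/database/document_id.json layout from the docs.
	path := filepath.Join("/backup/couchdb_export", "flow_processes",
		sanitizeDocID("org/project:doc?1")+".json")
	fmt.Println(path)
}
```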
Use Cases:
- Database backup and disaster recovery
- Data migration between CouchDB instances
- Offline analysis and data processing
- Compliance and audit data archival
- Development data seeding and testing
Performance Considerations:
- Memory usage remains constant regardless of dataset size
- Network bandwidth depends on document sizes and count
- Disk I/O performance affects export speed
- Large datasets benefit from progress monitoring
func DragonflyDBGetKey ¶ added in v0.0.6
DragonflyDBGetKey retrieves a value from DragonflyDB by key. DragonflyDB is compatible with Redis protocol, so this function uses the go-redis client. The function prints the key and connection status for debugging purposes.
The function requires the following environment variables:
- DRAGONFLYDB_HOST: DragonflyDB server address (e.g., "localhost:6379")
- DRAGONFLYDB_PASSWORD: Authentication password (use empty string if no password)
Parameters:
- key: The key to retrieve
Returns:
- []byte: The value stored at the key
- error: Any error encountered (including redis.Nil if key doesn't exist)
Example:
data, err := DragonflyDBGetKey("user:1234")
if err == redis.Nil {
    fmt.Println("Key does not exist")
} else if err != nil {
    log.Fatal(err)
} else {
    fmt.Printf("Value: %s\n", data)
}
func DragonflyDBGetKeyWithDialer ¶ added in v0.0.13
func DragonflyDBGetKeyWithDialer(key string, dialer func(ctx context.Context, network, addr string) (net.Conn, error)) ([]byte, error)
DragonflyDBGetKeyWithDialer retrieves a value from DragonflyDB by key using a custom dialer. This function supports Ziti zero-trust networking by accepting a custom dialer for connection setup. DragonflyDB is compatible with Redis protocol, so this function uses the go-redis client.
The function requires the following environment variables:
- DRAGONFLYDB_HOST: DragonflyDB server address (e.g., "localhost:6379" or Ziti service name)
- DRAGONFLYDB_PASSWORD: Authentication password (use empty string if no password)
Parameters:
- key: The key to retrieve
- dialer: Custom dialer function for creating connections (e.g., Ziti dialer)
Returns:
- []byte: The value stored at the key
- error: Any error encountered (including redis.Nil if key doesn't exist)
Example with Ziti:
import (
    "eve.evalgo.org/db"
    "eve.evalgo.org/network"
    "github.com/openziti/sdk-golang/ziti"
)

// Setup Ziti context
cfg, _ := ziti.NewConfigFromFile("/path/to/identity.json")
zitiCtx, _ := ziti.NewContext(cfg)

// Create Ziti dialer
dialer := func(ctx context.Context, network, addr string) (net.Conn, error) {
    return zitiCtx.Dial("dragonflydb-service")
}

// Get key through Ziti
data, err := db.DragonflyDBGetKeyWithDialer("user:1234", dialer)
if err != nil {
    log.Fatal(err)
}
func DragonflyDBSaveKeyValue ¶ added in v0.0.6
DragonflyDBSaveKeyValue stores a key-value pair in DragonflyDB. DragonflyDB is compatible with Redis protocol, so this function uses the go-redis client. The value is stored with no expiration (TTL = 0).
The function requires the following environment variables:
- DRAGONFLYDB_HOST: DragonflyDB server address (e.g., "localhost:6379")
- DRAGONFLYDB_PASSWORD: Authentication password (use empty string if no password)
Parameters:
- key: The key under which to store the value
- value: The value to store as bytes
Returns:
- error: Any error encountered during storage operation
Example:
data := []byte("user profile data")
err := DragonflyDBSaveKeyValue("user:1234", data)
if err != nil {
    log.Fatal(err)
}
func DragonflyDBSaveKeyValueWithDialer ¶ added in v0.0.13
func DragonflyDBSaveKeyValueWithDialer(key string, value []byte, dialer func(ctx context.Context, network, addr string) (net.Conn, error)) error
DragonflyDBSaveKeyValueWithDialer stores a key-value pair in DragonflyDB using a custom dialer. This function supports Ziti zero-trust networking by accepting a custom dialer for connection setup. DragonflyDB is compatible with Redis protocol, so this function uses the go-redis client. The value is stored with no expiration (TTL = 0).
The function requires the following environment variables:
- DRAGONFLYDB_HOST: DragonflyDB server address (e.g., "localhost:6379" or Ziti service name)
- DRAGONFLYDB_PASSWORD: Authentication password (use empty string if no password)
Parameters:
- key: The key under which to store the value
- value: The value to store as bytes
- dialer: Custom dialer function for creating connections (e.g., Ziti dialer)
Returns:
- error: Any error encountered during storage operation
Example with Ziti:
import (
    "eve.evalgo.org/db"
    "eve.evalgo.org/network"
    "github.com/openziti/sdk-golang/ziti"
)

// Setup Ziti context
cfg, _ := ziti.NewConfigFromFile("/path/to/identity.json")
zitiCtx, _ := ziti.NewContext(cfg)

// Create Ziti dialer
dialer := func(ctx context.Context, network, addr string) (net.Conn, error) {
    return zitiCtx.Dial("dragonflydb-service")
}

// Save key through Ziti
data := []byte("user profile data")
err := db.DragonflyDBSaveKeyValueWithDialer("user:1234", data, dialer)
if err != nil {
    log.Fatal(err)
}
func ExpandJSONLD ¶ added in v0.0.7
ExpandJSONLD performs basic JSON-LD expansion. This is a simplified implementation that handles common expansion patterns without requiring a full JSON-LD processor.
Parameters:
- doc: Document to expand
Returns:
- map[string]interface{}: Expanded document
- error: Expansion errors
Expansion Process:
JSON-LD expansion converts compact representation to explicit form: - Resolves all terms to full IRIs - Converts values to explicit object format - Expands @type to @type array - Converts single values to arrays
Limitations:
This is a basic implementation: - Does not fetch remote contexts - Limited to simple context resolution - For full expansion, use a dedicated JSON-LD library
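To make the expansion steps concrete, here is a deliberately simplified, self-contained sketch of term resolution and array wrapping. It is illustrative only, and far simpler than ExpandJSONLD or a full JSON-LD processor:

```go
package main

import "fmt"

// expandSketch illustrates two expansion steps: resolving terms against a
// vocabulary base IRI and wrapping single values in arrays. It assumes a
// flat document whose @type value is a string; a real processor handles
// many more cases.
func expandSketch(doc map[string]interface{}, vocab string) map[string]interface{} {
	out := map[string]interface{}{}
	for k, v := range doc {
		if k == "@context" {
			continue // the context is consumed, not copied
		}
		if k == "@type" {
			// @type values become an array of full IRIs.
			out["@type"] = []interface{}{vocab + v.(string)}
			continue
		}
		// Terms resolve to IRIs; values become single-element value objects.
		out[vocab+k] = []interface{}{map[string]interface{}{"@value": v}}
	}
	return out
}

func main() {
	doc := map[string]interface{}{
		"@context": "https://schema.org",
		"@type":    "SoftwareApplication",
		"name":     "nginx",
	}
	expanded := expandSketch(doc, "https://schema.org/")
	fmt.Println(expanded["@type"])
}
```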
Example Usage:
doc := map[string]interface{}{
    "@context": "https://schema.org",
    "@type":    "SoftwareApplication",
    "name":     "nginx",
}
expanded, err := ExpandJSONLD(doc)
if err != nil {
    log.Printf("Expansion failed: %v", err)
    return
}
// expanded now has full IRIs
fmt.Printf("Expanded: %+v\n", expanded)
func ExportRDFXml ¶
func ExportRDFXml(serverURL, repositoryID, username, password, outputFilePath, contentType string) error
ExportRDFXml exports all RDF data from a repository to a file. This function retrieves complete repository contents and saves them in the specified RDF serialization format for backup, transfer, or analysis.
Export Process:
- Connects to the repository statements endpoint
- Requests data in the specified serialization format
- Downloads all triples from the repository
- Writes the data to the specified output file
Parameters:
- serverURL: Base URL of the RDF4J server
- repositoryID: Source repository identifier
- username: Username for HTTP Basic Authentication
- password: Password for HTTP Basic Authentication
- outputFilePath: File system path for the exported data
- contentType: MIME type for the desired output format
Returns:
- error: Network, authentication, or file writing errors
Supported Export Formats:
- "application/rdf+xml": Standard RDF/XML serialization
- "text/turtle": Turtle format (human-readable)
- "application/ld+json": JSON-LD format (web-friendly)
- "application/n-triples": N-Triples format (streaming-friendly)
- "text/plain": N-Triples in plain text format
Data Scope:
The export includes all triples stored in the repository across all named graphs, providing a complete dump of repository contents. Named graph information is preserved in formats that support it.
File Handling:
The exported file is written with standard permissions (0644): the owner can read and write it, while group members and other users have read-only access.
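The effect of mode 0644 can be seen with a small stand-alone snippet; writeExport is a hypothetical helper, not part of this package:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writeExport writes exported data with mode 0644, matching the
// permissions described above: owner read/write, group and others
// read-only (subject to the process umask).
func writeExport(path string, data []byte) error {
	return os.WriteFile(path, data, 0o644)
}

func main() {
	path := filepath.Join(os.TempDir(), "repository-export.rdf")
	data := []byte(`<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"/>`)
	if err := writeExport(path, data); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	if info, err := os.Stat(path); err == nil {
		fmt.Println("mode:", info.Mode().Perm()) // typically -rw-r--r--
	}
}
```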
Error Conditions:
- Repository not found or access denied
- Network connectivity issues during download
- Authentication failures
- Insufficient disk space for export file
- File system permission errors
- Server-side errors during data serialization
Example Usage:
err := ExportRDFXml(
    "http://localhost:8080/rdf4j-server",
    "my-repository",
    "admin",
    "password",
    "/backup/repository-export.rdf",
    "application/rdf+xml",
)
if err != nil {
    log.Fatal("Export failed:", err)
}
Backup and Recovery:
Exported files can be used for: - Repository backup and disaster recovery - Data migration between different RDF stores - Data analysis and processing with external tools - Version control and change tracking
Performance Considerations:
- Large repositories may take significant time to export
- Memory usage depends on repository size and export format
- Network bandwidth affects download time
- Consider compression for large export files
func ExtractJSONLDType ¶ added in v0.0.7
ExtractJSONLDType extracts the @type value from a JSON-LD document. This is a convenience function for type checking.
Parameters:
- doc: JSON-LD document
Returns:
- string: The @type value
- error: Error if @type not found
Example Usage:
docType, err := ExtractJSONLDType(doc)
if err != nil {
    log.Printf("No type found: %v", err)
    return
}
switch docType {
case "SoftwareApplication":
    // Handle container
case "ComputerServer":
    // Handle host
}
func FindTyped ¶ added in v0.0.7
func FindTyped[T any](c *CouchDBService, query MangoQuery) ([]T, error)
FindTyped executes a Mango query with typed results using generics. This provides compile-time type safety for query results.
Type Parameter:
- T: Expected document type
Parameters:
- query: MangoQuery structure with filtering and options
Returns:
- []T: Slice of documents matching type T
- error: Query execution or parsing errors
Example Usage:
type Container struct {
    ID       string `json:"_id"`
    Name     string `json:"name"`
    Status   string `json:"status"`
    HostedOn string `json:"hostedOn"`
}

query := MangoQuery{
    Selector: map[string]interface{}{
        "status": "running",
        "@type":  "SoftwareApplication",
    },
    Limit: 50,
}

containers, err := FindTyped[Container](service, query)
if err != nil {
    log.Printf("Query failed: %v", err)
    return
}
for _, container := range containers {
    fmt.Printf("Container %s is %s\n", container.Name, container.Status)
}
func GetAllDocuments ¶ added in v0.0.7
func GetAllDocuments[T any](c *CouchDBService, docType string) ([]T, error)
GetAllDocuments retrieves all documents from the database with optional type filtering. This function uses Go generics to return strongly-typed document results.
Type Parameter:
- T: Expected document type (will skip documents that don't match)
Parameters:
- docType: Optional document type filter (e.g., "@type" field value) Pass empty string to retrieve all documents
Returns:
- []T: Slice of documents of type T
- error: Query execution or parsing errors
Type Filtering:
If docType is provided, only documents whose "@type" field matches are returned; documents without an "@type" field, or with a different value, are skipped.
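The skip rule can be expressed as a small self-contained filter; filterByType is an illustrative helper, not the package's implementation:

```go
package main

import "fmt"

// filterByType mirrors the filtering rule described above: keep only
// documents whose "@type" field equals docType. Documents without the
// field, or with a non-string or different value, are skipped.
func filterByType(docs []map[string]interface{}, docType string) []map[string]interface{} {
	var out []map[string]interface{}
	for _, d := range docs {
		if t, ok := d["@type"].(string); ok && t == docType {
			out = append(out, d)
		}
	}
	return out
}

func main() {
	docs := []map[string]interface{}{
		{"@type": "SoftwareApplication", "name": "nginx"},
		{"@type": "ComputerServer", "name": "host-1"},
		{"name": "untyped"}, // no "@type": skipped
	}
	fmt.Println(len(filterByType(docs, "SoftwareApplication"))) // prints 1
}
```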
Performance Considerations:
- Retrieves all documents using _all_docs view
- Memory usage scales with database size
- Consider pagination for very large databases
Example Usage:
// Get all documents of any type
allDocs, err := GetAllDocuments[map[string]interface{}](service, "")

// Get only Container documents
containers, err := GetAllDocuments[Container](service, "SoftwareApplication")
for _, container := range containers {
    fmt.Printf("Container: %s (%s)\n", container.Name, container.Status)
}
func GetDocument ¶ added in v0.0.7
func GetDocument[T any](c *CouchDBService, id string) (*T, error)
GetDocument retrieves a generic document from CouchDB by ID. This function uses Go generics to return strongly-typed document results.
Type Parameter:
- T: Expected document type (must match stored document structure)
Parameters:
- id: Document identifier to retrieve
Returns:
- *T: Pointer to the retrieved document of type T
- error: Document not found, access, or parsing errors
Error Handling:
- Returns CouchDBError for HTTP errors (404, 401, etc.)
- Returns parsing error if document structure doesn't match type T
Example Usage:
type Container struct {
    ID     string `json:"_id"`
    Rev    string `json:"_rev"`
    Name   string `json:"name"`
    Status string `json:"status"`
}

container, err := GetDocument[Container](service, "container-123")
if err != nil {
    if couchErr, ok := err.(*CouchDBError); ok && couchErr.IsNotFound() {
        fmt.Println("Container not found")
        return
    }
    log.Printf("Error: %v", err)
    return
}
fmt.Printf("Container %s is %s\n", container.Name, container.Status)
func GetDocumentsByType ¶ added in v0.0.7
func GetDocumentsByType[T any](c *CouchDBService, docType string) ([]T, error)
GetDocumentsByType retrieves documents filtered by @type field using Mango query. This is more efficient than GetAllDocuments for type-filtered queries on large databases.
Type Parameter:
- T: Expected document type
Parameters:
- docType: Value of the "@type" field to filter by
Returns:
- []T: Slice of matching documents
- error: Query execution or parsing errors
Index Recommendation:
For optimal performance, create an index on the @type field:
index := Index{Name: "type-index", Fields: []string{"@type"}, Type: "json"}
service.CreateIndex(index)
Example Usage:
containers, err := GetDocumentsByType[Container](service, "SoftwareApplication")
if err != nil {
    log.Printf("Query failed: %v", err)
    return
}
for _, container := range containers {
    fmt.Printf("Found container: %s\n", container.Name)
}
func GraphDBDeleteGraph ¶
GraphDBDeleteGraph removes a specific named graph from a repository using SPARQL UPDATE. This function executes a DROP GRAPH operation to permanently delete all triples in the specified named graph while preserving other graphs in the repository.
Named Graph Deletion:
Uses SPARQL UPDATE "DROP GRAPH" command to: - Remove all triples from the specified named graph - Preserve other named graphs and default graph content - Execute atomic deletion operation - Update graph metadata and indexes
Parameters:
- URL: Base URL of the GraphDB server
- user: Username for HTTP Basic Authentication
- pass: Password for HTTP Basic Authentication
- repo: Repository identifier containing the graph
- graph: Named graph URI to delete
Returns:
- error: Authentication, execution, or server errors
SPARQL Operation:
Constructs and executes: DROP GRAPH <graph-uri> This standard SPARQL UPDATE operation removes every triple stored in the named graph identified by the given URI.
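Building that update string is a one-liner. The sketch below (dropGraphUpdate is a hypothetical helper) also shows SPARQL's DROP SILENT variant, which suppresses the error when the graph does not exist:

```go
package main

import "fmt"

// dropGraphUpdate sketches the SPARQL UPDATE string GraphDBDeleteGraph
// executes. The graph URI is wrapped in angle brackets; validating that
// it is a legal IRI is left to the caller in this sketch.
func dropGraphUpdate(graphURI string) string {
	return fmt.Sprintf("DROP GRAPH <%s>", graphURI)
}

func main() {
	fmt.Println(dropGraphUpdate("http://example.org/graph/outdated-data"))
	// Defensive variant: no error if the graph is already gone.
	fmt.Println("DROP SILENT GRAPH <http://example.org/graph/maybe-missing>")
}
```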
Graph URI Format:
The graph parameter should be a valid IRI (Internationalized Resource Identifier) that uniquely identifies the named graph within the repository.
Success Conditions:
Graph deletion is successful when the server returns: - HTTP 204 No Content status code - Empty response body indicating successful execution
Error Conditions:
- Named graph does not exist (may be silent success)
- Invalid graph URI format
- Authentication failures or insufficient permissions
- Repository not found or inaccessible
- SPARQL syntax or execution errors
Data Impact:
- Only affects triples in the specified named graph
- Default graph and other named graphs remain unchanged
- Graph metadata and context information is removed
- Indexes are updated to reflect graph deletion
Example Usage:
err := GraphDBDeleteGraph(
    "http://localhost:7200", "admin", "password",
    "knowledge-base", "http://example.org/graph/outdated-data")
if err != nil {
    log.Fatal("Graph deletion failed:", err)
}
log.Println("Named graph deleted successfully")
Safety Considerations:
- Verify graph URI to prevent accidental deletions
- Backup important graph data before deletion
- Check for applications depending on the graph
- Consider graph dependencies and relationships
func GraphDBDeleteRepository ¶ added in v0.0.3
GraphDBDeleteRepository removes a repository and all its data from GraphDB server. This function permanently deletes a repository configuration and all stored RDF data, providing a clean removal mechanism for repository lifecycle management.
Repository Deletion:
Sends a DELETE request to GraphDB's REST repository endpoint to: - Remove repository configuration and metadata - Delete all stored RDF triples and named graphs - Clean up associated indexes and cached data - Free storage space and system resources
Parameters:
- URL: Base URL of the GraphDB server
- user: Username for HTTP Basic Authentication
- pass: Password for HTTP Basic Authentication
- repo: Repository identifier to delete
Returns:
- error: Authentication, permission, or deletion errors
Data Loss Warning:
This operation is irreversible and will permanently destroy: - All RDF triples and named graphs in the repository - Repository configuration and settings - Custom indexes and optimization data - Query result caches and temporary data
Success Conditions:
Repository deletion is successful when the server returns: - HTTP 200 OK (deletion completed successfully) - HTTP 204 No Content (deletion completed without response body)
Authentication Requirements:
- Valid credentials with repository deletion permissions
- Administrative access to the GraphDB server
- Appropriate role-based access controls
Error Conditions:
- Repository not found (may already be deleted)
- Authentication failures or insufficient permissions
- Repository currently in use by active connections
- Server-side errors during deletion process
- Network connectivity issues
Example Usage:
err := GraphDBDeleteRepository("http://localhost:7200", "admin", "password", "test-repository")
if err != nil {
    log.Fatal("Repository deletion failed:", err)
}
log.Println("Repository deleted successfully")
Safety Recommendations:
- Always backup important repositories before deletion
- Verify repository name to prevent accidental deletions
- Check for dependent applications or integrations
- Implement proper access controls and audit logging
- Consider soft deletion patterns for critical systems
Administrative Use:
This function is typically used for: - Development environment cleanup - Automated testing teardown procedures - Repository lifecycle management - Data migration and reorganization
func GraphDBExportGraphRdf ¶
GraphDBExportGraphRdf exports a specific named graph from a repository to an RDF/XML file. This function retrieves all triples from a named graph and saves them in RDF/XML format for backup, analysis, or data transfer purposes.
Named Graph Export:
Downloads all RDF triples from the specified named graph using GraphDB's RDF graphs service endpoint, providing: - Graph-specific data extraction - Complete triple preservation with context - Standard RDF/XML serialization format - File-based output for external processing
Parameters:
- url: Base URL of the GraphDB server
- user: Username for HTTP Basic Authentication
- pass: Password for HTTP Basic Authentication
- repo: Repository identifier containing the graph
- graph: Named graph URI to export
- exportFile: Output filename for the exported RDF/XML data
Returns:
- error: File creation, network, or server errors
Export Process:
- Constructs request URL with graph parameter
- Requests RDF/XML representation of the named graph
- Downloads all triples from the specified graph context
- Writes RDF/XML data to the specified output file
RDF/XML Output:
The exported file contains standard RDF/XML with: - All triples from the named graph - Proper namespace declarations - XML structure following RDF/XML specification - UTF-8 encoding for international character support
Graph Context Preservation:
While the exported RDF/XML doesn't explicitly contain graph context information, all triples that belonged to the named graph are preserved for reimport into the same or different graph contexts.
Success Conditions:
Export is successful when the server returns: - HTTP 200 OK status code - Valid RDF/XML content in response body - Successful file creation and writing
Error Conditions:
- Named graph does not exist or is empty
- Authentication failures or insufficient permissions
- File creation or write permission errors
- Network connectivity issues during download
- Server errors during RDF serialization
Example Usage:
err := GraphDBExportGraphRdf(
    "http://localhost:7200", "admin", "password",
    "knowledge-base", "http://example.org/graph/publications",
    "/backup/publications-export.rdf")
if err != nil {
    log.Fatal("Graph export failed:", err)
}
log.Println("Named graph exported successfully")
Use Cases:
- Graph-specific backup and archival
- Data migration between repositories
- Selective data analysis and processing
- Graph-based data distribution and sharing
- Integration with external RDF tools and systems
func GraphDBImportGraphRdf ¶
GraphDBImportGraphRdf imports RDF data into a specific named graph within a repository. This function enables graph-level data management by importing RDF/XML content into designated named graphs for organized data storage and querying.
Named Graph Import:
Loads RDF data into a specific named graph context within the repository, enabling: - Data organization by source, domain, or purpose - Graph-level access control and permissions - Selective querying and data management - Logical separation of different datasets
Parameters:
- url: Base URL of the GraphDB server
- user: Username for HTTP Basic Authentication
- pass: Password for HTTP Basic Authentication
- repo: Repository identifier for data import
- graph: Named graph URI for data context
- restoreFile: Path to RDF/XML file containing data
Returns:
- error: File reading, upload, or server errors
Graph Context Management:
The graph parameter specifies the named graph URI where data will be stored, enabling graph-aware operations and queries. The URI typically follows standard IRI format for global uniqueness.
RDF/XML Format:
The function expects RDF/XML formatted data files containing: - Valid RDF triples with subject, predicate, object - Namespace declarations for vocabulary terms - Proper XML structure and RDF syntax - Compatible encoding (UTF-8 recommended)
Import Process:
- Reads RDF/XML file from filesystem
- Constructs URL with graph parameter
- Uploads data via HTTP PUT to RDF graphs service
- Validates successful import response
Success Conditions:
Data import is successful when the server returns: - HTTP 204 No Content status code - Empty response body indicating successful processing
Error Conditions:
- RDF file not found or unreadable
- Invalid RDF/XML syntax or structure
- Repository does not exist
- Graph URI format errors
- Authentication failures or insufficient permissions
Example Usage:
err := GraphDBImportGraphRdf(
    "http://localhost:7200", "admin", "password",
    "knowledge-base", "http://example.org/graph/publications",
    "/data/publications.rdf")
if err != nil {
    log.Fatal("Graph import failed:", err)
}
Graph Organization Strategies:
- Domain-based: separate graphs for different knowledge domains
- Source-based: separate graphs for different data sources
- Time-based: separate graphs for different time periods
- Access-based: separate graphs for different security levels
func GraphDBRepositoryBrf ¶
GraphDBRepositoryBrf exports all RDF data from a repository in Binary RDF format. This function downloads the complete repository content in GraphDB's efficient Binary RDF (BRF) format for high-performance backup and data transfer operations.
Binary RDF Format:
BRF is GraphDB's proprietary binary serialization that provides: - Compact data representation with reduced file sizes - Fast serialization and deserialization performance - Preservation of all RDF data including named graphs - Optimized format for large dataset operations
Parameters:
- url: Base URL of the GraphDB server
- user: Username for HTTP Basic Authentication
- pass: Password for HTTP Basic Authentication
- repo: Repository identifier to export data from
Returns:
- string: Filename of the exported data file (repo.brf)
Export Process:
- Connects to the repository statements endpoint
- Requests data in Binary RDF format
- Downloads all triples and named graphs
- Saves to a .brf file for future restoration
File Output:
Creates a file named "{repo}.brf" containing all repository data in binary format, suitable for:
- High-performance backup operations
- Data migration between GraphDB instances
- Bulk data transfer for analytics
- Repository cloning and replication
Performance Benefits:
- Significantly smaller file sizes compared to RDF/XML or Turtle
- Faster download and upload operations
- Reduced network bandwidth usage
- Optimized for GraphDB's internal data structures
Error Handling:
- Network errors are logged and may cause termination
- HTTP errors result in fatal logging and program exit
- File creation errors are logged as informational
Example Usage:
backupFile := GraphDBRepositoryBrf("http://localhost:7200", "admin", "password", "production-data")
fmt.Printf("Repository backup saved to: %s\n", backupFile)
Restore Compatibility:
The exported BRF file can be restored using GraphDBRestoreBrf() function to recreate the repository with identical data content.
func GraphDBRepositoryConf ¶
GraphDBRepositoryConf downloads the configuration of a GraphDB repository in Turtle format. This function retrieves the complete repository configuration for backup, analysis, or recreation purposes in a human-readable Turtle serialization.
Configuration Export:
Downloads the repository configuration from GraphDB's REST API endpoint, saving it as a Turtle (.ttl) file that contains:
- Repository type and storage backend settings
- Index configurations and performance tuning
- Security and access control settings
- Plugin and extension configurations
Parameters:
- url: Base URL of the GraphDB server
- user: Username for HTTP Basic Authentication
- pass: Password for HTTP Basic Authentication
- repo: Repository identifier to download configuration for
Returns:
- string: Filename of the downloaded configuration file (repo.ttl)
File Output:
Creates a file named "{repo}.ttl" in the current directory containing
the complete repository configuration in Turtle format, suitable for:
- Repository backup and disaster recovery
- Configuration analysis and documentation
- Repository recreation on different servers
- Version control of repository settings
Error Handling:
- Network errors are logged via eve.Logger.Error
- HTTP errors cause fatal logging and program termination
- File creation errors are logged but don't terminate execution
Example Usage:
configFile := GraphDBRepositoryConf("http://localhost:7200", "admin", "password", "my-repo")
fmt.Printf("Repository configuration saved to: %s\n", configFile)
Turtle Format Benefits:
- Human-readable RDF configuration
- Easy editing and version control
- Standard RDF serialization format
- Compatible with RDF tools and editors
func GraphDBRestoreBrf ¶
GraphDBRestoreBrf restores RDF data from a Binary RDF file into a repository. This function uploads BRF data to recreate repository content with high performance and complete data fidelity preservation.
Binary RDF Restoration:
Uploads Binary RDF data directly to the repository statements endpoint, restoring all triples and named graphs with:
- High-performance data upload using binary format
- Complete preservation of graph structure and content
- Efficient processing of large datasets
- Atomic restoration operation
Parameters:
- url: Base URL of the GraphDB server
- user: Username for HTTP Basic Authentication
- pass: Password for HTTP Basic Authentication
- restoreFile: Path to the Binary RDF file (.brf)
Returns:
- error: File reading, upload, or server errors
Repository Inference:
The target repository name is automatically inferred from the BRF filename (excluding the .brf extension), enabling automated restoration workflows with consistent naming patterns.
Data Upload Process:
- Reads the complete BRF file into memory
- Extracts repository name from filename
- Uploads data to the repository statements endpoint
- Validates successful data import response
Performance Characteristics:
- Binary format provides fastest upload speeds
- Single HTTP request for complete data transfer
- Minimal CPU overhead during upload
- Efficient memory usage for large datasets
Success Conditions:
Data restoration is successful when the server returns:
- HTTP 204 No Content status code
- Empty response body (data loaded successfully)
Error Conditions:
- BRF file not found or unreadable
- Repository does not exist (must be created first)
- Authentication failures or insufficient permissions
- Server-side data processing errors
- Memory limitations with very large datasets
Example Usage:
err := GraphDBRestoreBrf("http://localhost:7200", "admin", "password", "production-data.brf")
if err != nil {
log.Fatal("Data restore failed:", err)
}
log.Println("Repository data restored successfully")
Workflow Integration:
Typically used after GraphDBRestoreConf() to complete repository restoration:
1. Restore repository configuration
2. Restore repository data from BRF file
3. Verify data integrity and accessibility
func GraphDBRestoreConf ¶
GraphDBRestoreConf restores a repository configuration from a Turtle file. This function uploads a repository configuration file to create a new repository with the settings defined in the Turtle configuration.
Configuration Restoration:
Uses multipart form upload to send the Turtle configuration file to GraphDB's repository creation endpoint, enabling:
- Repository recreation from backup configurations
- Deployment automation with predefined settings
- Configuration migration between environments
- Template-based repository creation
Parameters:
- url: Base URL of the GraphDB server
- user: Username for HTTP Basic Authentication
- pass: Password for HTTP Basic Authentication
- restoreFile: Path to the Turtle configuration file (.ttl)
Returns:
- error: File reading, upload, or server errors
Upload Process:
- Reads the Turtle configuration file from disk
- Creates multipart form data with the file content
- Uploads to GraphDB's REST repositories endpoint
- Validates successful repository creation response
File Format Requirements:
The restore file must be a valid Turtle configuration containing:
- Repository type and identifier
- Storage backend configuration
- Index and performance settings
- Security and access control definitions
Success Conditions:
Repository creation is successful when the server returns:
- HTTP 201 Created status code
- Confirmation message in response body
Error Conditions:
- Configuration file not found or unreadable
- Invalid Turtle syntax in configuration
- Repository ID conflicts with existing repositories
- Authentication failures or insufficient permissions
- Server-side configuration validation errors
Example Usage:
err := GraphDBRestoreConf("http://localhost:7200", "admin", "password", "backup-repo.ttl")
if err != nil {
log.Fatal("Configuration restore failed:", err)
}
log.Println("Repository restored successfully")
Best Practices:
- Validate configuration files before restoration
- Ensure repository IDs don't conflict with existing ones
- Test configurations in development before production use
- Backup existing repositories before restoration operations
func GraphDBZitiClient ¶ added in v0.0.3
GraphDBZitiClient creates an HTTP client configured for Ziti zero-trust networking. This function enables secure GraphDB access through Ziti overlay networks, providing network invisibility and strong identity-based authentication.
Ziti Integration:
Uses the ZitiSetup function to create a transport that routes all GraphDB traffic through the Ziti network overlay, ensuring:
- End-to-end encryption for all database communications
- Network invisibility (no exposed ports or network discovery)
- Identity-based access control and policy enforcement
- Automatic service discovery within the Ziti network
Parameters:
- identityFile: Path to Ziti identity file for authentication
- serviceName: Name of the Ziti service for GraphDB access
Returns:
- *http.Client: HTTP client configured with Ziti transport
- error: Ziti setup or configuration errors
Client Configuration:
The returned client includes:
- Custom transport for Ziti network routing
- 30-second timeout for database operations
- Standard HTTP client features for GraphDB API calls
Error Conditions:
- Invalid Ziti identity file or credentials
- Ziti service not found or not accessible
- Network connectivity issues to Ziti controllers
- Invalid service name or configuration
Example Usage:
client, err := GraphDBZitiClient("/path/to/identity.json", "graphdb-service")
if err != nil {
log.Fatal("Failed to create Ziti client:", err)
}
HttpClient = client // Use Ziti client for all GraphDB operations
Security Benefits:
- GraphDB server becomes invisible to traditional networks
- Strong cryptographic identity for all connections
- Dynamic policy enforcement and access control
- Protection against network-based attacks and reconnaissance
func ImportRDF ¶
func ImportRDF(serverURL, repositoryID, username, password, rdfFilePath, contentType string) ([]byte, error)
ImportRDF imports RDF data from a file into an RDF4J repository. This function uploads RDF data in various serialization formats to a specified repository, handling encoding validation and content negotiation.
Import Process:
- Reads RDF data from the specified file path
- Removes UTF-8 BOM if present to ensure parser compatibility
- Validates UTF-8 encoding to prevent parser errors
- Uploads data to the repository via HTTP POST
- Returns server response for status verification
Parameters:
- serverURL: Base URL of the RDF4J server
- repositoryID: Target repository identifier
- username: Username for HTTP Basic Authentication
- password: Password for HTTP Basic Authentication
- rdfFilePath: File system path to the RDF data file
- contentType: MIME type for the RDF serialization format
Returns:
- []byte: Raw response body from the server
- error: File reading, encoding validation, or HTTP errors
Supported Content Types:
- "application/rdf+xml": RDF/XML format
- "text/turtle": Turtle format
- "application/ld+json": JSON-LD format
- "application/n-triples": N-Triples format
- "application/n-quads": N-Quads format
File Validation:
The function performs UTF-8 validation to ensure the RDF file contains valid Unicode text. Invalid UTF-8 sequences will cause the function to return an error before attempting upload.
Error Conditions:
- File not found or permission denied
- Invalid UTF-8 encoding in the file
- Network connectivity issues
- Authentication failures
- Repository not found or access denied
- Invalid RDF syntax (server-side validation)
Example Usage:
response, err := ImportRDF(
"http://localhost:8080/rdf4j-server",
"my-repository",
"admin",
"password",
"/path/to/data.rdf",
"application/rdf+xml",
)
if err != nil {
log.Fatal("Import failed:", err)
}
fmt.Printf("Server response: %s\n", string(response))
Transaction Behavior:
The import operation is typically atomic at the repository level, meaning either all triples are imported successfully or none are added in case of errors during processing.
Performance Considerations:
- Large files are uploaded entirely into memory before sending
- Consider chunking or streaming for very large datasets
- Server timeout settings may affect large imports
- Repository performance depends on storage backend type
func NormalizeJSONLD ¶ added in v0.0.7
NormalizeJSONLD performs basic JSON-LD normalization (canonicalization). This produces a deterministic representation for comparison and hashing.
Parameters:
- doc: Document to normalize
Returns:
- string: Normalized JSON-LD as string
- error: Normalization errors
Normalization Process:
Creates canonical form:
- Sorts all keys alphabetically
- Removes whitespace
- Ensures consistent formatting
- Produces deterministic output
Use Cases:
- Document comparison and deduplication
- Cryptographic signing of JSON-LD
- Cache key generation
- Content-addressed storage
Example Usage:
doc := map[string]interface{}{
"@context": "https://schema.org",
"name": "nginx",
"@type": "SoftwareApplication",
}
normalized, err := NormalizeJSONLD(doc)
if err != nil {
log.Printf("Normalization failed: %v", err)
return
}
// normalized is deterministic string representation
fmt.Printf("Hash: %x\n", sha256.Sum256([]byte(normalized)))
func PGInfo ¶
func PGInfo(pgUrl string)
PGInfo establishes a PostgreSQL connection and displays database information. This function provides database connectivity testing and metadata discovery, including connection pool configuration and table listing for administrative purposes.
Connection Pool Configuration:
The function configures PostgreSQL connection pooling with production-ready settings:
- MaxIdleConns: 10 connections in idle pool for efficiency
- MaxOpenConns: 100 maximum concurrent connections for load management
- ConnMaxLifetime: 1 hour maximum connection reuse for stability
Database Discovery:
Queries the information_schema to discover existing tables in the public schema, providing visibility into the current database structure for debugging and administrative purposes.
Parameters:
- pgUrl: PostgreSQL connection string (format: "host=localhost user=username dbname=mydb sslmode=disable")
Connection String Format:
Standard PostgreSQL connection strings with parameters:
- host: Database server hostname or IP
- port: Database server port (default 5432)
- user: Database username
- password: Database password
- dbname: Database name
- sslmode: SSL connection mode (disable, require, verify-ca, verify-full)
Error Handling:
Uses panic for unrecoverable errors, indicating this function is intended for initialization and administrative scenarios where database connectivity is essential for application operation.
Output Information:
- Database connection object details
- List of existing tables in the public schema
- Success confirmation message
Example Usage:
PGInfo("host=localhost user=admin password=secret dbname=rabbitlogs sslmode=disable")
Security Considerations:
- Connection strings may contain sensitive credentials
- Use environment variables or secure configuration for production
- Consider connection encryption for sensitive environments
- Implement proper access controls and authentication
Performance Impact:
- Connection pool settings affect resource usage and performance
- Table discovery query may be slow with many tables
- Connection lifetime affects memory usage and stability
func PGMigrations ¶
func PGMigrations(pgUrl string)
PGMigrations performs database schema migrations for RabbitMQ logging tables. This function ensures the database schema is up-to-date with the current model definitions, creating or updating tables as needed for proper operation.
Migration Process:
Uses GORM's AutoMigrate functionality to:
- Create tables if they don't exist
- Add new columns to existing tables
- Update column types when compatible
- Create indexes defined in model tags
- Handle foreign key relationships
Parameters:
- pgUrl: PostgreSQL connection string for database access
Migration Safety:
GORM AutoMigrate is designed to be safe for production use:
- Only adds new columns, never removes existing ones
- Preserves existing data during schema changes
- Creates tables and indexes if they don't exist
- Does not modify existing column types incompatibly
Schema Evolution:
- New fields added to RabbitLog model will create new columns
- Index changes in model tags will be applied
- Foreign key relationships will be established
- Constraints defined in tags will be applied
Error Handling:
Uses panic for migration failures, indicating that database schema issues are critical and prevent application startup. This ensures that schema problems are addressed before the application runs.
Best Practices:
- Run migrations during application startup
- Test migrations in development environment first
- Backup database before running migrations in production
- Monitor migration performance for large tables
Example Usage:
PGMigrations("host=localhost user=admin password=secret dbname=rabbitlogs sslmode=disable")
Production Considerations:
- Migrations may take time with large existing tables
- Consider maintenance windows for significant schema changes
- Monitor application logs for migration success/failure
- Implement rollback procedures for critical schema changes
func PGRabbitLogFormatList ¶
PGRabbitLogFormatList retrieves RabbitMQ log entries with configurable output formats. This function provides flexible data access with multiple serialization options for different consumption patterns and integration requirements.
Format Support:
- "application/json": JSON serialization for API responses and web clients
- "struct": Raw Go structs for internal application processing
- Other formats: unsupported; the request is logged and nil is returned
Parameters:
- pgUrl: PostgreSQL connection string for database access
- format: Desired output format ("application/json" or "struct")
Returns:
- interface{}: Data in requested format ([]byte for JSON, []RabbitLog for struct)
- nil: On errors or unsupported formats
JSON Serialization:
When format is "application/json", the function: - Marshals all log records to JSON byte array - Includes all fields from RabbitLog struct and embedded GORM model - Provides timestamp formatting according to JSON standards - Binary log data is automatically base64-encoded by JSON marshaler
Struct Format:
When format is "struct", returns raw []RabbitLog slice for: - Direct access to typed data without serialization overhead - Internal application processing with full type safety - Further processing or filtering by calling code
Error Handling:
- Database connection errors are logged and return nil
- Query errors are logged and return nil
- JSON marshaling errors are logged and return nil
- Unsupported formats are logged and return nil
Use Cases:
- REST API endpoints serving log data
- Internal microservice communication
- Data export and backup operations
- Integration with monitoring and analytics systems
Example Usage:
// For API response
jsonData := PGRabbitLogFormatList(connectionString, "application/json")

// For internal processing
logs := PGRabbitLogFormatList(connectionString, "struct").([]RabbitLog)
Performance Considerations:
- JSON serialization adds CPU overhead for large datasets
- Memory usage depends on number of records and serialization format
- Network transfer size varies significantly between formats
- Consider pagination for large result sets
Return Type Handling:
Callers must type assert the interface{} return value:
- JSON format returns []byte
- Struct format returns []RabbitLog
- Check for nil before type assertion
func PGRabbitLogList ¶
func PGRabbitLogList(pgUrl string)
PGRabbitLogList retrieves and displays all RabbitMQ log entries from the database. This function provides a complete listing of log records with formatted output for debugging, monitoring, and administrative purposes.
Data Retrieval:
Uses GORM's Find method to retrieve all RabbitLog records from the database, including all fields and automatically populated timestamps from the embedded GORM model.
Parameters:
- pgUrl: PostgreSQL connection string for database access
Output Format:
Each log entry is displayed with:
- Complete RabbitLog struct information (ID, timestamps, DocumentID, State, Version)
- Decoded log data as string (converted from byte array)
- Structured format suitable for console output and debugging
Error Handling:
- Database connection errors cause panic for immediate feedback
- Query errors are logged via eve.Logger.Error but don't halt execution
- Graceful handling allows partial results even with some errors
Log Data Display:
The Log field contains raw binary data which is displayed as a string for human readability, enabling inspection of log content. Binary data that is not valid UTF-8 may display with special characters.
Performance Considerations:
- Retrieves all records without pagination (may be slow with large datasets)
- Memory usage grows with number of log entries
- Consider adding LIMIT clauses for production use with large tables
- Network bandwidth usage increases with log data size
Use Cases:
- Development debugging and log inspection
- Administrative monitoring of processing status
- Troubleshooting document processing issues
- Audit trail review for compliance purposes
Example Output:
{1 2024-01-15 10:30:00 2024-01-15 10:35:00 <nil> doc-12345 completed v1.0.0 [log data]} => Processing completed successfully
Production Alternatives:
For production environments, consider:
- Pagination support for large datasets
- Filtering options by DocumentID, State, or date ranges
- Export to file formats for offline analysis
- Integration with log aggregation systems
func PGRabbitLogNew ¶
func PGRabbitLogNew(pgUrl, documentId, state, version string)
PGRabbitLogNew creates a new RabbitMQ log entry in the database. This function inserts a new log record with document identification, processing state, and version information for message tracking purposes.
Record Creation:
Creates a new RabbitLog record with:
- Automatic ID assignment by database
- Automatic CreatedAt timestamp from GORM
- Provided DocumentID, State, and Version values
- Empty Log field (to be populated later via updates)
Parameters:
- pgUrl: PostgreSQL connection string
- documentId: Unique identifier for the document being processed
- state: Initial processing state (e.g., "started", "initialized")
- version: Document or processing version identifier
Database Transaction:
Uses GORM's Create method, which automatically handles:
- SQL generation and parameter binding
- Transaction management for single record insertion
- Primary key assignment and return
- Timestamp population for CreatedAt and UpdatedAt
Error Handling:
Uses panic for database errors, indicating that log creation failures are critical and should halt processing to prevent data loss or inconsistent state tracking.
Usage Pattern:
Typically called at the beginning of document processing to establish a log entry that will be updated throughout the processing lifecycle:
1. Create initial log entry with "started" state
2. Update log entry with progress and intermediate states
3. Final update with "completed" or "failed" state and full log data
Example Usage:
PGRabbitLogNew(connectionString, "doc-12345", "started", "v1.0.0")
Data Validation:
Consider adding validation for: - DocumentID format and uniqueness constraints - State values against allowed enumeration - Version format according to versioning scheme
Performance Considerations:
- Single record insertion is efficient
- Consider batch operations for high-volume scenarios
- Database indexes on DocumentID improve lookup performance
func PGRabbitLogUpdate ¶
PGRabbitLogUpdate updates an existing RabbitMQ log entry with new state and log data. This function modifies log records to reflect processing progress and capture detailed log information throughout the document processing lifecycle.
Update Operations:
Updates existing RabbitLog records by DocumentID with:
- New processing state to track progress
- Raw binary log data stored directly in bytea field
- Automatic UpdatedAt timestamp via GORM
Parameters:
- pgUrl: PostgreSQL connection string
- documentId: Document identifier to locate the record for update
- state: New processing state (e.g., "running", "completed", "failed")
- logText: Binary log data to be stored directly
Database Operation:
Uses GORM's Model().Where().Updates() pattern for:
- Efficient conditional updates by DocumentID
- Multiple field updates in single database transaction
- Automatic timestamp management for UpdatedAt field
- Safe parameter binding to prevent SQL injection
Data Storage:
Binary log data is stored directly in PostgreSQL's bytea field:
- No encoding overhead, saving CPU and storage space
- Handles arbitrary binary content including null bytes
- Maintains data integrity without encoding/decoding steps
- Optimal performance for binary data storage
Error Handling:
Uses panic for database errors, indicating that log update failures are critical for maintaining processing state consistency and should halt execution to prevent data loss or inconsistent tracking.
Update Pattern:
Typically used to track processing progress:
1. Initial record created with basic information
2. Intermediate updates with progress states and partial logs
3. Final update with completion state and full log data
Example Usage:
logData := []byte("Processing completed successfully with result: {...}")
PGRabbitLogUpdate(connectionString, "doc-12345", "completed", logData)
Concurrency Considerations:
- Multiple updates to same DocumentID are handled by database locking
- Last update wins for conflicting simultaneous updates
- Consider optimistic locking for critical update scenarios
Performance Impact:
- Direct binary storage without encoding overhead
- Index on DocumentID ensures efficient record location
- Single transaction minimizes database round trips
- Large log data may impact update performance
Storage Optimization:
For production environments with large log data, consider:
- Compression before storage for space efficiency
- Separate blob storage for very large log files
- Log rotation and archival strategies
- Database partitioning for time-based queries
func PoolPartyProjects ¶ added in v0.0.6
func PoolPartyProjects(baseURL, username, password, templateDir string)
PoolPartyProjects is a convenience function that creates a PoolParty client, fetches all projects, and prints them with troubleshooting tips if the operation fails. This is useful for quick CLI tools and debugging connection issues.
Parameters:
- baseURL: The PoolParty server base URL
- username: Authentication username
- password: Authentication password
- templateDir: Directory for SPARQL query templates (can be empty string)
Example:
PoolPartyProjects("https://poolparty.example.com", "admin", "password", "./templates")
func PrintProjects ¶ added in v0.0.6
func PrintProjects(projects []Project)
PrintProjects prints a list of PoolParty projects in a human-readable format. Each project is displayed with its ID, title, description, URI, type, status, creation date, and modification date. This is useful for debugging and CLI output.
Parameters:
- projects: List of projects to print
Example:
projects, _ := client.ListProjects()
PrintProjects(projects)
func RunSparQLFromFile ¶ added in v0.0.6
func RunSparQLFromFile(baseURL, username, password, projectID, templateDir, tmplFileName, contentType string, params interface{}) ([]byte, error)
RunSparQLFromFile is a convenience function that creates a PoolParty client and executes a SPARQL query from a template file in a single call. This is useful for one-off queries where you don't need to maintain a persistent client.
Parameters:
- baseURL: The PoolParty server base URL
- username: Authentication username
- password: Authentication password
- projectID: The PoolParty project/thesaurus ID
- templateDir: Directory containing template files
- tmplFileName: Name of the template file
- contentType: Desired response format
- params: Parameters to pass to the template
Returns:
- []byte: Query results in the requested format
- error: Any error encountered during execution
Example:
params := map[string]string{"limit": "100"}
results, err := RunSparQLFromFile("https://poolparty.example.com", "admin", "pass",
"myproject", "./templates", "concepts.sparql", "application/json", params)
func SetJSONLDContext ¶ added in v0.0.7
SetJSONLDContext adds or updates the @context field in a document. This is a convenience function for adding JSON-LD context.
Parameters:
- doc: Document to modify
- context: Context URL or object
Returns:
- map[string]interface{}: Document with updated context
Example Usage:
doc := map[string]interface{}{
"name": "nginx",
"status": "running",
}
doc = SetJSONLDContext(doc, "https://schema.org")
// Now doc has @context field
func TraverseTyped ¶ added in v0.0.7
func TraverseTyped[T any](c *CouchDBService, opts TraversalOptions) ([]T, error)
TraverseTyped performs typed graph traversal using generics. This provides compile-time type safety for traversal results.
Type Parameter:
- T: Expected document type
Parameters:
- opts: TraversalOptions configuration
Returns:
- []T: Slice of traversed documents of type T
- error: Traversal or parsing errors
Example Usage:
type Container struct {
ID string `json:"_id"`
Name string `json:"name"`
HostedOn string `json:"hostedOn"`
}
opts := TraversalOptions{
StartID: "host-123",
Depth: 1,
RelationField: "hostedOn",
Direction: "reverse",
}
containers, err := TraverseTyped[Container](service, opts)
for _, container := range containers {
fmt.Printf("Container: %s on %s\n", container.Name, container.HostedOn)
}
func ValidateJSONLD ¶ added in v0.0.7
ValidateJSONLD performs basic JSON-LD validation on a document. This checks for required JSON-LD fields and structure without full RDF processing.
Parameters:
- doc: Document to validate (map or struct)
- context: Expected @context value (empty string to skip context check)
Returns:
- error: Validation errors if document is invalid
Validation Checks:
- Document must have @context field (if context parameter provided)
- Document should have @type field for semantic typing
- @id field should be present for linked data
- Context must match expected value (if specified)
JSON-LD Requirements:
Minimum valid JSON-LD document:
{
"@context": "https://schema.org",
"@type": "Thing",
"@id": "https://example.com/thing/123"
}
Example Usage:
doc := map[string]interface{}{
"@context": "https://schema.org",
"@type": "SoftwareApplication",
"@id": "urn:container:nginx-1",
"name": "nginx",
}
err := ValidateJSONLD(doc, "https://schema.org")
if err != nil {
log.Printf("Invalid JSON-LD: %v", err)
return
}
Types ¶
type Binding ¶ added in v0.0.6
type Binding struct {
Name string `xml:"name,attr"`
Uri *string `xml:"uri,omitempty"`
Literal *string `xml:"literal,omitempty"`
}
Binding represents a variable binding in a SPARQL XML result. The value can be either a URI or a literal.
type BulkDeleteDoc ¶ added in v0.0.7
type BulkDeleteDoc struct {
ID string `json:"_id"` // Document ID
Rev string `json:"_rev"` // Current revision
Deleted bool `json:"_deleted"` // Deletion flag (must be true)
}
BulkDeleteDoc represents a document to be deleted in a bulk operation. Bulk deletions require the document ID, current revision, and deleted flag.
Fields:
- ID: Document identifier to delete
- Rev: Current document revision for conflict detection
- Deleted: Must be true to indicate deletion
Example Usage:
deleteOps := []BulkDeleteDoc{
{ID: "doc1", Rev: "1-abc", Deleted: true},
{ID: "doc2", Rev: "2-def", Deleted: true},
}
results, _ := service.BulkDeleteDocuments(deleteOps)
type BulkResult ¶ added in v0.0.7
type BulkResult struct {
ID string `json:"id"` // Document ID
Rev string `json:"rev,omitempty"` // New revision (success)
Error string `json:"error,omitempty"` // Error type (failure)
Reason string `json:"reason,omitempty"` // Error description (failure)
OK bool `json:"ok"` // Success indicator
}
BulkResult represents the result of a single document operation in a bulk request. Bulk operations return an array of results, one for each document processed.
Result Fields:
- ID: Document identifier
- Rev: New document revision (on success)
- Error: Error type if operation failed
- Reason: Error description if operation failed
- OK: Boolean indicating success or failure
Success Indicators:
- OK=true: Operation succeeded, Rev contains new revision
- OK=false: Operation failed, Error and Reason explain why
Common Errors:
- "conflict": Document revision conflict
- "forbidden": Insufficient permissions
- "not_found": Document doesn't exist (for updates)
Example Usage:
results, _ := service.BulkSaveDocuments(docs)
for _, result := range results {
if result.OK {
fmt.Printf("Saved %s with rev %s\n", result.ID, result.Rev)
} else {
fmt.Printf("Failed %s: %s - %s\n", result.ID, result.Error, result.Reason)
}
}
func BulkUpsert ¶ added in v0.0.7
func BulkUpsert[T any](c *CouchDBService, docs []T, getIDFunc func(T) string) ([]BulkResult, error)
BulkUpsert performs bulk upsert (insert or update) operations. Documents are inserted if they don't exist, updated if they do.
Parameters:
- docs: Slice of documents to upsert
- getIDFunc: Function to extract document ID from each document
Returns:
- []BulkResult: Result for each operation
- error: Request execution errors
Upsert Process:
- For each document, extract ID using getIDFunc
- Check if document exists and get current revision
- Update document with current revision (if exists)
- Perform bulk save operation
Example Usage:
containers := []Container{
{ID: "c1", Name: "nginx", Status: "running"},
{ID: "c2", Name: "redis", Status: "running"},
}
results, err := BulkUpsert(service, containers, func(c Container) string {
return c.ID
})
if err != nil {
log.Printf("Bulk upsert failed: %v", err)
return
}
for _, result := range results {
if result.OK {
fmt.Printf("Upserted %s\n", result.ID)
}
}
type Change ¶ added in v0.0.7
type Change struct {
Seq string `json:"seq"` // Sequence ID
ID string `json:"id"` // Document ID
Changes []ChangeRev `json:"changes"` // Revision changes
Deleted bool `json:"deleted,omitempty"` // Deletion flag
Doc json.RawMessage `json:"doc,omitempty"` // Document content
}
Change represents a single change notification from the changes feed. Each change indicates a document modification (create, update, or delete).
Change Fields:
- Seq: Change sequence identifier (for resuming)
- ID: Document identifier that changed
- Changes: Array of revision changes
- Deleted: True if document was deleted
- Doc: Full document content (if IncludeDocs=true)
Sequence Numbers:
Sequence IDs are opaque strings that uniquely identify each change:
- Used to resume the changes feed after interruption
- Monotonically increasing (newer changes have higher sequences)
- Format varies by CouchDB version and configuration
Example Usage:
service.ListenChanges(opts, func(change Change) {
if change.Deleted {
fmt.Printf("Document %s was deleted\n", change.ID)
} else {
fmt.Printf("Document %s updated to rev %s\n",
change.ID, change.Changes[0].Rev)
if change.Doc != nil {
var doc map[string]interface{}
json.Unmarshal(change.Doc, &doc)
// Process document
}
}
})
type ChangeRev ¶ added in v0.0.7
type ChangeRev struct {
Rev string `json:"rev"` // Revision ID
}
ChangeRev represents a document revision in a change notification. Each change can include multiple revisions in conflict scenarios.
Fields:
- Rev: Document revision identifier
Example Usage:
for _, change := range changes {
for _, rev := range change.Changes {
fmt.Printf("Revision: %s\n", rev.Rev)
}
}
type ChangesFeedOptions ¶ added in v0.0.7
type ChangesFeedOptions struct {
Since string `json:"since,omitempty"` // Starting sequence
Feed string `json:"feed,omitempty"` // Feed type
Filter string `json:"filter,omitempty"` // Filter function
IncludeDocs bool `json:"include_docs,omitempty"` // Include documents
Heartbeat int `json:"heartbeat,omitempty"` // Heartbeat interval (ms)
Timeout int `json:"timeout,omitempty"` // Request timeout (ms)
Limit int `json:"limit,omitempty"` // Maximum changes
Descending bool `json:"descending,omitempty"` // Reverse order
Selector map[string]interface{} `json:"selector,omitempty"` // Mango selector filter
}
ChangesFeedOptions configures the CouchDB changes feed for real-time updates. The changes feed provides notification of all document modifications in the database.
Feed Types:
- "normal": Return all changes since sequence and close
- "longpoll": Wait for changes, return when available, close
- "continuous": Keep connection open, stream changes indefinitely
Configuration Options:
- Since: Starting sequence ("now", "0", or specific sequence ID)
- Feed: Feed type for change delivery
- Filter: Server-side filter function name
- IncludeDocs: Include full document content with changes
- Heartbeat: Milliseconds between heartbeat signals
- Timeout: Request timeout in milliseconds
- Limit: Maximum number of changes to return
- Descending: Reverse chronological order
- Selector: Mango selector for filtering changes
Example Usage:
// Continuous monitoring from current point
opts := ChangesFeedOptions{
Since: "now",
Feed: "continuous",
IncludeDocs: true,
Heartbeat: 60000,
Selector: map[string]interface{}{
"type": "container",
},
}
service.ListenChanges(opts, func(change Change) {
fmt.Printf("Document %s changed\n", change.ID)
})
type Command ¶ added in v0.0.6
Command represents a single BaseX command with optional name attribute. The XMLName field determines the command type (e.g., "create-db", "info-db"). Common commands include: create-db, info-db, drop-db, open, close, add, delete.
Example usage:
cmd := Command{XMLName: xml.Name{Local: "create-db"}, Name: "mydb"}
type Commands ¶ added in v0.0.6
Commands represents a collection of BaseX commands for batch execution. It is marshaled to XML format for the BaseX REST API.
Example XML:
<commands>
  <create-db name="mydb"/>
  <info-db/>
</commands>
type ContextID ¶
type ContextID struct {
Type string `json:"type"` // Context type (uri, literal, bnode)
Value string `json:"value"` // Context value or identifier
}
ContextID represents a context identifier in GraphDB responses. This structure captures context information for RDF graph operations, providing type and value information for graph context management.
Context Types:
- "uri": Named graph URI contexts
- "literal": Literal value contexts
- "bnode": Blank node contexts
Usage in GraphDB:
Context IDs are used to identify and manage named graphs within GraphDB repositories, enabling graph-level operations and queries.
type CouchDBConfig ¶ added in v0.0.7
type CouchDBConfig struct {
URL string // CouchDB server URL
Database string // Database name
Username string // Authentication username
Password string // Authentication password
MaxConnections int // Maximum number of concurrent connections
Timeout int // Request timeout in milliseconds
CreateIfMissing bool // Create database if it doesn't exist
TLS *TLSConfig // Optional TLS configuration
// Ziti zero-trust networking support
// If ZitiIdentityFile is provided, connection will use Ziti overlay network
ZitiIdentityFile string // Path to Ziti identity file for zero-trust networking
ZitiServiceName string // Ziti service name for the CouchDB instance
HTTPClient *http.Client // Custom HTTP client (e.g., with Ziti transport)
}
CouchDBConfig provides generic CouchDB connection configuration. This configuration structure supports advanced connection options including TLS security, connection pooling, and automatic database creation.
Configuration Options:
- URL: CouchDB server URL (e.g., "http://localhost:5984")
- Database: Target database name for operations
- Username: Authentication username for CouchDB access
- Password: Authentication password for secure connections
- MaxConnections: Connection pool size for concurrent operations
- Timeout: Request timeout in milliseconds
- CreateIfMissing: Automatically create database if it doesn't exist
- TLS: Optional TLS/SSL configuration for secure connections
Example Usage:
config := &CouchDBConfig{
URL: "https://couchdb.example.com:6984",
Database: "graphium",
Username: "admin",
Password: "secure-password",
MaxConnections: 100,
Timeout: 30000,
CreateIfMissing: true,
TLS: &TLSConfig{
Enabled: true,
CAFile: "/path/to/ca.crt",
CertFile: "/path/to/client.crt",
KeyFile: "/path/to/client.key",
},
}
type CouchDBError ¶ added in v0.0.7
type CouchDBError struct {
StatusCode int `json:"status_code"` // HTTP status code
ErrorType string `json:"error"` // Error type identifier
Reason string `json:"reason"` // Human-readable error description
}
CouchDBError represents a CouchDB-specific error with HTTP status information. This error type provides structured error handling with helper methods for common CouchDB error conditions like conflicts, not found, and authorization.
Error Fields:
- StatusCode: HTTP status code from CouchDB response
- ErrorType: Error type identifier (e.g., "conflict", "not_found")
- Reason: Human-readable error description
Common Error Types:
- 404 Not Found: Document or database doesn't exist
- 409 Conflict: Document revision conflict (MVCC)
- 401 Unauthorized: Authentication required or failed
- 403 Forbidden: Insufficient permissions
- 412 Precondition Failed: Missing or invalid revision
Example Usage:
_, err := service.GetDocument("missing-doc")
if err != nil {
if couchErr, ok := err.(*CouchDBError); ok {
if couchErr.IsNotFound() {
fmt.Println("Document not found")
} else if couchErr.IsConflict() {
fmt.Println("Revision conflict - retry needed")
}
}
}
func (*CouchDBError) Error ¶ added in v0.0.7
func (e *CouchDBError) Error() string
Error implements the error interface for CouchDBError. Returns a formatted error message containing status code, error type, and reason.
func (*CouchDBError) IsConflict ¶ added in v0.0.7
func (e *CouchDBError) IsConflict() bool
IsConflict checks if the error is a document conflict error (HTTP 409). Conflicts occur when attempting to update a document with an outdated revision, indicating that another process has modified the document since it was retrieved.
Returns:
- bool: true if this is a revision conflict error, false otherwise
Usage:
if couchErr.IsConflict() {
// Retrieve latest version and retry
latestDoc, _ := service.GetDocument(docID)
// Merge changes and retry save
}
func (*CouchDBError) IsNotFound ¶ added in v0.0.7
func (e *CouchDBError) IsNotFound() bool
IsNotFound checks if the error is a not found error (HTTP 404). Not found errors occur when attempting to access a document or database that doesn't exist in CouchDB.
Returns:
- bool: true if this is a not found error, false otherwise
Usage:
if couchErr.IsNotFound() {
// Create new document instead
service.SaveDocument(newDoc)
}
func (*CouchDBError) IsUnauthorized ¶ added in v0.0.7
func (e *CouchDBError) IsUnauthorized() bool
IsUnauthorized checks if the error is an authorization error (HTTP 401 or 403). Authorization errors occur when authentication fails or the authenticated user lacks sufficient permissions for the requested operation.
Returns:
- bool: true if this is an authorization error, false otherwise
Usage:
if couchErr.IsUnauthorized() {
// Check credentials or request elevated permissions
log.Println("Authentication or authorization failed")
}
type CouchDBResponse ¶ added in v0.0.7
CouchDBResponse represents a successful CouchDB operation response. This is the generic response structure used by SaveDocument and related operations.
Fields:
- OK: Boolean indicating success (typically true)
- ID: Document identifier
- Rev: New document revision
func SaveDocument ¶ added in v0.0.7
func SaveDocument[T any](c *CouchDBService, doc T) (*CouchDBResponse, error)
SaveDocument saves a generic document to CouchDB with automatic ID and revision management. This function uses Go generics to provide type-safe document operations for any struct type.
Type Parameter:
- T: Any struct type that represents a CouchDB document
Document Requirements:
- Document should have "_id" and "_rev" fields (optional, can use struct tags)
- Struct fields should use `json:` tags for proper serialization
- Use `json:"_id"` and `json:"_rev"` tags for ID and revision fields
ID and Revision Handling:
- If document has no ID, CouchDB generates a UUID automatically
- If document has ID but no revision, creates new document or updates existing
- If document has both ID and revision, updates the specific version
- Returns new revision for subsequent updates
Parameters:
- c: CouchDBService instance
- doc: Document to save (any struct type)
Returns:
- *CouchDBResponse: Contains document ID and new revision
- error: Save failures, conflicts, or validation errors
Example Usage:
type Container struct {
ID string `json:"_id,omitempty"`
Rev string `json:"_rev,omitempty"`
Type string `json:"@type"`
Name string `json:"name"`
Status string `json:"status"`
HostedOn string `json:"hostedOn"`
}
container := Container{
ID: "container-123",
Type: "SoftwareApplication",
Name: "nginx",
Status: "running",
HostedOn: "host-456",
}
response, err := SaveDocument(service, container)
if err != nil {
log.Printf("Save failed: %v", err)
return
}
fmt.Printf("Saved with revision: %s\n", response.Rev)
type CouchDBService ¶ added in v0.0.2
type CouchDBService struct {
// contains filtered or unexported fields
}
CouchDBService encapsulates CouchDB client functionality for flow processing operations. This service provides a high-level abstraction over CouchDB operations with specialized support for flow document management, state tracking, and audit trail maintenance.
Service Components:
- client: Kivik CouchDB client for database connectivity
- database: Active database handle for document operations
- dbName: Database name for configuration and logging purposes
Connection Management:
The service maintains persistent connections to CouchDB with automatic database creation and proper resource management. Connections are pooled internally by the Kivik driver for optimal performance.
Transaction Support:
CouchDB's MVCC model provides optimistic concurrency control through document revisions. The service handles revision management automatically for conflict resolution and consistent updates.
Error Handling:
Implements comprehensive error handling with wrapped errors for debugging and appropriate HTTP status code interpretation for CouchDB-specific conditions.
func NewCouchDBService ¶ added in v0.0.2
func NewCouchDBService(config eve.FlowConfig) (*CouchDBService, error)
NewCouchDBService creates a new CouchDB service instance for flow processing operations. This constructor establishes a persistent connection to CouchDB and configures the service for flow document management with proper database initialization.
Service Initialization:
- Creates CouchDB client with provided configuration
- Verifies or creates the target database
- Establishes database handle for operations
- Returns configured service ready for use
Parameters:
- config: FlowConfig containing CouchDB connection details and database name
Returns:
- *CouchDBService: Configured service instance for flow operations
- error: Connection, authentication, or database creation errors
Configuration Requirements:
The FlowConfig must contain:
- CouchDBURL: Complete connection URL with authentication
- DatabaseName: Target database for flow document storage
Connection Management:
The service maintains a persistent connection to CouchDB with automatic reconnection and connection pooling handled by the Kivik driver. Connections should be properly closed when the service is no longer needed.
Database Setup:
The constructor automatically creates the target database if it doesn't exist, enabling immediate use without manual database provisioning. This behavior is suitable for development and can be controlled in production environments.
Error Conditions:
- Invalid CouchDB URL or authentication credentials
- Network connectivity issues to CouchDB server
- Insufficient permissions for database creation
- CouchDB server errors or unavailability
Example Usage:
config := eve.FlowConfig{
CouchDBURL: "http://admin:password@localhost:5984",
DatabaseName: "flow_processes",
}
service, err := NewCouchDBService(config)
if err != nil {
log.Fatal("Failed to create CouchDB service:", err)
}
defer service.Close()
// Use service for flow operations
response, err := service.SaveDocument(flowDocument)
Resource Management:
Services should be closed when no longer needed to release database connections and prevent resource leaks. Use defer statements or proper lifecycle management in long-running applications.
Concurrent Usage:
CouchDBService instances are safe for concurrent use across multiple goroutines. The underlying Kivik client handles connection pooling and thread safety automatically.
func NewCouchDBServiceFromConfig ¶ added in v0.0.7
func NewCouchDBServiceFromConfig(config CouchDBConfig) (*CouchDBService, error)
NewCouchDBServiceFromConfig creates a new CouchDB service from generic configuration. This constructor provides more flexibility than NewCouchDBService by supporting advanced configuration options including TLS, timeouts, and connection pooling.
Parameters:
- config: CouchDBConfig with connection details and options
Returns:
- *CouchDBService: Configured service instance
- error: Connection, authentication, or database creation errors
Configuration Features:
- Custom connection URL and database name
- Optional TLS/SSL configuration for secure connections
- Connection timeout settings
- Automatic database creation
- Flexible authentication options
Example Usage:
config := CouchDBConfig{
URL: "https://couchdb.example.com:6984",
Database: "graphium",
Username: "admin",
Password: "secure-password",
Timeout: 30000,
CreateIfMissing: true,
}
service, err := NewCouchDBServiceFromConfig(config)
if err != nil {
log.Fatal("Failed to create service:", err)
}
defer service.Close()
// Use service for operations
response, _ := service.SaveDocument(myDocument)
func (*CouchDBService) BulkDeleteDocuments ¶ added in v0.0.7
func (c *CouchDBService) BulkDeleteDocuments(docs []BulkDeleteDoc) ([]BulkResult, error)
BulkDeleteDocuments deletes multiple documents in a single database operation. This is more efficient than individual delete operations for batch deletions.
Parameters:
- docs: Slice of BulkDeleteDoc with ID, Rev, and Deleted=true
Returns:
- []BulkResult: Result for each deletion (success or error)
- error: Request execution errors
Deletion Requirements:
- Each document must have _id and _rev fields
- _deleted field must be set to true
- Revision must match current document (MVCC conflict detection)
Example Usage:
// Get documents to delete with their current revisions
container1, _ := GetDocument[Container](service, "c1")
container2, _ := GetDocument[Container](service, "c2")
deleteOps := []BulkDeleteDoc{
{ID: container1.ID, Rev: container1.Rev, Deleted: true},
{ID: container2.ID, Rev: container2.Rev, Deleted: true},
}
results, err := service.BulkDeleteDocuments(deleteOps)
if err != nil {
log.Printf("Bulk delete failed: %v", err)
return
}
for _, result := range results {
if result.OK {
fmt.Printf("Deleted %s\n", result.ID)
} else {
fmt.Printf("Failed to delete %s: %s\n", result.ID, result.Reason)
}
}
func (*CouchDBService) BulkSaveDocuments ¶ added in v0.0.7
func (c *CouchDBService) BulkSaveDocuments(docs []interface{}) ([]BulkResult, error)
BulkSaveDocuments saves multiple documents in a single database operation. Bulk operations significantly improve performance when saving many documents by reducing network round trips and database overhead.
Parameters:
- docs: Slice of documents to save (any JSON-serializable type)
Returns:
- []BulkResult: Result for each document (success or error)
- error: Request execution errors (not individual document errors)
Document Requirements:
- Documents can have _id field (explicit ID) or let CouchDB generate UUID
- Documents with _rev field are updated, without _rev are created
- Each document is processed independently with individual success/failure
Result Handling:
Each BulkResult indicates success or failure for one document:
- OK=true: Document saved successfully, Rev contains new revision
- OK=false: Save failed, Error and Reason explain why
Common Errors:
- "conflict": Document revision mismatch (concurrent modification)
- "forbidden": Insufficient permissions for document
- "invalid": Document validation failed
Performance:
- Single HTTP request for all documents
- Transactional consistency within the bulk operation
- Suitable for batch imports and synchronization
Example Usage:
type Container struct {
ID string `json:"_id,omitempty"`
Name string `json:"name"`
Status string `json:"status"`
}
containers := []interface{}{
Container{ID: "c1", Name: "nginx", Status: "running"},
Container{ID: "c2", Name: "redis", Status: "running"},
Container{ID: "c3", Name: "postgres", Status: "stopped"},
}
results, err := service.BulkSaveDocuments(containers)
if err != nil {
log.Printf("Bulk save failed: %v", err)
return
}
successCount := 0
for _, result := range results {
if result.OK {
fmt.Printf("Saved %s with rev %s\n", result.ID, result.Rev)
successCount++
} else {
fmt.Printf("Failed %s: %s - %s\n", result.ID, result.Error, result.Reason)
}
}
fmt.Printf("Successfully saved %d/%d documents\n", successCount, len(results))
func (*CouchDBService) Close ¶ added in v0.0.2
func (c *CouchDBService) Close() error
Close gracefully shuts down the CouchDB service and releases resources. This method ensures proper cleanup of database connections and resources to prevent connection leaks and maintain optimal resource utilization.
Cleanup Operations:
- Closes active database connections
- Releases connection pool resources
- Terminates background operations
- Ensures graceful service shutdown
Returns:
- error: Connection closure or cleanup errors
Resource Management:
Proper closure is essential for:
- Preventing connection leaks in long-running applications
- Maintaining optimal database connection pools
- Ensuring clean application shutdown
- Meeting resource management best practices
Usage Pattern:
Always defer Close() immediately after service creation:
service, err := NewCouchDBService(config)
if err != nil {
return err
}
defer service.Close()
Error Handling:
Connection closure errors are typically non-critical but should be logged for monitoring and debugging purposes in production environments.
func (*CouchDBService) CompactDatabase ¶ added in v0.0.7
func (c *CouchDBService) CompactDatabase() error
CompactDatabase triggers database compaction. Compaction reclaims disk space by removing old document revisions and deleted documents.
Returns:
- error: Compaction request errors
Compaction Process:
- Removes old document revisions beyond the revision limit
- Purges deleted documents
- Rebuilds B-tree indexes
- Reclaims disk space
- Runs asynchronously in the background
Performance Impact:
- Compaction is I/O intensive
- May impact database performance during compaction
- Recommended during low-traffic periods
- Monitor with GetDatabaseInfo().CompactRunning
Example Usage:
err := service.CompactDatabase()
if err != nil {
log.Printf("Failed to start compaction: %v", err)
return
}
fmt.Println("Compaction started")
// Monitor compaction progress
for {
info, _ := service.GetDatabaseInfo()
if !info.CompactRunning {
fmt.Println("Compaction completed")
break
}
time.Sleep(10 * time.Second)
}
func (*CouchDBService) Count ¶ added in v0.0.7
func (c *CouchDBService) Count(selector map[string]interface{}) (int, error)
Count returns the count of documents matching the selector. This is a convenience method that executes a query and returns the count.
Parameters:
- selector: Mango selector (same format as MangoQuery.Selector)
Returns:
- int: Number of matching documents
- error: Query execution errors
Example Usage:
selector := map[string]interface{}{
"status": "running",
"@type": "SoftwareApplication",
}
count, err := service.Count(selector)
if err != nil {
    log.Printf("Count failed: %v", err)
    return
}
fmt.Printf("Found %d running containers\n", count)
func (*CouchDBService) CreateDesignDoc ¶ added in v0.0.7
func (c *CouchDBService) CreateDesignDoc(designDoc DesignDoc) error
CreateDesignDoc creates or updates a CouchDB design document containing views. Design documents contain MapReduce views for efficient querying and aggregation.
Design Document Structure:
Design documents must have IDs starting with "_design/":
- Valid: "_design/graphium"
- "graphium" (missing the prefix) is automatically prefixed to "_design/graphium"
Parameters:
- designDoc: DesignDoc structure containing ID, language, and views
Returns:
- error: Creation, update, or validation errors
Update Behavior:
If the design document already exists:
- Retrieves the current revision automatically
- Updates it with the new view definitions
- Preserves other design document fields
Example Usage:
designDoc := DesignDoc{
ID: "_design/graphium",
Language: "javascript",
Views: map[string]View{
"containers_by_host": {
Map: `function(doc) {
if (doc['@type'] === 'SoftwareApplication' && doc.hostedOn) {
emit(doc.hostedOn, {
name: doc.name,
status: doc.status
});
}
}`,
},
"container_count_by_host": {
Map: `function(doc) {
if (doc['@type'] === 'SoftwareApplication' && doc.hostedOn) {
emit(doc.hostedOn, 1);
}
}`,
Reduce: "_sum",
},
},
}
err := service.CreateDesignDoc(designDoc)
if err != nil {
log.Printf("Failed to create design doc: %v", err)
}
func (*CouchDBService) CreateIndex ¶ added in v0.0.7
func (c *CouchDBService) CreateIndex(index Index) error
CreateIndex creates a new index in CouchDB for query optimization. Indexes improve the performance of Mango queries by maintaining sorted structures for frequently queried fields.
Index Types:
- "json": Standard index for Mango queries (default, recommended)
- "text": Full-text search index (requires special query syntax)
Parameters:
- index: Index structure with name, fields, and type
Returns:
- error: Index creation or validation errors
Index Naming:
If Name is empty, CouchDB generates a name automatically. Explicit names are recommended for management and debugging.
Compound Indexes:
Multiple fields create a compound index:
- Field order matters for query optimization
- Queries should use fields in the same order as the index
- Place the most selective field first for filtering
Index Usage:
Indexes are automatically used by Mango queries when:
- The query selector matches the indexed fields
- The field order in the query matches the index definition
An index can also be selected explicitly via UseIndex in MangoQuery.
Example Usage:
// Simple index for status field
index := Index{
Name: "status-index",
Fields: []string{"status"},
Type: "json",
}
if err := service.CreateIndex(index); err != nil {
log.Printf("Failed to create status index: %v", err)
}
// Compound index for common query pattern
index = Index{
Name: "status-location-index",
Fields: []string{"status", "location"},
Type: "json",
}
if err := service.CreateIndex(index); err != nil {
log.Printf("Failed to create compound index: %v", err)
}
// Index for type filtering
index = Index{
Name: "type-index",
Fields: []string{"@type"},
Type: "json",
}
if err := service.CreateIndex(index); err != nil {
log.Printf("Failed to create type index: %v", err)
}
func (*CouchDBService) DeleteDesignDoc ¶ added in v0.0.7
func (c *CouchDBService) DeleteDesignDoc(designName string) error
DeleteDesignDoc deletes a design document by name. This removes the design document and all its views.
Parameters:
- designName: Design document name (with or without "_design/" prefix)
Returns:
- error: Not found, conflict, or deletion errors
Example Usage:
err := service.DeleteDesignDoc("graphium")
if err != nil {
log.Printf("Failed to delete design doc: %v", err)
}
func (*CouchDBService) DeleteDocument ¶ added in v0.0.2
func (c *CouchDBService) DeleteDocument(id, rev string) error
DeleteDocument removes a flow process document from the database. This method performs document deletion with proper revision handling to ensure consistency with CouchDB's MVCC conflict resolution.
Deletion Process:
- Validates document existence and revision
- Executes deletion with specified revision
- Handles conflicts and concurrent modification scenarios
- Confirms successful deletion operation
Parameters:
- id: Document identifier to delete
- rev: Current document revision for conflict detection
Returns:
- error: Deletion failures, conflicts, or permission errors
Revision Requirements:
CouchDB requires the current document revision for deletion to prevent conflicts from concurrent modifications. The revision must match the current document state for successful deletion.
MVCC Conflict Handling:
If the provided revision doesn't match the current document revision:
- CouchDB returns a 409 Conflict error
- Applications should retrieve the latest revision and retry
- Alternatively, implement conflict resolution strategies
Error Conditions:
- Document not found (may have been deleted by another process)
- Revision conflict due to concurrent modifications
- Insufficient permissions for document deletion
- Database connectivity issues
Example Usage:
// Retrieve document to get current revision
doc, err := service.GetDocument("process-12345")
if err != nil {
log.Printf("Failed to get document: %v", err)
return
}
// Delete with current revision
err = service.DeleteDocument(doc.ID, doc.Rev)
if err != nil {
log.Printf("Deletion failed: %v", err)
return
}
fmt.Printf("Document %s deleted successfully\n", doc.ID)
Soft Deletion Alternative:
Consider implementing soft deletion for audit purposes:
- Mark documents as deleted instead of removing them
- Preserve audit trails and compliance data
- Enable recovery of accidentally deleted processes
- Maintain referential integrity with related data
Cleanup Considerations:
- Implement cleanup policies for old completed processes
- Consider archival strategies for long-term retention
- Monitor database size and performance impact
- Plan for backup and disaster recovery scenarios
func (*CouchDBService) DeleteIndex ¶ added in v0.0.7
func (c *CouchDBService) DeleteIndex(designDoc, indexName string) error
DeleteIndex deletes an index from the database. Special indexes (_all_docs, etc.) cannot be deleted.
Parameters:
- designDoc: Design document name containing the index
- indexName: Name of the index to delete
Returns:
- error: Deletion or not found errors
Index Identification:
To delete an index, you need both:
- Design document ID (e.g., "_design/a5f4711fc9448864a13c81dc71e660b524d7410c")
- Index name (e.g., "status-index")
Both can be obtained from ListIndexes().
Example Usage:
// List indexes to find the one to delete
indexes, _ := service.ListIndexes()
for _, idx := range indexes {
if idx.Name == "old-index" {
err := service.DeleteIndex(idx.DesignDoc, idx.Name)
if err != nil {
log.Printf("Failed to delete index: %v", err)
}
break
}
}
func (*CouchDBService) EnsureIndex ¶ added in v0.0.7
func (c *CouchDBService) EnsureIndex(index Index) (bool, error)
EnsureIndex creates an index if it doesn't already exist. This is a convenience method that checks for index existence before creation.
Parameters:
- index: Index structure with name, fields, and type
Returns:
- bool: true if index was created, false if it already existed
- error: Index creation or query errors
Index Existence Check:
Checks whether an index with the same fields already exists:
- Compares field lists (order matters)
- Returns false if an exact match is found
- Creates the index if no match is found
Example Usage:
index := Index{
Name: "status-index",
Fields: []string{"status"},
Type: "json",
}
created, err := service.EnsureIndex(index)
if err != nil {
log.Printf("Error ensuring index: %v", err)
return
}
if created {
fmt.Println("Index created successfully")
} else {
fmt.Println("Index already exists")
}
func (*CouchDBService) Find ¶ added in v0.0.7
func (c *CouchDBService) Find(query MangoQuery) ([]json.RawMessage, error)
Find executes a Mango query and returns results as json.RawMessage. Mango queries provide MongoDB-style declarative filtering without MapReduce views.
Parameters:
- query: MangoQuery structure with selector, fields, sort, and pagination
Returns:
- []json.RawMessage: Array of matching documents as raw JSON
- error: Query execution or parsing errors
Mango Query Language:
Supports MongoDB-style operators:
- $eq, $ne: Equality operators
- $gt, $gte, $lt, $lte: Comparison operators
- $and, $or, $not: Logical operators
- $in, $nin: Array membership
- $regex: Regular expression matching
- $exists: Field existence check
Index Usage:
For optimal performance, create indexes on queried fields:
index := Index{Name: "status-index", Fields: []string{"status"}, Type: "json"}
service.CreateIndex(index)
Example Usage:
// Find running containers in us-east
query := MangoQuery{
Selector: map[string]interface{}{
"$and": []interface{}{
map[string]interface{}{"status": "running"},
map[string]interface{}{"location": map[string]interface{}{
"$regex": "^us-east",
}},
},
},
Fields: []string{"_id", "name", "status"},
Sort: []map[string]string{
{"name": "asc"},
},
Limit: 100,
}
results, err := service.Find(query)
if err != nil {
log.Printf("Query failed: %v", err)
return
}
for _, result := range results {
var doc map[string]interface{}
json.Unmarshal(result, &doc)
fmt.Printf("Found: %+v\n", doc)
}
func (*CouchDBService) GetAllDocuments ¶ added in v0.0.2
func (c *CouchDBService) GetAllDocuments() ([]eve.FlowProcessDocument, error)
GetAllDocuments retrieves all flow process documents from the database. This method provides complete database enumeration for administrative purposes, reporting, and bulk operations on process documents.
Enumeration Process:
- Uses CouchDB's _all_docs view for efficient document listing
- Includes full document content with include_docs parameter
- Streams results to handle large datasets efficiently
- Returns complete array of all documents
Returns:
- []eve.FlowProcessDocument: Array containing all documents in the database
- error: Query execution, iteration, or parsing errors
Performance Characteristics:
- Efficient B-tree traversal through _all_docs view
- Memory usage scales with total number of documents
- Network bandwidth usage depends on document sizes
- Streaming processing prevents memory exhaustion
Use Cases:
- Administrative reporting and analytics
- Database backup and migration operations
- Bulk processing and data transformation
- System health monitoring and auditing
- Data export for external analysis
Error Conditions:
- Database connectivity issues during enumeration
- Memory limitations with very large databases
- Document parsing errors for corrupted data
- Permission restrictions on database access
Example Usage:
allDocs, err := service.GetAllDocuments()
if err != nil {
    log.Printf("Failed to retrieve all documents: %v", err)
    return
}
fmt.Printf("Total processes: %d\n", len(allDocs))

// Analyze state distribution
stateCount := make(map[eve.FlowProcessState]int)
for _, doc := range allDocs {
    stateCount[doc.State]++
}
for state, count := range stateCount {
    fmt.Printf("State %s: %d processes\n", state, count)
}
Memory Considerations:
Large databases may require pagination or streaming approaches:
- Consider implementing offset/limit parameters
- Use continuous processing for real-time scenarios
- Implement data export to files for very large datasets
Alternative Approaches:
For large databases, consider:
- Paginated retrieval with skip/limit parameters
- State-based filtering to reduce result sets
- Export functions for file-based processing
- Streaming APIs for real-time data processing
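The paginated approach above can be sketched as a loop over skip/limit pages. The `pageQuery` and `finder` types below are hypothetical stand-ins for the service's MangoQuery-based Find, and an in-memory fake shows the loop terminating on a short page:

```go
package main

import "fmt"

// pageQuery carries only the paging fields this sketch needs; the real
// service would use MangoQuery's Limit and Skip fields instead.
type pageQuery struct{ Limit, Skip int }

// finder abstracts the service's Find-style call (hypothetical interface).
type finder interface {
	Find(q pageQuery) ([]string, error)
}

// fetchAllPaged drains all results in fixed-size pages instead of loading
// the whole database at once; a short page signals the end of the data.
func fetchAllPaged(f finder, pageSize int) ([]string, error) {
	var all []string
	for skip := 0; ; skip += pageSize {
		page, err := f.Find(pageQuery{Limit: pageSize, Skip: skip})
		if err != nil {
			return nil, err
		}
		all = append(all, page...)
		if len(page) < pageSize {
			return all, nil
		}
	}
}

// fakeDB serves seven documents so the loop above needs three pages of three.
type fakeDB struct{ docs []string }

func (f fakeDB) Find(q pageQuery) ([]string, error) {
	if q.Skip >= len(f.docs) {
		return nil, nil
	}
	end := q.Skip + q.Limit
	if end > len(f.docs) {
		end = len(f.docs)
	}
	return f.docs[q.Skip:end], nil
}

func main() {
	all, err := fetchAllPaged(fakeDB{docs: []string{"a", "b", "c", "d", "e", "f", "g"}}, 3)
	fmt.Println(len(all), err)
}
```

Note that skip-based pagination degrades on very large offsets; bookmark-based pagination (CouchDB's Mango `bookmark` field) scales better when available.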
func (*CouchDBService) GetAllGenericDocuments ¶ added in v0.0.7
func (c *CouchDBService) GetAllGenericDocuments(docType string, result interface{}) error
GetAllGenericDocuments retrieves all documents and decodes them into the provided result slice. This function provides untyped access to database documents, typically as a slice of maps.
Parameters:
- docType: Optional type filter (checks "@type" field), empty string for all
- result: Pointer to slice for storing results (typically *[]map[string]interface{})
Returns:
- error: Query execution or parsing errors
Example Usage:
var docs []map[string]interface{}
err := service.GetAllGenericDocuments("SoftwareApplication", &docs)
if err != nil {
    log.Printf("Query failed: %v", err)
    return
}
for _, doc := range docs {
    fmt.Printf("Document: %+v\n", doc)
}
func (*CouchDBService) GetChanges ¶ added in v0.0.7
func (c *CouchDBService) GetChanges(opts ChangesFeedOptions) ([]Change, string, error)
GetChanges retrieves changes without continuous listening. This is useful for one-time synchronization or batch processing.
Parameters:
- opts: ChangesFeedOptions (should use feed="normal" or omit)
Returns:
- []Change: Slice of change events
- string: Last sequence ID for resuming
- error: Query or parsing errors
Usage Pattern:
For polling-based synchronization:
1. Call GetChanges with Since=lastSeq
2. Process returned changes
3. Store returned sequence for next call
4. Repeat periodically
Example Usage:
lastSeq := "0"
for {
    opts := ChangesFeedOptions{
        Since:       lastSeq,
        Feed:        "normal",
        IncludeDocs: true,
        Limit:       100,
    }
    changes, newSeq, err := service.GetChanges(opts)
    if err != nil {
        log.Printf("Error getting changes: %v", err)
        time.Sleep(10 * time.Second)
        continue
    }
    for _, change := range changes {
        fmt.Printf("Processing change: %s\n", change.ID)
        // Process change
    }
    lastSeq = newSeq
    time.Sleep(5 * time.Second)
}
func (*CouchDBService) GetDatabaseInfo ¶ added in v0.0.7
func (c *CouchDBService) GetDatabaseInfo() (*DatabaseInfo, error)
GetDatabaseInfo retrieves metadata and statistics about the database. This provides information useful for monitoring, capacity planning, and administration.
Returns:
- *DatabaseInfo: Database metadata and statistics
- error: Query or connection errors
Information Provided:
- Document count (active and deleted)
- Database size (disk and data)
- Update sequence for change tracking
- Compaction status
- Instance start time
Example Usage:
info, err := service.GetDatabaseInfo()
if err != nil {
    log.Printf("Failed to get database info: %v", err)
    return
}
fmt.Printf("Database: %s\n", info.DBName)
fmt.Printf("Documents: %d active, %d deleted\n",
    info.DocCount, info.DocDelCount)
fmt.Printf("Size: %.2f MB (disk), %.2f MB (data)\n",
    float64(info.DiskSize)/1024/1024,
    float64(info.DataSize)/1024/1024)
fmt.Printf("Compaction running: %v\n", info.CompactRunning)
func (*CouchDBService) GetDependencies ¶ added in v0.0.7
func (c *CouchDBService) GetDependencies(id string, relationFields []string) (map[string]json.RawMessage, error)
GetDependencies finds all documents referenced by the given document. This performs a forward lookup across the specified relationship fields to resolve the referenced documents.
Parameters:
- id: Document ID to find dependencies for
- relationFields: Slice of field names to check for references
Returns:
- map[string]json.RawMessage: Map of field name to referenced document
- error: Query or execution errors
Multi-Field Support:
Checks multiple relationship fields in the document:
- "hostedOn": Find the host this container runs on
- "dependsOn": Find services this service depends on
- "network": Find the network configuration
Example Usage:
// Find all dependencies for a container
dependencies, err := service.GetDependencies("container-456", []string{
    "hostedOn",
    "dependsOn",
    "network",
})
if err != nil {
    log.Printf("Failed to get dependencies: %v", err)
    return
}
if hostDoc, ok := dependencies["hostedOn"]; ok {
    var host map[string]interface{}
    json.Unmarshal(hostDoc, &host)
    fmt.Printf("Hosted on: %s\n", host["name"])
}
if netDoc, ok := dependencies["network"]; ok {
    var network map[string]interface{}
    json.Unmarshal(netDoc, &network)
    fmt.Printf("Network: %s\n", network["name"])
}
func (*CouchDBService) GetDependents ¶ added in v0.0.7
func (c *CouchDBService) GetDependents(id string, relationField string) ([]json.RawMessage, error)
GetDependents finds all documents that reference the given document ID. This performs a reverse lookup to find documents with the specified relationship.
Parameters:
- id: Document ID to find dependents for
- relationField: Field name containing the reference
Returns:
- []json.RawMessage: Slice of dependent documents as raw JSON
- error: Query or execution errors
Query Strategy:
Uses a Mango query to find documents where relationField equals the given ID:
- Efficient with proper indexing on relationField
- Returns all matching documents regardless of type
- Suitable for finding all containers on a host
Example Usage:
// Find all containers running on host-123
containers, err := service.GetDependents("host-123", "hostedOn")
if err != nil {
    log.Printf("Failed to get dependents: %v", err)
    return
}
fmt.Printf("Found %d containers on this host\n", len(containers))
for _, container := range containers {
    var c map[string]interface{}
    json.Unmarshal(container, &c)
    fmt.Printf("  - %s\n", c["name"])
}
func (*CouchDBService) GetDesignDoc ¶ added in v0.0.7
func (c *CouchDBService) GetDesignDoc(designName string) (*DesignDoc, error)
GetDesignDoc retrieves a design document by name. Returns the complete design document including all views.
Parameters:
- designName: Design document name (with or without "_design/" prefix)
Returns:
- *DesignDoc: Complete design document structure
- error: Not found or retrieval errors
Example Usage:
doc, err := service.GetDesignDoc("graphium")
if err != nil {
    log.Printf("Design doc not found: %v", err)
    return
}
fmt.Printf("Design doc %s has %d views\n", doc.ID, len(doc.Views))
for viewName := range doc.Views {
    fmt.Printf("  - %s\n", viewName)
}
func (*CouchDBService) GetDocument ¶ added in v0.0.2
func (c *CouchDBService) GetDocument(id string) (*eve.FlowProcessDocument, error)
GetDocument retrieves a flow process document by ID with complete metadata. This method provides access to stored flow documents including all state history, metadata, and current processing status information.
Retrieval Process:
- Queries CouchDB for document by ID
- Handles not-found conditions gracefully
- Deserializes document into FlowProcessDocument structure
- Returns complete document with all fields and history
Parameters:
- id: Document identifier (typically same as ProcessID)
Returns:
- *eve.FlowProcessDocument: Complete document with all fields and history
- error: Document not found, access, or parsing errors
Document Structure:
The returned document contains:
- Process identification and metadata
- Current state and processing information
- Complete audit trail of state changes
- Timestamps for creation and last update
- Error information for failed processes
Error Handling:
- HTTP 404: Document not found (explicit error message)
- Other HTTP errors: Database connectivity or permission issues
- Parsing errors: Document corruption or schema changes
Example Usage:
doc, err := service.GetDocument("process-12345")
if err != nil {
    if strings.Contains(err.Error(), "not found") {
        fmt.Println("Process not found")
        return
    }
    log.Printf("Retrieval error: %v", err)
    return
}
fmt.Printf("Process %s is in state: %s\n", doc.ProcessID, doc.State)
fmt.Printf("History has %d entries\n", len(doc.History))
Data Consistency:
Retrieved documents reflect the most recent committed state in CouchDB, ensuring consistency with MVCC guarantees. Concurrent modifications are handled through revision-based conflict detection.
Performance Considerations:
- Single document retrieval is efficient with CouchDB's B-tree indexing
- Document size affects retrieval time (consider history size)
- Network latency impacts response time for remote CouchDB instances
- Frequent access patterns benefit from CouchDB's caching mechanisms
func (*CouchDBService) GetDocumentsByState ¶ added in v0.0.2
func (c *CouchDBService) GetDocumentsByState(state eve.FlowProcessState) ([]eve.FlowProcessDocument, error)
GetDocumentsByState retrieves all flow process documents in a specific state. This method uses CouchDB's Mango query language to filter documents by processing state, enabling efficient monitoring and management of workflows.
Query Processing:
- Constructs Mango selector for state filtering
- Executes query against CouchDB database
- Iterates through results with memory-efficient processing
- Returns array of matching documents
Parameters:
- state: FlowProcessState to filter by (started, running, successful, failed)
Returns:
- []eve.FlowProcessDocument: Array of documents matching the specified state
- error: Query execution, iteration, or parsing errors
Mango Query Language:
Uses CouchDB's native Mango query syntax for efficient server-side filtering:
- Selector-based filtering on document fields
- Index utilization for optimal performance
- Server-side processing to reduce network traffic
State Filtering:
Supports all FlowProcessState values:
- StateStarted: Processes that have been initiated
- StateRunning: Processes currently executing
- StateSuccessful: Successfully completed processes
- StateFailed: Processes that encountered errors
Performance Optimization:
- Server-side filtering reduces network bandwidth
- Index on state field improves query performance
- Streaming results prevent memory overload with large datasets
- Efficient iteration through Kivik's row interface
Error Conditions:
- Database connectivity issues during query
- Invalid state values or query syntax
- Document parsing errors for corrupted data
- Memory limitations with extremely large result sets
Example Usage:
// Get all failed processes for error analysis
failedDocs, err := service.GetDocumentsByState(eve.StateFailed)
if err != nil {
    log.Printf("Query failed: %v", err)
    return
}
fmt.Printf("Found %d failed processes\n", len(failedDocs))
for _, doc := range failedDocs {
    fmt.Printf("Process %s failed: %s\n", doc.ProcessID, doc.ErrorMsg)
}
Index Recommendations:
For optimal performance, create a CouchDB index on the state field:
```json
{
  "index": {
    "fields": ["state"]
  },
  "name": "state-index",
  "type": "json"
}
```
Use Cases:
- Monitoring dashboard for process states
- Error analysis and debugging workflows
- Capacity planning and load analysis
- Automated cleanup of completed processes
- SLA monitoring and reporting
func (*CouchDBService) GetGenericDocument ¶ added in v0.0.7
func (c *CouchDBService) GetGenericDocument(id string, result interface{}) error
GetGenericDocument retrieves a document into the provided result pointer for maximum flexibility. This function does not use generics; the document is typically decoded into a map[string]interface{}.
Parameters:
- id: Document identifier to retrieve
- result: Pointer to store the document (typically *map[string]interface{})
Returns:
- error: Document not found or retrieval errors
Example Usage:
var doc map[string]interface{}
err := service.GetGenericDocument("mydoc", &doc)
if err != nil {
    log.Printf("Error: %v", err)
    return
}
fmt.Printf("Document: %+v\n", doc)
func (*CouchDBService) GetLastSequence ¶ added in v0.0.7
func (c *CouchDBService) GetLastSequence() (string, error)
GetLastSequence retrieves the current database sequence ID. Useful for starting change feeds from the current point.
Returns:
- string: Current sequence ID
- error: Database query errors
Example Usage:
seq, err := service.GetLastSequence()
if err != nil {
    log.Printf("Failed to get sequence: %v", err)
    return
}
opts := ChangesFeedOptions{
    Since: seq,
    Feed:  "continuous",
}
service.ListenChanges(opts, handler)
func (*CouchDBService) GetRelationshipGraph ¶ added in v0.0.7
func (c *CouchDBService) GetRelationshipGraph(startID string, relationField string, maxDepth int) (*RelationshipGraph, error)
GetRelationshipGraph builds a complete relationship graph for a document. This discovers all related documents in both forward and reverse directions.
Parameters:
- startID: Starting document ID
- relationField: Relationship field to follow
- maxDepth: Maximum depth to traverse
Returns:
- *RelationshipGraph: Complete graph structure
- error: Traversal or query errors
Graph Structure:
The graph includes: - Nodes: All discovered documents - Edges: Relationships between documents - Metadata: Relationship types and directions
Example Usage:
graph, err := service.GetRelationshipGraph("container-456", "hostedOn", 3)
if err != nil {
    log.Printf("Failed to build graph: %v", err)
    return
}
fmt.Printf("Graph has %d nodes and %d edges\n",
    len(graph.Nodes), len(graph.Edges))
func (*CouchDBService) ListDesignDocs ¶ added in v0.0.7
func (c *CouchDBService) ListDesignDocs() ([]string, error)
ListDesignDocs returns a list of all design documents in the database. This is useful for discovering available views and design documents.
Returns:
- []string: List of design document IDs (including "_design/" prefix)
- error: Query or iteration errors
Example Usage:
designDocs, err := service.ListDesignDocs()
if err != nil {
    log.Printf("Failed to list design docs: %v", err)
    return
}
fmt.Println("Available design documents:")
for _, ddoc := range designDocs {
    fmt.Printf("  - %s\n", ddoc)
}
func (*CouchDBService) ListIndexes ¶ added in v0.0.7
func (c *CouchDBService) ListIndexes() ([]IndexInfo, error)
ListIndexes returns all indexes in the database. This is useful for discovering existing indexes and query optimization planning.
Returns:
- []IndexInfo: Slice of index information structures
- error: Query or parsing errors
Index Information:
Each IndexInfo contains:
- Name: Index name (auto-generated or explicit)
- Type: Index type ("json", "text", or "special")
- Fields: Array of indexed field names
- DesignDoc: Design document ID containing the index
Special Indexes:
CouchDB includes special indexes that cannot be deleted:
- _all_docs: Primary index on document IDs
- Default indexes created by CouchDB
Example Usage:
indexes, err := service.ListIndexes()
if err != nil {
    log.Printf("Failed to list indexes: %v", err)
    return
}
fmt.Println("Available indexes:")
for _, idx := range indexes {
    fmt.Printf("  %s (%s): %v\n", idx.Name, idx.Type, idx.Fields)
}
func (*CouchDBService) ListenChanges ¶ added in v0.0.7
func (c *CouchDBService) ListenChanges(opts ChangesFeedOptions, handler func(Change)) error
ListenChanges starts listening to database changes and calls handler for each change. This enables real-time notifications of document modifications for WebSocket support and live synchronization.
Parameters:
- opts: ChangesFeedOptions configuring the feed type and filtering
- handler: Function called for each change event
Returns:
- error: Connection, parsing, or handler errors
Feed Types:
- "normal": Return all changes since sequence and close
- "longpoll": Wait for changes, return when available, close
- "continuous": Keep connection open, stream changes indefinitely
Change Handling:
The handler function receives each change event:
- Called synchronously for each change
- Handler errors stop the feed
- Long-running handlers block subsequent changes
Heartbeat:
For continuous feeds, a heartbeat keeps the connection alive:
- Sent as newline characters during idle periods
- Prevents timeouts on long-running connections
- Recommended: 60000ms (60 seconds)
Example Usage:
// Monitor all container changes
opts := ChangesFeedOptions{
    Since:       "now",
    Feed:        "continuous",
    IncludeDocs: true,
    Heartbeat:   60000,
    Selector: map[string]interface{}{
        "@type": "SoftwareApplication",
    },
}
err := service.ListenChanges(opts, func(change Change) {
    if change.Deleted {
        fmt.Printf("Container %s was deleted\n", change.ID)
        return
    }
    fmt.Printf("Container %s changed to rev %s\n",
        change.ID, change.Changes[0].Rev)
    if change.Doc != nil {
        var container map[string]interface{}
        json.Unmarshal(change.Doc, &container)
        fmt.Printf("  Status: %s\n", container["status"])
    }
})
if err != nil {
    log.Printf("Changes feed error: %v", err)
}
func (*CouchDBService) QueryView ¶ added in v0.0.7
func (c *CouchDBService) QueryView(designName, viewName string, opts ViewOptions) (*ViewResult, error)
QueryView queries a CouchDB MapReduce view with configurable options. Views enable efficient querying by pre-computing indexes of document data.
Parameters:
- designName: Design document name (without "_design/" prefix)
- viewName: View name within the design document
- opts: ViewOptions for configuring the query
Returns:
- *ViewResult: Contains rows with keys, values, and optional documents
- error: Query execution or parsing errors
View Query Options:
- Key: Query for exact key match
- StartKey/EndKey: Query for key range
- IncludeDocs: Include full document content in results
- Limit: Maximum number of results to return
- Skip: Number of results to skip for pagination
- Descending: Reverse sort order
- Reduce: Execute reduce function (if view has one)
- Group: Group reduce results by key
Example Usage:
// Query containers on a specific host
opts := ViewOptions{
    Key:         "host-123",
    IncludeDocs: true,
    Limit:       50,
}
result, err := service.QueryView("graphium", "containers_by_host", opts)
if err != nil {
    log.Printf("Query failed: %v", err)
    return
}
fmt.Printf("Found %d containers\n", len(result.Rows))
for _, row := range result.Rows {
    fmt.Printf("Container: %s -> %v\n", row.ID, row.Value)
}

// Count containers per host using reduce
opts = ViewOptions{
    Reduce: true,
    Group:  true,
}
result, _ = service.QueryView("graphium", "container_count_by_host", opts)
for _, row := range result.Rows {
    fmt.Printf("Host %v has %v containers\n", row.Key, row.Value)
}
func (*CouchDBService) SaveDocument ¶ added in v0.0.2
func (c *CouchDBService) SaveDocument(doc eve.FlowProcessDocument) (*eve.FlowCouchDBResponse, error)
SaveDocument saves or updates a flow process document with automatic history management. This method handles the complete document lifecycle including revision management, audit trail maintenance, and state change tracking for flow processing workflows.
Document Processing:
- Sets document ID to ProcessID if not provided
- Updates timestamp to current time
- Retrieves existing document for revision and history management
- Appends new state change to audit history
- Saves document with CouchDB's MVCC conflict resolution
Parameters:
- doc: FlowProcessDocument containing process state and metadata
Returns:
- *eve.FlowCouchDBResponse: Success response with document ID and new revision
- error: Save operation, conflict resolution, or validation errors
Revision Management:
The method automatically handles CouchDB revisions by:
- Retrieving the current revision for existing documents
- Preserving the CreatedAt timestamp from the original document
- Generating a new revision on successful save
- Handling conflicts through retry mechanisms
History Tracking:
Each save operation appends a new state change to the document history:
- State: Current process state (started, running, completed, failed)
- Timestamp: When the state change occurred
- ErrorMsg: Error details for failed states (if applicable)
Document Lifecycle:
New documents:
- CreatedAt set to the current time
- History initialized with the first state change
- Document ID derived from ProcessID

Existing documents:
- CreatedAt preserved from the original document
- History appended with the new state change
- UpdatedAt set to the current time
- Revision updated for conflict resolution
Error Conditions:
- Document conflicts due to concurrent modifications
- Invalid document structure or missing required fields
- Database connectivity issues
- Insufficient permissions for document modification
Example Usage:
doc := eve.FlowProcessDocument{
    ProcessID:   "process-12345",
    State:       eve.StateRunning,
    Description: "Processing started",
    Metadata:    map[string]interface{}{"step": "validation"},
}
response, err := service.SaveDocument(doc)
if err != nil {
    log.Printf("Save failed: %v", err)
    return
}
fmt.Printf("Document saved with revision: %s\n", response.Rev)
Conflict Resolution:
CouchDB's MVCC model prevents lost updates through revision checking. Applications should handle conflict errors by retrieving the latest document version and retrying the save operation with updated data.
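One way to implement the retrieve-and-retry pattern described here is sketched below. The narrow `store` interface and the string-based conflict check are assumptions (production code should inspect the typed HTTP 409 status that the driver surfaces); a fake store that conflicts once exercises the retry path:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// store abstracts SaveDocument/GetDocument for this sketch (a hypothetical
// narrow interface; the real service returns richer response types).
type store interface {
	Save(doc map[string]interface{}) error
	Get(id string) (map[string]interface{}, error)
}

// saveWithRetry refreshes the revision and retries when CouchDB reports a
// document update conflict (HTTP 409). maxRetries bounds the loop.
func saveWithRetry(s store, doc map[string]interface{}, maxRetries int) error {
	var err error
	for attempt := 0; attempt <= maxRetries; attempt++ {
		if err = s.Save(doc); err == nil {
			return nil
		}
		// Assumption: conflicts are recognizable from the error text;
		// production code should check the typed status code instead.
		if !strings.Contains(err.Error(), "conflict") {
			return err
		}
		latest, getErr := s.Get(doc["_id"].(string))
		if getErr != nil {
			return getErr
		}
		doc["_rev"] = latest["_rev"] // adopt the winning revision and retry
	}
	return fmt.Errorf("gave up after %d retries: %w", maxRetries, err)
}

// fakeStore conflicts exactly once, then succeeds, to exercise the retry path.
type fakeStore struct{ calls int }

func (f *fakeStore) Save(doc map[string]interface{}) error {
	f.calls++
	if f.calls == 1 {
		return errors.New("conflict: document update conflict")
	}
	return nil
}

func (f *fakeStore) Get(id string) (map[string]interface{}, error) {
	return map[string]interface{}{"_id": id, "_rev": "2-abc"}, nil
}

func main() {
	doc := map[string]interface{}{"_id": "process-12345", "_rev": "1-old"}
	err := saveWithRetry(&fakeStore{}, doc, 3)
	fmt.Println(err, doc["_rev"])
}
```

Bounding the retry count matters: under heavy contention an unbounded loop can livelock, so failing after a few attempts and surfacing the conflict is usually the safer design.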
Audit Trail:
The complete state change history is preserved in the document, providing a full audit trail for compliance, debugging, and analysis of process execution patterns and performance.
func (*CouchDBService) SaveGenericDocument ¶ added in v0.0.7
func (c *CouchDBService) SaveGenericDocument(doc interface{}) (*CouchDBResponse, error)
SaveGenericDocument saves a document using interface{} for maximum flexibility. This function doesn't use generics and accepts any type that can be marshaled to JSON.
Parameters:
- doc: Document to save (any JSON-serializable type)
Returns:
- *CouchDBResponse: Contains document ID and new revision
- error: Save failures or validation errors
Example Usage:
doc := map[string]interface{}{
    "_id":   "mydoc",
    "name":  "example",
    "value": 42,
}
response, err := service.SaveGenericDocument(doc)
func (*CouchDBService) Traverse ¶ added in v0.0.7
func (c *CouchDBService) Traverse(opts TraversalOptions) ([]json.RawMessage, error)
Traverse follows relationships between documents starting from a given document. This enables graph-like navigation through document references.
Parameters:
- opts: TraversalOptions configuring start point, depth, and relationship field
Returns:
- []json.RawMessage: Slice of traversed documents as raw JSON
- error: Traversal or query errors
Traversal Directions:
Forward: follows document references (container → host)
- Start at the container document
- Follow the "hostedOn" field to find the host
- Continue following relationships up to the specified depth

Reverse: finds documents that reference the start document (host → containers)
- Start at the host document
- Find all documents whose "hostedOn" field points to this host
- Continue finding referring documents up to the specified depth
Depth Control:
- Depth 1: Only immediate neighbors
- Depth 2: Neighbors and their neighbors
- Depth N: Up to N relationship hops
Example Usage:
// Find all containers on a host (reverse traversal)
opts := TraversalOptions{
    StartID:       "host-123",
    Depth:         1,
    RelationField: "hostedOn",
    Direction:     "reverse",
}
containers, err := service.Traverse(opts)

// Find host and datacenter for a container (forward traversal)
opts = TraversalOptions{
    StartID:       "container-456",
    Depth:         2,
    RelationField: "hostedOn",
    Direction:     "forward",
}
related, err := service.Traverse(opts)
func (*CouchDBService) WatchChanges ¶ added in v0.0.7
func (c *CouchDBService) WatchChanges(opts ChangesFeedOptions) (<-chan Change, <-chan error, func())
WatchChanges provides a channel-based interface for change notifications. This enables Go-idiomatic concurrent processing of changes.
Parameters:
- opts: ChangesFeedOptions configuration
Returns:
- <-chan Change: Read-only channel for receiving changes
- <-chan error: Read-only channel for receiving errors
- func(): Stop function to close the changes feed
Usage Pattern:
The function returns immediately with channels:
- Changes are sent to the first channel
- Errors are sent to the second channel
- Call stop() to shut down the feed gracefully
Example Usage:
opts := ChangesFeedOptions{
    Since:       "now",
    Feed:        "continuous",
    IncludeDocs: true,
}
changeChan, errChan, stop := service.WatchChanges(opts)
defer stop()

for {
    select {
    case change := <-changeChan:
        fmt.Printf("Change: %s\n", change.ID)
        // Process change
    case err := <-errChan:
        log.Printf("Error: %v\n", err)
        return
    case <-time.After(60 * time.Second):
        fmt.Println("No changes for 60 seconds")
    }
}
type DatabaseInfo ¶ added in v0.0.7
type DatabaseInfo struct {
    DBName            string `json:"db_name"`             // Database name
    DocCount          int64  `json:"doc_count"`           // Non-deleted document count
    DocDelCount       int64  `json:"doc_del_count"`       // Deleted document count
    UpdateSeq         string `json:"update_seq"`          // Current update sequence
    PurgeSeq          int64  `json:"purge_seq"`           // Purge sequence
    CompactRunning    bool   `json:"compact_running"`     // Compaction status
    DiskSize          int64  `json:"disk_size"`           // Disk space used (bytes)
    DataSize          int64  `json:"data_size"`           // Data size (bytes)
    InstanceStartTime string `json:"instance_start_time"` // Instance start time
}
DatabaseInfo contains metadata about a CouchDB database. This structure provides statistics and status information for database monitoring, capacity planning, and administrative operations.
Information Fields:
- DBName: Database name
- DocCount: Number of non-deleted documents
- DocDelCount: Number of deleted documents
- UpdateSeq: Current update sequence
- PurgeSeq: Current purge sequence
- CompactRunning: True if compaction is in progress
- DiskSize: Total disk space used (bytes)
- DataSize: Actual data size excluding overhead (bytes)
- InstanceStartTime: Database instance start timestamp
Example Usage:
info, _ := service.GetDatabaseInfo()
fmt.Printf("Database: %s\n", info.DBName)
fmt.Printf("Documents: %d (+ %d deleted)\n", info.DocCount, info.DocDelCount)
fmt.Printf("Disk usage: %.2f MB\n", float64(info.DiskSize)/1024/1024)
fmt.Printf("Data size: %.2f MB\n", float64(info.DataSize)/1024/1024)
if info.CompactRunning {
    fmt.Println("Compaction in progress")
}
type Description ¶ added in v0.0.6
type Description struct {
    XMLName     xml.Name    `xml:"Description"`
    About       string      `xml:"about,attr"` // rdf:about attribute
    PrefLabel   PrefLabel   `xml:"prefLabel"`
    URI4IRI     Resource    `xml:"URI4IRI"`
    ICURI4IRI   Resource    `xml:"IC_URI4IRI"`
    Restriction Restriction `xml:"Role-is-restricted-to-Protection-Class"`
    ID          string      `xml:"ID"` // literal value inside <czo:ID>
    UserSkills  *Resource   `xml:"User-has-skills-for-Product"`
}
Description represents an RDF resource description with various properties. It maps to rdf:Description elements in RDF/XML format and includes SKOS preferred labels, URIs, restrictions, and custom properties.
type DesignDoc ¶ added in v0.0.7
type DesignDoc struct {
    ID       string          `json:"_id"`            // Design document ID (must start with "_design/")
    Rev      string          `json:"_rev,omitempty"` // Document revision (for updates)
    Language string          `json:"language"`       // Programming language (typically "javascript")
    Views    map[string]View `json:"views"`          // Map of view names to definitions
}
DesignDoc represents a CouchDB design document containing views. Design documents are special documents that contain application logic including MapReduce views, validation functions, and show/list functions.
Design Document Structure:
- ID: Design document identifier (must start with "_design/")
- Language: Programming language for functions (default: "javascript")
- Views: Map of view names to view definitions
Naming Convention:
Design document IDs must have the "_design/" prefix:
- "_design/graphium" (correct)
- "graphium" (incorrect - will be rejected)
Example Usage:
designDoc := DesignDoc{
    ID:       "_design/graphium",
    Language: "javascript",
    Views: map[string]View{
        "containers_by_host": {
            Map: `function(doc) {
                if (doc['@type'] === 'SoftwareApplication') {
                    emit(doc.hostedOn, doc);
                }
            }`,
        },
        "host_container_count": {
            Map: `function(doc) {
                if (doc['@type'] === 'SoftwareApplication') {
                    emit(doc.hostedOn, 1);
                }
            }`,
            Reduce: "_sum",
        },
    },
}
service.CreateDesignDoc(designDoc)
type DocumentStore ¶ added in v0.0.6
type DocumentStore interface {
    // GetDocument retrieves a flow process document by its ID.
    // Returns the document if found, or an error if not found or on failure.
    GetDocument(id string) (*eve.FlowProcessDocument, error)

    // GetAllDocuments retrieves all flow process documents from the database.
    // Returns a slice of all documents, or an error on failure.
    GetAllDocuments() ([]eve.FlowProcessDocument, error)

    // GetDocumentsByState retrieves flow process documents filtered by state.
    // Returns a slice of documents matching the specified state, or an error on failure.
    GetDocumentsByState(state eve.FlowProcessState) ([]eve.FlowProcessDocument, error)

    // SaveDocument saves a flow process document to the database.
    // Returns a response with revision information, or an error on failure.
    SaveDocument(doc eve.FlowProcessDocument) (*eve.FlowCouchDBResponse, error)

    // DeleteDocument deletes a flow process document by ID and revision.
    // Returns an error if deletion fails.
    DeleteDocument(id, rev string) error

    // Close closes the database connection.
    // Returns an error if closing fails.
    Close() error
}
DocumentStore defines the interface for flow process document storage and retrieval. This interface allows for easy mocking and testing of document operations.
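Because DocumentStore is an interface, a test double is straightforward. The sketch below uses a local stand-in type instead of the eve package's documents so it is self-contained; a real mock would implement every DocumentStore method against the eve types:

```go
package main

import (
	"errors"
	"fmt"
)

// procDoc is a local stand-in for eve.FlowProcessDocument so the sketch
// compiles on its own; a real mock would use the eve types and implement
// the full DocumentStore interface, including DeleteDocument and Close.
type procDoc struct {
	ProcessID string
	State     string
}

// memStore is an in-memory test double in the spirit of DocumentStore,
// letting unit tests exercise document logic without a running CouchDB.
type memStore struct{ docs map[string]procDoc }

func newMemStore() *memStore { return &memStore{docs: map[string]procDoc{}} }

func (m *memStore) SaveDocument(d procDoc) error {
	m.docs[d.ProcessID] = d
	return nil
}

func (m *memStore) GetDocument(id string) (*procDoc, error) {
	d, ok := m.docs[id]
	if !ok {
		return nil, errors.New("document not found")
	}
	return &d, nil
}

func (m *memStore) GetDocumentsByState(state string) ([]procDoc, error) {
	var out []procDoc
	for _, d := range m.docs {
		if d.State == state {
			out = append(out, d)
		}
	}
	return out, nil
}

func main() {
	s := newMemStore()
	s.SaveDocument(procDoc{ProcessID: "p1", State: "failed"})
	s.SaveDocument(procDoc{ProcessID: "p2", State: "running"})
	failed, _ := s.GetDocumentsByState("failed")
	fmt.Println(len(failed))
}
```

Code that depends on DocumentStore rather than *CouchDBService can then be wired to such a double in tests and to the real service in production.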
type GraphDBBinding ¶
type GraphDBBinding struct {
    Readable  map[string]string `json:"readable"`  // Read access information
    Id        map[string]string `json:"id"`        // Resource identifier
    Title     map[string]string `json:"title"`     // Human-readable title
    Uri       map[string]string `json:"uri"`       // Resource URI
    Writable  map[string]string `json:"writable"`  // Write access information
    ContextID ContextID         `json:"contextID"` // Graph context identifier
}
GraphDBBinding represents a single binding result from GraphDB SPARQL queries. This structure captures the complex binding format returned by GraphDB API responses, including repository metadata and graph information.
Binding Fields:
- Readable: Access permissions for read operations
- Id: Repository or graph identifier information
- Title: Human-readable titles and descriptions
- Uri: URI references for resources
- Writable: Access permissions for write operations
- ContextID: Graph context information
Data Access:
Each field is a map[string]string containing type and value information similar to standard SPARQL JSON results format but with GraphDB-specific extensions for repository and graph management.
type GraphDBResponse ¶
type GraphDBResponse struct {
    Head    []interface{}  `json:"head>vars"` // Response metadata
    Results GraphDBResults `json:"results"`   // Query results with bindings
}
GraphDBResponse represents the complete response structure from GraphDB API calls. This structure follows GraphDB's JSON response format for repository listing, graph management, and other administrative operations.
Response Structure:
- Head: Contains metadata about the response (variables, etc.)
- Results: Contains the actual query results with detailed bindings
API Compatibility:
The structure is designed to handle GraphDB's specific JSON response format which may differ from standard SPARQL JSON results in some administrative operations.
func GraphDBListGraphs ¶
func GraphDBListGraphs(url, user, pass, repo string) (*GraphDBResponse, error)
GraphDBListGraphs retrieves a list of all named graphs in a repository. This function queries GraphDB's RDF graphs endpoint to discover named graphs and their metadata for graph management and administration.
Named Graph Discovery:
Connects to the repository's RDF graphs endpoint to retrieve:
- Named graph URIs and identifiers
- Graph metadata and context information
- Access permissions and properties
- Graph statistics and summary data
Parameters:
- url: Base URL of the GraphDB server
- user: Username for HTTP Basic Authentication
- pass: Password for HTTP Basic Authentication
- repo: Repository identifier to list graphs from
Returns:
- *GraphDBResponse: Structured response with graph information
- error: HTTP communication, authentication, or parsing errors
Response Structure:
Returns GraphDBResponse containing bindings with graph information:
- Graph URIs and identifiers
- Context metadata and properties
- Access control information
- Graph statistics and summary data
Graph Management:
The returned information enables:
- Graph inventory and discovery
- Access control verification
- Graph-aware query planning
- Administrative monitoring and reporting
Success Conditions:
Graph listing is successful when the server returns:
- HTTP 200 OK status code
- Valid JSON response with graph bindings
Error Conditions:
- Repository not found or inaccessible
- Authentication failures or insufficient permissions
- Server errors during graph enumeration
- JSON parsing errors in response data
- Network connectivity issues
Example Usage:
graphs, err := GraphDBListGraphs("http://localhost:7200", "admin", "password", "knowledge-base")
if err != nil {
log.Fatal("Failed to list graphs:", err)
}
for _, binding := range graphs.Results.Bindings {
if uri, ok := binding.Uri["value"]; ok {
fmt.Printf("Named graph: %s\n", uri)
}
}
Administrative Applications:
- Graph discovery for query optimization
- Access control auditing and verification
- Graph-based data organization analysis
- Repository content inventory and documentation
func GraphDBRepositories ¶
func GraphDBRepositories(url string, user string, pass string) (*GraphDBResponse, error)
GraphDBRepositories retrieves a list of all repositories from a GraphDB server. This function queries the GraphDB management API to discover available repositories and their metadata for administration and data access purposes.
Repository Discovery:
Connects to the GraphDB repositories endpoint to retrieve:
- Repository identifiers and names
- Access permissions (readable/writable status)
- Repository types and configurations
- Context information for graph management
Parameters:
- url: Base URL of the GraphDB server (e.g., "http://localhost:7200")
- user: Username for HTTP Basic Authentication (empty string for no auth)
- pass: Password for HTTP Basic Authentication (empty string for no auth)
Returns:
- *GraphDBResponse: Structured response with repository information
- error: HTTP communication, authentication, or parsing errors
Authentication:
Supports HTTP Basic Authentication when credentials are provided. Empty username and password values skip authentication for open servers.
Response Format:
Returns GraphDB's JSON format with bindings containing:
- Repository IDs and human-readable titles
- URI references for API access
- Access permissions for read/write operations
- Context information for graph-aware operations
Error Conditions:
- Network connectivity issues to GraphDB server
- Authentication failures with provided credentials
- Server errors or invalid responses
- JSON parsing errors in response data
Example Usage:
repos, err := GraphDBRepositories("http://localhost:7200", "admin", "password")
if err != nil {
log.Fatal("Failed to list repositories:", err)
}
for _, binding := range repos.Results.Bindings {
fmt.Printf("Repository: %s - %s\n",
binding.Id["value"], binding.Title["value"])
}
Administrative Use:
This function is typically used for:
- Repository discovery and inventory
- Administrative dashboard implementations
- Automated repository management scripts
- Health monitoring and status checking
type GraphDBResults ¶
type GraphDBResults struct {
Bindings []GraphDBBinding `json:"bindings"` // Array of query result bindings
}
GraphDBResults contains an array of binding results from GraphDB queries. This structure represents the results section of GraphDB API responses, containing multiple binding objects for repository lists, graph lists, and other query results.
Result Processing:
The Bindings array contains one entry per result item, where each binding provides detailed information about repositories, graphs, or other GraphDB resources depending on the query type.
type Head ¶ added in v0.0.6
type Head struct {
Variables []Variable `xml:"variable"`
}
Head contains the variable declarations for a SPARQL XML result.
type Index ¶ added in v0.0.7
type Index struct {
Name string `json:"name"` // Index name
Fields []string `json:"fields"` // Fields to index
Type string `json:"type"` // Index type: "json" or "text"
}
Index represents a CouchDB index for query optimization. Indexes improve query performance by maintaining sorted data structures for frequently queried fields.
Index Types:
- "json": Standard JSON index for Mango queries (default)
- "text": Full-text search index (requires special queries)
Index Components:
- Name: Index identifier for management and hints
- Fields: Array of field names to index
- Type: Index type ("json" or "text")
Index Usage:
Indexes are automatically used by Mango queries when the query selector matches the indexed fields. Explicit index selection is possible via the UseIndex field in MangoQuery.
Example Usage:
// Create compound index for common query pattern
index := Index{
Name: "status-location-index",
Fields: []string{"status", "location"},
Type: "json",
}
service.CreateIndex(index)
// Query will automatically use the index
query := MangoQuery{
Selector: map[string]interface{}{
"status": "running",
"location": "us-east-1",
},
}
type IndexInfo ¶ added in v0.0.7
type IndexInfo struct {
Name string // Index name
Type string // Index type
Fields []string // Indexed fields
DesignDoc string // Design document ID
}
IndexInfo provides information about a CouchDB index. This structure contains metadata returned by ListIndexes().
Fields:
- Name: Index name (explicit or auto-generated)
- Type: Index type ("json", "text", or "special")
- Fields: Array of indexed field names
- DesignDoc: Design document ID containing the index definition
Example Usage:
indexes, _ := service.ListIndexes()
for _, idx := range indexes {
fmt.Printf("Index: %s\n", idx.Name)
fmt.Printf(" Type: %s\n", idx.Type)
fmt.Printf(" Fields: %v\n", idx.Fields)
fmt.Printf(" Design Doc: %s\n", idx.DesignDoc)
}
type MangoQuery ¶ added in v0.0.7
type MangoQuery struct {
Selector map[string]interface{} `json:"selector"` // MongoDB-style selector
Fields []string `json:"fields,omitempty"` // Fields to return
Sort []map[string]string `json:"sort,omitempty"` // Sort specifications
Limit int `json:"limit,omitempty"` // Maximum results
Skip int `json:"skip,omitempty"` // Pagination offset
UseIndex string `json:"use_index,omitempty"` // Index hint
}
MangoQuery represents a CouchDB Mango query (MongoDB-style queries). Mango queries provide a declarative JSON-based query language for filtering documents without writing MapReduce views.
Query Components:
- Selector: MongoDB-style selector with operators ($eq, $gt, $and, etc.)
- Fields: Array of field names to return (projection)
- Sort: Array of sort specifications
- Limit: Maximum number of results
- Skip: Number of results to skip for pagination
- UseIndex: Hint for which index to use
Selector Operators:
- $eq: Equal to
- $ne: Not equal to
- $gt, $gte: Greater than (or equal)
- $lt, $lte: Less than (or equal)
- $and, $or, $not: Logical operators
- $in, $nin: In array / not in array
- $regex: Regular expression match
- $exists: Field exists check
Example Usage:
// Find running containers in us-east datacenter
query := MangoQuery{
Selector: map[string]interface{}{
"$and": []interface{}{
map[string]interface{}{"status": "running"},
map[string]interface{}{"location": map[string]interface{}{
"$regex": "^us-east",
}},
},
},
Fields: []string{"_id", "name", "status", "hostedOn"},
Sort: []map[string]string{
{"name": "asc"},
},
Limit: 100,
}
results, _ := service.Find(query)
type PoolPartyClient ¶ added in v0.0.6
type PoolPartyClient struct {
BaseURL string
Username string
Password string
HTTPClient *http.Client
TemplateDir string
// contains filtered or unexported fields
}
PoolPartyClient represents a client for interacting with the PoolParty API. It handles authentication, SPARQL query execution, template management, and project operations. The client supports basic authentication and maintains a cache of parsed templates for improved performance.
func NewPoolPartyClient ¶ added in v0.0.6
func NewPoolPartyClient(baseURL, username, password, templateDir string) *PoolPartyClient
NewPoolPartyClient creates a new PoolParty API client with authentication credentials and template directory configuration. The client maintains a template cache and uses a 60-second HTTP timeout for all requests.
Parameters:
- baseURL: The PoolParty server base URL (e.g., "https://poolparty.example.com")
- username: Authentication username
- password: Authentication password
- templateDir: Directory containing SPARQL query template files
Returns:
- *PoolPartyClient: Configured client ready for API operations
Example:
client := NewPoolPartyClient("https://poolparty.example.com", "admin", "password", "./templates")
func (*PoolPartyClient) ExecuteSPARQL ¶ added in v0.0.6
func (c *PoolPartyClient) ExecuteSPARQL(projectID, query, contentType string) ([]byte, error)
ExecuteSPARQL executes a raw SPARQL query against a PoolParty project's SPARQL endpoint. The query is sent as form-encoded POST data with the specified Accept header for format negotiation.
Parameters:
- projectID: The PoolParty project/thesaurus ID
- query: The SPARQL query string
- contentType: Desired response format ("application/json", "application/sparql-results+xml", "application/rdf+xml", etc.)
Returns:
- []byte: Query results in the requested format
- error: Any error encountered during query execution
Example:
query := `SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10`
results, err := client.ExecuteSPARQL("myproject", query, "application/json")
if err != nil {
log.Fatal(err)
}
func (*PoolPartyClient) ExecuteSPARQLFromTemplate ¶ added in v0.0.6
func (c *PoolPartyClient) ExecuteSPARQLFromTemplate(projectID, templateName, contentType string, params interface{}) ([]byte, error)
ExecuteSPARQLFromTemplate loads a SPARQL query template, executes it with the given parameters, and runs the resulting query against the specified PoolParty project. This is the recommended way to execute parameterized SPARQL queries.
Parameters:
- projectID: The PoolParty project/thesaurus ID
- templateName: Name of the template file (e.g., "query.sparql")
- contentType: Desired response format ("application/json", "application/rdf+xml", etc.)
- params: Parameters to pass to the template (can be any Go type)
Returns:
- []byte: Query results in the requested format
- error: Any error encountered during template execution or query
Example:
params := map[string]string{"concept": "http://example.org/concept/123"}
results, err := client.ExecuteSPARQLFromTemplate("myproject", "get_related.sparql", "application/json", params)
func (*PoolPartyClient) GetProjectDetails ¶ added in v0.0.6
func (c *PoolPartyClient) GetProjectDetails(projectID string) (*Project, error)
GetProjectDetails retrieves detailed information about a specific PoolParty project by its ID. This includes metadata such as title, description, URI, creation date, modification date, type, and status.
Parameters:
- projectID: The project/thesaurus identifier
Returns:
- *Project: Project details
- error: Any error encountered during retrieval
Example:
project, err := client.GetProjectDetails("myproject")
if err != nil {
log.Fatal(err)
}
fmt.Printf("Project: %s\nDescription: %s\n", project.Title, project.Description)
func (*PoolPartyClient) ListProjects ¶ added in v0.0.6
func (c *PoolPartyClient) ListProjects() ([]Project, error)
ListProjects retrieves all projects (thesauri) from the PoolParty server. This method tries multiple endpoint and content-type combinations to handle different PoolParty versions and configurations. It automatically discovers the working endpoint by trying various API paths and accept headers.
Returns:
- []Project: List of projects found on the server
- error: Error if all endpoint attempts fail
Example:
projects, err := client.ListProjects()
if err != nil {
log.Fatal(err)
}
for _, proj := range projects {
fmt.Printf("Project: %s (%s)\n", proj.Title, proj.ID)
}
func (*PoolPartyClient) LoadTemplate ¶ added in v0.0.6
func (c *PoolPartyClient) LoadTemplate(templateName string) (*template.Template, error)
LoadTemplate loads a SPARQL query template from the template directory and caches it. Subsequent calls with the same template name return the cached version for better performance. Templates use Go's text/template syntax and can accept parameters for dynamic query generation.
Parameters:
- templateName: Name of the template file (e.g., "query.sparql")
Returns:
- *template.Template: Parsed template ready for execution
- error: Any error encountered during template loading or parsing
Example:
tmpl, err := client.LoadTemplate("find_concepts.sparql")
if err != nil {
log.Fatal(err)
}
type PostgresDB ¶ added in v0.0.30
type PostgresDB struct {
// contains filtered or unexported fields
}
PostgresDB wraps PostgreSQL connection pool with helper methods using pgx driver. This provides a lightweight alternative to GORM for applications that need direct SQL access with connection pooling.
Use Cases:
- High-performance metric storage
- Time-series data operations
- Custom SQL queries
- Bulk operations
Comparison to GORM:
- Faster for bulk operations
- No ORM overhead
- Direct SQL control
- Better for time-series workloads
func NewPostgresDB ¶ added in v0.0.30
func NewPostgresDB(connString string) (*PostgresDB, error)
NewPostgresDB creates a new PostgreSQL database connection using pgx. The connection string format is standard PostgreSQL:
postgresql://[user[:password]@][host][:port][/dbname][?param1=value1&...]
Example:
db, err := NewPostgresDB("postgresql://user:pass@localhost:5432/mydb?sslmode=disable")
Connection Pooling:
- Automatic connection pooling via pgxpool
- Default pool configuration applied
- Configurable via connection string parameters
func (*PostgresDB) Close ¶ added in v0.0.30
func (db *PostgresDB) Close()
Close closes the database connection pool.
func (*PostgresDB) Exec ¶ added in v0.0.30
func (db *PostgresDB) Exec(ctx context.Context, sql string, args ...interface{}) error
Exec executes a SQL statement. Returns an error if execution fails.
func (*PostgresDB) Pool ¶ added in v0.0.30
func (db *PostgresDB) Pool() *pgxpool.Pool
Pool returns the underlying connection pool for advanced operations. Use this for transactions, batch operations, or custom connection management.
type PrefLabel ¶ added in v0.0.6
PrefLabel represents a SKOS preferred label with language tag. Example: <skos:prefLabel xml:lang="en">Computer Science</skos:prefLabel>
type Project ¶ added in v0.0.6
type Project struct {
ID string `json:"id"`
Title string `json:"title"`
Description string `json:"description"`
URI string `json:"uri"`
Created string `json:"created"`
Modified string `json:"modified"`
Type string `json:"type"`
Status string `json:"status"`
}
Project represents a PoolParty thesaurus/taxonomy project. PoolParty is a semantic technology platform for taxonomy and knowledge graph management. Each project can contain concepts, terms, and their relationships.
type QueryBuilder ¶ added in v0.0.7
type QueryBuilder struct {
// contains filtered or unexported fields
}
QueryBuilder provides a fluent API for constructing complex Mango queries. This builder pattern simplifies query construction with method chaining.
Example Usage:
query := NewQueryBuilder().
Where("status", "eq", "running").
And().
Where("location", "regex", "^us-east").
Select("_id", "name", "status").
Limit(50).
Build()
results, _ := service.Find(query)
func NewQueryBuilder ¶ added in v0.0.7
func NewQueryBuilder() *QueryBuilder
NewQueryBuilder creates a new QueryBuilder instance. Returns a builder ready for method chaining.
Example Usage:
qb := NewQueryBuilder()
query := qb.Where("status", "eq", "running").Build()
func (*QueryBuilder) And ¶ added in v0.0.7
func (qb *QueryBuilder) And() *QueryBuilder
And specifies that subsequent conditions should be AND'd together. This is the default logical operator.
Returns:
- *QueryBuilder: Builder instance for method chaining
Example Usage:
qb.Where("status", "eq", "running").
And().
Where("location", "eq", "us-east")
func (*QueryBuilder) Build ¶ added in v0.0.7
func (qb *QueryBuilder) Build() MangoQuery
Build constructs the final MangoQuery from the builder. Returns a MangoQuery ready for execution.
Returns:
- MangoQuery: Complete query structure
Example Usage:
query := NewQueryBuilder().
Where("status", "eq", "running").
And().
Where("location", "regex", "^us-east").
Select("_id", "name", "status").
Sort("name", "asc").
Limit(50).
Build()
results, _ := service.Find(query)
func (*QueryBuilder) Limit ¶ added in v0.0.7
func (qb *QueryBuilder) Limit(n int) *QueryBuilder
Limit sets the maximum number of results to return. Used for pagination and result size control.
Parameters:
- n: Maximum number of results
Returns:
- *QueryBuilder: Builder instance for method chaining
Example Usage:
qb.Limit(100)
func (*QueryBuilder) Or ¶ added in v0.0.7
func (qb *QueryBuilder) Or() *QueryBuilder
Or specifies that subsequent conditions should be OR'd together. Changes the logical operator for combining conditions.
Returns:
- *QueryBuilder: Builder instance for method chaining
Example Usage:
qb.Where("status", "eq", "running").
Or().
Where("status", "eq", "starting")
func (*QueryBuilder) Select ¶ added in v0.0.7
func (qb *QueryBuilder) Select(fields ...string) *QueryBuilder
Select specifies which fields to return in query results. This provides projection to reduce bandwidth and processing.
Parameters:
- fields: Field names to include in results
Returns:
- *QueryBuilder: Builder instance for method chaining
Example Usage:
qb.Select("_id", "name", "status", "hostedOn")
func (*QueryBuilder) Skip ¶ added in v0.0.7
func (qb *QueryBuilder) Skip(n int) *QueryBuilder
Skip sets the number of results to skip (for pagination). Used with Limit to implement pagination.
Parameters:
- n: Number of results to skip
Returns:
- *QueryBuilder: Builder instance for method chaining
Example Usage:
// Get second page (results 51-100)
qb.Skip(50).Limit(50)
func (*QueryBuilder) Sort ¶ added in v0.0.7
func (qb *QueryBuilder) Sort(field, direction string) *QueryBuilder
Sort specifies sort order for query results. Multiple sort fields can be specified for multi-level sorting.
Parameters:
- field: Field name to sort by
- direction: Sort direction ("asc" or "desc")
Returns:
- *QueryBuilder: Builder instance for method chaining
Example Usage:
qb.Sort("status", "asc").Sort("name", "asc")
func (*QueryBuilder) UseIndex ¶ added in v0.0.7
func (qb *QueryBuilder) UseIndex(indexName string) *QueryBuilder
UseIndex hints which index to use for the query. Improves performance by explicitly selecting an index.
Parameters:
- indexName: Name of the index to use
Returns:
- *QueryBuilder: Builder instance for method chaining
Example Usage:
qb.UseIndex("status-location-index")
func (*QueryBuilder) Where ¶ added in v0.0.7
func (qb *QueryBuilder) Where(field string, operator string, value interface{}) *QueryBuilder
Where adds a condition to the query. Supports various operators for flexible filtering.
Parameters:
- field: Document field name to filter on
- operator: Comparison operator ("eq", "ne", "gt", "gte", "lt", "lte", "regex", "in", "nin", "exists")
- value: Value to compare against
Returns:
- *QueryBuilder: Builder instance for method chaining
Supported Operators:
- "eq": Equal to (default)
- "ne": Not equal to
- "gt": Greater than
- "gte": Greater than or equal
- "lt": Less than
- "lte": Less than or equal
- "regex": Regular expression match
- "in": In array
- "nin": Not in array
- "exists": Field exists (value should be bool)
Example Usage:
qb.Where("status", "eq", "running")
qb.Where("count", "gt", 10)
qb.Where("location", "regex", "^us-")
qb.Where("tags", "in", []string{"production", "critical"})
type RDF ¶ added in v0.0.6
type RDF struct {
XMLName xml.Name `xml:"RDF"`
Descriptions []Description `xml:"Description"`
}
RDF represents an RDF graph with multiple resource descriptions. It is used for parsing RDF/XML responses from PoolParty.
type RabbitLog ¶
type RabbitLog struct {
gorm.Model // Embedded GORM model with ID, timestamps, soft delete
DocumentID string // Unique document identifier
State string // Processing state (started, running, completed, failed)
Version string // Document or processing version
Log []byte `gorm:"type:bytea"` // Binary log data stored as PostgreSQL bytea
}
RabbitLog represents a RabbitMQ message processing log entry in the database. This model captures essential information about message processing including document identification, processing state, version tracking, and binary log data.
Database Schema:
The model uses GORM's embedded Model, which provides:
- ID: Primary key with auto-increment
- CreatedAt: Automatic timestamp on record creation
- UpdatedAt: Automatic timestamp on record updates
- DeletedAt: Soft deletion support with null timestamp
Field Descriptions:
- DocumentID: Unique identifier for the processed document
- State: Current processing state (started, running, completed, failed)
- Version: Document version or processing version identifier
- Log: Binary log data stored as PostgreSQL bytea type
Storage Considerations:
The Log field uses PostgreSQL's 'bytea' type for efficient binary data storage. This provides optimal performance and avoids encoding overhead while properly handling arbitrary binary content including null bytes and non-UTF8 data.
Indexing Strategy:
Consider adding database indexes on:
- DocumentID for fast document lookups
- State for filtering by processing status
- CreatedAt for time-based queries and pagination
- Composite indexes for common query patterns
Data Integrity:
- DocumentID should be validated for proper format
- State should be constrained to valid values
- Version should follow semantic versioning if applicable
- Log data is stored as raw binary without encoding
Example Database Record:
{
"ID": 1,
"CreatedAt": "2024-01-15T10:30:00Z",
"UpdatedAt": "2024-01-15T10:35:00Z",
"DocumentID": "doc-12345",
"State": "completed",
"Version": "v1.2.3",
"Log": [binary data] // raw binary log data
}
type RelationshipEdge ¶ added in v0.0.7
type RelationshipEdge struct {
From string // Source document ID
To string // Target document ID
Type string // Relationship type (field name)
}
RelationshipEdge represents a relationship between two documents. Describes the direction and type of the relationship.
type RelationshipGraph ¶ added in v0.0.7
type RelationshipGraph struct {
Nodes map[string]json.RawMessage // Map of document ID to document
Edges []RelationshipEdge // Relationships between documents
}
RelationshipGraph represents a graph of related documents. Contains nodes (documents) and edges (relationships) for visualization and analysis.
type Repository ¶ added in v0.0.3
type Repository struct {
ID string // Unique repository identifier
Title string // Human-readable repository name
Type string // Repository storage type
}
Repository represents an RDF4J repository configuration and metadata. This structure provides essential information about repositories available on an RDF4J server, including identification, description, and storage type.
Repository Attributes:
- ID: Unique identifier used in API endpoints and configuration
- Title: Human-readable description for management interfaces
- Type: Storage backend type (memory, native, LMDB, etc.)
The repository serves as the primary container for RDF data and provides the context for all SPARQL operations and data management functions.
Repository Types:
- "memory": In-memory storage (fast, non-persistent)
- "native": File-based persistent storage
- "lmdb": LMDB-based high-performance storage
- "remote": Proxy to remote repositories
- "federation": Federated access to multiple repositories
func ListRepositories ¶ added in v0.0.3
func ListRepositories(serverURL, username, password string) ([]Repository, error)
ListRepositories retrieves all available repositories from an RDF4J server. This function queries the server's repository management API to discover available data stores and their configurations.
Server Discovery:
The function connects to the RDF4J server's repository endpoint to retrieve metadata about all configured repositories, including both system repositories and user-created data stores.
Parameters:
- serverURL: Base URL of the RDF4J server (e.g., "http://localhost:8080/rdf4j-server")
- username: Username for HTTP Basic Authentication
- password: Password for HTTP Basic Authentication
Returns:
- []Repository: Array of repository metadata structures
- error: HTTP communication, authentication, or parsing errors
Authentication:
Uses HTTP Basic Authentication to access the repository listing endpoint. The credentials must have appropriate permissions to read repository metadata from the RDF4J server.
Response Format:
The function expects SPARQL JSON results format from the server, containing repository information with id, title, and type bindings for each available repository.
Error Conditions:
- Network connectivity issues to the RDF4J server
- Authentication failures with provided credentials
- Server errors or invalid responses
- JSON parsing errors in response data
- Repository access permission denied
Example Usage:
repos, err := ListRepositories(
"http://localhost:8080/rdf4j-server",
"admin",
"password",
)
if err != nil {
log.Fatal("Failed to list repositories:", err)
}
for _, repo := range repos {
fmt.Printf("Repository: %s (%s) - %s\n", repo.ID, repo.Type, repo.Title)
}
Repository Management:
The returned repository list can be used for:
- Dynamic repository selection in applications
- Repository health monitoring and status checking
- Administrative interfaces for repository management
- Automated repository discovery and configuration
type Resource ¶ added in v0.0.6
type Resource struct {
Resource string `xml:"resource,attr"`
}
Resource represents an RDF resource reference using rdf:resource attribute. Example: <czo:URI4IRI rdf:resource="http://example.org/concept/123"/>
type Restriction ¶ added in v0.0.6
type Restriction struct {
Resource string `xml:"resource,attr"` // rdf:resource
}
Restriction represents an RDF restriction with a resource reference. Used for OWL restrictions and constraints in the knowledge graph.
type Result ¶ added in v0.0.6
type Result struct {
Bindings []Binding `xml:"binding"`
}
Result represents a single result row in SPARQL XML format, containing multiple variable bindings.
type Results ¶ added in v0.0.6
type Results struct {
Results []Result `xml:"result"`
}
Results contains all result rows from a SPARQL XML query.
type SPARQLBindings ¶ added in v0.0.6
type SPARQLBindings struct {
Bindings []map[string]SPARQLValue `json:"bindings"`
}
SPARQLBindings contains the actual result bindings from a SPARQL query. Each binding is a map of variable names to their corresponding values.
type SPARQLHead ¶ added in v0.0.6
type SPARQLHead struct {
Vars []string `json:"vars"`
}
SPARQLHead contains metadata about the SPARQL query result, including the variable names used in the query.
type SPARQLResult ¶ added in v0.0.6
type SPARQLResult struct {
Head SPARQLHead `json:"head"`
Results SPARQLBindings `json:"results"`
}
SPARQLResult represents a SPARQL query result in JSON format. It follows the W3C SPARQL 1.1 Query Results JSON Format specification.
type SPARQLValue ¶ added in v0.0.6
type SPARQLValue struct {
Type string `json:"type"`
Value string `json:"value"`
Lang string `json:"xml:lang,omitempty"`
}
SPARQLValue represents a single value in a SPARQL query result. It includes the value type (uri, literal, bnode), the value itself, and optional language tag for literals.
type Sparql ¶ added in v0.0.6
type Sparql struct {
XMLName xml.Name `xml:"http://www.w3.org/2005/sparql-results# sparql"`
Head Head `xml:"head"`
Results Results `xml:"results"`
}
Sparql represents the root element of a SPARQL XML query result. It follows the W3C SPARQL Query Results XML Format specification.
type TLSConfig ¶ added in v0.0.7
type TLSConfig struct {
Enabled bool // Enable TLS/SSL
CertFile string // Client certificate file path
KeyFile string // Client private key file path
CAFile string // Certificate Authority file path
InsecureSkipVerify bool // Skip certificate verification (development only)
}
TLSConfig provides TLS/SSL configuration for secure CouchDB connections. This configuration enables encrypted communication between the client and CouchDB server with optional client certificate authentication.
Security Options:
- Enabled: Enable TLS/SSL for the connection
- CertFile: Client certificate file for mutual TLS authentication
- KeyFile: Client private key file for certificate authentication
- CAFile: Certificate Authority file for server verification
- InsecureSkipVerify: Skip server certificate verification (not recommended)
Example Usage:
tlsConfig := &TLSConfig{
Enabled: true,
CAFile: "/etc/ssl/certs/ca-bundle.crt",
CertFile: "/etc/ssl/certs/client.crt",
KeyFile: "/etc/ssl/private/client.key",
InsecureSkipVerify: false,
}
type TraversalOptions ¶ added in v0.0.7
type TraversalOptions struct {
StartID string `json:"start_id"` // Starting document ID
Depth int `json:"depth"` // Traversal depth
RelationField string `json:"relation_field"` // Relationship field name
Direction string `json:"direction"` // "forward" or "reverse"
Filter map[string]interface{} `json:"filter,omitempty"` // Optional filters
}
TraversalOptions configures graph traversal operations for following relationships. Traversal allows navigation through document relationships like container → host → datacenter.
Configuration Options:
- StartID: Document ID to begin traversal from
- Depth: Maximum relationship hops to traverse
- RelationField: Document field containing relationship reference
- Direction: "forward" (follow refs) or "reverse" (find referring docs)
- Filter: Additional filters for traversed documents
Traversal Directions:
Forward: Follow relationship references in documents
- Start: Container document
- Field: "hostedOn"
- Result: Host documents referenced by containers

Reverse: Find documents that reference this one
- Start: Host document
- Field: "hostedOn"
- Result: Container documents that reference this host
Example Usage:
// Find all containers on a host and their networks
opts := TraversalOptions{
StartID: "host-123",
Depth: 2,
RelationField: "hostedOn",
Direction: "reverse",
Filter: map[string]interface{}{
"status": "running",
},
}
results, _ := service.Traverse(opts)
type UploadResult ¶ added in v0.0.6
UploadResult represents the outcome of a file upload operation to BaseX. It provides detailed information about the upload status, including success state, status code, message, and the remote path where the file was stored.
func BaseXUploadBinaryFile ¶ added in v0.0.6
func BaseXUploadBinaryFile(dbName, localFilePath, remotePath string) (*UploadResult, error)
BaseXUploadBinaryFile uploads a binary file to BaseX using base64 encoding. The file is encoded as base64 and stored using an XQuery command. This method is suitable for images, PDFs, archives, and other binary content.
The function requires the following environment variables:
- BASEX_URL: The BaseX server URL
- BASEX_USERNAME: Authentication username
- BASEX_PASSWORD: Authentication password
Parameters:
- dbName: The target database name
- localFilePath: Path to the local binary file
- remotePath: The path/name to use in the database
Returns:
- *UploadResult: Details about the upload operation
- error: Any error encountered during binary upload
Example:
result, err := BaseXUploadBinaryFile("mydb", "/tmp/image.jpg", "images/photo.jpg")
func BaseXUploadFile ¶ added in v0.0.6
func BaseXUploadFile(dbName, localFilePath, remotePath string) (*UploadResult, error)
BaseXUploadFile uploads a file to a BaseX database using the REST API. The content type is automatically determined based on the file extension.
The function requires the following environment variables:
- BASEX_URL: The BaseX server URL
- BASEX_USERNAME: Authentication username
- BASEX_PASSWORD: Authentication password
Parameters:
- dbName: The target database name
- localFilePath: Path to the local file to upload
- remotePath: The path/name to use in the database
Returns:
- *UploadResult: Details about the upload operation
- error: Any error encountered during file upload
Example:
result, err := BaseXUploadFile("mydb", "/tmp/data.json", "data.json")
if err != nil {
log.Fatal(err)
}
if result.Success {
fmt.Println("Upload successful:", result.RemotePath)
}
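Extension-based content-type detection, as described, is available in Go's standard library. Whether BaseXUploadFile uses mime.TypeByExtension internally is an assumption:

```go
package main

import (
	"fmt"
	"mime"
	"path/filepath"
)

// contentTypeFor guesses a Content-Type from the file extension, with
// a generic binary fallback — a sketch of the detection described
// above, not the library's verified implementation.
func contentTypeFor(path string) string {
	if ct := mime.TypeByExtension(filepath.Ext(path)); ct != "" {
		return ct
	}
	return "application/octet-stream"
}

func main() {
	fmt.Println(contentTypeFor("/tmp/data.json"))
	fmt.Println(contentTypeFor("/tmp/unknown.blob"))
}
```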
func BaseXUploadFileAuto ¶ added in v0.0.6
func BaseXUploadFileAuto(dbName, localFilePath, remotePath string) (*UploadResult, error)
BaseXUploadFileAuto automatically detects the file type and uploads using the appropriate method. XML files are validated and uploaded via BaseXUploadXMLFile, binary files (images, PDFs, archives) use base64 encoding via BaseXUploadBinaryFile, and other files use the standard REST upload via BaseXUploadFile.
The function requires the following environment variables:
- BASEX_URL: The BaseX server URL
- BASEX_USERNAME: Authentication username
- BASEX_PASSWORD: Authentication password
Parameters:
- dbName: The target database name
- localFilePath: Path to the local file
- remotePath: The path/name to use in the database
Returns:
- *UploadResult: Details about the upload operation
- error: Any error encountered during upload
Example:
result, err := BaseXUploadFileAuto("mydb", "/tmp/document.xml", "docs/doc1.xml")
// Automatically uses XML validation and upload
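The routing rule described above (XML → validated upload, known binary types → base64 upload, everything else → REST upload) can be sketched as a pure decision function. The binary-extension list here is illustrative, not the library's actual set:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// uploadMethodFor mirrors the dispatch described for BaseXUploadFileAuto.
// The extension list is an assumption for illustration only.
func uploadMethodFor(path string) string {
	switch strings.ToLower(filepath.Ext(path)) {
	case ".xml":
		return "BaseXUploadXMLFile"
	case ".jpg", ".jpeg", ".png", ".gif", ".pdf", ".zip", ".tar", ".gz":
		return "BaseXUploadBinaryFile"
	default:
		return "BaseXUploadFile"
	}
}

func main() {
	fmt.Println(uploadMethodFor("/tmp/document.xml"))
	fmt.Println(uploadMethodFor("/tmp/photo.jpg"))
	fmt.Println(uploadMethodFor("/tmp/data.json"))
}
```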
func BaseXUploadToFilesystem ¶ added in v0.0.6
func BaseXUploadToFilesystem(localFilePath, remotePath string) (*UploadResult, error)
BaseXUploadToFilesystem uploads a file to the BaseX server's filesystem using a multipart form upload to a custom upload endpoint. This is different from database storage and places files directly on the server filesystem.
The function requires the following environment variables:
- BASEX_URL: The BaseX server URL
- BASEX_USERNAME: Authentication username
- BASEX_PASSWORD: Authentication password
Parameters:
- localFilePath: Path to the local file to upload
- remotePath: The target filesystem path on the server (sent via X-Target-Path header)
Returns:
- *UploadResult: Details about the upload operation
- error: Any error encountered during filesystem upload
Example:
result, err := BaseXUploadToFilesystem("/tmp/backup.zip", "/server/backups/backup.zip")
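Building the multipart request with the X-Target-Path header, as described, might look like the following. The "/upload" endpoint path, the "file" part name, and the credentials are assumptions for illustration:

```go
package main

import (
	"bytes"
	"fmt"
	"mime/multipart"
	"net/http"
)

// buildUploadRequest sketches the multipart POST described above.
// Endpoint, part name, and credentials are illustrative assumptions.
func buildUploadRequest(fileName, targetPath string, content []byte) (*http.Request, error) {
	var body bytes.Buffer
	w := multipart.NewWriter(&body)
	part, err := w.CreateFormFile("file", fileName)
	if err != nil {
		return nil, err
	}
	if _, err := part.Write(content); err != nil {
		return nil, err
	}
	if err := w.Close(); err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", "http://localhost:8080/upload", &body)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", w.FormDataContentType())
	// Per the docs, the target filesystem path travels in a custom header.
	req.Header.Set("X-Target-Path", targetPath)
	req.SetBasicAuth("admin", "admin")
	return req, nil
}

func main() {
	req, err := buildUploadRequest("backup.zip", "/server/backups/backup.zip", []byte("zipdata"))
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Header.Get("X-Target-Path"))
}
```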
func BaseXUploadXMLFile ¶ added in v0.0.6
func BaseXUploadXMLFile(dbName, localFilePath, remotePath string) (*UploadResult, error)
BaseXUploadXMLFile uploads and validates an XML file to a BaseX database. The XML content is validated before upload to ensure proper structure. This function uses BaseXSaveDocument internally for XML-specific handling.
The function requires the following environment variables:
- BASEX_URL: The BaseX server URL
- BASEX_USERNAME: Authentication username
- BASEX_PASSWORD: Authentication password
Parameters:
- dbName: The target database name
- localFilePath: Path to the local XML file
- remotePath: The path/name to use in the database (should include .xml extension)
Returns:
- *UploadResult: Details about the upload operation
- error: Any error encountered (including XML validation errors)
Example:
result, err := BaseXUploadXMLFile("mydb", "/tmp/book.xml", "books/book1.xml")
if err != nil {
log.Fatal(err)
}
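The validation step (checking well-formedness before upload) can be done with encoding/xml by streaming the document through a decoder; whether the library performs exactly this check is an assumption:

```go
package main

import (
	"encoding/xml"
	"fmt"
	"io"
	"strings"
)

// isWellFormedXML reports whether every token in the document parses —
// a sketch of the pre-upload validation described above.
func isWellFormedXML(doc string) bool {
	dec := xml.NewDecoder(strings.NewReader(doc))
	for {
		_, err := dec.Token()
		if err == io.EOF {
			return true
		}
		if err != nil {
			return false
		}
	}
}

func main() {
	fmt.Println(isWellFormedXML("<book><title>Go</title></book>")) // true
	fmt.Println(isWellFormedXML("<book><title>Go</book>"))         // false: mismatched close tag
}
```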
type Variable ¶ added in v0.0.6
type Variable struct {
Name string `xml:"name,attr"`
}
Variable represents a SPARQL query variable declaration in XML format.
type View ¶ added in v0.0.7
type View struct {
Name string `json:"-"` // View name (not in JSON)
Map string `json:"map"` // JavaScript map function
Reduce string `json:"reduce,omitempty"` // JavaScript reduce function (optional)
}
View represents a CouchDB MapReduce view definition. Views enable efficient querying and aggregation of documents through JavaScript map and reduce functions.
View Components:
- Name: View identifier within the design document
- Map: JavaScript function that emits key/value pairs
- Reduce: Optional JavaScript function for aggregation
Map Function:
The map function processes each document and emits key/value pairs:
function(doc) {
if (doc.type === 'container') {
emit(doc.hostId, doc);
}
}
Reduce Function:
The reduce function aggregates values for each key:
function(keys, values, rereduce) {
return sum(values);
}
Built-in Reduce Functions:
- _count: Count number of values
- _sum: Sum numeric values
- _stats: Calculate statistics (sum, count, min, max, etc.)
Example Usage:
view := View{
Name: "containers_by_host",
Map: `function(doc) {
if (doc['@type'] === 'SoftwareApplication' && doc.hostedOn) {
emit(doc.hostedOn, {name: doc.name, status: doc.status});
}
}`,
Reduce: "_count",
}
type ViewOptions ¶ added in v0.0.7
type ViewOptions struct {
Key interface{} // Exact key to query
StartKey interface{} // Range query start key (inclusive)
EndKey interface{} // Range query end key (inclusive)
IncludeDocs bool // Include full documents in results
Limit int // Maximum results to return
Skip int // Number of results to skip
Descending bool // Reverse sort order
Group bool // Group results by key
GroupLevel int // Group by key array prefix level
Reduce bool // Execute reduce function
}
ViewOptions configures parameters for querying CouchDB MapReduce views. This structure provides comprehensive control over view query behavior including key ranges, document inclusion, pagination, sorting, and reduce function usage.
Query Parameters:
- Key: Exact key match for view results
- StartKey: Starting key for range queries (inclusive)
- EndKey: Ending key for range queries (inclusive)
- IncludeDocs: Include full document content with view results
- Limit: Maximum number of results to return
- Skip: Number of results to skip for pagination
- Descending: Reverse result order
- Group: Group results by key when using reduce
- GroupLevel: Group by key array prefix (for array keys)
- Reduce: Execute reduce function (if defined in view)
Example Usage:
// Query containers by host with full documents
opts := ViewOptions{
Key: "host-123",
IncludeDocs: true,
Limit: 50,
}
results, _ := service.QueryView("graphium", "containers_by_host", opts)
// Range query with pagination
opts = ViewOptions{
StartKey: "2024-01-01",
EndKey: "2024-12-31",
Skip: 100,
Limit: 50,
Descending: false,
}
type ViewResult ¶ added in v0.0.7
type ViewResult struct {
TotalRows int `json:"total_rows"` // Total rows in view
Offset int `json:"offset"` // Starting offset
Rows []ViewRow `json:"rows"` // Result rows
}
ViewResult contains the results from a CouchDB view query. This structure provides metadata about the query results along with the actual row data returned from the view.
Result Fields:
- TotalRows: Total number of rows in the view (before limit/skip)
- Offset: Starting offset for the returned results
- Rows: Array of view rows containing key/value/document data
Example Usage:
result, _ := service.QueryView("graphium", "containers_by_host", opts)
fmt.Printf("Found %d total rows, showing %d\n", result.TotalRows, len(result.Rows))
for _, row := range result.Rows {
fmt.Printf("Key: %v, Value: %v\n", row.Key, row.Value)
if opts.IncludeDocs {
var container Container
json.Unmarshal(row.Doc, &container)
}
}
type ViewRow ¶ added in v0.0.7
type ViewRow struct {
ID string `json:"id"` // Document ID
Key interface{} `json:"key"` // Emitted key
Value interface{} `json:"value"` // Emitted value
Doc json.RawMessage `json:"doc,omitempty"` // Full document (if IncludeDocs=true)
}
ViewRow represents a single row from a CouchDB view query result. Each row contains the document ID, the emitted key and value from the map function, and optionally the full document content.
Row Fields:
- ID: Document identifier that emitted this row
- Key: Key emitted by the map function
- Value: Value emitted by the map function
- Doc: Full document content (if IncludeDocs=true)
Map Function Behavior:
In a MapReduce view, the map function calls emit(key, value):
- Key: Determines sort order and enables range queries
- Value: Can be the document, a field, or a computed value
- Multiple emit() calls create multiple rows per document
Example Usage:
for _, row := range result.Rows {
fmt.Printf("Document %s has key %v\n", row.ID, row.Key)
if row.Doc != nil {
var doc map[string]interface{}
json.Unmarshal(row.Doc, &doc)
fmt.Printf("Document content: %+v\n", doc)
}
}
Source Files
¶
Directories
¶
| Path | Synopsis |
|---|---|
| repository | Package repository provides a unified interface for multi-database storage patterns. |