simplecloud

package module
v0.0.8
Published: Mar 8, 2026 License: MIT Imports: 21 Imported by: 2

README

simplecloud

A tiny Go package for reading and writing objects across different storage backends with a unified interface.

Installation

go get github.com/mtgban/simplecloud

Supported Backends

Backend               Read  Write  Constructor
Local filesystem      ✓     ✓      &FileBucket{}
HTTP/HTTPS            ✓            NewHTTPBucket(client, baseURL)
Backblaze B2          ✓     ✓      NewB2Client(ctx, accessKey, secretKey, bucket)
Google Cloud Storage  ✓     ✓      NewGCSClient(ctx, serviceAccountFile, bucket)
Amazon S3             ✓     ✓      NewS3Client(ctx, accessKey, secretKey, bucket, endpoint, region)

Usage

All backends implement the same interface:

type Reader interface {
    NewReader(context.Context, string) (io.ReadCloser, error)
}

type Writer interface {
    NewWriter(context.Context, string) (io.WriteCloser, error)
}
Reading from GCS
bucket, err := simplecloud.NewGCSClient(ctx, "service-account.json", "my-bucket")
if err != nil {
    log.Fatal(err)
}

reader, err := bucket.NewReader(ctx, "path/to/file.txt")
if err != nil {
    log.Fatal(err)
}
defer reader.Close()

data, err := io.ReadAll(reader)
if err != nil {
    log.Fatal(err)
}
Writing to B2
bucket, err := simplecloud.NewB2Client(ctx, accessKey, secretKey, "my-bucket")
if err != nil {
    log.Fatal(err)
}

writer, err := bucket.NewWriter(ctx, "path/to/file.txt")
if err != nil {
    log.Fatal(err)
}

_, err = writer.Write([]byte("hello world"))
if err != nil {
    writer.Close()
    log.Fatal(err)
}

if err := writer.Close(); err != nil {
    log.Fatal(err)  // important: Close() flushes to cloud storage
}

Transparent Compression

Use InitReader and InitWriter to automatically handle compressed files based on extension:

Extension Compression
.gz gzip
.bz2 bzip2
.xz xz/lzma
// Automatically decompresses .gz file
reader, err := simplecloud.InitReader(ctx, bucket, "data.json.gz")
if err != nil {
    log.Fatal(err)
}
defer reader.Close()
// reader yields decompressed data

// Automatically compresses to .xz
writer, err := simplecloud.InitWriter(ctx, bucket, "output.json.xz")
if err != nil {
    log.Fatal(err)
}
// writes are compressed before storage
if err := writer.Close(); err != nil {
    log.Fatal(err)  // Close flushes the compressor and commits the upload
}

Copying Between Backends

Copy files between any backends, with automatic compression/decompression:

src, _ := simplecloud.NewGCSClient(ctx, "sa.json", "source-bucket")
dst, _ := simplecloud.NewB2Client(ctx, key, secret, "dest-bucket")

// Copy and transcode: decompress gzip, recompress as xz
n, err := simplecloud.Copy(ctx, src, dst, "input.json.gz", "output.json.xz")
if err != nil {
    log.Fatal(err)
}
fmt.Printf("copied %d bytes\n", n)

Limitations

This is a lightweight helper, and some operations are not covered:

  • No List or Delete API
  • No retry logic or exponential backoff
  • No ACL or permission management
  • No multipart upload configuration
  • Context cancellation doesn't interrupt local file operations
  • Cloud clients aren't exposed for cleanup (create short-lived or manage externally)

For advanced use cases, use the underlying SDKs directly.

License

MIT

Documentation

Overview

Package simplecloud provides a unified interface for reading and writing objects across different storage backends, including the local filesystem, HTTP, Backblaze B2, Google Cloud Storage, and Amazon S3.

All backends implement the Reader and/or Writer interfaces, which wrap the underlying SDK into a simple NewReader/NewWriter model. Transparent compression and decompression based on file extension is available via InitReader and InitWriter.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func Copy added in v0.0.5

func Copy(ctx context.Context, src Reader, dst Writer, srcPath, dstPath string) (int64, error)

Copy reads from srcPath on src and writes to dstPath on dst, using InitReader and InitWriter so that compression and decompression are applied automatically based on the path extensions. This means formats can be transcoded in a single call — e.g. copying a .gz source to a .xz destination will decompress and recompress on the fly.

The returned count is the number of uncompressed bytes transferred between the reader and writer, not the number of bytes read from or written to storage.

func InitReader

func InitReader(ctx context.Context, bucket Reader, path string) (io.ReadCloser, error)

InitReader opens path from bucket for reading, wrapping the stream in a decompressor when the path extension is recognised:

  • .gz — gzip
  • .bz2 — bzip2
  • .xz — xz/lzma

The path may be a full URL; only the path component is passed to the bucket. The caller must close the returned ReadCloser when done.

func InitWriter

func InitWriter(ctx context.Context, bucket Writer, path string) (io.WriteCloser, error)

InitWriter opens path on bucket for writing, wrapping the stream in a compressor when the path extension is recognised:

  • .gz — gzip
  • .bz2 — bzip2
  • .xz — xz/lzma

The path may be a full URL; only the path component is passed to the bucket. The caller must call Close on the returned WriteCloser when done; for cloud backends this is what commits the upload.

Types

type B2Bucket

type B2Bucket struct {
	Bucket *b2.Bucket

	// ConcurrentDownloads controls how many parallel range requests are used
	// when downloading large objects. Zero uses the blazer library default.
	ConcurrentDownloads int
}

B2Bucket implements Reader and Writer for a Backblaze B2 bucket.

func NewB2Client

func NewB2Client(ctx context.Context, accessKey, secretKey, bucketName string) (*B2Bucket, error)

NewB2Client authenticates with Backblaze B2 using accessKey and secretKey, then opens the named bucket.

func (*B2Bucket) NewReader

func (b *B2Bucket) NewReader(ctx context.Context, path string) (io.ReadCloser, error)

NewReader opens the object at path in the bucket for reading. A leading slash in path is stripped before the request is made.

func (*B2Bucket) NewWriter

func (b *B2Bucket) NewWriter(ctx context.Context, path string) (io.WriteCloser, error)

NewWriter opens the object at path in the bucket for writing. A leading slash in path is stripped. The caller must call Close when done; Close finalises the upload to B2.

type FileBucket

type FileBucket struct{}

FileBucket implements Reader and Writer against the local filesystem.

func (*FileBucket) NewReader

func (f *FileBucket) NewReader(ctx context.Context, path string) (io.ReadCloser, error)

NewReader opens the file at path for reading.

func (*FileBucket) NewWriter added in v0.0.2

func (f *FileBucket) NewWriter(ctx context.Context, path string) (io.WriteCloser, error)

NewWriter creates or truncates the file at path for writing. Any missing parent directories are created automatically.

type GCSBucket

type GCSBucket struct {
	Bucket *storage.BucketHandle
}

GCSBucket implements Reader and Writer for a Google Cloud Storage bucket.

func NewGCSClient

func NewGCSClient(ctx context.Context, serviceAccountFile, bucketName string) (*GCSBucket, error)

NewGCSClient creates a GCS client and opens the named bucket. If serviceAccountFile is non-empty it is used for authentication; otherwise Application Default Credentials are used, which works automatically in GKE, Cloud Run, and locally via `gcloud auth application-default login`. The underlying storage.Client is not exposed; callers that need to close it should construct one directly.

func (*GCSBucket) NewReader

func (g *GCSBucket) NewReader(ctx context.Context, path string) (io.ReadCloser, error)

NewReader opens the object at path in the bucket for reading.

func (*GCSBucket) NewWriter

func (g *GCSBucket) NewWriter(ctx context.Context, path string) (io.WriteCloser, error)

NewWriter opens the object at path in the bucket for writing. The caller must call Close when done; Close is what commits the object to GCS.

type HTTPBucket

type HTTPBucket struct {
	Client *http.Client
	URL    *url.URL
}

HTTPBucket implements Reader for HTTP and HTTPS sources. It does not support writes; use a different backend for upload destinations.

func NewHTTPBucket

func NewHTTPBucket(client *http.Client, path string) (*HTTPBucket, error)

NewHTTPBucket constructs an HTTPBucket with the given base URL. The scheme, host, and any credentials are reused for every request; the path component is replaced per call to NewReader. If client is nil, http.DefaultClient is used.

func (*HTTPBucket) NewReader

func (h *HTTPBucket) NewReader(ctx context.Context, path string) (io.ReadCloser, error)

NewReader issues a GET request for path under the bucket's base URL and returns the response body. Non-2xx responses are returned as an error with the URL redacted. The caller must close the returned ReadCloser when done.

type MultiCloser

type MultiCloser struct {
	io.Reader
	io.Writer
	// contains filtered or unexported fields
}

MultiCloser composes an io.Reader or io.Writer with multiple Closers that must all be closed in order. It is used internally by InitReader and InitWriter to close both the compression layer and the underlying storage stream in the correct sequence.

func (*MultiCloser) Close

func (m *MultiCloser) Close() error

Close closes each composed Closer in order, so the compression layer is flushed before the underlying storage stream is finalised.

type ReadWriter

type ReadWriter interface {
	Reader
	Writer
}

ReadWriter is implemented by backends that support both reads and writes.

type Reader

type Reader interface {
	// NewReader opens the object at path for reading. The caller must close
	// the returned ReadCloser when done.
	NewReader(context.Context, string) (io.ReadCloser, error)
}

Reader is implemented by any storage backend that supports object reads.

type S3Bucket added in v0.0.6

type S3Bucket struct {
	Bucket string
	// contains filtered or unexported fields
}

S3Bucket implements Reader and Writer for an Amazon S3 bucket (or any S3-compatible object store).

func NewS3Client added in v0.0.6

func NewS3Client(ctx context.Context, accessKey, secretKey, bucketName, endpoint, region string) (*S3Bucket, error)

NewS3Client creates an S3 client for the named bucket. accessKey and secretKey are optional; if both are empty, the default AWS credential chain is used. endpoint may be set to target S3-compatible stores (e.g. Cloudflare R2, MinIO); path-style addressing is enabled automatically when an endpoint is provided. region defaults to "auto" if empty.

func (*S3Bucket) NewReader added in v0.0.6

func (s *S3Bucket) NewReader(ctx context.Context, path string) (io.ReadCloser, error)

NewReader opens the object at path in the bucket for reading. A leading slash in path is stripped before the request is made.

func (*S3Bucket) NewWriter added in v0.0.6

func (s *S3Bucket) NewWriter(ctx context.Context, path string) (io.WriteCloser, error)

NewWriter opens the object at path in the bucket for writing using a background goroutine and an io.Pipe so that data is streamed to S3 without buffering the entire payload in memory. A leading slash in path is stripped. The caller must call Close when done; Close blocks until the upload completes and returns any upload error.

type Writer

type Writer interface {
	// NewWriter opens the object at path for writing. The caller must call
	// Close when done; for cloud backends, Close is what commits the upload.
	NewWriter(context.Context, string) (io.WriteCloser, error)
}

Writer is implemented by any storage backend that supports object writes.
