Documentation ¶
Overview ¶
Package simplecloud provides a unified interface for reading and writing objects across different storage backends, including the local filesystem, HTTP, Backblaze B2, Google Cloud Storage, and Amazon S3.
All backends implement the Reader and/or Writer interfaces, which wrap the underlying SDK into a simple NewReader/NewWriter model. Transparent compression and decompression based on file extension are available via InitReader and InitWriter.
Index ¶
- func Copy(ctx context.Context, src Reader, dst Writer, srcPath, dstPath string) (int64, error)
- func InitReader(ctx context.Context, bucket Reader, path string) (io.ReadCloser, error)
- func InitWriter(ctx context.Context, bucket Writer, path string) (io.WriteCloser, error)
- type B2Bucket
- type FileBucket
- type GCSBucket
- type HTTPBucket
- type MultiCloser
- type ReadWriter
- type Reader
- type S3Bucket
- type Writer
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func Copy ¶ added in v0.0.5
Copy reads from srcPath on src and writes to dstPath on dst, using InitReader and InitWriter so that compression and decompression are applied automatically based on the path extensions. This means formats can be transcoded in a single call — e.g. copying a .gz source to a .xz destination will decompress and recompress on the fly.
The returned count is the number of uncompressed bytes transferred between the reader and writer, not the number of bytes read from or written to storage.
func InitReader ¶
InitReader opens path from bucket for reading, wrapping the stream in a decompressor when the path extension is recognised:
- .gz — gzip
- .bz2 — bzip2
- .xz — xz/lzma
The path may be a full URL; only the path component is passed to the bucket. The caller must close the returned ReadCloser when done.
func InitWriter ¶
InitWriter opens path on bucket for writing, wrapping the stream in a compressor when the path extension is recognised:
- .gz — gzip
- .bz2 — bzip2
- .xz — xz/lzma
The path may be a full URL; only the path component is passed to the bucket. The caller must call Close on the returned WriteCloser when done; for cloud backends this is what commits the upload.
Types ¶
type B2Bucket ¶
type B2Bucket struct {
Bucket *b2.Bucket
// ConcurrentDownloads controls how many parallel range requests are used
// when downloading large objects. Zero uses the blazer library default.
ConcurrentDownloads int
}
B2Bucket implements Reader and Writer for a Backblaze B2 bucket.
func NewB2Client ¶
NewB2Client authenticates with Backblaze B2 using accessKey and secretKey, then opens the named bucket.
type FileBucket ¶
type FileBucket struct{}
FileBucket implements Reader and Writer against the local filesystem.
func (*FileBucket) NewReader ¶
func (f *FileBucket) NewReader(ctx context.Context, path string) (io.ReadCloser, error)
NewReader opens the file at path for reading.
func (*FileBucket) NewWriter ¶ added in v0.0.2
func (f *FileBucket) NewWriter(ctx context.Context, path string) (io.WriteCloser, error)
NewWriter creates or truncates the file at path for writing. Any missing parent directories are created automatically.
type GCSBucket ¶
type GCSBucket struct {
Bucket *storage.BucketHandle
}
GCSBucket implements Reader and Writer for a Google Cloud Storage bucket.
func NewGCSClient ¶
NewGCSClient creates a GCS client and opens the named bucket. If serviceAccountFile is non-empty it is used for authentication; otherwise Application Default Credentials are used, which works automatically in GKE, Cloud Run, and locally via `gcloud auth application-default login`. The underlying storage.Client is not exposed; callers that need to close it should construct one directly.
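A sketch only: the parameter order of NewGCSClient below is an assumption (the signature is not shown above), and the import path is a placeholder. With an empty serviceAccountFile, Application Default Credentials are used:

```go
package main

import (
	"context"
	"io"
	"log"
	"os"

	sc "example.com/simplecloud" // placeholder import path
)

func main() {
	ctx := context.Background()

	// Assumed signature: NewGCSClient(ctx, serviceAccountFile, bucketName).
	// "" → fall back to Application Default Credentials.
	bucket, err := sc.NewGCSClient(ctx, "", "my-bucket")
	if err != nil {
		log.Fatal(err)
	}

	// Decompresses transparently because of the .gz extension.
	r, err := sc.InitReader(ctx, bucket, "logs/2024-01-01.json.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()
	io.Copy(os.Stdout, r)
}
```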
type HTTPBucket ¶
HTTPBucket implements Reader for HTTP and HTTPS sources. It does not support writes; use a different backend for upload destinations.
func NewHTTPBucket ¶
func NewHTTPBucket(client *http.Client, path string) (*HTTPBucket, error)
NewHTTPBucket constructs an HTTPBucket with the given base URL. The scheme, host, and any credentials are reused for every request; the path component is replaced per call to NewReader. If client is nil, http.DefaultClient is used.
func (*HTTPBucket) NewReader ¶
func (h *HTTPBucket) NewReader(ctx context.Context, path string) (io.ReadCloser, error)
NewReader issues a GET request for path under the bucket's base URL and returns the response body. Non-2xx responses are returned as an error with the URL redacted. The caller must close the returned ReadCloser when done.
type MultiCloser ¶
MultiCloser composes an io.Reader or io.Writer with multiple Closers that must all be closed in order. It is used internally by InitReader and InitWriter to close both the compression layer and the underlying storage stream in the correct sequence.
func (*MultiCloser) Close ¶
func (m *MultiCloser) Close() error
type ReadWriter ¶
ReadWriter is implemented by backends that support both reads and writes.
type Reader ¶
type Reader interface {
// NewReader opens the object at path for reading. The caller must close
// the returned ReadCloser when done.
NewReader(context.Context, string) (io.ReadCloser, error)
}
Reader is implemented by any storage backend that supports object reads.
type S3Bucket ¶ added in v0.0.6
type S3Bucket struct {
Bucket string
// contains filtered or unexported fields
}
S3Bucket implements Reader and Writer for an Amazon S3 bucket (or any S3-compatible object store).
func NewS3Client ¶ added in v0.0.6
func NewS3Client(ctx context.Context, accessKey, secretKey, bucketName, endpoint, region string) (*S3Bucket, error)
NewS3Client creates an S3 client for the named bucket. accessKey and secretKey are optional; if both are empty, the default AWS credential chain is used. endpoint may be set to target S3-compatible stores (e.g. Cloudflare R2, MinIO); path-style addressing is enabled automatically when an endpoint is provided. region defaults to "auto" if empty.
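A sketch targeting a local MinIO server under its conventional default credentials (the import path is a placeholder; swap in real values for your store):

```go
package main

import (
	"context"
	"io"
	"log"

	sc "example.com/simplecloud" // placeholder import path
)

func main() {
	ctx := context.Background()

	bucket, err := sc.NewS3Client(ctx,
		"minioadmin", "minioadmin", // access key, secret key
		"my-bucket",
		"http://localhost:9000", // custom endpoint → path-style addressing
		"",                      // region defaults to "auto"
	)
	if err != nil {
		log.Fatal(err)
	}

	// .gz extension → InitWriter compresses before the streaming upload.
	w, err := sc.InitWriter(ctx, bucket, "backups/db.sql.gz")
	if err != nil {
		log.Fatal(err)
	}
	io.WriteString(w, "-- dump contents --")
	if err := w.Close(); err != nil { // Close commits the upload
		log.Fatal(err)
	}
}
```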
func (*S3Bucket) NewReader ¶ added in v0.0.6
NewReader opens the object at path in the bucket for reading. A leading slash in path is stripped before the request is made.
func (*S3Bucket) NewWriter ¶ added in v0.0.6
NewWriter opens the object at path in the bucket for writing using a background goroutine and an io.Pipe so that data is streamed to S3 without buffering the entire payload in memory. A leading slash in path is stripped. The caller must call Close when done; Close blocks until the upload completes and returns any upload error.
type Writer ¶
type Writer interface {
// NewWriter opens the object at path for writing. The caller must call
// Close when done; for cloud backends, Close is what commits the upload.
NewWriter(context.Context, string) (io.WriteCloser, error)
}
Writer is implemented by any storage backend that supports object writes.