Documentation
Index ¶
- func GenPermCheckObjectKey() string
- func GetHTTPRange(startOffset, endOffset int64) (full bool, rangeVal string)
- type BucketPrefix
- type Copier
- type CopySpec
- type Options
- type Permission
- type Prefix
- type ReadSeekCloser
- type ReaderOption
- type Storage
- type StrongConsistency
- type Uploader
- type WalkOption
- type WriterOption
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func GenPermCheckObjectKey ¶
func GenPermCheckObjectKey() string
GenPermCheckObjectKey generates a unique object key for permission checking.
func GetHTTPRange ¶
GetHTTPRange returns the HTTP Range header value for the given start and end offsets. If endOffset is not 0, startOffset must be <= endOffset; the validity is not checked here. If startOffset == 0 and endOffset == 0, `full` is true and `rangeVal` is empty. Otherwise a partial object is requested: `full` is false and `rangeVal` contains the Range header value.
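The documented behavior can be sketched as follows. This is a minimal illustration, not the package's actual implementation; in particular, treating endOffset as exclusive (hence the `-1` when building the inclusive HTTP range end) is an assumption based on ReaderOption's exclusive EndOffset.

```go
package main

import "fmt"

// getHTTPRange is a sketch of the documented behavior: both offsets zero
// means the full object; otherwise a Range header value is built.
func getHTTPRange(startOffset, endOffset int64) (full bool, rangeVal string) {
	if startOffset == 0 && endOffset == 0 {
		// Both offsets zero: the caller wants the whole object.
		return true, ""
	}
	if endOffset > 0 {
		// Assume endOffset is exclusive; the HTTP Range header end is
		// inclusive, hence the -1.
		return false, fmt.Sprintf("bytes=%d-%d", startOffset, endOffset-1)
	}
	// Only a start offset: read from startOffset to the end of the object.
	return false, fmt.Sprintf("bytes=%d-", startOffset)
}

func main() {
	full, r := getHTTPRange(0, 0)
	fmt.Printf("full=%v range=%q\n", full, r) // full=true range=""
	full, r = getHTTPRange(100, 200)
	fmt.Printf("full=%v range=%q\n", full, r) // full=false range="bytes=100-199"
}
```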
Types ¶
type BucketPrefix ¶
BucketPrefix represents a prefix in a bucket.
func NewBucketPrefix ¶
func NewBucketPrefix(bucket, prefix string) BucketPrefix
NewBucketPrefix returns a new BucketPrefix instance.
func (*BucketPrefix) ObjectKey ¶
func (bp *BucketPrefix) ObjectKey(name string) string
ObjectKey returns the object key by joining the name to the Prefix.
func (*BucketPrefix) PrefixStr ¶
func (bp *BucketPrefix) PrefixStr() string
PrefixStr returns the Prefix as a string.
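A hypothetical stand-in for BucketPrefix can illustrate how ObjectKey joins a name onto a '/'-terminated prefix. The trailing-slash normalization in the constructor is an assumption for this sketch, not necessarily what NewBucketPrefix does.

```go
package main

import (
	"fmt"
	"strings"
)

// bucketPrefix is a toy stand-in for BucketPrefix: a bucket name plus a
// key prefix that, when non-empty, ends with '/'.
type bucketPrefix struct {
	bucket string
	prefix string
}

// newBucketPrefix normalizes the prefix so the '/'-terminated invariant
// holds (an assumption for this sketch).
func newBucketPrefix(bucket, prefix string) bucketPrefix {
	if prefix != "" && !strings.HasSuffix(prefix, "/") {
		prefix += "/"
	}
	return bucketPrefix{bucket: bucket, prefix: prefix}
}

// ObjectKey returns the full object key for name under the prefix.
func (bp *bucketPrefix) ObjectKey(name string) string {
	return bp.prefix + name
}

// PrefixStr returns the prefix as a plain string.
func (bp *bucketPrefix) PrefixStr() string { return bp.prefix }

func main() {
	bp := newBucketPrefix("backups", "cluster-1/2024")
	fmt.Println(bp.ObjectKey("meta.json")) // cluster-1/2024/meta.json
}
```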
type Copier ¶
type Copier interface {
// CopyFrom copies an object to the current external storage according to the specification.
CopyFrom(ctx context.Context, e Storage, spec CopySpec) error
}
Copier copies objects from another external storage into the current one.
type Options ¶
type Options struct {
// SendCredentials marks whether to send credentials downstream.
//
// This field should be set to false if the credentials are provided to
// downstream via external key managers, e.g. on K8s or cloud provider.
SendCredentials bool
// NoCredentials means that no cloud credentials are supplied to BR
NoCredentials bool
// HTTPClient to use. The created storage may ignore this field if it is not
// directly using HTTP (e.g. the local storage).
// NOTICE: the HTTPClient is only used by s3/azure/gcs.
// For GCS, we will use this as base client to init a client with credentials.
HTTPClient *http.Client
// CheckPermissions lists the permissions to check in the New() function,
// to make sure we can access the storage correctly before executing tasks.
CheckPermissions []Permission
// S3Retryer is the retryer used when creating the S3 storage; if it is nil,
// defaultS3Retryer() will be used.
S3Retryer aws.Retryer
// CheckS3ObjectLockOptions checks whether the S3 bucket has ObjectLock enabled;
// if enabled, the options will be sent to TiKV.
CheckS3ObjectLockOptions bool
// AccessRecording records the access statistics of object storage.
// We use the read/write file size as an estimate of the network traffic;
// traffic consumed by the network protocol itself and traffic caused by
// retries is not counted.
AccessRecording *recording.AccessStats
}
Options are backend-independent options provided to New.
type Permission ¶
type Permission string
Permission represents a permission we need to check when creating a storage.
const (
// AccessBuckets represents bucket access permission.
// It replaces the original skip-check-path.
AccessBuckets Permission = "AccessBucket"
// ListObjects represents ListObjects permission.
ListObjects Permission = "ListObjects"
// GetObject represents GetObject permission.
GetObject Permission = "GetObject"
// PutObject represents PutObject permission.
PutObject Permission = "PutObject"
// PutAndDeleteObject represents PutAndDeleteObject permission.
// We cannot check DeleteObject permission alone, so we use PutAndDeleteObject instead.
PutAndDeleteObject Permission = "PutAndDeleteObject"
)
type Prefix ¶
type Prefix string
Prefix is like a folder. Even when non-empty, we still call it a prefix to match S3 terminology. If non-empty, it cannot start with '/' and must end with '/', such as 'a/b/'. The folder name must be valid; we don't check it here.
func (Prefix) JoinStr ¶
JoinStr returns a new Prefix by joining the given string to the current Prefix.
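A sketch of JoinStr, assuming it appends the given string and re-establishes the trailing-'/' invariant described above; the exact joining rule is an assumption, not the package's actual implementation.

```go
package main

import "fmt"

// prefix mirrors the exported Prefix type for illustration.
type prefix string

// JoinStr appends s to the prefix and keeps the invariant that a
// non-empty prefix ends with '/'.
func (p prefix) JoinStr(s string) prefix {
	joined := string(p) + s
	if joined != "" && joined[len(joined)-1] != '/' {
		joined += "/"
	}
	return prefix(joined)
}

func main() {
	var p prefix = "a/b/"
	fmt.Println(p.JoinStr("c")) // a/b/c/
}
```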
type ReadSeekCloser ¶
ReadSeekCloser is the interface that groups the basic Read, Seek and Close methods.
type ReaderOption ¶
type ReaderOption struct {
// StartOffset is inclusive.
StartOffset *int64
// EndOffset is exclusive.
EndOffset *int64
// PrefetchSize will switch to NewPrefetchReader if value is positive.
PrefetchSize int
}
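The pointer-typed offsets can be populated as below. This mirrors the struct for illustration only; the meaning of a nil offset (read from the beginning / to the end of the object) is an assumption based on the field comments.

```go
package main

import "fmt"

// readerOption mirrors the exported ReaderOption struct to show the
// pointer-offset pattern.
type readerOption struct {
	StartOffset  *int64 // inclusive; nil assumed to mean start of object
	EndOffset    *int64 // exclusive; nil assumed to mean end of object
	PrefetchSize int    // positive values switch to a prefetching reader
}

// int64Ptr is a small helper since Go cannot take the address of a literal.
func int64Ptr(v int64) *int64 { return &v }

func main() {
	// Request bytes [100, 200) with a 4 MiB prefetch buffer.
	opt := readerOption{
		StartOffset:  int64Ptr(100),
		EndOffset:    int64Ptr(200),
		PrefetchSize: 4 << 20,
	}
	fmt.Println(*opt.StartOffset, *opt.EndOffset, opt.PrefetchSize)
}
```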
ReaderOption configures how a reader is opened.
type Storage ¶
type Storage interface {
// WriteFile writes a complete file to storage, similar to os.WriteFile, but WriteFile should be atomic
WriteFile(ctx context.Context, name string, data []byte) error
// ReadFile reads a complete file from storage, similar to os.ReadFile
ReadFile(ctx context.Context, name string) ([]byte, error)
// FileExists returns true if the file exists
FileExists(ctx context.Context, name string) (bool, error)
// DeleteFile deletes the file in storage
DeleteFile(ctx context.Context, name string) error
// Open a Reader by file path. path is relative path to storage base path.
// Some implementations will use the given ctx as the inner context of the reader.
Open(ctx context.Context, path string, option *ReaderOption) (objectio.Reader, error)
// DeleteFiles deletes the files in storage
DeleteFiles(ctx context.Context, names []string) error
// WalkDir traverses all the files in a dir.
//
// fn is the function called for each regular file visited by WalkDir.
// The argument `path` is the file path that can be used in `Open`
// function; the argument `size` is the size in bytes of the file determined
// by path.
WalkDir(ctx context.Context, opt *WalkOption, fn func(path string, size int64) error) error
// URI returns the base path as a URI
URI() string
// Create opens a file writer by path. path is a relative path to the storage
// base path. An existing file at the same path will be overwritten. Currently
// only S3 implements WriterOption.
Create(ctx context.Context, path string, option *WriterOption) (objectio.Writer, error)
// Rename renames the file from oldFileName to newFileName
Rename(ctx context.Context, oldFileName, newFileName string) error
// PresignFile creates a presigned URL for sharing a file without writing any code.
// For S3, it returns a presigned URL. For local storage, it returns the file name only.
// Unsupported backends (Azure, HDFS, etc.) return an error.
// See https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html
PresignFile(ctx context.Context, fileName string, expire time.Duration) (string, error)
// Close releases the resources of the storage.
Close()
}
Storage represents a kind of file system storage.
type StrongConsistency ¶
type StrongConsistency interface {
MarkStrongConsistency()
}
StrongConsistency is a marker interface that indicates the storage is strongly consistent over its `Read`, `Write` and `WalkDir` APIs.
type Uploader ¶
type Uploader interface {
// UploadPart uploads a part of the file data to storage
UploadPart(ctx context.Context, data []byte) error
// CompleteUpload assembles the uploaded parts into a complete file
CompleteUpload(ctx context.Context) error
}
Uploader uploads a file in chunks.
type WalkOption ¶
type WalkOption struct {
// walk on SubDir of base directory, i.e. if the base dir is '/path/to/base'
// then we're walking '/path/to/base/<SubDir>'
SubDir string
// whether subdirectories under the walk dir are skipped; only works for LOCAL storage now.
// Default is false, i.e. we walk recursively.
SkipSubDir bool
// ObjPrefix is used for prefix search in storage. Note that only some
// storages support it.
// It can save a lot of time when we want to find objects with a specific prefix.
// For example, if we have 10000 <Hash>.sst files and 10 backupmeta.(\d+) files,
// we can use ObjPrefix = "backupmeta" to retrieve all meta files quickly.
ObjPrefix string
// ListCount is the number of entries per page.
//
// In cloud storages such as S3 and GCS, the files are listed and sent in
// pages. Typically a page contains 1000 files, so if a folder has 3000
// descendant files, one would need 3 requests to retrieve all of them. This
// parameter controls the page size. Note that S3, GCS and OSS all limit the
// maximum to 1000.
//
// Typically, you want to leave this field unassigned (zero) to use the
// default value (1000) to minimize the number of requests, unless you want
// to reduce the possibility of timeout on an extremely slow connection, or
// perform testing.
ListCount int64
// IncludeTombstone will allow `Walk` to emit removed files during walking.
//
// In most cases, `Walk` runs over a snapshot; if a file in the snapshot
// was deleted during walking, the file is ignored. Setting this to `true`
// makes such files be sent to the callback as well.
//
// The size of a deleted file should be `TombstoneSize`.
IncludeTombstone bool
// StartAfter is the key to start after. If not empty, the walk will start
// after the key. Currently only S3-like storages support this option.
StartAfter string
}
WalkOption is the option of storage.WalkDir.
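How SubDir, ObjPrefix, and StartAfter narrow a listing can be emulated over a plain slice of keys. This is an illustration of the documented semantics under simplifying assumptions (in particular, ObjPrefix is matched against the object's base name here), not the package's implementation.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// walkKeys emulates how a WalkDir-style listing could be filtered.
func walkKeys(keys []string, subDir, objPrefix, startAfter string) []string {
	sort.Strings(keys) // object stores list keys in lexicographic order
	var out []string
	for _, k := range keys {
		if subDir != "" && !strings.HasPrefix(k, subDir+"/") {
			continue // outside the walked subdirectory
		}
		base := k[strings.LastIndex(k, "/")+1:]
		if objPrefix != "" && !strings.HasPrefix(base, objPrefix) {
			continue // object name does not match ObjPrefix
		}
		if startAfter != "" && k <= startAfter {
			continue // keys up to and including StartAfter are skipped
		}
		out = append(out, k)
	}
	return out
}

func main() {
	keys := []string{
		"base/backupmeta.1",
		"base/backupmeta.2",
		"base/0001.sst",
		"other/backupmeta.3",
	}
	fmt.Println(walkKeys(keys, "base", "backupmeta", ""))
	// [base/backupmeta.1 base/backupmeta.2]
}
```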
type WriterOption ¶
WriterOption writer option.