walg

package module
v0.1.14

Published: Nov 6, 2018 License: Apache-2.0 Imports: 50 Imported by: 0

README

WAL-G

WAL-G is an archival restoration tool for Postgres.

WAL-G is the successor of WAL-E with a number of key differences. WAL-G uses LZ4, LZMA, Zstd or Brotli compression, multiple processors and non-exclusive base backups for Postgres. More information on the design and implementation of WAL-G can be found on the Citus Data blog post "Introducing WAL-G by Citus: Faster Disaster Recovery for Postgres".

Installation

A precompiled binary for Linux AMD 64 of the latest version of WAL-G can be obtained under the Releases tab.

To decompress the binary, use:

tar -zxvf wal-g.linux-amd64.tar.gz

For systems not covered by the precompiled binary, please consult the Development section for more information.

Configuration

Required

To connect to Amazon S3, WAL-G requires that these variables be set:

  • WALG_S3_PREFIX (e.g. s3://bucket/path/to/folder) (alternative form WALE_S3_PREFIX)

WAL-G determines AWS credentials like other AWS tools. You can set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (optionally with AWS_SECURITY_TOKEN), use ~/.aws/credentials (optionally with AWS_PROFILE), or set nothing to automatically fetch credentials from the EC2 metadata service.

WAL-G uses the usual PostgreSQL environment variables to configure its connection, especially including PGHOST, PGPORT, PGUSER, and PGPASSWORD/PGPASSFILE/~/.pgpass.

PGHOST can point to a UNIX socket. This mode is preferred for localhost connections; set PGHOST=/var/run/postgresql to use it. WAL-G will connect over TCP if PGHOST is an IP address.

Optional

WAL-G can automatically determine the S3 bucket's region using s3:GetBucketLocation, but if you wish to avoid this API call, or it is forbidden by the applicable IAM policy, specify:

  • AWS_REGION (e.g. us-west-2)

Concurrency values can be configured using:

  • WALG_DOWNLOAD_CONCURRENCY

To configure how many goroutines to use during backup-fetch and wal-fetch, use WALG_DOWNLOAD_CONCURRENCY. By default, WAL-G uses the minimum of the number of files to extract and 10.

  • WALG_UPLOAD_CONCURRENCY

To configure how many concurrent streams to use during backup uploading, use WALG_UPLOAD_CONCURRENCY. By default, WAL-G uses 10 streams.

  • WALG_UPLOAD_DISK_CONCURRENCY

To configure how many concurrent streams read from disk during backup-push. By default, WAL-G uses 1 stream.

  • WALG_SENTINEL_USER_DATA

This setting allows backup automation tools to add extra information to the JSON sentinel file during backup-push. It can be used, for example, to give user-defined names to backups.

  • WALG_PREVENT_WAL_OVERWRITE

If this setting is specified, WAL-G will check for the existence of a WAL segment before uploading it during wal-push. If a different file is already archived under the same name, WAL-G will return a non-zero exit code to prevent PostgreSQL from removing the WAL segment.

  • AWS_ENDPOINT

Overrides the default hostname, e.g. to connect to an S3-compatible service: http://s3-like-service:9000

  • AWS_S3_FORCE_PATH_STYLE

Enables path-style addressing (i.e., http://s3.amazonaws.com/BUCKET/KEY) when connecting to an S3-compatible service that lacks support for sub-domain-style bucket URLs (i.e., http://BUCKET.s3.amazonaws.com/KEY). Defaults to false.

Example: Using Minio.io S3-compatible storage

AWS_ACCESS_KEY_ID: "<minio-key>"
AWS_SECRET_ACCESS_KEY: "<minio-secret>"
WALE_S3_PREFIX: "s3://my-minio-bucket/sub-dir"
AWS_ENDPOINT: "http://minio:9000"
AWS_S3_FORCE_PATH_STYLE: "true"
AWS_REGION: us-east-1

  • WALG_S3_STORAGE_CLASS

To configure the S3 storage class used for backup files, use WALG_S3_STORAGE_CLASS. By default, WAL-G uses the "STANDARD" storage class. Other supported values include "STANDARD_IA" for Infrequent Access and "REDUCED_REDUNDANCY" for Reduced Redundancy.

  • WALG_S3_SSE

To enable S3 server-side encryption, set to the algorithm to use when storing the objects in S3 (i.e., AES256, aws:kms).

  • WALG_S3_SSE_KMS_ID

If using S3 server-side encryption with aws:kms, the KMS Key ID to use for object encryption.

  • WALG_GPG_KEY_ID (alternative form WALE_GPG_KEY_ID)

To configure the GPG key for encryption and decryption. By default, no encryption is used. The public keyring is cached in the file "/.walg_key_cache".

  • WALG_DELTA_MAX_STEPS

A delta backup is the difference between the previously taken backup and the present state. WALG_DELTA_MAX_STEPS determines how many delta backups can be taken between full backups. Defaults to 0. For example, with WALG_DELTA_MAX_STEPS=6, each full backup is followed by at most six deltas before the next full backup is taken. The restoration process will automatically fetch all necessary deltas and the base backup and compose a valid restored backup (you still need the WALs after the start of the last backup to restore a consistent cluster). Delta computation is based on file system ModTime and the LSN of pages in data files.

  • WALG_DELTA_ORIGIN

To configure the base for the next delta backup (only if WALG_DELTA_MAX_STEPS is not exceeded). WALG_DELTA_ORIGIN can be LATEST (chaining increments) or LATEST_FULL (for bases whose volatile part is compact and chaining has no benefit; deltas overwrite each other). Defaults to LATEST.

  • WALG_COMPRESSION_METHOD

To configure the compression method used for backups. Possible options are: lz4, lzma, zstd, brotli. The default method is lz4. LZ4 is the fastest method, but its compression ratio is poor. LZMA is much slower, but compresses backups about 6 times better than LZ4. Zstd and Brotli are a good trade-off between speed and compression ratio, compressing about 3 times better than LZ4.

  • WALG_DISK_RATE_LIMIT

To configure the disk read rate limit during backup-push, in bytes per second.

  • WALG_NETWORK_RATE_LIMIT

To configure the network upload rate limit during backup-push, in bytes per second.

Usage

WAL-G currently supports these commands:

  • backup-fetch

When fetching base backups, the user should pass in the name of the backup and a path to a directory to extract to. If this directory does not exist, WAL-G will create it and any intermediate subdirectories.

wal-g backup-fetch ~/extract/to/here example-backup

WAL-G can also fetch the latest backup using:

wal-g backup-fetch ~/extract/to/here LATEST

  • backup-push

When uploading backups to S3, the user should pass in the path containing the backup started by Postgres as in:

wal-g backup-push /backup/directory/path

If the backup is pushed from a replication standby, WAL-G will monitor the timeline of the server. In case of promotion to master or a timeline switch, the backup will be uploaded but not finalized, and WAL-G will exit with an error. In this case the logs will contain the information necessary to finalize the backup. You can use the backed-up data if you clearly understand the risks involved.

  • wal-fetch

When fetching WAL archives from S3, the user should pass in the archive name and the name of the file to download to. This file should not exist as WAL-G will create it for you.

WAL-G will also prefetch WAL files ahead of the requested WAL file. These files are cached in the ./.wal-g/prefetch directory. Cached files older than the most recently requested WAL file are deleted from the cache to prevent cache bloat. If a file is requested with wal-fetch, it is also removed from the cache, but this triggers refilling of the cache with new files.

wal-g wal-fetch example-archive new-file-name

  • wal-push

When uploading WAL archives to S3, the user should pass in the absolute path to where the archive is located.

wal-g wal-push /path/to/archive

  • backup-list

Lists the names and creation times of the available backups.

  • delete

Is used to delete backups and the WAL segments preceding them. By default, delete performs a dry run. If you want to execute the deletion, you have to add the --confirm flag at the end of the command.

delete can operate in two modes: retain and before.

retain [FULL|FIND_FULL] %number%

retain 5 will keep the last 5 backups, and will fail if the 5th backup is a delta. If FULL is specified, WAL-G keeps the given number of full backups and everything in between: retain FULL 5 will keep 5 full backups and all of their deltas. retain FIND_FULL 5 will find the full backup necessary for the 5th backup and keep everything after it.

before [FIND_FULL] %name%

before base_000010000123123123 will keep everything after base_000010000123123123, and will fail if base_000010000123123123 is a delta. If FIND_FULL is specified, WAL-G will calculate the minimum backup needed to keep all deltas alive: before FIND_FULL base_000010000123123123 will keep everything after the base of base_000010000123123123. If FIND_FULL is not specified and the call would produce orphaned deltas, the call will fail and print their list.

Development

Installing

To compile and build the binary:

go get github.com/wal-g/wal-g
make all

Users can also install WAL-G by using make install. Specifying the GOBIN environment variable before installing allows the user to specify the installation location. By default, make install puts the compiled binary in go/bin.

export GOBIN=/usr/local/bin
make install

Testing

WAL-G relies heavily on unit tests. These tests do not require S3 configuration as the upload/download parts are tested using mocked objects. For more information on testing, please consult test_tools.

WAL-G will perform a round-trip compression/decompression test that generates a directory for data (e.g. data...), compressed files (e.g. compressed), and extracted files (e.g. extracted). These directories will only get cleaned up if the files in the original data directory match the files in the extracted one.

Test coverage can be obtained using:

go test -v -coverprofile=coverage.out
go tool cover -html=coverage.out

Authors

See also the list of contributors who participated in this project.

License

This project is licensed under the Apache License, Version 2.0, but the lzo support is licensed under GPL 3.0+. Please refer to the LICENSE.md file for more details.

Acknowledgements

WAL-G would not have happened without the support of Citus Data.

WAL-G came into existence as a result of the collaboration between a summer engineering intern at Citus, Katie Li, and Daniel Farina, the original author of WAL-E who currently serves as a principal engineer on the Citus Cloud team. Citus Data also has an open source extension to Postgres that distributes database queries horizontally to deliver scale and performance.

Chat

We have a Slack group to discuss WAL-G usage and development. To join, use the PostgreSQL Slack invite app.

Documentation

Constants

const (
	DefaultTarSizeThreshold = int64((1 << 30) - 1)
	PgControl               = "pg_control"
)

The threshold is set just under 1GB so that big database files are loaded one by one

const (
	Lz4AlgorithmName    = "lz4"
	LzmaAlgorithmName   = "lzma"
	ZstdAlgorithmName   = "zstd"
	BrotliAlgorithmName = "brotli"

	Lz4FileExtension    = "lz4"
	LzmaFileExtension   = "lzma"
	ZstdFileExtension   = "zst"
	BrotliFileExtension = "br"
	LzoFileExtension    = "lzo"
)
const (
	DefaultStreamingPartSizeFor10Concurrency = 20 << 20
	DefaultDataBurstRateLimit                = 8 * int64(DatabasePageSize)
)
const (
	RelFileSizeBound               = 1 << 30
	BlocksInRelFile                = RelFileSizeBound / int(DatabasePageSize)
	DefaultSpcNode   walparser.Oid = 1663
)
const (
	DatabasePageSize = walparser.BlockSize

	SignatureMagicNumber byte = 0x55

	DefaultTablespace    = "base"
	GlobalTablespace     = "global"
	NonDefaultTablespace = "pg_tblspc"
)
const (
	VersionStr      = "005"
	BaseBackupsPath = "/basebackups_" + VersionStr + "/"
	WalPath         = "/wal_" + VersionStr + "/"

	// SentinelSuffix is a suffix of backup finish sentinel file
	SentinelSuffix         = "_backup_stop_sentinel.json"
	CompressedBlockMaxSize = 20 << 20
	NotFoundAWSErrorCode   = "NotFound"
	NoSuchKeyAWSErrorCode  = "NoSuchKey"
)
const (
	WalFileInDelta      uint64 = 16
	DeltaFilenameSuffix        = "_delta"
	PartFilenameSuffix         = "_part"
)
const DefaultDataFolderPath = "/tmp"
const DeleteUsageText = "delete requires at least 2 parameters" + `
		retain 5                      keep 5 backups
		retain FULL 5                 keep 5 full backups and all deltas of them
		retain FIND_FULL 5            find necessary full for 5th and keep everything after it
		before base_0123              keep everything after base_0123 including itself
		before FIND_FULL base_0123    keep everything after the base of base_0123`
const GpgBin = "gpg"
const LzopBlockSize = 256 * 1024
const (
	RecordPartFilename = "currentRecord.part"
)
const TotalBgUploadedLimit = 1024
const (
	// WalSegmentSize is the size of one WAL file
	WalSegmentSize = uint64(16 * 1024 * 1024) // xlog.c line 113
)

Variables

var DiskLimiter *rate.Limiter
var ErrCrypterUseMischief = errors.New("Crypter is not checked before use")

ErrCrypterUseMischief happens when crypter is used before initialization

var ExcludedFilenames = make(map[string]Empty)

ExcludedFilenames is a list of excluded members from the bundled backup.

var IncrementFileHeader = []byte{'w', 'i', '1', SignatureMagicNumber}

"wi" at the head stands for "wal-g increment" format version "1", signature magic number

var InvalidIncrementFileHeaderError = errors.New("Invalid increment file header")
var InvalidWalFileMagicError = errors.New("WAL-G: WAL file magic is invalid ")
var MaxRetries = 15

MaxRetries limits upload and download retries during interaction with S3

var NetworkLimiter *rate.Limiter
var NilWalParserError = errors.New("expected to get non nil wal parser, but got nil one")
var NoBackupsFoundError = errors.New("No backups found")
var NoBitmapFoundError = errors.New("GetDeltaBitmapFor: no bitmap found")
var NoFilesToExtractError = errors.New("ExtractAll: did not provide files to extract")
var PgControlMissingError = errors.New("Corrupted backup: missing pg_control")
var TerminalLocation = *walparser.NewBlockLocation(0, 0, 0, 0)
var UnexpectedTarDataError = errors.New("Expected end of Tar")
var UnknownIncrementFileHeaderError = errors.New("Unknown increment file header")
var UnknownTableSpaceError = errors.New("GetRelFileNodeFrom: unknown tablespace")
var (
	WalgConfig *map[string]string
)

Functions

func ApplyFileIncrement added in v0.1.3

func ApplyFileIncrement(fileName string, increment io.Reader) error

ApplyFileIncrement changes pages according to supplied change map file

func CleanupPrefetchDirectories added in v0.1.14

func CleanupPrefetchDirectories(walFileName string, location string, cleaner Cleaner)

func ComputeDeletionSkipline added in v0.1.14

func ComputeDeletionSkipline(backups []BackupTime, target *Backup) (skipLine int, walSkipFileName string)

ComputeDeletionSkipline selects the last backup and the name of the last necessary WAL

func Configure

func Configure(verifyUploads bool) (uploader *Uploader, destinationFolder *S3Folder, err error)

Configure connects to S3 and creates an uploader. It makes sure that a valid session has started; if invalid, returns AWS error and `<nil>` values.

Requires these environment variables to be set: WALE_S3_PREFIX

The upload part size of the S3 uploader can also be configured.
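
For illustration, a minimal sketch of driving Configure from Go (assuming WALE_S3_PREFIX and AWS credentials are set in the environment; this example is not part of the package):

package main

import (
	"log"

	walg "github.com/wal-g/wal-g"
)

func main() {
	// Configure reads WALE_S3_PREFIX and the AWS credentials from the
	// environment. The verifyUploads flag toggles upload verification.
	uploader, folder, err := walg.Configure(false)
	if err != nil {
		log.Fatalf("WAL-G configuration failed: %v", err)
	}
	_ = uploader // used by the push commands
	_ = folder   // used by the fetch, list and delete commands
}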

func Connect

func Connect() (*pgx.Conn, error)

Connect establishes a connection to postgres using a UNIX socket. Must export PGHOST and run with `sudo -E -u postgres`. If PGHOST is not set or if the connection fails, an error is returned and the connection is `<nil>`.

Example: PGHOST=/var/run/postgresql or PGHOST=10.0.0.1
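
A minimal sketch of connecting over a UNIX socket (PGHOST could equally be exported in the shell beforehand):

package main

import (
	"log"
	"os"

	walg "github.com/wal-g/wal-g"
)

func main() {
	// Connect picks up the usual PG* environment variables.
	os.Setenv("PGHOST", "/var/run/postgresql")
	conn, err := walg.Connect()
	if err != nil {
		log.Fatalf("connection failed: %v", err)
	}
	defer conn.Close()
}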

func CreateFileWith added in v0.1.14

func CreateFileWith(filePath string, content io.Reader) error

func CreateUploader

func CreateUploader(svc s3iface.S3API, partsize, concurrency int) s3manageriface.UploaderAPI

CreateUploader returns an uploader with customizable concurrency and partsize.

func DecryptAndDecompressTar added in v0.1.14

func DecryptAndDecompressTar(writer io.Writer, readerMaker ReaderMaker, crypter Crypter) error

TODO : unit tests Ensures that file extension is valid. Any subsequent behavior depends on file type.

func ExtractAll

func ExtractAll(tarInterpreter TarInterpreter, files []ReaderMaker) error

TODO : unit tests ExtractAll Handles all files passed in. Supports `.lzo`, `.lz4`, `.lzma`, and `.tar`. File type `.nop` is used for testing purposes. Each file is extracted in its own goroutine and ExtractAll will wait for all goroutines to finish. Returns the first error encountered.
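
As a rough sketch (not the exact code behind backup-fetch), the types documented on this page compose as follows to extract a named backup:

package main

import (
	"log"

	walg "github.com/wal-g/wal-g"
)

func main() {
	_, folder, err := walg.Configure(false)
	if err != nil {
		log.Fatal(err)
	}

	// List the keys of all tar parts belonging to the backup.
	backup := walg.NewBackup(folder, "example-backup")
	keys, err := backup.GetKeys()
	if err != nil {
		log.Fatal(err)
	}

	// One ReaderMaker per stored tar part.
	readerMakers := make([]walg.ReaderMaker, 0, len(keys))
	for _, key := range keys {
		readerMakers = append(readerMakers, walg.NewS3ReaderMaker(folder, key))
	}

	// FileTarInterpreter extracts each tar member to disk.
	interpreter := &walg.FileTarInterpreter{NewDir: "/extract/to/here"}
	if err := walg.ExtractAll(interpreter, readerMakers); err != nil {
		log.Fatal(err)
	}
}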

func ExtractBlockLocations added in v0.1.14

func ExtractBlockLocations(records []walparser.XLogRecord) []walparser.BlockLocation

func FastCopy added in v0.1.14

func FastCopy(dst io.Writer, src io.Reader) (int64, error)

func GetBackupPath added in v0.1.4

func GetBackupPath(folder *S3Folder) string

GetBackupPath gets path for basebackup in a bucket

func GetDeltaFilenameFor added in v0.1.14

func GetDeltaFilenameFor(walFilename string) (string, error)

func GetFileExtension added in v0.1.11

func GetFileExtension(filePath string) string

func GetFileRelativePath added in v0.1.14

func GetFileRelativePath(fileAbsPath string, directoryPath string) string

func GetKeyRingId added in v0.1.3

func GetKeyRingId() string

GetKeyRingId extracts name of a key to use from env variable

func GetLatestBackupKey added in v0.1.14

func GetLatestBackupKey(folder *S3Folder) (string, error)

func GetNextWalFilename added in v0.1.14

func GetNextWalFilename(name string) (string, error)

GetNextWalFilename computes name of next WAL segment
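
For example (a small sketch; the expected output follows standard WAL segment numbering):

package main

import (
	"fmt"
	"log"

	walg "github.com/wal-g/wal-g"
)

func main() {
	next, err := walg.GetNextWalFilename("000000010000000000000001")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(next) // expected: 000000010000000000000002
}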

func GetPositionInDelta added in v0.1.14

func GetPositionInDelta(walFilename string) int

func GetPrefetchLocations added in v0.1.14

func GetPrefetchLocations(location string, walFileName string) (prefetchLocation string, runningLocation string, runningFile string, fetchedFile string)

func GetRelFileIdFrom added in v0.1.14

func GetRelFileIdFrom(filePath string) (int, error)

func GetRelFileNodeFrom added in v0.1.14

func GetRelFileNodeFrom(filePath string) (*walparser.RelFileNode, error)

func GetSentinelUserData added in v0.1.8

func GetSentinelUserData() interface{}

GetSentinelUserData tries to parse WALG_SENTINEL_USER_DATA env variable

func HandleBackupFetch added in v0.1.3

func HandleBackupFetch(backupName string, folder *S3Folder, archiveDirectory string, mem bool) (lsn *uint64)

TODO : unit tests HandleBackupFetch is invoked to perform wal-g backup-fetch

func HandleBackupList added in v0.1.3

func HandleBackupList(folder *S3Folder)

TODO : unit tests HandleBackupList is invoked to perform wal-g backup-list

func HandleBackupPush added in v0.1.3

func HandleBackupPush(archiveDirectory string, uploader *Uploader)

TODO : unit tests HandleBackupPush is invoked to perform a wal-g backup-push

func HandleDelete added in v0.1.3

func HandleDelete(folder *S3Folder, args []string)

TODO : unit tests HandleDelete is invoked to perform wal-g delete

func HandleWALFetch added in v0.1.3

func HandleWALFetch(folder *S3Folder, walFileName string, location string, triggerPrefetch bool)

TODO : unit tests HandleWALFetch is invoked to perform wal-g wal-fetch

func HandleWALPrefetch added in v0.1.3

func HandleWALPrefetch(folder *S3Folder, walFileName string, location string, uploader *Uploader)

TODO : unit tests HandleWALPrefetch is invoked by wal-fetch command to speed up database restoration

func HandleWALPush added in v0.1.3

func HandleWALPush(uploader *Uploader, walFilePath string)

TODO : unit tests HandleWALPush is invoked to perform wal-g wal-push

func IsAwsNotExist added in v0.1.14

func IsAwsNotExist(err error) bool

func LookupConfigValue added in v0.1.14

func LookupConfigValue(key string) (value string, ok bool)

func NewDiskLimitReader added in v0.1.11

func NewDiskLimitReader(r io.ReadCloser) io.ReadCloser

NewDiskLimitReader returns a reader that is rate limited by disk limiter

func NewLzoReader added in v0.1.11

func NewLzoReader(r io.Reader) (io.ReadCloser, error)

func NewLzoWriter added in v0.1.11

func NewLzoWriter(w io.Writer) io.WriteCloser

func NewNetworkLimitReader added in v0.1.11

func NewNetworkLimitReader(r io.ReadCloser) io.ReadCloser

NewNetworkLimitReader returns a reader that is rate limited by network limiter

func PackFileTo added in v0.1.14

func PackFileTo(tarBall TarBall, fileInfoHeader *tar.Header, fileContent io.Reader) (fileSize int64, err error)

func ParseWALFilename added in v0.1.14

func ParseWALFilename(name string) (timelineId uint32, logSegNo uint64, err error)

TODO : unit tests ParseWALFilename extracts numeric parts from WAL file name

func ReadIncrementFileHeader added in v0.1.14

func ReadIncrementFileHeader(reader io.Reader) error

func ReadIncrementalFile added in v0.1.14

func ReadIncrementalFile(filePath string, fileSize int64, lsn uint64, deltaBitmap *roaring.Bitmap) (fileReader io.ReadCloser, size int64, err error)

func ReadLocationsFrom added in v0.1.14

func ReadLocationsFrom(reader io.Reader) ([]walparser.BlockLocation, error)

func ResolveSymlink

func ResolveSymlink(path string) string

ResolveSymlink converts path to physical if it is symlink

func SelectRelFileBlocks added in v0.1.14

func SelectRelFileBlocks(bitmap *roaring.Bitmap, relFileId int) *roaring.Bitmap

func ShouldPrefault added in v0.1.14

func ShouldPrefault(name string) (lsn uint64, shouldPrefault bool, timelineId uint32, err error)

func ToBytes added in v0.1.14

func ToBytes(x interface{}) []byte

func ToPartFilename added in v0.1.14

func ToPartFilename(deltaFilename string) string

func TryDownloadWALFile added in v0.1.14

func TryDownloadWALFile(folder *S3Folder, walPath string) (archiveReader io.ReadCloser, exists bool, err error)

func WriteLocationsTo added in v0.1.14

func WriteLocationsTo(writer io.Writer, locations []walparser.BlockLocation) error

Types

type Archive

type Archive struct {
	Folder  *S3Folder
	Archive *string
}

Archive contains information associated with a WAL archive.

func (*Archive) CheckExistence

func (archive *Archive) CheckExistence() (bool, error)

CheckExistence checks that the specified WAL file exists.

func (*Archive) GetArchive

func (archive *Archive) GetArchive() (io.ReadCloser, error)

GetArchive downloads the specified archive from S3.
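
A hedged sketch of checking for and downloading a single WAL archive with this type (the object key shown is hypothetical, and the returned stream is the stored, still-compressed object):

package main

import (
	"io"
	"log"
	"os"

	walg "github.com/wal-g/wal-g"
)

func main() {
	_, folder, err := walg.Configure(false)
	if err != nil {
		log.Fatal(err)
	}

	// Hypothetical key of a compressed WAL segment inside the folder.
	key := "wal_005/000000010000000000000001.lz4"
	archive := &walg.Archive{Folder: folder, Archive: &key}

	exists, err := archive.CheckExistence()
	if err != nil || !exists {
		log.Fatalf("archive not found: %v", err)
	}

	reader, err := archive.GetArchive()
	if err != nil {
		log.Fatal(err)
	}
	defer reader.Close()
	io.Copy(os.Stdout, reader) // the raw (compressed) object bytes
}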

type ArchiveNonExistenceError added in v0.1.14

type ArchiveNonExistenceError struct {
	// contains filtered or unexported fields
}

func (ArchiveNonExistenceError) Error added in v0.1.14

func (err ArchiveNonExistenceError) Error() string

type Backup

type Backup struct {
	Folder *S3Folder
	Name   string
}

Backup contains information about a valid backup generated and uploaded by WAL-G.

func NewBackup added in v0.1.14

func NewBackup(folder *S3Folder, name string) *Backup

func (*Backup) CheckExistence

func (backup *Backup) CheckExistence() (bool, error)

CheckExistence checks that the specified backup exists.

func (*Backup) GetKeys

func (backup *Backup) GetKeys() ([]string, error)

GetKeys returns all the keys for the Files in the specified backup.

type BackupFileDescription added in v0.1.3

type BackupFileDescription struct {
	IsIncremented bool // should never be both incremented and Skipped
	IsSkipped     bool
	MTime         time.Time
}

type BackupFileList added in v0.1.3

type BackupFileList map[string]BackupFileDescription

type BackupTime

type BackupTime struct {
	Name        string
	Time        time.Time
	WalFileName string
}

BackupTime is used to sort backups by latest modified time.

func GetBackupTimeSlices added in v0.1.3

func GetBackupTimeSlices(backups []*s3.Object) []BackupTime

GetBackupTimeSlices converts S3 objects to backup description

func GetGarbageBackupTimeSlicesFromPrefix added in v0.1.14

func GetGarbageBackupTimeSlicesFromPrefix(backups []*s3.CommonPrefix, nongarbage []BackupTime) []BackupTime

GetGarbageBackupTimeSlicesFromPrefix converts S3 common prefixes to garbage backup descriptions

type BgUploader added in v0.1.5

type BgUploader struct {
	// contains filtered or unexported fields
}

BgUploader represents the state of concurrent WAL upload

func NewBgUploader added in v0.1.14

func NewBgUploader(walFilePath string, maxParallelWorkers int32, uploader *Uploader) *BgUploader

func (*BgUploader) Start added in v0.1.5

func (bgUploader *BgUploader) Start()

Start up checking what's inside archive_status

func (*BgUploader) Stop added in v0.1.5

func (bgUploader *BgUploader) Stop()

Stop pipeline

type BlockLocationReader added in v0.1.14

type BlockLocationReader struct {
	// contains filtered or unexported fields
}

func NewBlockLocationReader added in v0.1.14

func NewBlockLocationReader(underlying io.Reader) *BlockLocationReader

func (*BlockLocationReader) ReadNextLocation added in v0.1.14

func (reader *BlockLocationReader) ReadNextLocation() (*walparser.BlockLocation, error)

ReadNextLocation returns any reader error wrapped with errors.Wrap

type BlockLocationWriter added in v0.1.14

type BlockLocationWriter struct {
	Underlying io.Writer
}

func NewBlockLocationWriter added in v0.1.14

func NewBlockLocationWriter(underlying io.Writer) *BlockLocationWriter

func (*BlockLocationWriter) WriteLocation added in v0.1.14

func (locationWriter *BlockLocationWriter) WriteLocation(location walparser.BlockLocation) error

type BrotliCompressor added in v0.1.14

type BrotliCompressor struct{}

func (BrotliCompressor) FileExtension added in v0.1.14

func (compressor BrotliCompressor) FileExtension() string

func (BrotliCompressor) NewWriter added in v0.1.14

func (compressor BrotliCompressor) NewWriter(writer io.Writer) ReaderFromWriteCloser

type BrotliDecompressor added in v0.1.14

type BrotliDecompressor struct{}

func (BrotliDecompressor) Decompress added in v0.1.14

func (decompressor BrotliDecompressor) Decompress(dst io.Writer, src io.Reader) error

func (BrotliDecompressor) FileExtension added in v0.1.14

func (decompressor BrotliDecompressor) FileExtension() string

type BrotliReaderFromWriter added in v0.1.14

type BrotliReaderFromWriter struct {
	cbrotli.Writer
}

func NewBrotliReaderFromWriter added in v0.1.14

func NewBrotliReaderFromWriter(dst io.Writer) *BrotliReaderFromWriter

func (*BrotliReaderFromWriter) ReadFrom added in v0.1.14

func (writer *BrotliReaderFromWriter) ReadFrom(reader io.Reader) (n int64, err error)

type Bundle

type Bundle struct {
	ArchiveDirectory   string
	TarSizeThreshold   int64
	Sentinel           *Sentinel
	TarBall            TarBall
	TarBallMaker       TarBallMaker
	Crypter            OpenPGPCrypter
	Timeline           uint32
	Replica            bool
	IncrementFromLsn   *uint64
	IncrementFromFiles BackupFileList
	DeltaMap           PagedFileDeltaMap

	Files *sync.Map
	// contains filtered or unexported fields
}

A Bundle represents the directory to be walked. Contains at least one TarBall if walk has started. Each TarBall except for the last one will be at least TarSizeThreshold bytes. The Sentinel is used to ensure complete uploaded backups; in this case, pg_control is used as the sentinel.

func NewBundle added in v0.1.14

func NewBundle(archiveDirectory string, incrementFromLsn *uint64, incrementFromFiles BackupFileList) *Bundle

TODO: use DiskDataFolder

func (*Bundle) CheckSizeAndEnqueueBack added in v0.1.8

func (bundle *Bundle) CheckSizeAndEnqueueBack(tarBall TarBall) error

func (*Bundle) Deque added in v0.1.8

func (bundle *Bundle) Deque() TarBall

func (*Bundle) DownloadDeltaMap added in v0.1.14

func (bundle *Bundle) DownloadDeltaMap(folder *S3Folder, backupStartLSN uint64) error

func (*Bundle) EnqueueBack added in v0.1.8

func (bundle *Bundle) EnqueueBack(tarBall TarBall)

func (*Bundle) FinishQueue added in v0.1.8

func (bundle *Bundle) FinishQueue() error

func (*Bundle) GetFileRelPath added in v0.1.14

func (bundle *Bundle) GetFileRelPath(fileAbsPath string) string

func (*Bundle) GetFiles added in v0.1.8

func (bundle *Bundle) GetFiles() *sync.Map

func (*Bundle) GetIncrementBaseFiles added in v0.1.3

func (bundle *Bundle) GetIncrementBaseFiles() BackupFileList

GetIncrementBaseFiles returns list of Files from previous backup

func (*Bundle) GetIncrementBaseLsn added in v0.1.3

func (bundle *Bundle) GetIncrementBaseLsn() *uint64

GetIncrementBaseLsn returns LSN of previous backup

func (*Bundle) HandleWalkedFSObject added in v0.1.14

func (bundle *Bundle) HandleWalkedFSObject(path string, info os.FileInfo, err error) error

TODO : unit tests HandleWalkedFSObject walks files provided by the passed in directory and creates compressed tar members labeled as `part_00i.tar.*`, where '*' is compressor file extension.

To see which files and directories are Skipped, please consult ExcludedFilenames. Excluded directories will be created but their contents will not be included in the tar bundle.

func (*Bundle) NewTarBall

func (bundle *Bundle) NewTarBall(dedicatedUploader bool)

NewTarBall starts writing new tarball

func (*Bundle) PrefaultWalkedFSObject added in v0.1.14

func (bundle *Bundle) PrefaultWalkedFSObject(path string, info os.FileInfo, err error) error

func (*Bundle) StartBackup added in v0.1.3

func (bundle *Bundle) StartBackup(conn *pgx.Conn, backup string) (backupName string, lsn uint64, version int, err error)

TODO : unit tests StartBackup starts a non-exclusive base backup immediately. When finishing the backup, `backup_label` and `tablespace_map` contents are not immediately written to a file but returned instead. Returns empty string and an error if backup fails.
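
A hedged sketch of driving StartBackup directly (real backups go through the backup-push command; treating the second argument as the backup label is an assumption here):

package main

import (
	"log"
	"time"

	walg "github.com/wal-g/wal-g"
)

func main() {
	conn, err := walg.Connect()
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// A non-incremental bundle over the data directory (path is hypothetical).
	bundle := walg.NewBundle("/var/lib/postgresql/9.6/main", nil, nil)

	// Assumption: the second argument serves as the backup label.
	backupName, lsn, version, err := bundle.StartBackup(conn, time.Now().String())
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("started %s at LSN %d (server version %d)", backupName, lsn, version)
}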

func (*Bundle) StartQueue added in v0.1.8

func (bundle *Bundle) StartQueue()

func (*Bundle) UploadLabelFiles added in v0.1.14

func (bundle *Bundle) UploadLabelFiles(conn *pgx.Conn) (uint64, error)

TODO : unit tests UploadLabelFiles creates the `backup_label` and `tablespace_map` files by stopping the backup and uploads them to S3.

func (*Bundle) UploadPgControl added in v0.1.14

func (bundle *Bundle) UploadPgControl(compressorFileExtension string) error

TODO : unit tests UploadPgControl should only be called after the rest of the backup is successfully uploaded to S3.

type CachedKey added in v0.1.3

type CachedKey struct {
	KeyId string `json:"keyId"`
	Body  []byte `json:"body"`
}

CachedKey is the data transfer object describing format of key ring cache

type CascadeWriteCloser added in v0.1.11

type CascadeWriteCloser struct {
	io.WriteCloser
	Underlying io.Closer
}

CascadeWriteCloser bundles multiple Closers into one. Calling Close() will close the main and underlying writers.

func (*CascadeWriteCloser) Close added in v0.1.11

func (cascadeCloser *CascadeWriteCloser) Close() error

Close returns the first encountered error from closing main or underlying writer.

type Cleaner added in v0.1.3

type Cleaner interface {
	GetFiles(directory string) ([]string, error)
	Remove(file string)
}

Cleaner interface serves to separate file system logic from prefetch clean logic to make it testable

type CompressingPipeWriter added in v0.1.11

type CompressingPipeWriter struct {
	Input                io.Reader
	Output               io.Reader
	NewCompressingWriter func(io.Writer) ReaderFromWriteCloser
}

CompressingPipeWriter allows for flexibility of using compressed output. Input is read and compressed to a pipe reader.

func (*CompressingPipeWriter) Compress added in v0.1.11

func (pipeWriter *CompressingPipeWriter) Compress(crypter Crypter)

Compress compresses input to a pipe reader. Output must be used or pipe will block.

type CompressingPipeWriterError added in v0.1.11

type CompressingPipeWriterError struct {
	// contains filtered or unexported fields
}

CompressingPipeWriterError is used to catch specific errors from CompressingPipeWriter when uploading to S3. Will not retry upload if this error occurs.

func (CompressingPipeWriterError) Error added in v0.1.11

func (err CompressingPipeWriterError) Error() string

type Compressor added in v0.1.11

type Compressor interface {
	NewWriter(writer io.Writer) ReaderFromWriteCloser
	FileExtension() string
}

type Crypter added in v0.1.3

type Crypter interface {
	IsUsed() bool
	Encrypt(writer io.WriteCloser) (io.WriteCloser, error)
	Decrypt(reader io.ReadCloser) (io.Reader, error)
}

Crypter is responsible for making cryptographical pipeline parts when needed

type DataFolder added in v0.1.14

type DataFolder interface {
	// OpenReadonlyFile should return NoSuchFileError if it cannot find desired file
	OpenReadonlyFile(filename string) (io.ReadCloser, error)
	OpenWriteOnlyFile(filename string) (io.WriteCloser, error)
	CleanFolder() error
}

type Decompressor added in v0.1.11

type Decompressor interface {
	Decompress(dst io.Writer, src io.Reader) error
	FileExtension() string
}

type DelayWriteCloser added in v0.1.3

type DelayWriteCloser struct {
	// contains filtered or unexported fields
}

DelayWriteCloser delays the first write. Encryption starts writing its header immediately, but in many places the writer is instantiated long before the pipe is ready; this special writer therefore delays encryption initialization until the first actual write. If no write occurs, initialization is still performed, to handle zero-byte files correctly.

func (*DelayWriteCloser) Close added in v0.1.3

func (delayWriteCloser *DelayWriteCloser) Close() error

Close DelayWriteCloser

func (*DelayWriteCloser) Write added in v0.1.3

func (delayWriteCloser *DelayWriteCloser) Write(p []byte) (n int, err error)

type DeleteCommandArguments added in v0.1.3

type DeleteCommandArguments struct {
	Full       bool
	FindFull   bool
	Retain     bool
	Before     bool
	Target     string
	BeforeTime *time.Time
	// contains filtered or unexported fields
}

DeleteCommandArguments encapsulates the arguments for the delete command

func ParseDeleteArguments added in v0.1.3

func ParseDeleteArguments(args []string, fallBackFunc func()) (result DeleteCommandArguments)

ParseDeleteArguments interprets arguments for delete command. TODO: use flags or cobra

type DeltaFile added in v0.1.14

type DeltaFile struct {
	Locations []walparser.BlockLocation
	WalParser *walparser.WalParser
}

func LoadDeltaFile added in v0.1.14

func LoadDeltaFile(reader io.Reader) (*DeltaFile, error)

func NewDeltaFile added in v0.1.14

func NewDeltaFile(walParser *walparser.WalParser) (*DeltaFile, error)

func (*DeltaFile) Save added in v0.1.14

func (deltaFile *DeltaFile) Save(writer io.Writer) error

type DeltaFileChanWriter added in v0.1.14

type DeltaFileChanWriter struct {
	DeltaFile             *DeltaFile
	BlockLocationConsumer chan walparser.BlockLocation
}

func NewDeltaFileChanWriter added in v0.1.14

func NewDeltaFileChanWriter(deltaFile *DeltaFile) *DeltaFileChanWriter

func (*DeltaFileChanWriter) Consume added in v0.1.14

func (writer *DeltaFileChanWriter) Consume(waitGroup *sync.WaitGroup)

type DeltaFileManager added in v0.1.14

type DeltaFileManager struct {
	PartFiles        *LazyCache
	DeltaFileWriters *LazyCache

	CanceledDeltaFiles map[string]bool
	// contains filtered or unexported fields
}

func NewDeltaFileManager added in v0.1.14

func NewDeltaFileManager(dataFolder DataFolder) *DeltaFileManager

func (*DeltaFileManager) CancelRecording added in v0.1.14

func (manager *DeltaFileManager) CancelRecording(walFilename string)

func (*DeltaFileManager) CombinePartFile added in v0.1.14

func (manager *DeltaFileManager) CombinePartFile(deltaFilename string, partFile *WalPartFile) error

func (*DeltaFileManager) FlushDeltaFiles added in v0.1.14

func (manager *DeltaFileManager) FlushDeltaFiles(uploader *Uploader, completedPartFiles map[string]bool)

func (*DeltaFileManager) FlushFiles added in v0.1.14

func (manager *DeltaFileManager) FlushFiles(uploader *Uploader)

func (*DeltaFileManager) FlushPartFiles added in v0.1.14

func (manager *DeltaFileManager) FlushPartFiles() (completedPartFiles map[string]bool)

func (*DeltaFileManager) GetBlockLocationConsumer added in v0.1.14

func (manager *DeltaFileManager) GetBlockLocationConsumer(deltaFilename string) (chan walparser.BlockLocation, error)

func (*DeltaFileManager) GetPartFile added in v0.1.14

func (manager *DeltaFileManager) GetPartFile(deltaFilename string) (*WalPartFile, error)

func (*DeltaFileManager) LoadDeltaFileWriter added in v0.1.14

func (manager *DeltaFileManager) LoadDeltaFileWriter(deltaFilename string) (deltaFileWriter *DeltaFileChanWriter, err error)

TODO : unit tests

func (*DeltaFileManager) LoadPartFile added in v0.1.14

func (manager *DeltaFileManager) LoadPartFile(partFilename string) (*WalPartFile, error)

TODO : unit tests

type DeltaFileWriterNotFoundError added in v0.1.14

type DeltaFileWriterNotFoundError struct {
	// contains filtered or unexported fields
}

func (DeltaFileWriterNotFoundError) Error added in v0.1.14

func (err DeltaFileWriterNotFoundError) Error() string

type DiskDataFolder added in v0.1.14

type DiskDataFolder struct {
	// contains filtered or unexported fields
}

func NewDiskDataFolder added in v0.1.14

func NewDiskDataFolder(folderPath string) (*DiskDataFolder, error)

func (*DiskDataFolder) CleanFolder added in v0.1.14

func (folder *DiskDataFolder) CleanFolder() error

func (*DiskDataFolder) OpenReadonlyFile added in v0.1.14

func (folder *DiskDataFolder) OpenReadonlyFile(filename string) (io.ReadCloser, error)

func (*DiskDataFolder) OpenWriteOnlyFile added in v0.1.14

func (folder *DiskDataFolder) OpenWriteOnlyFile(filename string) (io.WriteCloser, error)

type Empty

type Empty struct{}

Empty is used for channel signaling.

type EmptyWriteIgnorer

type EmptyWriteIgnorer struct {
	io.WriteCloser
}

EmptyWriteIgnorer handles 0 byte write in LZ4 package to stop pipe reader/writer from blocking.

func (EmptyWriteIgnorer) Write

func (e EmptyWriteIgnorer) Write(p []byte) (int, error)

type FileSystemCleaner added in v0.1.3

type FileSystemCleaner struct{}

FileSystemCleaner actually performs its functions on the file system

func (FileSystemCleaner) GetFiles added in v0.1.3

func (cleaner FileSystemCleaner) GetFiles(directory string) (files []string, err error)

TODO : unit tests GetFiles of a directory

func (FileSystemCleaner) Remove added in v0.1.3

func (cleaner FileSystemCleaner) Remove(file string)

Remove file

type FileTarInterpreter

type FileTarInterpreter struct {
	NewDir             string
	Sentinel           S3TarBallSentinelDto
	IncrementalBaseDir string
}

FileTarInterpreter extracts input to disk.

func (*FileTarInterpreter) Interpret

func (tarInterpreter *FileTarInterpreter) Interpret(tr io.Reader, cur *tar.Header) error

TODO : unit tests Interpret extracts a tar file to disk and creates needed directories. Returns the first error encountered. Calls fsync after each file is written successfully.

type IncrementalPageReader added in v0.1.3

type IncrementalPageReader struct {
	PagedFile ReadSeekCloser
	FileSize  int64
	Lsn       uint64
	Next      []byte
	Blocks    []uint32
}

IncrementalPageReader constructs a difference map during initialization and then re-reads the file. The diff map may consist of up to 1GB/PostgresBlockSize elements == 512KB (with the standard 8KB page size, 1GB holds 131072 blocks; at 4 bytes per block number that is 512KB).

func (*IncrementalPageReader) AdvanceFileReader added in v0.1.3

func (pageReader *IncrementalPageReader) AdvanceFileReader() error

func (*IncrementalPageReader) Close added in v0.1.3

func (pageReader *IncrementalPageReader) Close() error

Close IncrementalPageReader

func (*IncrementalPageReader) DeltaBitmapInitialize added in v0.1.14

func (pageReader *IncrementalPageReader) DeltaBitmapInitialize(deltaBitmap *roaring.Bitmap)

func (*IncrementalPageReader) DrainMoreData added in v0.1.3

func (pageReader *IncrementalPageReader) DrainMoreData() (succeed bool, err error)

func (*IncrementalPageReader) FullScanInitialize added in v0.1.14

func (pageReader *IncrementalPageReader) FullScanInitialize() error

func (*IncrementalPageReader) Read added in v0.1.3

func (pageReader *IncrementalPageReader) Read(p []byte) (n int, err error)

func (*IncrementalPageReader) SelectNewValidPage added in v0.1.14

func (pageReader *IncrementalPageReader) SelectNewValidPage(pageBytes []byte, blockNo uint32) (valid bool)

SelectNewValidPage checks whether the page is valid and, if so, appends blockNo to the Blocks list

func (*IncrementalPageReader) WriteDiffMapToHeader added in v0.1.14

func (pageReader *IncrementalPageReader) WriteDiffMapToHeader(headerWriter io.Writer)

WriteDiffMapToHeader is currently used only with buffers, so we don't handle any writing errors

type InvalidBlockError added in v0.1.14

type InvalidBlockError struct {
	// contains filtered or unexported fields
}

InvalidBlockError indicates that a file contains an invalid page and cannot be archived incrementally

func (*InvalidBlockError) Error added in v0.1.14

func (err *InvalidBlockError) Error() string

type LazyCache added in v0.1.14

type LazyCache struct {
	// contains filtered or unexported fields
}

func NewLazyCache added in v0.1.14

func NewLazyCache(load func(key interface{}) (value interface{}, err error)) *LazyCache

func (*LazyCache) Load added in v0.1.14

func (lazyCache *LazyCache) Load(key interface{}) (value interface{}, exists bool, err error)

func (*LazyCache) LoadExisting added in v0.1.14

func (lazyCache *LazyCache) LoadExisting(key interface{}) (value interface{}, exists bool)

func (*LazyCache) Range added in v0.1.14

func (lazyCache *LazyCache) Range(reduce func(key, value interface{}) bool)

Range calls reduce sequentially for each key and value present in the cache. If reduce returns false, range stops the iteration.
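
A small usage sketch (the loader function shown is hypothetical):

package main

import (
	"fmt"
	"log"

	walg "github.com/wal-g/wal-g"
)

func main() {
	// The load function is invoked lazily for keys that are not cached yet.
	cache := walg.NewLazyCache(func(key interface{}) (interface{}, error) {
		return fmt.Sprintf("value-for-%v", key), nil
	})

	value, exists, err := cache.Load("a")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(value, exists)

	cache.Range(func(key, value interface{}) bool {
		fmt.Println(key, value)
		return true // returning false would stop the iteration
	})
}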

func (*LazyCache) Store added in v0.1.14

func (lazyCache *LazyCache) Store(key, value interface{})

type LimitedReader added in v0.1.11

type LimitedReader struct {
	// contains filtered or unexported fields
}

func (*LimitedReader) Close added in v0.1.11

func (r *LimitedReader) Close() error

func (*LimitedReader) Read added in v0.1.11

func (r *LimitedReader) Read(buf []byte) (int, error)

type Lz4Compressor added in v0.1.11

type Lz4Compressor struct{}

func (Lz4Compressor) FileExtension added in v0.1.11

func (compressor Lz4Compressor) FileExtension() string

func (Lz4Compressor) NewWriter added in v0.1.11

func (compressor Lz4Compressor) NewWriter(writer io.Writer) ReaderFromWriteCloser

type Lz4Decompressor added in v0.1.11

type Lz4Decompressor struct{}

func (Lz4Decompressor) Decompress added in v0.1.11

func (decompressor Lz4Decompressor) Decompress(dst io.Writer, src io.Reader) error

func (Lz4Decompressor) FileExtension added in v0.1.11

func (decompressor Lz4Decompressor) FileExtension() string

type Lz4ReaderFromWriter added in v0.1.14

type Lz4ReaderFromWriter struct {
	lz4.Writer
}

func NewLz4ReaderFromWriter added in v0.1.14

func NewLz4ReaderFromWriter(dst io.Writer) *Lz4ReaderFromWriter

func (*Lz4ReaderFromWriter) ReadFrom added in v0.1.14

func (writer *Lz4ReaderFromWriter) ReadFrom(reader io.Reader) (n int64, err error)

type LzmaCompressor added in v0.1.11

type LzmaCompressor struct{}

func (LzmaCompressor) FileExtension added in v0.1.11

func (compressor LzmaCompressor) FileExtension() string

func (LzmaCompressor) NewWriter added in v0.1.11

func (compressor LzmaCompressor) NewWriter(writer io.Writer) ReaderFromWriteCloser

type LzmaDecompressor added in v0.1.11

type LzmaDecompressor struct{}

func (LzmaDecompressor) Decompress added in v0.1.11

func (decompressor LzmaDecompressor) Decompress(dst io.Writer, src io.Reader) error

func (LzmaDecompressor) FileExtension added in v0.1.11

func (decompressor LzmaDecompressor) FileExtension() string

type LzmaReaderFromWriter added in v0.1.11

type LzmaReaderFromWriter struct {
	lzma.Writer
}

func NewLzmaReaderFromWriter added in v0.1.11

func NewLzmaReaderFromWriter(dst io.Writer) (*LzmaReaderFromWriter, error)

func (*LzmaReaderFromWriter) ReadFrom added in v0.1.11

func (writer *LzmaReaderFromWriter) ReadFrom(reader io.Reader) (n int64, err error)

type LzoDecompressor added in v0.1.11

type LzoDecompressor struct{}

func (LzoDecompressor) Decompress added in v0.1.11

func (decompressor LzoDecompressor) Decompress(dst io.Writer, src io.Reader) error

func (LzoDecompressor) FileExtension added in v0.1.11

func (decompressor LzoDecompressor) FileExtension() string

type MD5Reader added in v0.1.11

type MD5Reader struct {
	// contains filtered or unexported fields
}

func (*MD5Reader) Read added in v0.1.11

func (reader *MD5Reader) Read(p []byte) (n int, err error)

func (*MD5Reader) Sum added in v0.1.11

func (reader *MD5Reader) Sum() string

type NOPTarBall added in v0.1.14

type NOPTarBall struct {
	// contains filtered or unexported fields
}

NOPTarBall mocks a tarball. Used for testing purposes.

func (*NOPTarBall) AddSize added in v0.1.14

func (tarBall *NOPTarBall) AddSize(i int64)

func (*NOPTarBall) AwaitUploads added in v0.1.14

func (tarBall *NOPTarBall) AwaitUploads()

func (*NOPTarBall) CloseTar added in v0.1.14

func (tarBall *NOPTarBall) CloseTar() error

func (*NOPTarBall) Finish added in v0.1.14

func (tarBall *NOPTarBall) Finish(sentinelDto *S3TarBallSentinelDto) error

func (*NOPTarBall) SetUp added in v0.1.14

func (tarBall *NOPTarBall) SetUp(crypter Crypter, params ...string)

func (*NOPTarBall) Size added in v0.1.14

func (tarBall *NOPTarBall) Size() int64

func (*NOPTarBall) TarWriter added in v0.1.14

func (tarBall *NOPTarBall) TarWriter() *tar.Writer

type NOPTarBallMaker added in v0.1.14

type NOPTarBallMaker struct {
	// contains filtered or unexported fields
}

NOPTarBallMaker creates a new NOPTarBall. Used for testing purposes.

func (*NOPTarBallMaker) Make added in v0.1.14

func (tarBallMaker *NOPTarBallMaker) Make(inheritState bool) TarBall

Make creates a new NOPTarBall.

type NamedReader added in v0.1.14

type NamedReader interface {
	io.Reader
	Name() string
}

type NamedReaderImpl added in v0.1.14

type NamedReaderImpl struct {
	io.Reader
	// contains filtered or unexported fields
}

func (*NamedReaderImpl) Name added in v0.1.14

func (reader *NamedReaderImpl) Name() string

type NoSuchFileError added in v0.1.14

type NoSuchFileError struct {
	// contains filtered or unexported fields
}

func NewNoSuchFileError added in v0.1.14

func NewNoSuchFileError(filename string) *NoSuchFileError

func (NoSuchFileError) Error added in v0.1.14

func (err NoSuchFileError) Error() string

type NotWalFilenameError added in v0.1.14

type NotWalFilenameError struct {
	// contains filtered or unexported fields
}

func (NotWalFilenameError) Error added in v0.1.14

func (err NotWalFilenameError) Error() string

type OpenPGPCrypter added in v0.1.3

type OpenPGPCrypter struct {
	Configured bool
	KeyRingId  string

	PubKey    openpgp.EntityList
	SecretKey openpgp.EntityList
}

OpenPGPCrypter encapsulates the specifics of the cipher method, including keys, infrastructure information, etc. If many encryption methods come into use, it is worth extracting an interface.

func (*OpenPGPCrypter) ConfigureGPGCrypter added in v0.1.3

func (crypter *OpenPGPCrypter) ConfigureGPGCrypter()

ConfigureGPGCrypter is OpenPGPCrypter internal initialization

func (*OpenPGPCrypter) Decrypt added in v0.1.3

func (crypter *OpenPGPCrypter) Decrypt(reader io.ReadCloser) (io.Reader, error)

Decrypt creates decrypted reader from ordinary reader

func (*OpenPGPCrypter) Encrypt added in v0.1.3

func (crypter *OpenPGPCrypter) Encrypt(writer io.WriteCloser) (io.WriteCloser, error)

Encrypt creates encryption writer from ordinary writer

func (*OpenPGPCrypter) IsArmed added in v0.1.14

func (crypter *OpenPGPCrypter) IsArmed() bool

func (*OpenPGPCrypter) IsUsed added in v0.1.3

func (crypter *OpenPGPCrypter) IsUsed() bool

IsUsed checks the necessity of Crypter use. Must be called prior to any other crypter call.

type PagedFileDeltaMap added in v0.1.14

type PagedFileDeltaMap map[walparser.RelFileNode]*roaring.Bitmap

func NewPagedFileDeltaMap added in v0.1.14

func NewPagedFileDeltaMap() PagedFileDeltaMap

func (*PagedFileDeltaMap) AddToDelta added in v0.1.14

func (deltaMap *PagedFileDeltaMap) AddToDelta(location walparser.BlockLocation)

func (*PagedFileDeltaMap) GetDeltaBitmapFor added in v0.1.14

func (deltaMap *PagedFileDeltaMap) GetDeltaBitmapFor(filePath string) (*roaring.Bitmap, error)

TODO : unit test no bitmap found

type PgQueryRunner added in v0.1.8

type PgQueryRunner struct {
	Version int
	// contains filtered or unexported fields
}

PgQueryRunner is an implementation for controlling PostgreSQL 9.0+

func NewPgQueryRunner added in v0.1.8

func NewPgQueryRunner(conn *pgx.Conn) (*PgQueryRunner, error)

NewPgQueryRunner builds QueryRunner from available connection

func (*PgQueryRunner) BuildGetVersion added in v0.1.8

func (queryRunner *PgQueryRunner) BuildGetVersion() string

BuildGetVersion formats a query to retrieve PostgreSQL numeric version

func (*PgQueryRunner) BuildStartBackup added in v0.1.8

func (queryRunner *PgQueryRunner) BuildStartBackup() (string, error)

BuildStartBackup formats a query that starts backup according to server features and version

func (*PgQueryRunner) BuildStopBackup added in v0.1.8

func (queryRunner *PgQueryRunner) BuildStopBackup() (string, error)

BuildStopBackup formats a query that stops backup according to server features and version

func (*PgQueryRunner) StartBackup added in v0.1.8

func (queryRunner *PgQueryRunner) StartBackup(backup string) (backupName string, lsnString string, inRecovery bool, err error)

StartBackup informs the database that we are starting a copy of the cluster contents

func (*PgQueryRunner) StopBackup added in v0.1.8

func (queryRunner *PgQueryRunner) StopBackup() (label string, offsetMap string, lsnStr string, err error)

StopBackup informs the database that the copy is over

type PostgresPageHeader added in v0.1.14

type PostgresPageHeader struct {
	// contains filtered or unexported fields
}

func ParsePostgresPageHeader added in v0.1.14

func ParsePostgresPageHeader(reader io.Reader) (*PostgresPageHeader, error)

ParsePostgresPageHeader reads information from PostgreSQL page header. Exported for test reasons.
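
A minimal sketch of reading one page header from a relation file (the path is hypothetical):

package main

import (
	"fmt"
	"log"
	"os"

	walg "github.com/wal-g/wal-g"
)

func main() {
	file, err := os.Open("/var/lib/postgresql/9.6/main/base/16384/16397")
	if err != nil {
		log.Fatal(err)
	}
	defer file.Close()

	// Parses the header of the page at the current read position.
	header, err := walg.ParsePostgresPageHeader(file)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(header.Lsn(), header.IsValid(), header.IsNew())
}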

func (*PostgresPageHeader) IsNew added in v0.1.14

func (header *PostgresPageHeader) IsNew() bool

func (*PostgresPageHeader) IsValid added in v0.1.14

func (header *PostgresPageHeader) IsValid() bool

func (*PostgresPageHeader) Lsn added in v0.1.14

func (header *PostgresPageHeader) Lsn() uint64

type QueryRunner added in v0.1.8

type QueryRunner interface {
	// This call should inform the database that we are going to copy cluster's contents
	// Should fail if backup is currently impossible
	StartBackup(backup string) (string, string, bool, error)
	// Inform database that contents are copied, get information on backup
	StopBackup() (string, string, string, error)
}

QueryRunner is the interface for controlling the database during backup

type ReadCascadeCloser added in v0.1.11

type ReadCascadeCloser struct {
	io.Reader
	io.Closer
}

ReadCascadeCloser composes io.ReadCloser from two parts

type ReadSeekCloser added in v0.1.14

type ReadSeekCloser interface {
	io.Reader
	io.Seeker
	io.Closer
}

type ReadSeekCloserImpl added in v0.1.14

type ReadSeekCloserImpl struct {
	io.Reader
	io.Seeker
	io.Closer
}

type ReaderFromWriteCloser added in v0.1.11

type ReaderFromWriteCloser interface {
	io.ReaderFrom
	io.WriteCloser
}

type ReaderMaker

type ReaderMaker interface {
	Reader() (io.ReadCloser, error)
	Path() string
}

ReaderMaker is the generic interface used by extract. It allows for ease of handling different file formats.

type S3Folder added in v0.1.14

type S3Folder struct {
	S3API  s3iface.S3API
	Bucket *string
	Server string
	// contains filtered or unexported fields
}

func NewS3Folder added in v0.1.14

func NewS3Folder(s3API s3iface.S3API, bucket, server string, preventWalOverwrite bool) *S3Folder

type S3ReaderMaker

type S3ReaderMaker struct {
	Folder       *S3Folder
	RelativePath string
}

S3ReaderMaker creates readers for downloading from S3

func NewS3ReaderMaker added in v0.1.14

func NewS3ReaderMaker(folder *S3Folder, key string) *S3ReaderMaker

func (*S3ReaderMaker) Path

func (readerMaker *S3ReaderMaker) Path() string

func (*S3ReaderMaker) Reader

func (readerMaker *S3ReaderMaker) Reader() (io.ReadCloser, error)

Reader creates a new S3 reader for each S3 object.

type S3TarBall

type S3TarBall struct {
	// contains filtered or unexported fields
}

S3TarBall represents a tar file that is going to be uploaded to S3.

func (*S3TarBall) AddSize added in v0.1.8

func (tarBall *S3TarBall) AddSize(i int64)

AddSize to total Size

func (*S3TarBall) AwaitUploads added in v0.1.8

func (tarBall *S3TarBall) AwaitUploads()

func (*S3TarBall) CloseTar

func (tarBall *S3TarBall) CloseTar() error

CloseTar closes the tar writer, flushing any unwritten data to the underlying writer before also closing the underlying writer.

func (*S3TarBall) Finish

func (tarBall *S3TarBall) Finish(sentinelDto *S3TarBallSentinelDto) error

Finish writes a .json file description and uploads it with the backup name. Finish will wait until all tar file parts have been uploaded. The json file will only be uploaded if all other parts of the backup are present in S3; otherwise, an alert is given with the corresponding error.

func (*S3TarBall) SetUp

func (tarBall *S3TarBall) SetUp(crypter Crypter, names ...string)

SetUp creates a new tar writer and starts upload to S3. Upload will block until the tar file is finished writing. If a name for the file is not given, the default name is of the form `part_....tar.[Compressor file extension]`.

func (*S3TarBall) Size

func (tarBall *S3TarBall) Size() int64

Size accumulated in this tarball

func (*S3TarBall) TarWriter added in v0.1.11

func (tarBall *S3TarBall) TarWriter() *tar.Writer

type S3TarBallMaker

type S3TarBallMaker struct {
	// contains filtered or unexported fields
}

S3TarBallMaker creates tarballs that are uploaded to S3.

func NewS3TarBallMaker added in v0.1.14

func NewS3TarBallMaker(backupName string, uploader *Uploader) *S3TarBallMaker

func (*S3TarBallMaker) Make

func (tarBallMaker *S3TarBallMaker) Make(dedicatedUploader bool) TarBall

Make returns a tarball with required S3 fields.

type S3TarBallSentinelDto added in v0.1.3

type S3TarBallSentinelDto struct {
	BackupStartLSN    *uint64 `json:"LSN"`
	IncrementFromLSN  *uint64 `json:"DeltaFromLSN,omitempty"`
	IncrementFrom     *string `json:"DeltaFrom,omitempty"`
	IncrementFullName *string `json:"DeltaFullName,omitempty"`
	IncrementCount    *int    `json:"DeltaCount,omitempty"`

	Files BackupFileList `json:"Files"`

	PgVersion       int     `json:"PgVersion"`
	BackupFinishLSN *uint64 `json:"FinishLSN"`

	UserData interface{} `json:"UserData,omitempty"`
}

S3TarBallSentinelDto describes file structure of json sentinel

type Saver added in v0.1.14

type Saver interface {
	Save(writer io.Writer) error
}

type Sentinel

type Sentinel struct {
	Info os.FileInfo
	// contains filtered or unexported fields
}

Sentinel is used to signal completion of a walked directory.

type TarBall

type TarBall interface {
	SetUp(crypter Crypter, args ...string)
	CloseTar() error
	Finish(sentinelDto *S3TarBallSentinelDto) error
	Size() int64
	AddSize(int64)
	TarWriter() *tar.Writer
	AwaitUploads()
}

A TarBall represents one tar file.

type TarBallMaker

type TarBallMaker interface {
	Make(dedicatedUploader bool) TarBall
}

TarBallMaker is used to allow for flexible creation of different TarBalls.

func NewNopTarBallMaker added in v0.1.14

func NewNopTarBallMaker() TarBallMaker

type TarInterpreter

type TarInterpreter interface {
	Interpret(reader io.Reader, header *tar.Header) error
}

TarInterpreter behaves differently for different file types.

type TimeSlice

type TimeSlice []BackupTime

TimeSlice represents a backup and its last modified time.

func (TimeSlice) Len

func (timeSlice TimeSlice) Len() int

func (TimeSlice) Less

func (timeSlice TimeSlice) Less(i, j int) bool

func (TimeSlice) Swap

func (timeSlice TimeSlice) Swap(i, j int)
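
Since TimeSlice implements sort.Interface through Len, Less and Swap, it works with the standard sort package; a minimal sketch:

package main

import (
	"fmt"
	"sort"
	"time"

	walg "github.com/wal-g/wal-g"
)

func main() {
	backups := []walg.BackupTime{
		{Name: "base_000000010000000000000004", Time: time.Now()},
		{Name: "base_000000010000000000000002", Time: time.Now().Add(-time.Hour)},
	}
	// Orders the backups by their modification time.
	sort.Sort(walg.TimeSlice(backups))
	for _, backup := range backups {
		fmt.Println(backup.Name, backup.Time.Format(time.RFC3339))
	}
}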

type UnknownCompressionMethodError added in v0.1.11

type UnknownCompressionMethodError struct{}

func (UnknownCompressionMethodError) Error added in v0.1.11

func (err UnknownCompressionMethodError) Error() string

type UnsetEnvVarError

type UnsetEnvVarError struct {
	// contains filtered or unexported fields
}

UnsetEnvVarError is used to indicate unset required environment variables for WAL-G.

func (UnsetEnvVarError) Error

func (e UnsetEnvVarError) Error() string

type UnsupportedFileTypeError

type UnsupportedFileTypeError struct {
	Path       string
	FileFormat string
}

UnsupportedFileTypeError is used to signal file types that are unsupported by WAL-G.

func (UnsupportedFileTypeError) Error

func (e UnsupportedFileTypeError) Error() string

type UntilEOFReader added in v0.1.11

type UntilEOFReader struct {
	// contains filtered or unexported fields
}

func NewUntilEofReader added in v0.1.11

func NewUntilEofReader(underlying io.Reader) *UntilEOFReader

func (*UntilEOFReader) Read added in v0.1.11

func (reader *UntilEOFReader) Read(p []byte) (n int, err error)

type Uploader added in v0.1.14

type Uploader struct {
	SSEKMSKeyId  string
	StorageClass string
	Success      bool
	// contains filtered or unexported fields
}

Uploader contains fields associated with uploading tarballs. Multiple tarballs can share one uploader. Must call CreateUploader() in 'configure.go'.

func NewUploader added in v0.1.14

func NewUploader(
	uploaderAPI s3manageriface.UploaderAPI,
	compressor Compressor,
	uploadingLocation *S3Folder,
	deltaDataFolder DataFolder,
	useWalDelta, verify bool,
) *Uploader

NewUploader creates a new tar uploader without the actual S3 uploader. CreateUploader() is used to configure byte size and concurrency streams for the uploader.

func (*Uploader) Clone added in v0.1.14

func (uploader *Uploader) Clone() *Uploader

Clone creates similar Uploader with new WaitGroup

func (*Uploader) CreateUploadInput added in v0.1.14

func (uploader *Uploader) CreateUploadInput(path string, reader io.Reader) *s3manager.UploadInput

CreateUploadInput creates a s3manager.UploadInput for a Uploader using the specified path and reader.

func (*Uploader) UploadFile added in v0.1.14

func (uploader *Uploader) UploadFile(file NamedReader) error

TODO : unit tests UploadFile compresses a file and uploads it.

func (*Uploader) UploadWalFile added in v0.1.14

func (uploader *Uploader) UploadWalFile(file NamedReader) error

TODO : unit tests

type WalDeltaRecorder added in v0.1.14

type WalDeltaRecorder struct {
	// contains filtered or unexported fields
}

func NewWalDeltaRecorder added in v0.1.14

func NewWalDeltaRecorder(blockLocationConsumer chan walparser.BlockLocation) *WalDeltaRecorder

type WalDeltaRecordingReader added in v0.1.14

type WalDeltaRecordingReader struct {
	PageReader       walparser.WalPageReader
	WalParser        walparser.WalParser
	PageDataLeftover []byte
	Recorder         *WalDeltaRecorder
	// contains filtered or unexported fields
}

In case of a recording error, WalDeltaRecordingReader stops recording but continues reading data correctly

func NewWalDeltaRecordingReader added in v0.1.14

func NewWalDeltaRecordingReader(walFileReader io.Reader, walFilename string, manager *DeltaFileManager) (*WalDeltaRecordingReader, error)

func (*WalDeltaRecordingReader) Close added in v0.1.14

func (reader *WalDeltaRecordingReader) Close() error

func (*WalDeltaRecordingReader) Read added in v0.1.14

func (reader *WalDeltaRecordingReader) Read(p []byte) (n int, err error)

func (*WalDeltaRecordingReader) RecordBlockLocationsFromPage added in v0.1.14

func (reader *WalDeltaRecordingReader) RecordBlockLocationsFromPage() error

type WalPart added in v0.1.14

type WalPart struct {
	// contains filtered or unexported fields
}

func LoadWalPart added in v0.1.14

func LoadWalPart(reader io.Reader) (*WalPart, error)

func NewWalPart added in v0.1.14

func NewWalPart(dataType WalPartDataType, id uint8, data []byte) *WalPart

func (*WalPart) Save added in v0.1.14

func (part *WalPart) Save(writer io.Writer) error

type WalPartDataType added in v0.1.14

type WalPartDataType uint8

const (
	PreviousWalHeadType WalPartDataType = 0
	WalTailType         WalPartDataType = 1
	WalHeadType         WalPartDataType = 2
)

type WalPartFile added in v0.1.14

type WalPartFile struct {
	WalTails        [][]byte
	PreviousWalHead []byte
	WalHeads        [][]byte
}

func LoadPartFile added in v0.1.14

func LoadPartFile(reader io.Reader) (*WalPartFile, error)

func NewWalPartFile added in v0.1.14

func NewWalPartFile() *WalPartFile

func (*WalPartFile) CombineRecords added in v0.1.14

func (partFile *WalPartFile) CombineRecords() ([]walparser.XLogRecord, error)

func (*WalPartFile) IsComplete added in v0.1.14

func (partFile *WalPartFile) IsComplete() bool

func (*WalPartFile) Save added in v0.1.14

func (partFile *WalPartFile) Save(writer io.Writer) error

type WalPartRecorder added in v0.1.14

type WalPartRecorder struct {
	// contains filtered or unexported fields
}

func NewWalPartRecorder added in v0.1.14

func NewWalPartRecorder(walFilename string, manager *DeltaFileManager) (*WalPartRecorder, error)

func (*WalPartRecorder) SaveNextWalHead added in v0.1.14

func (recorder *WalPartRecorder) SaveNextWalHead(head []byte) error

func (*WalPartRecorder) SavePreviousWalTail added in v0.1.14

func (recorder *WalPartRecorder) SavePreviousWalTail(tailData []byte) error

type WrongTypeError added in v0.1.14

type WrongTypeError struct {
	// contains filtered or unexported fields
}

func (WrongTypeError) Error added in v0.1.14

func (err WrongTypeError) Error() string

type ZeroReader

type ZeroReader struct{}

ZeroReader generates a slice of zeroes. Used to pad tar in cases where length of file changes.

func (*ZeroReader) Read

func (z *ZeroReader) Read(p []byte) (int, error)

type ZstdCompressor added in v0.1.11

type ZstdCompressor struct{}

func (ZstdCompressor) FileExtension added in v0.1.11

func (compressor ZstdCompressor) FileExtension() string

func (ZstdCompressor) NewWriter added in v0.1.11

func (compressor ZstdCompressor) NewWriter(writer io.Writer) ReaderFromWriteCloser

type ZstdDecompressor added in v0.1.11

type ZstdDecompressor struct{}

func (ZstdDecompressor) Decompress added in v0.1.11

func (decompressor ZstdDecompressor) Decompress(dst io.Writer, src io.Reader) error

func (ZstdDecompressor) FileExtension added in v0.1.11

func (decompressor ZstdDecompressor) FileExtension() string

type ZstdReaderFromWriter added in v0.1.11

type ZstdReaderFromWriter struct {
	zstd.Writer
}

func NewZstdReaderFromWriter added in v0.1.11

func NewZstdReaderFromWriter(dst io.Writer) *ZstdReaderFromWriter

func (*ZstdReaderFromWriter) ReadFrom added in v0.1.11

func (writer *ZstdReaderFromWriter) ReadFrom(reader io.Reader) (n int64, err error)

Directories

Path          Synopsis
cmd           wal-g command
cmd/compress  command
cmd/delta     command
cmd/extract   command
cmd/generate  command
