Documentation
Overview
Package downloader implements parallel downloading of multiple URLs, with progress reported through callbacks.
It is used by the `hub` package, but it is also left public in case it becomes useful to others.
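A minimal usage sketch follows. The import path is an assumption here (adjust it to wherever this package lives in your module); the URL and file path are placeholders:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/gomlx/go-huggingface/downloader" // assumed import path
)

func main() {
	m := downloader.New()
	err := m.Download(context.Background(), "https://example.com/file.bin", "/tmp/file.bin",
		func(downloadedBytes, totalBytes int64) {
			fmt.Printf("\r%d of %d bytes", downloadedBytes, totalBytes)
		})
	if err != nil {
		log.Fatal(err)
	}
}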
Index
- Variables
- type Manager
- func (m *Manager) Download(ctx context.Context, url string, filePath string, callback ProgressCallback) error
- func (m *Manager) FetchHeader(ctx context.Context, url string) (header http.Header, contentLength int64, err error)
- func (m *Manager) MaxParallel(n int) *Manager
- func (m *Manager) WithAuthToken(authToken string) *Manager
- func (m *Manager) WithUserAgent(userAgent string) *Manager
- type ProgressCallback
- type Semaphore
Constants
This section is empty.
Variables
var CancellationError = errors.New("download cancelled")
Functions
This section is empty.
Types
type Manager
type Manager struct {
// contains filtered or unexported fields
}
Manager handles downloads, reporting back progress and errors.
func New
func New() *Manager
New creates a Manager that downloads files in parallel -- by default, at most 20 at a time.
func (*Manager) Download
func (m *Manager) Download(ctx context.Context, url string, filePath string, callback ProgressCallback) error
Download downloads the given url to the given filePath. It may block if the maximum number of parallel downloads has been reached, so consider calling it on its own goroutine.
Progress of the download is reported to the given callback, if it is not nil.
The context ctx can be used to interrupt the downloading.
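For example, a sketch launching several downloads on their own goroutines, all cancellable through a shared context (import path, URLs and destination paths are placeholders):

package main

import (
	"context"
	"log"
	"path"
	"sync"

	"github.com/gomlx/go-huggingface/downloader" // assumed import path
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel() // Cancelling ctx interrupts in-flight downloads.

	m := downloader.New()
	urls := []string{"https://example.com/a.bin", "https://example.com/b.bin"}
	var wg sync.WaitGroup
	for _, url := range urls {
		wg.Add(1)
		go func(url string) {
			defer wg.Done()
			// Download may block until a parallel-download slot is free.
			if err := m.Download(ctx, url, "/tmp/"+path.Base(url), nil); err != nil {
				log.Printf("download of %s failed: %v", url, err)
			}
		}(url)
	}
	wg.Wait()
}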
func (*Manager) FetchHeader
func (m *Manager) FetchHeader(ctx context.Context, url string) (header http.Header, contentLength int64, err error)
FetchHeader fetches the header of a URL (using HTTP method "HEAD").
Note that it may block waiting for one of the maximum number of parallel requests, so consider calling this on a separate goroutine.
The context ctx can be used to interrupt the request.
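A sketch of checking a file's size and type before deciding to download it (placeholder URL, assumed import path):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/gomlx/go-huggingface/downloader" // assumed import path
)

func main() {
	m := downloader.New()
	header, contentLength, err := m.FetchHeader(context.Background(), "https://example.com/file.bin")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Content-Type=%s, size=%d bytes\n", header.Get("Content-Type"), contentLength)
}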
func (*Manager) MaxParallel
func (m *Manager) MaxParallel(n int) *Manager
MaxParallel sets how many files to download at the same time. The default is 20. If set to a value <= 0, all files are downloaded in parallel. Set it to 1 to make downloads sequential.
func (*Manager) WithAuthToken
func (m *Manager) WithAuthToken(authToken string) *Manager
WithAuthToken sets the authentication token to use in requests. It is passed in the "Authorization" header, prefixed with "Bearer ".
Setting it to the empty string ("") resets it, so no authentication is used.
func (*Manager) WithUserAgent
func (m *Manager) WithUserAgent(userAgent string) *Manager
WithUserAgent sets the user agent to use in requests.
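Since MaxParallel, WithAuthToken and WithUserAgent all return the *Manager, configuration can be chained. A sketch (the environment variable and user agent string are illustrative):

package main

import (
	"os"

	"github.com/gomlx/go-huggingface/downloader" // assumed import path
)

func main() {
	m := downloader.New().
		MaxParallel(8).
		WithAuthToken(os.Getenv("HF_TOKEN")). // illustrative env var; "" disables authentication
		WithUserAgent("my-app/1.0")
	_ = m // Use m.Download / m.FetchHeader as usual.
}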
type ProgressCallback
type ProgressCallback func(downloadedBytes, totalBytes int64)
ProgressCallback is called as the download progresses.
- totalBytes may be 0 if the total size is not yet known.
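A sketch of a callback that guards against the not-yet-known total:

package main

import (
	"fmt"

	"github.com/gomlx/go-huggingface/downloader" // assumed import path
)

func main() {
	var cb downloader.ProgressCallback = func(downloadedBytes, totalBytes int64) {
		if totalBytes <= 0 {
			fmt.Printf("\rdownloaded %d bytes (total unknown)", downloadedBytes)
			return
		}
		fmt.Printf("\r%5.1f%% of %d bytes", 100*float64(downloadedBytes)/float64(totalBytes), totalBytes)
	}
	cb(512, 0)    // total not yet known
	cb(512, 2048) // 25.0%
}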
type Semaphore
type Semaphore struct {
// contains filtered or unexported fields
}
Semaphore is a semaphore that allows dynamic resizing.
It uses a sync.Cond to allow dynamic resizing, so it will be slower than a pure channel-based semaphore with a fixed capacity. This shouldn't matter for coarse-grained resource control.
Implementation copied from github.com/gomlx/gomlx/types/xsync.
func NewSemaphore
NewSemaphore returns a Semaphore that allows at most capacity simultaneous acquisitions. If capacity <= 0, there is no limit on acquisitions.
FIFO ordering may be lost during resizes (Semaphore.Resize) to a larger capacity, but otherwise it is respected.
func (*Semaphore) Acquire
func (s *Semaphore) Acquire()
Acquire acquires a resource, observing the current semaphore capacity. It must be matched by exactly one call to Semaphore.Release once the resource is no longer needed.
func (*Semaphore) Release
func (s *Semaphore) Release()
Release releases a resource previously acquired with Semaphore.Acquire.
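A sketch of the usual pairing, deferring Release so it runs exactly once per Acquire (this assumes NewSemaphore takes the capacity as its argument, as its description suggests):

package main

import (
	"fmt"
	"sync"

	"github.com/gomlx/go-huggingface/downloader" // assumed import path
)

func main() {
	sem := downloader.NewSemaphore(3) // at most 3 goroutines run the critical section at once
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			sem.Acquire()
			defer sem.Release() // exactly one Release per Acquire
			fmt.Printf("worker %d running\n", i)
		}(i)
	}
	wg.Wait()
}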
func (*Semaphore) Resize
Resize changes the number of available resources in the Semaphore.
If newCapacity is larger than the previous capacity, this may immediately allow pending Semaphore.Acquire calls to proceed. Note that since all waiting Semaphore.Acquire calls are awoken (broadcast), the queue order may be lost.
If newCapacity is smaller than the previous capacity, it has no effect on current acquisitions. So if the Semaphore is being used to control a worker pool, reducing its capacity won't stop workers that are currently executing.
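A sketch of resizing a semaphore that throttles a worker pool (again assuming NewSemaphore and Resize take capacities as their arguments, per their descriptions):

package main

import (
	"time"

	"github.com/gomlx/go-huggingface/downloader" // assumed import path
)

func main() {
	sem := downloader.NewSemaphore(2)
	for i := 0; i < 8; i++ {
		go func() {
			sem.Acquire()
			defer sem.Release()
			time.Sleep(100 * time.Millisecond) // placeholder work
		}()
	}
	time.Sleep(50 * time.Millisecond)
	sem.Resize(8) // pending Acquire calls may proceed immediately; queue order not guaranteed
	sem.Resize(1) // doesn't interrupt goroutines already past Acquire
	time.Sleep(time.Second) // crude wait for this sketch; prefer sync.WaitGroup in real code
}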