Package nn (v0.6.0)

Published: Feb 24, 2022 License: Apache-2.0 Imports: 10 Imported by: 35

Documentation

Constants

const SEP = "."

SEP is the separator used between path elements in tensor names.

Variables

This section is empty.

Functions

func BCELoss added in v0.3.14

func BCELoss(logits, target *ts.Tensor, opts ...LossFnOption) *ts.Tensor

BCELoss calculates a binary cross entropy loss.

- logits: tensor of shape [B, C, H, W] corresponding to the raw output of the model.
- target: ground truth tensor of shape [B, 1, H, W].
- posWeight: scalar representing the weight attributed to the positive class. This is especially useful for an imbalanced dataset.

func BatchAccuracyForLogits

func BatchAccuracyForLogits(vs *VarStore, m ts.ModuleT, xs, ys *ts.Tensor, d gotch.Device, batchSize int) (retVal float64)

BatchAccuracyForLogits calculates average accuracy of test batches.

NOTE: PyTorch uses `NoGradGuard`, a thread-local scope that sets a global flag checked by the backend whenever an op is run on a variable. The guard saves the current status and sets it to false in its constructor, then restores the saved status in its destructor. That makes it similar to a `with torch.no_grad():` block in Python. This approach does not work in Go. There are two ways to get around it: one is to freeze the VarStore; the other is to manually set autograd on the `loss` tensor, i.e. `loss = loss.MustSetRequiresGrad(true)`.
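
Example (a sketch of the two workarounds; `vs`, `model`, `testImages`, `testLabels`, `device`, and `loss` are assumed to exist):

// Workaround 1: freeze the var store while evaluating, then unfreeze.
vs.Freeze()
acc := nn.BatchAccuracyForLogits(vs, model, testImages, testLabels, device, 256)
vs.Unfreeze()
fmt.Printf("accuracy: %.4f\n", acc)

// Workaround 2: manually re-enable autograd on the loss tensor, as quoted above.
loss = loss.MustSetRequiresGrad(true)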

func BatchAccuracyForLogitsIdx

func BatchAccuracyForLogitsIdx(vs *VarStore, m ts.ModuleT, xs, ys *ts.Tensor, d gotch.Device, batchSize int) (retVal float64)

BatchAccuracyForLogitsIdx is an alternative to BatchAccuracyForLogits for calculating accuracy of a specified batch on module weights. It uses tensor indexing instead of Iter2.

func BatchAccuracyForLogitsOld added in v0.3.0

func BatchAccuracyForLogitsOld(vs *VarStore, m ts.ModuleT, xs, ys *ts.Tensor, d gotch.Device, batchSize int) (retVal float64)

func CrossEntropyLoss added in v0.3.14

func CrossEntropyLoss(logits, target *ts.Tensor, opts ...LossFnOption) *ts.Tensor

CrossEntropyLoss calculates cross entropy loss. Ref. https://github.com/pytorch/pytorch/blob/15be189f0de4addf4f68d18022500f67617ab05d/torch/nn/functional.py#L2012

- logits: tensor of shape [B, C, H, W] corresponding to the raw output of the model.
- target: ground truth tensor of shape [B, 1, H, W].
- posWeight: scalar representing the weight attributed to the positive class. This is especially useful for an imbalanced dataset.
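
Example (a minimal sketch; `logits` and `target` are assumed to exist, and the option value is illustrative):

// logits: [B, C, H, W] raw model output; target: [B, 1, H, W] ground truth.
loss := nn.CrossEntropyLoss(logits, target, nn.WithLossFnIgnoreIndex(-100))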

func NewConstInit

func NewConstInit(v float64) constInit

func NewGlorotNInit

func NewGlorotNInit() glorotNInit

func NewKaimingUniformInit

func NewKaimingUniformInit() kaimingUniformInit

func NewParameter added in v0.6.0

func NewParameter(path *Path, name string, x *ts.Tensor, requireGradOpt ...bool) *ts.Tensor

NewParameter creates a kind of tensor that is considered as a module parameter. Ref. https://pytorch.org/docs/stable/generated/torch.nn.parameter.Parameter.html

func NewRandnInit

func NewRandnInit(mean, stdev float64) randnInit

func NewUniformInit

func NewUniformInit(lo, up float64) uniformInit

func WithUint8

func WithUint8(n uint8) func() uint8

WithUint8 returns a uint8 value option

Types

type AdamConfig

type AdamConfig struct {
	Beta1 float64
	Beta2 float64
	Wd    float64
}

func DefaultAdamConfig

func DefaultAdamConfig() *AdamConfig

DefaultAdamConfig creates AdamConfig with default values

func NewAdamConfig

func NewAdamConfig(beta1, beta2, wd float64) *AdamConfig

NewAdamConfig creates AdamConfig with specified values

func (*AdamConfig) Build

func (c *AdamConfig) Build(vs *VarStore, lr float64) (*Optimizer, error)
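
Example (a minimal sketch, assuming `gotch.CPU` as the device):

vs := nn.NewVarStore(gotch.CPU)
opt, err := nn.DefaultAdamConfig().Build(vs, 1e-3)
if err != nil {
	log.Fatal(err)
}
fmt.Println(opt.GetLRs()) // [0.001]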

type AdamWConfig added in v0.3.11

type AdamWConfig struct {
	Beta1 float64
	Beta2 float64
	Wd    float64
}

func DefaultAdamWConfig added in v0.3.11

func DefaultAdamWConfig() *AdamWConfig

DefaultAdamWConfig creates AdamWConfig with default values

func NewAdamWConfig added in v0.3.11

func NewAdamWConfig(beta1, beta2, wd float64) *AdamWConfig

NewAdamWConfig creates AdamWConfig with specified values

func (*AdamWConfig) Build added in v0.3.11

func (c *AdamWConfig) Build(vs *VarStore, lr float64) (*Optimizer, error)

Build builds AdamW optimizer

type BatchNorm

type BatchNorm struct {
	RunningMean *ts.Tensor
	RunningVar  *ts.Tensor
	Ws          *ts.Tensor
	Bs          *ts.Tensor
	Nd          uint
	// contains filtered or unexported fields
}

A batch-normalization layer.

func BatchNorm1D

func BatchNorm1D(vs *Path, outDim int64, config *BatchNormConfig) *BatchNorm

Applies Batch Normalization over a three-dimensional input.

The input shape is assumed to be (N, C, L). Normalization is performed over the first batch dimension N.

func BatchNorm2D

func BatchNorm2D(vs *Path, outDim int64, config *BatchNormConfig) *BatchNorm

Applies Batch Normalization over a four-dimensional input.

The input shape is assumed to be (N, C, H, W). Normalization is performed over the first batch dimension N.

func BatchNorm3D

func BatchNorm3D(vs *Path, outDim int64, config *BatchNormConfig) *BatchNorm

Applies Batch Normalization over a five-dimensional input.

The input shape is assumed to be (N, C, D, H, W). Normalization is performed over the first batch dimension N.

func NewBatchNorm

func NewBatchNorm(vs *Path, nd uint, outDim int64, config *BatchNormConfig) *BatchNorm

NewBatchNorm creates a new BatchNorm layer

func (*BatchNorm) Forward added in v0.6.0

func (bn *BatchNorm) Forward(xs *ts.Tensor) (retVal *ts.Tensor)

Forward forwards inputs through the module. NOTE: This forward pass will update the BatchNorm weights by default (training=true). Wrap the module with `tensor.NoGrad()` when running the model in inference mode.
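
Example (a sketch of inference mode; `bn` and `xs` are assumed to exist, `ts` aliases gotch's tensor package, and `NoGrad` is assumed to run the closure with gradient tracking disabled):

var out *ts.Tensor
ts.NoGrad(func() {
	out = bn.ForwardT(xs, false) // train=false: running stats are not updated
})
fmt.Println(out.MustSize())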

func (*BatchNorm) ForwardT

func (bn *BatchNorm) ForwardT(xs *ts.Tensor, train bool) (retVal *ts.Tensor)

type BatchNormConfig

type BatchNormConfig struct {
	CudnnEnable bool
	Eps         float64
	Momentum    float64
	WsInit      Init
	BsInit      Init
}

Batch-normalization config.

func DefaultBatchNormConfig

func DefaultBatchNormConfig() *BatchNormConfig

type Conv

type Conv interface{}

func NewConv

func NewConv(vs *Path, inDim, outDim int64, ksizes []int64, config interface{}) Conv

NewConv is a generic builder to build Conv1D, Conv2D, Conv3D. It returns an interface Conv which might need a type assertion for further use.
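
Example (a sketch of the required type assertion; `vs` and `xs` are assumed, and it is an assumption that the builder dispatches on the number of kernel sizes):

conv := nn.NewConv(vs.Root(), 3, 16, []int64{3, 3}, nn.DefaultConv2DConfig())
c2d, ok := conv.(*nn.Conv2D)
if !ok {
	log.Fatal("expected *nn.Conv2D")
}
out := c2d.Forward(xs)
fmt.Println(out.MustSize())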

type Conv1D

type Conv1D struct {
	Ws     *ts.Tensor
	Bs     *ts.Tensor // optional
	Config *Conv1DConfig
}

Conv1D is convolution 1D struct.

func NewConv1D

func NewConv1D(vs *Path, inDim, outDim, k int64, cfg *Conv1DConfig) *Conv1D

NewConv1D creates Conv1D struct.

func (*Conv1D) Forward

func (c *Conv1D) Forward(xs *ts.Tensor) *ts.Tensor

func (*Conv1D) ForwardT

func (c *Conv1D) ForwardT(xs *ts.Tensor, train bool) *ts.Tensor

type Conv1DConfig

type Conv1DConfig struct {
	Stride   []int64
	Padding  []int64
	Dilation []int64
	Groups   int64
	Bias     bool
	WsInit   Init
	BsInit   Init
}

Conv1DConfig is configuration struct for convolution 1D.

func DefaultConv1DConfig

func DefaultConv1DConfig() *Conv1DConfig

DefaultConv1DConfig creates a default Conv1DConfig.

func NewConv1DConfig added in v0.4.2

func NewConv1DConfig(opts ...Conv1DConfigOpt) *Conv1DConfig

NewConv1DConfig creates Conv1DConfig.
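
Example (a sketch of the functional-option pattern shared by these configs; `vs` is assumed and the values are illustrative):

cfg := nn.NewConv1DConfig(
	nn.WithStride1D(2),
	nn.WithPadding1D(1),
	nn.WithBias1D(false),
)
conv := nn.NewConv1D(vs.Root(), 16, 32, 3, cfg)
fmt.Println(conv.Ws.MustSize())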

type Conv1DConfigOpt added in v0.4.2

type Conv1DConfigOpt func(*Conv1DConfig)

Conv1DConfigOpt is option for Conv1DConfig.

func WithBias1D added in v0.4.2

func WithBias1D(val bool) Conv1DConfigOpt

WithBias1D adds bias 1D option.

func WithBsInit1D added in v0.4.2

func WithBsInit1D(val Init) Conv1DConfigOpt

WithBsInit1D adds BsInit 1D option.

func WithDilation1D added in v0.4.2

func WithDilation1D(val int64) Conv1DConfigOpt

WithDilation1D adds dilation 1D option.

func WithGroup1D added in v0.4.2

func WithGroup1D(val int64) Conv1DConfigOpt

WithGroup1D adds group 1D option.

func WithPadding1D added in v0.4.2

func WithPadding1D(val int64) Conv1DConfigOpt

WithPadding1D adds padding 1D option.

func WithStride1D added in v0.4.2

func WithStride1D(val int64) Conv1DConfigOpt

WithStride1D adds stride 1D option.

func WithWsInit1D added in v0.4.2

func WithWsInit1D(val Init) Conv1DConfigOpt

WithWsInit1D adds WsInit 1D option.

type Conv2D

type Conv2D struct {
	Ws     *ts.Tensor
	Bs     *ts.Tensor // optional
	Config *Conv2DConfig
}

Conv2D is convolution 2D struct.

func NewConv2D

func NewConv2D(vs *Path, inDim, outDim int64, k int64, cfg *Conv2DConfig) *Conv2D

NewConv2D creates new Conv2D.

func (*Conv2D) Forward

func (c *Conv2D) Forward(xs *ts.Tensor) *ts.Tensor

func (*Conv2D) ForwardT

func (c *Conv2D) ForwardT(xs *ts.Tensor, train bool) *ts.Tensor

type Conv2DConfig

type Conv2DConfig struct {
	Stride   []int64
	Padding  []int64
	Dilation []int64
	Groups   int64
	Bias     bool
	WsInit   Init
	BsInit   Init
}

Conv2DConfig is configuration for convolution 2D.

func DefaultConv2DConfig

func DefaultConv2DConfig() *Conv2DConfig

DefaultConv2DConfig creates a default Conv2DConfig.

func NewConv2DConfig added in v0.4.2

func NewConv2DConfig(opts ...Conv2DConfigOpt) *Conv2DConfig

NewConv2DConfig creates Conv2DConfig.

type Conv2DConfigOpt added in v0.4.2

type Conv2DConfigOpt func(*Conv2DConfig)

Conv2DConfigOpt is option type for Conv2DConfig.

func WithBias2D added in v0.4.2

func WithBias2D(val bool) Conv2DConfigOpt

WithBias2D adds bias 2D option.

func WithBsInit2D added in v0.4.2

func WithBsInit2D(val Init) Conv2DConfigOpt

WithBsInit2D adds BsInit 2D option.

func WithDilation2D added in v0.4.2

func WithDilation2D(val int64) Conv2DConfigOpt

WithDilation2D adds dilation 2D option.

func WithGroup2D added in v0.4.2

func WithGroup2D(val int64) Conv2DConfigOpt

WithGroup2D adds group 2D option.

func WithPadding2D added in v0.4.2

func WithPadding2D(val int64) Conv2DConfigOpt

WithPadding2D adds padding 2D option.

func WithStride2D added in v0.4.2

func WithStride2D(val int64) Conv2DConfigOpt

WithStride2D adds stride 2D option.

func WithWsInit2D added in v0.4.2

func WithWsInit2D(val Init) Conv2DConfigOpt

WithWsInit2D adds WsInit 2D option.

type Conv3D

type Conv3D struct {
	Ws     *ts.Tensor
	Bs     *ts.Tensor // optional
	Config *Conv3DConfig
}

Conv3D is convolution 3D struct.

func NewConv3D

func NewConv3D(vs *Path, inDim, outDim, k int64, cfg *Conv3DConfig) *Conv3D

NewConv3D creates new Conv3D struct.

func (*Conv3D) Forward

func (c *Conv3D) Forward(xs *ts.Tensor) *ts.Tensor

func (*Conv3D) ForwardT

func (c *Conv3D) ForwardT(xs *ts.Tensor, train bool) *ts.Tensor

type Conv3DConfig

type Conv3DConfig struct {
	Stride   []int64
	Padding  []int64
	Dilation []int64
	Groups   int64
	Bias     bool
	WsInit   Init
	BsInit   Init
}

Conv3DConfig is configuration struct for convolution 3D.

func DefaultConv3DConfig added in v0.4.5

func DefaultConv3DConfig() *Conv3DConfig

DefaultConv3DConfig creates a default Conv3DConfig.

func NewConv3DConfig added in v0.4.5

func NewConv3DConfig(opts ...Conv3DConfigOpt) *Conv3DConfig

NewConv3DConfig creates Conv3DConfig.

type Conv3DConfigOpt added in v0.4.5

type Conv3DConfigOpt func(*Conv3DConfig)

Conv3DConfigOpt is option type for Conv3DConfig.

func WithBias3D added in v0.4.5

func WithBias3D(val bool) Conv3DConfigOpt

WithBias3D adds bias 3D option.

func WithBsInit3D added in v0.4.5

func WithBsInit3D(val Init) Conv3DConfigOpt

WithBsInit3D adds BsInit 3D option.

func WithDilation3D added in v0.4.5

func WithDilation3D(val int64) Conv3DConfigOpt

WithDilation3D adds dilation 3D option.

func WithGroup3D added in v0.4.5

func WithGroup3D(val int64) Conv3DConfigOpt

WithGroup3D adds group 3D option.

func WithPadding3D added in v0.4.5

func WithPadding3D(val int64) Conv3DConfigOpt

WithPadding3D adds padding 3D option.

func WithStride3D added in v0.4.5

func WithStride3D(val int64) Conv3DConfigOpt

WithStride3D adds stride 3D option.

func WithWsInit3D added in v0.4.5

func WithWsInit3D(val Init) Conv3DConfigOpt

WithWsInit3D adds WsInit 3D option.

type ConvTranspose1D

type ConvTranspose1D struct {
	Ws     *ts.Tensor
	Bs     *ts.Tensor // optional
	Config *ConvTranspose1DConfig
}

func NewConvTranspose1D

func NewConvTranspose1D(vs *Path, inDim, outDim int64, ksizes []int64, cfg *ConvTranspose1DConfig) *ConvTranspose1D

func (*ConvTranspose1D) Forward

func (c *ConvTranspose1D) Forward(xs *ts.Tensor) *ts.Tensor

type ConvTranspose1DConfig

type ConvTranspose1DConfig struct {
	Stride        []int64
	Padding       []int64
	OutputPadding []int64
	Dilation      []int64
	Groups        int64
	Bias          bool
	WsInit        Init
	BsInit        Init
}

func DefaultConvTranspose1DConfig

func DefaultConvTranspose1DConfig() *ConvTranspose1DConfig

DefaultConvTranspose1DConfig creates a default ConvTranspose1DConfig.

type ConvTranspose2D

type ConvTranspose2D struct {
	Ws     *ts.Tensor
	Bs     *ts.Tensor // optional
	Config *ConvTranspose2DConfig
}

func NewConvTranspose2D

func NewConvTranspose2D(vs *Path, inDim, outDim int64, ksizes []int64, cfg *ConvTranspose2DConfig) *ConvTranspose2D

func (*ConvTranspose2D) Forward

func (c *ConvTranspose2D) Forward(xs *ts.Tensor) *ts.Tensor

type ConvTranspose2DConfig

type ConvTranspose2DConfig struct {
	Stride        []int64
	Padding       []int64
	OutputPadding []int64
	Dilation      []int64
	Groups        int64
	Bias          bool
	WsInit        Init
	BsInit        Init
}

type ConvTranspose3D

type ConvTranspose3D struct {
	Ws     *ts.Tensor
	Bs     *ts.Tensor // optional
	Config *ConvTranspose3DConfig
}

func NewConvTranspose3D

func NewConvTranspose3D(vs *Path, inDim, outDim int64, ksizes []int64, cfg *ConvTranspose3DConfig) *ConvTranspose3D

func (*ConvTranspose3D) Forward

func (c *ConvTranspose3D) Forward(xs *ts.Tensor) *ts.Tensor

type ConvTranspose3DConfig

type ConvTranspose3DConfig struct {
	Stride        []int64
	Padding       []int64
	OutputPadding []int64
	Dilation      []int64
	Groups        int64
	Bias          bool
	WsInit        Init
	BsInit        Init
}

type CosineAnnealingLR added in v0.3.10

type CosineAnnealingLR struct {
	// contains filtered or unexported fields
}

CosineAnnealingLR sets the learning rates of each optimizer parameter group by using a cosine annealing schedule, where eta_max is set to the initial learning rate and T_cur is the number of epochs since the last restart in SGDR (Stochastic Gradient Descent with Warm Restarts).

NOTE: this implements only the cosine annealing part of SGDR, not the restarts. Ref.
- https://pytorch.org/docs/stable/optim.html#torch.optim.lr_scheduler.CosineAnnealingLR
- https://arxiv.org/abs/1608.03983

func NewCosineAnnealingLR added in v0.3.10

func NewCosineAnnealingLR(opt *Optimizer, tmax int, etaMin float64) *CosineAnnealingLR

NewCosineAnnealingLR creates a new CosineAnnealingLR.

func (*CosineAnnealingLR) Build added in v0.3.10

func (ca *CosineAnnealingLR) Build() *LRScheduler

Build implements scheduler interface.

func (*CosineAnnealingLR) SetLRs added in v0.3.10

func (ca *CosineAnnealingLR) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

type CosineAnnealingWarmRestarts added in v0.3.11

type CosineAnnealingWarmRestarts struct {
	// contains filtered or unexported fields
}

CosineAnnealingWarmRestarts sets the learning rate of each parameter group using a cosine annealing schedule.

Source: Stochastic Gradient Descent with Warm Restarts: https://arxiv.org/abs/1608.03983

func NewCosineAnnealingWarmRestarts added in v0.3.11

func NewCosineAnnealingWarmRestarts(opt *Optimizer, t0 int, opts ...CosineAnnealingWarmRestartsOption) *CosineAnnealingWarmRestarts

func (*CosineAnnealingWarmRestarts) Build added in v0.3.11

func (s *CosineAnnealingWarmRestarts) Build() *LRScheduler

Build implements scheduler interface.

func (*CosineAnnealingWarmRestarts) SetLRs added in v0.3.11

func (s *CosineAnnealingWarmRestarts) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

NOTE. scheduler.Step(epoch) could be called after every batch update

type CosineAnnealingWarmRestartsOption added in v0.3.11

type CosineAnnealingWarmRestartsOption func(*CosineAnnealingWarmRestartsOptions)

func WithCosineAnnealingLastEpoch added in v0.3.11

func WithCosineAnnealingLastEpoch(v int) CosineAnnealingWarmRestartsOption

func WithEtaMin added in v0.3.11

func WithEtaMin(v float64) CosineAnnealingWarmRestartsOption

func WithTMult added in v0.3.11

func WithTMult(v int) CosineAnnealingWarmRestartsOption

type CosineAnnealingWarmRestartsOptions added in v0.3.11

type CosineAnnealingWarmRestartsOptions struct {
	TMult     int
	EtaMin    float64
	LastEpoch int
}

type CyclicLR added in v0.3.11

type CyclicLR struct {
	// contains filtered or unexported fields
}

CyclicLR sets the learning rate of each parameter group according to the cyclical learning rate policy (CLR). The policy cycles the learning rate between two boundaries with a constant frequency, as detailed in the paper "Cyclical Learning Rates for Training Neural Networks". The distance between the two boundaries can be scaled on a per-iteration or per-cycle basis.

Cyclical learning rate policy changes the learning rate after every batch. `Step()` should be called after a batch has been used for training. This class has three built-in policies, as put forth in the paper:
- "triangular": a basic triangular cycle without amplitude scaling.
- "triangular2": a basic triangular cycle that scales initial amplitude by half each cycle.
- "exp_range": a cycle that scales initial amplitude by gamma^(cycle iterations) at each cycle iteration.

Source: - Cyclical Learning Rates for Training Neural Networks: https://arxiv.org/abs/1506.01186 - bckenstler/CLR: https://github.com/bckenstler/CLR

func NewCyclicLR added in v0.3.11

func NewCyclicLR(opt *Optimizer, baseLRs, maxLRs []float64, opts ...CyclicOption) *CyclicLR

func (*CyclicLR) Build added in v0.3.11

func (cyc *CyclicLR) Build() *LRScheduler

Build implements scheduler interface.

func (*CyclicLR) SetLRs added in v0.3.11

func (cyc *CyclicLR) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

It calculates the learning rate at batch index. This function treats `lastEpoch` as the last batch index. NOTE: If `cycleMomentum` is `true`, this function has a side effect of updating the optimizer's momentum.

type CyclicOption added in v0.3.11

type CyclicOption func(*CyclicOptions)

func WithCyclicBaseMomentum added in v0.3.11

func WithCyclicBaseMomentum(v float64) CyclicOption

func WithCyclicCycleMomentum added in v0.3.11

func WithCyclicCycleMomentum(v bool) CyclicOption

func WithCyclicGamma added in v0.3.11

func WithCyclicGamma(v float64) CyclicOption

func WithCyclicLastEpoch added in v0.3.11

func WithCyclicLastEpoch(v int) CyclicOption

func WithCyclicMaxMomentum added in v0.3.11

func WithCyclicMaxMomentum(v float64) CyclicOption

func WithCyclicMode added in v0.3.11

func WithCyclicMode(v string) CyclicOption

func WithCyclicScaleFn added in v0.3.11

func WithCyclicScaleFn(v func(x float64) float64) CyclicOption

func WithCyclicScaleMode added in v0.3.11

func WithCyclicScaleMode(v string) CyclicOption

func WithCyclicStepSizeDown added in v0.3.11

func WithCyclicStepSizeDown(v int) CyclicOption

func WithCyclicStepSizeUp added in v0.3.11

func WithCyclicStepSizeUp(v int) CyclicOption

type CyclicOptions added in v0.3.11

type CyclicOptions struct {
	StepSizeUp    int                     // 2000
	StepSizeDown  int                     // -1
	Mode          string                  // "triangular"
	Gamma         float64                 // 1.0
	ScaleFn       func(x float64) float64 // nil
	ScaleMode     string                  // "cycle"
	CycleMomentum bool                    // true
	BaseMomentum  float64                 // 0.8
	MaxMomentum   float64                 // 0.9
	LastEpoch     int                     // -1
}

type Dropout added in v0.6.0

type Dropout struct {
	// contains filtered or unexported fields
}

Dropout represents a neural network dropout layer.

func NewDropout added in v0.6.0

func NewDropout(p float64) *Dropout

NewDropout creates a new Dropout layer

func (*Dropout) ForwardT added in v0.6.0

func (d *Dropout) ForwardT(input *ts.Tensor, train bool) (retVal *ts.Tensor)

ForwardT implements ModuleT for Dropout layer.

type Embedding

type Embedding struct {
	Ws *ts.Tensor
	// contains filtered or unexported fields
}

An embedding layer.

An embedding layer acts as a simple lookup table that stores embeddings. This is commonly used to store word embeddings.

func NewEmbedding

func NewEmbedding(vs *Path, numEmbeddings int64, embeddingDim int64, config *EmbeddingConfig) *Embedding

NewEmbedding creates a new Embedding

func (*Embedding) Forward

func (e *Embedding) Forward(xs *ts.Tensor) *ts.Tensor

Forward implements Module interface for Embedding

func (*Embedding) ForwardT

func (e *Embedding) ForwardT(xs *ts.Tensor, train bool) *ts.Tensor

ForwardT implements ModuleT interface for Embedding

type EmbeddingConfig

type EmbeddingConfig struct {
	Sparse          bool
	ScaleGradByFreq bool
	WsInit          Init
	PaddingIdx      int64
}

Configuration option for an embedding layer.

func DefaultEmbeddingConfig

func DefaultEmbeddingConfig() *EmbeddingConfig

type Entry

type Entry struct {
	// contains filtered or unexported fields
}

Entry holds an entry corresponding to a given name in Path.

func (*Entry) OrKaimingUniform

func (e *Entry) OrKaimingUniform(dims []int64) *ts.Tensor

OrKaimingUniform returns the existing entry if it exists, otherwise creates a new variable.

func (*Entry) OrOnes

func (e *Entry) OrOnes(dims []int64) *ts.Tensor

OrOnes returns the existing entry if it exists, otherwise creates a new variable.

func (*Entry) OrOnesNoTrain

func (e *Entry) OrOnesNoTrain(dims []int64) *ts.Tensor

OrOnesNoTrain returns the existing entry if it exists, otherwise creates a new variable.

func (*Entry) OrRandn

func (e *Entry) OrRandn(dims []int64, mean, stdev float64) *ts.Tensor

OrRandn returns the existing entry if it exists, otherwise creates a new variable.

func (*Entry) OrRandnStandard

func (e *Entry) OrRandnStandard(dims []int64) *ts.Tensor

OrRandnStandard returns the existing entry if it exists, otherwise creates a new variable.

func (*Entry) OrUniform

func (e *Entry) OrUniform(dims []int64, lo, up float64) (retVal *ts.Tensor)

OrUniform returns the existing entry if it exists, otherwise creates a new variable.

func (*Entry) OrVar

func (e *Entry) OrVar(dims []int64, init Init) *ts.Tensor

OrVar returns the existing entry if it exists, otherwise creates a new variable.

If this entry name matches the name of a variable stored in the var store, the corresponding tensor is returned. Otherwise a new variable is added to the var-store with the entry name and is initialized according to the init parameter.
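
Example (a minimal sketch of in-place entry manipulation; `path` is assumed, and the name and shape are illustrative):

e := path.Entry("weight")
// Returns the existing "weight" variable if present; otherwise adds a
// zero-initialized trainable variable of the given shape to the var store.
w := e.OrZeros([]int64{10, 5})
fmt.Println(w.MustSize()) // [10 5]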

func (*Entry) OrVarCopy

func (e *Entry) OrVarCopy(tensor *ts.Tensor) *ts.Tensor

OrVarCopy returns the existing entry if it exists, otherwise creates a new variable.

func (*Entry) OrZeros

func (e *Entry) OrZeros(dims []int64) *ts.Tensor

OrZeros returns the existing entry if it exists, otherwise creates a new variable.

func (*Entry) OrZerosNoTrain

func (e *Entry) OrZerosNoTrain(dims []int64) *ts.Tensor

OrZerosNoTrain returns the existing entry if it exists, otherwise creates a new variable.

type ExponentialLR added in v0.3.10

type ExponentialLR struct {
	// contains filtered or unexported fields
}

ExponentialLR decays the learning rates of each optimizer parameter group by gamma every epoch.

func NewExponentialLR added in v0.3.10

func NewExponentialLR(opt *Optimizer, gamma float64) *ExponentialLR

NewExponentialLR creates a new ExponentialLR.

func (*ExponentialLR) Build added in v0.3.10

func (e *ExponentialLR) Build() *LRScheduler

Build implements scheduler interface.

func (*ExponentialLR) SetLRs added in v0.3.10

func (e *ExponentialLR) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

type ForwardTWith

type ForwardTWith func(*ts.Tensor, bool) *ts.Tensor

func (ForwardTWith) ForwardT

func (fw ForwardTWith) ForwardT(xs *ts.Tensor, train bool) *ts.Tensor

type ForwardWith

type ForwardWith func(*ts.Tensor) *ts.Tensor

ForwardWith is a handler function to implement Module interface for any (anonymous) function it wraps.

Ref. https://stackoverflow.com/a/42182987

NOTE: Specifically, `ForwardWith` is used to wrap an anonymous function as the input parameter of the Sequential `AddFn` method.

func (ForwardWith) Forward

func (fw ForwardWith) Forward(xs *ts.Tensor) *ts.Tensor

type Func

type Func struct {
	// contains filtered or unexported fields
}

func NewFunc

func NewFunc(fn func(*ts.Tensor) *ts.Tensor) (retVal Func)

func (Func) Forward

func (fn Func) Forward(xs *ts.Tensor) (retVal *ts.Tensor)

Forward implements the Module interface for Func.

func (Func) ForwardT

func (fn Func) ForwardT(xs *ts.Tensor, train bool) (retVal *ts.Tensor)

ForwardT implements ModuleT for Func object as well.

NOTE: train param will not be used.

type FuncT

type FuncT struct {
	// contains filtered or unexported fields
}

func NewFuncT

func NewFuncT(fn func(*ts.Tensor, bool) *ts.Tensor) (retVal FuncT)

func (FuncT) ForwardT

func (fn FuncT) ForwardT(xs *ts.Tensor, train bool) (retVal *ts.Tensor)

ForwardT implements the ModuleT interface for FuncT.

type GRU

type GRU struct {
	// contains filtered or unexported fields
}

A Gated Recurrent Unit (GRU) layer.

https://en.wikipedia.org/wiki/Gated_recurrent_unit

func NewGRU

func NewGRU(vs *Path, inDim, hiddenDim int64, cfg *RNNConfig) (retVal *GRU)

NewGRU creates a new GRU layer

func (*GRU) Seq

func (g *GRU) Seq(input *ts.Tensor) (*ts.Tensor, State)

func (*GRU) SeqInit

func (g *GRU) SeqInit(input *ts.Tensor, inState State) (*ts.Tensor, State)

func (*GRU) Step

func (g *GRU) Step(input *ts.Tensor, inState State) State

func (*GRU) ZeroState

func (g *GRU) ZeroState(batchDim int64) State

type GRUState

type GRUState struct {
	Tensor *ts.Tensor
}

GRUState is a GRU state. It contains a single tensor.

func (*GRUState) Value

func (gs *GRUState) Value() *ts.Tensor

type Identity added in v0.6.0

type Identity struct{}

func NewIdentity added in v0.6.0

func NewIdentity() *Identity

func (*Identity) Forward added in v0.6.0

func (m *Identity) Forward(x *ts.Tensor) *ts.Tensor

type Init

type Init interface {
	// creates a new tensor with specified initiation
	InitTensor(dims []int64, device gotch.Device) (retVal *ts.Tensor)

	// re-initializes (in-place) an existing tensor with the specified initiation
	Set(tensor *ts.Tensor)
}

type LRScheduler added in v0.3.10

type LRScheduler struct {
	// contains filtered or unexported fields
}

LRScheduler is a scheduler to update optimizer learning rates.

func NewLRScheduler added in v0.4.2

func NewLRScheduler(s scheduler) *LRScheduler

func (*LRScheduler) Step added in v0.3.10

func (s *LRScheduler) Step(opts ...SchedulerOption)

Step updates optimizer learning rate.
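
Example (a sketch of driving a scheduler from a training loop; `vs` is assumed and the values are illustrative):

opt, err := nn.DefaultSGDConfig().Build(vs, 0.1)
if err != nil {
	log.Fatal(err)
}
sched := nn.NewStepLR(opt, 30, 0.1).Build() // decay LRs by 0.1 every 30 epochs
for epoch := 0; epoch < 90; epoch++ {
	// ... train for one epoch using opt ...
	sched.Step()
}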

type LSTM

type LSTM struct {
	// contains filtered or unexported fields
}

A Long Short-Term Memory (LSTM) layer.

https://en.wikipedia.org/wiki/Long_short-term_memory

func NewLSTM

func NewLSTM(vs *Path, inDim, hiddenDim int64, cfg *RNNConfig) *LSTM

NewLSTM creates an LSTM layer.

func (*LSTM) Seq

func (l *LSTM) Seq(input *ts.Tensor) (*ts.Tensor, State)

func (*LSTM) SeqInit

func (l *LSTM) SeqInit(input *ts.Tensor, inState State) (*ts.Tensor, State)

func (*LSTM) Step

func (l *LSTM) Step(input *ts.Tensor, inState State) State

func (*LSTM) ZeroState

func (l *LSTM) ZeroState(batchDim int64) State
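
Example (a minimal sketch; the input has dimensions [batch_size, seq_len, features] as described by the RNN interface, and `MustRandn(size, dtype, device)` and `MustSize` are assumed from gotch's generated tensor API):

lstm := nn.NewLSTM(vs.Root(), 10, 20, nn.DefaultRNNConfig())
input := ts.MustRandn([]int64{8, 5, 10}, gotch.Float, gotch.CPU) // [batch, seqLen, features]
output, state := lstm.Seq(input)
_ = state // final LSTMState (hidden and cell states)
fmt.Println(output.MustSize()) // expect [8 5 20] with the default (batch-first) config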

type LSTMState

type LSTMState struct {
	Tensor1 *ts.Tensor
	Tensor2 *ts.Tensor
}

The state for an LSTM network; it contains two tensors.

func (*LSTMState) C

func (ls *LSTMState) C() *ts.Tensor

The cell state vector.

func (*LSTMState) H

func (ls *LSTMState) H() *ts.Tensor

The hidden state vector, which is also the output of the LSTM.

type LambdaFn added in v0.3.10

type LambdaFn func(in interface{}) float64

type LambdaLR added in v0.3.10

type LambdaLR struct {
	// contains filtered or unexported fields
}

LambdaLR calculates a new learning rate for each parameter group by applying a Lambda function to the corresponding INITIAL learning rate.

func NewLambdaLR added in v0.3.10

func NewLambdaLR(opt *Optimizer, ldFns []LambdaFn) *LambdaLR

NewLambdaLR creates a new LambdaLR.

func (*LambdaLR) Build added in v0.3.10

func (l *LambdaLR) Build() *LRScheduler

Build implements scheduler interface.

func (*LambdaLR) SetLRs added in v0.3.10

func (l *LambdaLR) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

type LayerNorm

type LayerNorm struct {
	Config          *LayerNormConfig
	Ws              *ts.Tensor // optional
	Bs              *ts.Tensor // optional
	NormalizedShape []int64
}

A layer-normalization layer.

func NewLayerNorm

func NewLayerNorm(vs *Path, normalizedShape []int64, config *LayerNormConfig) *LayerNorm

func (*LayerNorm) Forward

func (ln *LayerNorm) Forward(xs *ts.Tensor) (retVal *ts.Tensor)

type LayerNormConfig

type LayerNormConfig struct {
	CudnnEnable       bool
	Eps               float64
	ElementwiseAffine bool
	WsInit            Init
	BsInit            Init
}

Layer-normalization config.

func DefaultLayerNormConfig

func DefaultLayerNormConfig() *LayerNormConfig

type Linear

type Linear struct {
	Ws *ts.Tensor
	Bs *ts.Tensor
}

Linear is a linear fully-connected layer

func NewLinear

func NewLinear(vs *Path, inDim, outDim int64, c *LinearConfig) *Linear

NewLinear creates a new linear layer: y = x*w^T + b

- inDim: input dimension (x) [input features, columns]
- outDim: output dimension (y) [output features, columns]

NOTE: w will have shape {outDim, inDim}; b will have shape {outDim}.

func (*Linear) Forward

func (l *Linear) Forward(xs *ts.Tensor) (retVal *ts.Tensor)

Forward proceeds the input node through the linear layer.

NOTE:
- It assumes the input node has 2 dimensions (a matrix). For the matrix multiplication to work, the input node must have the same number of columns as the number of columns in the `Linear` layer's `Ws` property, since the weight matrix is transposed before being multiplied with the input node. (Both are `inDim`.)
- The input node should have shape {batchSize, inDim}. The input feature count is `inDim` and the output feature count is `outDim` in the `LinearConfig` struct.

Example:

inDim := 3
outDim := 2
batchSize := 4
weights: 2x3
[ 1 1 1
  1 1 1 ]

input node: 4x3
[ 1 1 1
  1 1 1
  1 1 1
  1 1 1 ]
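
The same example as a minimal code sketch (`MustRandn(size, dtype, device)` and `MustSize` are assumed from gotch's generated tensor API):

vs := nn.NewVarStore(gotch.CPU)
lin := nn.NewLinear(vs.Root(), 3, 2, nn.DefaultLinearConfig()) // inDim=3, outDim=2
x := ts.MustRandn([]int64{4, 3}, gotch.Float, gotch.CPU)       // {batchSize, inDim}
y := lin.Forward(x)
fmt.Println(y.MustSize()) // [4 2] = {batchSize, outDim}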

func (*Linear) ForwardT

func (l *Linear) ForwardT(xs *ts.Tensor, train bool) (retVal *ts.Tensor)

ForwardT implements ModuleT interface for Linear layer.

NOTE: train param will not be used.

type LinearConfig

type LinearConfig struct {
	WsInit Init // initial weights
	BsInit Init // optional initial bias
	Bias   bool
}

LinearConfig is a configuration for a linear layer

func DefaultLinearConfig

func DefaultLinearConfig() *LinearConfig

DefaultLinearConfig creates a default LinearConfig with weights initialized using KaimingUniform and bias set to true

type LossFnOption added in v0.3.14

type LossFnOption func(*lossFnOptions)

func WithLossFnIgnoreIndex added in v0.3.14

func WithLossFnIgnoreIndex(val int64) LossFnOption

func WithLossFnPosWeight added in v0.3.14

func WithLossFnPosWeight(val int64) LossFnOption

func WithLossFnReduction added in v0.3.14

func WithLossFnReduction(val int64) LossFnOption

func WithLossFnWeights added in v0.3.14

func WithLossFnWeights(vals []float64) LossFnOption

type MaxPool2D added in v0.6.0

type MaxPool2D struct {
	Kernel   []int64
	Stride   []int64
	Padding  []int64
	Dilation []int64
	CeilMode bool
}

func NewMaxPool2D added in v0.6.0

func NewMaxPool2D(kernelSize []int64, opts ...MaxPool2DOpt) *MaxPool2D

func (*MaxPool2D) Forward added in v0.6.0

func (m *MaxPool2D) Forward(x *ts.Tensor) *ts.Tensor

type MaxPool2DOpt added in v0.6.0

type MaxPool2DOpt func(*MaxPool2DOpts)

func OptCeilModeMp2D added in v0.6.0

func OptCeilModeMp2D(v bool) MaxPool2DOpt

func OptDilationMp2D added in v0.6.0

func OptDilationMp2D(v []int64) MaxPool2DOpt

func OptPaddingMp2D added in v0.6.0

func OptPaddingMp2D(v []int64) MaxPool2DOpt

func OptStrideMp2D added in v0.6.0

func OptStrideMp2D(v []int64) MaxPool2DOpt

type MaxPool2DOpts added in v0.6.0

type MaxPool2DOpts struct {
	Stride   []int64
	Padding  []int64
	Dilation []int64
	CeilMode bool
}

func DefaultMaxPool2DOpts added in v0.6.0

func DefaultMaxPool2DOpts() *MaxPool2DOpts

type MultiStepLR added in v0.3.10

type MultiStepLR struct {
	// contains filtered or unexported fields
}

MultiStepLR decays the learning rates of each optimizer parameter group by gamma once the number of epochs reaches one of the milestones.

NOTE. Such decay can happen simultaneously with other changes to the learning rate from outside this scheduler.

func NewMultiStepLR added in v0.3.10

func NewMultiStepLR(opt *Optimizer, milestones []int, gamma float64) *MultiStepLR

NewMultiStepLR creates a new MultiStepLR.

func (*MultiStepLR) Build added in v0.3.10

func (ms *MultiStepLR) Build() *LRScheduler

Build implements scheduler interface.

func (*MultiStepLR) SetLRs added in v0.3.10

func (ms *MultiStepLR) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

type MultiplicativeLR added in v0.3.10

type MultiplicativeLR struct {
	// contains filtered or unexported fields
}

MultiplicativeLR calculates new learning rates for each optimizer parameter group by applying the corresponding Lambda function to the CURRENT learning rate.

func NewMultiplicativeLR added in v0.3.10

func NewMultiplicativeLR(opt *Optimizer, ldFns []LambdaFn) *MultiplicativeLR

NewMultiplicativeLR creates a new MultiplicativeLR.

func (*MultiplicativeLR) Build added in v0.3.10

func (m *MultiplicativeLR) Build() *LRScheduler

Build implements scheduler interface.

func (*MultiplicativeLR) SetLRs added in v0.3.10

func (m *MultiplicativeLR) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

type OneCycleLR added in v0.3.11

type OneCycleLR struct {
	// contains filtered or unexported fields
}

OneCycleLR sets the learning rate of each parameter group according to the 1cycle learning rate policy. The 1cycle policy anneals the learning rate from an initial learning rate to some maximum learning rate and then from that maximum learning rate to some minimum learning rate much lower than the initial learning rate.

This policy was initially described in the paper "Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates". The 1cycle learning rate policy changes the learning rate after every batch. `Step()` should be called after a batch has been used for training. This scheduler is not chainable.

Note also that the total number of steps in the cycle can be determined in one of two ways (listed in order of precedence):
- A value for total_steps is explicitly provided.
- A number of epochs (epochs) and a number of steps per epoch (steps_per_epoch) are provided; the total number of steps is then inferred as total_steps = epochs * steps_per_epoch.

You must either provide a value for total_steps or provide values for both epochs and steps_per_epoch.

Source: Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates https://arxiv.org/abs/1708.07120
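
Example (a sketch of the second way, providing both epochs and steps per epoch; `opt` is assumed and the values are illustrative):

sched := nn.NewOneCycleLR(opt, 0.01,
	nn.WithOneCycleEpochs(10),
	nn.WithOneCycleStepsPerEpoch(937),
).Build() // total_steps inferred as 10 * 937
sched.Step() // call after every training batch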

func NewOneCycleLR added in v0.3.11

func NewOneCycleLR(opt *Optimizer, maxLR float64, opts ...OneCycleOption) *OneCycleLR

func (*OneCycleLR) Build added in v0.3.11

func (oc *OneCycleLR) Build() *LRScheduler

func (*OneCycleLR) SetLRs added in v0.3.11

func (oc *OneCycleLR) SetLRs(opts ...SchedulerOption)

type OneCycleOption added in v0.3.11

type OneCycleOption func(*OneCycleOptions)

func WithOneCycleAnnealStrategy added in v0.3.11

func WithOneCycleAnnealStrategy(v string) OneCycleOption

func WithOneCycleBaseMomentum added in v0.3.11

func WithOneCycleBaseMomentum(v float64) OneCycleOption

func WithOneCycleCycleMomentum added in v0.3.11

func WithOneCycleCycleMomentum(v bool) OneCycleOption

func WithOneCycleDivFactor added in v0.3.11

func WithOneCycleDivFactor(v float64) OneCycleOption

func WithOneCycleEpochs added in v0.3.11

func WithOneCycleEpochs(v int) OneCycleOption

func WithOneCycleFinalDivFactor added in v0.3.11

func WithOneCycleFinalDivFactor(v float64) OneCycleOption

func WithOneCycleLastEpoch added in v0.3.11

func WithOneCycleLastEpoch(v int) OneCycleOption

func WithOneCycleMaxMomentum added in v0.3.11

func WithOneCycleMaxMomentum(v float64) OneCycleOption

func WithOneCyclePctStart added in v0.3.11

func WithOneCyclePctStart(v float64) OneCycleOption

func WithOneCycleStepsPerEpoch added in v0.3.11

func WithOneCycleStepsPerEpoch(v int) OneCycleOption

func WithOneCycleTotalSteps added in v0.3.11

func WithOneCycleTotalSteps(v int) OneCycleOption

type OneCycleOptions added in v0.3.11

type OneCycleOptions struct {
	TotalSteps     int
	Epochs         int
	StepsPerEpoch  int
	PctStart       float64
	AnnealStrategy string
	CycleMomentum  bool
	BaseMomentum   float64
	MaxMomentum    float64
	DivFactor      float64
	FinalDivFactor float64
	LastEpoch      int
}

type Optimizer

type Optimizer struct {
	// contains filtered or unexported fields
}

Optimizer is a struct object to run gradient descent.

func (*Optimizer) AddParamGroup added in v0.3.10

func (opt *Optimizer) AddParamGroup(tensors []ts.Tensor)

func (*Optimizer) BackwardStep

func (opt *Optimizer) BackwardStep(loss *ts.Tensor)

BackwardStep applies a backward step pass, updates the gradients, and performs an optimization step.
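
Example (a sketch of one training iteration; `model`, `opt`, `x`, and `y` are assumed to exist):

logits := model.ForwardT(x, true)
loss := nn.CrossEntropyLoss(logits, y)
opt.BackwardStep(loss) // backward pass, gradient update, and optimization step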

func (*Optimizer) BackwardStepClip

func (opt *Optimizer) BackwardStepClip(loss *ts.Tensor, max float64)

BackwardStepClip applies a backward step pass, updates the gradients, and performs an optimization step.

The gradients are clipped based on `max` before being applied.

func (*Optimizer) BackwardStepClipNorm added in v0.3.10

func (opt *Optimizer) BackwardStepClipNorm(loss *ts.Tensor, max float64)

TODO. Applies a backward step pass, updates the gradients, and performs an optimization step.

The gradients L2 norm is clipped based on `max`.

func (*Optimizer) ClipGradNorm added in v0.3.10

func (opt *Optimizer) ClipGradNorm(max float64)

TODO. Clips gradient L2 norm over all trainable parameters.

The norm is computed over all gradients together, as if they were concatenated into a single vector.

func (*Optimizer) ClipGradValue

func (opt *Optimizer) ClipGradValue(max float64)

ClipGradValue clips gradient values at some specified maximum value.

func (*Optimizer) GetLRs added in v0.3.10

func (opt *Optimizer) GetLRs() []float64

func (*Optimizer) ParamGroupNum added in v0.3.10

func (opt *Optimizer) ParamGroupNum() int

func (*Optimizer) ResetStepCount added in v0.3.10

func (opt *Optimizer) ResetStepCount()

ResetStepCount set step count to zero.

func (*Optimizer) SetLR

func (opt *Optimizer) SetLR(lr float64)

SetLR sets the optimizer learning rate.

NOTE. it sets a SINGLE value of learning rate for all parameter groups. Most of the time, there's one parameter group.

func (*Optimizer) SetLRs added in v0.3.10

func (opt *Optimizer) SetLRs(lrs []float64)

SetLRs sets learning rates for ALL parameter groups respectively.

func (*Optimizer) SetMomentum

func (opt *Optimizer) SetMomentum(m float64)

SetMomentum sets the optimizer momentum.

func (*Optimizer) Step

func (opt *Optimizer) Step()

Step performs an optimization step, updating the tracked tensors based on their gradients.

func (*Optimizer) StepCount added in v0.3.10

func (opt *Optimizer) StepCount() int

StepCount get current step count.

func (*Optimizer) ZeroGrad

func (opt *Optimizer) ZeroGrad()

ZeroGrad zeroes the gradient for the tensors tracked by this optimizer.

type OptimizerConfig

type OptimizerConfig interface {

	// Build builds an optimizer with the specified learning rate handling variables stored in `vs`.
	//
	// NOTE: Build is a 'default' method. It can be called by wrapping
	// 'DefaultBuild' function
	// E.g. the AdamOptimizerConfig struct has a method to fulfill the `Build` method of
	// OptimizerConfig by wrapping `DefaultBuild` like:
	// (config AdamOptimizerConfig) Build(vs VarStore, lr float64) (retVal Optimizer, err error){
	//		return defaultBuild(config, vs, lr)
	// }
	Build(vs *VarStore, lr float64) (*Optimizer, error)
	// contains filtered or unexported methods
}

OptimizerConfig defines Optimizer configurations. These configs can be used to build an optimizer.

type Path

type Path struct {
	// contains filtered or unexported fields
}

Path is a variable store with an associated path for variable naming.

func (*Path) Add added in v0.3.7

func (p *Path) Add(name string, x *ts.Tensor, trainable bool) *ts.Tensor

Add adds a tensor to a given path.

func (*Path) Device

func (p *Path) Device() gotch.Device

Device gets the device where the var-store variables are stored.

func (*Path) Entry

func (p *Path) Entry(name string) *Entry

Entry gets the entry corresponding to a given name for in-place manipulation.

func (*Path) Get

func (p *Path) Get(name string) (*ts.Tensor, error)

Get gets the tensor corresponding to a given name if present.

func (*Path) KaimingUniform

func (p *Path) KaimingUniform(name string, dims []int64) *ts.Tensor

KaimingUniform creates a new variable initialized randomly with kaiming uniform.

The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized randomly using a uniform distribution whose bounds follow Kaiming initialization.

func (*Path) NewVar

func (p *Path) NewVar(name string, dims []int64, ini Init) *ts.Tensor

NewVar creates a new variable.

The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized as per the related argument.

func (*Path) Ones

func (p *Path) Ones(name string, dims []int64) *ts.Tensor

Ones creates a new variable initialized with ones.

The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized with ones.

func (*Path) OnesNoTrain

func (p *Path) OnesNoTrain(name string, dims []int64) *ts.Tensor

OnesNoTrain creates a new variable initialized with ones.

The new variable is named according to the name parameter and has the specified shape. The variable will not be trainable so gradients will not be tracked. The variable uses a float tensor initialized with ones.

func (*Path) Paths added in v0.6.0

func (p *Path) Paths() []string

Paths returns all sub paths from current path.

func (*Path) Randn

func (p *Path) Randn(name string, dims []int64, mean float64, stdev float64) *ts.Tensor

Randn creates a new variable initialized randomly with normal distribution.

The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized randomly using a normal distribution with the specified mean and standard deviation.

func (*Path) RandnStandard

func (p *Path) RandnStandard(name string, dims []int64) *ts.Tensor

RandnStandard creates a new variable initialized randomly with normal distribution.

The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized randomly using a standard normal distribution.

func (*Path) SetGroup added in v0.3.10

func (p *Path) SetGroup(g uint)

func (*Path) Sub

func (p *Path) Sub(str string) *Path

Sub gets a sub-path of the given path.
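
Example (a sketch of hierarchical naming; `vs` is assumed, and variable names join sub-path elements with SEP):

p := vs.Root().Sub("encoder").Sub("layer1")
// Creates a trainable variable stored under the name "encoder.layer1.weight".
w := p.NewVar("weight", []int64{16, 8}, nn.NewKaimingUniformInit())
fmt.Println(w.MustSize()) // [16 8]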

func (*Path) Uniform

func (p *Path) Uniform(name string, dims []int64, lo, up float64) *ts.Tensor

Uniform creates a new variable initialized randomly with uniform distribution.

The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized randomly using a uniform distribution between the specified bounds.

func (*Path) VarCopy

func (p *Path) VarCopy(name string, t *ts.Tensor) *ts.Tensor

VarCopy creates a new variable initialized by copying an existing tensor.

The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized by copying some given tensor.

func (*Path) Zeros

func (p *Path) Zeros(name string, dims []int64) *ts.Tensor

Zeros creates a new variable initialized with zeros.

The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized with zeros.

func (*Path) ZerosNoTrain

func (p *Path) ZerosNoTrain(name string, dims []int64) *ts.Tensor

ZerosNoTrain creates a new variable initialized with zeros.

The new variable is named according to the name parameter and has the specified shape. The variable will not be trainable so gradients will not be tracked. The variable uses a float tensor initialized with zeros.

type RMSPropConfig

type RMSPropConfig struct {
	Alpha    float64
	Eps      float64
	Wd       float64
	Momentum float64
	Centered bool
}

func DefaultRMSPropConfig

func DefaultRMSPropConfig() *RMSPropConfig

DefaultAdamConfig creates AdamConfig with default values

func NewRMSPropConfig

func NewRMSPropConfig(alpha, eps, wd, momentum float64, centered bool) *RMSPropConfig

NewRMSPropConfig creates RMSPropConfig with specified values

func (*RMSPropConfig) Build

func (c *RMSPropConfig) Build(vs *VarStore, lr float64) (*Optimizer, error)

type RNN

type RNN interface {

	// A zero state from which the recurrent network is usually initialized.
	ZeroState(batchDim int64) State

	// Applies a single step of the recurrent network.
	//
	// The input should have dimensions [batch_size, features].
	Step(input *ts.Tensor, inState State) State

	// Applies multiple steps of the recurrent network.
	//
	// The input should have dimensions [batch_size, seq_len, features].
	// The initial state is the result of applying zero_state.
	Seq(input *ts.Tensor) (*ts.Tensor, State)

	// Applies multiple steps of the recurrent network.
	//
	// The input should have dimensions [batch_size, seq_len, features].
	SeqInit(input *ts.Tensor, inState State) (*ts.Tensor, State)
}

type RNNConfig

type RNNConfig struct {
	HasBiases     bool
	NumLayers     int64
	Dropout       float64
	Train         bool
	Bidirectional bool
	BatchFirst    bool
}

RNNConfig is the configuration for the GRU and LSTM layers; both layers share the same config.

func DefaultRNNConfig

func DefaultRNNConfig() *RNNConfig

DefaultRNNConfig creates a default RNN configuration

type ReduceLROnPlateau added in v0.3.11

type ReduceLROnPlateau struct {
	// contains filtered or unexported fields
}

ReduceLROnPlateau reduces learning rate when a metric has stopped improving. Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This scheduler reads a metric quantity, and if no improvement is seen for a 'patience' number of epochs, the learning rate is reduced.

func NewReduceLROnPlateau added in v0.3.11

func NewReduceLROnPlateau(opt *Optimizer, opts ...ReduceLROnPlateauOption) *ReduceLROnPlateau

func (*ReduceLROnPlateau) Build added in v0.3.11

func (s *ReduceLROnPlateau) Build() *LRScheduler

Build implements scheduler interface.

func (*ReduceLROnPlateau) SetLRs added in v0.3.11

func (s *ReduceLROnPlateau) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

type ReduceLROnPlateauOption added in v0.3.11

type ReduceLROnPlateauOption func(*ReduceLROnPlateauOptions)

func WithReduceOnPlateauCooldown added in v0.3.11

func WithReduceOnPlateauCooldown(cooldown int) ReduceLROnPlateauOption

func WithReduceOnPlateauEps added in v0.3.11

func WithReduceOnPlateauEps(eps float64) ReduceLROnPlateauOption

func WithReduceOnPlateauFactor added in v0.3.11

func WithReduceOnPlateauFactor(factor float64) ReduceLROnPlateauOption

func WithReduceOnPlateauMinLRs added in v0.3.11

func WithReduceOnPlateauMinLRs(minLRs []float64) ReduceLROnPlateauOption

func WithReduceOnPlateauMode added in v0.3.11

func WithReduceOnPlateauMode(mode string) ReduceLROnPlateauOption

func WithReduceOnPlateauPatience added in v0.3.11

func WithReduceOnPlateauPatience(patience int) ReduceLROnPlateauOption

func WithReduceOnPlateauThreshold added in v0.3.11

func WithReduceOnPlateauThreshold(threshold float64) ReduceLROnPlateauOption

func WithReduceOnPlateauThresholdMode added in v0.3.11

func WithReduceOnPlateauThresholdMode(thresholdMode string) ReduceLROnPlateauOption

func WithReduceOnPlateauVerbose added in v0.3.11

func WithReduceOnPlateauVerbose(verbose bool) ReduceLROnPlateauOption

type ReduceLROnPlateauOptions added in v0.3.11

type ReduceLROnPlateauOptions struct {
	Mode          string
	Factor        float64
	Patience      int
	Verbose       bool
	Threshold     float64
	ThresholdMode string
	MinLRs        []float64
	Cooldown      int
	Eps           float64
}

type SGDConfig

type SGDConfig struct {
	Momentum  float64
	Dampening float64
	Wd        float64
	Nesterov  bool
}

SGDConfig holds parameters for building the SGD (Stochastic Gradient Descent) optimizer.

func DefaultSGDConfig

func DefaultSGDConfig() *SGDConfig

DefaultSGDConfig creates SGDConfig with default values.

func NewSGDConfig

func NewSGDConfig(momentum, dampening, wd float64, nesterov bool) *SGDConfig

NewSGDConfig creates the configuration for an SGD optimizer with the specified values

func (*SGDConfig) Build

func (c *SGDConfig) Build(vs *VarStore, lr float64) (*Optimizer, error)

type SchedulerOption added in v0.3.11

type SchedulerOption func(*SchedulerOptions)

func WithLastEpoch added in v0.3.11

func WithLastEpoch(epoch int) SchedulerOption

func WithLoss added in v0.3.11

func WithLoss(loss float64) SchedulerOption

type SchedulerOptions added in v0.3.11

type SchedulerOptions struct {
	// Metrics   map[string]interface{}
	Loss      float64 // Usually metrics is loss value
	LastEpoch int
}

func DefaultSchedulerOptions added in v0.4.3

func DefaultSchedulerOptions() *SchedulerOptions

type Sequential

type Sequential struct {
	// contains filtered or unexported fields
}

Sequential is a layer (container) that combines multiple other layers.

func Seq

func Seq() *Sequential

Seq creates a new empty sequential layer

func (*Sequential) Add

func (s *Sequential) Add(l ts.Module)

Add appends a layer after all the current layers.

func (*Sequential) AddFn

func (s *Sequential) AddFn(fn ts.Module)

AddFn appends a closure after all the current layers.

NOTE: fn should have the signature `func(t ts.Tensor) ts.Tensor` and implement the Module interface

func (*Sequential) Forward

func (s *Sequential) Forward(xs *ts.Tensor) (retVal *ts.Tensor)

Forward implements Module interface for Sequential
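
Example (a small MLP sketch; `input` is assumed to be a {batchSize, 784} tensor, and `MustRelu(del bool)` and `MustSize` are assumed from gotch's generated tensor API):

vs := nn.NewVarStore(gotch.CPU)
model := nn.Seq()
model.Add(nn.NewLinear(vs.Root().Sub("l1"), 784, 128, nn.DefaultLinearConfig()))
model.Add(nn.NewFunc(func(x *ts.Tensor) *ts.Tensor {
	return x.MustRelu(false) // activation wrapped as a Module
}))
model.Add(nn.NewLinear(vs.Root().Sub("l2"), 128, 10, nn.DefaultLinearConfig()))
logits := model.Forward(input)
fmt.Println(logits.MustSize()) // {batchSize, 10}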

func (*Sequential) ForwardAll

func (s *Sequential) ForwardAll(xs *ts.Tensor, opts ...uint8) (retVal []ts.Tensor)

ForwardAll applies the forward pass and returns the output for each layer.

func (*Sequential) IsEmpty

func (s *Sequential) IsEmpty() (retVal bool)

IsEmpty returns true if this layer does not have any sub-layers.

func (*Sequential) Len

func (s *Sequential) Len() (retVal int64)

Len returns the number of sub-layers embedded in this layer

type SequentialT

type SequentialT struct {
	// contains filtered or unexported fields
}

SequentialT is a sequential layer (container) combining multiple other layers, with support for a training mode.

func SeqT

func SeqT() *SequentialT

SeqT creates a new empty sequential layer.

func (*SequentialT) Add

func (s *SequentialT) Add(l ts.ModuleT)

Add appends a layer after all the current layers.

func (*SequentialT) AddFn

func (s *SequentialT) AddFn(fn ts.ModuleT)

AddFn appends a closure after all the current layers.

NOTE: fn should have the signature `func(t ts.Tensor) ts.Tensor` and implement the Module interface

func (*SequentialT) AddFnT

func (s *SequentialT) AddFnT(fn ts.ModuleT)

AddFnT appends a closure after all the current layers.

NOTE: fn should have the signature `func(t ts.Tensor, train bool) ts.Tensor` and implement the ModuleT interface

func (*SequentialT) ForwardAllT

func (s *SequentialT) ForwardAllT(xs *ts.Tensor, train bool, opts ...uint8) (retVal []ts.Tensor)

ForwardAllT applies the forward pass and returns the output for each layer.

func (*SequentialT) ForwardT

func (s *SequentialT) ForwardT(xs *ts.Tensor, train bool) *ts.Tensor

func (*SequentialT) IsEmpty

func (s *SequentialT) IsEmpty() (retVal bool)

IsEmpty returns true if this layer does not have any sub-layers.

func (*SequentialT) Len

func (s *SequentialT) Len() (retVal int64)

Len returns the number of sub-layers embedded in this layer

type State

type State interface{}

type StepLR added in v0.3.10

type StepLR struct {
	// contains filtered or unexported fields
}

StepLR decays the learning rates of each optimizer parameter group by gamma every stepSize epochs.

NOTE. Such decay can happen simultaneously with other changes to the learning rate from outside this scheduler.

func NewStepLR added in v0.3.10

func NewStepLR(opt *Optimizer, stepSize int, gamma float64) *StepLR

NewStepLR creates a new StepLR.

func (*StepLR) Build added in v0.3.10

func (s *StepLR) Build() *LRScheduler

Build implements scheduler interface.

func (*StepLR) SetLRs added in v0.3.10

func (s *StepLR) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

type TrainableCModule added in v0.3.7

type TrainableCModule struct {
	Inner *ts.CModule
}

TrainableCModule is a trainable version of a JIT PyTorch module

These modules can be created via the TorchScript Python API. See: https://pytorch.org/docs/stable/jit.html

func TrainableCModuleLoad added in v0.3.7

func TrainableCModuleLoad(p *Path, file string) (*TrainableCModule, error)

TrainableCModuleLoad loads a PyTorch saved JIT module from a file and adds its tensors (weights) to the var store so that the module can be trained.

func TrainableCModuleLoadData added in v0.3.7

func TrainableCModuleLoadData(p *Path, stream io.Reader) (*TrainableCModule, error)

func (*TrainableCModule) ForwardT added in v0.3.7

func (m *TrainableCModule) ForwardT(x *ts.Tensor, train bool) *ts.Tensor

ForwardT implements ModuleT for TrainableCModule. NOTE: train parameter will not be used.

func (*TrainableCModule) Save added in v0.3.7

func (m *TrainableCModule) Save(file string) error

Save saves TrainableCModule to specified file.

func (*TrainableCModule) SetEval added in v0.3.7

func (m *TrainableCModule) SetEval()

SetEval sets TrainableCModule to inference mode

func (*TrainableCModule) SetTrain added in v0.3.7

func (m *TrainableCModule) SetTrain()

SetTrain sets TrainableCModule to train mode

type Var added in v0.3.10

type Var struct {
	Tensor *ts.Tensor
	Group  uint // optimizer parameter group
}

type VarStore

type VarStore struct {
	Vars Variables
	// contains filtered or unexported fields
}

VarStore is used to store variables used by one or multiple layers. It specifies a SINGLE device where all variables are stored.

func NewVarStore

func NewVarStore(device gotch.Device) *VarStore

NewVarStore creates a new variable store located on the specified device

func (*VarStore) Copy

func (vs *VarStore) Copy(src VarStore) error

Copy copies variable values from a source var store to this var store.

All the variables in this var store have to exist with the same name in the source var store, otherwise an error is returned.

func (*VarStore) Device

func (vs *VarStore) Device() gotch.Device

Device returns device for this var-store

func (*VarStore) Freeze

func (vs *VarStore) Freeze()

Freeze freezes a var store.

Gradients for the variables in this store are not tracked anymore.

func (*VarStore) IsEmpty

func (vs *VarStore) IsEmpty() bool

IsEmpty returns true if no tensors are currently stored on this var-store

func (*VarStore) Len

func (vs *VarStore) Len() int

Len returns the number of tensors currently stored on this var-store

func (*VarStore) Load

func (vs *VarStore) Load(filepath string) error

Load loads the var-store variable values from a file.

NOTE: Weight values for all the tensors currently stored in the var-store get loaded from the given file. Note that the set of variables stored in the var-store is not changed, only the values for these tensors are modified. It returns an error if the names of the loaded tensors cannot be found in the var-store's set of named tensors.

func (*VarStore) LoadPartial

func (vs *VarStore) LoadPartial(filepath string) ([]string, error)

LoadPartial loads the var-store variable values from a file if it exists.

Weight values for the tensors currently stored in the var-store and the given file get loaded from the given file. If a variable in the var store is not present in the given file, it is skipped and its values are not updated. This method should be used if pre-trained weight for only parts of the model are available. Note that the set of variables stored in the var-store is not changed, only the values for these tensors are modified.

Returns a string slice containing the names of the missing variables.
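
Example (a sketch of saving and restoring weights; `vs` is assumed and the file names are illustrative):

if err := vs.Save("model.gt"); err != nil {
	log.Fatal(err)
}
// Restore all values into an identically structured var store:
if err := vs.Load("model.gt"); err != nil {
	log.Fatal(err)
}
// Load only the weights present in a partially matching file:
missing, err := vs.LoadPartial("pretrained.gt")
if err != nil {
	log.Fatal(err)
}
fmt.Println("missing variables:", missing)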

func (*VarStore) Root

func (vs *VarStore) Root() *Path

Root gets the root path for this var-store

NOTE: Variables are named and organized using paths. This function returns the top level path for the var store and can be combined with '/' to create sub-paths.

func (*VarStore) Save

func (vs *VarStore) Save(filepath string) error

Save saves the var-store variable values to a file

NOTE: Weight values for all the tensors currently stored in the var-store get saved in the given file.

func (*VarStore) Summary added in v0.6.0

func (vs *VarStore) Summary()

Summary prints a simple list of all named variables with their shapes.

func (*VarStore) TrainableVariables

func (vs *VarStore) TrainableVariables() []ts.Tensor

TrainableVariables returns all trainable variables for this var-store

func (*VarStore) Unfreeze

func (vs *VarStore) Unfreeze()

Unfreeze unfreezes a var store.

Gradients for the variables in this store are tracked again.

func (*VarStore) Variables

func (vs *VarStore) Variables() map[string]*ts.Tensor

Variables returns all variables and their names in a map[variable_name]Tensor

type Variables

type Variables struct {
	NamedVariables     map[string]*ts.Tensor
	TrainableVariables []Var
	// contains filtered or unexported fields
}

Variables represents a collection of tensors.

NOTE: When the variable store is frozen, trainable is still set to true; however, the tensor is not set to require gradients.
