veracity

package module
v0.2.6
Published: May 13, 2025 License: MIT Imports: 44 Imported by: 0

README

veracity

Veracity is a command line tool for inspecting the DataTrails native MERKLE_LOG verifiable data structures.

A general familiarity with verifiable data structures, and in particular binary merkle trees, would be advantageous, but is not required.

Installation

Veracity provides native binaries for macOS and Linux on the releases page.

Note: For the Windows Subsystem for Linux (WSL), use the Linux binaries.

OS      Platform   Architecture
Mac     darwin     arm64
Mac     darwin     x86_64
Linux   linux      arm64
Linux   linux      x86_64
  1. Select the desired release from the releases page.
  2. Download the archive for your host platform
  3. Extract the archive
  4. Set the file permissions
  5. Move the binary to a location on your PATH

Or, follow these commands to install the latest build.

Mac Install
PLATFORM=$(uname -s | tr '[:upper:]' '[:lower:]')
ARCH=$(uname -m)
cd $TMPDIR
curl -sLO https://github.com/datatrails/veracity/releases/latest/download/veracity_${PLATFORM}_${ARCH}.tar.gz
tar -xf veracity_${PLATFORM}_${ARCH}.tar.gz
chmod +x ./veracity
mkdir -p $HOME/.local/bin
mv ./veracity $HOME/.local/bin/
veracity --help
Linux/WSL Install
PLATFORM=$(uname -s | tr '[:upper:]' '[:lower:]')
ARCH=$(uname -m)
cd /tmp
curl -sLO https://github.com/datatrails/veracity/releases/latest/download/veracity_${PLATFORM}_${ARCH}.tar.gz
tar -xf veracity_${PLATFORM}_${ARCH}.tar.gz
chmod +x ./veracity
mkdir -p $HOME/.local/bin
mv ./veracity $HOME/.local/bin/
veracity --help
Troubleshooting

If veracity --help fails, check the following:

Confirm $PATH includes $HOME/.local/bin. Either add that location to $PATH, or move veracity to a directory already on your $PATH.

# Check whether $HOME/.local/bin is already on your $PATH
echo $PATH

# Add it for the current shell
export PATH="$HOME/.local/bin:$PATH"
# To persist it, add the export line to ~/.bashrc and reload the configuration
source ~/.bashrc

# Confirm which veracity binary is being used
which veracity

Example Usage

Environment Variables

The following samples use environment variables to simplify the commands:

EVENT_ID=publicassets/87dd2e5a-42b4-49a5-8693-97f40a5af7f8/events/a022f458-8e55-4d63-a200-4172a42fc2aa
DATATRAILS_URL=https://app.datatrails.ai
PUBLIC_TENANT_ID=tenant/6ea5cd00-c711-3649-6914-7b125928bbb4

Verifying A Single Event

The following steps verify the single public event a022f458-8e55-4d63-a200-4172a42fc2aa using the DataTrails API.

Check the event details directly.

  1. Download the event from the DataTrails ledger:

    curl -sL $DATATRAILS_URL/archivist/v2/$EVENT_ID > event.json
    
  2. Verify inclusion with veracity

    cat event.json | \
        veracity --data-url $DATATRAILS_URL/verifiabledata \
        --tenant=$PUBLIC_TENANT_ID \
        --loglevel=INFO \
        verify-included
    
  3. View the output, noting there are no verification errors

    verifying for tenant: tenant/6ea5cd00-c711-3649-6914-7b125928bbb4
    verifying: 663 334 018fa97ef269039b00 2024-05-24T08:27:00.2+01:00 
    publicassets/87dd2e5a-42b4-49a5-8693-97f40a5af7f8/events/a022f458-8e55-4d63-a200-4172a42fc2aa
    leaf hash: bfc511ab1b880b24bb2358e07472e3383cdeddfbc4de9d66d652197dfb2b6633
    OK|663 334|[aea799fb2a8..., proof path nodes, ...f0a52d2256c235]
    

Note: To minimize veracity output, remove --loglevel and check for an exit code of 0 (echo $?) to confirm a successful verification.

The elided proof path at time of writing was:

[aea799fb2a8c4bbb6eda1dd2c1e69f8807b9b06deeaf51b9e0287492cefd8e4c,
9f0183c7f79fd81966e104520af0f90c8447f1a73d4e38e7f2f23a0602ceb617, 
a21cb383d63896a9811f06ebd2094921581d8eb72f7fbef566b730958dc35f1, 
1ea08fd02da3633b72ef0b09d8ba4209db1092d22367ef565f35e0afd4b0fc3, 
85a9d55cf507ef85bd264f4db7228e225032c48da689aa8597e11059f45ab30, 
ab40107f7d7bebfe30c9cea4772f9eb3115cae1f801adab318f90fcdc204bdc, 
4ca607094ead6fcd23f52851c8cdd8c6f0e2abde20dca19ba5abc8aff70d0d1, 
a6d0fd8922342aafbba6073c5510103b077a7de9cb2d72fb652510110250f9e, 
fafc7edc434225afffc19b0582efa2a71b06a2d035358356df0a52d2256c235, 
737375d837e67ee7bce182377304e889187ef0f335952174cb5bf707a0b4788]

Verify Tamper Resiliency

One of the many scenarios DataTrails protects against is tampering with information after it has been written to the ledger, including backdating when it was recorded.

  1. To simulate backdating, the following command backdates the downloaded copy of the event:

    # GNU sed; on macOS (BSD sed) use: sed -i '' -e ...
    sed -i -e 's/2024-05-24T07:27:00.200Z/2024-04-24T07:27:00.200Z/g' ./event.json
    
  2. Re-verify inclusion with veracity verify-included, noting the error

    cat event.json | \
        veracity --data-url $DATATRAILS_URL/verifiabledata \
        --tenant=$PUBLIC_TENANT_ID \
        --loglevel=INFO \
        verify-included
    
  3. View the output

    ...
    error: the entry is not in the log. for tenant tenant/6ea5cd00-c711-3649-6914-7b125928bbb4
    

Verify All Events

The veracity verify-included command accepts the result of a DataTrails list events call. This verifies the inclusion of each event in the returned list.

  1. Pipe the events to veracity:

    PUBLIC_ASSET_ID=publicassets/87dd2e5a-42b4-49a5-8693-97f40a5af7f8
    curl -sL $DATATRAILS_URL/archivist/v2/$PUBLIC_ASSET_ID/events | \
        veracity --data-url $DATATRAILS_URL/verifiabledata \
            --tenant=$PUBLIC_TENANT_ID \
            --loglevel=INFO \
            verify-included 
    

Read a Selected Node From the Log

As an example of reading the node associated with an event, visit the merkle log entry page for event 999773ed-cc92-4d9c-863f-b418418705ea.

On the Merkle log entry page, the MMR Index field shows a value of 916. That value can be passed to the node command to retrieve the leaf directly from the merkle log:

veracity --data-url $DATATRAILS_URL/verifiabledata \
    --tenant=$PUBLIC_TENANT_ID \
    node --mmrindex 916

The command outputs c3323019fd1d325ac068d203c62007b504c5fa762446a9fe5d88e392ec96914b, which matches the value shown on the merkle log entry page.

General Use Commands

Additional Commands include:

  • node - read a merklelog node
  • verify-included - verify the inclusion of an event, or list of events, in the tenant's merkle log
  • watch - discover recently active logs
  • replicate-logs - create or update a local trusted replica of one or more tenants' logs; accepts the output of watch as input.
  • receipt - generate a COSE Receipt of inclusion for an entry using the MMRIVER profile.

For more information, please visit the DataTrails documentation.

Documentation


Constants

const (
	AzureBlobURLFmt       = "https://%s.blob.core.windows.net"
	AzuriteStorageAccount = "devstoreaccount1"
	DefaultContainer      = "merklelogs"
)
const (
	// LeafTypePlain is used for committing to plain values.
	LeafTypePlain         = uint8(0)
	PublicAssetsPrefix    = "publicassets/"
	ProtectedAssetsPrefix = "assets/"

	// To create smooth UX for basic or first-time users, we default to the verifiabledata proxy
	// on production. This gives us compact runes to verify inclusion of a List Events response.
	DefaultRemoteMassifURL = "https://app.datatrails.ai/verifiabledata"
)

Variables

var (
	// recovers timestamp_committed from merklelog_entry.commit.idtimestamp prior to hashing
	Bug9308 = "9308"

	Bugs = []string{
		Bug9308,
	}
)
var (
	ErrChangesFlagIsExclusive          = errors.New("use --changes Or --massif and --tenant, not both")
	ErrNewReplicaNotEmpty              = errors.New("the local directory for a new replica already exists")
	ErrSealNotFound                    = errors.New("seal not found")
	ErrSealVerifyFailed                = errors.New("the seal signature verification failed")
	ErrFailedCheckingConsistencyProof  = errors.New("failed to check a consistency proof")
	ErrFailedToCreateReplicaDir        = errors.New("failed to create a directory needed for local replication")
	ErrRequiredOption                  = errors.New("a required option was not provided")
	ErrRemoteLogTruncated              = errors.New("the local replica indicates the remote log has been truncated")
	ErrRemoteLogInconsistentRootState  = errors.New("the local replica root state disagrees with the remote")
	ErrInconsistentUseOfPrefetchedSeal = errors.New("prefetching signed root reader used inconsistently")
)
var (
	ErrInvalidBlockNotPublicKey = errors.New("the data does not have the PEM armour indicating it is a public key")
	// ErrInvalidPublicKeyString     = errors.New("failed to decode the key bytes from a string")
	ErrKeyBytesParseFailed      = errors.New("the pem block could not be parsed as a public key")
	ErrInvalidKeyNotECDSAPublic = errors.New("parsed public key is not the expected ecdsa type")
)
var (
	ErrVerifyInclusionFailed = errors.New("the entry is not in the log")
	ErrUncommittedEvents     = errors.New("one or more events did not have record of their inclusion in the log")
)
var (
	ErrNoChanges = errors.New("no changes found")
)
var (
	ErrNoLogTenant = fmt.Errorf("error, cannot find log tenant, please provide either %v or %v", logIDFlagName, logTenantFlagName)
)

Functions

func AddCommands

func AddCommands(app *cli.App, ikwid bool) *cli.App

func Bug

func Bug(cmd *CmdCtx, id string) bool

func CompareECDSAPublicKeys added in v0.1.0

func CompareECDSAPublicKeys(key1, key2 *ecdsa.PublicKey) bool

func CtxGetOneTenantOption added in v0.1.0

func CtxGetOneTenantOption(cCtx cliContextString) string

func CtxGetTenantOptions added in v0.1.0

func CtxGetTenantOptions(cCtx cliContextString) []string

func DecodeECDSAPublicPEM added in v0.1.0

func DecodeECDSAPublicPEM(data []byte) (*ecdsa.PublicKey, error)

DecodeECDSAPublicPEM decodes a public PEM format ecdsa key. This is the format that the merklelog signing key is distributed in.

func DecodeECDSAPublicString added in v0.1.0

func DecodeECDSAPublicString(data string) (*ecdsa.PublicKey, error)

DecodeECDSAPublicString decodes a public PEM format ecdsa key. This is the format that the merklelog signing key is distributed in, but with the key material presented as a single, base64 encoded, string. This is typically more convenient for the command line and environment vars.
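
As an illustrative sketch (not part of this package's documentation), the string form can be decoded and compared against a key decoded from raw PEM bytes using CompareECDSAPublicKeys. The environment variable and file name below are hypothetical, and the import path github.com/datatrails/veracity is assumed:

package main

import (
	"fmt"
	"log"
	"os"

	"github.com/datatrails/veracity"
)

func main() {
	// DATATRAILS_SEALING_KEY is a hypothetical variable holding the key
	// material as a single base64 encoded string.
	key1, err := veracity.DecodeECDSAPublicString(os.Getenv("DATATRAILS_SEALING_KEY"))
	if err != nil {
		log.Fatal(err)
	}

	// A second copy of the key, read as raw PEM bytes from a hypothetical file.
	pemBytes, err := os.ReadFile("sealing-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	key2, err := veracity.DecodeECDSAPublicPEM(pemBytes)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("keys match:", veracity.CompareECDSAPublicKeys(key1, key2))
}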

func EnsureTenantPrefix added in v0.1.0

func EnsureTenantPrefix(tenant string) string

EnsureTenantPrefix ensures a string is prefixed with 'tenant/'. Note that the expected input is a uuid string or a tenant/uuid string.
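
A minimal sketch of the normalisation described above (the import path is assumed):

package main

import (
	"fmt"

	"github.com/datatrails/veracity"
)

func main() {
	// Both forms should normalise to the same tenant identity string.
	fmt.Println(veracity.EnsureTenantPrefix("6ea5cd00-c711-3649-6914-7b125928bbb4"))
	fmt.Println(veracity.EnsureTenantPrefix("tenant/6ea5cd00-c711-3649-6914-7b125928bbb4"))
}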

func IsSupportedBug

func IsSupportedBug(id string) bool

func NewApp

func NewApp(version string, ikwid bool) *cli.App
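
NewApp and AddCommands can be used together to embed the veracity commands in a custom binary, much as the cmd/veracity command does. A minimal sketch, assuming the module import path github.com/datatrails/veracity:

package main

import (
	"log"
	"os"

	"github.com/datatrails/veracity"
)

func main() {
	// Build the cli.App with the standard veracity flags, then register the
	// sub commands (node, verify-included, watch, replicate-logs, ...).
	app := veracity.NewApp("dev", false)
	app = veracity.AddCommands(app, false)
	if err := app.Run(os.Args); err != nil {
		log.Fatal(err)
	}
}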

func NewAttribute added in v0.0.3

func NewAttribute(value any) (*attribute.Attribute, error)

func NewDiagCmd

func NewDiagCmd() *cli.Command

NewDiagCmd prints diagnostic information about the massif blob containing a specific mmrIndex.

func NewDirLister added in v0.1.0

func NewDirLister() massifs.DirLister

func NewEventDiagCmd

func NewEventDiagCmd() *cli.Command

NewEventDiagCmd provides diagnostic support for event verification

func NewFileOpener added in v0.1.0

func NewFileOpener() massifs.Opener

func NewFileWriteOpener added in v0.1.0

func NewFileWriteOpener() massifs.WriteAppendOpener

func NewFindMMREntriesCmd added in v0.2.2

func NewFindMMREntriesCmd() *cli.Command

NewFindMMREntriesCmd finds the mmr entries associated with given app entries in the tenant's Merkle Log.

func NewFindTrieEntriesCmd added in v0.2.2

func NewFindTrieEntriesCmd() *cli.Command

NewFindTrieEntriesCmd finds the trie entries associated with a given trie key in the tenant's Merkle Log.

func NewLogTailCmd

func NewLogTailCmd() *cli.Command

func NewLogWatcherCmd

func NewLogWatcherCmd() *cli.Command

NewLogWatcherCmd watches for changes on any log

func NewMassifsCmd

func NewMassifsCmd() *cli.Command

NewMassifsCmd prints out pre-calculated tables for navigating massif blobs with maximum convenience

func NewNodeCmd

func NewNodeCmd() *cli.Command

NewNodeCmd prints out the identified mmr node

func NewNodeScanCmd

func NewNodeScanCmd() *cli.Command

NewNodeScanCmd implements a sub command which linearly scans for a node in a blob. This is a debugging tool.

func NewPrefetchingSealReader added in v0.2.6

func NewPrefetchingSealReader(ctx context.Context, sealGetter massifs.SealGetter, tenantIdentity string, massifIndex uint32) (*prefetchingSealReader, error)

func NewProveCmd

func NewProveCmd() *cli.Command

NewProveCmd (will) generate a proof and node path for the argument node

func NewReceiptCmd added in v0.2.0

func NewReceiptCmd() *cli.Command

func NewReplicateLogsCmd added in v0.1.0

func NewReplicateLogsCmd() *cli.Command

NewReplicateLogsCmd updates a local replica of a remote log, verifying the mutual consistency of the two before making any changes.

func NewStdinOpener added in v0.1.0

func NewStdinOpener() massifs.Opener

func NewTimestamp

func NewTimestamp(id uint64, epoch uint8) (*timestamppb.Timestamp, error)

func NewVerifyIncludedCmd added in v0.1.0

func NewVerifyIncludedCmd() *cli.Command

NewVerifyIncludedCmd verifies inclusion of a DataTrails event in the tenant's Merkle Log.

func PeakStack

func PeakStack(massifHeight uint8, mmrIndex uint64) []uint64

PeakStack returns the stack of mmrIndices corresponding to the stack of ancestor nodes required for mmrSize. Note that the trick here is to realise that passing a massifIndex+1 in place of mmrSize, treating each massif as a leaf node in a much smaller tree, gets the (much shorter) peak stack of nodes required from earlier massifs. This is the stack of nodes carried forward in each massif blob to make them self contained. (The mmrblobs package has a slightly different variant of this that returns a map.)
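
A brief sketch of calling it directly; the massif height of 14 used here is an assumption (a commonly used DataTrails log configuration), not something this function mandates:

package main

import (
	"fmt"

	"github.com/datatrails/veracity"
)

func main() {
	// The mmr indices of the ancestor peak nodes carried forward for this
	// position, as described above. 916 is the node read in the README example.
	stack := veracity.PeakStack(14, uint64(916))
	fmt.Println(stack)
}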

func SetTimestamp

func SetTimestamp(id uint64, ts *timestamppb.Timestamp, epoch uint8) error

func WatchForChanges added in v0.0.6

func WatchForChanges(
	ctx context.Context,
	cfg WatchConfig, reader azblob.Reader, reporter watchReporter,
) error

WatchForChanges watches for tenant log changes according to the provided config.

Types

type CmdCtx

type CmdCtx struct {
	// contains filtered or unexported fields
}

CmdCtx holds shared config and config derived state for all commands

func (*CmdCtx) Clone added in v0.1.0

func (c *CmdCtx) Clone() *CmdCtx

Clone returns a copy of the CmdCtx with only those members that are safe to share copied. Those are:

  • log - the result of cfgLogging

All other members need to be initialized by the caller if they are required in a specific goroutine context.

type FileWriteAppendOpener added in v0.1.0

type FileWriteAppendOpener struct{}

FileWriteAppendOpener opens files for writing. The Open implementation must open for *append*, and must create the file if it does not exist. The Create implementation must truncate the file if it exists, and create it if it does not.

func (*FileWriteAppendOpener) Create added in v0.2.1

func (*FileWriteAppendOpener) Create(name string) (io.WriteCloser, error)

Create ensures the named file exists, is empty and is writable. If the named file already exists it is truncated.

func (*FileWriteAppendOpener) Open added in v0.1.0

Open ensures the named file exists and is writable. Writes are appended to any existing content.

type LogTailActivity

type LogTailActivity struct {
	watcher.LogTail
	LogSize         uint64
	LastIDEpoch     uint8
	LastIDTimestamp uint64
	LogActivity     time.Time
	TagActivity     time.Time
}

LogTailActivity can represent either the seal or the massif that has most recently been updated for the log.

type MassifGetter added in v0.2.2

type MassifGetter interface {
	GetMassif(
		ctx context.Context, tenantIdentity string, massifIndex uint64, opts ...massifs.ReaderOption,
	) (massifs.MassifContext, error)
}

MassifGetter gets a specific massif based on the massifIndex given for a tenant log

type MassifReader added in v0.0.4

type MassifReader interface {
	GetVerifiedContext(
		ctx context.Context, tenantIdentity string, massifIndex uint64,
		opts ...massifs.ReaderOption,
	) (*massifs.VerifiedContext, error)

	GetFirstMassif(
		ctx context.Context, tenantIdentity string, opts ...massifs.ReaderOption,
	) (massifs.MassifContext, error)
	GetHeadMassif(
		ctx context.Context, tenantIdentity string, opts ...massifs.ReaderOption,
	) (massifs.MassifContext, error)
	GetLazyContext(
		ctx context.Context, tenantIdentity string, which massifs.LogicalBlob, opts ...massifs.ReaderOption,
	) (massifs.LogBlobContext, uint64, error)
	MassifGetter
}

type MassifTail

type MassifTail struct {
	LogTailActivity
	FirstIndex uint64
}

MassifTail contains the massif specific tail information

func TailMassif

func TailMassif(
	ctx context.Context,
	massifReader MassifReader,
	tenantIdentity string,
) (MassifTail, error)

TailMassif returns the active massif for the tenant

func (MassifTail) String

func (lt MassifTail) String() string

String returns a printable, loggable pretty rendering of the tail.

type OsDirLister added in v0.0.4

type OsDirLister struct{}

Utilities to remove the os dependencies from the MassifReader

func (*OsDirLister) ListFiles added in v0.0.4

func (*OsDirLister) ListFiles(name string) ([]string, error)

type Progresser added in v0.1.1

type Progresser interface {
	Completed()
}

func NewNoopProgress added in v0.1.1

func NewNoopProgress() Progresser

func NewStagedProgress added in v0.1.1

func NewStagedProgress(prefix string, count int) Progresser

type ReadOpener added in v0.1.0

type ReadOpener struct{}

func (*ReadOpener) Open added in v0.1.0

func (*ReadOpener) Open(name string) (io.ReadCloser, error)

type SealTail

type SealTail struct {
	LogTailActivity
	Count  uint64
	Signed cose.CoseSign1Message
	State  massifs.MMRState
}

SealTail contains the seal specific tail information

func TailSeal

func TailSeal(
	ctx context.Context,
	rootReader massifs.SignedRootReader,
	tenantIdentity string,
) (SealTail, error)

TailSeal returns the most recently added seal for the log

func (SealTail) String

func (st SealTail) String() string

String returns a printable, loggable pretty rendering of the tail.

type StdinOpener added in v0.0.4

type StdinOpener struct {
	// contains filtered or unexported fields
}

func (*StdinOpener) Open added in v0.0.4

func (o *StdinOpener) Open(string) (io.ReadCloser, error)

type TailConfig

type TailConfig struct {
	// Interval defines the wait period between repeated tail checks if many
	// checks have been asked for.
	Interval time.Duration
	// TenantIdentity identifies the log of interest
	TenantIdentity string
}

func NewTailConfig

func NewTailConfig(cCtx *cli.Context, cmd *CmdCtx) (TailConfig, error)

NewTailConfig derives a configuration from the supplied command line options context.

type TenantActivity added in v0.0.6

type TenantActivity struct {
	// Massif is the massif index of the most recently appended massif
	Massif int `json:"massifindex"`
	// Tenant is the tenant identity of the most recently changed log
	Tenant string `json:"tenant"`

	// IDCommitted is the idtimestamp for the most recent entry observed in the log
	IDCommitted string `json:"idcommitted"`
	// IDConfirmed is the idtimestamp for the most recent entry to be sealed.
	IDConfirmed  string `json:"idconfirmed"`
	LastModified string `json:"lastmodified"`
	// MassifURL is the remote path to the most recently changed massif
	MassifURL string `json:"massif"`
	// SealURL is the remote path to the most recently changed seal
	SealURL string `json:"seal"`
}

TenantActivity represents the per tenant output of the watch command

type TenantMassif added in v0.1.0

type TenantMassif struct {
	// Massif is the massif index of the most recently appended massif
	Massif int `json:"massifindex"`
	// Tenant is the tenant identity of the most recently changed log
	Tenant string `json:"tenant"`
}

TenantMassif identifies a combination of tenant and massif. Typically it is used to convey that the massif is the most recently changed for that tenant. Note: it is a strict subset of the fields in TenantActivity, maintained separately due to json marshalling.

func TenantMassifsFromData added in v0.1.0

func TenantMassifsFromData(data []byte) ([]TenantMassif, error)
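
A sketch of consuming such data programmatically, assuming the input is a JSON array of objects using the documented massifindex and tenant field names (the literal below is illustrative only):

package main

import (
	"fmt"
	"log"

	"github.com/datatrails/veracity"
)

func main() {
	// Illustrative input only, shaped like the json tags documented above.
	data := []byte(`[{"massifindex": 0, "tenant": "tenant/6ea5cd00-c711-3649-6914-7b125928bbb4"}]`)

	massifs, err := veracity.TenantMassifsFromData(data)
	if err != nil {
		log.Fatal(err)
	}
	for _, m := range massifs {
		fmt.Printf("tenant %s, most recent massif index %d\n", m.Tenant, m.Massif)
	}
}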

type VerifiedContextReader added in v0.1.0

type VerifiedContextReader interface {
	massifs.VerifiedContextReader
}

type VerifiedReplica added in v0.1.0

type VerifiedReplica struct {
	// contains filtered or unexported fields
}

func NewVerifiedReplica added in v0.1.0

func NewVerifiedReplica(
	cCtx *cli.Context, cmd *CmdCtx,
) (*VerifiedReplica, error)

func (*VerifiedReplica) ReplicateVerifiedUpdates added in v0.1.0

func (v *VerifiedReplica) ReplicateVerifiedUpdates(
	ctx context.Context,
	tenantIdentity string, startMassif, endMassif uint32) error

ReplicateVerifiedUpdates confirms that any additions to the remote log are consistent with the local replica. Only the most recent local massif and seal need be retained for verification purposes. If independent, offline verification of inclusion is desired, retain as much of the log as is interesting.

type WatchConfig added in v0.0.6

type WatchConfig struct {
	watcher.WatchConfig
	WatchTenants map[string]bool
	WatchCount   int
	ReaderURL    string
	Latest       bool
}

func NewWatchConfig

func NewWatchConfig(cCtx cliContext, cmd *CmdCtx) (WatchConfig, error)

NewWatchConfig derives a configuration from the options set on the command line context

type Watcher added in v0.0.6

type Watcher struct {
	watcher.Watcher
	// contains filtered or unexported fields
}

func (*Watcher) FirstFilter added in v0.2.1

func (w *Watcher) FirstFilter() string

FirstFilter accounts for the --latest flag but otherwise falls through to the base implementation

func (*Watcher) NextFilter added in v0.2.1

func (w *Watcher) NextFilter() string

NextFilter accounts for the --latest flag but otherwise falls through to the base implementation

Directories

Path   Synopsis
cmd    veracity command
