veracity

package module
v0.0.5

This package is not in the latest version of its module.
Published: Jul 12, 2024 License: MIT Imports: 32 Imported by: 0

README

veracity

Veracity is a command line tool providing support for inspecting DataTrails native MERKLE_LOG verifiable data.

Familiarity with a command line environment on your chosen platform is assumed by this README.

A general familiarity with verifiable data structures, and in particular binary merkle trees, is advantageous when using veracity but is not required.

Support

We provide pre-built native binaries for Linux, macOS, and Windows. The following architectures are supported:

Platform Architecture
MacOS(darwin) arm64
MacOS(darwin) x86_64
Linux arm64
Linux x86_64
Windows x86_64
Windows i386

The linux binaries can also be used in Windows Subsystem for Linux.

Installation

Installation is a manual process:

  1. Download the archive for your host platform
  2. Extract the archive
  3. Set the file permissions
  4. Move the binary to a location on your PATH

For example, on Linux or Darwin the following steps would be conventional:

PLATFORM=Darwin
ARCH=arm64
VERSION=0.0.1
curl -sLO https://github.com/datatrails/veracity/releases/download/v${VERSION}/veracity_${PLATFORM}_${ARCH}.tar.gz
tar -xzf veracity_${PLATFORM}_${ARCH}.tar.gz
chmod +x ./veracity
./veracity --help

Set PLATFORM and ARCH according to your environment. Select the desired release from the releases page as VERSION (omitting the leading 'v').

The last step should report usage information. Usual practice is to move the binary into a location on your $PATH. For example:

mkdir -p $HOME/bin
mv ./veracity $HOME/bin/
which veracity

The last command will echo the location of the veracity binary if $HOME/bin is in your $PATH.

A simple first example using nodescan

nodescan is a command which searches for a leaf entry in the verifiable data by linearly scanning the log. This is typically used in development as a diagnostic aid. It can also be used for some audit use cases.

Find a leaf in the log by full audit. The Merkle Leaf value for any DataTrails event can be found from its event details page in the UI. Follow the "Merkle Log Entry" link.

URL=https://app.datatrails.ai/verifiabledata
TENANT=tenant/7dfaa5ef-226f-4f40-90a5-c015e59998a8
LEAF=2b8ecdee967d976a31bac630036d6b183bd40913f969b47b438d4614ce7fa155

veracity --data-url $URL --tenant=$TENANT nodescan -v $LEAF

This command will report the MMR index of that leaf as 10.

The conventional way to visualise the MMR index is like this


     6
   /  \
  2    5     9
 /\   / \   / \
0  1  3  4 7  8  10  MMR INDEX

0  1  2  3 4  5   6  LEAF INDEX

And that shows that the leaf with MMR index 10 is the 7th event ever recorded in that tenant.

The results of this command can be independently checked by downloading the public verifiable data for the DataTrails tenant on which the event was recorded.

curl -H "x-ms-blob-type: BlockBlob" -H "x-ms-version: 2019-12-12" https://app.datatrails.ai/verifiabledata/merklelogs/v1/mmrs/tenant/7dfaa5ef-226f-4f40-90a5-c015e59998a8/0/massifs/0000000000000000.log -o mmr.log

The mmr.log file can be uploaded to an online hex editor, and the search performed above can be repeated using its interface.

The format of the log is described in detail in "Navigating the Merkle Logs" (note: this material is not released yet)

Verifying a single event

An example of verifying the following single event using api response data.

https://app.datatrails.ai/archivist/v2/publicassets/87dd2e5a-42b4-49a5-8693-97f40a5af7f8/events/a022f458-8e55-4d63-a200-4172a42fc2aa

We use a publicly attested event so that you can check the event details directly.

EVENT_ID=publicassets/87dd2e5a-42b4-49a5-8693-97f40a5af7f8/events/a022f458-8e55-4d63-a200-4172a42fc2aa
DATATRAILS_URL=https://app.datatrails.ai
PUBLIC_TENANT_ID=tenant/6ea5cd00-c711-3649-6914-7b125928bbb4

curl -sL $DATATRAILS_URL/archivist/v2/$EVENT_ID | \
    veracity --data-url $DATATRAILS_URL/verifiabledata --tenant=$PUBLIC_TENANT_ID verify-included

By default there is no output. If verification succeeds, the command returns an exit code of 0.

If the verification command is run with --loglevel=INFO the output will be:

verifying for tenant: tenant/6ea5cd00-c711-3649-6914-7b125928bbb4
verifying: 663 334 018fa97ef269039b00 2024-05-24T08:27:00.2+01:00 publicassets/87dd2e5a-42b4-49a5-8693-97f40a5af7f8/events/a022f458-8e55-4d63-a200-4172a42fc2aa
leaf hash: bfc511ab1b880b24bb2358e07472e3383cdeddfbc4de9d66d652197dfb2b6633
OK|663 334|[aea799fb2a8..., proof path nodes, ...f0a52d2256c235]

The elided proof path at time of writing was:

[aea799fb2a8c4bbb6eda1dd2c1e69f8807b9b06deeaf51b9e0287492cefd8e4c, 9f0183c7f79fd81966e104520af0f90c8447f1a73d4e38e7f2f23a0602ceb617, da21cb383d63896a9811f06ebd2094921581d8eb72f7fbef566b730958dc35f1, 51ea08fd02da3633b72ef0b09d8ba4209db1092d22367ef565f35e0afd4b0fc3, 185a9d55cf507ef85bd264f4db7228e225032c48da689aa8597e11059f45ab30, bab40107f7d7bebfe30c9cea4772f9eb3115cae1f801adab318f90fcdc204bdc, 94ca607094ead6fcd23f52851c8cdd8c6f0e2abde20dca19ba5abc8aff70d0d1, ba6d0fd8922342aafbba6073c5510103b077a7de9cb2d72fb652510110250f9e, 7fafc7edc434225afffc19b0582efa2a71b06a2d035358356df0a52d2256c235, b737375d837e67ee7bce182377304e889187ef0f335952174cb5bf707a0b4788]

The same command accepts the result of a DataTrails list events call, e.g.

DATATRAILS_URL=https://app.datatrails.ai
PUBLIC_TENANT_ID=tenant/6ea5cd00-c711-3649-6914-7b125928bbb4
PUBLIC_ASSET_ID=publicassets/87dd2e5a-42b4-49a5-8693-97f40a5af7f8

curl -sL $DATATRAILS_URL/archivist/v2/$PUBLIC_ASSET_ID/events | \
  veracity --data-url $DATATRAILS_URL/verifiabledata --tenant=$PUBLIC_TENANT_ID verify-included 

General use commands

  • node - read a merklelog node
  • nodescan - scan a log for a particular node value
  • diag - print diagnostics about a massif, identified by massif index or by an mmr index
  • verify-included - verify the inclusion of an event, or list of events, in the tenant's merkle log
  • event-log-info - print diagnostics about an event's entry in the log (currently only supports events on protected assets)
  • massifs - Generate pre-calculated tables for navigating massif raw storage with maximum convenience

Developer commands

The following sub commands are used in development, by contributors, or currently require an authenticated connection:

  • tail, watch

Documentation

Index

Constants

View Source
const (
	AzureBlobURLFmt       = "https://%s.blob.core.windows.net"
	AzuriteStorageAccount = "devstoreaccount1"
	DefaultContainer      = "merklelogs"
)
View Source
const (
	// LeafTypePlain is used for committing to plain values.
	LeafTypePlain         = uint8(0)
	PublicAssetsPrefix    = "publicassets/"
	ProtectedAssetsPrefix = "assets/"

	// To create smooth UX for basic or first-time users, we default to the verifiabledata proxy
	// on production. This gives us compact runes to verify inclusion of a List Events response.
	DefaultRemoteMassifURL = "https://app.datatrails.ai/verifiabledata"
)

Variables

View Source
var (
	// recovers timestamp_committed from merklelog_entry.commit.idtimestamp prior to hashing
	Bug9308 = "9308"

	Bugs = []string{
		Bug9308,
	}
)
View Source
var (
	ErrVerifyInclusionFailed = errors.New("the entry is not in the log")
	ErrUncommittedEvents     = errors.New("one or more events did not have record of their inclusion in the log")
)
View Source
var (
	ErrInvalidV3Event = errors.New(`json is not in expected v3event format`)
)

Functions

func AddCommands

func AddCommands(app *cli.App, ikwid bool) *cli.App

func Bug

func Bug(cmd *CmdCtx, id string) bool

func DecodedEventsFromData added in v0.0.4

func DecodedEventsFromData(data []byte) ([]logverification.DecodedEvent, error)

func IsSupportedBug

func IsSupportedBug(id string) bool

func NewApp

func NewApp(ikwid bool) *cli.App

func NewAttribute added in v0.0.3

func NewAttribute(value any) (*attribute.Attribute, error)

func NewDiagCmd

func NewDiagCmd() *cli.Command

NewDiagCmd prints diagnostic information about the massif blob containing a specific mmrIndex

func NewEventDiagCmd

func NewEventDiagCmd() *cli.Command

NewEventDiagCmd provides diagnostic support for event verification

func NewEventsVerifyCmd added in v0.0.3

func NewEventsVerifyCmd() *cli.Command

NewEventsVerifyCmd verifies inclusion of a DataTrails event in the tenant's Merkle log

func NewLogTailCmd

func NewLogTailCmd() *cli.Command

func NewLogWatcherCmd

func NewLogWatcherCmd() *cli.Command

NewLogWatcherCmd watches for changes on any log

func NewMassifsCmd

func NewMassifsCmd() *cli.Command

NewMassifsCmd prints out pre-calculated tables for navigating massif blobs with maximum convenience

func NewNodeCmd

func NewNodeCmd() *cli.Command

NewNodeCmd prints out the identified mmr node

func NewNodeScanCmd

func NewNodeScanCmd() *cli.Command

NewNodeScanCmd implements a sub command which linearly scans for a node in a blob. This is a debugging tool.

func NewProveCmd

func NewProveCmd() *cli.Command

NewProveCmd (will) generate a proof and node path for the argument node

func NewTimestamp

func NewTimestamp(id uint64, epoch uint8) (*timestamppb.Timestamp, error)

func NewWatchConfig

func NewWatchConfig(cCtx *cli.Context, cmd *CmdCtx) (watcher.WatchConfig, error)

NewWatchConfig derives a configuration from the options set on the command line context

func PeakStack

func PeakStack(massifHeight uint8, mmrSize uint64) []uint64

PeakStack returns the stack of mmrIndices corresponding to the stack of ancestor nodes required for mmrSize. The trick here is to realise that passing massifIndex+1 in place of mmrSize, treating each massif as a leaf node in a much smaller tree, gets the (much shorter) peak stack of nodes required from earlier massifs. This is the stack of nodes carried forward in each massif blob to make them self contained. (The mmrblobs package has a slightly different variant of this that returns a map.)

func SetTimestamp

func SetTimestamp(id uint64, ts *timestamppb.Timestamp, epoch uint8) error

func VerifiableEventsFromData added in v0.0.3

func VerifiableEventsFromData(data []byte) ([]logverification.VerifiableEvent, error)

Types

type CmdCtx

type CmdCtx struct {
	// contains filtered or unexported fields
}

CmdCtx holds shared config and config derived state for all commands

type DirLister added in v0.0.4

type DirLister interface {
	// ListFiles returns list of absolute paths
	// to files (not subdirectories) in a directory
	ListFiles(string) ([]string, error)
}

type FileOpener added in v0.0.4

type FileOpener struct{}

func (*FileOpener) Open added in v0.0.4

func (*FileOpener) Open(name string) (io.ReadCloser, error)

type LocalMassifReader added in v0.0.4

type LocalMassifReader struct {
	// contains filtered or unexported fields
}

func NewLocalMassifReader added in v0.0.4

func NewLocalMassifReader(
	log logger.Logger, opener Opener, logLocation string, opts ...Option,
) (*LocalMassifReader, error)

NewLocalMassifReader creates a MassifReader that reads from local files on disc. It mostly ignores the tenant identity, as we assume all the logs on disc are for the tenant of interest, but it is still valid to pass a tenant ID here.

func (*LocalMassifReader) GetFirstMassif added in v0.0.4

func (mr *LocalMassifReader) GetFirstMassif(
	ctx context.Context, tenantIdentity string,
	opts ...azblob.Option,
) (massifs.MassifContext, error)

func (*LocalMassifReader) GetHeadMassif added in v0.0.4

func (mr *LocalMassifReader) GetHeadMassif(
	ctx context.Context, tenantIdentity string,
	opts ...azblob.Option,
) (massifs.MassifContext, error)

func (*LocalMassifReader) GetLazyContext added in v0.0.4

func (mr *LocalMassifReader) GetLazyContext(
	ctx context.Context, tenantIdentity string, which massifs.LogicalBlob,
	opts ...azblob.Option,
) (massifs.LogBlobContext, uint64, error)

GetLazyContext is an optimization for remote massif readers and is therefore not implemented for the local massif reader

func (*LocalMassifReader) GetMassif added in v0.0.4

func (mr *LocalMassifReader) GetMassif(
	ctx context.Context, tenantIdentity string, massifIndex uint64,
	opts ...azblob.Option,
) (massifs.MassifContext, error)

type LocalMassifReaderOptions added in v0.0.4

type LocalMassifReaderOptions struct {
	// contains filtered or unexported fields
}

type LogTailActivity

type LogTailActivity struct {
	watcher.LogTail
	LogSize         uint64
	LastIDEpoch     uint8
	LastIDTimestamp uint64
	LogActivity     time.Time
	TagActivity     time.Time
}

LogTailActivity can represent either the seal or the massif that has most recently been updated for the log.

type MassifReader added in v0.0.4

type MassifReader interface {
	GetFirstMassif(ctx context.Context, tenantIdentity string, opts ...azblob.Option) (massifs.MassifContext, error)
	GetHeadMassif(ctx context.Context, tenantIdentity string, opts ...azblob.Option) (massifs.MassifContext, error)
	GetLazyContext(ctx context.Context, tenantIdentity string, which massifs.LogicalBlob, opts ...azblob.Option) (massifs.LogBlobContext, uint64, error)
	GetMassif(ctx context.Context, tenantIdentity string, massifIndex uint64, opts ...azblob.Option) (massifs.MassifContext, error)
}

type MassifTail

type MassifTail struct {
	LogTailActivity
	FirstIndex uint64
}

MassifTail contains the massif specific tail information

func TailMassif

func TailMassif(
	ctx context.Context,
	massifReader MassifReader,
	tenantIdentity string,
) (MassifTail, error)

TailMassif returns the active massif for the tenant

func (MassifTail) String

func (lt MassifTail) String() string

String returns a printable, loggable pretty rendering of the tail

type Opener added in v0.0.4

type Opener interface {
	Open(string) (io.ReadCloser, error)
}

type Option added in v0.0.4

type Option func(*LocalMassifReaderOptions)

func WithDirectory added in v0.0.4

func WithDirectory() Option

type OsDirLister added in v0.0.4

type OsDirLister struct{}

Utilities to remove the os dependencies from the MassifReader

func (*OsDirLister) ListFiles added in v0.0.4

func (*OsDirLister) ListFiles(name string) ([]string, error)

type SealTail

type SealTail struct {
	LogTailActivity
	Count  uint64
	Signed cose.CoseSign1Message
	State  massifs.MMRState
}

SealTail contains the seal specific tail information

func TailSeal

func TailSeal(
	ctx context.Context,
	rootReader massifs.SignedRootReader,
	tenantIdentity string,
) (SealTail, error)

TailSeal returns the most recently added seal for the log

func (SealTail) String

func (st SealTail) String() string

String returns a printable, loggable pretty rendering of the tail

type StdinOpener added in v0.0.4

type StdinOpener struct {
	// contains filtered or unexported fields
}

func (*StdinOpener) Open added in v0.0.4

func (o *StdinOpener) Open(string) (io.ReadCloser, error)

type TailConfig

type TailConfig struct {
	// Interval defines the wait period between repeated tail checks if many
	// checks have been asked for.
	Interval time.Duration
	// TenantIdentity identifies the log of interest
	TenantIdentity string
}

func NewTailConfig

func NewTailConfig(cCtx *cli.Context, cmd *CmdCtx) (TailConfig, error)

NewTailConfig derives a configuration from the supplied command line options context

Directories

Path Synopsis
cmd
veracity command
