Documentation ¶
Overview ¶
go-ipld-prime is a series of Go interfaces for manipulating IPLD data.
See https://github.com/ipld/specs for more information about the basics of "What is IPLD?".
See https://github.com/ipld/go-ipld-prime/tree/master/doc/README.md for more documentation about go-ipld-prime's architecture and usage.
Here in the godoc, the first couple of types to look at should be:
- Node
- NodeBuilder
These types provide a generic description of the data model.
If working with linked data (data which is split into multiple trees of Nodes, loaded separately, and connected by some kind of "link" reference), the next types you should look at are:
- Link
- LinkBuilder
- Loader
- Storer
All of these types are interfaces. There are several implementations you can choose; we've provided some in subpackages, or you can bring your own.
Particularly interesting subpackages include:
- impl/* -- various Node + NodeBuilder implementations
- encoding/* -- functions for serializing and deserializing Nodes
- linking/* -- various Link + LinkBuilder implementations
- traversal -- functions for walking Node graphs (including automatic link loading) and visiting
- typed -- Node implementations with constraints
- fluent -- Node interfaces with streamlined error handling
Index ¶
- Variables
- type ErrInvalidKey
- type ErrIteratorOverread
- type ErrNotExists
- type ErrWrongKind
- type Link
- type LinkBuilder
- type LinkContext
- type ListBuilder
- type ListIterator
- type Loader
- type MapBuilder
- type MapIterator
- type Node
- type NodeBuilder
- type Path
- type PathSegment
- type ReprKind
- type ReprKindSet
- type StoreCommitter
- type Storer
Constants ¶
This section is empty.
Variables ¶
var (
	ReprKindSet_Recursive  = ReprKindSet{ReprKind_Map, ReprKind_List}
	ReprKindSet_Scalar     = ReprKindSet{ReprKind_Null, ReprKind_Bool, ReprKind_Int, ReprKind_Float, ReprKind_String, ReprKind_Bytes, ReprKind_Link}
	ReprKindSet_JustMap    = ReprKindSet{ReprKind_Map}
	ReprKindSet_JustList   = ReprKindSet{ReprKind_List}
	ReprKindSet_JustNull   = ReprKindSet{ReprKind_Null}
	ReprKindSet_JustBool   = ReprKindSet{ReprKind_Bool}
	ReprKindSet_JustInt    = ReprKindSet{ReprKind_Int}
	ReprKindSet_JustFloat  = ReprKindSet{ReprKind_Float}
	ReprKindSet_JustString = ReprKindSet{ReprKind_String}
	ReprKindSet_JustBytes  = ReprKindSet{ReprKind_Bytes}
	ReprKindSet_JustLink   = ReprKindSet{ReprKind_Link}
)
Functions ¶
This section is empty.
Types ¶
type ErrInvalidKey ¶ added in v0.0.2
type ErrInvalidKey struct {
Reason string
}
ErrInvalidKey may be returned from lookup functions on the Node interface when a key is invalid.
Common examples of this are when `Lookup(Node)` is used with a non-string Node; typed nodes also introduce other reasons a key may be invalid.
func (ErrInvalidKey) Error ¶ added in v0.0.2
func (e ErrInvalidKey) Error() string
type ErrIteratorOverread ¶
type ErrIteratorOverread struct{}
ErrIteratorOverread is returned when calling 'Next' on a MapIterator or ListIterator when it is already done.
func (ErrIteratorOverread) Error ¶
func (e ErrIteratorOverread) Error() string
type ErrNotExists ¶
type ErrNotExists struct {
Segment PathSegment
}
ErrNotExists may be returned from the lookup functions of the Node interface to indicate a missing value.
Note that schema.ErrNoSuchField is another type of error which sometimes occurs in similar places as ErrNotExists. ErrNoSuchField is preferred when handling data with constraints provided by a schema that mean that a field can *never* exist (as differentiated from a map key which is simply absent in some data).
func (ErrNotExists) Error ¶
func (e ErrNotExists) Error() string
type ErrWrongKind ¶
type ErrWrongKind struct {
// TypeName may optionally indicate the named type of a node the function
// was called on (if the node was typed!), or, may be the empty string.
TypeName string
// MethodName is literally the string for the operation attempted, e.g.
// "AsString".
//
// For methods on nodebuilders, we say e.g. "NodeBuilder.CreateMap".
MethodName string
// AppropriateKind describes which ReprKinds the erroring method would
// make sense for.
AppropriateKind ReprKindSet
// ActualKind describes the ReprKind of the node the method was called on.
//
// In the case of typed nodes, this will typically refer to the 'natural'
// data-model kind for such a type (e.g., structs will say 'map' here).
ActualKind ReprKind
}
ErrWrongKind may be returned from functions on the Node interface when a method is invoked which doesn't make sense for the Kind and/or ReprKind that node concretely contains.
For example, calling AsString on a map will return ErrWrongKind. Calling Lookup on an int will similarly return ErrWrongKind.
func (ErrWrongKind) Error ¶
func (e ErrWrongKind) Error() string
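Callers commonly type-assert on these error values to branch on failure modes. A minimal self-contained sketch of that pattern follows; the ReprKind stand-ins and the error message format here are assumptions for illustration, not the package's real definitions:

```go
package main

import "fmt"

// Local stand-ins for ReprKind, ReprKindSet, and ErrWrongKind as described
// above, so this sketch compiles on its own (the real package defines these).
type ReprKind uint8
type ReprKindSet []ReprKind

type ErrWrongKind struct {
	TypeName        string
	MethodName      string
	AppropriateKind ReprKindSet
	ActualKind      ReprKind
}

func (e ErrWrongKind) Error() string {
	return fmt.Sprintf("func called on wrong kind: %s called on node of kind %d", e.MethodName, e.ActualKind)
}

func main() {
	var err error = ErrWrongKind{MethodName: "AsString", ActualKind: 1}
	// Callers can branch on the concrete error type to recover details:
	if ewk, ok := err.(ErrWrongKind); ok {
		fmt.Println("wrong kind when calling", ewk.MethodName)
	}
}
```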
type Link ¶
type Link interface {
// Load returns a Node identified by the Link.
//
// The provided Loader function is used to get a reader for the raw
// serialized content; the Link contains an understanding of how to
// select a decoder (and hasher for verification, etc).
Load(context.Context, LinkContext, NodeBuilder, Loader) (Node, error)
// LinkBuilder returns a handle to any parameters of the Link which
// are needed to create a new Link of the same style but with new content.
// (It's much like the relationship of Node/NodeBuilder.)
//
// (If you're familiar with CIDs, you can think of this method as
// corresponding closely to `cid.Prefix()`, just more abstractly.)
LinkBuilder() LinkBuilder
// String should return a reasonably human-readable debug-friendly
// representation of a Link. It should only be used for debug and
// log message purposes; there is no contract that requires that the
// string be able to be parsed back into a reified Link.
String() string
}
Link is a special kind of value in IPLD which can be "loaded" to access more nodes.
Nodes can return a Link; this can be loaded manually, or, the traversal package contains powerful features for automatically traversing links through large trees of nodes.
Links straddle somewhat awkwardly across the IPLD Layer Model: clearly not at the Schema layer (though schemas can define their parameters), partially at the Data Model layer (as they're recognizably in the Node interface), and also involved at some serial layer that we don't often talk about: linking -- since we're a content-addressed system at heart -- necessarily involves understanding of concrete serialization details: which encoding mechanisms to use, what string escaping, what hashing, etc, and indeed what concrete serial link representation itself to use.
Link is an abstract interface so that we can describe Nodes without getting stuck on specific details of any link representation. In practice, you'll almost certainly use CIDs for linking. However, it's possible to bring your own Link implementations (though this'll almost certainly involve also bringing your own encoding systems; it's a lot of work). It's even possible to use IPLD *entirely without* any linking implementation, using it purely for json/cbor via the encoding packages and foregoing the advanced traversal features around transparent link loading.
type LinkBuilder ¶
LinkBuilder encapsulates any implementation details and parameters necessary for taking a Node and converting it to a serial representation and returning a Link to that data.
The serialized bytes will be routed through the provided Storer system, which is expected to store them in some way such that a related Loader system can later use the Link and an associated Loader to load nodes of identical content.
LinkBuilder, like Link, is an abstract interface. If using CIDs as an implementation, LinkBuilder will encapsulate things like multihashType, multicodecType, and cidVersion, for example.
type LinkContext ¶
type LinkContext struct {
LinkPath Path
LinkNode Node // has the Link again, but also might have type info. (Always zero when writing new nodes, for obvious reasons.)
ParentNode Node
}
LinkContext is a parameter to Storer and Loader functions.
An example use of LinkContext might be inspecting the LinkNode, and if it's a typed node, inspecting its Type property; then, a Loader might decide whether or not to load objects of that Type. This might be used to do a traversal which looks at all directory objects, but not file contents, for example.
type ListBuilder ¶
type ListBuilder interface {
AppendAll([]Node) error
Append(v Node) error
Set(idx int, v Node) error
Build() (Node, error)
BuilderForValue(idx int) NodeBuilder
}
ListBuilder is an interface for creating new Node instances of kind list.
A ListBuilder is generally obtained by getting a NodeBuilder first, and then using CreateList or AmendList to begin.
Methods mutate the builder's internal state; when done, call Build to produce a new immutable Node from the internal state. (After calling Build, future mutations may be rejected.)
Methods may error when handling typed lists if non-matching types are inserted.
The BuilderForValue function returns a NodeBuilder that can be used to produce values for insertion. If you already have the data you're inserting, you can use those Nodes; if you don't, use these builders. (This is particularly relevant for typed nodes and bind nodes, since those have internal specializations, and not all NodeBuilders for them are equal.) Note that BuilderForValue requires an index as a parameter! In most cases, this is not relevant and the method returns a constant NodeBuilder; however, typed nodes which are structs and have list representations may return different builders per index, corresponding to the types of its fields.
You may be interested in the fluent package's fluent.ListBuilder equivalent for common usage with less error-handling boilerplate requirements.
type ListIterator ¶
type ListIterator interface {
// Next returns the next index and value.
//
// An error value can also be returned at any step: in the case of advanced
// data structures with incremental loading, it's possible to encounter
// cancellation or I/O errors at any point in iteration.
// If an error is returned, the index and value may be invalid.
Next() (idx int, value Node, err error)
// Done returns false as long as there's at least one more entry to iterate.
// When Done returns true, iteration can stop.
//
// Note when implementing iterators for advanced data layouts (e.g. more than
// one chunk of backing data, which is loaded incrementally): if your
// implementation does any I/O during the Done method, and it encounters
// an error, it must return 'false', so that the following Next call
// has an opportunity to return the error.
Done() bool
}
ListIterator is an interface for traversing list nodes. Sequential calls to Next() will yield index-value pairs; Done() describes whether iteration should continue.
A loop which iterates from 0 to Node.Length is a valid alternative to using a ListIterator.
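That index-loop alternative can be sketched as follows; the cut-down Node interface and the stringNode/listNode types are stand-ins invented for this example (the real Node interface appears later in this document):

```go
package main

import "fmt"

// Minimal stand-ins for the parts of the Node interface used here;
// the real interface has many more methods.
type Node interface {
	Length() int
	LookupIndex(idx int) (Node, error)
	AsString() (string, error)
}

type stringNode string

func (s stringNode) Length() int                   { return -1 }
func (s stringNode) LookupIndex(int) (Node, error) { return nil, fmt.Errorf("wrong kind: not a list") }
func (s stringNode) AsString() (string, error)     { return string(s), nil }

type listNode []Node

func (l listNode) Length() int { return len(l) }
func (l listNode) LookupIndex(idx int) (Node, error) {
	if idx < 0 || idx >= len(l) {
		return nil, fmt.Errorf("index out of range: %d", idx)
	}
	return l[idx], nil
}
func (l listNode) AsString() (string, error) { return "", fmt.Errorf("wrong kind: not a string") }

// collectStrings walks a list node by index, as an alternative to ListIterator.
func collectStrings(n Node) ([]string, error) {
	var out []string
	for i := 0; i < n.Length(); i++ {
		v, err := n.LookupIndex(i)
		if err != nil {
			return nil, err
		}
		s, err := v.AsString()
		if err != nil {
			return nil, err
		}
		out = append(out, s)
	}
	return out, nil
}

func main() {
	ss, err := collectStrings(listNode{stringNode("a"), stringNode("b")})
	fmt.Println(ss, err)
}
```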
type Loader ¶
type Loader func(lnk Link, lnkCtx LinkContext) (io.Reader, error)
Loader functions are used to get a reader for raw serialized content based on the lookup information in a Link. A loader function is used by providing it to a Link.Load() call.
Loaders typically have some filesystem or database handle contained within their closure which is used to satisfy read operations.
LinkContext objects can be provided to give additional information to the loader, and will be automatically filled out when a Loader is used by systems in the traversal package; most Loader implementations should also work fine when given the zero value of LinkContext.
Loaders are implicitly coupled to a Link implementation and have some "extra" knowledge of the concrete Link type. This is necessary since there is no mandated standard for how to serially represent Link itself, and such a representation is typically needed by a Storer implementation.
type MapBuilder ¶
type MapBuilder interface {
Insert(k, v Node) error
Delete(k Node) error
Build() (Node, error)
BuilderForKeys() NodeBuilder
BuilderForValue(k string) NodeBuilder
}
MapBuilder is an interface for creating new Node instances of kind map.
A MapBuilder is generally obtained by getting a NodeBuilder first, and then using CreateMap or AmendMap to begin.
Methods mutate the builder's internal state; when done, call Build to produce a new immutable Node from the internal state. (After calling Build, future mutations may be rejected.)
Insertion methods error if the key already exists.
The BuilderForKeys and BuilderForValue functions return NodeBuilders that can be used to produce values for insertion. If you already have the data you're inserting, you can use those Nodes; if you don't, use these builders. (This is particularly relevant for typed nodes and bind nodes, since those have internal specializations, and not all NodeBuilders for them are equal.) Note that BuilderForValue requires a key as a parameter! This is because typed nodes which are structs may return different builders per field, specific to the field's type.
You may be interested in the fluent package's fluent.MapBuilder equivalent for common usage with less error-handling boilerplate requirements.
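The Insert/Build flow (including the duplicate-key error noted above) can be sketched with local stand-ins; the mapNode/mapBuilder types here are simplified assumptions, not the real implementations from impl/*:

```go
package main

import "fmt"

// Minimal stand-ins for Node and MapBuilder, enough to show the
// Insert -> Build flow and the duplicate-key error contract.
type Node interface{ AsString() (string, error) }

type strNode string

func (s strNode) AsString() (string, error) { return string(s), nil }

type mapNode map[string]Node

func (m mapNode) AsString() (string, error) { return "", fmt.Errorf("wrong kind: not a string") }

type MapBuilder interface {
	Insert(k, v Node) error
	Build() (Node, error)
}

type mapBuilder struct{ m mapNode }

func (b *mapBuilder) Insert(k, v Node) error {
	ks, err := k.AsString()
	if err != nil {
		return err
	}
	if _, exists := b.m[ks]; exists {
		return fmt.Errorf("repeated map key: %s", ks)
	}
	b.m[ks] = v
	return nil
}
func (b *mapBuilder) Build() (Node, error) { return b.m, nil }

func main() {
	mb := &mapBuilder{m: mapNode{}}
	if err := mb.Insert(strNode("k"), strNode("v")); err != nil {
		panic(err)
	}
	n, _ := mb.Build()
	v, _ := n.(mapNode)["k"].AsString()
	fmt.Println(v)
	// Inserting a duplicate key errors, per the contract above:
	fmt.Println(mb.Insert(strNode("k"), strNode("v2")) != nil)
}
```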
type MapIterator ¶
type MapIterator interface {
// Next returns the next key-value pair.
//
// An error value can also be returned at any step: in the case of advanced
// data structures with incremental loading, it's possible to encounter
// cancellation or I/O errors at any point in iteration.
// If an error is returned, the key and value may be nil.
Next() (key Node, value Node, err error)
// Done returns false as long as there's at least one more entry to iterate.
// When Done returns true, iteration can stop.
//
// Note when implementing iterators for advanced data layouts (e.g. more than
// one chunk of backing data, which is loaded incrementally): if your
// implementation does any I/O during the Done method, and it encounters
// an error, it must return 'false', so that the following Next call
// has an opportunity to return the error.
Done() bool
}
MapIterator is an interface for traversing map nodes. Sequential calls to Next() will yield key-value pairs; Done() describes whether iteration should continue.
Iteration order is defined to be stable: two separate MapIterators created to iterate the same Node will yield the same key-value pairs in the same order. The order itself may be defined by the Node implementation: some Nodes may retain insertion order, and some may return iterators which always yield data in sorted order, for example.
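The canonical loop is "check Done first, then call Next". A self-contained sketch, using a simplified in-memory iterator as a stand-in for a real MapIterator implementation:

```go
package main

import "fmt"

// Stand-ins: just enough of Node and MapIterator to show the loop shape.
type Node interface{ AsString() (string, error) }

type strNode string

func (s strNode) AsString() (string, error) { return string(s), nil }

type MapIterator interface {
	Next() (key Node, value Node, err error)
	Done() bool
}

type pair struct{ k, v strNode }

type mapIter struct {
	pairs []pair
	i     int
}

func (it *mapIter) Done() bool { return it.i >= len(it.pairs) }
func (it *mapIter) Next() (Node, Node, error) {
	if it.Done() {
		return nil, nil, fmt.Errorf("iterator overread")
	}
	p := it.pairs[it.i]
	it.i++
	return p.k, p.v, nil
}

// collectPairs runs the canonical iteration loop: Done, then Next.
func collectPairs(itr MapIterator) ([]string, error) {
	var out []string
	for !itr.Done() {
		k, v, err := itr.Next()
		if err != nil {
			return nil, err
		}
		ks, _ := k.AsString()
		vs, _ := v.AsString()
		out = append(out, ks+"="+vs)
	}
	return out, nil
}

func main() {
	got, err := collectPairs(&mapIter{pairs: []pair{{"a", "1"}, {"b", "2"}}})
	fmt.Println(got, err)
}
```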
type Node ¶
type Node interface {
// ReprKind returns a value from the ReprKind enum describing what the
// essential serializable kind of this node is (map, list, int, etc).
// Most other handling of a node requires first switching upon the kind.
ReprKind() ReprKind
// LookupString looks up a child object in this node and returns it.
// The returned Node may be any of the ReprKind:
// a primitive (string, int, etc), a map, a list, or a link.
//
// If the Kind of this Node is not ReprKind_Map, a nil node and an error
// will be returned.
//
// If the key does not exist, a nil node and an error will be returned.
LookupString(key string) (Node, error)
// Lookup is the equivalent of LookupString, but takes a reified Node
// as a parameter instead of a plain string.
// This mechanism is useful if working with typed maps (if the key types
// have constraints, and you already have a reified `schema.TypedNode` value,
// using that value can save parsing and validation costs);
// and may simply be convenient if you already have a Node value in hand.
//
// (When writing generic functions over Node, a good rule of thumb is:
// when handling a map, check for `schema.TypedNode`, and in this case prefer
// the Lookup(Node) method; otherwise, favor LookupString; typically
// implementations will have their fastest paths thusly.)
Lookup(key Node) (Node, error)
// LookupIndex is the equivalent of LookupString but for indexing into a list.
// As with LookupString, the returned Node may be any of the ReprKind:
// a primitive (string, int, etc), a map, a list, or a link.
//
// If the Kind of this Node is not ReprKind_List, a nil node and an error
// will be returned.
//
// If idx is out of range, a nil node and an error will be returned.
LookupIndex(idx int) (Node, error)
// LookupSegment will act as either LookupString or LookupIndex,
// whichever is contextually appropriate.
//
// Using LookupSegment may imply an "atoi" conversion if used on a list node,
// or an "itoa" conversion if used on a map node. If an "atoi" conversion
// takes place, it may error, and this method may return that error.
LookupSegment(seg PathSegment) (Node, error)
// MapIterator returns an iterator which yields key-value pairs
// traversing the node.
// If the node kind is anything other than a map, the iterator will
// yield error values.
//
// The iterator will yield every entry in the map; that is, it
// can be expected that itr.Next will be called node.Length times
// before itr.Done becomes true.
MapIterator() MapIterator
// ListIterator returns an iterator which yields index-value pairs
// traversing the node.
// If the node kind is anything other than a list, the iterator will
// yield error values.
//
// The iterator will yield every entry in the list; that is, it
// can be expected that itr.Next will be called node.Length times
// before itr.Done becomes true.
ListIterator() ListIterator
// Length returns the length of a list, or the number of entries in a map,
// or -1 if the node is of neither list nor map kind.
Length() int
// Undefined nodes are returned when traversing a struct field that is
// defined by a schema but unset in the data. (Undefined nodes are not
// possible otherwise; you'll only see them from `schema.TypedNode`.)
// The undefined flag is necessary so iterating over structs can
// unambiguously make the distinction between values that are
// present-and-null versus values that are absent.
IsUndefined() bool
IsNull() bool
AsBool() (bool, error)
AsInt() (int, error)
AsFloat() (float64, error)
AsString() (string, error)
AsBytes() ([]byte, error)
AsLink() (Link, error)
// NodeBuilder returns a NodeBuilder which can be used to build
// new nodes of the same implementation type as this one.
//
// For map and list nodes, the NodeBuilder's append-oriented methods
// will work using this node's values as a base.
// If this is a typed node, the NodeBuilder will carry the same
// typesystem constraints as this Node.
//
// (This feature is used by the traversal package, especially in
// e.g. traversal.Transform, for doing tree updates while keeping the
// existing implementation preferences and doing as many operations
// in copy-on-write fashions as possible.)
//
// ---
//
// More specifically, the contract of a NodeBuilder returned by this method
// is that it should be able to "replace" this node with a new one of
// similar properties.
// E.g., for a string, the builder must be able to build a new string.
// For a map, the builder must be able to build a new map.
// For a *struct* (when using typed nodes), the builder must be able to
// build new structs of the same type.
// Note that the promise doesn't extend further: there's no requirement
// that the builder be able to build maps if this node's kind is "string"
// (you can see why this lack-of-contract is important when considering
// typed nodes: if this node has a struct type, then should the builder
// be able to build other structs of different types? Of course not;
// there'd be no way to define which other types to build!).
// For nulls, this means the builder doesn't have to do much at all!
//
// (Some Nodes may return a NodeBuilder that can be used for much more
// than replacing their own kind: for example, Node implementations from
// the ipldfree package tend to return a NodeBuilder that can build any
// other ipldfree.Node (e.g. even the builder obtained from a string node
// will be able to build maps). This is not required by the contract;
// such packages only do so out of internal implementation convenience.)
//
// This "able to replace" behavior also has a specific application regarding
// nodes implementing Advanced Data Layouts: it means that the NodeBuilder
// returned by this method must produce a new Node using that same ADL.
// For example, if a Node is a map implemented by some sort of HAMT, its
// NodeBuilder must also produce a new HAMT.
NodeBuilder() NodeBuilder
}
Node represents a value in IPLD. Any point in a tree of data is a node: scalar values (like int, string, etc) are nodes, and so are recursive values (like map and list).
Nodes and kinds are described in the IPLD specs at https://github.com/ipld/specs/blob/master/data-model-layer/data-model.md .
Methods on the Node interface cover the superset of all possible methods for all possible kinds -- but some methods only make sense for particular kinds, and thus will only make sense to call on values of the appropriate kind. (For example, 'Length' on an int doesn't make sense, and 'AsInt' on a map certainly doesn't work either!) Use the ReprKind method to find out the kind of value before calling kind-specific methods. Individual method documentation states which kinds the method is valid for. (If you're familiar with the stdlib reflect package, you'll find the design of the Node interface very comparable to 'reflect.Value'.)
The Node interface is read-only. All of the methods on the interface are for examining values, and implementations should be immutable. The companion interface, NodeBuilder, provides the matching writable methods, and should be used to create a (thence immutable) Node.
Keeping Node immutable and separating mutation into NodeBuilder makes it possible to perform caching (or rather, memoization, since there's no such thing as cache invalidation for immutable systems) of computed properties of Node; use copy-on-write algorithms for memory efficiency; and to generally build pleasant APIs. Many library functions will rely on the immutability of Node (e.g., assuming that pointer-equal nodes do not change in value over time), so any user-defined Node implementations should be careful to uphold the immutability contract.
There are many different concrete types which implement Node. The primary purpose of various node implementations is to organize memory in the program in different ways -- some in-memory layouts may be more optimal for some programs than others, and changing the Node (and NodeBuilder) implementations lets the programmer choose.
For concrete implementations of Node, check out the "./impl/" folder, and the packages within it. "impl/free" should probably be your first start; the Node and NodeBuilder implementations in that package work for any data. Other packages are optimized for specific use-cases. Codegen tools can also be used to produce concrete implementations of Node; these may be specific to certain data, but still conform to the Node interface for interoperability and to support higher-level functions.
Nodes may also be *typed* -- see the 'schema' and 'impl/typed' packages. Typed nodes have additional constraints and behaviors (and have a `.Type().Kind()` in addition to their `.ReprKind()`!), but still behave as a regular Node in all the basic ways.
var Null Node = nullNode{}
var Undef Node = undefNode{}
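The "switch on ReprKind before calling kind-specific methods" advice above can be sketched as follows; the ReprKind constants and the intNode type here are simplified local stand-ins, not the package's real definitions:

```go
package main

import "fmt"

// Local stand-ins for ReprKind and a fragment of the Node interface.
type ReprKind uint8

const (
	ReprKind_Map ReprKind = iota
	ReprKind_List
	ReprKind_String
	ReprKind_Int
)

type Node interface {
	ReprKind() ReprKind
	AsString() (string, error)
	AsInt() (int, error)
}

type intNode int

func (n intNode) ReprKind() ReprKind        { return ReprKind_Int }
func (n intNode) AsString() (string, error) { return "", fmt.Errorf("wrong kind") }
func (n intNode) AsInt() (int, error)       { return int(n), nil }

// describe switches on the kind first, then calls the matching As* method.
func describe(n Node) string {
	switch n.ReprKind() {
	case ReprKind_String:
		s, _ := n.AsString()
		return "string: " + s
	case ReprKind_Int:
		i, _ := n.AsInt()
		return fmt.Sprintf("int: %d", i)
	default:
		return "recursive or other kind"
	}
}

func main() {
	fmt.Println(describe(intNode(7)))
}
```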
type NodeBuilder ¶
type NodeBuilder interface {
CreateMap() (MapBuilder, error)
AmendMap() (MapBuilder, error)
CreateList() (ListBuilder, error)
AmendList() (ListBuilder, error)
CreateNull() (Node, error)
CreateBool(bool) (Node, error)
CreateInt(int) (Node, error)
CreateFloat(float64) (Node, error)
CreateString(string) (Node, error)
CreateBytes([]byte) (Node, error)
CreateLink(Link) (Node, error)
}
NodeBuilder is an interface that describes creating new Node instances.
The Node interface is entirely read-only methods; a Node is immutable. Thus, we need a NodeBuilder system for creating new ones; the builder is mutable, and when we're done accumulating mutations, we take the accumulated data and produce an immutable Node out of it.
Separating mutation into NodeBuilder and keeping Node immutable makes it possible to perform caching (or rather, memoization, since there's no such thing as cache invalidation for immutable systems) of computed properties of Node; use copy-on-write algorithms for memory efficiency; and to generally build pleasant APIs.
Each package in `go-ipld-prime//impl/*` that implements ipld.Node also has a NodeBuilder implementation that produces new nodes of that same package's type.
The Node interface includes a method which returns a NodeBuilder; this builder must be able to produce a new node of the same concrete implementation as the original node. This is useful for algorithms that work on trees of nodes: this NodeBuilder getter will be used when an update deep in the tree causes a need to create several new nodes to propagate the change up through parent nodes.
NodeBuilder instances obtained from `Node.NodeBuilder()` may carry some additional logic or constraints with them to the new Node they produce. For example, a Node which is implemented using reflection to bind to a natively-typed struct will yield a NodeBuilder which contains a `reflect.Type` handle it can use to create a new value of that native type; similarly, schema-typed Nodes will yield a NodeBuilder that keeps the schema info and type constraints from that Node! (Continuing the schema.TypedNode example: if you have a schema.TypedNode that is constrained to be of some `type Foo = {Bar:Baz}` type, then any new Node produced from its NodeBuilder will still answer `n.(schema.TypedNode).Type().Name()` as `Foo`; and if `n.NodeBuilder().AmendMap().Insert(...)` is called with nodes of unmatching type given to the insertion, the builder will error!)
The NodeBuilder retrieved from a Node can also be used to do *updates*: consider the AmendMap and AmendList methods. These methods are useful not just for programmer convenience, but also because they can reuse memory, sharing any common segments of memory with the earlier Node. (In the NodeBuilder exposed by the `go-ipld-prime//impl/*` packages, these methods are equivalent to their Create* counterparts. As there's no "existing" node for them to refer to, it's treated the same as amending an empty node.)
type Path ¶
type Path struct {
// contains filtered or unexported fields
}
Path describes a series of steps across a tree or DAG of Node, where each segment in the path is a map key or list index (literally, Path is a slice of PathSegment values). Path is used in describing progress in a traversal; and can also be used as an instruction for traversing from one Node to another. Path values will also often be encountered as part of error messages.
(Note that Paths are useful as an instruction for traversing from *one* Node to *one* other Node; to do a walk from one Node and visit *several* Nodes based on some sort of pattern, look to IPLD Selectors, and the 'traversal/selector' package in this project.)
Path values are always relative. Observe how 'traversal.Focus' requires both a Node and a Path argument -- where to start, and where to go, respectively. Similarly, error values which include a Path will be speaking in reference to the "starting Node" in whatever context they arose from.
The canonical form of a Path is as a list of PathSegment. Each PathSegment is a string; by convention, the string should be in UTF-8 encoding and use NFC normalization, but all operations will regard the string as its constituent eight-bit bytes.
There are no illegal or magical characters in IPLD Paths (in particular, do not mistake them for UNIX system paths). IPLD Paths can only go down: that is, each segment must traverse one node. There is no ".." which means "go up"; and there is no "." which means "stay here". IPLD Paths have no magic behavior around characters such as "~". IPLD Paths do not have a concept of "globs" nor behave specially for a path segment string of "*" (but you may wish to see 'Selectors' for globbing-like features that traverse over IPLD data).
An empty string is a valid PathSegment. (This leads to some unfortunate complications when wishing to represent paths in a simple string format; however, consider that maps do exist in serialized data in the wild where an empty string is used as the key: it is important we be able to correctly describe and address this!)
A string containing "/" (or even being simply "/"!) is a valid PathSegment. (As with empty strings, this is unfortunate (in particular, because it very much doesn't match up well with expectations popularized by UNIX-like filesystems); but, as with empty strings, maps which contain such a key certainly exist, and it is important that we be able to regard them!)
A string starting, ending, or otherwise containing the NUL (\x00) byte is also a valid PathSegment. This follows from the rule of "a string is regarded as its constituent eight-bit bytes": an all-zero byte is not exceptional. In golang, this doesn't pose particular difficulty, but note this would be of marked concern for languages which have "C-style nul-terminated strings".
For an IPLD Path to be represented as a string, an encoding system including escaping is necessary. At present, there is not a single canonical specification for such an escaping; we expect to decide one in the future, but this is not yet settled and done. (This implementation has a 'String' method, but it contains caveats and may be ambiguous for some content. This may be fixed in the future.)
func NewPath ¶ added in v0.0.2
func NewPath(segments []PathSegment) Path
NewPath returns a Path composed of the given segments.
This constructor function does a defensive copy, in case your segments slice should mutate in the future. (Use NewPathNocopy if this is a performance concern, and you're sure you know what you're doing.)
func NewPathNocopy ¶ added in v0.0.2
func NewPathNocopy(segments []PathSegment) Path
NewPathNocopy is identical to NewPath but trusts that the segments slice you provide will not be mutated.
func ParsePath ¶
ParsePath converts a string to an IPLD Path, doing a basic parsing of the string using "/" as a delimiter to produce a segmented Path. This is a handy, but not a general-purpose nor spec-compliant (!), way to create a Path: it cannot represent all valid paths.
Multiple subsequent "/" characters will be silently collapsed. E.g., `"foo///bar"` will be treated equivalently to `"foo/bar"`. Prefixed and suffixed extraneous "/" characters are also discarded. This makes this constructor incapable of handling some possible Path values (specifically: paths with empty segments cannot be created with this constructor).
There is no escaping mechanism used by this function. This makes this constructor incapable of handling some possible Path values (specifically, a path segment containing "/" cannot be created, because it will always be interpreted as a segment separator).
No other "cleaning" of the path occurs. See the documentation of the Path struct; in particular, note that ".." does not mean "go up", nor does "." mean "stay here" -- correspondingly, there isn't anything to "clean" in the same sense as 'filepath.Clean' from the standard library filesystem path packages would.
If the provided string contains unprintable characters, or non-UTF-8 or non-NFC-canonicalized bytes, no remark will be made about this, and those bytes will remain part of the PathSegments in the resulting Path.
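The collapsing behavior described above can be sketched in a few lines; this is an illustrative reimplementation of the documented semantics, not the library's actual ParsePath code:

```go
package main

import (
	"fmt"
	"strings"
)

// parsePathSegments splits on "/" and drops empty segments, so
// "foo///bar" parses the same as "foo/bar", and leading/trailing
// slashes are discarded -- mirroring the ParsePath behavior above.
func parsePathSegments(s string) []string {
	var segs []string
	for _, part := range strings.Split(s, "/") {
		if part != "" {
			segs = append(segs, part)
		}
	}
	return segs
}

func main() {
	fmt.Println(parsePathSegments("foo///bar"))
	fmt.Println(parsePathSegments("/foo/bar/"))
}
```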
func (Path) AppendSegment ¶
func (p Path) AppendSegment(ps PathSegment) Path
AppendSegment is as per Join, but a shortcut when appending a single segment.
func (Path) AppendSegmentString ¶ added in v0.0.2
AppendSegmentString is as per Join, but a shortcut when appending single segments using strings.
func (Path) Join ¶
func (p Path) Join(p2 Path) Path
Join creates a new path composed of the concatenation of this and the given path's segments.
func (Path) Parent ¶
func (p Path) Parent() Path
Parent returns a path with the last of its segments popped off (or the zero path if it's already empty).
func (Path) Segments ¶
func (p Path) Segments() []PathSegment
Segments returns a slice of the path segment strings.
It is not lawful to mutate nor append the returned slice.
func (Path) String ¶
func (p Path) String() string
The String representation of a Path is simply the join of each segment with '/'. It does not include a leading nor trailing slash.
This is a handy, but not a general-purpose nor spec-compliant (!), way to reduce a Path to a string. There is no escaping mechanism used by this function, and as a result, not all possible valid Path values (such as those with empty segments or with segments containing "/") can be encoded unambiguously. For Path values containing these problematic segments, ParsePath applied to the string returned from this function may return a nonequal Path value.
No escaping for unprintable characters is provided. No guarantee that the resulting string is UTF-8 nor NFC canonicalized is provided unless all the constituent PathSegment had those properties.
type PathSegment ¶ added in v0.0.2
type PathSegment struct {
// contains filtered or unexported fields
}
PathSegment can describe either a key in a map, or an index in a list.
Create a PathSegment via either ParsePathSegment, PathSegmentOfString, or PathSegmentOfInt; or, via one of the constructors of Path, which will implicitly create PathSegment internally. Using PathSegment's natural zero value directly is discouraged (it will act like ParsePathSegment("0"), which is likely not what you'd expect).
Path segments are "stringly typed" -- they may be interpreted as either strings or ints depending on context. A path segment of "123" will be used as a string when traversing a node of map kind; and it will be converted to an integer when traversing a node of list kind. (If a path segment string cannot be parsed to an int when traversing a node of list kind, then traversal will error.) It is not possible to ask which kind (string or integer) a PathSegment is, because that is not defined -- this is *only* interpreted contextually.
Internally, PathSegment will store either a string or an integer, depending on how it was constructed, and will automatically convert to the other on request. (This means if two pieces of code communicate using PathSegment, one producing ints and the other expecting ints, then they will work together efficiently.) PathSegment in a Path produced by ParsePath generally have all strings internally, because there is no distinction possible when parsing a Path string (and attempting to pre-parse all strings into ints "just in case" would waste time in almost all cases).
Be cautious of attempting to use PathSegment as a map key! Due to the implementation detail of internal storage, it's possible for PathSegment values which are "equal" per PathSegment.Equal's definition to still be unequal in the eyes of golang's native maps. You should probably use the string values of the PathSegment as map keys. (This has the additional bonus of hitting a special fastpath that the golang built-in maps have specifically for plain string keys.)
func ParsePathSegment ¶ added in v0.0.2
func ParsePathSegment(s string) PathSegment
ParsePathSegment parses a string into a PathSegment, handling any escaping if present. (Note: there is currently no escaping specified for PathSegments, so this is currently functionally equivalent to PathSegmentOfString.)
func PathSegmentOfInt ¶ added in v0.0.2
func PathSegmentOfInt(i int) PathSegment
PathSegmentOfInt boxes an int into a PathSegment.
func PathSegmentOfString ¶ added in v0.0.2
func PathSegmentOfString(s string) PathSegment
PathSegmentOfString boxes a string into a PathSegment. It does not attempt to parse any escaping; use ParsePathSegment for that.
func (PathSegment) Equals ¶ added in v0.0.2
func (x PathSegment) Equals(o PathSegment) bool
Equals checks if two PathSegment values are equal.
Because PathSegment is "stringly typed", this comparison does not regard if one of the segments is stored as a string and one is stored as an int; if string values of two segments are equal, they are "equal" overall. In other words, `PathSegmentOfInt(2).Equals(PathSegmentOfString("2")) == true`! (You should still typically prefer this method over converting two segments to string and comparing those, because even though that may be functionally correct, this method will be faster if they're both ints internally.)
func (PathSegment) Index ¶ added in v0.0.2
func (ps PathSegment) Index() (int, error)
Index returns the PathSegment as an int, or returns an error if the segment is a string that can't be parsed as an int.
func (PathSegment) String ¶ added in v0.0.2
func (ps PathSegment) String() string
String returns the PathSegment as a string.
type ReprKind ¶
type ReprKind uint8
ReprKind represents the primitive kind in the IPLD data model. All of these kinds map directly onto serializable data.
Note that ReprKind contains the concept of "map", but not "struct" or "object" -- those are concepts that could be introduced in a type-system layer, but are *not* present in the data model layer, and therefore they aren't included in the ReprKind enum.
const (
	ReprKind_Invalid ReprKind = 0
	ReprKind_Map     ReprKind = '{'
	ReprKind_List    ReprKind = '['
	ReprKind_Null    ReprKind = '0'
	ReprKind_Bool    ReprKind = 'b'
	ReprKind_Int     ReprKind = 'i'
	ReprKind_Float   ReprKind = 'f'
	ReprKind_String  ReprKind = 's'
	ReprKind_Bytes   ReprKind = 'x'
	ReprKind_Link    ReprKind = '/'
)
type ReprKindSet ¶
type ReprKindSet []ReprKind
ReprKindSet is a type with a few enumerated consts that are commonly used (mostly, in error messages).
func (ReprKindSet) String ¶
func (x ReprKindSet) String() string
type StoreCommitter ¶
type StoreCommitter func(lnk Link) error
StoreCommitter is a thunk returned by a Storer which is used to "commit" the storage operation. It should be called after the associated writer is finished, similar to a 'Close' method, but it additionally takes a Link parameter, which is the identity of the content. Typically, this will cause an atomic operation in the storage system to move the already-written content into its final place (e.g. renaming a tempfile) as determined by the Link. (The Link parameter is necessarily given only at the end of the process rather than at the beginning, so that we can have content-addressable semantics while also supporting streaming writes.)
type Storer ¶
type Storer func(lnkCtx LinkContext) (io.Writer, StoreCommitter, error)
Storer functions are used to get a writer for raw serialized content, which will be committed to storage indexed by Link. A storer function is used by providing it to a LinkBuilder.Build() call.
The storer system comes in two parts: the Storer itself *starts* a storage operation (presumably to some e.g. tempfile) and returns a writer; the StoreCommitter returned with the writer is used to *commit* the final storage (much like a 'Close' operation for the writer).
Storers typically have some filesystem or database handle contained within their closure which is used to satisfy write operations.
LinkContext objects can be provided to give additional information to the storer, and will be automatically filled out when a Storer is used by systems in the traversal package; most Storer implementations should also work fine when given the zero value of LinkContext.
Storers are implicitly coupled to a Link implementation and have some "extra" knowledge of the concrete Link type. This is necessary since there is no mandated standard for how to serially represent Link itself, and such a representation is typically needed by a Storer implementation.
Source Files ¶
Directories ¶
| Path | Synopsis |
|---|---|
| _rsrch/nodesolution/node | The 'node' package gathers various general purpose Node implementations; the first one you should jump to is 'node/basic'. |
| impl | |
| linking | |
| must | Package 'must' provides another alternative to the 'fluent' package, providing many helpful functions for wrapping methods with multiple returns into a single return (converting errors into panics). |
| schema/tests | The `schema/tests` package contains behavioral tests for type-constrained Node implementations -- meant to work with either codegenerated Nodes OR with the runtime schema.TypedNode wrappers, checking for the same behavior on each. |
| storage | |
| storage/bsadapter (module) | |
| storage/bsrvadapter (module) | |
| storage/dsadapter (module) | |
| corpus | The corpus package exports some values useful for building tests and benchmarks. |
| traversal | This package provides functional utilities for traversing and transforming IPLD nodes. |