ants

README

Ants Watch

ants watch is a DHT client monitoring tool. It logs the activity of all nodes in a DHT network by carefully placing ants in the DHT keyspace. To use the DHT, nodes must perform routing table maintenance, which consists of sending requests to several other nodes close to themselves in the DHT keyspace. ants watch ensures that at least one of these requests will always hit one of the deployed ants. When a request hits an ant, we record information about the requesting peer, such as its agent version, supported protocols, IP addresses, and more.

Supported networks:

  • Celestia (Mainnet, Arabica, Mocha)
  • Avail (Mainnet Light Client)

Methodology

  • An ant is a lightweight libp2p DHT node that participates in the DHT network and logs incoming requests.
  • ants participate in the DHT network as DHT server nodes. ants need to be dialable by other nodes in the network. Hence, ants-watch must run on a public IP address either with port forwarding properly configured (including local and gateway firewalls) or UPnP enabled.
  • The tool releases ants (i.e., spawns new ant nodes) at targeted locations in the keyspace in order to occupy and watch the full keyspace.
  • The tool's logic is based on the fact that peer routing requests are sent to the k closest nodes in the keyspace, and that routing table updates by DHT client (and server) nodes need to find the k closest DHT server peers to themselves. Therefore, placing approximately one ant for every k DHT server nodes can capture all DHT client nodes over time (see the sketch after this list).
  • The routing table update process varies across implementations, but defaults to 10 minutes in the go-libp2p implementation. This means that ants will record the existence of DHT client nodes approximately every 10 minutes (or whatever the routing table update interval is).
  • Depending on the network size, the number of ants and their locations in the keyspace are adjusted automatically.
  • The network size and peer distribution are obtained by querying an external Nebula database.
  • All ants run from within the same process, sharing the same DHT records.
  • The ant queen is responsible for spawning the ants, adjusting their number, and monitoring them, as well as gathering their logs and persisting them to a central database.
  • ants-watch does not operate like a crawler, which captures the number of DHT client nodes after a single run. Instead, it logs all received DHT requests and must therefore run continuously to report the number of DHT client nodes over time.
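
As a back-of-the-envelope sketch of the coverage argument above: with requests fanning out to the k closest DHT servers, roughly one ant per k servers suffices. The helper name estimateAntCount below is hypothetical, and this is not the actual placement logic, which works on a binary trie over the keyspace (see "Ants Key Generation" below):

package main

import (
	"fmt"
	"math"
)

// estimateAntCount: with requests fanning out to the k closest DHT servers,
// roughly one ant per k servers is enough to see every request at least once.
func estimateAntCount(dhtServers, k int) int {
	return int(math.Ceil(float64(dhtServers) / float64(k)))
}

func main() {
	// Example: 20,000 DHT server nodes with the common bucket size k = 20.
	fmt.Println(estimateAntCount(20_000, 20)) // -> 1000
}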

Setup

Prerequisites

You need go-migrate to run the ClickHouse database migrations:

make tools

# or

go install -tags 'clickhouse' github.com/golang-migrate/migrate/v4/cmd/migrate@v4.15.2

You can then start a ClickHouse database with:

make local-clickhouse

# or

docker run --name ants-clickhouse --rm -p 9000:9000 -p 8123:8123 -e CLICKHOUSE_DB=ants_local -e CLICKHOUSE_USER=ants_local -e CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT=1 -e CLICKHOUSE_PASSWORD=password clickhouse/clickhouse-server

This will start a ClickHouse server with the container name ants-clickhouse that's accessible on the non-SSL native port 9000. The relevant database parameters are:

  • host: localhost
  • port: 9000
  • username: ants_local
  • password: password
  • database: ants_local
  • secure: false
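
To verify that the database is reachable with these parameters, a quick connectivity check using the official clickhouse-go driver could look like this (a standalone sketch, not part of ants-watch):

package main

import (
	"context"
	"fmt"

	"github.com/ClickHouse/clickhouse-go/v2"
)

func main() {
	// Connection parameters matching the local setup above.
	conn, err := clickhouse.Open(&clickhouse.Options{
		Addr: []string{"localhost:9000"},
		Auth: clickhouse.Auth{
			Database: "ants_local",
			Username: "ants_local",
			Password: "password",
		},
	})
	if err != nil {
		panic(err)
	}
	if err := conn.Ping(context.Background()); err != nil {
		panic(err)
	}
	fmt.Println("clickhouse is up")
}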

Then you need to apply the migrations with:

make local-migrate-up

This will take the migration files in the ./db/migrations directory and strip all the Replicated merge tree prefixes before applying the migrations. The Replicated merge tree table engines only work with a clustered ClickHouse deployment (e.g., ClickHouse Cloud). When running locally you only have a single ClickHouse instance, so applying the Replicated migrations would fail.
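
The prefix stripping is a plain textual rewrite of the engine names. A naive Go equivalent of what the make target does (illustrative only; the actual target may work differently) is:

package main

import (
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Rewrite Replicated* engines to their single-node counterparts,
	// e.g. ReplicatedMergeTree -> MergeTree, before running migrate.
	files, err := filepath.Glob("db/migrations/*.sql")
	if err != nil {
		panic(err)
	}
	for _, f := range files {
		b, err := os.ReadFile(f)
		if err != nil {
			panic(err)
		}
		out := strings.ReplaceAll(string(b), "Replicated", "")
		if err := os.WriteFile(f, []byte(out), 0o644); err != nil {
			panic(err)
		}
	}
}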

I'm all ears for ideas on how to improve the workflow here.

Configuration

The following environment variables should be set for ants-watch:

ANTS_CLICKHOUSE_ADDRESS=localhost:9000
ANTS_CLICKHOUSE_DATABASE=ants_local
ANTS_CLICKHOUSE_USERNAME=ants_local
ANTS_CLICKHOUSE_PASSWORD=password
ANTS_CLICKHOUSE_SSL=false

ANTS_NEBULA_CONNSTRING=postgres://nebula:password@localhost/nebula?sslmode=disable # replace with the proper values for the database you want to use
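
These variables map one-to-one onto the ClickHouse connection settings. A sketch of reading them into a config struct (the struct here is an illustrative stand-in, not ants-watch's actual configuration type):

package main

import (
	"fmt"
	"os"
)

// clickhouseConfig is an illustrative stand-in for the real config type.
type clickhouseConfig struct {
	Address  string
	Database string
	Username string
	Password string
	SSL      bool
}

func main() {
	cfg := clickhouseConfig{
		Address:  os.Getenv("ANTS_CLICKHOUSE_ADDRESS"),
		Database: os.Getenv("ANTS_CLICKHOUSE_DATABASE"),
		Username: os.Getenv("ANTS_CLICKHOUSE_USERNAME"),
		Password: os.Getenv("ANTS_CLICKHOUSE_PASSWORD"),
		SSL:      os.Getenv("ANTS_CLICKHOUSE_SSL") == "true",
	}
	fmt.Printf("%+v\n", cfg)
}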

Usage

Once the database is set up and migrations are applied, you can start the honeypot.

ants-watch needs to be dialable by other nodes in the network. Hence, it must run on a public IP address either with port forwarding properly configured (including local and gateway firewalls) or UPnP enabled.

Queen

To start the ants queen, you can run the following command:

go run ./cmd/ants queen --upnp # for UPnP
# or
go run ./cmd/ants queen --first.port=<port> --num.ports=<count> # for port forwarding

When UPnP is disabled, ports first.port through first.port + num.ports - 1 must be forwarded to the machine running ants-watch. ants-watch will be able to spawn at most num.ports distinct ants.
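
To make the port arithmetic concrete, here is a toy helper (hypothetical, not part of the codebase) that lists the ports to forward:

package main

import "fmt"

// forwardedPorts returns the ports that must be reachable when UPnP is off:
// firstPort through firstPort + nPorts - 1, one per concurrently running ant.
func forwardedPorts(firstPort, nPorts int) []int {
	ports := make([]int, nPorts)
	for i := range ports {
		ports[i] = firstPort + i
	}
	return ports
}

func main() {
	fmt.Println(forwardedPorts(6000, 4)) // -> [6000 6001 6002 6003]
}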

Health

You can run a health check on the honeypot by running the following command:

go run ./cmd/ants health

Ants Key Generation

The queen ant periodically queries the Nebula database to retrieve the list of connected DHT servers. The Kademlia identifiers of these peers are then inserted into a binary trie. Using this binary trie, the queen defines keyspace zones containing at most bucket_size - 1 peers each. One ant must be present in each of these zones in order to capture all DHT requests reaching the bucket_size closest peers to any target key.

Kademlia identifiers are derived from a libp2p peer ID, which itself is derived from a cryptographic key pair. Hence, generating a key that falls within a specific zone of the binary trie isn't trivial and requires brute force. All keys generated during the brute force are persisted to disk, since they may be useful in the future. When an ant is no longer needed, its key is marked as available for reuse. This also allows the ants to reuse the same peer IDs across multiple runs of the honeypot.
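
The brute force is a generate-and-test loop: create a key pair, derive the peer ID, hash it into the Kademlia keyspace, and keep it only if it lands in the wanted zone. The sketch below assumes go-libp2p; bruteForceKey is a hypothetical helper that tests only the first byte of the identifier, whereas the real code matches variable-length bitstr.Key prefixes and persists every candidate:

package main

import (
	"crypto/rand"
	"crypto/sha256"
	"fmt"

	"github.com/libp2p/go-libp2p/core/crypto"
	"github.com/libp2p/go-libp2p/core/peer"
)

// bruteForceKey generates Ed25519 keys until the Kademlia identifier
// (SHA-256 of the peer ID) starts with the wanted high-order bits.
func bruteForceKey(prefix byte, bits int) (crypto.PrivKey, peer.ID, error) {
	mask := byte(0xFF) << (8 - bits)
	for {
		priv, pub, err := crypto.GenerateEd25519Key(rand.Reader)
		if err != nil {
			return nil, "", err
		}
		pid, err := peer.IDFromPublicKey(pub)
		if err != nil {
			return nil, "", err
		}
		kadID := sha256.Sum256([]byte(pid))
		if kadID[0]&mask == prefix&mask {
			return priv, pid, nil
		}
	}
}

func main() {
	// Find a key whose Kademlia ID starts with bits 101 (~8 tries expected).
	_, pid, err := bruteForceKey(0b1010_0000, 3)
	if err != nil {
		panic(err)
	}
	fmt.Println("found peer ID:", pid)
}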

Maintainers

Contributing

Feel free to dive in! Open an issue or submit PRs.

This project follows the Contributor Covenant Code of Conduct.

License

MIT © ProbeLab

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func BootstrapPeers

func BootstrapPeers(net Network) []peer.AddrInfo

func NsToCid

func NsToCid(ns string) (cid.Cid, error)

func PeerIDToKadID

func PeerIDToKadID(pid peer.ID) bit256.Key

PeerIDToKadID converts a libp2p peer.ID to its binary Kademlia identifier.

func ProtocolID

func ProtocolID(net Network) string

func UserAgent

func UserAgent(net Network) string

Types

type Ant

type Ant struct {
	// contains filtered or unexported fields
}

func SpawnAnt

func SpawnAnt(ctx context.Context, ps peerstore.Peerstore, ds ds.Batching, cfg *AntConfig) (*Ant, error)
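
A hedged usage sketch (the package import name and all config values are assumptions; AntConfig fields not shown here, e.g. PrivateKey, ProtocolID, and BootstrapPeers, may need to be set for Validate to pass):

// In-memory peerstore and datastore from go-libp2p and go-datastore.
ps, err := pstoremem.NewPeerstore()
if err != nil {
	panic(err)
}
store := dssync.MutexWrap(ds.NewMapDatastore())
events := make(chan ants.RequestEvent, 64)
ant, err := ants.SpawnAnt(ctx, ps, store, &ants.AntConfig{
	UserAgent:    "ants-sketch", // placeholder values
	Port:         6000,
	RequestsChan: events,
})
if err != nil {
	panic(err)
}
defer ant.Close()
for evt := range events {
	fmt.Println(evt.Remote, evt.Type) // log requesting peer and message type
}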

func (*Ant) Close

func (a *Ant) Close() error

type AntConfig

type AntConfig struct {
	PrivateKey     crypto.PrivKey
	UserAgent      string
	Port           int
	ProtocolID     string
	BootstrapPeers []peer.AddrInfo
	RequestsChan   chan<- RequestEvent
	CertPath       string
	Telemetry      *metrics.Telemetry
}

func (*AntConfig) Validate

func (cfg *AntConfig) Validate() error

type KeysDB

type KeysDB struct {
	// contains filtered or unexported fields
}

func NewKeysDB

func NewKeysDB(filepath string) *KeysDB

func (*KeysDB) MatchingKeys

func (db *KeysDB) MatchingKeys(prefixes []bitstr.Key, returned []crypto.PrivKey) []crypto.PrivKey

MatchingKeys returns a list of private keys whose Kademlia IDs match the provided list of prefixes. It also writes the returned private keys back to disk for future use.

type Network

type Network string
const (
	// CelestiaMainnet corresponds to the main network. See: celestiaorg/networks.
	CelestiaMainnet Network = "celestia-mainnet"
	// CelestiaArabica corresponds to the Arabica testnet. See: celestiaorg/networks.
	CelestiaArabica Network = "celestia-arabica-11"
	// CelestiaMocha corresponds to the Mocha testnet. See: celestiaorg/networks.
	CelestiaMocha Network = "celestia-mocha-4"
	// AvailMainnetLC corresponds to the light client mainnet from avail
	AvailMainnetLC Network = "avail-mnlc"
)

type Queen

type Queen struct {
	// contains filtered or unexported fields
}

func NewQueen

func NewQueen(clickhouseClient db.Client, nebulaClient nebulav1.NebulaServiceClient, cfg *QueenConfig) (*Queen, error)

func (*Queen) Run

func (q *Queen) Run(ctx context.Context) error

Run makes the queen orchestrate the ant nest.

type QueenConfig

type QueenConfig struct {
	KeysDBPath      string
	CertsPath       string
	NPorts          int
	FirstPort       int
	UPnP            bool
	BatchSize       int
	BatchTime       time.Duration
	CrawlInterval   time.Duration
	CacheSize       int
	BucketSize      int
	UserAgent       string
	BootstrapPeers  []peer.AddrInfo
	ProtocolID      string
	ThrottleTimeout time.Duration
	Telemetry       *metrics.Telemetry
}

type RequestEvent

type RequestEvent struct {
	Timestamp    time.Time
	Self         peer.ID
	Remote       peer.ID
	Type         pb.Message_MessageType
	Target       mh.Multihash
	AgentVersion string
	Protocols    []protocol.ID
	Maddrs       []multiaddr.Multiaddr
	ConnMaddr    multiaddr.Multiaddr
}

func (*RequestEvent) IsIdentified

func (r *RequestEvent) IsIdentified() bool

func (*RequestEvent) MaddrStrings

func (r *RequestEvent) MaddrStrings() []string

Directories

Path               Synopsis
cmd/ants           command
proto/nebula/v1    Package nebulav1 is a generated GoMock package.
