ha
Highly available leaderless SQLite cluster powered by embedded NATS JetStream server.
- Connect using HTTP API or PostgreSQL Wire Protocol
- Use ha-sync SQLite extension to create live local read replicas
- Change Data Capture (CDC)
Overview
1. Installation
1.1 Install from source
git clone https://github.com/litesql/ha.git
cd ha
go install
2. Usage
- Start the first ha node (pass the -m flag to use an in-memory database)
ha -n node1 -m
- Start another ha node
ha -n node2 -m --port 8081 --pg-port 5433 --nats-port 0 --replication-url nats://localhost:4222
- Create a table
curl -d '[
{
"sql": "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)"
}
]' \
http://localhost:8080
- Insert some data using an HTTP client
curl -d '[
{
"sql": "INSERT INTO users(name) VALUES(:name)",
"params": {"name": "HA User"}
}
]' \
http://localhost:8080
- Or use a PostgreSQL client
PGPASSWORD="ha" psql -h localhost -U ha
INSERT INTO users(name) VALUES('HA user from PostgreSQL Wire Protocol');
SELECT * FROM users;
- Connect to the second node and verify that the data was replicated
PGPASSWORD="ha" psql -h localhost -U ha -p 5433
SELECT * FROM users;
2.1 Loading an existing database into memory
ha -m mydatabase.db
2.2 Using a database on disk
ha "file:mydatabase.db?_journal=WAL&_busy_timeout=500"
2.3 Loading the database from the latest snapshot
3. Local Read Replicas
- Use the ha-sync SQLite extension to create local embedded replicas of a remote HA database.
4. Using Docker
docker run --name ha \
-e HA_MEMORY=true \
-p 5432:5432 -p 8080:8080 -p 4222:4222 \
ghcr.io/litesql/ha:latest
- Mount a volume at /data to persist the NATS stream state.
5. PostgreSQL Wire Protocol
- You can use any PostgreSQL driver to connect to ha.
- The SQLite parser engine processes the commands.
- PostgreSQL functions (and visual editors such as pgAdmin, DBeaver, etc.) are not supported.
6. HTTP API
6.1 Using bind parameters
curl -d '[{
"sql": "INSERT INTO users(name) VALUES(:name)",
"params": {"name": "HA user"}
}]' \
http://localhost:8080
{
"results": [
{
"columns": [
"rows_affected",
"last_insert_id"
],
"rows": [
[
1,
3
]
]
}
]
}
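For clients that build these batches programmatically, the request body is just a JSON array of objects with a `sql` key and an optional `params` key. A minimal sketch in Python (the `make_batch` helper is our own name, not part of ha):

```python
import json

def make_batch(*commands):
    """Build the JSON body for ha's HTTP API: a list of
    {"sql": ..., "params": ...} objects. "params" is optional."""
    batch = []
    for sql, *params in commands:
        cmd = {"sql": sql}
        if params:
            cmd["params"] = params[0]
        batch.append(cmd)
    return json.dumps(batch)

body = make_batch(
    ("INSERT INTO users(name) VALUES(:name)", {"name": "HA user"}),
)
print(body)
```

The resulting string can be sent as-is to the node's HTTP endpoint (what `curl -d` does above).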
6.2 Multiple commands (one transaction)
curl -d '[
{
"sql": "INSERT INTO users(name) VALUES(:name)",
"params": {"name": "new HA user"}
},
{
"sql": "DELETE FROM users WHERE name = :name",
"params": {"name": "new HA user"}
},
{
"sql": "SELECT * FROM users"
}
]' \
http://localhost:8080
{
"results": [
{
"columns": [
"rows_affected",
"last_insert_id"
],
"rows": [
[
1,
2
]
]
},
{
"columns": [
"rows_affected",
"last_insert_id"
],
"rows": [
[
1,
2
]
]
},
{
"columns": [
"id",
"name"
],
"rows": [
[
1,
"HA user"
]
]
}
]
}
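Each entry in `results` pairs a `columns` array with positional `rows`, in the same order as the submitted commands. A small helper (again ours, not part of ha) can zip them into dicts for easier consumption:

```python
import json

def rows_as_dicts(response_json):
    """Convert an ha HTTP API response into a list of result sets,
    each a list of {column: value} dicts."""
    doc = json.loads(response_json)
    out = []
    for result in doc["results"]:
        cols = result["columns"]
        out.append([dict(zip(cols, row)) for row in result["rows"]])
    return out

# Response shape as documented above (SELECT result only).
resp = '{"results": [{"columns": ["id", "name"], "rows": [[1, "HA user"]]}]}'
print(rows_as_dicts(resp))  # [[{'id': 1, 'name': 'HA user'}]]
```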
6.3 Backup database
curl -O -J http://localhost:8080
6.4 Take a snapshot and save it to the NATS Object Store
curl -X POST http://localhost:8080/snapshot
6.5 Get the latest snapshot from the NATS Object Store
curl -O -J http://localhost:8080/snapshot
6.6 List the status of all replications
curl http://localhost:8080/replications
6.7 Get replication status
curl http://localhost:8080/replications/{name}
7. Replication
- You can write to any server
- Uses embedded or external NATS JetStream cluster
- NATS JetStream guarantees "at-least-once" message delivery
- All DML (INSERT, UPDATE, DELETE) operations are idempotent
- Last writer wins
- DDL commands are replicated (since v0.0.7)
{
"node": "ha_node_name",
"changes": [
{
"database": "main",
"table": "users",
"columns": [
"id",
"name"
],
"operation": "INSERT",
"new_rowid": 2,
"new_values": [
2,
"new HA user"
]
},
{
"database": "main",
"table": "users",
"columns": [
"id",
"name"
],
"operation": "DELETE",
"old_rowid": 2,
"old_values": [
2,
"new HA user"
]
}
],
"timestamp_ns": 1758574275504509677
}
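Because delivery is at-least-once, any consumer of these change records must tolerate duplicates. The following sketch shows how such a record could be applied idempotently to a plain SQLite database (the `apply_change` helper is ours; ha's internal applier may differ):

```python
import sqlite3

def apply_change(con, change):
    """Apply one CDC change record. New values carry the primary key,
    so INSERT OR REPLACE makes redelivery of the same message a no-op."""
    table = change["table"]
    if change["operation"] in ("INSERT", "UPDATE"):
        cols = ", ".join(change["columns"])
        marks = ", ".join("?" for _ in change["columns"])
        con.execute(
            f"INSERT OR REPLACE INTO {table} ({cols}) VALUES ({marks})",
            change["new_values"],
        )
    elif change["operation"] == "DELETE":
        # Deleting an already-deleted rowid is likewise harmless.
        con.execute(f"DELETE FROM {table} WHERE rowid = ?", [change["old_rowid"]])
    con.commit()

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE IF NOT EXISTS users "
            "(id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
insert = {"database": "main", "table": "users", "columns": ["id", "name"],
          "operation": "INSERT", "new_rowid": 2, "new_values": [2, "new HA user"]}
apply_change(con, insert)
apply_change(con, insert)  # duplicate delivery: still exactly one row
print(con.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # 1
```

Replaying the same INSERT leaves the table unchanged, which is what makes at-least-once delivery safe for DML.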
7.2 Replication limitations
- Tables created WITHOUT ROWID are not replicated
- Replication is not triggered when conflicting rows are deleted by an ON CONFLICT REPLACE clause.
- Use idempotent DDL commands (CREATE ... IF NOT EXISTS and DROP ... IF EXISTS)
- Writing to any node in the cluster improves availability, but it can lead to consistency issues in certain edge cases. If your application values consistency over availability, route all write operations through a single cluster node.
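The idempotent-DDL advice is plain SQLite behavior and easy to verify locally: guarded statements can be replayed without error, which is what makes redelivered schema changes safe under at-least-once delivery.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Replaying guarded DDL is harmless, so duplicate delivery of a
# schema change does not break the replica.
for _ in range(2):
    con.execute("CREATE TABLE IF NOT EXISTS users "
                "(id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
    con.execute("CREATE INDEX IF NOT EXISTS users_name ON users(name)")
con.execute("DROP INDEX IF EXISTS users_name")
con.execute("DROP INDEX IF EXISTS users_name")  # second drop is a no-op, not an error
```

Unguarded DDL (plain CREATE TABLE, DROP INDEX) would instead raise an error on the second delivery.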