This guide covers deploying Apiary in different configurations: single-node, distributed, and Kubernetes.
Single-node deployment uses embedded NATS and BadgerDB. This is the simplest configuration and suitable for development, testing, and small production workloads.
1. Build binaries:
make build
2. Create data directory:
sudo mkdir -p /var/apiary
sudo chown $USER:$USER /var/apiary
3. Start apiaryd:
./bin/apiaryd -data-dir /var/apiary -port 8080
Default configuration: data directory /var/apiary, HTTP port 8080.
Verify the server with its health check endpoints:
# Liveness probe
curl http://localhost:8080/healthz
# Readiness probe
curl http://localhost:8080/ready
Create /etc/systemd/system/apiaryd.service:
[Unit]
Description=Apiary Orchestrator
After=network.target
[Service]
Type=simple
User=apiary
Group=apiary
ExecStart=/usr/local/bin/apiaryd -data-dir /var/apiary -port 8080
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target
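The unit above expects the binary at /usr/local/bin/apiaryd and a dedicated apiary user, neither of which the earlier build steps create. A minimal setup sketch (the system-user flags shown are a common convention, not something Apiary mandates):

```shell
# Install the binary where the unit file expects it
sudo cp bin/apiaryd /usr/local/bin/apiaryd

# Create a non-login system user and hand it the data directory
sudo useradd --system --no-create-home --shell /usr/sbin/nologin apiary
sudo chown -R apiary:apiary /var/apiary
```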
Enable and start:
sudo systemctl enable apiaryd
sudo systemctl start apiaryd
sudo systemctl status apiaryd
Distributed deployment uses external NATS, Redis (optional), and etcd for leader election. This configuration supports horizontal scaling and high availability.
Distributed deployment consists of a NATS cluster for messaging, an etcd cluster for leader election, an optional Redis instance, and one or more apiaryd instances.
1. Install NATS:
# Download NATS server
wget https://github.com/nats-io/nats-server/releases/download/v2.10.7/nats-server-v2.10.7-linux-amd64.zip
unzip nats-server-v2.10.7-linux-amd64.zip
sudo mv nats-server-v2.10.7-linux-amd64/nats-server /usr/local/bin/
2. Start NATS cluster:
# Node 1
nats-server -p 4222 -cluster nats://localhost:6222 -routes nats://localhost:6223,nats://localhost:6224
# Node 2
nats-server -p 4223 -cluster nats://localhost:6223 -routes nats://localhost:6222,nats://localhost:6224
# Node 3
nats-server -p 4224 -cluster nats://localhost:6224 -routes nats://localhost:6222,nats://localhost:6223
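To confirm the three nodes have meshed, you can enable the NATS HTTP monitoring endpoint (the -m flag, not used in the commands above) and inspect the route list:

```shell
# Start node 1 with monitoring enabled on port 8222
nats-server -p 4222 -m 8222 -cluster nats://localhost:6222 \
  -routes nats://localhost:6223,nats://localhost:6224 &

# Once the cluster forms, each node should report routes to the other two
curl -s http://localhost:8222/routez
```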
3. Enable JetStream for persistence (add the -js flag and a storage directory on each node):
nats-server -p 4222 -js -sd /var/nats/data -cluster nats://localhost:6222
1. Install etcd:
ETCD_VER=v3.5.9
wget https://github.com/etcd-io/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz
tar xzvf etcd-${ETCD_VER}-linux-amd64.tar.gz
sudo mv etcd-${ETCD_VER}-linux-amd64/etcd* /usr/local/bin/
2. Start etcd cluster:
# Node 1
etcd --name infra1 --initial-advertise-peer-urls http://127.0.0.1:2380 \
--listen-peer-urls http://127.0.0.1:2380 \
--listen-client-urls http://127.0.0.1:2379 \
--advertise-client-urls http://127.0.0.1:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster infra1=http://127.0.0.1:2380,infra2=http://127.0.0.2:2380,infra3=http://127.0.0.3:2380 \
--initial-cluster-state new
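Nodes infra2 and infra3 are started the same way with their own names and addresses. Once all three members are up, the cluster can be verified with etcdctl:

```shell
# Check that every member answers
etcdctl --endpoints=http://127.0.0.1:2379,http://127.0.0.2:2379,http://127.0.0.3:2379 endpoint health

# List the three members and their peer URLs
etcdctl --endpoints=http://127.0.0.1:2379 member list
```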
Finally, start each apiaryd instance with the external NATS and etcd endpoints:
./bin/apiaryd \
-data-dir /var/apiary \
-port 8080 \
-nats-urls "nats://nats1:4222,nats://nats2:4222,nats://nats3:4222" \
-etcd-endpoints "http://etcd1:2379,http://etcd2:2379,http://etcd3:2379"
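With multiple apiaryd instances pointed at the same etcd cluster, exactly one should win the election at a time. A quick sanity check, using the /apiary/leader key this guide references for troubleshooting:

```shell
# Show the current leader election key and its holder
etcdctl --endpoints=http://etcd1:2379 get /apiary/leader
```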
Apiary can be deployed as a StatefulSet in Kubernetes for production use.
Create k8s/configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
name: apiaryd-config
namespace: apiary
data:
nats-urls: "nats://nats.nats.svc.cluster.local:4222"
etcd-endpoints: "http://etcd-client.etcd.svc.cluster.local:2379"
data-dir: "/data"
Create k8s/statefulset.yaml with health checks, resource limits, and volume claims.
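A sketch of what such a StatefulSet might look like. The image name, replica count, resource figures, and storage size are placeholders to adapt; the NATS and etcd flags repeat the ConfigMap values inline for brevity rather than wiring them in via env or volume mounts:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: apiaryd
  namespace: apiary
spec:
  serviceName: apiaryd
  replicas: 3
  selector:
    matchLabels:
      app: apiaryd
  template:
    metadata:
      labels:
        app: apiaryd
    spec:
      containers:
        - name: apiaryd
          image: apiary/apiaryd:latest   # placeholder image
          args:
            - "-data-dir"
            - "/data"
            - "-port"
            - "8080"
            - "-nats-urls"
            - "nats://nats.nats.svc.cluster.local:4222"
            - "-etcd-endpoints"
            - "http://etcd-client.etcd.svc.cluster.local:2379"
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet: {path: /healthz, port: 8080}
          readinessProbe:
            httpGet: {path: /ready, port: 8080}
          resources:
            limits: {cpu: "1", memory: 1Gi}
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```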
Create k8s/service.yaml for LoadBalancer or ClusterIP service.
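A matching Service sketch (ClusterIP shown; change type to LoadBalancer for external access):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: apiaryd
  namespace: apiary
spec:
  type: ClusterIP
  selector:
    app: apiaryd
  ports:
    - name: http
      port: 8080
      targetPort: 8080
```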
Apply the manifests:
kubectl apply -f k8s/configmap.yaml
kubectl apply -f k8s/statefulset.yaml
kubectl apply -f k8s/service.yaml
# Check status
kubectl get pods -n apiary
kubectl logs -n apiary -l app=apiaryd
# Port forward for local access
kubectl port-forward -n apiary svc/apiaryd 8080:8080
Deploy NATS using the official Helm chart:
helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm install nats nats/nats \
--namespace nats \
--create-namespace \
--set nats.jetstream.enabled=true
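After the chart installs, a quick sanity check (pod and service names follow the chart's defaults for a release named nats):

```shell
# NATS pods should reach Running with JetStream enabled
kubectl get pods -n nats
kubectl get svc -n nats
```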
Apiary exposes OpenTelemetry metrics. Configure your observability backend to scrape metrics.
Logs are structured JSON. Use a log aggregation system (ELK, Loki, etc.) to collect and analyze logs.
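Because each log line is a single JSON object, tools like jq can filter them directly. The level and msg field names below are illustrative assumptions, not a documented Apiary log schema:

```shell
# Filter a sample log stream down to error-level entries
printf '%s\n' \
  '{"level":"error","msg":"nats connection lost"}' \
  '{"level":"info","msg":"reconnected"}' \
  | jq -c 'select(.level == "error")'
# → {"level":"error","msg":"nats connection lost"}
```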
Monitor health check endpoints:
# Liveness (is the process running?)
curl http://localhost:8080/healthz
# Readiness (can it serve traffic?)
curl http://localhost:8080/ready
BadgerDB data is stored in the data directory. To back up:
# Stop apiaryd
sudo systemctl stop apiaryd
# Backup data directory
tar -czf apiary-backup-$(date +%Y%m%d).tar.gz /var/apiary
# Restart apiaryd
sudo systemctl start apiaryd
To restore from a backup:
# Stop apiaryd
sudo systemctl stop apiaryd
# Restore data
tar -xzf apiary-backup-YYYYMMDD.tar.gz -C /
# Restart apiaryd
sudo systemctl start apiaryd
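The manual backup steps above can be wrapped in a script and scheduled via cron; a minimal sketch, where the backup directory and schedule are assumptions to adapt:

```shell
#!/bin/sh
# Stop the service, archive the data directory, restart.
set -e
DATA_DIR=/var/apiary
BACKUP_DIR=/var/backups/apiary
mkdir -p "$BACKUP_DIR"
systemctl stop apiaryd
tar -czf "$BACKUP_DIR/apiary-backup-$(date +%Y%m%d).tar.gz" "$DATA_DIR"
systemctl start apiaryd
```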
Common troubleshooting commands:
# Check whether the port is already in use
netstat -tuln | grep 8080
# Follow apiaryd logs
journalctl -u apiaryd -f
# Check etcd cluster health and membership
etcdctl endpoint health
etcdctl member list
# Inspect the current leader key
etcdctl get /apiary/leader
# Check the NATS server version
nats-server -v
# Check system resource usage
top
htop