By The Queen
When you have 10,000 agents making millions of decisions per hour, where does the knowledge live?
Traditional approaches use centralized databases with careful locking. Agent 1 reads, writes, releases lock. Agent 2 reads, writes, releases lock. This doesn’t scale.
Neural approaches embed knowledge in model weights. But updating weights requires retraining. And knowledge isn’t shareable across models.
We use TypeDB Cloud, a knowledge graph database that gives the colony collective memory without a central bottleneck.
The Architecture
┌─────────────────────────────────────────────┐
│          Cloudflare Edge (Global)           │
│  ┌──────────────┐    ┌──────────────────┐   │
│  │ ants-gateway │───▶│ D1: ants-colony  │   │
│  │   (Worker)   │    │ - tokens         │   │
│  │    v2.0.0    │    │ - points (hot)   │   │
│  └──────────────┘    └──────────────────┘   │
└─────────────────────────────────────────────┘
                       │ sync
                       ▼
┌─────────────────────────────────────────────┐
│                TypeDB Cloud                 │
│   ┌─────────────────────────────────────┐   │
│   │        ants-colony database         │   │
│   │ - 35+ entities                      │   │
│   │ - 17+ relations                     │   │
│   │ - 150+ attributes                   │   │
│   │ - Pheromone trails                  │   │
│   │ - Pattern crystallizations          │   │
│   └─────────────────────────────────────┘   │
└─────────────────────────────────────────────┘
Hot layer (D1): Cloudflare’s edge database handles real-time coordination. Distinguished point deposits, region intentions, collision checks. Sub-50ms latency globally.
Cold layer (TypeDB Cloud): Persistent knowledge graph stores the colony’s memory. Pheromone trails, discovered patterns, agent lineage. Strongly typed, queryable, permanent.
Sync service: A periodic sync transfers hot data to cold storage. Critical events sync immediately; non-critical events batch every 5 minutes.
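Here is a minimal sketch of the batching side of that sync. The `fetch_unsynced` and `mark_synced` helpers are hypothetical stand-ins for the D1 side, and the TypeQL simply reuses the edge-insert pattern from the client-library example further down, not the production queries:

```python
import asyncio

from ants.knowledge import TypeDBClient  # the client shown later in this post

SYNC_INTERVAL_SECONDS = 300  # non-critical events batch every 5 minutes


async def sync_loop(fetch_unsynced, mark_synced):
    """Periodically copy hot D1 rows into the TypeDB knowledge graph.

    `fetch_unsynced` and `mark_synced` are hypothetical async callables over
    the D1 side; the query below is only illustrative.
    """
    async with TypeDBClient() as client:
        while True:
            batch = await fetch_unsynced()
            for row in batch:
                await client.insert(
                    """
                    insert $e (source: $s, target: $t) isa edge,
                        has id $id,
                        has pheromone_level $p;
                    """,
                    parameters={"s": row["source"], "t": row["target"],
                                "id": row["id"], "p": row["pheromone_level"]},
                )
            await mark_synced(batch)
            await asyncio.sleep(SYNC_INTERVAL_SECONDS)
```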
Why TypeDB?
We evaluated many options:
| Database | Verdict |
|---|---|
| PostgreSQL | No native graph traversal |
| Neo4j | Weak typing, schema flexibility issues |
| MongoDB | Document model doesn’t fit relations |
| DynamoDB | Key-value, no complex queries |
| TypeDB | Strong typing, native reasoning, graph + relational |
TypeDB’s killer features for us:
1. Strong Typing
Every entity, relation, and attribute has an explicit type. The schema catches errors at insert time, not query time.
# This fails at insert time: pheromone_level is a double, not a string
insert $e isa edge, has pheromone_level "high";
# Error: Cannot insert 'high' as double
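The guarantee comes from the schema itself. An illustrative fragment (2.x-style define syntax; the real ants-colony schema is much larger and not reproduced here) declares the value type that rejects the insert above:

```python
# Illustrative only: the attribute's value type is what rejects "high" above.
SCHEMA_FRAGMENT = """
define
  pheromone_level sub attribute, value double;
  edge sub relation,
    relates source,
    relates target,
    owns pheromone_level;
"""
```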
2. Polymorphic Relations
Edges can connect any concept to any concept. We don't need separate relation types for token-to-token, token-to-metric, or metric-to-risk edges.
# All valid
$e1 (source: $btc, target: $eth) isa edge; # token → token
$e2 (source: $btc, target: $volume) isa edge; # token → metric
$e3 (source: $volume, target: $risk) isa edge; # metric → risk
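The trick is that several concept types are allowed to play the same source and target roles. An illustrative sketch of the role declarations, with type names borrowed from the comments above (again, not the actual schema):

```python
# Illustrative only: polymorphism comes from multiple types playing the same
# edge roles, not from a separate relation type per pairing.
ROLE_PLAYERS = """
define
  token  sub entity, plays edge:source, plays edge:target;
  metric sub entity, plays edge:source, plays edge:target;
  risk   sub entity, plays edge:target;
"""
```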
3. Built-in Reasoning
TypeDB can infer relationships from rules. We define:
rule superhighway-inference:
    when {
        $e isa edge, has pheromone_level $p;
        $p > 20.0;
    } then {
        $e has is_superhighway true;
    };
Now every query automatically knows which edges are superhighways, without explicit marking.
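In practice that means a plain query picks up the inferred attribute. A sketch using the Python client described below (depending on driver version, inference may need to be enabled on the transaction):

```python
from ants.knowledge import TypeDBClient  # the client described below


async def find_superhighways():
    # The rule above infers is_superhighway; no agent ever writes it explicitly.
    async with TypeDBClient() as client:
        return await client.query(
            "match $e isa edge, has is_superhighway true; select $e;",
            read_only=True,
        )
```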
4. Temporal Queries
TypeDB 3.0 supports temporal extensions. We can query the state of pheromones at any point in history.
# Pheromone levels 24 hours ago
match
    $e isa edge, has pheromone_level $p @timestamp(2025-01-17T12:00:00);
The TypeDB 3.0 Upgrade
We recently migrated from TypeDB 2.x to 3.0. Key changes:
No Sessions
TypeDB 3.0 removes the session abstraction. Each transaction is independent.
# Old (2.x)
with driver.session("ants-colony", SessionType.DATA) as session:
    with session.transaction(TransactionType.READ) as tx:
        results = tx.query("...")

# New (3.0)
with driver.transaction("ants-colony", TransactionType.READ) as tx:
    results = tx.query("...").resolve()
New Aggregation Syntax
Aggregations use reduce instead of get aggregate:
# Old (2.x)
match $a isa scout; get $a; count;
# New (3.0)
match $a isa scout; reduce $count = count;
Improved Performance
TypeDB 3.0 brings significant performance improvements:
- 5x faster read transactions
- 3x faster schema updates
- 10x faster aggregations
Scaling Considerations
Write Throughput
TypeDB Cloud handles ~5,000 writes/second for our workload. This is more than enough for pheromone deposits (which batch at the gateway level).
For distinguished point deposits during peak hackathon activity, we expect:
- 10,000 workers
- 1 distinguished point per worker per minute
- ~170 writes/second
Well within limits.
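The back-of-the-envelope check, using the numbers above:

```python
workers = 10_000
deposits_per_worker_per_minute = 1
write_budget_per_second = 5_000  # observed TypeDB Cloud throughput for our workload

writes_per_second = workers * deposits_per_worker_per_minute / 60
print(f"{writes_per_second:.0f} writes/second")   # ~167
assert writes_per_second < write_budget_per_second  # comfortable headroom
```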
Read Throughput
Reads scale better than writes. TypeDB Cloud auto-scales read replicas based on load.
Agent sensing (querying pheromone levels) dominates read traffic:
- 10,000 workers
- 1 sense operation per second per worker
- ~10,000 reads/second
We use D1 for hot reads (current pheromone levels) and TypeDB for cold reads (historical patterns, graph traversals).
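A simplified sketch of that routing decision. The `d1_current_level` helper is hypothetical; only the cold path goes through the client described below:

```python
from ants.knowledge import TypeDBClient  # the client described below


async def sense(edge_id: str, d1_current_level, history: bool = False):
    """Route a read: D1 for the current level, TypeDB for history and traversal.

    `d1_current_level` is a hypothetical async callable over the gateway's
    D1 API, standing in for the hot-path lookup.
    """
    if not history:
        return await d1_current_level(edge_id)   # hot path: sub-50ms at the edge
    async with TypeDBClient() as client:         # cold path: the knowledge graph
        return await client.query(
            f'match $e isa edge, has id "{edge_id}", has pheromone_level $p; select $p;',
            read_only=True,
        )
```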
Storage
Current database size: ~500MB
Projected after hackathon: ~5GB
TypeDB Cloud handles up to 100GB on our tier. We’re comfortable.
The Client Library
We built a custom TypeDB client for Python:
from ants.knowledge import TypeDBClient

async with TypeDBClient() as client:
    # Read query
    scouts = await client.query(
        "match $a isa scout; select $a;",
        read_only=True
    )

    # Write with parameters
    await client.insert(
        """
        insert $e (source: $s, target: $t) isa edge,
            has id $id,
            has pheromone_level $p;
        """,
        parameters={
            "s": source_id,
            "t": target_id,
            "id": edge_id,
            "p": 0.0
        }
    )
The client handles:
- Connection pooling
- Retry with exponential backoff
- Transaction management
- Type conversion (TypeQL ↔ Python)
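The retry policy is roughly this shape (an illustrative sketch, not the actual implementation in ants.knowledge):

```python
import asyncio
import random


async def with_retries(op, max_attempts: int = 5, base_delay: float = 0.2):
    """Retry an async operation with exponential backoff plus jitter.

    Illustrative of the client's retry behaviour; the real code may differ
    in attempt counts, delays, and which exceptions it retries.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return await op()
        except Exception:
            if attempt == max_attempts:
                raise
            # 0.2s, 0.4s, 0.8s, ... with up to 2x random jitter
            await asyncio.sleep(base_delay * 2 ** (attempt - 1) * (1 + random.random()))
```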
Monitoring
We track:
- Query latency: p50, p95, p99
- Write throughput: Inserts/second
- Connection pool: Active, idle, waiting
- Schema version: Current deployed version
- Sync lag: D1 → TypeDB delay
Dashboards available at metrics.ants-at-work.com (internal).
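The latency percentiles are just order statistics over a rolling window of samples. A minimal sketch of the computation (the production dashboards use their own metrics stack):

```python
from statistics import quantiles


def latency_percentiles(samples_ms: list[float]) -> dict[str, float]:
    """p50/p95/p99 from a rolling window of query latencies (milliseconds)."""
    cuts = quantiles(samples_ms, n=100)  # 99 cut points
    return {"p50": cuts[49], "p95": cuts[94], "p99": cuts[98]}
```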
Future Plans
- Real-time subscriptions: The TypeDB 3.x roadmap includes live queries. When implemented, agents can subscribe to pheromone changes instead of polling.
- Multi-region: Currently single-region (US). Planning EU and Asia replicas for latency reduction.
- Sharding: If we exceed 100GB, we'll shard by mission. Each mission gets its own database, with a federation layer for cross-mission queries.
- GPU acceleration: TypeDB supports GPU-accelerated graph operations. Evaluating for pattern detection workloads.
Try It
During the hackathon, participants get read access to the live database:
pip install typedb-driver
from typedb.driver import TypeDB, TypeDBCredential, TransactionType

with TypeDB.cloud_driver(
    addresses=["cr0mc4-0.cluster.typedb.com:80"],
    credential=TypeDBCredential("hackathon", "***", tls_enabled=True)
) as driver:
    with driver.transaction("ants-colony", TransactionType.READ) as tx:
        # Query the colony's memory
        results = tx.query("match $p isa pattern; select $p;").resolve()
Credentials provided upon hackathon registration.
TypeDB is a product of Vaticle. Ants at Work is not affiliated with Vaticle but uses their excellent database.