How the GoBank simulation scales across account counts and what that means for different deployment environments.
| Item | Value |
|---|---|
| CPU | Intel Core Ultra 7 165H (22 threads) |
| OS | Linux (amd64) |
| Go | 1.25.3 |
| Backend | SQLite :memory: (go-sqlite3, pure Go) |
| Ledger | go-luca double-entry accounting |
| Date | 2026-03-17 |
- ~650 accounts/sec (creation)
- ~47K EOD accounts/sec (at 1K accounts)
- ~17M account-days/sec (at 1K accounts)
Measured at 1K accounts. Throughput degrades at larger scales — see tables below.
Each account creation involves: generate a customer, store PII, append to the in-memory slice, register the account in the go-luca ledger (which creates account paths), and fund it with an initial deposit. This is the most expensive per-account operation, owing to multiple SQL round-trips through go-luca.
| Accounts | Wall Time | accounts/sec | ms/account | Cumulative Allocs (bytes / count) |
|---|---|---|---|---|
| 1,000 | 1.5s | 675 | 1.48 | 803 MB / 1.7M |
| 10,000 | 16.8s | 595 | 1.68 | 8.1 GB / 17.5M |
Scaling: Near-linear. Throughput drops ~12% from 1K to 10K due to growing ledger size. Each account generates ~0.8 MB of cumulative allocations (mostly SQL operations in go-luca). The per-account cost is dominated by 3–4 SQL round-trips for ledger registration and initial funding.
Each simulated day: accrue interest on all accounts (daily_rate = annual_rate / 365), update balances via go-luca, emit transaction log entries. The txLog is cleared each day to bound memory.
| Accounts | Wall Time (365d) | account-days/sec | EOD accounts/sec | ms per EOD |
|---|---|---|---|---|
| 1,000 | 22.6ms | 17.3M | 47,384 | 0.062 |
| 10,000 | 368ms | 10.4M | 28,591 | 1.01 |
Scaling: Sublinear — throughput drops ~40% from 1K to 10K (10.4M vs 17.3M account-days/sec). The per-day cost grows slightly faster than linearly because go-luca SQL queries (GetAccountByID, BalanceAt, RecordMovement) become slower as the ledger table grows. Memory per iteration is constant (161 KB / 36 allocs) since txLog is cleared daily.
Combined benchmark: create all accounts then simulate 365 days.
| Accounts | Total | Create | Simulate | Create % |
|---|---|---|---|---|
| 1,000 | 1.9s | 1.5s | 22ms | 99% |
| 10,000 | 17.5s | 17.0s | 417ms | 97% |
Key insight: Account creation dominates total runtime (~97–99%). Once accounts exist, simulating an entire year is fast. This means the simulation is well-suited for long-running scenarios where you create once and simulate over extended periods.
Projections based on observed scaling behaviour. Actual numbers may differ — the sublinear degradation in SQL performance means these are lower bounds for time.
| Accounts | Create (est.) | Sim Year (est.) | Total (est.) | Cumul. Allocs (est.) |
|---|---|---|---|---|
| 1,000 | 1.5s | 23ms | ~2s | ~1 GB |
| 10,000 | 17s | 370ms | ~17s | ~8 GB |
| 100,000 | ~3 min | ~6s | ~3 min | ~80 GB |
| 1,000,000 | ~30 min | ~2 min | ~30 min | ~800 GB |
Note: “Cumulative Allocs” is total bytes allocated and freed over the run (from -benchmem), not peak RSS. Go’s GC reclaims most of it. Observed peak RSS for 1K accounts is ~145 MB.
Each environment has a natural scale determined by available RAM (for the in-memory SQLite backend) and acceptable wall-clock time for initial creation.
| Environment | RAM | Natural Scale | Create Time | Year Sim | Use Case |
|---|---|---|---|---|---|
| Raspberry Pi / gokrazy | 2–4 GB | 1K–5K | < 10s | < 1s | Demos, IoT dashboard, teaching |
| Small VPS | 4–8 GB | 5K–20K | < 1 min | < 5s | Dev/test, CI benchmarks |
| Laptop / Workstation | 16–64 GB | 10K–100K | < 3 min | < 10s | Development, scenario testing |
| Cloud instance | 64–256 GB | 100K–500K | < 15 min | < 1 min | Stress testing, regulatory scenarios |
| High-memory server | 256 GB+ | 1M+ | < 30 min | < 5 min | Full-scale retail bank simulation |
WASM (browser): Limited to browser memory (~2–4 GB). Safe ceiling is ~1K–5K accounts. The demo defaults to 1K for this reason.
| Constraint | Limit | Notes |
|---|---|---|
| Benchmark timeout | 10 min (quick), 1h / 24h (full) | Set via -timeout flag in Taskfile. Quick bench caps at 600s to keep CI fast. |
| Memory (SQLite :memory:) | Available RAM | The in-memory database grows with account count. No disk I/O, but no persistence either. Peak RSS is much less than cumulative allocs due to GC. |
| Disk (temp files) | Minimal | SQLite :memory: writes nothing to disk. The Go test binary and any core dumps are the only disk use. No 100 GB temp file risk with current backend. |
| CPU | Single-threaded | Current simulation is single-goroutine under a mutex. Multi-core machines get no scaling benefit. This is the most obvious optimisation path after batch SQL. |
The dominant cost is go-luca’s per-account SQL round-trips, each requiring:
- `GetAccountByID` — fetch account details
- `BalanceAt` — query closing balance
- `RecordMovement` — insert interest movement

That’s 3 SQL operations per account per day for interest accrual alone. Account creation adds further round-trips for ledger registration and initial funding.
| Optimisation | Expected Impact | Complexity |
|---|---|---|
| Batch SQL in go-luca (bulk balance query + bulk insert) | 10–100x for EOD processing | Medium — requires go-luca API changes |
| Parallel daily processing (shard accounts across goroutines) | Near-linear with core count | Medium — requires ledger to support concurrent writes |
| File-backed SQLite (avoid RAM ceiling) | Unlocks 1M+ on modest hardware, slower per-op | Low — change DSN from :memory: to file path |
| PostgreSQL / CockroachDB backend | Production-grade scaling, persistence | High — requires go-luca backend abstraction |
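Of these, the file-backed switch is the smallest change. Assuming a standard `database/sql` setup with the SQLite driver registered under the name `"sqlite3"` (the driver name here is an assumption; check how the backend actually opens its connection), the change is one DSN:

```go
// In-memory (current): fastest, bounded by RAM, no persistence.
memDB, err := sql.Open("sqlite3", ":memory:")

// File-backed: slower per operation, but persistent and not RAM-bound.
fileDB, err := sql.Open("sqlite3", "gobank.db")
```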
`-benchmem` figures are total bytes allocated, not retained.

All benchmarks are defined in `cmd/demo/benchmark_test.go` and invoked via Taskfile:
| Command | Scope | Timeout | Typical Duration |
|---|---|---|---|
| `task bench` | Full year, 1K–10K | 10 min | ~20s |
| `task bench:create` | Account creation, all sizes | 1h | ~20 min |
| `task bench:sim` | Year simulation, all sizes | 24h | varies |
| `task bench:full` | Full year, all sizes | 24h | varies |
| `task bench:all` | Everything | 24h | varies |
Custom metrics reported: `accounts/sec`, `account-days/sec`, `eod-accounts/sec`, `create-ms`, `sim-ms`.
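These are emitted with `testing.B.ReportMetric`. A minimal sketch of the pattern (the real benchmarks live in `cmd/demo/benchmark_test.go`; `simulateYear` is a hypothetical stand-in):

```go
package demo

import (
	"testing"
	"time"
)

// simulateYear is a stand-in for the real 365-day loop.
func simulateYear(accounts int) { time.Sleep(time.Microsecond) }

func BenchmarkSimYear1K(b *testing.B) {
	const accounts = 1000
	for i := 0; i < b.N; i++ {
		simulateYear(accounts)
	}
	// Custom metrics appear next to ns/op in `go test -bench` output.
	accountDays := float64(accounts) * 365 * float64(b.N)
	b.ReportMetric(accountDays/b.Elapsed().Seconds(), "account-days/sec")
}
```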