Development Notes

Ephemeral results, investigations, and design decisions captured during development.

2026-03-19 — WASM test quadratic scaling & yielding advanceDay

Problem: WASM test runs in quadratic time

node cmd/demo/wasm_test.js docs/demo ran dramatically slower than the Go benchmark (benchmark_test.go), with per-day cost growing over time instead of staying constant.

Root cause

The Go benchmark sets MaxCustomers = n (e.g. 1 for the baseline), preventing advanceDay() from generating new customers. The WASM test called goReset() which restored DefaultSettings().MaxCustomers = 1,000,000, so new customers were generated on ~15% of days.

With accumulating customers, each day became more expensive: day d processes ~0.15×d customers, giving O(D²) total work over D days.
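The scaling can be sanity-checked with a toy cost model: if day d processes ~rate×d customers, total work over D days is rate×D(D+1)/2, so doubling D roughly quadruples the work. A minimal sketch (the 0.15 rate is the estimate above; the function name is hypothetical):

```go
package main

import "fmt"

// totalWork sums the per-day customer count when ~rate new
// customers arrive per day, so day d processes rate*d customers.
func totalWork(days int, rate float64) float64 {
	total := 0.0
	for d := 1; d <= days; d++ {
		total += rate * float64(d)
	}
	return total
}

func main() {
	// Doubling the day count roughly quadruples the total work:
	// the O(D^2) signature.
	fmt.Println(totalWork(365, 0.15) / totalWork(182, 0.15))
}
```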

Fix

Added goUpdateSettings(nCustomers) after goReset() in both wasm_test.js and wasm_bench.js to cap customer count. The live WASM demo retains MaxCustomers = 1,000,000 for normal browser use.

Also added a linearity assertion to testFullYear: it times the first and second halves of the 365-day run and fails if the second-half/first-half ratio exceeds 2.0×.

API load benchmarks

New benchmarks added to measure HTTP API throughput under concurrent load. Scenario: 1000 customers, 10 days simulated, then sustained load for 3 seconds at each concurrency level.

Per-endpoint latency (serial, 1000 customers)

| Endpoint | Latency | Throughput |
|---|---|---|
| GET /api/customer/{id}/transactions | 0.12ms | 8,262 req/s |
| GET /api/customer/{id}/accounts | 0.16ms | 6,152 req/s |
| GET /accounting/pnl | 0.20ms | 5,037 req/s |
| POST /payments/send | 0.70ms | 1,425 req/s |
| GET /customers (HTML) | 1.09ms | 913 req/s |
| GET /dashboard (HTML) | 1.45ms | 690 req/s |
| GET /api/customers | 9.20ms | 109 req/s |
| POST /advance | 1,235ms | 0.8 req/s |

Note: /api/customers is slow at 9ms because it JSON-encodes all 1000 customer records. /advance dominates at 1.2s due to per-account ledger SQL (3 round-trips × ~2000 accounts).

Yielding advanceDay

The original advanceDay() held ds.mu for the entire duration (~1.2s with 1000 customers), blocking all reads. Refactored to release the lock between each customer’s interest accrual, giving reads ~1.2ms windows to run.

Overhead

| Mode | Time per advance | Overhead |
|---|---|---|
| Single lock hold | 783ms | (baseline) |
| Yielding (lock per customer) | 804ms | +2.7% |

1000 mutex lock/unlock pairs add ~20µs total — negligible against 800ms of work. The yielding version became the only implementation.

Impact on read throughput

| Workload | Metric | Before | After | Change |
|---|---|---|---|---|
| Read-only c1 | req/s | 447 | 652 | +46% |
| Read-only c8 | req/s | 1,053 | 1,362 | +29% |
| Read-only c32 | req/s | 955 | 1,658 | +74% |
| Mixed c1 | avg latency | 126ms | 94ms | −25% |
| Mixed c8 | avg latency | 1,424ms | 768ms | −46% |
| Mixed c1–c64 | req/s | 5–9 | ~10.8 | stable |

Read throughput improved significantly. Mixed throughput stabilised at ~10.8 req/s regardless of concurrency (vs erratic 5–9 before). The throughput ceiling remains set by advance at ~1.2s per call, but reads now interleave instead of queuing.

Design: advanceDay lock strategy

Two phases:

  1. Interest accrual — iterates customers, locks ds.mu per customer, processes all accounts for that customer, unlocks. Local accumulators (totalDeposits, totalLoans) span the loop without needing the lock.
  2. Finalize — single lock hold for BoE interest, day increment, customer generation, history recording.
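The two phases can be sketched as follows; the types and field names are simplified stand-ins for the real state, and the interest rate is illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

type customer struct{ deposits, loans float64 }

type dayState struct {
	mu        sync.Mutex
	customers []customer
	day       int
}

func (ds *dayState) advanceDay() {
	// Phase 1: interest accrual, lock held per customer only.
	// Local accumulators span the loop without needing the lock.
	var totalDeposits, totalLoans float64
	for i := range ds.customers {
		ds.mu.Lock()
		c := &ds.customers[i]
		c.deposits *= 1.0001 // accrue one day's interest (illustrative rate)
		totalDeposits += c.deposits
		totalLoans += c.loans
		ds.mu.Unlock() // reads may interleave here
	}

	// Phase 2: finalize under a single short lock hold.
	ds.mu.Lock()
	ds.day++
	// BoE interest, customer generation, and history recording
	// would go here, using the accumulated totals.
	ds.mu.Unlock()
	_, _ = totalDeposits, totalLoans
}

func main() {
	ds := &dayState{customers: make([]customer, 3)}
	ds.advanceDay()
	fmt.Println(ds.day)
}
```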

Trade-off: Reads during phase 1 may see partially-advanced state (some customers have day N+1 interest, others still at day N). For a simulation demo this is acceptable — renders show approximate values and the inconsistency lasts <1s.

Safety: Render functions only snapshot fields under a brief lock then build HTML outside it. They do not access ds.sim (the go-luca ledger), so there is no contention with the ledger writes in phase 1.
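The snapshot pattern looks roughly like this (names hypothetical; the real render functions build full HTML pages):

```go
package main

import (
	"fmt"
	"sync"
)

type dayState struct {
	mu    sync.Mutex
	day   int
	nCust int
}

// renderDashboard copies the fields it needs under a brief lock,
// then formats output with the lock released, so rendering never
// contends with the per-customer writes in phase 1.
func (ds *dayState) renderDashboard() string {
	ds.mu.Lock()
	day, n := ds.day, ds.nCust // snapshot
	ds.mu.Unlock()

	// Build the output outside the lock.
	return fmt.Sprintf("day %d, %d customers", day, n)
}

func main() {
	ds := &dayState{day: 7, nCust: 1000}
	fmt.Println(ds.renderDashboard())
}
```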

Benchmark suite

Three new benchmarks in cmd/demo/benchmark_test.go:

| Benchmark | What it measures | How to run |
|---|---|---|
| BenchmarkAPIEndpoint | Per-endpoint serial latency (1000 customers, 10 days) | go test -bench=APIEndpoint -benchtime=3s |
| BenchmarkAPILoad | Concurrent throughput at 1–64 goroutines, read-only and mixed workloads (3s per level) | go test -bench=APILoad -benchtime=1x |