Ephemeral results, investigations, and design decisions captured during development.
`node cmd/demo/wasm_test.js docs/demo` ran dramatically slower than the Go benchmark (benchmark_test.go), with per-day cost growing over time instead of staying constant.
The Go benchmark sets MaxCustomers = n (e.g. 1 for the baseline), preventing advanceDay() from generating new customers. The WASM test called goReset() which restored DefaultSettings().MaxCustomers = 1,000,000, so new customers were generated on ~15% of days.
With accumulating customers, each day became more expensive: day d processes ~0.15×d customers, giving O(D²) total work over D days.
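The quadratic growth is easy to check with a few lines of arithmetic. This standalone sketch (not project code; the 0.15 factor is the generation rate measured above) sums the relative per-day cost over a year:

```go
package main

import "fmt"

func main() {
	// Model from the measurement above: with the default MaxCustomers
	// restored, day d processes roughly 0.15*d accumulated customers,
	// so total work over D days grows as ~0.075*D^2.
	const D = 365
	firstHalf, secondHalf := 0.0, 0.0
	for d := 1; d <= D; d++ {
		work := 0.15 * float64(d) // relative cost of day d
		if d <= D/2 {
			firstHalf += work
		} else {
			secondHalf += work
		}
	}
	// Quadratic growth makes the second half ~3x the first;
	// a linear simulation would give a ratio near 1.
	fmt.Printf("ratio: %.1f\n", secondHalf/firstHalf)
}
```

This ratio-of-halves signal is what the linearity assertion below keys on: ~3× for quadratic work, ~1× for linear, with 2.0× as the failure threshold.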
Added goUpdateSettings(nCustomers) after goReset() in both wasm_test.js and wasm_bench.js to cap customer count. The live WASM demo retains MaxCustomers = 1,000,000 for normal browser use.
Also added a linearity assertion to testFullYear: it times the first and second halves of the 365 simulated days and fails if the second half takes more than 2.0× as long as the first.
New benchmarks added to measure HTTP API throughput under concurrent load. Scenario: 1000 customers, 10 days simulated, then sustained load for 3 seconds at each concurrency level.
| Endpoint | Latency | Throughput |
|---|---|---|
| GET /api/customer/{id}/transactions | 0.12ms | 8,262 req/s |
| GET /api/customer/{id}/accounts | 0.16ms | 6,152 req/s |
| GET /accounting/pnl | 0.20ms | 5,037 req/s |
| POST /payments/send | 0.70ms | 1,425 req/s |
| GET /customers (HTML) | 1.09ms | 913 req/s |
| GET /dashboard (HTML) | 1.45ms | 690 req/s |
| GET /api/customers | 9.20ms | 109 req/s |
| POST /advance | 1,235ms | 0.8 req/s |
Note: /api/customers is slow at 9ms because it JSON-encodes all 1000 customer records. /advance dominates at 1.2s due to per-account ledger SQL (3 round-trips × ~2000 accounts).
The original advanceDay() held ds.mu for the entire duration (~1.2s with 1000 customers), blocking all reads. Refactored to release the lock between each customer’s interest accrual, giving reads ~1.2ms windows to run.
| Mode | Time per advance | Overhead |
|---|---|---|
| Single lock hold | 783ms | — |
| Yielding (lock per customer) | 804ms | +2.7% |
1000 mutex lock/unlock pairs add ~20µs total — negligible against 800ms of work. The yielding version became the only implementation.
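The yielding pattern can be sketched as follows. This is an illustrative standalone version, not the demo's actual code: the Customer/Account types, field names, and the interest math are placeholders; only the lock-per-customer structure and the lock-free local accumulator reflect the refactor described above.

```go
package main

import (
	"fmt"
	"sync"
)

// Illustrative stand-ins for the demo's state; real names in
// cmd/demo may differ.
type Account struct{ Balance float64 }
type Customer struct{ Accounts []Account }

type dayState struct {
	mu        sync.Mutex
	customers []*Customer
}

// advanceDay locks ds.mu once per customer instead of holding it for
// the whole pass, so concurrent readers get a window between customers.
func (ds *dayState) advanceDay(dailyRate float64) (totalDeposits float64) {
	// Local accumulator spans the loop without needing the lock.
	for _, c := range ds.customers {
		ds.mu.Lock()
		for i := range c.Accounts {
			c.Accounts[i].Balance *= 1 + dailyRate
			totalDeposits += c.Accounts[i].Balance
		}
		ds.mu.Unlock() // readers can interleave here
	}
	return totalDeposits
}

func main() {
	ds := &dayState{customers: []*Customer{
		{Accounts: []Account{{Balance: 100}}},
		{Accounts: []Account{{Balance: 200}}},
	}}
	fmt.Printf("%.2f\n", ds.advanceDay(0.01)) // 101 + 202 = 303.00
}
```

Note the accumulator is a plain local variable: only the writer goroutine touches it, so it needs no synchronisation even though the lock is dropped between customers.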
| Workload | Metric | Before | After | Change |
|---|---|---|---|---|
| Read-only c1 | req/s | 447 | 652 | +46% |
| Read-only c8 | req/s | 1,053 | 1,362 | +29% |
| Read-only c32 | req/s | 955 | 1,658 | +74% |
| Mixed c1 | avg latency | 126ms | 94ms | −25% |
| Mixed c8 | avg latency | 1,424ms | 768ms | −46% |
| Mixed c1–c64 | req/s | 5–9 | ~10.8 | stable |
Read throughput improved significantly. Mixed throughput stabilised at ~10.8 req/s regardless of concurrency (vs erratic 5–9 before). The throughput ceiling remains set by advance at ~1.2s per call, but reads now interleave instead of queuing.
Two phases:

Phase 1 locks ds.mu per customer, processes all accounts for that customer, and unlocks. Local accumulators (totalDeposits, totalLoans) span the loop without needing the lock.

Trade-off: reads during phase 1 may see partially-advanced state (some customers have day N+1 interest, others are still at day N). For a simulation demo this is acceptable: renders show approximate values and the inconsistency lasts <1s.
Safety: Render functions only snapshot fields under a brief lock then build HTML outside it. They do not access ds.sim (the go-luca ledger), so there is no contention with the ledger writes in phase 1.
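The snapshot-then-render pattern described above looks roughly like this. The type and field names here are illustrative, not the demo's real ones; the point is the shape: copy under a brief lock, format outside it.

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// Sketch of the snapshot-then-render pattern; field names are
// placeholders for whatever the page actually needs.
type server struct {
	mu        sync.Mutex
	day       int
	customers []string
}

func (s *server) renderDashboard() string {
	// Copy just the fields the page needs under a brief lock...
	s.mu.Lock()
	day := s.day
	names := append([]string(nil), s.customers...)
	s.mu.Unlock()

	// ...then build HTML outside the lock, so phase-1 ledger writes
	// are not blocked for the duration of string formatting.
	var b strings.Builder
	fmt.Fprintf(&b, "<h1>Day %d</h1><ul>", day)
	for _, n := range names {
		fmt.Fprintf(&b, "<li>%s</li>", n)
	}
	b.WriteString("</ul>")
	return b.String()
}

func main() {
	s := &server{day: 10, customers: []string{"alice", "bob"}}
	fmt.Println(s.renderDashboard())
}
```

Copying the slice (rather than holding a reference) matters: once the lock is released, the writer may mutate the underlying state, so the render must work only from its private snapshot.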
Three new benchmarks in cmd/demo/benchmark_test.go:
| Benchmark | What it measures | How to run |
|---|---|---|
| BenchmarkAPIEndpoint | Per-endpoint serial latency (1000 customers, 10 days) | go test -bench=APIEndpoint -benchtime=3s |
| BenchmarkAPILoad | Concurrent throughput at 1–64 goroutines, read-only and mixed workloads (3s per level) | go test -bench=APILoad -benchtime=1x |