Platform Positioning and Incremental Build
Two days of work, two completely different kinds of thinking. One thread was technical infrastructure — finishing what we started. The other was strategic positioning: where does CaseHub fit, and how do we explain it?
The CI chain is mostly green. The GH_PAT fix from the previous session worked: once ledger and connectors published, the cascade fired correctly. What remained was a series of individual breakages. qhorus imported CapabilityTag before ledger’s artifact contained it, a timing race that resolved itself. BusinessHoursIntegrationTest fails every Friday evening because its assertion, isBefore(now + 1 day), is wrong after business hours on a Friday: the next two business hours fall on Monday. And claudony fails to compile because engine PR #224 added a UUID caseId parameter to WorkerContextProvider.buildContext() without updating claudony’s implementation. Work and claudony need their respective Claudes for those two fixes. Nothing we can touch from here.
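The Friday-evening failure is easy to see with a little weekend arithmetic. This is a minimal sketch of the date logic, not the actual Java test; the function name and day/hour encoding are mine. It computes how many calendar days separate "now" from the Monday on which the next business hours fall, which is what blows past the test's one-day bound.

```shell
# dow: day of week, 1=Mon .. 7=Sun (as in `date +%u`); hour: 0-23.
# Returns the calendar days until the next business morning when business
# hours are already over for the week, 0 when they resume within a day.
days_until_next_business_morning() {
  local dow="$1" hour="$2"
  if [ "$dow" -ge 6 ] || { [ "$dow" -eq 5 ] && [ "$hour" -ge 17 ]; }; then
    # After close on Friday, or on the weekend: next business hours are Monday.
    echo $(( 8 - dow ))   # Fri -> 3, Sat -> 2, Sun -> 1
  else
    echo 0                # business hours resume within the next calendar day
  fi
}
```

From Friday 18:00 the answer is 3 calendar days, so any deadline built from "next 2 business hours" lands well outside now + 1 day and isBefore() fails exactly as observed.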
We built the incremental full-stack build script before writing a line of CI YAML. The decision function — given a module’s current SHA, previous SHA, and dep SHAs, output BUILD, TEST, or SKIP — is pure bash with no side effects. I wanted it tested independently first. 49 bats tests, covering every scenario: nothing changed, only a dep changed, own source and dep both changed, first run with no prior state. All 49 pass. The two-key cache approach (full key for SKIP, source key for TEST mode) gives the same three-state logic as build-all.sh but across GitHub Actions runs rather than locally. Failed modules keep their previous SHA in state so the next run retries them — getting that detail right in the workflow required separating actions/cache/restore and actions/cache/save into explicit steps rather than using the combined action, which auto-saves without outcome awareness.
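The core of the decision function can be sketched in a few lines. This is an illustrative reconstruction from the description above, not the script itself; the function name and argument order are assumptions. It takes the module's current source SHA, the previously built source SHA, and the current and previous dependency SHAs, and emits one of the three states.

```shell
# decide <src_sha> <prev_src_sha> <deps_sha> <prev_deps_sha>
# Prints BUILD, TEST, or SKIP. Pure: no side effects, so it is
# trivially testable under bats in isolation.
decide() {
  local src="$1" prev_src="$2" deps="$3" prev_deps="$4"
  if [ -z "$prev_src" ]; then
    echo BUILD            # first run: no prior state recorded
  elif [ "$src" != "$prev_src" ]; then
    echo BUILD            # own sources changed: full rebuild
  elif [ "$deps" != "$prev_deps" ]; then
    echo TEST             # only a dependency changed: retest against it
  else
    echo SKIP             # nothing changed since the last successful run
  fi
}
```

The two-key cache maps onto this directly: a hit on the full key (source + deps) yields SKIP, a hit on the source-only key yields TEST, and a miss on both yields BUILD. Failed modules never write their new SHA to state, so the next invocation of decide sees the stale previous SHA and retries.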
The strategic work was more interesting. I wanted two use cases: one for making a market entry argument, one for a Java developer tutorial. We researched 10 candidates and scored them across two separate tables — Market Fit and Community Fit — rather than a combined score. The split revealed something a single number hides: drug discovery and clinical trials score identically on market fit (24/25 each), but drug discovery collapses on community fit (15/25). A combined score shows clinical trials wins; the two-table view shows why — drug discovery is an equally strong market argument but completely unteachable without a biochemistry background. Security incident response has an excellent comparison target (MyAntFarm, open source, peer reviewed) but scores only 2/5 on market entry gap, because SOAR is a crowded market dominated by large incumbents. The gap score pulls it down regardless of how good the comparison is. AML wins both tables simultaneously — 22/25 each — because Java IS banking, banking IS compliance infrastructure, and enterprise Java developers have personally built or integrated transaction monitoring systems.
Researching the Gastown gap analysis led to Doltgres — the PostgreSQL-compatible version of Dolt, built by the same DoltHub team, now in beta. Time-travel queries, branching, rollback, prior-reasoning access — all the Dolt advantages the analysis had marked as “No equivalent” — are addressable by offering Doltgres as a configurable backend. The Merkle MMR and Doltgres are complementary rather than alternatives: the signed checkpoints are what make history tampering detectable, so a Doltgres rollback that removes entries already covered by a checkpoint breaks the proof. The catch is GDPR Art. 17. Deleting a row in Doltgres removes it from HEAD but leaves it in every prior commit. GDPR erasure requires explicit history rewriting — the SQL equivalent of git filter-branch — which then invalidates any Merkle checkpoints that covered those commits. For regulated deployments, PostgreSQL remains the right choice. For development and assisteddev deployments where erasure isn’t a constraint, Doltgres closes the gap almost entirely. I updated the gap analysis to reflect this.
The tutorial strategy session surfaced a mistake I nearly made. Claude drafted the strategy with a section mapping the 5 LangChain4j agentic patterns onto CaseHub — sequential, loop, parallel, parallel mapper, conditional — as if they were CaseHub patterns. I stopped it. Those patterns are not CaseHub patterns; they’re how a single agent reasons internally. At the CaseHub level, parallel is automatic (all matching bindings fire simultaneously on every state change — no declaration needed), conditional is the default (every binding has a condition), and loop is handled by LoopControl with full CaseContext awareness. The LangChain4j patterns describe an agent’s internal reasoning during one case step — the innermost layer. CaseHub is the outermost. The three-sentence summary that replaced the table: “LangChain4j makes each agent smart. Quarkus Flow makes each step durable. CaseHub makes the investigation accountable.” That distinction matters because it changes how we position the platform to developers already familiar with LangChain4j.