What Happens When You Change a Parameter in CAD? Four Graphs, One Atomic Commit
When a parameter changes, geometry, identity, and evidence all shift at once. Most CAD only tracks one. NeuroCAD tracks four, atomically — like Git, but for engineering truth.
What happens when a CAD parameter changes? In SolidWorks, you wait. In Git, you commit. NeuroCAD does both, atomically, across four graphs.
The minute that breaks every CAD review
You open an assembly. You bump a hole diameter from 6 mm to 6.5 mm. You watch a progress bar. Sometimes the part rebuilds. Sometimes a downstream feature throws “Edge reference lost”. Sometimes a drawing balloon points at a face that no longer exists. Sometimes the BOM line silently rolls forward and nobody notices until manufacturing.
Three things happened at once and the CAD system only told you about one of them:
- Geometry changed. A face moved, an edge died, a new face was born.
- Identity changed. The thing your downstream feature called “Face #14” might be a totally different region of the body now.
- Evidence changed. An FEA result, a drawing dimension, and a tolerance check were all computed against the old geometry. Are they still valid?
Mainstream CAD treats those as one operation called “rebuild”. When the rebuild succeeds, you assume everything else is fine. When it fails, you debug by looking at the feature tree until you find a red icon.
That is a 1990s answer to what is now a four-dimensional problem.
The NeuroCAD answer: four graphs, one commit
NeuroCAD is an SDF-native CAD platform. Under the hood, every project is described not as a list of features but as a graph of operations. When you change a parameter, NeuroCAD does not rebuild a part — it commits a change to four graphs at once:
```mermaid
flowchart LR
P[Parameter change<br/>hole_diameter: 6 → 6.5] --> C{Atomic commit}
C --> A[A · HistoryDag<br/>Merkle DAG of all<br/>operations ever applied]
C --> B[B · EntityGenealogyGraph<br/>Persistent identity<br/>across topology change]
C --> D[C · ProjectionIndex<br/>Live link to kernel,<br/>BVH, UI, simulation]
C --> E[D · EvidenceGraph<br/>Test ↔ claim links,<br/>provenance, freshness]
A -.reflog.-> R[Audit receipt]
B -.identity path.-> R
D -.projection diagnostic.-> R
E -.basis report.-> R
```
The commit is atomic: either all four graphs accept the change or all four roll back. There is no state where geometry is updated but the drawing is lying about it. There is no state where a downstream feature thinks Face #14 still exists. There is no state where a passing FEA result gets quietly attached to a different body.
Let’s walk through the four graphs.
Surface A — HistoryDag: Git for CAD, but for real this time
The first surface is a Merkle DAG of every commit ever made to the project. Think of it as Git’s object store, but the objects are CAD-typed instead of file-typed.
A commit is a content-addressed object. Its hash is a BLAKE3 of:
- the commit kind tag,
- the commit’s schema version,
- the commit’s payload (parents, patch sets, timestamp, author).
Two commits with the same payload have the same hash. Two commits with different payloads cannot collide. This is the same trick Git uses, the same trick IPFS uses, the same trick Reproducible Builds uses to prove that two binaries are byte-identical. NeuroCAD borrows it deliberately.
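The hashing discipline above can be sketched in a few lines. This is a minimal, hypothetical illustration: the real system uses BLAKE3 over a serialized object, while this sketch uses the standard library's `DefaultHasher` purely to stay dependency-free (it is not collision-resistant, and the `Commit` fields here are guesses at the payload shape, not NeuroCAD's actual types).

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical commit object: identity is derived from the kind tag,
// schema version, and payload -- nothing else.
#[derive(Hash)]
struct Commit {
    kind: &'static str,  // commit kind tag
    schema_version: u32, // schema version
    parents: Vec<u64>,   // parent commit hashes
    patch: String,       // patch set, serialized
    timestamp: u64,
    author: String,
}

// Content address: equal payloads hash equal; any payload difference
// changes the hash. (BLAKE3 in the real system; DefaultHasher here.)
fn commit_hash(c: &Commit) -> u64 {
    let mut h = DefaultHasher::new();
    c.hash(&mut h);
    h.finish()
}

fn main() {
    let a = Commit { kind: "ParamChange", schema_version: 1, parents: vec![],
        patch: "hole_diameter: 6 -> 6.5".into(), timestamp: 1000, author: "alice".into() };
    let b = Commit { kind: "ParamChange", schema_version: 1, parents: vec![],
        patch: "hole_diameter: 6 -> 6.5".into(), timestamp: 1000, author: "alice".into() };
    // Same payload, same address -- the property the whole DAG rests on.
    assert_eq!(commit_hash(&a), commit_hash(&b));
}
```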
What makes the HistoryDag different from past “Git for CAD” attempts is the discipline around refs. CAD systems traditionally store “the current state” in a mutable file. That file is a single point of failure: it can be corrupted, lost, half-written, or version-skewed.
NeuroCAD instead has named refs:
- `refs/heads/main`
- `refs/releases/rev11.5`
- `refs/scenarios/{scenario_id}/latest`
- `refs/evidence/{release_scope}/latest`
Every ref movement is a compare-and-swap. Every ref movement writes a reflog: who moved it, from what object, to what object, with what reason, at what time, with what receipt. If a release ref moves silently, the operator can prove that it did, when, and by whom. If a worker process dies mid-write, the ref does not move at all — there is no torn state.
This is borrowed wholesale from Git’s update-ref discipline. Most CAD-history systems do not bother. NeuroCAD does, because the rest of the truth chain depends on it.
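The ref discipline can be sketched as a compare-and-swap store with a mandatory reflog. Everything here is illustrative (the struct names and fields are not NeuroCAD's actual API); the point is the shape of the contract: a ref either moves with a receipt or does not move at all.

```rust
use std::collections::HashMap;

// One reflog entry per successful ref movement: who, from, to, why.
#[derive(Debug)]
struct ReflogEntry {
    reference: String,
    from: Option<u64>, // old object hash (None = ref newly created)
    to: u64,           // new object hash
    who: String,
    reason: String,
}

#[derive(Default)]
struct RefStore {
    refs: HashMap<String, u64>,
    reflog: Vec<ReflogEntry>,
}

impl RefStore {
    /// Move `name` from `expected` to `new`, atomically. If someone else
    /// moved the ref first, the update is rejected and the ref is left
    /// untouched -- there is no torn state.
    fn compare_and_swap(&mut self, name: &str, expected: Option<u64>, new: u64,
                        who: &str, reason: &str) -> Result<(), String> {
        let current = self.refs.get(name).copied();
        if current != expected {
            return Err(format!("ref {name} moved: expected {expected:?}, found {current:?}"));
        }
        self.refs.insert(name.to_string(), new);
        self.reflog.push(ReflogEntry {
            reference: name.into(), from: expected, to: new,
            who: who.into(), reason: reason.into(),
        });
        Ok(())
    }
}

fn main() {
    let mut store = RefStore::default();
    store.compare_and_swap("refs/heads/main", None, 0xa1, "alice", "initial commit").unwrap();
    // A stale writer that still believes the ref is unborn is rejected.
    assert!(store.compare_and_swap("refs/heads/main", None, 0xb2, "bob", "race").is_err());
    assert_eq!(store.reflog.len(), 1);
}
```

This is the same contract as `git update-ref` with an old-value check: the check and the write are one step, and the reflog is written as part of that step, never separately.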
Surface B — EntityGenealogyGraph: the death of “Face #14”
The most expensive bug class in B-rep CAD has a name: the topological naming problem (TNP). A downstream feature like a fillet stores a reference to “Face #14 of Body #3”. Then the parametric model rebuilds, and Face #14 is now at a different location, or split into two, or merged with a neighbour, or gone. The fillet either silently moves to the wrong place, refuses to compute, or worse, computes to a value that looks plausible but isn’t.
Parasolid, ACIS, and OpenCascade have all spent decades inventing topological-naming algorithms. Each one is a heuristic, and each one fails on cases the others handle. The bug class is structural, not an artifact of any one implementation.
NeuroCAD takes the Git-internals lesson and applies it to topology: identity is content, not position. The EntityGenealogyGraph stores persistent identities — BodyIdentity, FaceIdentity, EdgeIdentity, FeatureIdentity, SketchIdentity — and tracks the history of how each identity came to exist. The edge kinds in the graph are explicit:
`Created`, `EvolvedFrom`, `SplitFrom`, `MergedFrom`, `DeletedBy`, `RebornFrom`, `ImportedFrom`, `SuppressedBy`, `UnsuppressedFrom`.
When you suppress one of three holes in a part, the other two holes’ face identities do not move. There is a regression test pinned in the codebase that exercises a 3-hole add → suppress-2 → suppress-1 → unsuppress-2 cycle and asserts the surviving identities stay byte-stable across the entire dance. This is not a heuristic; it is a structural property of the SDF representation.
When a face actually does split, the genealogy records both children and links them back to the parent with SplitFrom edges. Downstream features ask resolve_successor(parent_id, target_commit) and either get a single answer, an ambiguous answer, or an explicit “no successor — feature must update”. They never get a silent wrong-answer.
This is the Provenance Data Model (PROV-DM) idea — wasDerivedFrom, wasGeneratedBy, wasInvalidatedBy — applied to CAD topology instead of scientific workflows.
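The resolution contract can be sketched as a three-way result type. This is a simplified, hypothetical model (it collapses the `target_commit` parameter and treats the genealogy as a flat parent-to-children map); the property it illustrates is that every outcome is explicit, including ambiguity and absence.

```rust
use std::collections::HashMap;

// Every resolution outcome is explicit -- there is no "best guess" arm.
#[derive(Debug, PartialEq)]
enum Resolution {
    Unique(u64),         // exactly one successor identity
    Ambiguous(Vec<u64>), // e.g. the parent face split into several children
    NoSuccessor,         // feature must update; never a silent wrong answer
}

// Simplified genealogy: parent identity -> child identities produced by
// SplitFrom / EvolvedFrom / MergedFrom events.
#[derive(Default)]
struct Genealogy {
    children: HashMap<u64, Vec<u64>>,
}

fn resolve_successor(g: &Genealogy, parent: u64) -> Resolution {
    match g.children.get(&parent).map(|c| c.as_slice()) {
        Some([one]) => Resolution::Unique(*one),
        Some(many) if !many.is_empty() => Resolution::Ambiguous(many.to_vec()),
        _ => Resolution::NoSuccessor,
    }
}

fn main() {
    let mut g = Genealogy::default();
    g.children.insert(14, vec![41, 42]); // "Face #14" split into two children
    g.children.insert(7, vec![70]);      // face 7 evolved in place
    assert_eq!(resolve_successor(&g, 14), Resolution::Ambiguous(vec![41, 42]));
    assert_eq!(resolve_successor(&g, 7), Resolution::Unique(70));
    assert_eq!(resolve_successor(&g, 99), Resolution::NoSuccessor);
}
```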
Surface C — ProjectionIndex: the live wiring
The first two surfaces are about history and identity. The third is about the right now. Persistent identities need to map onto live data structures: the active kernel graph, the BVH, the e-graph of compose-operator rewrites, the UI selection, the export buffer for STEP/IGES, the simulation boundary in MOOSE.
The ProjectionIndex is that mapping. It supports queries like:
- `PersistentId → kernel_graph node`
- `PersistentId → topology entity`
- `PersistentId → DCG dirty node`
- `PersistentId → e-graph e-class`
- `PersistentId → BVH proxy`
- `PersistentId → UI selection handle`
- `PersistentId → simulation boundary`
Two properties are non-negotiable:
- Snapshot-isolation reads. A long-running query (e.g. “render the whole assembly to a viewport”) sees a consistent state of the world at the moment the query started. Concurrent commits do not interleave into the snapshot. This is plain MVCC, the same model PostgreSQL has used since the 1990s.
- Projection failures are diagnostics, not silent fallbacks. If a `PersistentId` no longer projects to a topology entity, the system raises a projection diagnostic — visible to the operator, surfaced in the UI, blocking release if it persists. There is no quiet “well, let’s match by index instead” behaviour.
The second property is the rule that distinguishes a serious system from a demo. CAD systems with weak persistence quietly fall back to index-matching when names break, and that’s how a hole appears at the wrong location after a rebuild.
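The diagnostic-not-fallback rule fits in a single return type. A hypothetical sketch (names and fields are illustrative): a failed projection is a structured value the caller must handle, and there is no code path that rematches by index.

```rust
use std::collections::HashMap;

// A failed projection is data, not a fallback: it names the identity
// that broke and must be surfaced to the operator.
#[derive(Debug, PartialEq)]
struct ProjectionDiagnostic {
    persistent_id: u64,
    message: String,
}

// Simplified ProjectionIndex: one of the several mappings the article
// lists (PersistentId -> topology entity handle).
struct ProjectionIndex {
    to_topology: HashMap<u64, u32>,
}

impl ProjectionIndex {
    fn project(&self, id: u64) -> Result<u32, ProjectionDiagnostic> {
        self.to_topology.get(&id).copied().ok_or_else(|| ProjectionDiagnostic {
            persistent_id: id,
            message: format!("PersistentId {id} no longer projects to a topology entity"),
        })
    }
}

fn main() {
    let idx = ProjectionIndex { to_topology: [(14u64, 140u32)].into_iter().collect() };
    assert_eq!(idx.project(14), Ok(140));
    // A broken projection is a visible error, never an index-match guess.
    assert!(idx.project(15).is_err());
}
```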
Surface D — EvidenceGraph: every claim has receipts
The last surface is the one that makes audit possible. The EvidenceGraph stores typed evidence nodes — DDG descriptors, M-PH rank invariants, Reeb decompositions, persistent-homology evidence, replay receipts, corpus runs, differential-test outputs, UI end-to-end runs — and links them to entities, commits, and release claims with edges:
`Supports`, `Contradicts`, `DependsOn`, `DerivedFrom`, `RedundantWith`, `IndependentOf`, `StaleComparedTo`.
Each evidence node carries:
- `target_id` — what it is evidence about,
- `producer` — what process produced it,
- `input_hashes` — a content-addressed snapshot of the inputs,
- `receipt_hash` — proof the producer ran,
- `generated_at`, `confidence`, `severity`, `freshness_policy`, `release_scope`.
The release-readiness rule is sharp: a capability is not release-ready because one test passes. It is release-ready only when its required independent evidence basis is fresh, passing, and free of HIGH blockers. The word “independent” is borrowed from matroid theory: repeated runs against the same weak fixture do not multiply confidence — they add a redundant edge to the basis but do not increase its rank. A hundred near-identical regression tests over the same toy model count as one independent piece of evidence, not a hundred.
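The no-double-count rule can be sketched as a rank computation. This is a deliberately crude, hypothetical model — real independence judgments are richer than "same fixture name" — but it shows why a hundred runs over one toy model raise the rank by at most one.

```rust
use std::collections::HashSet;

// Hypothetical evidence record: what a run exercised and whether it passed.
struct Evidence {
    fixture: &'static str, // what the test actually exercised
    passing: bool,
}

// Rank of the evidence basis: passing runs over the same fixture collapse
// to one independent element (a RedundantWith edge, not a new basis member).
fn basis_rank(evidence: &[Evidence]) -> usize {
    let mut independent: HashSet<&str> = HashSet::new();
    for e in evidence.iter().filter(|e| e.passing) {
        independent.insert(e.fixture);
    }
    independent.len()
}

fn main() {
    // 100 near-identical toy-model runs plus one assembly-scale run.
    let runs: Vec<Evidence> = (0..100)
        .map(|_| Evidence { fixture: "toy_model", passing: true })
        .chain(std::iter::once(Evidence { fixture: "assembly_scale", passing: true }))
        .collect();
    // 101 passing runs, rank 2: the toy runs count once.
    assert_eq!(basis_rank(&runs), 2);
}
```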
When a test fails, every claim that depended on it transitions to unverified immediately. The propagation is part of the same atomic commit that records the failure. Engineering teams who have lived with red CI dashboards that nobody trusts will recognise this as the missing piece: stale evidence is visibly stale, not silently green.
Atomic across all four
This is the punchline. When a parameter changes, the platform’s job is not “rebuild the part”. The job is to produce a single commit that:
- Appends a typed object to the HistoryDag (parents, patch set, timestamp, signed reflog entry on `refs/heads/main`).
- Records the entity events on the EntityGenealogyGraph (which faces evolved, split, merged, died, were reborn).
- Updates the ProjectionIndex to point at the new kernel-graph state and emits projection diagnostics for anything that broke.
- Marks dependent claims on the EvidenceGraph as stale, attaches the new replay receipt, recomputes the release-basis report.
If any of those four steps fails, the commit fails and none of them apply. Two-phase commit. SAGA pattern with explicit compensations. The same discipline a payment system uses to make sure money does not vanish between accounts.
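The all-or-nothing shape can be sketched as a classic prepare/apply/rollback loop over the four surfaces. The trait and mock below are illustrative, not the crate's API; the property being demonstrated is that if any surface refuses the commit, the surfaces already prepared are compensated and nothing is applied.

```rust
use std::cell::Cell;
use std::rc::Rc;

// Each surface (HistoryDag, EntityGenealogyGraph, ProjectionIndex,
// EvidenceGraph) participates in a two-phase commit.
trait Surface {
    fn prepare(&mut self, commit: u64) -> Result<(), String>;
    fn apply(&mut self, commit: u64);
    fn rollback(&mut self, commit: u64);
}

// Phase 1: prepare everywhere. Phase 2: apply everywhere, or roll back
// whatever was prepared and apply nowhere. No observer sees a torn state.
fn atomic_commit(surfaces: &mut [Box<dyn Surface>], commit: u64) -> Result<(), String> {
    let mut prepared = 0;
    for i in 0..surfaces.len() {
        if let Err(e) = surfaces[i].prepare(commit) {
            for j in 0..prepared {
                surfaces[j].rollback(commit); // explicit compensation
            }
            return Err(e);
        }
        prepared += 1;
    }
    for s in surfaces.iter_mut() {
        s.apply(commit);
    }
    Ok(())
}

// Mock surface that can refuse at prepare time and counts applies.
struct Mock { fail_prepare: bool, applied: Rc<Cell<usize>> }

impl Surface for Mock {
    fn prepare(&mut self, _c: u64) -> Result<(), String> {
        if self.fail_prepare { Err("surface rejected commit".into()) } else { Ok(()) }
    }
    fn apply(&mut self, _c: u64) { self.applied.set(self.applied.get() + 1); }
    fn rollback(&mut self, _c: u64) {}
}

fn main() {
    let applied = Rc::new(Cell::new(0));
    let mut surfaces: Vec<Box<dyn Surface>> = vec![
        Box::new(Mock { fail_prepare: false, applied: applied.clone() }), // HistoryDag
        Box::new(Mock { fail_prepare: false, applied: applied.clone() }), // EntityGenealogyGraph
        Box::new(Mock { fail_prepare: true,  applied: applied.clone() }), // ProjectionIndex refuses
        Box::new(Mock { fail_prepare: false, applied: applied.clone() }), // EvidenceGraph
    ];
    assert!(atomic_commit(&mut surfaces, 0xa1).is_err());
    assert_eq!(applied.get(), 0); // all-or-nothing: nothing was applied
}
```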
The benchmark target is that this whole transaction completes within an interactive budget (single-digit milliseconds for a typical parameter bump on a part-scale model, low-tens of milliseconds at assembly scale) and that no observer, including a concurrent renderer or a remote replica, can see a state where some of the four surfaces have the new commit and others have the old one.
That is the property nobody else in CAD has: not Onshape (versions, but not 4-surface atomic), not Teamcenter (PLM-grade audit, but layered on top of CAD instead of inside it), not SolidWorks PDM (file-level, not graph-level), not plain Git (text-level, no semantic identity).
Why this matters outside CAD
This architecture is not specific to mechanical engineering. The same four-surface pattern fits any domain where:
- history must be auditable,
- identity must survive refactoring,
- live state must be queryable cheaply,
- and claims about correctness must be backed by traceable evidence.
PLM is the obvious neighbour. Scientific computing is another. Regulated software (medical devices, aerospace, automotive ASIL) is a third. The pattern is general; NeuroCAD is the first place we know of where it has been built into a CAD kernel from the start instead of bolted on top of a file-based system later.
Status
NeuroCAD’s NeuroGraph crate ships the four surfaces today as explicit modules: history, genealogy, evidence, projection. The HistoryDag has a typed object store, safe refs with a mandatory reflog, and a first cut of fsck. The EntityGenealogyGraph supports split/merge/map-id and ambiguous-identity detection. The EvidenceGraph has the basis-report API and the no-double-count rule. The ProjectionIndex is the youngest of the four; full projection-failure diagnostics are still being wired through.
Proposed but not yet release-ready: cross-surface atomic commit protocol with explicit two-phase commit and rollback receipts; full ProjectionIndex with PersistentId → e-graph e-class and PersistentId → BVH proxy projections; pack/GC over loose object store; 24h soak replay-divergence canary across all four surfaces.