When You Change One Parameter, Why Does CAD Recompute Everything?
Feature-tree rebuild lag is a solvable problem. Incremental computation has had the answer for decades. Here is how NeuroCAD's Distributed Constraint Graph touches only what changed — and how we plan to benchmark it.
The five-second pause that shouldn’t exist
You drag a slider. The pad becomes 1 mm thicker. Your CAD seat freezes for five seconds, the cursor turns into a spinner, and the feature tree icons flash one by one as the kernel grinds through every feature you’ve ever added to the part. Twelve fillets. Four mirrors. A linear pattern of 64 holes. A shell. Three boolean cuts. None of them depend on the parameter you just changed. None of them needed to be touched. They all got recomputed anyway.
If you have ever opened a 200-feature part in a commercial CAD package and felt the rebuild tax every time you nudge a number, you already know the problem. You may have learned to live with it — staging your edits, suppressing branches of the tree, breaking the model into sub-assemblies, exporting and reimporting bodies — none of which are modeling decisions. They are workarounds for the kernel’s inability to distinguish “this feature actually depends on the change” from “this feature happens to live in the same document”.
This post is about the underlying cause and what NeuroCAD does about it.
What is a feature tree, really?
When you build a parametric part in any modern CAD system you are not really drawing geometry. You are recording a recipe. Each feature — extrude, fillet, hole, mirror, shell — is a recipe step that depends on:
- one or more parameters (numbers, dimensions, datum references),
- zero or more upstream features that produce the inputs it consumes.
The feature tree you see on the left of the screen is a flattened view of this recipe. Underneath, the kernel maintains a dependency graph: a directed acyclic structure in which each feature node points to the features and parameters it depends on.
When you change a parameter, the correct behaviour is:
- Find the parameter node in the dependency graph.
- Walk forward (downstream) through the edges and collect every feature that transitively depends on it.
- Re-evaluate exactly those features, in topological order, and nothing else.
- Leave every independent sibling untouched.
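The four steps above can be sketched in a few dozen lines. This is a minimal illustration of change propagation over a DAG, not NeuroCAD's API; the names (`dirty_closure`, `schedule`, the adjacency map) are invented for the example:

```python
from collections import defaultdict, deque

def dirty_closure(dependents, changed):
    """All nodes transitively downstream of `changed`, inclusive."""
    seen, stack = {changed}, [changed]
    while stack:
        for nxt in dependents[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def schedule(dependents, dirty):
    """Topological order of `dirty`, using only edges inside the closure
    (Kahn's algorithm), so every node runs after its inputs."""
    indeg = {n: 0 for n in dirty}
    for n in dirty:
        for m in dependents[n]:
            if m in dirty:
                indeg[m] += 1
    queue = deque(n for n, d in indeg.items() if d == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in dependents[n]:
            if m in dirty:
                indeg[m] -= 1
                if indeg[m] == 0:
                    queue.append(m)
    return order

# Toy tree: p feeds extrude; extrude feeds fillet and shell.
# A mirror feature exists in the document but does not depend on p.
dependents = defaultdict(list, {"p": ["extrude"], "extrude": ["fillet", "shell"]})
dirty = dirty_closure(dependents, "p")
order = schedule(dependents, dirty)
# "mirror" never enters the closure: the independent sibling is untouched.
```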
That algorithm is well known. It is taught in undergraduate compiler courses as change propagation over a DAG. It is the same idea behind incremental build systems (make, bazel), incremental computation libraries (Adapton, Salsa), and reactive UI frameworks. It is not exotic. It is not a research problem.
So why does it not happen in CAD?
Three reasons commercial CAD over-rebuilds
We don’t know the source code of SolidWorks, Inventor, Creo, NX or Fusion 360, so we can’t say with certainty. But we can read the symptoms — every benchmark thread on every CAD forum tells the same story — and we can read the patents and the Knowledge Base articles that the vendors have published. Three patterns recur:
1. Coarse-grained dependency tracking. Many kernels treat dependencies at the feature level, not at the parameter level. Touching the part document marks the part dirty. Touching any sketch entity marks the sketch dirty. Touching the sketch marks every feature consuming the sketch dirty. The fan-out is so aggressive that “the whole rest of the tree” is the typical dirty set even for edits that should affect one node.
2. Topology-fragile re-evaluation. B-rep CAD identifies geometry by named edges, faces, and vertices. A fillet attached to "the edge between Face_3 and Face_7" is bound to a string of names. If anything upstream changes the topology — a hole appears, an edge splits — the names go stale. Some kernels conservatively rebuild everything downstream of any topology-changing operation, because they cannot prove which downstream features still resolve their references.
3. No sibling independence guarantee. When you have two unrelated branches in the feature tree — a left bracket and a right bracket on the same body, never touching — there is no architectural reason why editing the left bracket should re-evaluate anything in the right bracket. Yet because the kernel doesn’t model siblings as independent subtrees, it treats them as part of a single global rebuild.
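Pattern 1 is easiest to see in miniature. The sketch below contrasts feature-level and parameter-level dirty marking on a toy model; the feature names and "reads" sets are invented for illustration and not taken from any vendor's kernel:

```python
# Each feature declares what it reads: either a specific parameter
# ("sketch1.width") or the output of another feature ("extrude").
features = {
    "extrude": {"reads": {"sketch1.width"}},
    "fillet":  {"reads": {"extrude"}},
    "hole":    {"reads": {"sketch1.hole_dia"}},
    "mirror":  {"reads": {"sketch2.offset"}},
}

def coarse_dirty(edited_param):
    """Coarse-grained: editing any entity dirties the whole sketch,
    which dirties every feature that touches that sketch."""
    sketch = edited_param.split(".")[0]
    return {f for f, spec in features.items()
            if any(r.startswith(sketch) for r in spec["reads"])}

def fine_dirty(edited_param):
    """Fine-grained: only features that read the exact parameter are dirtied."""
    return {f for f, spec in features.items()
            if edited_param in spec["reads"]}

coarse = coarse_dirty("sketch1.width")  # "hole" is collateral damage
fine = fine_dirty("sketch1.width")      # only the real consumer
```

In the coarse scheme, editing the sketch's width dirties `hole` too, even though `hole` only reads `sketch1.hole_dia`; fan that out transitively and "the whole rest of the tree" becomes the typical dirty set.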
NeuroCAD’s answer: the Distributed Constraint Graph
NeuroCAD is built around a Distributed Constraint Graph (DCG) that lives in the dcg module of the public API. Every feature, every parameter, every query — every node in the parametric tree — is a typed node in this graph. Every dependency is an explicit edge. The graph is the single source of truth for what depends on what.
When a parameter changes:
- The DCG receives an invalidation event on the parameter node.
- It computes the dirty closure: the transitive forward set of nodes reachable from the invalidated parameter through dependency edges.
- It produces an execution schedule: the dirty closure ordered topologically so that every node is evaluated after its inputs.
- The kernel walks the schedule. Nothing outside the dirty closure is touched. Nothing inside the dirty closure is evaluated twice. Independent siblings are left alone — their last_computed_revision field stays at the same value it had before the edit.
This is not new computer science. What is new is that it is exposed, observable, and benchmarkable in an SDF-native CAD platform.
Why SDF makes this easier
In a B-rep kernel, a feature’s output is topology — a set of faces, edges, vertices with names. The dependency graph has to track those names because downstream features reference them. The names break when topology changes; the rebuild has to be conservative to be safe.
In an SDF kernel, a feature’s output is a function — a signed distance field over a bounded domain. Downstream features compose those functions algebraically. There are no edge names to break. The dependency graph only needs to track which functions feed which compose nodes. If you suppress a hole, the SDF for the hole disappears from the compose tree; if you bring it back, it reappears; the surrounding fillets attached to the body’s overall field don’t need to be rebuilt because they were never bound to a specific edge in the first place. Variable topology is a non-event — and we have written a separate paper on that — but the same property makes the DCG cheaper to maintain because the dependency edges are coarse and stable.
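The algebraic composition can be made concrete with a toy CSG sketch. The `sphere`, `union`, and `subtract` helpers below are a minimal illustration of functions composing pointwise, not NeuroCAD's kernel types:

```python
import math

# Signed distance functions: negative inside, positive outside.
def sphere(center, radius):
    def sdf(x, y, z):
        return math.dist((x, y, z), center) - radius
    return sdf

def union(a, b):
    """CSG union: pointwise minimum of the two fields."""
    return lambda x, y, z: min(a(x, y, z), b(x, y, z))

def subtract(a, b):
    """Cut b out of a: max(a, -b)."""
    return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

body = sphere((0.0, 0.0, 0.0), 2.0)
boss = sphere((1.5, 0.0, 0.0), 0.8)
hole = sphere((0.0, 0.0, 0.0), 0.5)

part = subtract(union(body, boss), hole)

# Suppressing the hole is just re-composing without it. There are no edge
# names to re-resolve, so nothing bound to `body` needs a conservative rebuild.
part_without_hole = union(body, boss)
```

Note what is absent: no face IDs, no persistent naming, no reference resolution. The compose tree's edges are "this function feeds that node", and those edges survive any change in the resulting surface topology.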
What “sibling independence” looks like in practice
Imagine a part with roughly 100 features: a base body, ten engineering features stacked on top, each with eight or nine sub-features (sketches, datums, secondary cuts). You change parameter p on feature F_50. Three things should happen, in order:
- Features F_1 … F_49 are not touched. Their last_computed_revision stays unchanged. Their cached SDF stays in memory.
- Feature F_50 and its transitive dependents (say 12 features in total downstream) are re-evaluated in topological order.
- Sibling features F_51 … F_100 that do not consume any output of F_50 are not touched.
In NeuroCAD this is enforced as an architectural invariant. The DCG’s dirty-closure computation is the only authority that decides what gets re-evaluated; the executor refuses to evaluate anything outside the closure. A test contract feature_tree/test_dcg_ordering/ (proposed for the REV11.5 lab) exercises exactly this property: build a 100-feature tree, perturb a single parameter, assert that the set of nodes whose last_computed_revision advanced is exactly the dirty closure of the perturbation, and assert that the order of evaluation is a valid topological sort of that closure.
If the assertion ever fails, the kernel is broken. There is no edge case in which a sibling is “allowed” to be re-evaluated; if it happens, it is a bug. That is the kind of guarantee you cannot make about a closed-source kernel because you can’t see the dependency tracking.
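One way to picture the proposed contract is as a property test over a toy linear chain. Everything here, including `build_chain`, the revision map, and the 0-indexed node names, is an illustrative stand-in; the real harness would cover varied tree shapes, not just a chain:

```python
def build_chain(n):
    """Feature i feeds feature i+1; returns a dependents adjacency map."""
    return {i: ([i + 1] if i + 1 < n else []) for i in range(n)}

def closure(dependents, root):
    seen, stack = {root}, [root]
    while stack:
        for d in dependents[stack.pop()]:
            if d not in seen:
                seen.add(d)
                stack.append(d)
    return seen

def rebuild(dependents, revisions, root, new_rev):
    """Advance revisions for exactly the dirty closure, in dependency order."""
    dirty = closure(dependents, root)
    evaluated = []
    for node in sorted(dirty):  # on a chain, index order is topological order
        revisions[node] = new_rev
        evaluated.append(node)
    return dirty, evaluated

n = 100
dependents = build_chain(n)
revisions = {i: 0 for i in range(n)}
dirty, evaluated = rebuild(dependents, revisions, root=50, new_rev=1)

# The contract: nodes whose revision advanced == the dirty closure, exactly.
advanced = {i for i, r in revisions.items() if r == 1}
assert advanced == dirty
assert all(revisions[i] == 0 for i in range(50))  # upstream nodes untouched
```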
The benchmark we want to publish
The reason this becomes a publication, not just an internal invariant, is that no commercial CAD vendor has ever published a reproducible re-evaluation benchmark for their feature tree. Benchmarks exist for boolean operations, for tessellation, for assembly load times. Re-evaluation of the parametric tree under perturbation is the dark matter of CAD performance: everyone feels it, nobody measures it.
We propose to publish:
- The reference parametric trees (100, 500, 1000 features, varying topology shapes — linear chain, balanced binary, wide fan-out, deeply nested patterns).
- The perturbation patterns (single-parameter, full-cascade, deep-leaf, root, mid-tree).
- The latency metrics (median, p95, p99) per perturbation class.
- The “sibling work” metric: percentage of unrelated features that were not touched.
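As a sketch of how a report for one perturbation class might be assembled: the percentile method, helper names, and sample data below are all assumptions for illustration, not the proposed format itself:

```python
import random
import statistics

def percentile(samples, p):
    """Nearest-rank percentile over a list of latency samples."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

def summarize(latencies_ms, touched, unrelated):
    """Latency stats plus the sibling-work metric for one perturbation class."""
    untouched = [f for f in unrelated if f not in touched]
    return {
        "median_ms": statistics.median(latencies_ms),
        "p95_ms": percentile(latencies_ms, 95),
        "p99_ms": percentile(latencies_ms, 99),
        # Percentage of unrelated features the kernel correctly left alone;
        # a perfect kernel reports 100.0 here.
        "sibling_work_pct": 100.0 * len(untouched) / len(unrelated),
    }

latencies = [random.uniform(1.0, 5.0) for _ in range(200)]  # stand-in samples
report = summarize(latencies,
                   touched={"F_50", "F_51"},
                   unrelated={"F_1", "F_2", "F_3", "F_51"})
# F_51 was unrelated but got touched, so sibling_work_pct drops below 100.
```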
The numbers for NeuroCAD will be measured against the kernel as-built. Numbers for FreeCAD (the only other open kernel with a feature tree) will be best-effort, with full disclosure of how we ran them. SolidWorks/Onshape/Fusion are closed and cannot be benchmarked apples-to-apples; we will not pretend otherwise.
The point of the benchmark is not to declare a winner. It is to give the field a shared yardstick so that every future CAD kernel — open or closed — can be compared on the same axes. Today there is no such yardstick. We can fix that.
What this unlocks for users
Three things, in order of how much you will feel them:
1. The slider stops freezing. When the kernel only re-evaluates what changed, dragging a parameter slider on a 200-feature part stays interactive instead of leaving you staring at a five-second progress bar. You can scrub through values and watch the model respond live.
2. Live coupling becomes possible. Once re-evaluation is bounded by the dirty closure, the kernel can afford to re-evaluate continuously — every keystroke in a numeric field, every degree of a rotation handle. NeuroCAD’s interactive workbench depends on this property; a kernel that rebuilds the world on every edit cannot offer it.
3. Larger models stay tractable. The cost of editing scales with the dirty closure, not the model size. A 1000-feature part where you edit a leaf feature with no downstream dependents costs the same as a 10-feature part with the same edit. This breaks the “models start fast and get slow” curve that every CAD user knows.
A small ask
If you build CAD performance tools or benchmarks and you are interested in a shared yardstick for parametric re-evaluation, we want to hear from you. The format we propose is open; the goal is that every kernel, ours included, has a well-defined number to publish.
Status
The Distributed Constraint Graph itself ships in NeuroCAD’s public API (≈1100 lines in the core DCG module, with companion construction logic). It models nodes, edges, execution phases, dirty closures, execution schedules, and rebuild triggers. It is exercised by every operation in the kernel that touches the feature tree.
The benchmark — the part we want to publish — is proposed, not yet executed. The test bucket name is feature_tree/test_dcg_ordering/; the contract exists on paper; reproducible scripts and numbers will land as part of the REV11.5 publication wave. We mark this honestly because the publication’s value depends on it being a reproducible benchmark, not a vendor whitepaper-style “trust us”.