Manufacturing · 2026-05-10

Mold Design Has 30 Years of Literature, Zero Open-Source Unit Tests. NeuroCAD Changes That.

Undercut detection, parting-line generation, draft analysis: textbook material in any tooling shop, missing from every open-source CAD as a tested toolchain. Here is the gap and the plan.

Tags: mold design, undercut detection, parting line, draft analysis, injection molding, DFM, SDF, open benchmark

Three of the most cited algorithms in mold design — undercut detection, parting-line generation, draft analysis — are textbook material in any tooling shop. They are also missing from every open-source CAD package as a tested, reproducible toolchain. Here is the gap, and here is what we plan to ship to close it.

The first afternoon a young mold designer loses

Open any CAD package a student or a small shop can actually get: FreeCAD, the SolidWorks Education edition you cannot really redistribute, the Onshape free tier with its limits, Fusion 360 on a hobbyist licence. Pick a plastic part — a hand-held remote, a snap-fit enclosure, a pen cap. Now ask the system three questions:

  1. Where are the undercuts? That is, which features cannot be released straight along the pull direction without a side-action core?
  2. Where does the parting line run? That is, the curve along which the cavity and core halves of the mold meet.
  3. Which faces are below 1° draft? That is, which faces will scuff or gall when the part is ejected.

In a commercial integrated CAD/CAM tool — Moldex3D, Moldflow, Cimatron, NX Mold Wizard, SolidWorks Plastics — these three questions are buttons on the toolbar. They light up faces in red. They print a parting-line proposal. They cost $10,000 to $80,000 a seat per year and they are closed source.

In FreeCAD, in Open CASCADE, in build123d, in CadQuery, in any of the open kernels and toolboxes, these are not buttons. They are not modules. They are not even consistently present as half-finished prototypes. They are forum threads, abandoned plugins, half a Master’s thesis on GitHub from 2017, and a handful of academic papers behind paywalls.

This is the unit-tested-open-source gap.

What 30 years of literature actually says

The algorithms are not a research mystery. The literature is mature.

Draft analysis is just the dot product of every face’s outward normal n̂ with the pull direction d̂. If n̂ · d̂ > cos(90° − θ_min) for the cavity side, or n̂ · d̂ < −cos(90° − θ_min) for the core side, the face has enough draft. Otherwise it gets flagged red. That is it. The math fits in a tweet [1].
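
In code, the whole test is a thresholded dot product. A minimal Rust sketch; the function names are ours for illustration, not any package’s API:

```rust
// A face has enough draft when its normal clears the pull direction by
// at least the minimum draft angle, on either the cavity or core side.
fn dot(a: [f64; 3], b: [f64; 3]) -> f64 {
    a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
}

/// `n` and `d` are unit vectors; `theta_min` is the draft angle in radians.
fn has_draft(n: [f64; 3], d: [f64; 3], theta_min: f64) -> bool {
    let t = (std::f64::consts::FRAC_PI_2 - theta_min).cos(); // cos(90° − θ_min)
    dot(n, d) > t || dot(n, d) < -t // cavity side, or core side
}

fn main() {
    let pull = [0.0, 0.0, 1.0];
    let vertical_wall = [1.0, 0.0, 0.0]; // normal ⟂ pull: zero draft
    assert!(!has_draft(vertical_wall, pull, 1.0_f64.to_radians())); // flagged red
}
```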

Undercut detection has three dominant algorithmic families [2,3]:

  1. Visibility-based. Project the part to a 2D image plane along the pull direction; if a face is visible from +d̂ it belongs to the core half, from −d̂ to the cavity half, from neither it is an undercut.
  2. Z-map / height-map. Build a 2.5D heightfield along the pull axis. Any pocket whose floor is not the deepest height at that (x,y) cell is an undercut; a toy version is sketched after this list.
  3. B-rep concavity / hybrid. Walk the topology, find concave edges relative to the pull direction, group them into pockets, classify each pocket as moldable / undercut.
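
To make the Z-map idea concrete, here is a deliberately toy Rust sketch, assuming the pull direction is +z and a surface already sampled into points. A real implementation needs proper rasterisation, tolerances, and pocket grouping; this only shows the classification rule:

```rust
use std::collections::HashMap;

/// Classify sampled surface points as core / cavity / undercut by
/// comparing each sample's height with the extremes of its (x, y) cell.
fn classify(samples: &[[f64; 3]], cell: f64) -> Vec<&'static str> {
    let key = |p: &[f64; 3]| ((p[0] / cell).floor() as i64, (p[1] / cell).floor() as i64);
    let mut zmax: HashMap<(i64, i64), f64> = HashMap::new();
    let mut zmin: HashMap<(i64, i64), f64> = HashMap::new();
    for p in samples {
        let k = key(p);
        zmax.entry(k).and_modify(|z| *z = z.max(p[2])).or_insert(p[2]);
        zmin.entry(k).and_modify(|z| *z = z.min(p[2])).or_insert(p[2]);
    }
    let eps = 1e-9;
    samples
        .iter()
        .map(|p| {
            let k = key(p);
            if (zmax[&k] - p[2]).abs() < eps {
                "core" // visible from +d̂
            } else if (p[2] - zmin[&k]).abs() < eps {
                "cavity" // visible from −d̂
            } else {
                "undercut" // shadowed from both pull directions
            }
        })
        .collect()
}

fn main() {
    // Three samples stacked in one cell: the middle one is shadowed both ways.
    let pts = [[0.0, 0.0, 2.0], [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]];
    assert_eq!(classify(&pts, 1.0), vec!["core", "undercut", "cavity"]);
}
```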

Parting-line generation is more delicate but still well-published [3,4]: walk the silhouette of the part as seen from the pull direction, project to a parting plane, snap to the convex-hull boundary or the maximum-silhouette curve. Refine using concavity edges and user-specified break points.

These three algorithms have been in the academic CAD literature since at least Ravi & Srinivasan 1990. They are taught in Boothroyd-Dewhurst design-for-manufacturing courses [5]. They appear in surveys [2] and SCIRP papers [1]. They are in commercial products and they are in proprietary plugins. They are not in any open-source package as a unit-tested module that you can install and wire into your CI.

Why?

Why open-source CAD does not have this

Three structural reasons.

First, the geometry kernels these tools sit on are not built for it. Open CASCADE — the dominant open kernel — is a B-rep system. Visibility from a pull direction is computationally expensive on B-rep because you have to ray-cast against trimmed NURBS faces. Z-maps require dense surface evaluation. The result is plug-ins that are correct in principle but ten times slower than commercial competitors and effectively undebuggable when a face’s classification flips because of a numerical edge case. Maintainers stop maintaining them. They rot.

Second, there is no benchmark. Compare to robotics (Stanford Bunny, Cranfield) or to feature recognition (NIST DFM benchmark). There, every paper reports recall against the same five reference parts and you can see whose algorithm is better. In mold design there is no shared, openly-licensed corpus of benchmark plastic parts with hand-annotated ground-truth undercuts and parting lines. Each paper rolls its own three test parts, claims success, and the next paper’s three parts are different. The field has no Stanford Bunny.

Third, mold design has never been a hobbyist activity. Robotics has hobby roboticists. CAM has open-source LinuxCNC users running mills out of garages. Mold design is professional, capital-intensive, and run by people whose seats are paid for by the company. The economic pressure to open-source the algorithms simply never arose.

Why SDF makes this tractable

NeuroCAD is built on signed distance fields as the carrier of geometry. Every solid is described by a function f(x,y,z) that returns the signed distance to the nearest surface — negative inside, positive outside, zero on the surface. The gradient ∇f(p) of that function is the unit outward normal at any point on the surface. Smooth, well-defined, no edge cases at trimmed-NURBS seams.
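
Here is a minimal sketch of that machinery, with a sphere standing in for the solid and a central-difference gradient standing in for our analytic one. `sdf_sphere` and `grad` are illustrative helpers, not the shipped API:

```rust
// A sphere as an SDF, with the normal recovered by normalising a
// central-difference estimate of ∇f on the surface.
fn sdf_sphere(p: [f64; 3], r: f64) -> f64 {
    (p[0] * p[0] + p[1] * p[1] + p[2] * p[2]).sqrt() - r
}

/// Central-difference gradient of `f` at `p` with step `h`.
fn grad(f: impl Fn([f64; 3]) -> f64, p: [f64; 3], h: f64) -> [f64; 3] {
    let mut g = [0.0; 3];
    for i in 0..3 {
        let (mut a, mut b) = (p, p);
        a[i] += h;
        b[i] -= h;
        g[i] = (f(a) - f(b)) / (2.0 * h);
    }
    g
}

fn main() {
    let f = |p: [f64; 3]| sdf_sphere(p, 1.0);
    let n = grad(f, [0.0, 0.0, 1.0], 1e-5); // north pole of the unit sphere
    assert!((n[2] - 1.0).abs() < 1e-6); // outward normal is +z, as expected
}
```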

What does that buy us for the three algorithms?

Draft analysis becomes pointwise. Instead of “for every face, look up its analytic normal”, it becomes: sample the surface (f = 0 level set), compute ∇f / ||∇f|| at every sample, dot with the pull direction, threshold. The implementation is fifteen lines of Rust and is differentiable in the pull direction — meaning we can tell the user not just where the part fails draft, but which way to rotate to minimise the failing area.
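
A sketch of that loop, reusing the `grad` helper from the sphere example above; level-set sampling is assumed done elsewhere, and the names are ours for illustration:

```rust
/// Return the samples that fail the draft check. `samples` are assumed
/// to lie on the f = 0 level set; `d` is the unit pull direction and
/// `theta_min` the minimum draft angle in radians. `grad` is the
/// central-difference helper from the sphere sketch above.
fn draft_failures(
    f: impl Fn([f64; 3]) -> f64 + Copy,
    samples: &[[f64; 3]],
    d: [f64; 3],
    theta_min: f64,
) -> Vec<[f64; 3]> {
    let t = theta_min.sin(); // sin(θ_min) = cos(90° − θ_min)
    samples
        .iter()
        .copied()
        .filter(|&p| {
            let g = grad(f, p, 1e-5);
            let len = (g[0] * g[0] + g[1] * g[1] + g[2] * g[2]).sqrt();
            let nd = (g[0] * d[0] + g[1] * d[1] + g[2] * d[2]) / len;
            !(nd > t || nd < -t) // neither cavity-side nor core-side draft
        })
        .collect()
}
```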

Undercut detection by visibility becomes a ray-march. From a camera placed far outside the part along +d̂, march rays along −d̂ and record the first surface hit at each pixel. This is the same sphere-tracing kernel we already have for rendering [6]. The Z-map version is the same kernel with the depth value retained instead of just the hit position. Both are O(image_resolution × march_steps), trivially parallel, and they share their inner loop with our viewport renderer.
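
For the curious, the inner loop is plain sphere tracing [6]. A self-contained sketch with a unit sphere as the part and pull along +z; our production kernel differs in the details:

```rust
/// March a ray from `origin` along unit direction `dir` against SDF `f`.
/// Returns the depth of the first hit, or None if the ray escapes.
fn sphere_trace(
    f: impl Fn([f64; 3]) -> f64,
    origin: [f64; 3],
    dir: [f64; 3],
    max_t: f64,
) -> Option<f64> {
    let mut t = 0.0;
    for _ in 0..256 {
        let p = [
            origin[0] + t * dir[0],
            origin[1] + t * dir[1],
            origin[2] + t * dir[2],
        ];
        let d = f(p);
        if d < 1e-6 {
            return Some(t); // hit: retaining `t` per pixel gives the Z-map
        }
        t += d; // safe step: no surface is closer than d
        if t > max_t {
            break;
        }
    }
    None
}

fn main() {
    let sphere = |p: [f64; 3]| (p[0] * p[0] + p[1] * p[1] + p[2] * p[2]).sqrt() - 1.0;
    // A ray from above the unit sphere, marching straight down, hits at depth 9.
    let t = sphere_trace(sphere, [0.0, 0.0, 10.0], [0.0, 0.0, -1.0], 100.0).unwrap();
    assert!((t - 9.0).abs() < 1e-3);
}
```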

Parting-line generation becomes silhouette extraction. Find the locus of points where ∇f · d̂ = 0 — that is, where the surface is grazing-incident to the pull direction. That locus is the silhouette curve. Project it to the parting plane. The math is already present in our gradient-aware field operators.
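
Sketched the same way, assuming surface samples on the f = 0 level set and the `grad` helper from above, the silhouette is a filter on |∇f · d̂|:

```rust
/// Keep the surface samples where the normal is grazing-incident to the
/// pull direction, i.e. |∇f · d̂| ≈ 0. Reuses `grad` from the sphere sketch.
fn silhouette(
    f: impl Fn([f64; 3]) -> f64 + Copy,
    samples: &[[f64; 3]],
    d: [f64; 3],
    tol: f64,
) -> Vec<[f64; 3]> {
    samples
        .iter()
        .copied()
        .filter(|&p| {
            let g = grad(f, p, 1e-5);
            let len = (g[0] * g[0] + g[1] * g[1] + g[2] * g[2]).sqrt();
            ((g[0] * d[0] + g[1] * d[1] + g[2] * d[2]) / len).abs() < tol
        })
        .collect()
}
```

On the unit sphere with pull +z this keeps the samples near the equator; projected to the parting plane they form the expected circle.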

None of this is new mathematics. It is the same algorithms, transplanted onto a representation that handles them more naturally. The contribution is not the math — it is the unit tests, the benchmark, and the open-source delivery.

What “unit-tested” actually means here

In most projects, “unit tested” means “we wrote some asserts and they pass”. For a mold-design toolchain that is not enough. We need three test layers:

Property tests. For every part in the corpus, the implementations must satisfy invariants that hold by construction:

  • Every face the analyser flags as below the draft threshold must have an outward normal in the failing half-space relative to the pull direction d̂. (Property: classification is consistent with the definition.)
  • For a fully demoldable part, the undercut detector must return the empty set; one such check is sketched as a test after this list. (Property: false positives bounded.)
  • For a part whose undercut feature is known by construction, the detector must recover it. (Property: false negatives bounded.)
  • The parting-line curve must lie on the surface (f = 0) and be tangential to the pull direction (∇f · d̂ = 0). (Property: silhouette by definition.)
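
For flavour, here is the demoldable-part property sketched as a Rust test. `detect_undercuts` is a hypothetical stand-in for the eventual detector, stubbed out so the sketch compiles:

```rust
// `detect_undercuts` is a hypothetical stand-in for the eventual API;
// it should return the surface points shadowed from both +d̂ and −d̂.
fn detect_undercuts(_f: &dyn Fn([f64; 3]) -> f64, _pull: [f64; 3]) -> Vec<[f64; 3]> {
    Vec::new() // placeholder implementation
}

#[test]
fn fully_demoldable_part_has_no_undercuts() {
    // A sphere is demoldable along any pull direction, so the detector
    // must return the empty set (false positives bounded).
    let sphere = |p: [f64; 3]| (p[0] * p[0] + p[1] * p[1] + p[2] * p[2]).sqrt() - 1.0;
    assert!(detect_undercuts(&sphere, [0.0, 0.0, 1.0]).is_empty());
}
```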

Differential tests. For every part, two implementations of each algorithm must agree:

  • The draft analyser via SDF gradient agrees within 1° with a B-rep face-normal analyser reached through the bridge to NURBS; a minimal instance is sketched after this list.
  • SDF visibility undercut detector agrees with a brute-force B-rep ray-cast undercut detector on the same part to within one face’s classification.
  • Silhouette-based parting line agrees with a B-rep concave-edge parting line to within 0.1 mm Hausdorff distance on a clean reference part.
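
A minimal instance of the first bullet, reusing the `grad` helper from the SDF sketch and a sphere, where the analytic outward normal at a surface point p is p itself:

```rust
#[test]
fn gradient_normal_within_one_degree_of_analytic() {
    // On the unit sphere the analytic outward normal at p is p, so the
    // SDF-gradient normal must agree with it to within 1°.
    let f = |p: [f64; 3]| (p[0] * p[0] + p[1] * p[1] + p[2] * p[2]).sqrt() - 1.0;
    let p = [0.6, 0.0, 0.8]; // on the surface: 0.36 + 0.64 = 1
    let g = grad(f, p, 1e-5); // central-difference helper from the SDF sketch
    let len = (g[0] * g[0] + g[1] * g[1] + g[2] * g[2]).sqrt();
    let cos_angle = (g[0] * p[0] + g[1] * p[1] + g[2] * p[2]) / len;
    assert!(cos_angle >= 1.0_f64.to_radians().cos()); // within 1°
}
```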

Corpus tests. Run the toolchain over a published benchmark of plastic parts with hand-annotated ground truth. Report recall on undercuts, mean error on parting line, and false-positive rate on draft. This corpus does not yet exist as an openly-licensed asset; constructing it is part of the contribution.

The honest part: we have not shipped this yet

This blog post is not announcing a finished product. The Tooling workbench in NeuroCAD today is a placeholder — the mold/ module exists as a stub in the workbench tree, and the three test envelopes (draft, undercut, parting-line) are listed as HIGH-priority publishable test contracts in our REV11.5 testing dossier without yet being implemented.

What is real:

  • The SDF gradient-aware machinery is shipped.
  • The sphere-tracing kernel that visibility/Z-map detectors will reuse is shipped.
  • The B-rep ↔ SDF bridge that the differential tests will rely on is in place as a skeleton.
  • The benchmark corpus, the actual implementations, and the comparison against commercial tools are proposed work.

What we are committing to do publicly: implement these three algorithms in the open, unit-test them at all three layers (property, differential, corpus), publish the benchmark corpus under a permissive licence, and submit the result to the Computer-Aided Design journal [7] as the first openly-tested implementation of the foundational mold-design toolchain.

If you are a mold designer who has lost an afternoon to a missing undercut detector in your open-source pipeline — or who has paid $20k a year for a feature that should be a commodity — we want to hear from you. The corpus needs benchmark parts and we need ground truth.

References

  1. SCIRP, “Automatic Recognition and Construction of Draft Angle for Injection Mold Design” (paper #73785).
  2. ResearchGate 274995021, “Automated identification of complex undercut features for side-core design for die-casting”.
  3. ResearchGate 223816338, “Hybrid method for recognition of undercut features”.
  4. cadinterop.com, “3DViewStation draft analysis” technical reference.
  5. Boothroyd & Dewhurst, Product Design for Manufacture and Assembly, 3rd ed.
  6. J. C. Hart, “Sphere tracing: a geometric method for the antialiased ray tracing of implicit surfaces”, The Visual Computer 12(10), 1996.
  7. Computer-Aided Design, Elsevier (https://www.journals.elsevier.com/computer-aided-design).

Status

The Tooling workbench is a placeholder in the current NeuroCAD codebase. The three algorithms described — SDF-gradient draft analysis, sphere-traced visibility / Z-map undercut detection, and silhouette-based parting-line generation — are proposed, not shipped. The supporting infrastructure (SDF kernel, sphere-tracing renderer, B-rep ↔ SDF bridge skeleton) is shipped. The open benchmark corpus does not yet exist; constructing it is part of the proposed contribution. This post describes the work we intend to do in the open, not a fait accompli.

Ready to design differently?

Request early access to NeuroCAD.
