Industry 2026-05-10

Why Aerospace Tech Pubs Still Place Callouts by Hand — and What an Algorithmic Contract Looks Like

S1000D tells you what a callout is. ATA iSpec 2200 tells you what code it carries. Neither tells you where on the page it should sit. Here is what a placement contract looks like.

#S1000D · #ATA iSpec 2200 · #exploded view · #technical publications · #aerospace IPC · #callout placement · #BREX · #SDF silhouette

The hook

S1000D tells you what a callout is. ATA iSpec 2200 tells you what code the part underneath the callout must carry. Neither standard tells you where on the page the callout should sit.

Walk into any aerospace illustrated parts catalog (IPC) authoring shop today and you will find the same pattern. A CAD model becomes an exploded-view illustration. An item list comes out of the engineering BOM. An illustrator opens the figure in a graphic editor and moves balloons around with the mouse, one by one, until the page looks readable. The reviewer flags the ones whose leader lines cross the part they are labeling. The illustrator drags them again. The page is signed off when it “looks right.”

This is the bottleneck of every modern parts catalog. It is also the reason the same illustration, regenerated three months later from a slightly updated CAD model, has to be re-laid out from scratch — none of the human placement decisions are encoded anywhere a machine can replay.

NeuroCAD ships an SDF-native exploded-view pipeline. Because the geometry is implicit and BVH-accelerated, computing the silhouette of every exploded part on the page plane is cheap. That makes a deterministic, testable callout placement contract possible. This post explains what such a contract looks like, what its rules are, and why it deserves to be a published standard rather than a private feature in one CAD vendor’s GUI.


What the standards actually say

S1000D — the international specification for technical publications — is enormous. Issue 5.0 and 6.0 between them define hundreds of XML element types, business-rule schemas (BREX), data module codes (DMC), and applicability annotations. For an illustrated parts data module, S1000D gives you a <callout> element, a way to bind it to a <catalogSeqNumber> in the IPC table, and a way to declare that the callout points at coordinates inside a referenced graphic.

What S1000D does not give you:

  • A predicate that says “this leader line must not cross the silhouette of the part it is labeling.”
  • A predicate that says “two callouts must be at least d pixels apart.”
  • A predicate that says “no callout may sit outside the printable margin.”
  • A predicate that says “two leader lines must not cross each other.”

Those rules live in internal illustrator style guides at every aerospace prime, every airframe MRO, every defense contractor. They are written in English. They are enforced by human reviewers. They are not encoded in BREX. They are not in iSpec 2200. They are not in the CGM aerospace graphic profile.

ATA iSpec 2200 occupies the adjacent slot — the chapter/section/subject coding for the part item the callout points at, and the content conventions for civil-aviation technical data. It, too, says nothing about geometric placement.

This is the publishable hole. There is a piece of every aerospace IPC pipeline that no published standard governs and no published algorithm tests.


A picture of the problem

Here is what hand-placement looks like at its worst:

                                        ┌── 12
                                        │  /
   ┌────────────┐                      ┌─/───────┐
   │            │                      │/        │
   │   PART A   │──── 7 ──── 3────┐    │ PART B  │
   │            │                 │    │         │
   └────────────┘                 │    └─────────┘
            \                     │       │
             \                    │       │
              \─── 5 ── 11 ───────┘       │

                              ┌── 4 ──────┘

                              │   (leader 4 crosses leader 11
                              │    AND crosses Part B silhouette
                              │    AND callout 12 is half off the
                              │    page margin)

Three rule violations on one figure: leader-cross-leader, leader-cross-part, page-margin overrun. In a real S1000D delivery, this figure would be flagged by a senior reviewer and bounced back to the illustrator, who would drag the balloons until none of those three violations triggered.

Here is what an algorithmic placement contract is supposed to produce instead:

   ┌────────────┐                      ┌─────────┐
 7─┤            │                      │         ├─11
   │   PART A   │                      │ PART B  │
 5─┤            │                      │         ├─12
   └────────────┘                      └─────────┘

                                              4 ──┘

   (leaders exit the silhouette, terminate at the
    page margin, no leader crosses any other leader,
    no leader crosses the silhouette of any part)

Same parts. Same item numbers. Same silhouettes. Different placement, produced deterministically by an algorithm whose output a regression test can verify.

That regression test is the missing piece.


What the contract has to enforce

Five rules. Stated plainly first.

Rule 1 — No callout overlap

The bounding box of any two callout labels must not intersect. Concretely: if balloon “12” sits at pixel (320, 480) with radius 12, any other radius-12 balloon must sit at least 24 pixels away on at least one axis — otherwise their 24 × 24 bounding boxes overlap.

Rule 2 — Leader does not cross the part it labels

If callout “12” labels Part B, the leader line from balloon 12 to its anchor point on Part B must touch Part B’s silhouette only at the anchor itself. It must not chord across the part. This is the violation a human reviewer’s eye catches immediately, and it is the most common style-guide failure.

Rule 3 — Page-margin compliance

Every balloon centre and every visible segment of every leader must lie inside the publishable area defined by the data module’s page geometry — typically a 12 mm safe margin from the trim edge.

Rule 4 — Minimum balloon-to-balloon distance

Independent of bounding-box overlap, balloons must be far enough apart that the printed page does not look cluttered. The threshold is set by the project’s style guide; 18–24 mm is typical.

Rule 5 — No leader-line crossing

No two leader lines may cross each other anywhere along their length. This is the rule that turns the whole problem into a non-trivial geometric one — solving rules 1–4 independently does not solve rule 5.

A figure that satisfies all five rules is “publishable.” A figure that violates one is “rejected at delivery.” Today, that judgment is rendered by a human eye. The contract turns it into a function call that returns Ok(()) or Err(violations).
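The shape of that function call can be sketched in a few lines of Python. Everything here is illustrative — the type names, the violation tuples, and the list-of-violations return convention are assumptions for exposition, not NeuroCAD’s actual API:

```python
# Hypothetical sketch of the placement contract. Each rule is a pure
# predicate over the 2D page-plane layout; an empty result means the
# figure is publishable, a non-empty result is the rejection reason.
from dataclasses import dataclass

@dataclass(frozen=True)
class Balloon:
    item: str   # callout item number, e.g. "12"
    x: float    # balloon centre on the page plane
    y: float
    r: float    # balloon radius

def check_rule_1(balloons, min_gap=0.0):
    """Rule 1: no two balloon bounding boxes may intersect."""
    violations = []
    for i, a in enumerate(balloons):
        for b in balloons[i + 1:]:
            # Axis-aligned boxes intersect only if they overlap on BOTH axes.
            if (abs(a.x - b.x) < a.r + b.r + min_gap and
                    abs(a.y - b.y) < a.r + b.r + min_gap):
                violations.append(("rule_1", a.item, b.item))
    return violations

def check_rule_3(balloons, x_min, y_min, x_max, y_max):
    """Rule 3: every balloon centre stays inside the safe margin."""
    return [("rule_3", b.item) for b in balloons
            if not (x_min <= b.x <= x_max and y_min <= b.y <= y_max)]

def check_contract(balloons, page):
    """Returns [] when the figure is publishable, else the violations."""
    violations = []
    violations += check_rule_1(balloons)
    violations += check_rule_3(balloons, *page)
    # rules 2, 4 and 5 would be chained here in the same style
    return violations
```

Rules 2, 4 and 5 slot into the same chain; the point is that “publishable” becomes the empty violation list, checked in a regression suite rather than by a reviewer’s eye.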


Why SDF makes this tractable

In a B-rep CAD system, “the silhouette of part B on the page plane” is the projection of part B’s faces onto the camera plane, traced edge-by-edge. It is tractable, but expensive enough that most existing implementations cache the result.

In an SDF system, “is point (x, y) on the page plane inside the silhouette of part B” is a single distance-field evaluation: ray-march from camera through (x, y) and check whether the ray hits part B’s SDF before the page-far plane. NeuroCAD’s BVH lets you batch thousands of these queries cheaply. So testing rule 2 — leader does not cross part — becomes “sample the leader at N points and check that no sample point lies inside the silhouette of the part being labeled.” It is a geometric test that runs in milliseconds on a real exploded view.
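The sampling loop for rule 2 is short enough to write out. In this sketch the `inside_silhouette` callback is a stand-in for the batched SDF query described above — the function name and signature are assumptions, not NeuroCAD’s API:

```python
def leader_clear_of_part(leader, inside_silhouette, n_samples=64):
    """Rule 2 test: sample the leader line and verify that no sample
    (short of the anchor endpoint itself) falls inside the labeled
    part's silhouette. `inside_silhouette(x, y)` stands in for the
    SDF query: ray-march from the camera through (x, y) on the page
    plane and report whether the ray hits the part."""
    x0, y0, x1, y1 = leader       # balloon end -> anchor end
    for i in range(n_samples):
        t = i / n_samples         # t in [0, 1): excludes the anchor
        x = x0 + t * (x1 - x0)
        y = y0 + t * (y1 - y0)
        if inside_silhouette(x, y):
            return False          # leader chords across the part
    return True
```

A unit-circle silhouette makes the behaviour concrete: a leader that terminates at the boundary passes, one that punches through the interior fails.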

The other four rules reduce to point-in-rectangle and segment-segment-intersection tests on the 2D page-plane projection. None of this is hard mathematics. What was missing was someone bothering to write the contract down.
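The rule 5 check, for instance, is the standard orientation predicate for proper segment intersection. This is a sketch: shared endpoints and collinear overlaps would need the usual special-case handling in production code:

```python
def _orient(ax, ay, bx, by, cx, cy):
    """Sign of the cross product (b - a) x (c - a)."""
    d = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (d > 0) - (d < 0)

def segments_cross(p, q):
    """Rule 5 test: do two leader segments properly intersect?
    p and q are (x0, y0, x1, y1) tuples on the page plane. Two
    segments cross when each one's endpoints lie on opposite sides
    of the other's supporting line."""
    ax, ay, bx, by = p
    cx, cy, dx, dy = q
    return (_orient(ax, ay, bx, by, cx, cy) !=
            _orient(ax, ay, bx, by, dx, dy) and
            _orient(cx, cy, dx, dy, ax, ay) !=
            _orient(cx, cy, dx, dy, bx, by))
```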


What the test looks like

The test bucket in the NeuroCAD lab is tech_pubs/callout_placement/. The test fixture is a curated set of exploded views — initially small, on the order of 10 figures — with hand-validated “publishable” placements as ground truth. The test runs the placement algorithm on each figure and asserts:

  1. No rule 1–5 violation in the algorithm’s output.
  2. The output is deterministic — running the algorithm twice on the same input produces byte-identical leader coordinates.
  3. The output is close to the human ground truth, where “close” is measured as a distance metric on the set of balloon positions.

Determinism matters more than aesthetic similarity to the human placement. A deterministic algorithm produces a callout layout that can be regenerated three months later when the CAD model changes, with no illustrator in the loop. That is the productivity win.
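A determinism check of that kind is tiny to write. Here `place_callouts` and the figure fixture are hypothetical stand-ins for whatever the real placement entry point turns out to be:

```python
import json

def assert_deterministic(place_callouts, figure):
    """Run the placement twice on the same input and require
    byte-identical serialized output."""
    first = json.dumps(place_callouts(figure), sort_keys=True)
    second = json.dumps(place_callouts(figure), sort_keys=True)
    assert first == second, "placement is not byte-identical across runs"
```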


How this fits with the rest of NeuroCAD

NeuroCAD’s NeuroGraph is the platform’s evidence graph — the audit fabric that records every derivation in the system. When the placement algorithm binds callout “12” to Part B, NeuroGraph records the audit edge callout_12 → part_B with a timestamp, a placement-algorithm version, and the rule-evaluation result. This is the same fabric that records dimension_4.7 → sketch_constraint_12 → CAD_part_path for the GD&T side of the platform.

The downstream consequence is non-trivial: when the assembly model changes — a part is added, a part is suppressed, a part’s exploded position drifts — NeuroGraph can replay the placement contract on only the affected callouts. Today the same change forces a full manual re-layout. With the contract, only the callouts whose audit edges changed get re-evaluated.
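The replay trick above reduces to a set filter over audit edges. The field names in this sketch are assumptions about what such an edge carries, not NeuroGraph’s actual schema:

```python
# Illustrative shape of an audit edge for a callout binding.
from dataclasses import dataclass

@dataclass(frozen=True)
class AuditEdge:
    source: str             # e.g. "callout_12"
    target: str             # e.g. "part_B"
    algorithm_version: str  # placement-algorithm version
    timestamp: str          # ISO 8601
    rule_result: tuple      # () when publishable, else violations

def affected_callouts(edges, changed_parts):
    """Replay only the callouts whose audit edges touch a changed part."""
    return {e.source for e in edges if e.target in changed_parts}
```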


The call to action

If you author S1000D data modules — especially if you maintain an internal style guide that says things like “leaders must not cross the part” — we want your style guide. The five rules above are the canonical core, but every shop has at least one extra rule (preferred quadrant for balloon placement, short-leader-first ordering, etc.). We are looking for contributions of representative figures with ground-truth placements.

If you are on the S1000D Steering Committee or the ATA iSpec 2200 working group, the contribution we propose is a placement contract that fits inside an existing data-module element set without requiring a schema change.

If you are a CAD vendor: the algorithm is in the open. The interesting work is the integration with the audit graph and the SDF-native silhouette test. NeuroCAD is happy to discuss interoperability.


References (cited above)

  • S1000D Issue 5.0 / 6.0 — https://s1000d.org/
  • ATA iSpec 2200 — https://www.webxsystems.com/specifications/ata-ispec-2200
  • BREX overview — https://www.pennantplc.com/what-is-brex-and-what-is-its-relationship-to-the-s1000d-business-rules/
  • IRIS S1000D / iSpec checker — https://www.irissoftwaresuite.com/irischeck.html

Status

To be honest with the reader: NeuroCAD has not yet shipped this algorithm. The Tech Pubs workbench in the current platform is a placeholder. The exploded-view geometry pipeline exists in the SDF and BVH layers and has been demonstrated in the kernel test suite. The placement contract — the five rules above and the regression test bucket — is proposed.

What is real today: the SDF + BVH machinery that makes silhouette tests cheap; NeuroGraph as the audit fabric; the S1000D / ATA iSpec 2200 schema knowledge that makes the data-module side compliant.

What is proposed: the placement algorithm itself; the benchmark corpus of exploded views with ground-truth placements; the throughput comparison vs hand-placement; the integration into the NeuroGraph audit fabric.

The contract is publishable now because the specification is the contribution; the implementation is downstream of it.

Ready to design differently?

Request early access to NeuroCAD.
