The transparency question in autonomous interfaces—what to surface, what to simplify, what to explain—needs a concrete framework. Daniel Ruston offers one.

Ruston names the next layer: the Orchestrated User Interface, where the user states intent and the system generates the right interface and executes across multiple agents. The label is less interesting than what it demands from designers:

We can no longer design rigidly for “Happy Paths.” We must design for Probabilistic UX. The designer’s job is no longer drawing the buttons; the designer’s job is defining the thresholds for when the button “presses itself” or when the system needs the user to clarify, correct, or control.

Ruston makes this concrete with a confidence-threshold pattern:

- Low Confidence (<60%): The system asks the user for clarification or provides a vague response requiring follow-up (“Which Jane do you want me to schedule with?”).
- Medium Confidence (60–90%): The system makes a tentative suggestion (“Shall I draft a reply based on your last meeting?”).
- High Confidence (>90%): The system acts and informs (“I’ve blocked this time on your calendar to prevent conflicts”).

That’s the design lever most AI products skip. They either act without explaining or ask permission for everything. The threshold gives designers something to actually spec: not “should the system do this?” but “how sure does it need to be before it does this without asking?”
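Here's a minimal sketch of what that spec could look like in code. The cutoffs come from Ruston's tiers, but everything else, the type names, the handler shapes, the clarification wording, is hypothetical, not any real framework's API:

```typescript
// Illustrative sketch of a confidence-threshold dispatcher.
// Thresholds and names are hypothetical, modeled on Ruston's tiers.

type AgentAction =
  | { kind: "clarify"; question: string }      // low confidence: ask the user
  | { kind: "suggest"; proposal: string }      // medium confidence: tentative suggestion
  | { kind: "actAndInform"; notice: string };  // high confidence: act, then tell the user

interface Intent {
  description: string;
  confidence: number; // system's confidence in its interpretation, 0..1
}

// The design lever lives here: cutoffs in a spec, not buried in branching logic.
const LOW_CUTOFF = 0.6;
const HIGH_CUTOFF = 0.9;

function dispatch(intent: Intent): AgentAction {
  if (intent.confidence < LOW_CUTOFF) {
    return { kind: "clarify", question: `Can you clarify what you mean by "${intent.description}"?` };
  }
  if (intent.confidence < HIGH_CUTOFF) {
    return { kind: "suggest", proposal: `Shall I ${intent.description}?` };
  }
  return { kind: "actAndInform", notice: `Done: ${intent.description}.` };
}
```

Calling `dispatch({ description: "schedule with Jane", confidence: 0.45 })` lands in the clarify branch; push the confidence above 0.9 and the same intent flips to act-and-inform. The point is that the two cutoffs become a single, inspectable surface a designer can own.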

Ruston borrows a metaphor from aviation to describe what this visibility should look like:

Analogue cockpits require pilots to look at individual gauges and mentally build a picture of the aircraft’s “system” state. The glass cockpit philosophy shifts the focus to a human-centered design that processes and integrates this data into an intuitive, graphical “picture” of flight.

Same problem, different domain. Most AI products today are analogue cockpits: individual agent outputs, raw status messages, no integrated picture. The confidence thresholds tell the system when to act. The glass cockpit tells the user what’s happening while it acts.
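To make the contrast concrete, here's one way the integrated picture might be assembled from per-agent statuses. The record shapes and field names below are illustrative, not from Ruston or any real framework; the idea is that the glass-cockpit view is a rollup, not the raw feed:

```typescript
// Illustrative "glass cockpit" rollup: individual agent statuses
// (the analogue gauges) integrated into one picture of system state.

interface AgentStatus {
  agent: string;
  task: string;
  state: "running" | "waiting_on_user" | "done" | "failed";
  confidence: number; // 0..1
}

interface CockpitView {
  headline: string;              // one-line integrated summary
  needsAttention: AgentStatus[]; // only what the user must act on
}

function integrate(statuses: AgentStatus[]): CockpitView {
  const blocked = statuses.filter(
    (s) => s.state === "waiting_on_user" || s.state === "failed"
  );
  const done = statuses.filter((s) => s.state === "done").length;
  return {
    headline: `${done}/${statuses.length} tasks complete, ${blocked.length} need you`,
    needsAttention: blocked, // surface decisions, suppress raw telemetry
  };
}
```

The analogue-cockpit failure mode is showing `statuses` directly and making the user do the integration in their head; the glass-cockpit move is computing the picture for them.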
