102 posts tagged with “user interface”

Taras Bakusevych closes his walkthrough of ten dying UI patterns on the heuristic that matters:

Execution UI: Interfaces that help humans perform deterministic work — entering data, configuring rules, following process steps, executing repetitive operations. 🟠 Shrinking. As AI automates execution, these surfaces lose their reason to exist.

Judgment UI: Interfaces that help humans evaluate, guide, and correct work done by machines — reviewing outputs, verifying changes, understanding reasoning, intervening at exceptions. 🟢 Growing. As AI takes on more autonomous work, humans need better surfaces to supervise it.

The supervision problem is what Jakob Nielsen called evaluability—the new central UX metric—and Bakusevych is doing the screen-by-screen translation. Every pattern in his list gets re-examined under one question: is this surface helping the human do the work, or helping the human check the work?

The HubSpot quote flow makes the friction concrete:

Creating a single sales quote in HubSpot requires navigating seven sequential screens. The rep manually selects the contact, adds company details, configures line items, chooses signature options, sets payment terms, picks a template, and previews the result — before a single quote reaches the buyer. Each step assumes the system doesn’t know information it already has in the CRM.

Bakusevych’s replacement gives the rep a different role: review what Shopify Sidekick assembled, correct what’s wrong, ship.

That’s the test he leaves you with. Open one screen in your product and ask which job it’s doing. If it’s interrogating the user for context the system could have inferred, it’s on the shrinking side.

Grid of UI pattern cards with a recycling icon at the center, illustrating ten interfaces being remade by AI.

10 UI Patterns That Won’t Survive the AI Shift

Taras Bakusevych walks through ten UI patterns under pressure from AI and lands on the one heuristic worth keeping: execution UI shrinks, judgment UI grows.

syntaxstream.substack.com

The terminal’s return as a serious surface for new tools (Claude Code, Codex, Omarchy) has mostly been read as a developer aesthetic story. Alcides Fonseca reads it as the receipt for thirty years of GUI toolkit churn. He walks the platforms one by one (Windows, Linux, macOS), then through Electron, then through the failed restarts (Google’s Flutter UI, Zed’s GPUI), and ends on TUIs as the place developers go when none of the layers above hold up.

Fonseca on macOS:

Apple used to be a one-book religion. Apple’s Human Interface Guidelines used to be cited by every User Interface course over the world. Xerox PARC and Apple were the two institutions that studied what it means to have a good human interface. Fast forward a few decades, and Apple is doing the best worst it can to break all the guidelines and consistency it was known for.

This isn’t a nostalgia complaint. Fonseca lists the live breaks (Fitts’ law getting ignored, the Tahoe window-resizing saga that didn’t stay fixed, the icons cluttering Apple menus) and treats them as the same class of failure as Microsoft’s WinForms-WPF-Silverlight-WinUI-MAUI parade. The mechanism differs but the outcome is the same: the platform stops being a place a designer can rely on.

Fonseca on Electron:

Looking at my dock, I have 8 native apps (text mate and macOS system utilities) and 6 electron apps (Slack, Discord, Mattermost, VScode, Cursor, Plexampp). And that’s from someone who really wishes he could avoid having any electron app at all. […] These are actions that should be the same across every macOS application, and even if there are shortcuts, they are not announced in the menus.

The dock count is the right way to measure it. RAM is the visible cost of Electron; the invisible cost is that every Electron app becomes its own little keyboard regime, with shortcuts that often don’t match the rest of the system and aren’t announced in menus when they do exist. Fonseca’s Cursor example (can you keyboard from the agent panel to the agent list and archive an item?) is the kind of question any pre-Electron Mac app would have answered yes to. Most Electron apps answer maybe, with a shortcut their vendor invented.

His prescription that follows (make HCI mandatory in CS curricula, fail student projects with bad UIs, push OS vendors to invest in toolkits developers want to use) is correct in shape and probably wrong about leverage. Students aren’t the bottleneck. Apple and Microsoft have already read Norman. TUIs are back because the platforms quit, and the curriculum can’t fix that.

Fonseca’s diagnosis is right. The prescription is narrower. The TUI escape hatch works for developers because their work is text. Designers don’t get the same exit when the canvas is the medium itself.

Bonus: Speaking of TUIs, TUIStudio is a macOS app for designing terminal UIs, Figma-style!

Linux desktop split between a terminal showing an `ls` directory listing, a lazygit interface with recent commits, and btop system monitor displaying CPU, memory, disk, network, and process stats.

Why TUIs are back

Terminal User Interfaces (TUIs) are making a comeback. DHH’s Omarchy is made of three types of user interfaces: TUIs, for immediate feedback and bonus geek points; webapps, because 37signals (his company) sells SaaS web applications; and the unavoidable GNOME-style native applications that really do not fit well in the style of the distro.

wiki.alcidesfonseca.com

Emil Kowalski, a design engineer at Linear, takes the case for designers who can articulate why a choice works one step further. Once you can explain it, you can hand the rule to an agent.

An engineer has never been more leveraged than today thanks to a fleet of agents. But when it comes to more visual work, like animations, coding agents don’t quite know what great feels like.

My way of getting there is to create a skill file for each aspect of the interface. If you know what great feels like, describe the rules, then give them to your agents so they can follow them.

Kowalski shows two animations side by side, one scaling from scale(0) and one from scale(0.95), and walks the reader from “this feels right” to a real-world reason why:

With enough experience, you can not only tell what feels better, but also why. By then you’ve not only built your taste, but also the ability to articulate it.

The correct animation below feels right, because it animates from a higher initial scale value. It makes the movement feel more gentle, natural, and elegant.

scale(0) on the left feels wrong because it looks like the element comes out of nowhere. A higher initial value resembles the real world more. Just like a balloon, even when deflated it has a visible shape, it never disappears completely.
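Kowalski’s rule is small enough to encode directly. Here’s a minimal sketch of his comparison using the standard Web Animations API; the element hook, duration, and easing are my assumptions, not his skill file:

```ts
// Two versions of the same enter animation. scale(0.95) settles into place;
// scale(0) materializes out of nowhere. Duration and easing are illustrative.
function animateEnter(el: HTMLElement, fromScale: number): Animation {
  return el.animate(
    [
      { transform: `scale(${fromScale})`, opacity: 0 },
      { transform: "scale(1)", opacity: 1 },
    ],
    { duration: 200, easing: "ease-out", fill: "forwards" },
  );
}

const dialog = document.querySelector<HTMLElement>(".dialog")!;
animateEnter(dialog, 0);    // wrong: the element comes from nothing
animateEnter(dialog, 0.95); // right: the balloon already had a shape
```

A skill file is essentially that second call written down as a rule the agent must follow.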

This is what Ian Guisard at Uber does as a design systems lead: encoding expertise, writing agent skills, defining validation rules, deciding what “correct” means. Nick Babich’s piece on agentic product design covers what makes an agent an agent; Kowalski’s piece shows what an agent actually runs on.

That’s the why. There’s no magic involved. Almost every “taste” decision has a logical reason if you look close enough. This applies to any other discipline really.

Of course the more creative part of the job is still up to you, but the more you can package into a skill, the more leverage you can get out of your agents.

Bold text reading "Agents with Taste" on a white background.

Agents with Taste

How to transfer taste into an AI.

emilkowal.ski

Andy Matuschak describes two accidental tyrannies that have shaped software for forty years: the application model that traps software in one-size-fits-all packages, and programming as a specialization that crowds out non-programmers from inventing interfaces. He thinks coding agents could break both, and he’s already seeing it happen with the designers he works with:

I’ve been seeing it. I spent 2025 collaborating with two talented designers. Their story with coding agents this past year has been truly wild. I think the impact on my collaborators has been much greater than the impact on me, despite the fact that I’m now building at perhaps ten times the speed.

Unlike me, these two started their careers in design and spent their formative years in the arts culture. They can program a bit, but the process was really slow and difficult enough to pose a significant barrier. At the start of 2025, coding models could implement small one-off design ideas—but their outputs would just fall apart after a couple of iterations. By the end of the year, my collaborators were routinely prototyping novel interface ideas and sustaining that iteration across weeks.

“The impact on my collaborators has been much greater than the impact on me.” Matuschak is moving ten times faster, and he still thinks his designers are the ones whose careers just turned over. That observation is rare from the person on the receiving end of the bigger gain in raw output.

Matuschak’s diagnosis of why the old arrangement was such a trap for designers:

Non-programming designers are trying to invent something in an interactive medium without being able to make something meaningfully interactive. So much of invention is about intimacy with the materials, tight feedback, sensitive observation, and authentic use. So it’s a catch-22: to enter into proper dialogue with their medium, a non-programmer needs to get help from a programmer. That generally requires the idea to be at least somewhat legible and compelling. But if they’re doing something truly novel, they often can’t make it legible and compelling without being in that close dialogue with their medium.

The old design-engineering separation trapped designers in a less obvious way. They often couldn’t even tell whether their ideas were brilliant, because they couldn’t get their hands on the material to find out. You can’t iterate on a feeling. You have to push something around until it pushes back. For most of my career, designers did that pushing in flat mockups and click-through prototypes, working through dynamic behavior they had never actually felt. Of course the technical ideas fell short. The designers themselves hadn’t felt the thing yet either.

That’s the asymmetry coding agents collapse. The loop between “I have an inkling” and “I am tinkering with a working version of the inkling” has finally closed for non-developers. They still can’t and mostly shouldn’t ship production code, but they don’t need to. The prototype is enough to do the design work. Once the gatekeeping melts, the next question is institutional: where does the next generation of interface inventors come from? Matuschak’s answer:

So, what now? We’ve spent decades building HCI programs that mostly look like computer science departments with design electives. But if we’re moving toward a world where invention is bottlenecked more on imagination than on technical expertise, we may have that backwards. We may need programs that look a little more like art school with technical electives—learning to develop ideas from intuition before being able to express them precisely, to discover by playing with the material.

Title slide and content page from Andy Matuschak's MIT HCI Seminar talk "Apps and programming: two accidental tyrannies" dated 2026-03-03, showing a table of contents and lecture notes.

Apps and programming: two accidental tyrannies

On coding agents, malleable software, and the future of interface invention

andymatuschak.org

Every major AI lab spent 2024 bolting GUI surfaces onto chat: Canvas, Artifacts, Projects, Computer Use, Deep Research. That’s seven retrofits across three AI firms in twelve months. Adi Leviim, writing for UX Collective, reads that wave as the industry conceding in public what designers have been saying since Amelia Wattenberger’s 2023 essay on why chatbots aren’t the future of interfaces. His setup for why the default took hold:

Open any AI product launched in the last three years. Ignore the model, the logo, the branding. You will find the same interface: a text input at the bottom of the screen, a send button, and a scrollback of alternating messages. This is not a random convergence. It is the interface that fell out of what large language models could do on day one: pattern-match on text. In 2022 we had a new capability and no time to design around it, so we shipped what was fastest to build and called it conversational AI. Three years later, the fastest thing to build has become the thing everyone builds. That is how defaults calcify.

The warning came first; the retrofit wave followed. Leviim counts the retrofits as evidence the rectangle was always going to need help:

Calling this progress is charitable. It is the industry discovering, retrofit by retrofit, that a text box alone cannot hold a meaningful creative surface. You cannot edit a thousand-line document by asking the bot to re-output it with “line 312 changed to X”. You cannot iterate on a design by describing it. You cannot plan a research project without seeing the plan. The moment the task has a structured output, the chat box becomes the wrong place to work, and the vendors put a canvas, a side panel, an editor, a workspace, or a planner next to it.

“Retrofit by retrofit” is the phrase that carries his argument. Each retrofit is a clickable, scrollable, draggable pattern the chat box had removed. The AI labs are rebuilding what 2015-era UI already had.

Leviim continues, separating intent from chat:

Expressing intent does not require prose. A date picker expresses temporal intent more precisely than any sentence. A pair of sliders expresses a tradeoff more legibly than a paragraph. A file upload expresses “work on this thing” without ambiguity. Every one of these is intent-based. None of them is chat. The chat box is one possible implementation of the paradigm, and by all accessible evidence it is a low-resolution one.

Jakob Nielsen’s 2023 essay, “AI: First New UI Paradigm in 60 Years,” treated chat as the way to express intent. Leviim agrees intent-based interaction is the shift. He argues chat is the wrong way to express it. Date pickers, sliders, file uploads are all intent surfaces, and none of them is chat. Which is where the design work goes next:

the good AI UX work of the next three years will be distributed across a thousand of those scoped surfaces rather than concentrated in one generalized text field.

That’s the brief for anyone designing AI products.
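To make the contrast concrete, here’s a sketch of what one of those scoped surfaces might hand the model. The field names are hypothetical; the point is that each control captures one dimension of intent precisely, where a prompt would capture all of them fuzzily:

```ts
// A scoped intent surface as data. The date picker, slider, and file upload
// each express intent without prose; only the topic stays free text.
interface ResearchIntent {
  topic: string;                           // text, where text is the right tool
  dateRange: { from: string; to: string }; // date picker: temporal intent
  depthVsSpeed: number;                    // slider, 0..1: a tradeoff, not a paragraph
  sourceIds: string[];                     // file upload: "work on this thing"
}

// The model can still receive text, but it is assembled from unambiguous
// values the user never had to articulate as prose.
function toPrompt(i: ResearchIntent): string {
  return [
    `Research "${i.topic}" between ${i.dateRange.from} and ${i.dateRange.to}.`,
    i.depthVsSpeed > 0.5 ? "Prioritize depth over speed." : "Prioritize speed over depth.",
    `Use the ${i.sourceIds.length} attached source(s).`,
  ].join(" ");
}
```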

Side-by-side comparison of a Structured UI with a dropdown, date picker, checkboxes, and range slider versus a minimal AI Chat Interface with a text input and Send button.

The chat box isn’t a UI paradigm. It’s what shipped.

Before LLMs we had direct manipulation, structured forms, and progressive disclosure. Then we collapsed all of it into a text box.

uxdesign.cc

Tara Tan surveyed more than a dozen AI design tools for The Review. Her field audit sits alongside the design-process compression argument:

In working with these tools, one insight emerged for me: the tools that understand your design system produce better output than the ones that don’t. […] The competitive moat in this market is not generative quality, which is commoditizing fast. The moat is the design system graph: the tokens, components, spacing scales, typography rules, and conventions that make your product look like your product and not a generic template. Whoever makes that system machine-readable for agents will win the enterprise.

That’s the operational reason my proposal for an agent design team hinges on a rock-solid design system. What distinguishes output across the tools Tan surveyed is whether the generator respects your existing design system or treats every request as a fresh mood board.
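What “machine-readable” means in practice can be as plain as tokens expressed as data, plus rules an agent’s output can be checked against. A minimal sketch, with invented names and values:

```ts
// Design tokens as queryable data. An agent that respects the system picks
// from these; a validation rule makes deviations checkable, not just visible.
const tokens = {
  color: { primary: "#5B5FF7", surface: "#FFFFFF", danger: "#D92D20" },
  spacing: [0, 4, 8, 12, 16, 24, 32] as const, // the only legal gaps, in px
  radius: { sm: 4, md: 8, lg: 16 },
} as const;

function isOnSpacingScale(px: number): boolean {
  return (tokens.spacing as readonly number[]).includes(px);
}

console.log(isOnSpacingScale(16)); // true:  on the scale
console.log(isOnSpacingScale(13)); // false: a fresh mood board
```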

Tan’s other finding is the role-shift:

The same shift is happening in design. At Uber, Ian Guisard didn’t stop being a design systems lead when uSpec automated his spec-writing. His job shifted from producing documentation to encoding expertise, writing agent skills, defining validation rules, deciding what “correct” means for each component across seven platforms. The human became the system designer, not the system operator. […] The canary is singing. And the song is about the work shifting from execution to judgment, from operating the system to designing the system itself.

Same title, different job. Ian Guisard’s taste still matters; it lives in the skills and validation rules now, not the deliverables. That’s “follow the skill, not the role” made concrete. Guisard used to write specs; now he writes the rules the system follows to validate them.

The infrastructure is catching up to the process. Tan’s implicit prescription is straightforward: make the design system machine-readable, win the enterprise. Some of that tooling is already out in the open. Southleft’s Figma Console MCP (which Uber’s uSpec is built on) lets agents operate on tokens and components without a custom platform.

But tooling alone isn’t enough. Most of us aren’t Uber. The path for teams without a dedicated design systems lead still needs someone to do the work Guisard did: encoding the expertise and defining what “correct” looks like across platforms. That’s where the next round of tooling needs to land.

"The Design Agent Landscape" diagram categorizing AI design tools into three groups: Agent-first canvas (Pencil, Paper, OpenPencil), Design system-first (Figma MCP, Console MCP, Google Stitch), and Code-native (Subframe, MagicPath, Tempo, Polymet, Magic Patterns, Lovable, Bolt, v0, Replit).

The Design-Build Loop

Design is where AI product workflows meet their hardest test: an audience that will always, primarily, be human. A look at the tools, teams, and infrastructure emerging around AI design agents.

thereview.strangevc.com

I used Claude to author a process document for my team. After a lot of back and forth, it produced a thorough 4,000-word doc. And then I spent the next 30 minutes reading it, line by line, making sure every recommendation matched my intention.

The AI produced the document in minutes. I evaluated it at human reading and review speed.

Jakob Nielsen has a name for this bottleneck: evaluability. He argues it should replace execution efficiency as the central UX metric:

In command-based UIs, the user’s primary cognitive load was executing the task step-by-step. In intent-based systems, execution is cheap, but evaluation becomes the bottleneck. The usability metric shifts to how rapidly and accurately a user can verify that the AI’s output matches their actual goal. Interfaces must be optimized for “evaluability,” allowing users to judge quality and appropriateness without painstakingly combing through every detail of the result.

“Without painstakingly combing through every detail” is exactly what I was doing with my 4,000-word document. We don’t have the interfaces for this yet. We’re still reading AI output the way we’d read something a colleague wrote, except a colleague wouldn’t hand me 4,000 words and say “check this.” (Unless, of course, they wrote it with AI. Then they would.)

In agentic engineering, you hear the same thing: code review is the bottleneck.

Nielsen again:

Our designs must not act as cognitive wheelchairs that replace human agency; they must act as cognitive exoskeletons that support and enhance human flourishing, even as traditional work vanishes. Good AI UX will teach just enough, reveal plan structures, and leave a comprehensible trail of action so users can maintain digital judgment.

Most AI interfaces are optimized for generation speed. The harder problem is on the other end: helping humans evaluate what got generated. Until we solve that, productivity gains from AI come with an evaluation tax paid at human speed.

A Viking leader pointing forward from the bow of a dragon ship on stormy seas, crew behind him, with text reading "Intent by Discovery."

Intent by Discovery: Designing the AI User Experience

AI is not just a better chat box. It changes the user’s role from operator to supervisor, which forces UX to move from command-based interaction toward intent-based delegation, new usability metrics, orchestration layers, calibrated friction, and ultimately exploration-based interaction to clarify the user’s needs.

jakobnielsenphd.substack.com

Gui Seiz designs at Figma. His team uses Claude Code to bridge design and code. And he still reaches for the canvas when precision matters.

Seiz, speaking on Claire Vo’s How I AI podcast:

I don’t think we’re there yet in general with these code tools in terms of the precision editing that you want to do. […] I think still the gold standard for me is just being able to drag stuff around. And you can do a lot with a click that would take you a hundred words to write and to really precisely nail. No one wants to prompt for the exact hex code or the shade of yellow and that kind of stuff. That’s just easier to just quickly do and directly manipulate.

Seiz isn’t anti-AI. His team pulls production code into Figma via MCP, edits it visually, and pushes it back to the codebase. He’s bullish on what that does to the old workflow:

It’s definitely changed our workflows in a way that it’s really blown up what a workflow even is. Before, for the majority of our careers, we’ve had a very linear, agreed-upon workflow where you increase fidelity as you go on. Because it’s really expensive to work in code, and it’s really cheap just to trade ideas and sketch them out. But AI basically collapsed that, and it’s just as cheap to riff in code as it is to riff in design.

The cost of exploration collapsed. The need for direct manipulation didn’t. Both can be true.

How Figma engineers sync designs with Claude Code and Codex

Most teams are still passing static design files back and forth, and most Figma files are already out of date by the time they reach engineering. Gui Seiz (designer) and Alex Kern (engineer) from Figma walk through the exact workflow their team uses to bridge that gap with AI, live onscreen. They…

youtube.com

I’ve argued that design tools should be canvas-first, not chatbox-first. Jeff, writing in Abduzeedo, makes the case for the opposite:

Designers have always borrowed from developers. Version control, component systems, token-based design — these ideas crossed the aisle from engineering and reshaped how visual work gets done. Vibe designing follows the same logic. Instead of opening Figma and reaching for a drag-and-drop panel, designers drop into the terminal. They prompt an AI model directly from the CLI, pipe the output into a file, and iterate without ever touching a mouse.

He isn’t theorizing. He published this article using browser automation and AI, with minimal manual clicking.

I don’t think the answer is CLI or canvas. It’s both. Designers are visual thinkers—that’s the cognitive foundation of the discipline, not a limitation to engineer away. Going fully terminal assumes we can be retrained to work without seeing what we’re making, or that the profession will attract people with entirely different skills.

What does look right is the plumbing underneath. Jeff on Paper.design’s MCP integration:

Its canvas is built natively on web standards — HTML and CSS — which means AI agents working through Paper’s MCP server can read and write design files directly. Tools like get_screenshot, get_jsx, write_html, and update_styles give Claude Code or Cursor direct read-write access to the design canvas.

HyperCard figured this out in 1987: direct manipulation on top of a scripting layer. The tools are finally catching up, with AI as the scripting engine.
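The four tool names come from the article; the call shapes below are my assumptions, written against a generic MCP-style session. The point is the loop: read the canvas as data, change it, and look at the result:

```ts
// Hypothetical agent pass over a Paper.design-style canvas. Only the tool
// names are from the article; arguments and node ids are invented.
interface McpSession {
  callTool(name: string, args: Record<string, unknown>): Promise<unknown>;
}

async function tightenHeading(session: McpSession): Promise<void> {
  // Read: pull the node as code the agent can reason about.
  const jsx = await session.callTool("get_jsx", { node: "hero-heading" });
  console.log("current markup:", jsx);

  // Write: apply a scoped style change instead of regenerating everything.
  await session.callTool("update_styles", {
    node: "hero-heading",
    styles: { letterSpacing: "-0.01em", marginBottom: "24px" },
  });

  // Verify: a screenshot closes the loop visually, for agent and human alike.
  await session.callTool("get_screenshot", { node: "hero-heading" });
}
```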

VS Code editor with a browser preview showing the "Abduzeedo Editor" app, displaying a portrait photo with a VHS glitch shader effect applied.

Vibe Designing with Bash Access

Vibe designing is the design equivalent of vibe coding — where bash scripts, AI tools, and CLI commands are finally replacing traditional GUI-only tools.

abduzeedo.com

Proprioception is the body’s sense of where its parts are in space. Marcin Wichary borrows the term for software that knows where its hardware lives: where the buttons are, where the ports are, where the camera is. His proposed design principle:

The rule here would be, perhaps, a version of “show, don’t tell.” We could call it “point to, don’t describe.” (Describing what to do means cognitive effort to read the words and understand them. An arrow pointing to something should be easier to process.)

Wichary walks through a series of examples, mostly from Apple: the Apple Pay animation that points at the side button, the iPad camera prompt that points to the physical lens, Dynamic Island camouflaging missing pixels as a functional UI element. The one that caught my eye is the device Simulator matching the physical dimensions of your actual phone on-screen and staying accurate even when you change the display density. Reminds me of one of the earliest selling points of the Mac’s 72dpi—it matches the real world: 72 points to an inch.

The MacBook Neo is where Wichary applies the principle and finds Apple falling short. The new model has two USB-C ports with different speeds, and macOS notifies you with text:

I think this is nice! But it’s also just words. It feels a bit cheap. macOS knows exactly where the ports are, and could have thrown a little warning in the lower left corner of the screen, complete with an onscreen animation of swapping the plug to the other port – similar to what “double clicking to pay” does, so you wouldn’t have to look to the side to locate the socket first.

Close-up of a MacBook Touch Bar displaying "Unlock with Touch ID →" above the minus, plus, equals, and delete keys.

Software proprioception

A blog about software craft and quality

unsung.aresluna.org

Buzz Usborne on what happens when AI takes on more responsibility in a product:

AI doesn’t simply make products smarter — it redistributes thinking and decision-making between humans and machines. When AI absorbs cognition, it also inherits responsibility. And when it inherits responsibility, the cost of its mistakes rises.

Usborne frames this through three forces that determine whether AI features survive or fail: trust, value perception, and cognitive effort. They amplify each other. Low trust increases perceived effort. High effort reduces perceived value. Low value further undermines trust.

His answer is to earn autonomy through interaction, not demand trust upfront:

Trust does not always need to precede adoption, it can emerge through usage. Salesforce’s findings show that “Human validation of outputs is the biggest driver in trusting the outcome, over consistently accurate outputs.” In other words, users trust systems they can interrogate, shape, and verify. And instead of designing AI products that are perfect, we can earn trust by designing experiences that are controllable.

Controllable over perfect.

Circular diagram with purple arrows showing a cycle: trust leads to value perception, which leads to effort/cognitive load, which feeds back to trust.

Designing AI Experiences People Actually Use

AI doesn’t just add intelligence — it redistributes it. Here’s how that shift can make or break a product.

buzzusborne.com

Most product teams adding AI start by building a new surface for it. A custom panel. A chat sidebar. A dedicated AI workspace. Alexandra Vasquez, writing for Bootcamp, describes her team making exactly that mistake:

We built a custom AI panel with its own navigation, input styles, and button treatments. It looked “futuristic” in the prototype. In user testing, people kept asking where things were and how to get back to their actual work. We had created a separate product inside our product.

The fix was simple: they deleted the panel and put agent actions in the same menus, modals, and toolbars people already used. Slack does this with its /command structure. Notion uses the same slash menu for manual and AI actions. The pattern is existing UI that happens to be smarter.
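The pattern is simple to sketch: agent actions register into the same command surface as manual ones, so the slash menu stays the single entry point. Names here are illustrative, not any particular product’s API:

```ts
// One registry, one menu. The AI action is just another command with the
// same shape and the same affordances as the manual one.
type Command = {
  id: string;
  label: string;
  run: (ctx: { selection: string }) => void | Promise<void>;
};

const registry: Command[] = [];
const register = (cmd: Command) => registry.push(cmd);

register({
  id: "text.bold",
  label: "Bold",
  run: ({ selection }) => console.log(`**${selection}**`),
});

register({
  id: "ai.summarize",
  label: "Summarize with AI",
  run: async ({ selection }) => console.log(await callModel(`Summarize: ${selection}`)),
});

// Stand-in for a real model call.
async function callModel(prompt: string): Promise<string> {
  return `[summary of ${prompt.length} chars]`;
}
```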

Vasquez argues most “AI failures” are actually system failures that agents expose at scale:

Designing for agents means treating information architecture and workflows as foundational. Before building an agent, audit your system’s foundations: Are labels consistent? Do hierarchies make sense? Can a new team member navigate workflows without constant help? If humans struggle, agents will fail faster and at scale. Fix the system first.

She’s right. And there’s a more radical version of this: agents don’t need human UI at all. As long as the APIs are available, an agent can complete tasks without ever touching a button or reading a screen. The interface is for the human, not the machine.

But that’s exactly the problem. If the agent bypasses the interface, the human’s ability to express intent and verify output becomes the whole game. Intent has to be crystal clear. Feedback has to be immediate and legible. And there’s a huge amount of trust to earn before anyone is comfortable letting an agent operate in the background on their behalf. Vasquez lands here too:

The AI model is the last thing we discuss, not the first. These are product decisions, and designers have outsized influence here.

The model is the least interesting part. The interesting part is designing the trust.

Humorous UI dialog titled "Applying AI changes" with three checked items—"Making water wet," "Raising dog cuteness," and "Burning fire hotter"—and a progress bar showing "Processing..."

Agentic UX: 7 principles for designing systems with agents

Agents don’t need their own screen, they need better systems to operate in

medium.com

Three people at three different companies, same conclusion. Former Apple designer Jason Yuan calls intelligence “the new materiality” in the previously linked Fast Company piece. Brian Lovin says Notion’s design team can’t design AI products in Figma because the material doesn’t live there. Jenny Blackburn, Google’s VP of UX for Gemini, puts it most directly.

Eli Woolery and Aarron Walter, writing for Design Better, synthesized interviews they’ve done with Google design leaders across YouTube, Search, and Gemini. Blackburn’s framing:

The model is the material that we are designing with, and the more you understand the material, the more you can innovate with it.

You can only direct as well as you understand. But this material behaves unlike anything designers have worked with before. Blackburn on the risk of over-constraining it:

One of the challenges is that these models are so capable. In many ways, they’re actually more capable than you even expect as a designer, and so the risk is that you actually add too much UI that limits the value that the model can provide that would come if you just facilitated a direct conversation between the user and the model.

The Gemini team’s response is smart. When users wrote too-short prompts for custom Gems, they didn’t add a tutorial. They added a “magic wand” that expands the prompt but doesn’t submit it. The user reviews, edits, learns. Teaching without lecturing.

Every previous design material—pixels, paper, aluminum—is deterministic. You shape it, it stays shaped. AI models are probabilistic. Same prompt, different results. Understanding this material isn’t like understanding clay. It’s like understanding weather.

The piece also covers YouTube’s disciplined “bundles” strategy and Search’s AI reimagining. Worth the full read.

Illustrated map of scattered islands in a blue ocean, each hosting different ecosystems and creatures including dinosaurs, large mammals, birds, and desert cacti.

The Roundup (in depth): Google’s 3 design strategies shaping their most popular products

We go deep into YouTube, Gemini, and Search design strategy

designbetterpodcast.com

I believe in the shokunin mentality. Obsessive iteration, pursuing mastery across decades. But the designers building at the frontier right now are telling a different story.

Mark Wilson, writing for Fast Company, visited Cursor, Anthropic, OpenAI, and Krea in San Francisco. Former Apple designer Jason Yuan, now building his own AI startup:

“You can’t do the old school Apple thing of like, create lickable craft and interface. You can’t because, by the time you’ve done the best interface for ChatGPT 3, you’re on GPT 6.”

That stings a little. The Apple tradition assumes the target holds still long enough to polish. When the platform shifts every few months, polish is a liability.

Anthropic’s head of design Joel Lewenstein is making the same bet:

“Things are moving so fast that we just have to experiment faster. Convergence is hard. Because you have to figure out what’s shared. You have to build that shared path. You have all of the fringe things that people loved on these other systems. And there’s too much changing too quickly.”

Anthropic built Cowork in five or ten days (depending on who you ask). Ship first, converge later.

What’s telling is who’s embracing this. Yuan and Abs Chowdhury—both former Apple designers, trained in the tradition of lickable craft—have each gone all-in on vibecoding at their startups. Chowdhury transferred rough designs from Photoshop(!) straight into AI code tools. Yuan built his first product mostly alongside AI:

“There’s a new reason to raise lots of money, which is compute. If you have lots of conviction, and you know exactly what you want, like, why would you hire another 20 other people right now to tell you what you’re doing? It’s a coordination cost.”

Yuan calls this the best time to be an “auteur.” The designer who doesn’t wait for engineering to realize the vision, who directs AI the way a film director directs a crew. It’s the orchestrator gap playing out in real time.

I’m not ready to abandon the shokunin mentality. But I’m starting to think the object of obsession needs to shift, from polishing pixels to refining judgment. The craft isn’t in the surface anymore. It’s in knowing what to build.

Wilson’s full piece covers a dozen people across the industry and is worth reading end to end.

Abstract illustration of a chat bubble filled with layered geometric shapes and AI sparkle icons in yellow, blue, and red on a dark background.

‘We just have to experiment faster’: AI’s changed design forever. Now what?

Designers are now coders—or better be. Your interface is a moat—or irrelevant. Inside the dizzying chaos of how AI is upending the design profession, starring its high priests at Anthropic, OpenAI, Cursor, Krea, and more.

fastcompany.com

AI tools made designers faster. The question nobody’s answering is whether their organizations can keep up.

Cameron Worboys, head of product design at Cash App, talking to Michael Riddering on Dive Club:

I think the biggest blockers across all of the tech industry in the next 2 years will not be the speed of building. It’s going to be the operational side and being able to move something from like we have built this thing. How does it move through the operational cogs of product development in order to like get it live to customers? So my view is like how do we set ourselves up for the new world? You have to make sure that your organization is capable at running at the same speed as the AI tools. And these AI tools move fucking fast.

The bottleneck migrated. Building isn’t the constraint anymore. Getting what you’ve built through approvals, reviews, compliance, and deployment is. Cash App’s response has been radical: they’ve flattened to three management layers (they call it “core plus three”), deleted design crits, and are pushing every designer to ship production code.

Worboys on what quality actually looks like at this speed:

The quality piece, there’s a misconception that it comes from a designer sitting in some cave for 3 months and pontificating about the future of software. It literally doesn’t. It comes from reps and the speed which you can be wrong and the speed that you can go again and experiment and experiment and experiment. And I think that’s what we’ve seen change, is the amount designers can produce has exponentially increased and the amount of like bureaucracy and layers you need to run an organization has changed a lot as well.

Quality through iteration, not pontification. That’s always been true, but when each iteration takes minutes instead of days, the gap between teams that ship and teams that sit in review becomes enormous.

Worboys on where this leads:

I believe one of the primary ways which you will create lock-in in the new world is creating apps that feel completely one of one. […] When you think about the future of software development and where it’s going with generative UI, there is nothing in the future that’s going to prevent us from creating these completely one of one experiences. So that’s what is top of mind for me at the moment. And I do think we will get there relatively quickly, that every Cash App does feel unique and completely designed around the person. And then from a business perspective, it creates this deeper, harder to quantify emotional connection with a product that is the same as like your wardrobe. Clothes are by and large like an expression of personal identity.

This is the most concrete product bet I’ve seen on generative UI. Not widgets inside a chat window. Entire apps shaped around the individual. I still think core app chrome should stay stable. But Worboys is betting that consumer fintech is where that line starts to blur.

Cameron Worboys - Inside an AI-native design org

Today’s episode with Cameron Worboys (https://x.com/camworboys) (Head of Product Design at Cash App) is an inside look at how an AI-native design org operates and the ways designers can thrive in this new world.

youtube.com

I’ve been playing around with Pencil along with Paper, both newer agentic design tools. The multi-agent demo is genuinely impressive—six AI agents designing an app simultaneously, each with its own cursor, name, and chat on the canvas.

Tom Krcha, Pencil’s CEO, speaking on Peter Yang’s channel, on the format bet at the center of the product:

It’s generating basically a descriptor for the design. And then what you can do, you can essentially ask it what kind of code you want to convert it into. Because we wanted to make sure that it’s sort of platform agnostic. […] So we have this platform agnostic file format. We call it .pen. It’s essentially just JSON-based format. We wanted to really build this format to be agentic from the ground up.

Krcha frames it as “agentic PDF.” I could get behind platform agnosticism as a philosophy, but I need more convincing. The .pen format is still a translation layer between the design and the code. That means migration from Figma, especially for teams with established design systems. And I’m skeptical that a button in Pencil’s built-in design system will correctly map to the right reusable code component when the agent translates .pen to production code. I need to test it out more for myself.

We have enterprises using that for this specific purpose, to convert their design systems into pen format and make sure that it lives in the Git. This is the source of truth for everybody now.

“Source of truth” is doing heavy lifting in that sentence. For teams with mature design systems, the source of truth is the code component, not a JSON representation of it.
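The interview doesn’t show the schema, so the shape below is purely a guess at what a JSON-based, platform-agnostic descriptor could look like, typed as a TypeScript constant. The bindings field is where my skepticism lives; that mapping is the hard part:

```ts
// Purely illustrative guess at a .pen-style node; the real format may differ.
const buttonNode = {
  type: "component",
  name: "Button/Primary",
  props: { label: "Continue", size: "md" },
  layout: { padding: [12, 20], radius: 8 },
  // The translation layer: for this to be a "source of truth," each node has
  // to resolve to the *right* reusable code component, not a lookalike the
  // agent generates fresh. That resolution is the part I want to test.
  bindings: {
    react: "@acme/design-system#Button",
    swiftui: "AcmeButton(style: .primary)",
  },
} as const;
```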

This is a pretty impressive demo nonetheless, and it’s a moment of delight to give agents a name and a “face” if you will. Krcha:

Those cursors, it seems like a small touch, but it’s the first time I have seen AI humanized. It feels like there’s someone there. It’s crazy, it’s just a cursor.

I Watched 6 AI Agents Design an App Together And It Blew My Mind | Tom Krcha

Tom is the CEO of Pencil, one of the coolest AI design tools that I’ve ever tried. Watching 6 AI agents design a beautiful app in real-time will genuinely blow your mind. Tom showed me how it all works under the hood (a simple JSON file?!) and how you can use Pencil to design right where you code…

youtube.com

Designers aren’t leaving Figma. They’re outgrowing what Figma was built to do.

Punit Chawla, writing for Bootcamp:

Designers are slowly shifting to a building first mindset. Which means that a good chunk of UI designers are moving quickly to AI coding platforms to bring their ideas to life. The “Vibe Coding” trend wasn’t just another tech bubble, but a wake up call for designers to create life like prototypes and MVPs from day zero. In fact, PMs and designers at Meta have publicly stated how they are showing working products instead of UI prototypes.

The shift is real, but “leaving” is the wrong word. Designers aren’t abandoning Figma. They’re adding tools that do things Figma was never designed to do. Figma’s role is narrowing from everything-tool to exploration-and-iteration tool. That’s not the same as dying.

Chawla’s strongest point is structural:

Some companies are built different with a completely separate infrastructure. For Figma to change their infrastructure from the bottom-up will be very difficult. Let’s not forget they are a publicly traded company. Risking major changes can mean risking billions in stakeholder investments. Companies like Cursor on the other hand are built to be building first/coding first products, hence a major advantage.

This is right. Figma’s architecture was purpose-built for collaborative vector editing, not code generation. Bolting on AI code output is a fundamentally different engineering problem. When Figma Make launched, I scored it at 58 out of 100, and it’s getting better, but it’s competing against tools that were born for this.

Where I’d push back is on the builder framing. Designers aren’t becoming coders. They’re becoming directors. A designer who orchestrates AI agents against a design system solves the handoff problem more fundamentally than one who vibe-codes an MVP. One eliminates the bottleneck. The other just moves which side of it you’re standing on.

Chawla hedges his own headline:

Don’t get me wrong, Figma is still the best tool for a majority of creatives and has a strong hold on our day-to-day workflow. Making any strong predictions at this point will be very ill-informed and it’s best to avoid making any conclusions as of now.

Fair enough. But the question worth tracking is whether Figma can expand fast enough to remain relevant as the deliverable shifts from mockups to working software.

Figma app icon being dropped into a recycling bin by a cursor, illustrating uninstalling or abandoning Figma.

Why Are Designers Leaving Figma? The Great Transition.

The Creative Industry Is Changing Rapidly & So Is Figma’s Future

medium.com

The transparency question in autonomous interfaces—what to surface, what to simplify, what to explain—needs a concrete framework. Daniel Ruston offers one.

Ruston names the next layer: the Orchestrated User Interface, where the user states intent and the system generates the right interface and executes across multiple agents. The label is less interesting than what it demands from designers:

We can no longer design rigid for “Happy Paths.” We must design for Probabilistic UX. The designer’s job is no longer drawing the buttons; the designer’s job is defining the thresholds for when the button “presses itself” or when the system needs user to clarify, correct or control.

Ruston makes this concrete with a confidence-threshold pattern:

Low Confidence (<60%): The system asks the user for clarification or provides a vague response requiring follow-up (“Which Jane do you want me to schedule with?”).

Medium Confidence (60–90%): The system makes a tentative suggestion (“Shall I draft a reply based on your last meeting?”).

High Confidence (>90%): The system acts and informs (“I’ve blocked this time on your calendar to prevent conflicts”).

That’s the design lever most AI products skip. They either act without explaining or ask permission for everything. The threshold gives designers something to actually spec: not “should the system do this?” but “how sure does it need to be before it does this without asking?”
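Ruston’s thresholds translate almost directly into something speccable. A sketch using his numbers; the response shapes are mine:

```ts
// Confidence decides posture: clarify, suggest, or act-and-inform.
type AgentMove =
  | { kind: "clarify"; question: string }
  | { kind: "suggest"; proposal: string }
  | { kind: "act"; notice: string };

function dispatch(confidence: number, intent: string): AgentMove {
  if (confidence < 0.6) {
    return { kind: "clarify", question: `Before I ${intent}: which one did you mean?` };
  }
  if (confidence <= 0.9) {
    return { kind: "suggest", proposal: `Shall I ${intent}?` };
  }
  return { kind: "act", notice: `Done: ${intent}. You can undo this.` };
}

console.log(dispatch(0.45, "schedule with Jane")); // asks which Jane
console.log(dispatch(0.8, "draft a reply"));       // tentative suggestion
console.log(dispatch(0.95, "block the time"));     // acts and informs
```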

Ruston borrows a metaphor from aviation to describe what this visibility should look like:

Analogue cockpits require pilots to look at individual gauges and mentally build a picture of the aircraft’s “system” state. The glass cockpit philosophy shifts the focus to a human-centered design that processes and integrates this data into an intuitive, graphical “picture” of flight.

Same problem, different domain. Most AI products today are analogue cockpits: individual agent outputs, raw status messages, no integrated picture. The confidence thresholds tell the system when to act. The glass cockpit tells the user what’s happening while it acts.

Colorful illustration of a laptop surrounded by keyboards, chat bubbles, sliders, graphs and emoji, connected by flowing ribbons.

The rise of the Orchestrated User Interface (OUI)

Designing for intent in a brave new world.

uxdesign.cc

The pitch for generative UI is simple: stop making users navigate menus and let them say what they want. Every AI product demo shows the same thing: type a prompt, get a result, skip the 47-click workflow. It looks like progress.

Jakob Nielsen names what gets lost in the trade:

However, eliminating the Navigation Tax imposes a new Articulation Tax. In a menu-driven GUI, features are visible and therefore discoverable; a user can find a tool they didn’t know existed simply by browsing. In an intent-based AI interface, the user can only access what they can clearly describe.

“Articulation Tax” is the right frame. Menus are clunky, but they show you what’s possible. A blank prompt field assumes you already know what to ask for. That’s fine for power users. It’s a problem for everyone else. Nielsen:

The shift from WIMP to World Models represents a transition from Deterministic to Probabilistic interaction. In a WIMP interface, clicking an icon is deterministic: it produces the exact same result 100% of the time. In a generative world model, the system is probabilistic: the same prompt may yield different results on different attempts.

Deterministic to probabilistic is a trust problem. Users learned to trust GUIs because the same action always produced the same result. That contract is gone. Users will adjust eventually, but most aren’t there yet.

Comic-style History of the GUI showing Xerox Alto, Macintosh, windows/icons, mouse, touch phone, and holographic globe.

History of the Graphical User Interface: The Rise (and Fall?) of WIMP Design

Summary: The GUI’s success wasn’t about any single invention, but a synergy of 4 elements: Window, Icon, Menu, and Pointer, through a 60-year history of usability improvements.

jakobnielsenphd.substack.com

The design industry spent a decade burying skeuomorphism. Flat won. And now that AI can generate any flat interface in seconds, physicality is interesting again.

Daniel Rodrigues and Lucas Fischer, writing for Every, describe designing the iOS app for Monologue, a smart dictation tool. Rodrigues studied Braun radios and Teenage Engineering synthesizers, and at one point found himself crouched beside his apartment light switch watching how the shadow moved. His defense of skeuomorphism:

Skeuomorphism has been accused of being overdone, and fairly so, but I think of it as borrowing the credibility that physical things naturally have, like weight, shadow, and texture. Something similar to the way a real button communicates—without explicit explanation—that it can be pressed.

This isn’t a texture pack in Photoshop. Rodrigues studied how light behaves on a physical button and rebuilt that behavior in SwiftUI. The texture is functional, not decorative: it tells you the thing is pressable. Rodrigues and Fischer:

Not every AI product needs skeuomorphic buttons and custom sound effects, but the bar for what “functional” means is shifting. AI is making it faster and cheaper to build “functional” products, so the ones that endure are those where someone thought about what it feels like to use them. For us, that meant studying physical objects, exploring 20 wrong directions to find one right one, and hiring a musician to build sounds we could have pulled from a stock library.

Black glossy light switch plate with a teal rocker labeled "M" on a textured teal wall, flanked by ornate black-and-white classical engravings.

How to Design Software With Weight

A look at the design principles that guided our smart dictation app from desktop to iPhone

every.to

“People are change averse,” Duolingo’s CEO Luis von Ahn said when users revolted against the app’s 2022 redesign. He refused to offer a revert option. The backlash was just resistance to change, and users would get over it, he argued.

Dora Czerna, writing for UX Collective, makes the case that von Ahn got it wrong. Users weren’t afraid of change. They’d lost something:

That old interface isn’t just a collection of buttons and menus–it’s ours. We’ve invested time learning it, built workflows around it, developed preferences and shortcuts. The new design might be objectively superior in controlled testing, but it requires us to surrender something we’ve claimed as our own.

That’s the endowment effect applied to software. The hours you spent learning an interface have real value, and a redesign zeroes them out. Calling that “change aversion” dismisses the investment.

Czerna points to Sonos as the worst-case scenario—users who’d spent thousands on home audio systems suddenly couldn’t adjust the volume after an app update. But even smaller changes trigger the same psychology. Google changed its crop tool from square corners to rounded ones and got enough backlash to reverse it.

Czerna on what happens when you tell users the new version tested better:

Telling users “we tested this, and it’s better” when they’re actively experiencing it as worse creates a disconnect. Acknowledging that change is difficult, explaining what you’re trying to achieve, and being responsive to legitimate concerns about lost functionality builds more goodwill than insisting everything is fine when it clearly isn’t.

What’s less common is teams treating the transition itself as a design problem worth solving. And of course it is.

Vintage Mac displays "OLD INTERFACE - OUTDATED" beside a tablet with a colorful "NEW UPDATE!" dialog; support tickets and charts on the desk.

Why your brain rebels against redesigns — even good ones

The redesign tested well. Users hate it anyway. Welcome to the paradox that costs companies millions and leaves everyone baffled.

uxdesign.cc

Most people know what a molly guard is, even if they don’t know the name—it’s the plastic cover over an important button that forces you to be deliberate before you press it. Marcin Wichary flips the concept:

it’s also worth thinking of reverse molly guards: buttons that will press themselves if you don’t do anything after a while.

Think OS update dialogs that restart your machine after a countdown, or mobile setup screens that auto-advance. Wichary on why these matter:

There is no worse feeling than waking up, walking up to the machine that was supposed to work through the night, and seeing it did absolutely nothing, stupidly waiting for hours for a response to a question that didn’t even matter.

This is the kind of observation you only make after years of staring at buttons, as Wichary has.
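The mechanism is a confirm prompt that answers itself with a safe default after a visible countdown. A minimal sketch; the 30-second default and callback shape are my assumptions:

```ts
// Reverse molly guard: if nobody answers in time, take the safe default so an
// unattended machine never stalls overnight on a question that didn't matter.
function askWithDeadline(
  question: string,
  safeDefault: boolean,
  onAnswer: (answer: boolean, timedOut: boolean) => void,
  seconds = 30,
): (userAnswer: boolean) => void {
  console.log(`${question} (auto-answering ${safeDefault} in ${seconds}s)`);
  const timer = setTimeout(() => onAnswer(safeDefault, true), seconds * 1000);
  // Returned handler: any explicit click cancels the countdown.
  return (userAnswer) => {
    clearTimeout(timer);
    onAnswer(userAnswer, false);
  };
}

const answer = askWithDeadline("Restart to finish updates?", true, (a, auto) =>
  console.log(auto ? `timed out, proceeding with ${a}` : `user chose ${a}`),
);
// answer(false); // a human at the keyboard can still say no
```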

Close-up of a red rectangular guard inside a dark metal casing; caption below reads "Molly guard in reverse" and "Unsung."

Molly guard in reverse

A blog about software craft and quality

unsung.aresluna.org

Person wearing glasses typing at a computer keyboard, surrounded by flowing code and a halftone glitch effect

ASCII Me

Over the past couple months, I’ve noticed a wave of ASCII-related projects show up on my feeds. WTH is ASCII? It’s the basic set of letters, numbers, and symbols that old-school computers agreed to use for text.

ASCII (American Standard Code for Information Interchange) has 128 characters:

  • 95 printable characters: digits 0–9, uppercase A–Z, lowercase a–z, space, and common punctuation and symbols (enumerated in the sketch after this list).
  • 33 control characters: non-printing codes like NUL, LF (line feed), CR (carriage return), and DEL used historically for devices like teletypes and printers.
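The printable range is contiguous, which makes it a two-liner to enumerate (a quick TypeScript sketch):

```ts
// Codes 32 (space) through 126 (~) are the 95 printable characters;
// 0–31 plus 127 (DEL) are the 33 control characters.
const printable = Array.from({ length: 95 }, (_, i) => String.fromCharCode(32 + i));
console.log(printable.join("")); // space, punctuation, digits, A–Z, a–z, symbols
console.log(printable.length);   // 95
```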

Early internet users who remember plain text-only email and Usenet newsgroups would have encountered ASCII art like these:

 /\_/\
( o.o )
 > ^ <

It’s a cat. Artist unknown.

   __/\\\\\\\\\\\\\____/\\\\\\\\\\\\\_______/\\\\\\\\\\\___
    _\/\\\/////////\\\_\/\\\/////////\\\___/\\\/////////\\\_
     _\/\\\_______\/\\\_\/\\\_______\/\\\__\//\\\______\///__
      _\/\\\\\\\\\\\\\\__\/\\\\\\\\\\\\\\____\////\\\_________
       _\/\\\/////////\\\_\/\\\/////////\\\______\////\\\______
        _\/\\\_______\/\\\_\/\\\_______\/\\\_________\////\\\___
         _\/\\\_______\/\\\_\/\\\_______\/\\\__/\\\______\//\\\__
          _\/\\\\\\\\\\\\\/__\/\\\\\\\\\\\\\/__\///\\\\\\\\\\\/___
           _\/////////////____\/////////////______\///////////_____

Dimensional lettering.

Anyway, you’ve seen it before and get the gist. My guess is that with Claude Code’s halo effect, the terminal is making a comeback and generating interest in this long-lost art form again. And it’s text-based, which is now fuel for AI.

In my previous post about Google Reader, I wrote about Chris Wetherell’s original vision—a polymorphic information tool, not a feed reader. But even Google Reader ended up as a three-pane inbox. That layout didn’t originate with Reader, though. It’s older than that.

Terry Godier traces that layout to a single decision. In 2002, Brent Simmons released NetNewsWire, the first RSS reader that looked like an email client. Godier asked him why, and Simmons’ answer was pragmatic:

“I was actually thinking about Usenet, not email, but whatever. The question I asked myself then was how would I design a Usenet app for (then-new) Mac OS X in the year 2002?”

“The answer was pretty clear to me: instead of multiple windows, a single window with a sidebar, list of posts, and detail view.”

A reasonable choice in 2002. But then Godier shares Simmons reflecting on why everyone kept copying him twenty-two years later:

“But every new RSS reader ought to consider not being yet another three-paned-aggregator. There are surely millions of users who might prefer a river of news or other paradigms.”

“Why not have some fun and do something new, or at least different?”

The person who designed the original paradigm was asking, twenty-two years later, why everyone was still copying him.

Godier’s argument is that when Simmons borrowed the inbox layout, he inadvertently imported the inbox’s psychology. Unread counts. Bold text for new items. A backlog that accumulates. The visual language of social debt, applied to content nobody sent you:

When you dress a new thing in old clothes, people don’t just learn the shape. They inherit the feelings, the assumptions, the emotional weight. You can’t borrow the layout of an inbox without also borrowing some of its psychology.

He calls this “phantom obligation”—the guilt you feel for something no one asked you to do. And I’ll admit, I feel it. I open Inoreader every morning and when that number isn’t zero, some part of my brain registers it as a task. It shouldn’t. Nobody is waiting. But the interface says otherwise.

Godier’s best line is the one that frames the whole piece:

We’ve been laundering obligation. Each interface inherits legitimacy from the last, but the social contract underneath gets hollowed out.

The red dot on a game has the same visual weight as a text from your kid. We kept the weight and dropped the reason.

PHANTOM OBLIGATION — noun: The guilt you feel for something no one asked you to do.

Phantom Obligation

Why RSS readers look like email clients, and what that’s doing to us.

terrygodier.com

Many designers I’ve worked with want to get to screens as fast as possible. Open Figma, start laying things out, figure out the structure as they go. It works often enough that nobody questions it. But Daniel Rosenberg makes a case for why it shouldn’t be the default.

Rosenberg, writing for the Interaction Design Foundation, argues that the conceptual model—the objects users manipulate, the actions they perform, and the attributes they change—should be designed before anyone touches a screen:

Even before you sketch your first screen it is beneficial to develop a designer’s conceptual model and use it as the baseline for guiding all future interaction design decisions.

Rosenberg maps this to natural language. Objects are nouns. Actions are verbs. Attributes are adjectives. The way these elements relate to each other is the grammar of your interface. Get the grammar wrong and no amount of visual polish will save you.

His example is painfully simple. A tax e-sign system asked him to “ENTER a PIN” when he’d never used the system before. There was no PIN to enter. The action should have been “CREATE.” One wrong verb and a UX expert with 40 years of experience couldn’t complete the task. His accountant confirmed that dozens of clients had called thinking the system was broken.

Rosenberg on why this cascades:

A suboptimal decision on any lower layer will cascade through all the layers above. This is why designing the conceptual model grammar with the lowest cognitive complexity at the very start… is so powerful.

This is the part I want my team to internalize. When you jump straight to screens, you’re making grammar decisions implicitly—choosing verbs for buttons, deciding which objects to surface, grouping attributes in panels. You’re doing conceptual modeling whether you know it or not. The question is whether you’re doing it deliberately.
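One way to make that deliberate is to write the grammar down before the screens, even as throwaway types. A sketch of Rosenberg’s e-sign example; the names are mine:

```ts
// Objects are nouns, actions are verbs, attributes are adjectives.
// The tax e-sign failure was a one-word grammar error: the verb on screen
// ("ENTER") didn't match the state of the object (no PIN exists yet).
interface Pin {
  value: string | null; // attribute: null until the user has created one
}

function createPin(pin: Pin, value: string): void {
  pin.value = value; // the verb for a first-time user
}

function enterPin(pin: Pin, value: string): boolean {
  if (pin.value === null) {
    throw new Error("No PIN exists yet; the interface should say CREATE");
  }
  return pin.value === value; // the verb for a returning user
}
```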

Article title "The MAGIC of Semantic Interaction Design" with small "Article" label and Interaction Design Foundation logo at bottom left.

The MAGIC of Semantic Interaction Design

Blame the user: me, a UX expert with more than 40 years of experience, who has designed more than 100 successful commercial products and evaluated the inadequate designs of nearly 1,000 more.

interaction-design.org