
Design orgs and publications have been issuing AI bans, calling them principled responses to job displacement, training data theft, and the degradation of craft. The impulse is understandable: AI doesn’t just replace tools; it challenges what made you worth hiring, and the prospect of losing what you’ve built is felt more sharply than any potential gain. Christopher Butler thinks those lines are drawn in the wrong place:

By drawing hard lines against entire categories of tools, we’re mistaking the means for the problem itself, and in doing so, we’re limiting our ability to shape how these technologies integrate into creative work.

Butler doesn’t dismiss the concerns driving those bans: training data problems, corporate consolidation, job displacement. He thinks they’re legitimate and urgent. His objection is to making the tool the target rather than the behavior. Drawing the line at AI, he argues, repeats the mistake designers made at the letterpress and again at paste-up. The technology changed. The question—about authorship, judgment, and what craft actually requires—stayed the same.

Butler’s conclusion:

A designer who uses AI to plagiarize another artist’s style with a simple prompt is engaged in something fundamentally different from one who trains a tool to extend their own creative capacity. A writer who publishes purely generated text as their own work is making a different choice than one who uses AI as a thinking partner and editor while maintaining authorship over their ideas and voice. These distinctions matter more than blanket prohibitions.

Discernment in practice means asking: Am I using this tool to extend my own capabilities or to replicate someone else’s work? Am I shaping the output or simply accepting what’s generated? Does this use serve my creative vision or just expedite a result? These aren’t always easy questions, but they’re the right ones.

Butler himself is the illustration. He spent months training Claude on a 10,000-word skill file—the accumulated context of his subject matter and his voice—building a sounding board and editor that already knows where he's coming from. He still writes without it. He says some of his best writing has come from working with it. The output may be indistinguishable to most readers. The difference, he says, is real to him.

The choice isn’t between purity and complicity, between craft and automation. It’s between engagement and abdication—between shaping how these tools develop and how they’re used, or ceding that ground entirely to those with the least interest in protecting what we value about creative work.

Four-panel collage featuring a close-up microchip, a red diagonal line on blue background, an open human hand in black and white, and grid paper partially lit by light.

Red-lining AI - Christopher Butler

Why blanket AI bans mistake the tool for the problem, and how thoughtful integration of automation, ethics, and creative work offers a better path forward.

chrbutler.com

Ant Murphy opens with an eyebrow-raising McKinsey number:

McKinsey reports that 88% of organisations say they “use AI” but only about 1% have mature AI deployments delivering real value.

Murphy’s explanation for the gap is familiar: the diffusion of innovation, Geoffrey Moore’s chasm between early adopters and the majority, now applied to AI. What’s less common in the AI discourse is a behavioral explanation for why the adoption keeps stalling. Murphy:

AI is personal. It’s not another tool, to some it’s viewed as a replacement. “AI attacks our identity in a way that most software doesn’t” — Vikram Sreekanti

That resistance shows up in the record: a friend’s “I didn’t sign up for this”. Claire Vo described designers as the most resistant to change in the EPD triad, vocal AI opponents with little appetite for campaigning for resources. None of it is irrational. Daniel Kahneman and Amos Tversky found that humans weigh losses about twice as heavily as equivalent gains. Years of accumulated craft become our identity. AI doesn’t ask you to learn new tools; it asks you to renegotiate what made you worth hiring in the first place. The reskilling conversation treats that as a capability problem. Identity problems don’t resolve themselves through training on new tools.

Murphy on what that requires:

Surviving a paradigm shift like this is less about what your product does […] Instead it’s about you adapting to the change.

The 88% are held back by what AI is asking them to let go of. Murphy’s argument is that organizations clearing the chasm are doing the internal work first—on process, on how teams function—before it shows up in the product.

There’s an old relationship adage that you can’t be a good partner to someone until you’ve worked out your own stuff first. I think Murphy’s argument is the organizational equivalent.

Diagram labeled "The AI Bubble" with a red arrow pointing to a tiny red dot inside a large circle labeled "Everyone Else," illustrating how small the AI bubble is relative to the general population.

The AI Chasm — Ant Murphy

I challenge the hype around AI and share a more grounded perspective on how adoption actually works. Drawing on real data and firsthand experience, I break down why most companies are still early in the AI journey—and what product leaders should focus on instead.

antmurphy.me

I’ve been pro-prototype: PMs replacing PRDs, designers prototyping interactions in code. Pavel Samsonov, writing at Product Picnic, aims at exactly that position. He opens by borrowing a distinction from Andy Polaine:

Demos and prototypes sit on a continuum, but I consider demos something to help you show a concept to other people in a form that looks and feels like the real thing. Prototypes are things you create to test something you don’t know until you build and test it.

Correct distinction. A demo succeeds on stakeholder approval; a prototype succeeds on learning. Both artifacts can be interactive and polished. What separates them is what counts as success. Samsonov on what happens when teams conflate them:

The only thing these demos are helping you test is whether your stakeholder likes what they see (the first loop) and as soon as they say “yes,” it becomes good enough to ship. Whether that second loop (releases go out, measurements come in) ever gets tracked or not is not something I’d be willing to put money on. Because once the demo is productionized, it goes from the realm of delivery velocity (which gets you shoutouts and promotions) into the realm of maintenance (which tends to be ignored even as it eats up more than half of the team’s bandwidth).

AI makes it easier to produce both, and Samsonov’s read on what happens when teams use the speedup wrong:

Shoving out more prototypes is not a heuristic for success; it is a heuristic for failure because it shows that you don’t know what you are trying to learn.

Agreed. Samsonov goes further:

This is exactly why AI-generated prototypes are not working, and have not helped anyone do anything ever. Some have accused me of going too far with this assertion, but I stand by it, because it is rooted in the very nature of what a prototype is (and is not), and what makes it successful (or does not).

Here’s where I differ. Brian Lovin’s Notion prototype playground exists because static mocks enforce golden-path thinking. The playground surfaces the messy middle of AI chat: follow-ups and latency changes no one mocks up. Édouard Wautier’s Dust team prototypes state changes and motion Figma can’t show. Figma PMs ran five user interviews in two days off an AI-built prototype, which is a textbook closed second loop. All three count as prototype work.

Samsonov’s diagnosis is right. His absolute stance is, well, too absolute. AI-generated prototypes haven’t helped anyone only if you assume they’re all demos, which is exactly what the distinction he just drew tells us not to assume.

Product Picnic 64 title card over a vintage black-and-white photo of three people eating and drinking outdoors on rocky terrain.

Designers will never have influence without understanding how organizations learn

We confuse prototypes with demos, and validation with confirmation bias. As a result, we cannot lead — instead, we are led.

productpicnic.beehiiv.com

Later in this issue, I cover a post by Adi Leviim making the case against chat as the default AI interface; he reads the 2024 wave of GUI retrofits the AI labs shipped—Canvas, Artifacts, Projects, Computer Use, Deep Research—as the industry admitting a text box alone wasn't enough. Matt Webb, writing on Interconnected, wants every service to ship a CLI instead. Both arguments are about text. They look like they contradict. They don't. Webb's case for going headless:

It’s pretty clear that apps and services are all going to have to go headless: that is, they will have to provide access and tools for personal AI agents without any of the visual UI that us humans use today. […] Why? Because using personal AIs is a better experience for users than using services directly (honestly); and headless services are quicker and more dependable for the personal AIs than having them click round a GUI with a bot-controlled mouse.

Webb’s CLI sits on the agent-to-service layer. Leviim’s retrofits sit on the human-to-agent layer. The text on one side is a protocol for machines. The text on the other is a user writing out intent in sentences. Both are text, but the role is different. Webb makes the split explicit when he turns to what it means for design:

So from a usability perspective I see front-end as somewhat sacrificial. AI agents will drive straight through it; users will encounter it only once or twice; it will be customised or personalised; all that work on optimising user journeys doesn’t matter any more. But from a vibe perspective, services are not fungible. […] Understanding that a service is for you is 50% an unconscious process - we call it brand - and I look forward to front-end design for apps and services optimising for brand rather than ease of use.

Interesting, right? Webb believes the need for human-facing UI, and therefore user journeys, will diminish. He's designing for an agent-first world.

Webb goes on:

If I were a bank, I would be releasing a hardened CLI tool like yesterday. There is so much to figure out: […] How does adjacency work? My bank gives me a current account in exchange for putting a “hey, get a loan!” button on the app home screen. How do you make offers to an agent?

The agent becomes the surface designers have to figure out.
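What might a "hardened CLI" for agents even look like? Here's a minimal illustrative sketch—every command name and field is invented, not anything Webb or any bank has shipped—of the core idea: a service publishes a machine-readable manifest of its capabilities, and an agent's calls are validated against it instead of a bot clicking through a GUI.

```python
import json

# Hypothetical capability manifest a headless service might expose to
# personal AI agents. Command names and parameters are invented for
# illustration only.
MANIFEST = {
    "balance": {"params": {}, "description": "Current account balance"},
    "transfer": {
        "params": {"to": "str", "amount_cents": "int"},
        "description": "Move money to another account",
    },
}

def dispatch(command: str, args: dict) -> dict:
    """Validate an agent's call against the manifest before executing."""
    spec = MANIFEST.get(command)
    if spec is None:
        return {"error": f"unknown command: {command}"}
    missing = [p for p in spec["params"] if p not in args]
    if missing:
        return {"error": f"missing params: {missing}"}
    # A real service would execute here; this sketch just echoes the
    # validated call back as structured output an agent can parse.
    return {"ok": True, "command": command, "args": args}

print(json.dumps(dispatch("transfer", {"to": "acct-42", "amount_cents": 500})))
```

The point of the sketch is the shape, not the commands: structured input, structured output, explicit errors. That's the "quicker and more dependable" surface Webb is arguing for, and the adjacency question—where the "hey, get a loan!" offer goes—remains genuinely unsolved in it.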

Abstract illustration of tangled white curved lines forming loose oval shapes against a soft green background with muted circular shadows.

Headless everything for personal AI

It’s pretty clear that apps and services are all going to have to go *headless:* that is, they will have to provide access and tools for personal AI agents without any of the visual UI that us humans use today.

interconnected.org

Every major AI lab spent 2024 bolting GUI surfaces onto chat: Canvas, Artifacts, Projects, Computer Use, Deep Research. That’s seven retrofits across three AI firms in twelve months. Adi Leviim, writing for UX Collective, reads that wave as the industry conceding in public what designers have been saying since Amelia Wattenberger’s 2023 essay on why chatbots aren’t the future of interfaces. His setup for why the default took hold:

Open any AI product launched in the last three years. Ignore the model, the logo, the branding. You will find the same interface: a text input at the bottom of the screen, a send button, and a scrollback of alternating messages. This is not a random convergence. It is the interface that fell out of what large language models could do on day one: pattern-match on text. In 2022 we had a new capability and no time to design around it, so we shipped what was fastest to build and called it conversational AI. Three years later, the fastest thing to build has become the thing everyone builds. That is how defaults calcify.

The lag between Wattenberger's essay and the retrofit wave was barely a year. Leviim counts the retrofits as evidence the rectangle was always going to need help:

Calling this progress is charitable. It is the industry discovering, retrofit by retrofit, that a text box alone cannot hold a meaningful creative surface. You cannot edit a thousand-line document by asking the bot to re-output it with “line 312 changed to X”. You cannot iterate on a design by describing it. You cannot plan a research project without seeing the plan. The moment the task has a structured output, the chat box becomes the wrong place to work, and the vendors put a canvas, a side panel, an editor, a workspace, or a planner next to it.

“Retrofit by retrofit” is the phrase that carries his argument. Each retrofit is a clickable, scrollable, draggable pattern the chat box had removed. The AI labs are rebuilding what 2015-era UI already had.

Leviim continues, separating intent from chat:

Expressing intent does not require prose. A date picker expresses temporal intent more precisely than any sentence. A pair of sliders expresses a tradeoff more legibly than a paragraph. A file upload expresses “work on this thing” without ambiguity. Every one of these is intent-based. None of them is chat. The chat box is one possible implementation of the paradigm, and by all accessible evidence it is a low-resolution one.

Jakob Nielsen’s 2023 essay, “AI: First New UI Paradigm in 60 Years,” treated chat as the way to express intent. Leviim agrees intent-based interaction is the shift. He argues chat is the wrong way to express it. Date pickers, sliders, file uploads are all intent surfaces, and none of them is chat. Which is where the design work goes next:

the good AI UX work of the next three years will be distributed across a thousand of those scoped surfaces rather than concentrated in one generalized text field.

That’s the brief for anyone designing AI products.
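Leviim's date-picker point can be made concrete in a few lines. This is an illustrative sketch (the class and function names are mine, not his): a structured widget encodes temporal intent as unambiguous values that map straight onto a query, where a sentence would first need a model to guess what it means.

```python
from dataclasses import dataclass
from datetime import date

# A date-range picker as a structured intent surface. The values are
# unambiguous by construction; invalid intent fails at the widget, not
# somewhere downstream after an LLM's best guess.
@dataclass(frozen=True)
class DateRangeIntent:
    start: date
    end: date

    def __post_init__(self):
        if self.end < self.start:
            raise ValueError("end must not precede start")

def to_query(intent: DateRangeIntent) -> dict:
    """Structured intent maps directly onto a query; no parsing required."""
    return {"from": intent.start.isoformat(), "to": intent.end.isoformat()}

print(to_query(DateRangeIntent(date(2025, 3, 1), date(2025, 3, 7))))
# Contrast: "show me stuff from early March" requires a model to decide
# which dates "early March" means before any query can run at all.
```

That's the whole argument in miniature: both the widget and the sentence are intent-based, but only one of them is low-resolution.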

Side-by-side comparison of a Structured UI with a dropdown, date picker, checkboxes, and range slider versus a minimal AI Chat Interface with a text input and Send button.

The chat box isn’t a UI paradigm. It’s what shipped.

Before LLMs we had direct manipulation, structured forms, and progressive disclosure. Then we collapsed all of it into a text box.

uxdesign.cc

Showing stakeholders prototypes is often a high-wire act. Back in the old days, that's why we showed wireframes before high-fidelity comps (mockups). But now, with tools like Lovable or even Claude Design, where the prototype demos really well, it's easy to mistake it for a shippable product. The stakeholder in the room could easily say "ship it."

That used to be where the Figma-to-code handoff became visible. Now it’s invisible. Greg Kozakiewicz, writing on LinkedIn, wants designers to see it again. He updates an old construction-industry line for the AI era:

We used to confuse the drawing with the building. Now we confuse the prototype with the product. A working prototype also accepts everything. It will let you register, log in, fill out a form, submit something. It all works. In the demo. On a good laptop. With a fast connection. With someone who knows what they’re doing and what the app is supposed to do.

The design-to-code gap didn’t vanish when AI made prototypes interactive. It went underground. Now it shows up as a stakeholder saying “looks great, let’s ship it” to something that couldn’t survive real data or production constraints. Kozakiewicz puts a number on it:

AI gets you to about 60%. A solid, reasonable, generic 60%. The layout makes sense. The flow is logical. The copy is clear enough. It looks like a product that works. And for a lot of people, especially people making decisions about budgets and timelines, 60% looks like 90%. Because the last time they saw a prototype, it was a static Figma file with “Lorem ipsum” everywhere.

A hand lifts a modular glass block from a detailed architectural scale model, revealing illuminated interior floors with tiny figurines inside.

Paper accepts everything. So does a prototype.

There’s an old saying in construction. Paper will accept everything. You can draw anything on paper. A swimming pool on the roof. A spiral staircase made of glass. A cantilever that defies physics. Paper doesn’t argue. Paper doesn’t say “this won’t hold.” Paper just sits there, looking beautiful, full of promise.

linkedin.com
Pointillist-style painting of a formally dressed figure in a black top hat holding a glowing green laptop, surrounded by a crowd of early 20th-century people.

A Sunday Afternoon with Claude Design

It’s really hard to get momentum on a side project when you have a full-time job with lots of travel, an active blog, and a newsletter. But I had to recapture that momentum because this side project is important. It’s for a preschool website for my cousin.

Walking into My Little Learning Tree is like stepping into pure warmth. Yes, yes, preschools are inherently fun environments, but the kids and the teachers there create a visceral energy that is simply special. I wanted to capture that specialness in a long-overdue website redesign project.

Looking at my in-progress design, something felt off. I had these long horizontal lines preceding the eyebrows—the small text above a heading that names the section—that didn’t feel right. First, they were straight. Second, the lines only occurred before the text, not also after. I clicked on the Comment button to enter Comment mode, then clicked on the eyebrow and prompted, “These lines aren’t playful enough. Let’s make them squiggles and have them before and after the eyebrow text.”

And then Claude Design did its thing.

“Taste is the scarce thing” has become shorthand for what designers still own in the AI era. I’ve written about it in the abstract more than once. Chris R Becker, writing for UX Collective, opens with an old Marshall McLuhan-era line—“we shape our tools and then our tools shape us”—and then shows how to keep doing the shaping.

Becker cites the Steve Jobs-attributed 10-80-10 rule:

Start away from any AI. Use the 10–80–10 rule. 10% away thinking, defining, establishing vision. 80% making use of AI to assist the vision. 10% away from AI critiquing, testing, and evaluating the solution.

The bookends are the work. Both 10% slots sit explicitly away from the model, which is another way of saying they’re the judgment layer. The first defines what good looks like before inviting AI in. The second evaluates what came out. AI collapses the cost of the 80%, which is the whole productivity story. But that collapse means the bookends are no longer preamble and postscript. They’re most of the job.

Becker gets at why the closing 10% matters:

The authority bestowed on institutions, educators, and SMEs (subject matter experts) is being absorbed by AI and spread thin like butter on toast. An AI appears to slather knowledge evenly, but the quality of the knowledge butter is deliberately made opaque.

AI output arrives looking uniformly authoritative, the same confident tone whether the underlying source is a peer-reviewed paper or a forum post from 2013. Provenance gets flattened. Without a prior standard to judge against, the designer reviewing output has nothing to push back on. That’s Becker’s larger point:

The irony, I suppose, is that Designers are, hopefully, trained not to be “yes men” but rather to ask hard questions, challenge the prevailing motivations of business over our users, and, most importantly, find the root cause of the problem, rather than just the surface reaction. AI, unfortunately, is not built to push back; it will not say… “I don’t know,” or “I think that is a bad idea,” or “what if you did this… instead,” or “I understand YOU (CEO) wants this feature, but the user research and ‘our users’ want something different.” AI is designed to serve, and in the hands of people in an organization who are looking for the least amount of pushback, it is a recipe for deep institutional implementation and, frankly, a lot of bad ideas, fast.

“A recipe for deep institutional implementation.” A sycophantic tool plus an organization that wants frictionless agreement equals speed in the wrong direction. The 10-80-10 rule is a personal discipline. What’s still unresolved is how teams build that discipline into the process before the wrong direction becomes the default.

Pen-and-ink illustration of a thoughtful man seated in a chair holding a hammer, with rows of large server racks filling a data center behind him.

We become what we behold

A discussion of AI + Design and our shifting roles.

uxdesign.cc

Here’s a quickie. Interaction developer David Aerne created a fun, Tempest-inspired Unicode character explorer called Charcuterie. Click a character to see visually similar ones. You can even draw a symbol in the box in the upper left corner. Super fun.

Charcuterie app interface showing a grid of Unicode glyphs in blue and white, with a selected Hangul character and descriptive sidebar text.

Charcuterie

A visual explorer for Unicode. Browse characters, discover related glyphs, and explore scripts, symbols, and shapes across the standard.

charcuterie.elastiq.ch

My current side project is a website for a preschool in San Francisco. I’m using AI to accelerate wherever it fits, but I’ve reserved the primary visual treatments to be made by hand. Partly because that’s the right call for a preschool brand. And partly because of a phrase Pablo Stanley coined for this: creativity osteoporosis.

I wrote about creativity osteoporosis a while back. The idea is that your creative skills get weaker when AI does all the reps, like bones thinning when they’re not stressed. You don’t notice it happening. Everything seems fine. Then one day you reach for a skill and it’s… not there like it used to be.

Stanley wrote this after a weekend of making pixel art by hand—a project called Pixabots, little 32x32 robot characters—as a deliberate detox. He describes what set off the detox:

The whole time I was drawing, there was this pull. Physical, almost. Like my body was telling me to open a tab and start prompting. Not because the work was bad. Not because I was stuck. Just because my brain has been trained, over the last two years, to route every creative problem through an LLM.

He still used AI for the parts that weren’t the art:

I used AI to build the Pixabots website. The stuff I’m not that good at… setting up Next.js, canvas rendering, exporting without antialiasing. And I tried to keep to myself the stuff that felt more “artistic” like the animation, the look and feel.

And then the operating principle:

The parts that feed my soul, I protected (even though everything in my body wanted to pull me away from them). The parts that would’ve killed the project with friction, I automated.

Maybe that’s the whole game now… knowing which parts to protect…

Knowing which parts to protect is becoming a judgment call I have to make on every project. The preschool site makes the decision easy: the visual language stays in my hands, AI handles the plumbing. The real work of this judgment is in the middle: projects where craft matters but throughput counts too, and every protect-or-automate call costs you something. If you don’t draw that line on purpose, it draws itself for you.

A grid of colorful pixel art robot and creature characters in various designs, colors, and accessories, displayed against a white background.

AI feels like a drug

I forced myself to make pixel art by hand. My brain had withdrawal symptoms.

pablostanley.substack.com

When generation gets cheap, craft becomes judgment. Raj Nandan Sharma, writing on his blog, puts it bluntly:

Before AI, mediocre work often reflected a lack of time, resources, or execution skill. Today mediocre work often means something else: the person stopped at the first acceptable draft. That is the economic shift AI introduces. It compresses the cost of first drafts, which means the value moves downstream… In other words, the scarce skill is not generation. It is refusal.

Refusal—knowing what to throw out and why—is what’s scarce in a world where anyone can generate ten competent drafts before lunch.

But Sharma doesn’t stop there. He warns that elevating taste alone can quietly corner humans into an end-of-pipeline selector role:

There is a strong version of the “taste matters” argument that quietly pushes humans into a narrow role. In that version, AI generates many outputs and the human stands at the end of the pipeline selecting the best one. That is a useful role, but it is also too small… The warning is not that taste has no value. It does. The warning is that taste without authorship, stake, or construction can become a narrow and eventually fragile role.

The warning Sharma adds is the part the “taste is the moat” conversation tends to skip. Refusal without authorship is still selector work, and selector work has a ceiling. The durable position pairs refined taste with authorship—owning what ships and the stake for getting it wrong.

Abstract swirling ink or fluid art in dark and pink tones with white text reading "Good Taste: The Only Real Moat Left."

Good Taste: The Only Real Moat Left

AI makes competent output cheap. That makes taste more valuable, but also more incomplete. The real edge comes from pairing judgment with context, stakes, and the willingness to build.

rajnandan.com

The designer’s role is widening at both ends of the product stack. Earlier, I linked to a post by Chad Johnson arguing designers gain influence by moving upstream: becoming orientation devices for the team, shaping the problem before it gets named. Daniel Mitev, writing for UX Collective, argues designers gain authorship by moving downstream, into the code:

The industry has been asking whether designers should code for over a decade. It was always the wrong question, or at least the wrong framing. It implied the barrier was technical: that designers lacked something fundamental, something that required years of study to acquire. Learn TypeScript. Understand the DOM. Earn your way across the divide. That wasn’t the barrier.

Mitev’s argument comes down to access. AI tooling compresses the translation layer and returns authorship to the designer:

What AI tooling gives back is authorship over the surface layer — the part users actually touch. A designer can now open the codebase, adjust how an element behaves, change how a transition feels, and verify the output against their own intent in real time. The easing curve gets set by the person who decided what it should feel like. The hover state gets defined by the person who thought through why it matters. That work no longer requires an interpreter.

He points at Alan’s “Everyone Can Build” initiative—283 pull requests shipped by non-engineers over two quarters, each merged after engineering review—as evidence it’s already happening.

Johnson and Mitev aren’t in conflict. They’re describing the same shift from opposite ends. The interpreters at the top of the product stack—PMs who owned problem framing and prioritization—are compressing. The interpreters at the bottom—frontend engineers translating intent into code—are compressing too. Both jobs return to the designer who understood the intent first.

The role widens. Some designers will gravitate to one end or the other. The designers who stretch the full range—orientation work and authorship—are working the widest version of the job.

A hand pressing an Enter key above a terminal showing a git commit command, with text reading "Designers finally have a say in the product they design."

Designers finally have a say in the product they design

AI didn’t teach designers to code. It gave them back the decisions that were always theirs.

uxdesign.cc

(Second link to Chad Johnson this week, but I just discovered his Substack, so ¯\_(ツ)_/¯.)

Chad Johnson, writing in his newsletter, argues that designer influence in product decisions comes from something other than craft output. He lays out the underlying dynamic:

Roadmaps are shaped less by who has the best ideas and more by who controls the framing of tradeoffs. Every roadmap decision is a bet: build this instead of that, now instead of later, for these users instead of those. Whoever makes the risk feel smaller tends to win.

So where does the designer fit? Johnson:

The most influential designers at startups do not position themselves as makers of screens. They act as orientation devices for the team. Orientation is the ability to help a group understand where they are, what matters, and what tradeoffs are real. It precedes prioritization, and it makes decision-making possible.

A designer whose output stops at screens is working on the wrong layer of the problem. Johnson lists the skills that back the orientation role:

Designers who shape direction invest in strategic framing, business literacy, and narrative construction. They learn to say no with evidence and to disagree without drama.

Johnson’s list is right as far as it goes. He understates one skill: legibility. A lot of design influence breaks down at translation. The thinking is strategic; the communication stays in design vocabulary. A sharp problem statement understandable only to other designers stays in the design review. Designers who change the conversation make their analysis readable in product and business terms without flattening it. That’s the same move Johnson gestures at when he describes “decision-ready artifacts” as “tools for comparison… designed to provoke judgment, not admiration.”

Johnson’s closer calls the future of design leadership “quieter, more rigorous, and deeply strategic.” That’s right. It’s also a role that depends on being read by the people making the call.

Large-scale flowchart on a white wall with quirky decision questions including "Have you ever missed an airplane flight?" and "Are you good with names?"

Why Most Designers Will Never Influence Product Roadmaps

A practical explanation of how roadmap decisions are really made, and how designers can gain influence

chadsnewsletter.substack.com

Two podcast conversations with frontier-lab design leaders on what designing at an AI lab looks like day-to-day. I previously linked to Lenny Rachitsky’s interview with Jenny Wen, head of design for Claude, where she described a redistribution of designer hours: less mocking, more pairing with engineers, a sliver of direct implementation. The activities themselves still look like design.

Ian Silber, head of product design at OpenAI, on Michael Riddering’s Dive Club, describes work that doesn’t fit the same list:

Designers working on this are hopefully spending a lot less time in Figma or whatever tool you use to draw pixels, and more time really thinking about how you interact with this thing, and the fact that the model really is the core product.

Silber’s concrete example is onboarding. Instead of building a first-run tutorial, his team shapes what the model already knows about the person:

We have this super intelligent model that could probably do a much better job trying to understand what this person’s goals are […] We’re really stripping back a lot of what you might traditionally do and trying to say, “Well, actually […] let’s think about like how we should give this context to the model that this person is brand new and they might need some handholding.”

The traditional response adds UI around the problem. Silber’s team takes it out and gives the model enough context to meet the user where they are.

That kind of work needs its own scaffolding, and OpenAI is building it:

We have a whole system called the Dynamic User Interface Library, which allows us to design things that the model can then interpret.

Primitives the model composes at runtime, shaped by system prompts and context rather than drawn flow by flow. Wen is describing a redistribution of designer hours inside activities that still look recognizable. Silber is describing activities that don’t quite have names yet. And yes, that is still design.
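To make the idea concrete, here is a minimal sketch of what "primitives the model composes at runtime" could look like: a registry of allowed components plus a validator for whatever layout the model emits. Every name here is invented for illustration; none of it comes from OpenAI's actual Dynamic User Interface Library.

```python
# Hypothetical sketch of runtime-composable UI primitives. The primitive
# names, props, and layout shape are assumptions, not OpenAI's real schema.

ALLOWED_PRIMITIVES = {
    "card":   {"title", "body"},
    "button": {"label", "action"},
    "list":   {"items"},
}

def validate_layout(layout: list[dict]) -> list[str]:
    """Return a list of problems; an empty list means the layout is renderable."""
    problems = []
    for i, node in enumerate(layout):
        kind = node.get("type")
        if kind not in ALLOWED_PRIMITIVES:
            problems.append(f"node {i}: unknown primitive {kind!r}")
            continue
        missing = ALLOWED_PRIMITIVES[kind] - node.keys()
        if missing:
            problems.append(f"node {i}: missing props {sorted(missing)}")
    return problems

# A model given "this person is brand new" context might emit:
layout = [
    {"type": "card", "title": "Welcome", "body": "Looks like you're new here."},
    {"type": "button", "label": "Take the tour", "action": "start_tour"},
]
assert validate_layout(layout) == []
```

The design work in this world is deciding what goes in the registry and what the validator refuses, not drawing each onboarding flow by hand.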

Ian Silber - What it’s like designing at OpenAI

If you’re like me you gotta be curious... what’s it like designing at OpenAI?

youtube.com

The gap between an AI-produced prototype and a shippable product has a shape. Most of us assume it’s the visual 20%: the polish that AI output never quite reaches. Chad Johnson’s case is that the 20% is the trivial part, and the real gap sits upstream of everything visible.

Chad Johnson, writing in his newsletter:

The deeper issue was that nobody had asked whether a prototype was even the right artifact to produce at that stage. The PM had made three assumptions about user intent that we hadn’t validated. They’d skipped past a critical question about whether this flow needed to exist at all, or whether the real problem was upstream in the information architecture. They’d built a beautiful answer to a question nobody had confirmed was worth asking. That’s the part that stuck with me. Not the visual gaps. The thinking gaps.

That lines up with what I’ve been calling C+ out of the box: artifacts that read well and seem credible until you apply critical thinking. Johnson gets specific about what’s actually missing, and none of it is visual: the assumption nobody validated, the upstream question nobody asked. The interface was fine. The thinking was absent from the (probably) AI-generated PRD.

Johnson again:

…design production got democratized, but design judgment didn’t. Anyone can make something now. Almost nobody new learned how to think well about what should be made, why, and for whom. And that gap, between what’s possible to produce and what’s actually been thought through, is now the entire playing field for our profession. Designers aren’t becoming obsolete. They’re becoming stewards.

Judgment still takes years to build, and no tool compresses that.

The last 20% is rarely the gap that matters. The first question—should we build this?—almost always is. Very few teams have the muscle to ask it.

Abstract digital art featuring curved, layered surfaces with fine parallel lines in warm orange, red, and deep blue gradients.

The Last 20% and Who’s Asking Why?

Everyone can build now. Almost nobody stops to ask if they should.

chadsnewsletter.substack.com

Tara Tan surveyed more than a dozen AI design tools for The Review. Her field audit sits alongside the design-process compression argument:

In working with these tools, one insight emerged for me: the tools that understand your design system produce better output than the ones that don’t. […] The competitive moat in this market is not generative quality, which is commoditizing fast. The moat is the design system graph: the tokens, components, spacing scales, typography rules, and conventions that make your product look like your product and not a generic template. Whoever makes that system machine-readable for agents will win the enterprise.

That’s the operational reason my proposal for an agent design team hinges on a rock-solid design system. What distinguishes output across the tools Tan surveyed is whether the generator respects your existing design system or treats every request as a fresh mood board.
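One hedged sketch of what "machine-readable" could mean in practice: a flat token map and a check that flags generated values that bypass it. The token names and values are invented for illustration and are not from any real design system or from the tools Tan surveyed.

```python
# Hypothetical design-system token set. Names and values are made up.
DESIGN_TOKENS = {
    "color.brand.primary": "#1A73E8",
    "color.text.default":  "#202124",
    "space.sm": "8px",
    "space.md": "16px",
    "font.body": "Inter 14/20",
}

def audit_generated_css(declarations: dict[str, str]) -> list[str]:
    """Flag raw values that bypass the token set: the 'fresh mood board' smell."""
    approved = set(DESIGN_TOKENS.values())
    return [
        f"{prop}: {value} is not a design-system token"
        for prop, value in declarations.items()
        if value not in approved
    ]

generated = {"color": "#1A73E8", "padding": "13px"}  # 13px is off-scale
print(audit_generated_css(generated))  # flags the padding only
```

A generator that consumes the token map produces output this audit passes; one that ignores it gets caught on every off-scale value.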

Tan’s other finding is the role-shift:

The same shift is happening in design. At Uber, Ian Guisard didn’t stop being a design systems lead when uSpec automated his spec-writing. His job shifted from producing documentation to encoding expertise, writing agent skills, defining validation rules, deciding what “correct” means for each component across seven platforms. The human became the system designer, not the system operator. […] The canary is singing. And the song is about the work shifting from execution to judgment, from operating the system to designing the system itself.

Same title, different job. Ian Guisard’s taste still matters; it lives in the skills and validation rules now, not the deliverables. That’s “follow the skill, not the role” made concrete. Guisard used to write specs; now he writes the rules the system follows to validate them.

The infrastructure is catching up to the process. Tan’s implicit prescription is straightforward: make the design system machine-readable, win the enterprise. Some of that tooling is already out in the open. Southleft’s Figma Console MCP (which Uber’s uSpec is built on) lets agents operate on tokens and components without a custom platform.

But tooling alone isn’t enough. Most of us aren’t Uber. The path for teams without a dedicated design systems lead still needs someone to do the work Guisard did: encoding the expertise and defining what “correct” looks like across platforms. That’s where the next round of tooling needs to land.

"The Design Agent Landscape" diagram categorizing AI design tools into three groups: Agent-first canvas (Pencil, Paper, OpenPencil), Design system-first (Figma MCP, Console MCP, Google Stitch), and Code-native (Subframe, MagicPath, Tempo, Polymet, Magic Patterns, Lovable, Bolt, v0, Replit).

The Design-Build Loop

Design is where AI product workflows meet their hardest test: an audience that will always, primarily, be human. A look at the tools, teams, and infrastructure emerging around AI design agents.

thereview.strangevc.com

A sleek high-speed bullet train with glowing headlights crossing a bridge through dense fog over a misty landscape.

Acceleration Is Not Automation

I’ve been wandering the wilderness trying to understand where the software design profession is going. Via this blog and my newsletter, I’ve been exploring the possibilities by reading, commenting, and writing. Many other designers are in the same boat; Erika Flowers’s Zero Vector design methodology is the most fully defined effort so far. Kudos to her for being one of the first—if not the first—to plant the flag.

Directionally, Flowers is right. But for me, working on a team building B2B software, it feels too simplistic: it ignores the realities of working with customers and with counterparts in product management and engineering. (Though that’s her whole point: one person does it all, no handoffs.)

The destination is within view. But it’s hazy and distant. The path to get there is unclear, like driving through soupy fog when your headlights reflecting off the mist are all you can see.

I’ve written that AI-era design work reduces to taste and judgment. Elizabeth Goodspeed’s case for designer-writers gets there from a different direction.

Elizabeth Goodspeed, writing for It’s Nice That:

You can get away with a lot in design: conceptual ideas are able to sit inside a visual piece of work without ever being fully spelled out. They’re gestured at rather than articulated. Writing forces you to figure out exactly what your idea is; if it isn’t working, you’ll know immediately. Where design is like a ballet – implicit ideas carried through form – then writing is closer to a theatre – your thinking has to be explicitly spoken.

Goodspeed’s point is that design lets you gesture at an idea without ever articulating it, and writing forces you to name it. A designer who can’t explain why a choice works has taste they can’t grow or pass on.

Goodspeed’s second point goes further:

Writing is to graphic design what clay is to pottery. It’s the material designers shape and massage into form. To work with text well, you have to really be able to read and understand what you’re setting – not just how it looks and basics like not hyphenating a word in a bad spot, but what it means on a deeper level. Just as reading makes you a better writer, writing makes you a better reader.

Product designers don’t usually think of themselves as writers. But user stories are writing, and articulating what a user should be able to do through an experience and why is essential.

Worth reading in full. She makes writing feel like a design discipline.

Bold black text reading "Placeholder Text" and "Elizabeth Goodspeed" on a pink background, flanked by columns of lorem ipsum-style body copy.

Elizabeth Goodspeed on why design writing needs designers writing

Without designers writing about their own work, design is easy to misunderstand. Writing helps designers work through what they think – and makes that thinking visible to others.

itsnicethat.com

Every few weeks, another essay or YouTube video announces that AI has killed craft. One of my favorite designers writing about design, Christopher Butler, goes the other way:

No knowledge I possess about design—the incorporeal understanding that makes what I create better than an off-the-shelf template or something done by someone without my experience—is made irrelevant by AI. Nor is it contradicted by my use of AI tools. Structure still communicates before content. Visual hierarchy still guides attention. Negative space still creates rhythm. These principles don’t vanish because I’m working through AI rather than directly manipulating pixels. The craft migrates to a different level of abstraction. But it remains craft.

Butler’s claim is that the principles don’t vanish; they operate at a higher altitude. The unfinished part is naming where that altitude actually is. For product designers, it’s concept and hierarchy: the decisions that require knowing the user and the stake someone is willing to carry. The generated layout and the choice of components are still outputs. What’s left of design is the judgment that picks between them.

Butler’s sharper line is the binary between consumption and practice:

Someone who generates an interface with AI and calls it done isn’t practicing craft. They’re consuming convenience. Someone who generates an interface, inspects it, questions what it’s actually communicating, refines the structure, generates again, compares variations, understands why one serves the user better than another—they’re practicing craft. They’re building knowledge through iteration. The tool doesn’t determine whether you’re working with craft. Your approach does.

That’s Jiro Ono’s shokunin applied to interfaces: craft as lifelong practice, not manual labor. A camera doesn’t take a picture; a photographer does. A model doesn’t make a design; the designer deciding between outputs does. That decision is the craft.

Butler’s argument reassures me. What worries me is how optional that decision is becoming. The output already looks finished. The designers who keep asking why one version serves the user better than another will still be designers in five years. The rest may still have jobs, as operators of a tool doing the work their taste used to do.

Close-up of a vibrant fingerprint with swirling ridge patterns in orange, red, blue, and yellow iridescent colors with glittery highlights.

Craft is Untouchable

I have a vested interest in the title of this piece being true. I’ve spent decades developing craft—not just making things, but understanding systems, seeing patterns, making judgments that can’t be reduced to prompts. If AI eliminates the need for that expertise, I’m in trouble.

chrbutler.com

Tommaso Nervegna writes about LinkedIn killing its Associate Product Manager program and replacing it with a new role called the “Full Stack Builder.” The structural bet is interesting, but the finding from their rollout is what matters:

The expectation was that AI would be a great equalizer: juniors would benefit most because AI would close their skill gaps, while seniors would resist the change. The reality was the opposite. Top performers adopted AI fastest and derived the most value from it. Why? Because they had the judgment and experience to know what to ask for, how to evaluate the output, and where to apply it for maximum leverage.

That tracks with everything I’ve predicted, experienced, and seen. The skill that makes AI useful is knowing what good looks like before and after the model generates something. That ability comes from reps.

Nervegna distills LinkedIn CPO Tomer Cohen’s thesis to five skills AI cannot automate:

The five skills that AI cannot automate, according to Cohen, are Vision, Empathy, Communication, Creativity, and Judgment. As he puts it: “I’m working hard to automate everything else.”

The operational version:

The critical insight: the builder orchestrates the agents. The agents execute. Judgment stays human. This is not about replacing people with AI. It’s about compressing the team needed to ship something meaningful from fifteen people to three - or even one.

I’ve been calling this the orchestrator gap: the distance between a designer who uses AI and one who directs it. LinkedIn just gave it a job title. I think we will see more companies go this way. Whether or not it’s a good idea remains to be seen.

A Renaissance-era man studies blueprint sketches on a glowing drafting table while a giant mechanical lobster draws on the plans with an ornate pen.

The Full Stack Builder: The End of the Design Process as We Know It

The double diamond is a liability. Engineers ship faster than designers can explore. The PM role is dissolving and the three profiles that will survive this era look nothing like who we’ve been hiring

nervegna.substack.com

Specialization is the whole game. Give an agent a specific role and clear constraints, and the quality of the output changes completely. I’ve been learning this firsthand with Claude Code skills.

Marie Claire Dean took that principle and scaled it into an open-source system called Designpowers. Her reasoning:

Most AI tools give you one assistant. You ask it something, it answers, and you figure out what to do next. That’s not how design teams work.

Design teams work because a strategist thinks differently from a visual designer, who thinks differently from a content writer, who thinks differently from someone doing accessibility review. The handoffs between those perspectives are where the work gets better. The friction is productive.

Her team of ten covers the full pipeline from discovery through shipping, with dedicated specialists for strategy, visual design, content, motion, accessibility, and critique. All sharing one design state document, with the human directing.
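The shared-state handoff idea can be sketched in a few lines: each role reads one design state document and appends its contribution in order. The roles, fields, and stub agents below are assumptions for illustration, not Designpowers' actual schema or code.

```python
# Hedged sketch of a multi-agent design pipeline over one shared state.
# Role names and the state shape are invented, not from Designpowers.
from dataclasses import dataclass, field

@dataclass
class DesignState:
    brief: str
    notes: dict[str, str] = field(default_factory=dict)

PIPELINE = ["strategist", "visual_designer", "content_writer", "a11y_reviewer"]

def run_pipeline(state: DesignState, agents: dict) -> DesignState:
    # Encoding the process forces you to say what each handoff passes along:
    # each role sees everything written before it and adds its own note.
    for role in PIPELINE:
        state.notes[role] = agents[role](state)
    return state

# Stub agents standing in for model calls:
agents = {
    "strategist":      lambda s: f"Audience and goal derived from: {s.brief}",
    "visual_designer": lambda s: "Layout proposal built on the strategist's framing",
    "content_writer":  lambda s: "Copy matching the strategy note",
    "a11y_reviewer":   lambda s: "Contrast and focus-order check of the proposal",
}

state = run_pipeline(DesignState(brief="Onboarding for a B2B dashboard"), agents)
assert list(state.notes) == PIPELINE  # handoffs happened in order
```

Writing even this toy version forces the questions Dean describes: what the content writer needs from the strategist, and where one role’s scope ends.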

On what she learned building it:

The act of encoding a design process forces you to decide what the handoffs actually are. When does strategy end and visual design begin? What does the content writer need from the strategist before they can start? What happens when the accessibility reviewer and the design critic disagree?

That’s the same clarity I’ve found writing Claude Code skills: what does this agent need to know, and where does its scope end?

On where the human stays essential:

The idea is simple: agents can verify that a design is correct, aligned to the brief, accessible, consistent. They can’t tell you whether it’s beautiful. That’s your job.

The full system is on GitHub.

3D illustration of abstract biological structures resembling a protein or molecule, with colorful folded shapes, helices, and spheres floating against a dark blue background.

I Built a Design Team Out of AI Agents

...and they’re free!

marieclairedean.substack.com

I’ve watched design team values die in a Confluence page. The offsite happens, the Post-Its get transcribed, the principles get written up with care, and then everyone goes back to their desks and ships exactly the way they did before. I’ve seen it with product principles and brand values too. The deck gets built, implementation starts, and the deck gets forgotten.

Vitaly Friedman, writing for Smashing Magazine, on why this matters more than ever:

We often see design principles as rigid guidelines that dictate design decisions. But actually, they are an incredible tool to rally the team around a shared purpose and document the values and beliefs that an organization embodies. They align teams and inform decision-making. They also keep us afloat amidst all the hype, big assumptions, desire for faster delivery, and AI workslop.

Friedman again:

In times when we can generate any passable design and code within minutes, we need to decide better what’s worth designing and building — and what values we want our products to embody. It’s similar to voice and tone. You might not design it intentionally, but then end users will define it for you. And so, without principles, many company initiatives are random, sporadic, ad-hoc — and feel vague, inconsistent, or simply dull to the outside world.

You might not write principles intentionally, but your product will have them anyway. The question is whether you chose them or inherited them by default.

Friedman closes with the part most teams skip:

Creating principles is only a small portion of the work; most work is about effectively sharing and embedding them. It’s difficult to get anywhere without finding ways to make design principles a default — by revisiting settings, templates, naming conventions, and output. Principles help avoid endless discussions that often stem from personal preferences or taste. But design should not be a matter of taste; it must be guided by our goals and values.

Creating principles feels productive. But alignment without embedding is a Confluence page nobody opens twice. Principles have to show up in the Figma component library, the ticket template, the review rubric. They have to be repeated so that they are ingrained. They have to become the path of least resistance.

Smashing Magazine article title card: "A Practical Guide To Design Principles" by Vitaly Friedman, tagged Design, UX, UI.

A Practical Guide To Design Principles — Smashing Magazine

Design principles with references, examples, and methods for quick look-up. Brought to you by Design Patterns For AI Interfaces, friendly video courses on UX and design patterns by Vitaly.

smashingmagazine.com

Dan Saffer applies mid-century existentialism to the question of what “meaning” actually requires of the people building digital products, and the result is unusually rigorous. His sharpest move is applying Sartre’s concept of “projects” to AI tools:

When someone uses ChatGPT to write an essay, the Sartrean question is: whose project is this really? If the user is exploring ideas and using the tool as a thinking partner, they’re taking it up into their own meaning-making project. But if they’re pasting in a prompt and submitting the output unchanged, the system has effectively become the meaning-maker, and the user has become a delivery mechanism. The same tool can function either way. The design question is which relationship the system encourages.

Saffer connects this to Camus and the problem of frictionless design:

When every friction is removed in the name of efficiency, the activity can be hollowed out. There is nothing left to push against, and meaning drains away. This is something that AI systems have become exceedingly good at. Push the sparkle button, the task is done for you, and you have learned nothing and enjoyed nothing.

The HCI/UX field spent decades optimizing for friction removal. Saffer’s argument is that some friction is where the meaning lives. Design the struggle away and you don’t help the user. You empty the experience. Not every friction should be removed.

Saffer’s closing:

This sensibility insists that users are not information processors, not customers, not eyeballs, not tapping fingers, and not data sources. They are meaning-making beings whose freedom and dignity are at stake in every interaction. It asks designers to take seriously the existential weight of what they build. The systems we design become part of the conditions of human existence, shaping what people can choose, what they can see, who they can become.

Saffer covers Sartre, Camus, Kierkegaard, Heidegger, and de Beauvoir in the full piece, each applied to contemporary design problems. It’s a lot, and it’s all good.

Collage of five black-and-white portrait photos of mid-20th century philosophers, including one woman and four men, one holding a pipe.

The Existential Designer: Facilitating Meaning Through Interaction

Designers like to talk about making meaningful products or using the tools of design to make meaning.

odannyboy.medium.com

Yours truly got quoted in Fast Company. Grace Snelling, surveying the industry reaction to Lenny Rachitsky’s TrueUp hiring data, pulled a comment I left under Rachitsky’s original Twitter post:

Designers have designed themselves out of the equation because of design systems. But, IMHO, the secret sauce has never been the UI. It was the workflows and looking across the experience holistically.

Let me expand on that. The UI has always been the easiest part of product design. Design systems made that even more true. What separates a great product from a mediocre one is understanding our users deeply enough to create experiences that actually delight them. That understanding is the work AI can’t do, and it’s the work too many teams were already skipping before any standoff started.

The data behind the standoff: Rachitsky’s analysis of TrueUp’s job market tracker shows design roles have been flat since early 2023 while PM and engineering roles surged. (Quick side note: this data is for tech startups, not the general tech industry or design industry at large.) His theory:

I don’t know exactly what’s going on here, but it does feel AI-related. […] Unlike PM and eng, which started growing in 2024 (two years post-ChatGPT), design didn’t. If I had to venture a theory, I’d say that because AI is allowing engineers to move so quickly, there’s less opportunity—and less desire—to involve the traditional design process.

Claire Vo, founder of ChatPRD, puts the harder version of why:

Often design teams & designers are the most resistant to change org in the EPD triad, with highly vocal AI opponents, and little skill or interest in the art of campaigning for influence or resources. […] If a PM or engineer can get 85% there with tailwind and a dream, you better come to the table with more than ‘I represent the user.’

“I represent the user” was never enough on its own. It just went unchallenged when designers were the only ones who could ship polished interfaces.

Anthropic’s chief design officer Joel Lewenstein on where the EPD triad actually lands:

I think there’s a lot of role collapse at the very beginning, but there are still pretty clear swim lanes as things get into the later stages of product development. […] It’s like a Venn diagram that’s coming closer together.

Three hands pointing toward a central point on a red background, surrounded by colorful lightning bolt shapes in green, blue, and pink.

Why are designers, engineers, and product managers in a ‘three-way standoff’?

New data has the design community in a debate about the future of their jobs.

fastcompany.com

Silicon Valley’s pitch to designers is that AI is the more knowledgeable partner now, so they should get good at prompting it. Write better instructions, get better output.

Peter Zakrzewski, writing for UX Collective, pushes back:

The current Silicon Valley pitch to designers is essentially this: AI is your MKO now. It knows more patterns than you do. It executes faster than you do. It can code. Your job is to learn how to give it good instructions — to become a fluent prompter of a more capable system. I want to challenge that framing directly.

His challenge starts with a concrete test. He asked three leading AI systems to render a dining table with a concrete slab top resting on dry spaghetti legs, then show the scene five seconds after the legs gave way. All three rendered the impossibility with total confidence. None could feel that the physics don’t work.

That test illustrates what Zakrzewski calls the Inversion Error:

We have built a Symbolic Giant resting on an Enactive Void. These systems can write about gravity with technical or even poetic fluency but cannot feel it. They can describe a structure but cannot tell you whether it will stand or fall. The ground is shaking because the floor is missing.

“Symbolic Giant resting on an Enactive Void” is a mouthful, but the floor metaphor does the work: AI’s language fluency masks a total absence of spatial, embodied reasoning, the kind designers rely on every day without naming it. Zakrzewski on what that means for the prompting pitch:

Designers do not think primarily in sentences. Our human cognition is deeply embodied. We think in diagrams, in spatial relationships, in load paths and sight lines and in the non-discursive logic of things that must connect to other things in three-dimensional space. […] We are being asked to compress years of embodied cognition and our three-dimensional spatial judgment into a text prompt and then accept whatever the machine generates as an adequate rendering of our intent. We are, in other words, being asked to abandon the very capability that the AI lacks and that our projects require.

When someone tells designers to compress spatial judgment into a text prompt, they’re asking designers to throw away the one capability AI genuinely lacks and the one we’re genuinely great at.

There was a theme to some of the posts on this blog last week—about how words should come before the pixels. I made a similar argument in the newsletter: the work is getting more verbal and conceptual, but the eye stays. Zakrzewski makes the case for what words alone can’t carry: the spatial, embodied judgment that tells you whether the thing will actually stand.

A mechanical robotic hand reaching upward against a stormy sky, overlaid with a bold red banner reading "Form follows nothing."

The ground is shaking: Why designers must flip the script on AI

Something has shifted in the way the design field operates, and I think most of us can sense it even if we haven’t yet found the words or…

uxdesign.cc