
284 posts tagged with “product design”

I’ve been pro-prototype: PMs replacing PRDs, designers prototyping interactions in code. Pavel Samsonov, writing at Product Picnic, aims at exactly that position. He opens by borrowing a distinction from Andy Polaine:

Demos and prototypes sit on a continuum, but I consider demos something to help you show a concept to other people in a form that looks and feels like the real thing. Prototypes are things you create to test something you don’t know until you build and test it.

Correct distinction. A demo succeeds on stakeholder approval; a prototype succeeds on learning. Both artifacts can be interactive and polished. What separates them is what counts as success. Samsonov on what happens when teams conflate them:

The only thing these demos are helping you test is whether your stakeholder likes what they see (the first loop) and as soon as they say “yes,” it becomes good enough to ship. Whether that second loop (releases go out, measurements come in) ever gets tracked or not is not something I’d be willing to put money on. Because once the demo is productionized, it goes from the realm of delivery velocity (which gets you shoutouts and promotions) into the realm of maintenance (which tends to be ignored even as it eats up more than half of the team’s bandwidth).

AI makes both easier to produce. Samsonov’s read on what happens when teams use the speedup wrong:

Shoving out more prototypes is not a heuristic for success; it is a heuristic for failure because it shows that you don’t know what you are trying to learn.

Agreed. Samsonov goes further:

This is exactly why AI-generated prototypes are not working, and have not helped anyone do anything ever. Some have accused me of going too far with this assertion, but I stand by it, because it is rooted in the very nature of what a prototype is (and is not), and what makes it successful (or does not).

Here’s where I differ. Brian Lovin’s Notion prototype playground exists because static mocks enforce golden-path thinking. The playground surfaces the messy middle of AI chat: follow-ups and latency changes no one mocks up. Édouard Wautier’s Dust team prototypes state changes and motion Figma can’t show. Figma PMs ran five user interviews in two days off an AI-built prototype, which is a textbook closed second loop. All three count as prototype work.

Samsonov’s diagnosis is right. His absolute stance is, well, too absolute. AI-generated prototypes haven’t helped anyone only if you assume they’re all demos, which is exactly what the distinction he just drew tells us not to assume.

Product Picnic 64 title card over a vintage black-and-white photo of three people eating and drinking outdoors on rocky terrain.

Designers will never have influence without understanding how organizations learn

We confuse prototypes with demos, and validation with confirmation bias. As a result, we cannot lead — instead, we are led.

productpicnic.beehiiv.com

In my previous item, I linked to a post by Adi Leviim making the case against chat as the default AI interface, reading the 2024 wave of GUI retrofits the AI labs shipped—Canvas, Artifacts, Projects, Computer Use, Deep Research—as the industry admitting a text box alone wasn’t enough. Matt Webb, writing on Interconnected, wants every service to ship a CLI instead. Both arguments are about text, and they look contradictory. They aren’t. Webb’s case for going headless:

It’s pretty clear that apps and services are all going to have to go headless: that is, they will have to provide access and tools for personal AI agents without any of the visual UI that us humans use today. […] Why? Because using personal AIs is a better experience for users than using services directly (honestly); and headless services are quicker and more dependable for the personal AIs than having them click round a GUI with a bot-controlled mouse.

Webb’s CLI sits on the agent-to-service layer. Leviim’s retrofits sit on the human-to-agent layer. The text on one side is a protocol for machines. The text on the other is a user writing out intent in sentences. Both are text, but the role is different. Webb makes the split explicit when he turns to what it means for design:

So from a usability perspective I see front-end as somewhat sacrificial. AI agents will drive straight through it; users will encounter it only once or twice; it will be customised or personalised; all that work on optimising user journeys doesn’t matter any more. But from a vibe perspective, services are not fungible. […] Understanding that a service is for you is 50% an unconscious process - we call it brand - and I look forward to front-end design for apps and services optimising for brand rather than ease of use.

Interesting, right? Webb believes the need for human-facing UI, and with it user journeys, will shrink. He’s designing for an agent-first world.

Webb goes on:

If I were a bank, I would be releasing a hardened CLI tool like yesterday. There is so much to figure out: […] How does adjacency work? My bank gives me a current account in exchange for putting a “hey, get a loan!” button on the app home screen. How do you make offers to an agent?

The agent becomes the surface designers have to figure out.

Abstract illustration of tangled white curved lines forming loose oval shapes against a soft green background with muted circular shadows.

Headless everything for personal AI

It’s pretty clear that apps and services are all going to have to go headless: that is, they will have to provide access and tools for personal AI agents without any of the visual UI that us humans use today.

interconnected.org

Every major AI lab spent 2024 bolting GUI surfaces onto chat: Canvas, Artifacts, Projects, Computer Use, Deep Research. That’s seven retrofits across three AI firms in twelve months. Adi Leviim, writing for UX Collective, reads that wave as the industry conceding in public what designers have been saying since Amelia Wattenberger’s 2023 essay on why chatbots aren’t the future of interfaces. His setup for why the default took hold:

Open any AI product launched in the last three years. Ignore the model, the logo, the branding. You will find the same interface: a text input at the bottom of the screen, a send button, and a scrollback of alternating messages. This is not a random convergence. It is the interface that fell out of what large language models could do on day one: pattern-match on text. In 2022 we had a new capability and no time to design around it, so we shipped what was fastest to build and called it conversational AI. Three years later, the fastest thing to build has become the thing everyone builds. That is how defaults calcify.

Wattenberger’s essay predates the entire retrofit wave. Leviim counts the retrofits as evidence the rectangle was always going to need help:

Calling this progress is charitable. It is the industry discovering, retrofit by retrofit, that a text box alone cannot hold a meaningful creative surface. You cannot edit a thousand-line document by asking the bot to re-output it with “line 312 changed to X”. You cannot iterate on a design by describing it. You cannot plan a research project without seeing the plan. The moment the task has a structured output, the chat box becomes the wrong place to work, and the vendors put a canvas, a side panel, an editor, a workspace, or a planner next to it.

“Retrofit by retrofit” is the phrase that carries his argument. Each retrofit is a clickable, scrollable, draggable pattern the chat box had removed. The AI labs are rebuilding what 2015-era UI already had.

Leviim continues, separating intent from chat:

Expressing intent does not require prose. A date picker expresses temporal intent more precisely than any sentence. A pair of sliders expresses a tradeoff more legibly than a paragraph. A file upload expresses “work on this thing” without ambiguity. Every one of these is intent-based. None of them is chat. The chat box is one possible implementation of the paradigm, and by all accessible evidence it is a low-resolution one.

Jakob Nielsen’s 2023 essay, “AI: First New UI Paradigm in 60 Years,” treated chat as the way to express intent. Leviim agrees intent-based interaction is the shift. He argues chat is the wrong way to express it. Date pickers, sliders, file uploads are all intent surfaces, and none of them is chat. Which is where the design work goes next:

the good AI UX work of the next three years will be distributed across a thousand of those scoped surfaces rather than concentrated in one generalized text field.

That’s the brief for anyone designing AI products.
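
To make that brief concrete: here’s a sketch, my own illustration and not from Leviim’s piece, of what “intent without prose” looks like as data. Each scoped surface maps to a typed field an agent can act on without parsing a paragraph.

```typescript
// Illustrative only: a hypothetical intent object a scoped surface
// could produce. None of these names come from Leviim's article.
interface TripIntent {
  departAfter: Date;     // date picker: temporal intent, zero ambiguity
  budgetVsSpeed: number; // slider 0..1: a tradeoff made legible
  attachment?: string;   // file upload: "work on this thing"
}

const fromStructuredUI: TripIntent = {
  departAfter: new Date("2026-03-14"),
  budgetVsSpeed: 0.7,
};

// The chat-box version of the same intent: a string the model has to
// parse, and can misread. That's the "low-resolution" gap Leviim names.
const fromChatBox = "sometime mid-March, cheap-ish but not too slow";

console.log(fromStructuredUI, fromChatBox);
```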

Side-by-side comparison of a Structured UI with a dropdown, date picker, checkboxes, and range slider versus a minimal AI Chat Interface with a text input and Send button.

The chat box isn’t a UI paradigm. It’s what shipped.

Before LLMs we had direct manipulation, structured forms, and progressive disclosure. Then we collapsed all of it into a text box.

uxdesign.cc

Showing stakeholders prototypes is often a high-wire act. Back in the old days, that’s why we showed wireframes before high-fidelity comps, or mockups. But now, with tools like Lovable or even Claude Design producing prototypes that demo really well, it’s easy to mistake one for a shippable product. The stakeholder in the room could easily say “ship it.”

That used to be where the Figma-to-code handoff became visible. Now it’s invisible. Greg Kozakiewicz, writing on LinkedIn, wants designers to see it again. He updates an old construction-industry line for the AI era:

We used to confuse the drawing with the building. Now we confuse the prototype with the product. A working prototype also accepts everything. It will let you register, log in, fill out a form, submit something. It all works. In the demo. On a good laptop. With a fast connection. With someone who knows what they’re doing and what the app is supposed to do.

The design-to-code gap didn’t vanish when AI made prototypes interactive. It went underground. Now it shows up as a stakeholder saying “looks great, let’s ship it” to something that couldn’t survive real data or production constraints. Kozakiewicz puts a number on it:

AI gets you to about 60%. A solid, reasonable, generic 60%. The layout makes sense. The flow is logical. The copy is clear enough. It looks like a product that works. And for a lot of people, especially people making decisions about budgets and timelines, 60% looks like 90%. Because the last time they saw a prototype, it was a static Figma file with “Lorem ipsum” everywhere.

A hand lifts a modular glass block from a detailed architectural scale model, revealing illuminated interior floors with tiny figurines inside.

Paper accepts everything. So does a prototype.

There’s an old saying in construction. Paper will accept everything. You can draw anything on paper. A swimming pool on the roof. A spiral staircase made of glass. A cantilever that defies physics. Paper doesn’t argue. Paper doesn’t say “this won’t hold.” Paper just sits there, looking beautiful, full of promise.

linkedin.com

The designer’s role is widening at both ends of the product stack. Earlier, I linked to a post by Chad Johnson arguing designers gain influence by moving upstream: becoming orientation devices for the team, shaping the problem before it gets named. Daniel Mitev, writing for UX Collective, argues designers gain authorship by moving downstream, into the code:

The industry has been asking whether designers should code for over a decade. It was always the wrong question, or at least the wrong framing. It implied the barrier was technical: that designers lacked something fundamental, something that required years of study to acquire. Learn TypeScript. Understand the DOM. Earn your way across the divide. That wasn’t the barrier.

Mitev’s argument comes down to access. AI tooling compresses the translation layer and returns authorship to the designer:

What AI tooling gives back is authorship over the surface layer — the part users actually touch. A designer can now open the codebase, adjust how an element behaves, change how a transition feels, and verify the output against their own intent in real time. The easing curve gets set by the person who decided what it should feel like. The hover state gets defined by the person who thought through why it matters. That work no longer requires an interpreter.

He points at Alan’s “Everyone Can Build” initiative—283 pull requests shipped by non-engineers over two quarters, each merged after engineering review—as evidence it’s already happening.

Johnson and Mitev aren’t in conflict. They’re describing the same shift from opposite ends. The interpreters at the top of the product stack—PMs who owned problem framing and prioritization—are compressing. The interpreters at the bottom—frontend engineers translating intent into code—are compressing too. Both jobs return to the designer who understood the intent first.

The role widens. Some designers will gravitate to one end or the other. The designers who stretch the full range—orientation work and authorship—are working the widest version of the job.

A hand pressing an Enter key above a terminal showing a git commit command, with text reading "Designers finally have a say in the product they design."

Designers finally have a say in the product they design

AI didn’t teach designers to code. It gave them back the decisions that were always theirs.

uxdesign.cc

(Second link to Chad Johnson this week, but I just discovered his Substack, so ¯\_(ツ)_/¯.)

Chad Johnson, writing in his newsletter, argues that designer influence in product decisions comes from something other than craft output. He lays out the underlying dynamic:

Roadmaps are shaped less by who has the best ideas and more by who controls the framing of tradeoffs. Every roadmap decision is a bet: build this instead of that, now instead of later, for these users instead of those. Whoever makes the risk feel smaller tends to win.

So where does the designer fit? Johnson:

The most influential designers at startups do not position themselves as makers of screens. They act as orientation devices for the team. Orientation is the ability to help a group understand where they are, what matters, and what tradeoffs are real. It precedes prioritization, and it makes decision-making possible.

A designer whose output stops at screens is working on the wrong layer of the problem. Johnson lists the skills that back the orientation role:

Designers who shape direction invest in strategic framing, business literacy, and narrative construction. They learn to say no with evidence and to disagree without drama.

Johnson’s list is right as far as it goes. He understates one skill: legibility. A lot of design influence breaks down at translation. The thinking is strategic; the communication stays in design vocabulary. A sharp problem statement understandable only to other designers stays in the design review. Designers who change the conversation make their analysis readable in product and business terms without flattening it. That’s the same move Johnson gestures at when he describes “decision-ready artifacts” as “tools for comparison… designed to provoke judgment, not admiration.”

Johnson’s closer calls the future of design leadership “quieter, more rigorous, and deeply strategic.” That’s right. It’s also a role that depends on being read by the people making the call.

Large-scale flowchart on a white wall with quirky decision questions including "Have you ever missed an airplane flight?" and "Are you good with names?"

Why Most Designers Will Never Influence Product Roadmaps

A practical explanation of how roadmap decisions are really made, and how designers can gain influence

chadsnewsletter.substack.com

Two podcast conversations with frontier-lab design leaders on what designing at an AI lab looks like day-to-day. I previously linked to Lenny Rachitsky’s interview with Jenny Wen, head of design for Claude, where she described a redistribution of designer hours: less time making mocks, more pairing with engineers, a sliver of direct implementation. The activities themselves still look like design.

Ian Silber, head of product design at OpenAI, on Michael Riddering’s Dive Club, describes work that doesn’t fit the same list:

Designers working on this are hopefully spending a lot less time in Figma or whatever tool you use to draw pixels, and more time really thinking about how you interact with this thing, and the fact that the model really is the core product.

Silber’s concrete example is onboarding. Instead of building a first-run tutorial, his team shapes what the model already knows about the person:

We have this super intelligent model that could probably do a much better job trying to understand what this person’s goals are […] We’re really stripping back a lot of what you might traditionally do and trying to say, “Well, actually […] let’s think about like how we should give this context to the model that this person is brand new and they might need some handholding.”

The traditional response adds UI around the problem. Silber’s team takes it out and gives the model enough context to meet the user where they are.

That kind of work needs its own scaffolding, and OpenAI is building it:

We have a whole system called the Dynamic User Interface Library, which allows us to design things that the model can then interpret.

Primitives the model composes at runtime, shaped by system prompts and context rather than drawn flow by flow. Wen is describing a redistribution of designer hours inside activities that still look recognizable. Silber is describing activities that don’t quite have names yet. And yes, that is still design.

Ian Silber - What it’s like designing at OpenAI

If you’re like me you gotta be curious... what’s it like designing at OpenAI?

youtube.com

The gap between an AI-produced prototype and a shippable product has a shape. Most of us assume it’s the visual 20%: the polish AI output never quite nails. Chad Johnson’s case is that the 20% is the trivial part, and the real gap sits upstream of everything visible.

Chad Johnson, writing in his newsletter:

The deeper issue was that nobody had asked whether a prototype was even the right artifact to produce at that stage. The PM had made three assumptions about user intent that we hadn’t validated. They’d skipped past a critical question about whether this flow needed to exist at all, or whether the real problem was upstream in the information architecture. They’d built a beautiful answer to a question nobody had confirmed was worth asking. That’s the part that stuck with me. Not the visual gaps. The thinking gaps.

That lines up with what I’ve been calling C+ out of the box: artifacts that read well and seem credible until you apply critical thinking. Johnson gets specific about what’s actually missing, and none of it is visual: the assumption nobody validated, the upstream question nobody asked. The interface was fine. The thinking was absent from the (probably) AI-generated PRD.

Johnson again:

…design production got democratized, but design judgment didn’t. Anyone can make something now. Almost nobody new learned how to think well about what should be made, why, and for whom. And that gap, between what’s possible to produce and what’s actually been thought through, is now the entire playing field for our profession. Designers aren’t becoming obsolete. They’re becoming stewards.

Judgment still takes years to build, and no tool compresses that.

The last 20% is rarely the gap that matters. The first question—should we build this?—almost always is. Very few teams have the muscle to ask it.

Abstract digital art featuring curved, layered surfaces with fine parallel lines in warm orange, red, and deep blue gradients.

The Last 20% and Who’s Asking Why?

Everyone can build now. Almost nobody stops to ask if they should.

chadsnewsletter.substack.com

Tara Tan surveyed more than a dozen AI design tools for The Review. Her field audit sits alongside the design-process compression argument:

In working with these tools, one insight emerged for me: the tools that understand your design system produce better output than the ones that don’t. […] The competitive moat in this market is not generative quality, which is commoditizing fast. The moat is the design system graph: the tokens, components, spacing scales, typography rules, and conventions that make your product look like your product and not a generic template. Whoever makes that system machine-readable for agents will win the enterprise.

That’s the operational reason my proposal for an agent design team hinges on a rock-solid design system. What distinguishes output across the tools Tan surveyed is whether the generator respects your existing design system or treats every request as a fresh mood board.
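
What “machine-readable” can mean in practice is surprisingly mundane: a token graph agents resolve instead of guessing at values. A minimal sketch with hypothetical token names and values (Tan’s piece doesn’t specify a format):

```typescript
// A toy design-system graph. Token names and values are invented for
// illustration; real systems add components, variants, and usage rules.
type Token = { value: string; usage?: string };

const tokens: Record<string, Token> = {
  "color.action.primary": { value: "#5A32FA", usage: "primary buttons, links" },
  "space.inline.md": { value: "12px", usage: "default horizontal padding" },
  "type.body.family": { value: "Inter, sans-serif" },
  "radius.control": { value: "6px", usage: "buttons, inputs, chips" },
};

// An agent that resolves tokens instead of hardcoding hex values
// inherits the product's look for free, and survives a rebrand.
function resolve(name: string): string {
  const token = tokens[name];
  if (!token) throw new Error(`unknown token: ${name}`);
  return token.value;
}

console.log(resolve("color.action.primary")); // "#5A32FA"
```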

Tan’s other finding is the role-shift:

The same shift is happening in design. At Uber, Ian Guisard didn’t stop being a design systems lead when uSpec automated his spec-writing. His job shifted from producing documentation to encoding expertise, writing agent skills, defining validation rules, deciding what “correct” means for each component across seven platforms. The human became the system designer, not the system operator. […] The canary is singing. And the song is about the work shifting from execution to judgment, from operating the system to designing the system itself.

Same title, different job. Ian Guisard’s taste still matters; it lives in the skills and validation rules now, not the deliverables. That’s “follow the skill, not the role” made concrete. Guisard used to write specs; now he writes the rules the system follows to validate them.

The infrastructure is catching up to the process. Tan’s implicit prescription is straightforward: make the design system machine-readable, win the enterprise. Some of that tooling is already out in the open. Southleft’s Figma Console MCP (which Uber’s uSpec is built on) lets agents operate on tokens and components without a custom platform.

But tooling alone isn’t enough. Most of us aren’t Uber. The path for teams without a dedicated design systems lead still needs someone to do the work Guisard did: encoding the expertise and defining what “correct” looks like across platforms. That’s where the next round of tooling needs to land.

"The Design Agent Landscape" diagram categorizing AI design tools into three groups: Agent-first canvas (Pencil, Paper, OpenPencil), Design system-first (Figma MCP, Console MCP, Google Stitch), and Code-native (Subframe, MagicPath, Tempo, Polymet, Magic Patterns, Lovable, Bolt, v0, Replit).

The Design-Build Loop

Design is where AI product workflows meet their hardest test: an audience that will always, primarily, be human. A look at the tools, teams, and infrastructure emerging around AI design agents.

thereview.strangevc.com
A sleek high-speed bullet train with glowing headlights crossing a bridge through dense fog over a misty landscape.

Acceleration Is Not Automation

I’ve been wandering the wilderness to understand where the software design profession is going. Via this blog and my newsletter, I’ve been exploring the possibilities by reading, commenting, and writing. Many other designers are in the same boat, with Erika Flowers’s Zero Vector design methodology being the most defined. Kudos to her for being one of the first—if not the first—to plant the flag.

Directionally, Flowers is right. But for me, working in a team and on B2B software, it feels too simplistic and ignores the realities of working with customers and counterparts in product management and engineering. (That’s her whole point: one person to do it all, no handoff.)

The destination is within view, but hazy and distant. The path to get there is unclear, like driving through soupy fog when all you can see is your headlights reflecting off the mist.

Every few weeks, another essay or YouTube video announces that AI has killed craft. One of my favorite designers writing about design, Christopher Butler, goes the other way:

No knowledge I possess about design—the incorporeal understanding that makes what I create better than an off-the-shelf template or something done by someone without my experience—is made irrelevant by AI. Nor is it contradicted by my use of AI tools. Structure still communicates before content. Visual hierarchy still guides attention. Negative space still creates rhythm. These principles don’t vanish because I’m working through AI rather than directly manipulating pixels. The craft migrates to a different level of abstraction. But it remains craft.

Butler’s claim is that the principles don’t vanish; they operate at a higher altitude. The unfinished part is naming where that altitude actually is. For product designers, it’s concept and hierarchy: the decisions that require knowing the user and the stake someone is willing to carry. The generated layout and the choice of components are still outputs. What’s left of design is the judgment that picks between them.

Butler’s sharper line is the binary between consumption and practice:

Someone who generates an interface with AI and calls it done isn’t practicing craft. They’re consuming convenience. Someone who generates an interface, inspects it, questions what it’s actually communicating, refines the structure, generates again, compares variations, understands why one serves the user better than another—they’re practicing craft. They’re building knowledge through iteration. The tool doesn’t determine whether you’re working with craft. Your approach does.

That’s Jiro Ono’s shokunin applied to interfaces: craft as lifelong practice, not manual labor. A camera doesn’t take a picture, and a model doesn’t make a design. That decision is the craft.

Butler’s argument reassures me. What worries me is how optional that decision is becoming. The output already looks finished. The designers who keep asking why one version serves the user better than another will still be designers in five years. The rest may still have jobs, as operators of a tool doing the work their taste used to do.

Close-up of a vibrant fingerprint with swirling ridge patterns in orange, red, blue, and yellow iridescent colors with glittery highlights.

Craft is Untouchable

I have a vested interest in the title of this piece being true. I’ve spent decades developing craft—not just making things, but understanding systems, seeing patterns, making judgments that can’t be reduced to prompts. If AI eliminates the need for that expertise, I’m in trouble.

chrbutler.com

Tommaso Nervegna writes about LinkedIn killing its Associate Product Manager program and replacing it with a new role called the “Full Stack Builder.” The structural bet is interesting, but the finding from their rollout is what matters:

The expectation was that AI would be a great equalizer: juniors would benefit most because AI would close their skill gaps, while seniors would resist the change. The reality was the opposite. Top performers adopted AI fastest and derived the most value from it. Why? Because they had the judgment and experience to know what to ask for, how to evaluate the output, and where to apply it for maximum leverage.

That tracks with everything I’ve predicted, experienced, and seen. The skill that makes AI useful is knowing what good looks like before and after the model generates something. That ability comes from reps.

Nervegna distills LinkedIn CPO Tomer Cohen’s thesis to five skills AI cannot automate:

The five skills that AI cannot automate, according to Cohen, are Vision, Empathy, Communication, Creativity, and Judgment. As he puts it: “I’m working hard to automate everything else.”

The operational version:

The critical insight: the builder orchestrates the agents. The agents execute. Judgment stays human. This is not about replacing people with AI. It’s about compressing the team needed to ship something meaningful from fifteen people to three - or even one.

I’ve been calling this the orchestrator gap: the distance between a designer who uses AI and one who directs it. LinkedIn just gave it a job title. I think we will see more companies go this way. Whether or not it’s a good idea remains to be seen.

A Renaissance-era man studies blueprint sketches on a glowing drafting table while a giant mechanical lobster draws on the plans with an ornate pen.

The Full Stack Builder: The End of the Design Process as We Know It

The double diamond is a liability. Engineers ship faster than designers can explore. The PM role is dissolving and the three profiles that will survive this era look nothing like who we’ve been hiring

nervegna.substack.com

I’ve watched design team values die in a Confluence page. The offsite happens, the Post-Its get transcribed, the principles get written up with care, and then everyone goes back to their desks and ships exactly the way they did before. I’ve seen it with product principles and brand values too. The deck gets built, implementation starts, and the deck gets forgotten.

Vitaly Friedman, writing for Smashing Magazine, on why this matters more than ever:

We often see design principles as rigid guidelines that dictate design decisions. But actually, they are an incredible tool to rally the team around a shared purpose and document the values and beliefs that an organization embodies. They align teams and inform decision-making. They also keep us afloat amidst all the hype, big assumptions, desire for faster delivery, and AI workslop.

Friedman again:

In times when we can generate any passable design and code within minutes, we need to decide better what’s worth designing and building — and what values we want our products to embody. It’s similar to voice and tone. You might not design it intentionally, but then end users will define it for you. And so, without principles, many company initiatives are random, sporadic, ad-hoc — and feel vague, inconsistent, or simply dull to the outside world.

You might not write principles intentionally, but your product will have them anyway. The question is whether you chose them or inherited them by default.

Friedman closes with the part most teams skip:

Creating principles is only a small portion of the work; most work is about effectively sharing and embedding them. It’s difficult to get anywhere without finding ways to make design principles a default — by revisiting settings, templates, naming conventions, and output. Principles help avoid endless discussions that often stem from personal preferences or taste. But design should not be a matter of taste; it must be guided by our goals and values.

Creating principles feels productive. But alignment without embedding is a Confluence page nobody opens twice. Principles have to show up in the Figma component library, the ticket template, the review rubric. They have to be repeated until they’re ingrained. They have to become the path of least resistance.

Smashing Magazine article title card: "A Practical Guide To Design Principles" by Vitaly Friedman, tagged Design, UX, UI.

A Practical Guide To Design Principles — Smashing Magazine

Design principles with references, examples, and methods for quick look-up. Brought to you by Design Patterns For AI Interfaces, friendly video courses on UX and design patterns by Vitaly.

smashingmagazine.com

Dan Saffer applies mid-century existentialism to the question of what “meaning” actually requires of the people building digital products, and the result is unusually rigorous. His sharpest move is applying Sartre’s concept of “projects” to AI tools:

When someone uses ChatGPT to write an essay, the Sartrean question is: whose project is this really? If the user is exploring ideas and using the tool as a thinking partner, they’re taking it up into their own meaning-making project. But if they’re pasting in a prompt and submitting the output unchanged, the system has effectively become the meaning-maker, and the user has become a delivery mechanism. The same tool can function either way. The design question is which relationship the system encourages.

Saffer connects this to Camus and the problem of frictionless design:

When every friction is removed in the name of efficiency, the activity can be hollowed out. There is nothing left to push against, and meaning drains away. This is something that AI systems have become exceedingly good at. Push the sparkle button, the task is done for you, and you have learned nothing and enjoyed nothing.

The HCI/UX field spent decades optimizing for friction removal. Saffer’s argument is that some friction is where the meaning lives: design the struggle away and you don’t help the user, you empty the experience.

Saffer’s closing:

This sensibility insists that users are not information processors, not customers, not eyeballs, not tapping fingers, and not data sources. They are meaning-making beings whose freedom and dignity are at stake in every interaction. It asks designers to take seriously the existential weight of what they build. The systems we design become part of the conditions of human existence, shaping what people can choose, what they can see, who they can become.

Saffer covers Sartre, Camus, Kierkegaard, Heidegger, and de Beauvoir in the full piece, each applied to contemporary design problems. It’s a lot, and it’s all good.

Collage of five black-and-white portrait photos of mid-20th century philosophers, including one woman and four men, one holding a pipe.

The Existential Designer: Facilitating Meaning Through Interaction

Designers like to talk about making meaningful products or using the tools of design to make meaning.

odannyboy.medium.com

Yours truly got quoted in Fast Company. Grace Snelling, surveying the industry reaction to Lenny Rachitsky’s TrueUp hiring data, pulled a comment I left under Rachitsky’s original Twitter post:

Designers have designed themselves out of the equation because of design systems. But, IMHO, the secret sauce has never been the UI. It was the workflows and looking across the experience holistically.

Let me expand on that. The UI has always been the easiest part of product design. Design systems made that even more true. What separates a great product from a mediocre one is understanding our users deeply enough to create experiences that actually delight them. That understanding is the work AI can’t do, and it’s the work too many teams were already skipping before any standoff started.

The data behind the standoff: Rachitsky’s analysis of TrueUp’s job market tracker shows design roles have been flat since early 2023 while PM and engineering roles surged. (Quick side note: this data is for tech startups, not the general tech industry or design industry at large.) His theory:

I don’t know exactly what’s going on here, but it does feel AI-related. […] Unlike PM and eng, which started growing in 2024 (two years post-ChatGPT), design didn’t. If I had to venture a theory, I’d say that because AI is allowing engineers to move so quickly, there’s less opportunity—and less desire—to involve the traditional design process.

Claire Vo, founder of ChatPRD, puts the harder version of why:

Often design teams & designers are the most resistant to change org in the EPD triad, with highly vocal AI opponents, and little skill or interest in the art of campaigning for influence or resources. […] If a PM or engineer can get 85% there with tailwind and a dream, you better come to the table with more than ‘I represent the user.’

“I represent the user” was never enough on its own. It just went unchallenged when designers were the only ones who could ship polished interfaces.

Anthropic’s chief design officer Joel Lewenstein on where the EPD triad actually lands:

I think there’s a lot of role collapse at the very beginning, but there are still pretty clear swim lanes as things get into the later stages of product development. […] It’s like a Venn diagram that’s coming closer together.

Three hands pointing toward a central point on a red background, surrounded by colorful lightning bolt shapes in green, blue, and pink.

Why are designers, engineers, and product managers in a ‘three-way standoff’?

New data has the design community in a debate about the future of their jobs.

fastcompany.com

Nate Parrott, a product designer at Anthropic, in an interview with Ryan Mather for AI Design Field Guide:

More Google Docs than you’d think. More Slack posts than you’d think. I meant what I said earlier: I think that this is the era of designers who design with words more so than designing with pixels.

Parrott describes a content design team whose job is making alien concepts legible:

We have several people at the company on the design team whose job is content design. Their job is basically to look at concepts which are very alien, and figure out how to make them legible to human beings. They don’t draw any pixels, but their work is really important because they are literally thinking about the words we use to describe and the mental models we expect people to put on that will make this stuff work.

The Figma work, Parrott says, is “the easy part.” He uses Anthropic’s design system, drops in components, and moves on. The hard work is upstream: expressing the ideas, figuring out the right language, talking to users. The production of screens has become the smallest slice of the job.

Jenny Wen described designers at Anthropic shipping code, prototyping against the live model, stretching into PM territory. Parrott is describing the same shift from a different angle. The deliverable used to be the mockup. Now the deliverable is the thinking that precedes it.

Vibrant abstract illustration of stylized flowers with glowing, blurred edges in bold red, yellow, orange, pink, and blue tones against a soft gradient background.

AI Design Field Guide

Learn techniques from the designers behind OpenAI, Anthropic, Figma, Notion & more

aidesignfieldguide.com

The first time I wrote about Jenny Wen, I pushed back. She said the design process was dead, and I argued the proportions had shifted but the process itself was intact. I also noted a context problem: her “ship fast, iterate publicly” approach makes sense for greenfield AI products at Anthropic but gets harder with established install bases.

Wen has been making the rounds, and in a new interview I’m finding a lot to nod along to.

Jenny Wen, speaking on Tommy Geoco’s State of Play:

Often design needs to follow what the model is capable of and design from there, as opposed to starting from a design vision first. I think that can feel tough as a designer because you’re like, oh, I want to be design-led, we should be designing it first and then the technology should follow. But I think that’s just the reality of working at a research lab where the technology is emergent and you have to sort of decide what to do with it.

“Design follows the model” is an interesting phrase from a design leader. It inverts the dogma that design should lead and engineering should follow. But Wen isn’t being defeatist. She’s describing a practical reality at a leading AI lab where the models’ capabilities are changing faster than any roadmap can account for.

This shows up concretely in how her team works:

The big thing is designers are implementing code, through using Claude Code. That has been the biggest difference from working at Anthropic versus back when I worked at Figma. […] Even today, we were reporting some bugs and some quality issues, and one of the designers was like, “Cool, let me just fix them.” And that was cool to just not have to tag an engineer for them to do anything.

A designer casually fixing production bugs without tagging an engineer. Just another Tuesday at Anthropic.

Geoco’s summary of Wen’s argument crystallizes something we’ve all been thinking quietly about:

She said, having taste versus being able to execute are two completely different things. They’re usually bundled together, but they don’t have to be. And in a world where AI can increasingly execute, the question becomes, and it’s kind of uncomfortable, do you actually have good taste or are you just pushing pixels around?

That’s the thread tying all of this together. When designers are closer to the product, fixing bugs in production, prototyping against the live model, the judgment they’re applying isn’t visual. It’s product sense: knowing which of those 12 options is worth shipping, which edge case will break trust, when the model’s output is good enough for real users. That’s the taste Wen is describing, and it has very little to do with pixels.

A lot of designers have been coasting on execution skills that felt like taste. They debate corner radii and centering labels in a button with amateur vs pro designer memes. Who cares! AI is about to make the difference visible.

The New Era of UX Designers

Jenny Wen led design on FigJam, one of the most playful tools to hit design in a decade. Now she’s at Anthropic designing Claude. Not just the model, but the product that millions use daily.

youtube.com

Stripe design manager Kris Puckett, speaking on Michael Riddering’s Dive Club, spent the first half of the conversation demoing Metal shaders, custom ocean animations, and a full iOS reading app he built with Claude Code. Then he stopped himself:

AI native has to be beyond just “I made a really cool shader” or “I made this dither effect that every other person is making.” I was doing that today and then I was like, “Oh my gosh, this is… why am I doing this? There’s a hundred of these that are way better than what I’m making right now.”

So what does AI-native design actually look like? Puckett’s answer is “soul”—the quality that makes work feel specifically, unmistakably yours:

I think what people are going to be desperate for is more of that human side of things. They’re going to be longing for […] an era they’ve never experienced because they’re younger, that MySpace generation where your MySpace page was deeply personal to you. My MySpace page was complete custom Kris Puckett perfection at that time. And I think that we’re going to want to see that come back. And I think people are going to want more of those—your portfolio looks and feels like you.

“Soul” is doing a lot of work as a concept there. What Puckett is describing sounds a lot like taste—the ability to make something that feels intentional and specific rather than procedurally generated. His workflow backs that up. Being contrarian, he explicitly rejects the “let the agent run” approach:

I want off that cycle. I do not want to be riding that bike race with anyone else because that’s not how I view these things. They are a force multiplier, but I want them to be focused. I want it to be something that I feel is still authentically me.

What unlocked all of this for Puckett wasn’t technical skill—he’s a designer, not an engineer. It was admitting “I don’t know” and starting anyway. He’d been dreaming of building his own software for 20 years. Claude Code’s blinking cursor was enough to get him started.

Kris Puckett - Becoming an AI-native designer

Today’s episode is with Kris Puckett (https://x.com/krispuckett) who has led design at Mercury, Dropbox, and now as a design manager at Stripe. His journey is the perfect example of what it looks like to lean into this moment in time with AI.

youtube.com

Figma is opening its canvas as a writeable surface for AI agents. Matt Colyer, product director at Figma, on why this matters:

Design decisions—from color palettes and button padding, to typography and interactivity—have always defined how products take shape. No matter how small, those decisions add up. They make your product and user experience stand out from the rest. To date, AI agents haven’t had this context, which is why so many designs created by AI often feel unfamiliar and generic.

The fix is beefing up skills files to encode a team’s design decisions, conventions, and sequencing rules. Agents read them before they touch the canvas. The use_figma tool lets Claude Code, Codex, and other MCP clients create and update assets tied to your design system. Colyer on what that changes:

Your conventions are no longer static documentation. They become rules agents follow as they work—applied through components, variables, and the structure you’ve already defined.

The detail worth paying attention to is what Colyer describes as a self-healing loop. When an agent generates a screen, it screenshots the result, checks it against the design system, and iterates. Because it’s working with real components and auto layout, those corrections compound through the system itself, not just the pixels on screen.
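
In code terms, the loop reads something like the sketch below. Every function is a hypothetical stand-in, since Figma hasn’t published the internals; what matters is the shape: generate, screenshot, check, fix, repeat.

```typescript
// Sketch of the self-healing loop Colyer describes. All functions are
// hypothetical stand-ins; Figma hasn't published the actual internals.
interface Violation { node: string; rule: string }

async function agentDraw(prompt: string): Promise<void> { /* compose from real components */ }
async function captureCanvas(): Promise<Uint8Array> { return new Uint8Array(); }
async function checkAgainstSystem(shot: Uint8Array): Promise<Violation[]> { return []; }
async function agentFix(violations: Violation[]): Promise<void> { /* edit components, variables */ }

async function generateScreen(prompt: string, maxPasses = 3): Promise<void> {
  await agentDraw(prompt);
  for (let pass = 0; pass < maxPasses; pass++) {
    const violations = await checkAgainstSystem(await captureCanvas());
    if (violations.length === 0) return; // converged on the design system
    // Fixes land on components and auto layout, not loose pixels,
    // so each correction propagates beyond this one screen.
    await agentFix(violations);
  }
}

void generateScreen("pricing page header");
```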

It’s free during beta, with plans to move to a paid API. Figma is finally joining a party already in progress: Subframe, Paper, and Pencil all offer this workflow.

Terminal window titled "earthling — zsh" showing an AI prompt to build a component set from a button.tsx file, with output confirming 72 button variants created, overlaid on a Figma canvas with UI components.

Agents, Meet the Figma Canvas

Starting today, you can use AI agents to design directly on the Figma canvas. And with skills, you can guide agents with context about your team’s decisions and intent.

figma.com

Gui Seiz designs at Figma. His team uses Claude Code to bridge design and code. And he still reaches for the canvas when precision matters.

Seiz, speaking on Claire Vo’s How I AI podcast:

I don’t think we’re there yet in general with these code tools in terms of the precision editing that you want to do. […] I think still the gold standard for me is just being able to drag stuff around. And you can do a lot with a click that would take you a hundred words to write and to really precisely nail. No one wants to prompt for the exact hex code or the shade of yellow and that kind of stuff. That’s just easier to just quickly do and directly manipulate.

Seiz isn’t anti-AI. His team pulls production code into Figma via MCP, edits it visually, and pushes it back to the codebase. He’s bullish on what that does to the old workflow:

It’s definitely changed our workflows in a way that it’s really blown up what a workflow even is. Before, for the majority of our careers, we’ve had a very linear, agreed-upon workflow where you increase fidelity as you go on. Because it’s really expensive to work in code, and it’s really cheap just to trade ideas and sketch them out. But AI basically collapsed that, and it’s just as cheap to riff in code as it is to riff in design.

The cost of exploration collapsed. The need for direct manipulation didn’t. Both can be true.

How Figma engineers sync designs with Claude Code and Codex

Most teams are still passing static design files back and forth, and most Figma files are already out of date by the time they reach engineering. Gui Seiz (designer) and Alex Kern (engineer) from Figma walk through the exact workflow their team uses to bridge that gap with AI, live onscreen. They…

youtube.com

I published an article about the design talent crisis in Fast Company! The setup is ground I’ve covered extensively on this blog. But I draw a connection with the trades: the construction industry has a solution the design industry could learn from.

In the article, I write:

Construction has been running formal apprenticeship programs since the National Apprenticeship Act of 1937, and informally for centuries before that. The Department of Labor’s Registered Apprenticeship Programs enrolled roughly 940,000 people nationwide in fiscal year 2024. These aren’t casual internships. They’re structured, multi-year pathways that pair inexperienced workers with seasoned professionals and build skills through graduated responsibility. The retention numbers tell you everything: Apprenticeship programs report a 93% employee retention rate. For every $100 employers invest, they see an estimated $144 return.

The contractors I work with don’t debate whether to invest in their pipeline during a downturn. They know that if they stop training apprentices, they won’t have journeymen in four years, and they won’t have master tradespeople in 10. The pipeline is the business.

There’s a three-point plan to dig us out of this hole. But of course, it requires commitments from design leaders and the C-suite:

  1. Stop tying junior hiring to project demand
  2. Formalize mentorship
  3. Accept the short-term cost

There is more to the article. Please take a read and share!

Smiling woman with short hair and round glasses looking down at a tablet, wearing a floral patterned blouse, with FC Executive Board branding.

Hire junior designers today or risk a broken pipeline

The tech industry keeps telling itself the pipeline will refill on its own. Construction figured out a century ago why that thinking is wrong.

fastcompany.com

A working prototype in ten minutes, then forty-four UI panels, each grounded in real customer research. Jason Cyr, writing for The Human in the Loop, on what happened when his team pointed Claude Code at Cisco’s design system:

Last week, one of my design directors pointed Claude Code at Magnetic and asked it to build a security detection prototype. Real components, real navigation, theme switching, working admin panels — running in ten minutes. Then he connected it to our research repository and it built 44 detection detail panels, every design decision tracing back to something a real customer said. That happened because the AI had access to our design system.

Cyr’s takeaway: the design system was the design review.

Your design system is your leverage. It’s how your taste scales. The teams that invest here will see their design decisions show up in every agent-generated output, automatically. The teams that don’t will spend all their time cleaning up messes that a good system would have prevented.

Monday.com arrived at the same conclusion from the engineering side. They built a design-system MCP after their agents kept hardcoding colors and ignoring typography tokens.

Cyr doesn’t shy away from who this leaves behind, either: designers whose value lives entirely in production. “Not because they’re bad at their jobs — but because AI just got very good at theirs.”

Title card reading "Design Teams in the Agentic Era" with the subtitle "A manifesto for what comes next." on a dark background.

Design Teams in the Agentic Era

My thoughts on what comes next

jasoncyr.substack.com

David Hoang, writing for Proof of Concept, proposes a squad model for tackling a company’s hardest, most ambiguous problems:

The squad: a forward deployed engineer, a forward deployed designer, and a researcher. Three people. That’s it. They operate like a startup-within-the-company, deployed against a specific, ambiguous problem. […] This is a product discovery team with teeth — they don’t just produce insights and hand them off. They produce working prototypes and validated direction. […] Three people don’t need standups, retros, or Jira boards. They need a shared problem and a whiteboard.

No PM. The shared problem replaces the roadmap, and a researcher replaces the product manager. Hoang borrows the concept from Palantir’s Forward Deployed Engineers and extends it to design. His argument: AI tools have given designers enough technical leverage to prototype at engineering speed, so the designer who finds the problem can build the first cut of the solution.

A three-person team with AI tools in 2026 can cover the ground that used to require a ten-person cross-functional team. That’s the direct result of collapsing the build cost of exploration.

Hoang argues that the rotation model matters as much as the squad composition. Four to eight weeks, then disband. The team doesn’t calcify into a feature factory. Designers rotate through the company’s hardest problems instead of sitting on the same product team filing tickets for years.

My counter, though: designers sitting in the same problem space gain deeper knowledge and context. Rotation could be counterproductive if not handled deliberately.

Hand-drawn Venn diagram showing three overlapping circles labeled Researcher, Design Engineer, and GTM, with the center intersection labeled "Forward Deployed Designer."

Forward deployed designer

In the early 2010s, Palantir coined a role that didn’t exist before: the Forward Deployed Software Engineer. These weren’t engineers building features on a roadmap. They were engineers embedded directly at client companies — sitting with analysts, operators, and decision-makers — to discover the problem and build the solution in the same motion. The role spread. Databricks, Scale AI, and OpenAI adopted variations.

proofofconcept.pub

I’ve argued that design tools should be canvas-first, not chatbox-first. Jeff, writing in Abduzeedo, makes the case for the opposite:

Designers have always borrowed from developers. Version control, component systems, token-based design — these ideas crossed the aisle from engineering and reshaped how visual work gets done. Vibe designing follows the same logic. Instead of opening Figma and reaching for a drag-and-drop panel, designers drop into the terminal. They prompt an AI model directly from the CLI, pipe the output into a file, and iterate without ever touching a mouse.

He isn’t theorizing. He published this article using browser automation and AI, with minimal manual clicking.

I don’t think the answer is CLI or canvas. It’s both. Designers are visual thinkers—that’s the cognitive foundation of the discipline, not a limitation to engineer away. Going fully terminal assumes we can be retrained to work without seeing what we’re making, or that the profession will attract people with entirely different skills.

What does look right is the plumbing underneath. Jeff on Paper.design’s MCP integration:

Its canvas is built natively on web standards — HTML and CSS — which means AI agents working through Paper’s MCP server can read and write design files directly. Tools like get_screenshot, get_jsx, write_html, and update_styles give Claude Code or Cursor direct read-write access to the design canvas.

HyperCard figured this out in 1987: direct manipulation on top of a scripting layer. The tools are finally catching up, with AI as the scripting engine.
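
For a concrete feel of that plumbing, here’s a rough sketch of one agent round-trip over those tools. The tool names come from the quote above; the callTool wrapper and the argument shapes are my assumptions, not Paper’s documented API.

```typescript
// One hypothetical round-trip over Paper's MCP tools. Tool names are
// from the article; callTool and the argument shapes are stand-ins
// for a real MCP client connected to Paper's server.
async function callTool(name: string, args: Record<string, unknown>): Promise<unknown> {
  console.log(`-> ${name}`, args); // stubbed so the sketch runs standalone
  return null;
}

async function iterateOnCanvas(frameId: string): Promise<void> {
  // Read the current design as code the model can reason about,
  const jsx = await callTool("get_jsx", { frame: frameId });
  console.log("frame as JSX:", jsx);

  // write a revision directly onto the canvas,
  await callTool("write_html", {
    frame: frameId,
    html: '<button class="primary">Save</button>',
  });

  // then screenshot the result so the agent can check its own work.
  await callTool("get_screenshot", { frame: frameId });
}

void iterateOnCanvas("frame-1");
```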

VS Code editor with a browser preview showing the "Abduzeedo Editor" app, displaying a portrait photo with a VHS glitch shader effect applied.

Vibe Designing with Bash Access

Vibe designing is the design equivalent of vibe coding — where bash scripts, AI tools, and CLI commands are finally replacing traditional GUI-only tools.

abduzeedo.com

Intercom’s design team published numbers that show what happens when agents take over the build. John Moriarty, writing for Fin Ideas:

At Intercom, how we design and build software is unrecognizable from 12 months ago. Our engineering team is already at the point where 90% of pull requests are authored by Claude Code, part of an internal initiative called 2x, where the explicit goal is to double productivity using AI.

When 90% of your pull requests are AI-authored, the designer’s job changes whether you update the title or not. Moriarty’s framework for what comes next:

As the rate of execution accelerates, the role of design becomes sharper. Agents can generate artefacts, but they cannot decide which problems matter, set intent, resolve trade-offs, or hold the bar for quality. Our craft shifts with that reality. […] Agents will own the middle, the build. Design’s value concentrates at the edges, deciding what to build and then determining whether the output is good enough.

Design’s value lands at the edges, not the middle, and Intercom is already adapting their infrastructure to match. They’ve repositioned their design system as what Moriarty calls “agentic infrastructure”:

In a world where Agents write most of the code, design systems become the infrastructure that protects quality. Components, libraries and guidelines are the foundation that Agents and teams build on top of. The better the system, the better everything produced. Strong systems allow quality to scale without adding review overhead.

This tracks with the argument that design systems are becoming AI infrastructure—and Intercom is running it in production. The design system is the quality control layer that lets agents ship at speed without designers reviewing every screen.

Moriarty’s full piece covers how they’re restructuring day-to-day work—moving designers into code, treating Figma as a whiteboard, running structured AI fluency training. Worth a full read.

A paintbrush dissolves into digital code lines and circuitry, with the text "How we design when the code writes itself" and "Fin/ideas" logo.

How we design when the code writes itself

AI isn’t just increasing the speed of building, it’s changing how we work

ideas.fin.ai