
280 posts tagged with “product design”

The designer’s role is widening at both ends of the product stack. Earlier, I linked to a post by Chad Johnson arguing designers gain influence by moving upstream: becoming orientation devices for the team, shaping the problem before it gets named. Daniel Mitev, writing for UX Collective, argues designers gain authorship by moving downstream, into the code:

The industry has been asking whether designers should code for over a decade. It was always the wrong question, or at least the wrong framing. It implied the barrier was technical: that designers lacked something fundamental, something that required years of study to acquire. Learn TypeScript. Understand the DOM. Earn your way across the divide. That wasn’t the barrier.

Mitev’s argument comes down to access. AI tooling compresses the translation layer and returns authorship to the designer:

What AI tooling gives back is authorship over the surface layer — the part users actually touch. A designer can now open the codebase, adjust how an element behaves, change how a transition feels, and verify the output against their own intent in real time. The easing curve gets set by the person who decided what it should feel like. The hover state gets defined by the person who thought through why it matters. That work no longer requires an interpreter.

He points at Alan’s “Everyone Can Build” initiative—283 pull requests shipped by non-engineers over two quarters, each merged after engineering review—as evidence it’s already happening.

Johnson and Mitev aren’t in conflict. They’re describing the same shift from opposite ends. The interpreters at the top of the product stack—PMs who owned problem framing and prioritization—are compressing. The interpreters at the bottom—frontend engineers translating intent into code—are compressing too. Both jobs return to the designer who understood the intent first.

The role widens. Some designers will gravitate to one end or the other. The designers who stretch the full range—orientation work and authorship—are working the widest version of the job.

A hand pressing an Enter key above a terminal showing a git commit command, with text reading "Designers finally have a say in the product they design."

Designers finally have a say in the product they design

AI didn’t teach designers to code. It gave them back the decisions that were always theirs.

uxdesign.cc

(Second link to Chad Johnson this week, but I just discovered his Substack, so ¯\_(ツ)_/¯.)

Chad Johnson, writing in his newsletter, argues that designer influence in product decisions comes from something other than craft output. He lays out the underlying dynamic:

Roadmaps are shaped less by who has the best ideas and more by who controls the framing of tradeoffs. Every roadmap decision is a bet: build this instead of that, now instead of later, for these users instead of those. Whoever makes the risk feel smaller tends to win.

So where does the designer fit? Johnson:

The most influential designers at startups do not position themselves as makers of screens. They act as orientation devices for the team. Orientation is the ability to help a group understand where they are, what matters, and what tradeoffs are real. It precedes prioritization, and it makes decision-making possible.

A designer whose output stops at screens is working on the wrong layer of the problem. Johnson lists the skills that back the orientation role:

Designers who shape direction invest in strategic framing, business literacy, and narrative construction. They learn to say no with evidence and to disagree without drama.

Johnson’s list is right as far as it goes. He understates one skill: legibility. A lot of design influence breaks down at translation. The thinking is strategic; the communication stays in design vocabulary. A sharp problem statement understandable only to other designers stays in the design review. Designers who change the conversation make their analysis readable in product and business terms without flattening it. That’s the same move Johnson gestures at when he describes “decision-ready artifacts” as “tools for comparison… designed to provoke judgment, not admiration.”

Johnson’s closer calls the future of design leadership “quieter, more rigorous, and deeply strategic.” That’s right. It’s also a role that depends on being read by the people making the call.

Large-scale flowchart on a white wall with quirky decision questions including "Have you ever missed an airplane flight?" and "Are you good with names?"

Why Most Designers Will Never Influence Product Roadmaps

A practical explanation of how roadmap decisions are really made, and how designers can gain influence

chadsnewsletter.substack.com

Two podcast conversations with frontier-lab design leaders on what designing at an AI lab looks like day-to-day. I previously linked to Lenny Rachitsky’s interview with Jenny Wen, head of design for Claude, where she described a redistribution of designer hours: less mocking, more pairing with engineers, a sliver of direct implementation. The activities themselves still look like design.

Ian Silber, head of product design at OpenAI, on Michael Riddering’s Dive Club, describes work that doesn’t fit the same list:

Designers working on this are hopefully spending a lot less time in Figma or whatever tool you use to draw pixels, and more time really thinking about how you interact with this thing, and the fact that the model really is the core product.

Silber’s concrete example is onboarding. Instead of building a first-run tutorial, his team shapes what the model already knows about the person:

We have this super intelligent model that could probably do a much better job trying to understand what this person’s goals are […] We’re really stripping back a lot of what you might traditionally do and trying to say, “Well, actually […] let’s think about like how we should give this context to the model that this person is brand new and they might need some handholding.”

The traditional response adds UI around the problem. Silber’s team takes it out and gives the model enough context to meet the user where they are.

That kind of work needs its own scaffolding, and OpenAI is building it:

We have a whole system called the Dynamic User Interface Library, which allows us to design things that the model can then interpret.

Primitives the model composes at runtime, shaped by system prompts and context rather than drawn flow by flow. Wen is describing a redistribution of designer hours inside activities that still look recognizable. Silber is describing activities that don’t quite have names yet. And yes, that is still design.

Ian Silber - What it’s like designing at OpenAI

If you’re like me you gotta be curious... what’s it like designing at OpenAI?

youtube.com

The gap between an AI-produced prototype and a shippable product has a shape. Most of us assume it’s the visual 20%: the polish AI output drifts on. Chad Johnson’s case is that the 20% is the trivial part, and the real gap sits upstream of everything visible.

Chad Johnson, writing in his newsletter:

The deeper issue was that nobody had asked whether a prototype was even the right artifact to produce at that stage. The PM had made three assumptions about user intent that we hadn’t validated. They’d skipped past a critical question about whether this flow needed to exist at all, or whether the real problem was upstream in the information architecture. They’d built a beautiful answer to a question nobody had confirmed was worth asking. That’s the part that stuck with me. Not the visual gaps. The thinking gaps.

That lines up with what I’ve been calling C+ out of the box: artifacts that read well and seem credible until you apply critical thinking. Johnson gets specific about what’s actually missing, and none of it is visual: the assumption nobody validated, the upstream question nobody asked. The interface was fine. The thinking was absent from the (probably) AI-generated PRD.

Johnson again:

…design production got democratized, but design judgment didn’t. Anyone can make something now. Almost nobody new learned how to think well about what should be made, why, and for whom. And that gap, between what’s possible to produce and what’s actually been thought through, is now the entire playing field for our profession. Designers aren’t becoming obsolete. They’re becoming stewards.

Judgment still takes years to build, and no tool compresses that.

The last 20% is rarely the gap that matters. The first question—should we build this?—almost always is. Very few teams have the muscle to ask it.

Abstract digital art featuring curved, layered surfaces with fine parallel lines in warm orange, red, and deep blue gradients.

The Last 20% and Who’s Asking Why?

Everyone can build now. Almost nobody stops to ask if they should.

chadsnewsletter.substack.com

Tara Tan surveyed more than a dozen AI design tools for The Review. Her field audit sits alongside the design-process compression argument:

In working with these tools, one insight emerged for me: the tools that understand your design system produce better output than the ones that don’t. […] The competitive moat in this market is not generative quality, which is commoditizing fast. The moat is the design system graph: the tokens, components, spacing scales, typography rules, and conventions that make your product look like your product and not a generic template. Whoever makes that system machine-readable for agents will win the enterprise.

That’s the operational reason my proposal for an agent design team hinges on a rock-solid design system. What distinguishes output across the tools Tan surveyed is whether the generator respects your existing design system or treats every request as a fresh mood board.
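Tan’s “design system graph” already has a candidate machine-readable form. The W3C Design Tokens Community Group draft defines tokens as plain JSON that any tool or agent can parse. A minimal sketch, with hypothetical token names invented for illustration:

```json
{
  "color": {
    "brand-primary": { "$type": "color", "$value": "#0a5cff" }
  },
  "space": {
    "sm": { "$type": "dimension", "$value": "8px" },
    "md": { "$type": "dimension", "$value": "16px" }
  },
  "type": {
    "body": { "$type": "fontFamily", "$value": ["Inter", "sans-serif"] }
  }
}
```

A generator that resolves `space.md` instead of inventing 14px of padding is the difference Tan is pointing at between your product and a generic template.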

Tan’s other finding is the role-shift:

The same shift is happening in design. At Uber, Ian Guisard didn’t stop being a design systems lead when uSpec automated his spec-writing. His job shifted from producing documentation to encoding expertise, writing agent skills, defining validation rules, deciding what “correct” means for each component across seven platforms. The human became the system designer, not the system operator. […] The canary is singing. And the song is about the work shifting from execution to judgment, from operating the system to designing the system itself.

Same title, different job. Ian Guisard’s taste still matters; it lives in the skills and validation rules now, not the deliverables. That’s “follow the skill, not the role” made concrete. Guisard used to write specs; now he writes the rules the system follows to validate them.

The infrastructure is catching up to the process. Tan’s implicit prescription is straightforward: make the design system machine-readable, win the enterprise. Some of that tooling is already out in the open. Southleft’s Figma Console MCP (which Uber’s uSpec is built on) lets agents operate on tokens and components without a custom platform.

But tooling alone isn’t enough. Most of us aren’t Uber. The path for teams without a dedicated design systems lead still needs someone to do the work Guisard did: encoding the expertise and defining what “correct” looks like across platforms. That’s where the next round of tooling needs to land.

"The Design Agent Landscape" diagram categorizing AI design tools into three groups: Agent-first canvas (Pencil, Paper, OpenPencil), Design system-first (Figma MCP, Console MCP, Google Stitch), and Code-native (Subframe, MagicPath, Tempo, Polymet, Magic Patterns, Lovable, Bolt, v0, Replit).

The Design-Build Loop

Design is where AI product workflows meet their hardest test: an audience that will always, primarily, be human. A look at the tools, teams, and infrastructure emerging around AI design agents.

thereview.strangevc.com
A sleek high-speed bullet train with glowing headlights crossing a bridge through dense fog over a misty landscape.

Acceleration Is Not Automation

I’ve been wandering the wilderness to understand where the software design profession is going. Via this blog and my newsletter, I’ve been exploring the possibilities by reading, commenting, and writing. Many other designers are in the same boat, with Erika Flowers’s Zero Vector design methodology being the most defined. Kudos to her for being one of the first—if not the first—to plant the flag.

Directionally, Flowers is right. But for me, working in a team and on B2B software, it feels too simplistic and ignores the realities of working with customers and counterparts in product management and engineering. (That’s her whole point: one person to do it all, no handoff.)

The destination is within view, but hazy and distant. The path to get there is unclear, like driving through soupy fog when all you can see is your headlights reflecting off the mist.

Every few weeks, another essay or YouTube video announces that AI has killed craft. One of my favorite designers writing about design, Christopher Butler, goes the other way:

No knowledge I possess about design—the incorporeal understanding that makes what I create better than an off-the-shelf template or something done by someone without my experience—is made irrelevant by AI. Nor is it contradicted by my use of AI tools. Structure still communicates before content. Visual hierarchy still guides attention. Negative space still creates rhythm. These principles don’t vanish because I’m working through AI rather than directly manipulating pixels. The craft migrates to a different level of abstraction. But it remains craft.

Butler’s claim is that the principles don’t vanish; they operate at a higher altitude. The unfinished part is naming where that altitude actually is. For product designers, it’s concept and hierarchy: the decisions that require knowing the user and the stake someone is willing to carry. The generated layout and the choice of components are still outputs. What’s left of design is the judgment that picks between them.

Butler’s sharper line is the binary between consumption and practice:

Someone who generates an interface with AI and calls it done isn’t practicing craft. They’re consuming convenience. Someone who generates an interface, inspects it, questions what it’s actually communicating, refines the structure, generates again, compares variations, understands why one serves the user better than another—they’re practicing craft. They’re building knowledge through iteration. The tool doesn’t determine whether you’re working with craft. Your approach does.

That’s Jiro Ono’s shokunin applied to interfaces: craft as lifelong practice, not manual labor. A camera doesn’t take a picture, and a model doesn’t make a design. That decision is the craft.

Butler’s argument reassures me. What worries me is how optional that decision is becoming. The output already looks finished. The designers who keep asking why one version serves the user better than another will still be designers in five years. The rest may still have jobs, as operators of a tool doing the work their taste used to do.

Close-up of a vibrant fingerprint with swirling ridge patterns in orange, red, blue, and yellow iridescent colors with glittery highlights.

Craft is Untouchable

I have a vested interest in the title of this piece being true. I’ve spent decades developing craft—not just making things, but understanding systems, seeing patterns, making judgments that can’t be reduced to prompts. If AI eliminates the need for that expertise, I’m in trouble.

chrbutler.com

Tommaso Nervegna writes about LinkedIn killing its Associate Product Manager program and replacing it with a new role called the “Full Stack Builder.” The structural bet is interesting, but the finding from their rollout is what matters:

The expectation was that AI would be a great equalizer: juniors would benefit most because AI would close their skill gaps, while seniors would resist the change. The reality was the opposite. Top performers adopted AI fastest and derived the most value from it. Why? Because they had the judgment and experience to know what to ask for, how to evaluate the output, and where to apply it for maximum leverage.

That tracks with everything I’ve predicted, experienced, and seen. The skill that makes AI useful is knowing what good looks like before and after the model generates something. That ability comes from reps.

Nervegna distills LinkedIn CPO Tomer Cohen’s thesis to five skills AI cannot automate:

The five skills that AI cannot automate, according to Cohen, are Vision, Empathy, Communication, Creativity, and Judgment. As he puts it: “I’m working hard to automate everything else.”

The operational version:

The critical insight: the builder orchestrates the agents. The agents execute. Judgment stays human. This is not about replacing people with AI. It’s about compressing the team needed to ship something meaningful from fifteen people to three - or even one.

I’ve been calling this the orchestrator gap: the distance between a designer who uses AI and one who directs it. LinkedIn just gave it a job title. I think we will see more companies go this way. Whether or not it’s a good idea remains to be seen.

A Renaissance-era man studies blueprint sketches on a glowing drafting table while a giant mechanical lobster draws on the plans with an ornate pen.

The Full Stack Builder: The End of the Design Process as We Know It

The double diamond is a liability. Engineers ship faster than designers can explore. The PM role is dissolving and the three profiles that will survive this era look nothing like who we’ve been hiring

nervegna.substack.com

I’ve watched design team values die in a Confluence page. The offsite happens, the Post-Its get transcribed, the principles get written up with care, and then everyone goes back to their desks and ships exactly the way they did before. I’ve seen it with product principles and brand values too. The deck gets built, implementation starts, and the deck gets forgotten.

Vitaly Friedman, writing for Smashing Magazine, on why this matters more than ever:

We often see design principles as rigid guidelines that dictate design decisions. But actually, they are an incredible tool to rally the team around a shared purpose and document the values and beliefs that an organization embodies. They align teams and inform decision-making. They also keep us afloat amidst all the hype, big assumptions, desire for faster delivery, and AI workslop.

Friedman again:

In times when we can generate any passable design and code within minutes, we need to decide better what’s worth designing and building — and what values we want our products to embody. It’s similar to voice and tone. You might not design it intentionally, but then end users will define it for you. And so, without principles, many company initiatives are random, sporadic, ad-hoc — and feel vague, inconsistent, or simply dull to the outside world.

You might not write principles intentionally, but your product will have them anyway. The question is whether you chose them or inherited them by default.

Friedman closes with the part most teams skip:

Creating principles is only a small portion of the work; most work is about effectively sharing and embedding them. It’s difficult to get anywhere without finding ways to make design principles a default — by revisiting settings, templates, naming conventions, and output. Principles help avoid endless discussions that often stem from personal preferences or taste. But design should not be a matter of taste; it must be guided by our goals and values.

Creating principles feels productive. But alignment without embedding is a Confluence page nobody opens twice. Principles have to show up in the Figma component library, the ticket template, the review rubric. They have to be repeated until they’re ingrained. They have to become the path of least resistance.

Smashing Magazine article title card: "A Practical Guide To Design Principles" by Vitaly Friedman, tagged Design, UX, UI.

A Practical Guide To Design Principles — Smashing Magazine

Design principles with references, examples, and methods for quick look-up. Brought to you by Design Patterns For AI Interfaces, friendly video courses on UX and design patterns by Vitaly.

smashingmagazine.com

Dan Saffer applies mid-century existentialism to the question of what “meaning” actually requires of the people building digital products, and the result is unusually rigorous. His sharpest move is applying Sartre’s concept of “projects” to AI tools:

When someone uses ChatGPT to write an essay, the Sartrean question is: whose project is this really? If the user is exploring ideas and using the tool as a thinking partner, they’re taking it up into their own meaning-making project. But if they’re pasting in a prompt and submitting the output unchanged, the system has effectively become the meaning-maker, and the user has become a delivery mechanism. The same tool can function either way. The design question is which relationship the system encourages.

Saffer connects this to Camus and the problem of frictionless design:

When every friction is removed in the name of efficiency, the activity can be hollowed out. There is nothing left to push against, and meaning drains away. This is something that AI systems have become exceedingly good at. Push the sparkle button, the task is done for you, and you have learned nothing and enjoyed nothing.

The HCI/UX field spent decades optimizing for friction removal. Saffer’s argument is that some friction is where the meaning lives: design the struggle away and you don’t help the user, you empty the experience.

Saffer’s closing:

This sensibility insists that users are not information processors, not customers, not eyeballs, not tapping fingers, and not data sources. They are meaning-making beings whose freedom and dignity are at stake in every interaction. It asks designers to take seriously the existential weight of what they build. The systems we design become part of the conditions of human existence, shaping what people can choose, what they can see, who they can become.

Saffer covers Sartre, Camus, Kierkegaard, Heidegger, and de Beauvoir in the full piece, each applied to contemporary design problems. It’s a lot, and it’s all good.

Collage of five black-and-white portrait photos of mid-20th century philosophers, including one woman and four men, one holding a pipe.

The Existential Designer: Facilitating Meaning Through Interaction

Designers like to talk about making meaningful products or using the tools of design to make meaning.

odannyboy.medium.com

Yours truly got quoted in Fast Company. Grace Snelling, surveying the industry reaction to Lenny Rachitsky’s TrueUp hiring data, pulled a comment I left under Rachitsky’s original Twitter post:

Designers have designed themselves out of the equation because of design systems. But, IMHO, the secret sauce has never been the UI. It was the workflows and looking across the experience holistically.

Let me expand on that. The UI has always been the easiest part of product design. Design systems made that even more true. What separates a great product from a mediocre one is understanding our users deeply enough to create experiences that actually delight them. That understanding is the work AI can’t do, and it’s the work too many teams were already skipping before any standoff started.

The data behind the standoff: Rachitsky’s analysis of TrueUp’s job market tracker shows design roles have been flat since early 2023 while PM and engineering roles surged. (Quick side note: this data is for tech startups, not the general tech industry or design industry at large.) His theory:

I don’t know exactly what’s going on here, but it does feel AI-related. […] Unlike PM and eng, which started growing in 2024 (two years post-ChatGPT), design didn’t. If I had to venture a theory, I’d say that because AI is allowing engineers to move so quickly, there’s less opportunity—and less desire—to involve the traditional design process.

Claire Vo, founder of ChatPRD, puts the harder version of why:

Often design teams & designers are the most resistant to change org in the EPD triad, with highly vocal AI opponents, and little skill or interest in the art of campaigning for influence or resources. […] If a PM or engineer can get 85% there with tailwind and a dream, you better come to the table with more than ‘I represent the user.’

“I represent the user” was never enough on its own. It just went unchallenged when designers were the only ones who could ship polished interfaces.

Anthropic’s chief design officer Joel Lewenstein on where the EPD triad actually lands:

I think there’s a lot of role collapse at the very beginning, but there are still pretty clear swim lanes as things get into the later stages of product development. […] It’s like a Venn diagram that’s coming closer together.

Three hands pointing toward a central point on a red background, surrounded by colorful lightning bolt shapes in green, blue, and pink.

Why are designers, engineers, and product managers in a ‘three-way standoff’?

New data has the design community in a debate about the future of their jobs.

fastcompany.com

Nate Parrott, a product designer at Anthropic, in an interview with Ryan Mather for AI Design Field Guide:

More Google Docs than you’d think. More Slack posts than you’d think. I meant what I said earlier: I think that this is the era of designers who design with words more so than designing with pixels.

Parrott describes a content design team whose job is making alien concepts legible:

We have several people at the company on the design team whose job is content design. Their job is basically to look at concepts which are very alien, and figure out how to make them legible to human beings. They don’t draw any pixels, but their work is really important because they are literally thinking about the words we use to describe and the mental models we expect people to put on that will make this stuff work.

The Figma work, Parrott says, is “the easy part.” He uses Anthropic’s design system, drops in components, and moves on. The hard work is upstream: expressing the ideas, figuring out the right language, talking to users. The production of screens has become the smallest slice of the job.

Jenny Wen described designers at Anthropic shipping code, prototyping against the live model, stretching into PM territory. Parrott is describing the same shift from a different angle. The deliverable used to be the mockup. Now the deliverable is the thinking that precedes it.

Vibrant abstract illustration of stylized flowers with glowing, blurred edges in bold red, yellow, orange, pink, and blue tones against a soft gradient background.

AI Design Field Guide

Learn techniques from the designers behind OpenAI, Anthropic, Figma, Notion & more

aidesignfieldguide.com

The first time I wrote about Jenny Wen, I pushed back. She said the design process was dead, and I argued the proportions had shifted but the process itself was intact. I also noted a context problem: her “ship fast, iterate publicly” approach makes sense for greenfield AI products at Anthropic but gets harder with established install bases.

Wen has been making the rounds, and in a new interview I found a lot to nod along to.

Jenny Wen, speaking on Tommy Geoco’s State of Play:

Often design needs to follow what the model is capable of and design from there, as opposed to starting from a design vision first. I think that can feel tough as a designer because you’re like, oh, I want to be design-led, we should be designing it first and then the technology should follow. But I think that’s just the reality of working at a research lab where the technology is emergent and you have to sort of decide what to do with it.

“Design follows the model” is an interesting phrase from a design leader. It inverts the dogma that design should lead and engineering should follow. But Wen isn’t being defeatist. She’s describing a practical reality at a leading AI lab, where the models’ capabilities are changing faster than any roadmap can account for.

This shows up concretely in how her team works:

The big thing is designers are implementing code, through using Claude Code. That has been the biggest difference from working at Anthropic versus back when I worked at Figma. […] Even today, we were reporting some bugs and some quality issues, and one of the designers was like, “Cool, let me just fix them.” And that was cool to just not have to tag an engineer for them to do anything.

A designer casually fixing production bugs without tagging an engineer. Just another Tuesday at Anthropic.

Geoco’s summary of Wen’s argument crystallizes something we’ve all been thinking quietly about:

She said, having taste versus being able to execute are two completely different things. They’re usually bundled together, but they don’t have to be. And in a world where AI can increasingly execute, the question becomes, and it’s kind of uncomfortable, do you actually have good taste or are you just pushing pixels around?

That’s the thread tying all of this together. When designers are closer to the product, fixing bugs in production, prototyping against the live model, the judgment they’re applying isn’t visual. It’s product sense: knowing which of those 12 options is worth shipping, which edge case will break trust, when the model’s output is good enough for real users. That’s the taste Wen is describing, and it has very little to do with pixels.

A lot of designers have been coasting on execution skills that felt like taste. They debate corner radii and whether a button’s label is perfectly centered, trading amateur-vs-pro designer memes. Who cares! AI is about to make the difference visible.

The New Era of UX Designers

Jenny Wen led design on FigJam, one of the most playful tools to hit design in a decade. Now she’s at Anthropic designing Claude. Not just the model, but the product that millions use daily.

youtube.com

Stripe design manager Kris Puckett, speaking on Michael Riddering’s Dive Club, spent the first half of the conversation demoing metal shaders, custom ocean animations, and a full iOS reading app he built with Claude Code. Then he stopped himself:

AI native has to be beyond just “I made a really cool shader” or “I made this dither effect that every other person is making.” I was doing that today and then I was like, “Oh my gosh, this is… why am I doing this? There’s a hundred of these that are way better than what I’m making right now.”

So what does AI-native design actually look like? Puckett’s answer is “soul”—the quality that makes work feel specifically, unmistakably yours:

I think what people are going to be desperate for is more of that human side of things. They’re going to be longing for […] an era they’ve never experienced because they’re younger, that MySpace generation where your MySpace page was deeply personal to you. My MySpace page was complete custom Kris Puckett perfection at that time. And I think that we’re going to want to see that come back. And I think people are going to want more of those—your portfolio looks and feels like you.

“Soul” is doing a lot of work as a concept there. What Puckett is describing sounds a lot like taste—the ability to make something that feels intentional and specific rather than procedurally generated. His workflow backs that up. Ever the contrarian, he explicitly rejects the “let the agent run” approach:

I want off that cycle. I do not want to be riding that bike race with anyone else because that’s not how I view these things. They are a force multiplier, but I want them to be focused. I want it to be something that I feel is still authentically me.

What unlocked all of this for Puckett wasn’t technical skill—he’s a designer, not an engineer. It was admitting “I don’t know” and starting anyway. He’d been dreaming of building his own software for 20 years. Claude Code’s blinking cursor was enough to get him started.

Kris Puckett - Becoming an AI-native designer

Today’s episode is with Kris Puckett (https://x.com/krispuckett) who has led design at Mercury, Dropbox, and now as a design manager at Stripe. His journey is the perfect example of what it looks like to lean into this moment in time with AI.

youtube.com

Figma is opening its canvas as a writeable surface for AI agents. Matt Colyer, product director at Figma, on why this matters:

Design decisions—from color palettes and button padding, to typography and interactivity—have always defined how products take shape. No matter how small, those decisions add up. They make your product and user experience stand out from the rest. To date, AI agents haven’t had this context, which is why so many designs created by AI often feel unfamiliar and generic.

The fix is beefing up skills files by encoding a team’s design decisions, conventions, and sequencing rules. Agents read them before they touch the canvas. The use_figma tool lets Claude Code, Codex, and other MCP clients create and update assets tied to your design system. Colyer on what that changes:

Your conventions are no longer static documentation. They become rules agents follow as they work—applied through components, variables, and the structure you’ve already defined.

The detail worth paying attention to is what Colyer describes as a self-healing loop. When an agent generates a screen, it screenshots the result, checks it against the design system, and iterates. Because it’s working with real components and auto layout, those corrections compound through the system itself, not just the pixels on screen.
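The loop Colyer describes can be sketched in miniature. This is an illustration only, not Figma’s implementation: the token names and helper functions are made up, and the “screenshot check” is simulated as a style audit so the generate-check-correct cycle is visible.

```typescript
// A toy version of the self-healing loop: generate, check the output against
// the design system, correct, and repeat until the audit comes back clean.

type Style = { property: string; value: string };

// Hypothetical design system: the token values each property is allowed to use.
const tokens: Record<string, string[]> = {
  color: ["var(--color-primary)", "var(--color-muted)"],
  padding: ["var(--space-sm)", "var(--space-md)"],
};

// The "screenshot check": flag any style that bypasses the token system.
function audit(styles: Style[]): Style[] {
  return styles.filter(
    (s) => tokens[s.property] && !tokens[s.property].includes(s.value)
  );
}

// One correction pass: snap each violation back to a real token
// (naively, the first one — a real agent would pick the closest match).
function heal(styles: Style[]): Style[] {
  return styles.map((s) =>
    tokens[s.property] && !tokens[s.property].includes(s.value)
      ? { ...s, value: tokens[s.property][0] }
      : s
  );
}

// Iterate until nothing fails the audit, as the agent loop would.
function selfHeal(styles: Style[], maxPasses = 3): Style[] {
  let current = styles;
  for (let i = 0; i < maxPasses && audit(current).length > 0; i++) {
    current = heal(current);
  }
  return current;
}

const generated: Style[] = [
  { property: "color", value: "#ff6600" },          // hardcoded — a violation
  { property: "padding", value: "var(--space-md)" }, // already tokenized
];
const healed = selfHeal(generated);
```

The point of the sketch is the compounding Colyer mentions: because corrections resolve to components and variables rather than raw values, each pass moves the output back onto the system itself.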

It’s free during beta, with plans to move to a paid API. Figma is finally joining the party; Subframe, Paper, and Pencil already offer this workflow.

Terminal window titled "earthling — zsh" showing an AI prompt to build a component set from a button.tsx file, with output confirming 72 button variants created, overlaid on a Figma canvas with UI components.

Agents, Meet the Figma Canvas

Starting today, you can use AI agents to design directly on the Figma canvas. And with skills, you can guide agents with context about your team’s decisions and intent.

figma.com

Gui Seiz designs at Figma. His team uses Claude Code to bridge design and code. And he still reaches for the canvas when precision matters.

Seiz, speaking on Claire Vo’s How I AI podcast:

I don’t think we’re there yet in general with these code tools in terms of the precision editing that you want to do. […] I think still the gold standard for me is just being able to drag stuff around. And you can do a lot with a click that would take you a hundred words to write and to really precisely nail. No one wants to prompt for the exact hex code or the shade of yellow and that kind of stuff. That’s just easier to just quickly do and directly manipulate.

Seiz isn’t anti-AI. His team pulls production code into Figma via MCP, edits it visually, and pushes it back to the codebase. He’s bullish on what that does to the old workflow:

It’s definitely changed our workflows in a way that it’s really blown up what a workflow even is. Before, for the majority of our careers, we’ve had a very linear, agreed-upon workflow where you increase fidelity as you go on. Because it’s really expensive to work in code, and it’s really cheap just to trade ideas and sketch them out. But AI basically collapsed that, and it’s just as cheap to riff in code as it is to riff in design.

The cost of exploration collapsed. The need for direct manipulation didn’t. Both can be true.

How Figma engineers sync designs with Claude Code and Codex

Most teams are still passing static design files back and forth, and most Figma files are already out of date by the time they reach engineering. Gui Seiz (designer) and Alex Kern (engineer) from Figma walk through the exact workflow their team uses to bridge that gap with AI, live onscreen. They…

youtube.com

I published an article about the design talent crisis in Fast Company! The setup is ground I’ve covered extensively on this blog. But I draw a connection to the trades—the construction industry has a solution the design industry could learn from.

In the article, I write:

Construction has been running formal apprenticeship programs since the National Apprenticeship Act of 1937, and informally for centuries before that. The Department of Labor’s Registered Apprenticeship Programs enrolled roughly 940,000 people nationwide in fiscal year 2024. These aren’t casual internships. They’re structured, multi-year pathways that pair inexperienced workers with seasoned professionals and build skills through graduated responsibility. The retention numbers tell you everything: Apprenticeship programs report a 93% employee retention rate. For every $100 employers invest, they see an estimated $144 return.

The contractors I work with don’t debate whether to invest in their pipeline during a downturn. They know that if they stop training apprentices, they won’t have journeymen in four years, and they won’t have master tradespeople in 10. The pipeline is the business.

There’s a three-point plan to dig us out of this hole. But of course, it requires commitments from design leaders and the C-suite:

  1. Stop tying junior hiring to project demand
  2. Formalize mentorship
  3. Accept the short-term cost

There’s more in the article. Please give it a read and share!

Smiling woman with short hair and round glasses looking down at a tablet, wearing a floral patterned blouse, with FC Executive Board branding.

Hire junior designers today or risk a broken pipeline

The tech industry keeps telling itself the pipeline will refill on its own. Construction figured out a century ago why that thinking is wrong.

fastcompany.com

Forty-four UI panels generated in ten minutes, each one grounded in real customer research. Jason Cyr, writing for The Human in the Loop, on what happened when his team pointed Claude Code at Cisco’s design system:

Last week, one of my design directors pointed Claude Code at Magnetic and asked it to build a security detection prototype. Real components, real navigation, theme switching, working admin panels — running in ten minutes. Then he connected it to our research repository and it built 44 detection detail panels, every design decision tracing back to something a real customer said. That happened because the AI had access to our design system.

Cyr’s takeaway: the design system was the design review.

Your design system is your leverage. It’s how your taste scales. The teams that invest here will see their design decisions show up in every agent-generated output, automatically. The teams that don’t will spend all their time cleaning up messes that a good system would have prevented.

Monday.com arrived at the same conclusion from the engineering side. They built a design-system MCP after their agents kept hardcoding colors and ignoring typography tokens.

Cyr doesn’t shy away from who this leaves behind, either: designers whose value lives entirely in production. “Not because they’re bad at their jobs — but because AI just got very good at theirs.”

Title card reading "Design Teams in the Agentic Era" with the subtitle "A manifesto for what comes next." on a dark background.

Design Teams in the Agentic Era

My thoughts on what comes next

jasoncyr.substack.com

David Hoang, writing for Proof of Concept, proposes a squad model for tackling a company’s hardest, most ambiguous problems:

The squad: a forward deployed engineer, a forward deployed designer, and a researcher. Three people. That’s it. They operate like a startup-within-the-company, deployed against a specific, ambiguous problem. […] This is a product discovery team with teeth — they don’t just produce insights and hand them off. They produce working prototypes and validated direction. […] Three people don’t need standups, retros, or Jira boards. They need a shared problem and a whiteboard.

No PM. The shared problem replaces the roadmap, and a researcher replaces the product manager. Hoang borrows the concept from Palantir’s Forward Deployed Engineers and extends it to design. His argument: AI tools have given designers enough technical leverage to prototype at engineering speed, so the designer who finds the problem can build the first cut of the solution.

A three-person team with AI tools in 2026 can cover the ground that used to require a ten-person cross-functional team. That’s the direct result of collapsing the build cost of exploration.

Hoang argues that the rotation model matters as much as the squad composition. Four to eight weeks, then disband. The team doesn’t calcify into a feature factory. Designers rotate through the company’s hardest problems instead of sitting on the same product team filing tickets for years.

My counter: designers who sit in the same problem space gain deeper knowledge and context over time. Rotation could be counterproductive if not handled deliberately.

Hand-drawn Venn diagram showing three overlapping circles labeled Researcher, Design Engineer, and GTM, with the center intersection labeled "Forward Deployed Designer."

Forward deployed designer

In the early 2010s, Palantir coined a role that didn’t exist before: the Forward Deployed Software Engineer. These weren’t engineers building features on a roadmap. They were engineers embedded directly at client companies — sitting with analysts, operators, and decision-makers — to discover the problem and build the solution in the same motion. The role spread. Databricks, Scale AI, and OpenAI adopted variations.

proofofconcept.pub

I’ve argued that design tools should be canvas-first, not chatbox-first. Jeff, writing in Abduzeedo, makes the case for the opposite:

Designers have always borrowed from developers. Version control, component systems, token-based design — these ideas crossed the aisle from engineering and reshaped how visual work gets done. Vibe designing follows the same logic. Instead of opening Figma and reaching for a drag-and-drop panel, designers drop into the terminal. They prompt an AI model directly from the CLI, pipe the output into a file, and iterate without ever touching a mouse.

He isn’t theorizing. He published this article using browser automation and AI, with minimal manual clicking.

I don’t think the answer is CLI or canvas. It’s both. Designers are visual thinkers—that’s the cognitive foundation of the discipline, not a limitation to engineer away. Going fully terminal assumes we can be retrained to work without seeing what we’re making, or that the profession will attract people with entirely different skills.

What does look right is the plumbing underneath. Jeff on Paper.design’s MCP integration:

Its canvas is built natively on web standards — HTML and CSS — which means AI agents working through Paper’s MCP server can read and write design files directly. Tools like get_screenshot, get_jsx, write_html, and update_styles give Claude Code or Cursor direct read-write access to the design canvas.
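That read-write access can be pictured as a simple cycle. The sketch below is illustrative, not Paper’s actual API: the real get_jsx and update_styles calls go through Paper’s MCP server, so a plain object stands in for the canvas here to make the agent’s read-modify-write turn visible.

```typescript
// A stand-in canvas: because Paper's canvas is HTML and CSS, an agent's
// working state can be modeled as markup plus style rules.
type Canvas = { html: string; styles: Record<string, string> };

const canvas: Canvas = {
  html: '<button class="cta">Buy</button>',
  styles: { ".cta": "background: #888" },
};

// Read step — the agent inspects current markup, as a get_jsx call would.
function getHtml(c: Canvas): string {
  return c.html;
}

// Write step — the agent patches a style rule, as update_styles would,
// returning a new canvas state rather than mutating in place.
function updateStyles(c: Canvas, selector: string, rule: string): Canvas {
  return { ...c, styles: { ...c.styles, [selector]: rule } };
}

// One agent turn: read, decide, write back.
const markup = getHtml(canvas);
const after = updateStyles(canvas, ".cta", "background: var(--color-primary)");
```

The design choice worth noting: because the canvas is web-standard HTML and CSS rather than a proprietary scene graph, the agent needs no translation layer between what it reads and what it writes.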

HyperCard figured this out in 1987: direct manipulation on top of a scripting layer. The tools are finally catching up, with AI as the scripting engine.

VS Code editor with a browser preview showing the "Abduzeedo Editor" app, displaying a portrait photo with a VHS glitch shader effect applied.

Vibe Designing with Bash Access

Vibe designing is the design equivalent of vibe coding — where bash scripts, AI tools, and CLI commands are finally replacing traditional GUI-only tools.

abduzeedo.com

Intercom’s design team published numbers that show what happens when agents take over the build. John Moriarty, writing for Fin Ideas:

At Intercom, how we design and build software is unrecognizable from 12 months ago. Our engineering team is already at the point where 90% of pull requests are authored by Claude Code, part of an internal initiative called 2x, where the explicit goal is to double productivity using AI.

When 90% of your pull requests are AI-authored, the designer’s job changes whether you update the job title or not. Moriarty’s framework for what comes next:

As the rate of execution accelerates, the role of design becomes sharper. Agents can generate artefacts, but they cannot decide which problems matter, set intent, resolve trade-offs, or hold the bar for quality. Our craft shifts with that reality. […] Agents will own the middle, the build. Design’s value concentrates at the edges, deciding what to build and then determining whether the output is good enough.

Design’s value lands at the edges, not the middle, and Intercom is already adapting their infrastructure to match. They’ve repositioned their design system as what Moriarty calls “agentic infrastructure”:

In a world where Agents write most of the code, design systems become the infrastructure that protects quality. Components, libraries and guidelines are the foundation that Agents and teams build on top of. The better the system, the better everything produced. Strong systems allow quality to scale without adding review overhead.

This tracks with the argument that design systems are becoming AI infrastructure—and Intercom is running it in production. The design system is the quality control layer that lets agents ship at speed without designers reviewing every screen.

Moriarty’s full piece covers how they’re restructuring day-to-day work—moving designers into code, treating Figma as a whiteboard, running structured AI fluency training. Worth a full read.

A paintbrush dissolves into digital code lines and circuitry, with the text "How we design when the code writes itself" and "Fin/ideas" logo.

How we design when the code writes itself

AI isn’t just increasing the speed of building, it’s changing how we work

ideas.fin.ai

Proprioception is the body’s sense of where its parts are in space. Marcin Wichary borrows the term for software that knows where its hardware lives: where the buttons are, where the ports are, where the camera is. His proposed design principle:

The rule here would be, perhaps, a version of “show, don’t tell.” We could call it “point to, don’t describe.” (Describing what to do means cognitive effort to read the words and understand them. An arrow pointing to something should be easier to process.)

Wichary walks through a series of examples, mostly from Apple: the Apple Pay animation that points at the side button, the iPad camera prompt that points to the physical lens, Dynamic Island camouflaging missing pixels as a functional UI element. The one that caught my eye is the device Simulator matching the physical dimensions of your actual phone on-screen and staying accurate even when you change the display density. Reminds me of one of the earliest selling points of the Mac’s 72dpi—it matches the real world: 72 points to an inch.

The MacBook Neo is where Wichary applies the principle and finds Apple falling short. The new model has two USB-C ports with different speeds, and macOS notifies you with text:

I think this is nice! But it’s also just words. It feels a bit cheap. macOS knows exactly where the ports are, and could have thrown a little warning in the lower left corner of the screen, complete with an onscreen animation of swapping the plug to the other port – similar to what “double clicking to pay” does, so you wouldn’t have to look to the side to locate the socket first.

Close-up of a MacBook Touch Bar displaying "Unlock with Touch ID →" above the minus, plus, equals, and delete keys.

Software proprioception

A blog about software craft and quality

unsung.aresluna.org

Thu Do set up Figma MCP + Claude Code and audited her entire design system in 10 minutes. The setup took 4 hours. But the reframe she arrives at matters more than the tooling:

Design tokens used to be “nice to have” for consistency. Now they’re infrastructure for AI-to-code-to-design workflows. AI agents read tokens to understand design intent. Proper tokenization = accurate code generation. Inconsistent systems = AI making wrong assumptions.

The bar for design systems just shifted from visual consistency to machine readability.
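What “machine readable” means in practice can be sketched concretely. The token names and values below are invented for illustration: the idea is that tokens become a typed map an agent resolves against, rather than prose documentation it has to guess from, and a trivial lint catches the hardcoded values that cause wrong assumptions.

```typescript
// Hypothetical token set — semantic names mapped to concrete values.
const designTokens = {
  "color.brand.primary": "#4f46e5",
  "color.text.muted": "#6b7280",
  "space.sm": "8px",
  "space.md": "16px",
} as const;

type TokenName = keyof typeof designTokens;

// An agent resolves intent ("use the brand color") through the token map.
// TypeScript rejects any name that isn't in the system — no guessing.
function resolve(name: TokenName): string {
  return designTokens[name];
}

// The machine-readability dividend: a lint that flags raw values which
// bypass the token system, the failure mode agents fall into otherwise.
function isTokenized(value: string): boolean {
  return (
    (Object.values(designTokens) as string[]).includes(value) ||
    value.startsWith("var(")
  );
}
```

This is the shift Do describes: the same token file serves humans as a style guide and agents as a contract, so inconsistency stops being a cosmetic problem and becomes a correctness problem.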

3D illustration of a large red X shape constructed from hundreds of small red geometric block pieces on a dark background.

Your Design System Isn’t a Style Guide Anymore — It’s AI Infrastructure

I humbled myself quickly. Six months ago, I managed design systems the way most teams do: make and isolate small changes, coordinate with developers on implementation, write documentation manually, run audits when time allowed, and hand off specs for each new feature.

linkedin.com

Buzz Usborne on what happens when AI takes on more responsibility in a product:

AI doesn’t simply make products smarter — it redistributes thinking and decision-making between humans and machines. When AI absorbs cognition, it also inherits responsibility. And when it inherits responsibility, the cost of its mistakes rises.

Usborne frames this through three forces that determine whether AI features survive or fail: trust, value perception, and cognitive effort. They amplify each other. Low trust increases perceived effort. High effort reduces perceived value. Low value further undermines trust.

His answer is to earn autonomy through interaction, not demand trust upfront:

Trust does not always need to precede adoption, it can emerge through usage. Salesforce’s findings show that “Human validation of outputs is the biggest driver in trusting the outcome, over consistently accurate outputs.” In other words, users trust systems they can interrogate, shape, and verify. And instead of designing AI products that are perfect, we can earn trust by designing experiences that are controllable.

Controllable over perfect.

Circular diagram with purple arrows showing a cycle: trust leads to value perception, which leads to effort/cognitive load, which feeds back to trust.

Designing AI Experiences People Actually Use

AI doesn’t just add intelligence — it redistributes it. Here’s how that shift can make or break a product.

buzzusborne.com

Most product teams adding AI start by building a new surface for it. A custom panel. A chat sidebar. A dedicated AI workspace. Alexandra Vasquez, writing for Bootcamp, describes her team making exactly that mistake:

We built a custom AI panel with its own navigation, input styles, and button treatments. It looked “futuristic” in the prototype. In user testing, people kept asking where things were and how to get back to their actual work. We had created a separate product inside our product.

The fix was simple: they deleted the panel and put agent actions in the same menus, modals, and toolbars people already used. Slack does this with its /command structure. Notion uses the same slash menu for manual and AI actions. The pattern is existing UI that happens to be smarter.

Vasquez argues most “AI failures” are actually system failures that agents expose at scale:

Designing for agents means treating information architecture and workflows as foundational. Before building an agent, audit your system’s foundations: Are labels consistent? Do hierarchies make sense? Can a new team member navigate workflows without constant help? If humans struggle, agents will fail faster and at scale. Fix the system first.

She’s right. And there’s a more radical version of this: agents don’t need human UI at all. As long as the APIs are available, an agent can complete tasks without ever touching a button or reading a screen. The interface is for the human, not the machine.

But that’s exactly the problem. If the agent bypasses the interface, the human’s ability to express intent and verify output becomes the whole game. Intent has to be crystal clear. Feedback has to be immediate and legible. And there’s a huge amount of trust to earn before anyone is comfortable letting an agent operate in the background on their behalf. Vasquez lands here too:

The AI model is the last thing we discuss, not the first. These are product decisions, and designers have outsized influence here.

The model is the least interesting part. The interesting part is designing the trust.

Humorous UI dialog titled "Applying AI changes" with three checked items—"Making water wet," "Raising dog cuteness," and "Burning fire hotter"—and a progress bar showing "Processing..."

Agentic UX: 7 principles for designing systems with agents

Agents don’t need their own screen, they need better systems to operate in

medium.com