
Product Design Is Changing
I made my first website in Macromedia Dreamweaver in 1999. Its claim to fame was an environment with code on one side and a rudimentary WYSIWYG editor on the other. My site was a simple portfolio site, with a couple of animated GIFs thrown in for some interest. Over the years, I used other tools to create for the web, but usually, I left the coding to the experts. I’d design in Photoshop, Illustrator, Sketch, or Figma and then hand off to a developer. That changed only recently, when I rebuilt this site a couple of times and worked on a Severance fan project.
A couple weeks ago, as an experiment, I pointed Claude Code at our BuildOps design system repo and asked it to generate a screen using our components. It worked after about three prompts. Not one-shotted, but close. I sat there looking at a functioning UI—built from our actual components—and realized I’d just skipped the entire part of my job that I’ve spent many years doing: drawing pictures of apps and websites in a design tool, then handing them to someone else to build.
That moment crystallized something I’d been circling all last year. I wrote last spring about how execution skills were being commoditized and the designer’s value was shifting toward taste and strategic direction. A month later I mapped out a timeline for how design systems would become the infrastructure that AI tools generate against—prompt, generate, deploy. That was ten months ago, and most of it is already happening. Product design is changing. Not in the way most people are talking about it, but in a way that’s more fundamental and more interesting.
The Wrong Debate
The discourse around AI and product jobs is stuck on the wrong question. Scroll through LinkedIn on any given day and you’ll find some variation of: “Will designers lose their jobs because PMs can use Figma Make?” or “Will engineers get replaced because designers can ship with Cursor?” This framing is about headcount. It’s a turf war dressed up as industry analysis.
The real question is about process.
Regardless of how many people or AI agents do the work, the work still needs to get done. Problems need defining. Experiences need designing. Code needs writing. Shipping still has to happen. AI doesn’t eliminate these functions—it changes who does them, how fast they happen, and where the bottlenecks land.
Here’s what’s occupying my brain these days: the invisible work—coding, PRD writing, data analysis, summarization—is easier to automate because quality gaps hide behind a user interface. If the code is ugly but the app works, nobody cares. If the PRD was AI-generated but the problem is framed correctly, nobody cares. But the visible work—the user interface, the flows, the experience, the thing people actually see and touch—is user-facing. Quality gaps show up there. Jankiness will show. Users will notice.
Phil Morton put it well:
When building becomes fast and cheap, the hardest problem isn’t how to make something, it’s deciding what’s worth making at all.
The velocity gains in AI-assisted design will not match those in engineering. And that asymmetry is going to reshape the entire product development process and the way teams are assembled.
The Visible Work
I’ve been using a plumbing analogy in conversations lately and it seems to land, so let me try it here.
Engineering is like plumbing. It’s behind a wall, it’s hidden in the ceiling or floor, and as long as the water runs when I turn on the tap, who cares what it looks like underneath? The gains AI is delivering for engineers are real and massive. Boris Cherny runs four or five coding agents simultaneously. That’s a 300–400% velocity increase, and it’s increasingly how Silicon Valley engineers work—orchestrating teams of agents rather than writing code line by line.
But software design isn’t behind a wall. It is the wall. It’s the tap. It’s the handle you grab to make the water come out. If the controls are reversed or the handle isn’t intuitive, that’s a bad experience—even if AI produced it. Users will care what it looks like, how it feels, how it actually works. It takes more human-in-the-loop intervention to shape AI output for product and design than it does for engineering. Again, AI can brute-force something in code to make a feature work. But it can’t do the same for interfaces and flows to satisfy user needs. Why?
AI can follow standards: the patterns it knows from its training data. We could even teach it design patterns specific to the application we’re working on. But how does it make decisions based on a heap of user research: dozens of user interviews, survey results, usage analytics, competitive audits? That’s too much context.
There’s another bottleneck nobody talks about. AI can automate production at incredible speed, but a human still has to read the output, internalize it, and critically evaluate whether it’s the right path. Call it the ingestion problem. Now that AI agents can generate massive amounts of code, code review is a real bottleneck (if PRs are reviewed by humans, which they probably should be). A colleague of mine extended this further: no matter how much AI pumps out or how many meetings it synthesizes, someone has to ingest the output to act on it. You still need a human to read it all if you want them to have an intelligent conversation about it. That’s a human-speed constraint that no model can bypass.
Another friend of mine framed it simply: what AI does well right now is content generation and summarization in different forms. He doesn’t see evidence it can create something novel or have taste. I agree. AI is excellent at producing volume. It’s not excellent at making judgment calls about what that volume should contain.
Design in Code, Not in Figma
The single biggest bottleneck in product development is translating Figma mockups into production code. We all know this. It’s the designer-to-developer handoff gap. We draw pictures of software, sweat over pixels, hand those pictures to engineers who replicate them as best they can, and then QA checks the coded pages against the mockups—rejecting PRs because the type is off or the spacing doesn’t match. There’s an enormous amount of swirl around this handoff, and it’s been that way for as long as I’ve been doing this work.
AI collapses this bottleneck. But only if designers start designing in code. If we actually use the final material.
I’m hearing about designers dropping Figma entirely. Not hypothetically—actually canceling subscriptions and designing with AI tools instead. And the argument is hard to dismiss: mockups aren’t the product. They’re a parallel artifact that has to be translated, reviewed, and reconciled with what actually ships. Every pixel you push in Figma is a promise that an engineer has to keep in a completely different medium. The further your design tool sits from production code, the more waste you generate in the handoff.
Phil Morton called the current process “absurdly wasteful”—and he’s right. We draw pictures of software and hand them to someone else to build. AI gives us the option to skip that translation step entirely, but only if we’re willing to work in the same material we ship.
My own experiment—the one I described at the top—confirmed this. Three prompts, working UI, real components. To make it reliable at scale, you need robust documentation, explicit rules for how the design system fits together, and agent orchestration to reference that context at the right time. But the foundation is already there.
Monday.com’s engineering team learned this the hard way. Their first attempt at AI-powered design-to-code was the obvious one: paste a Figma link into Cursor and let it generate code. The output looked fine at first glance. But the generated code didn’t use their design system components. Colors were hardcoded. Typography overrode system defaults. CSS was written manually where it shouldn’t have existed at all. The model had no understanding of what the design system actually was.
Their solution: they built a design-system MCP (Model Context Protocol) that makes the design system machine-readable—components, tokens, accessibility rules, usage patterns—and built an 11-node agentic workflow that constructs structured context for the model. The agent doesn’t write code. It builds an understanding of what the code should be, then hands that context to the developer’s coding agent. As they put it: “Orchestration, not magic.”
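Monday.com hasn’t published their schema, so purely as a loose illustration, here’s what a machine-readable design-system manifest and a context-builder might look like. Every name, token, and value below is hypothetical; the point is only the shape of the idea: describe components as structured data, then assemble that data into context for a coding agent instead of handing it a Figma link.

```typescript
// Hypothetical sketch of a machine-readable design-system manifest.
// All component names, tokens, and rules are invented for illustration.

type Token = { name: string; value: string };

interface ComponentSpec {
  name: string;     // component identifier in the design system
  tokens: Token[];  // design tokens the component consumes
  a11y: string[];   // accessibility rules the agent must respect
  usage: string;    // when (and when not) to use the component
}

const manifest: ComponentSpec[] = [
  {
    name: "Button",
    tokens: [{ name: "color-primary", value: "#0073ea" }],
    a11y: ["must have an accessible label"],
    usage: "Primary actions only; never for navigation.",
  },
];

// Build structured context for a coding agent. The agent doesn't write
// code here; it receives an explicit description of what the code
// should conform to, instead of inferring it from pixels.
function buildContext(componentName: string): string {
  const spec = manifest.find((c) => c.name === componentName);
  if (!spec) return `Unknown component: ${componentName}`;
  return [
    `Component: ${spec.name}`,
    `Tokens: ${spec.tokens.map((t) => `${t.name}=${t.value}`).join(", ")}`,
    `Accessibility: ${spec.a11y.join("; ")}`,
    `Usage: ${spec.usage}`,
  ].join("\n");
}

console.log(buildContext("Button"));
```

In a real pipeline this manifest would be served over MCP and the context assembled by the multi-node workflow Monday.com describes, but the principle is the same: the model stops hardcoding colors and overriding typography because the system’s constraints arrive as data, not as a picture.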
This is already happening inside top companies. In a Cat Wu interview about how the Claude Code team ships, she mentioned that an Anthropic designer now makes pull requests directly to Claude Code and the console product. A designer, committing code, shipping to production. That’s not a theoretical future. That’s February 2026.
What Stays Human
If AI can generate code, write PRDs, summarize research, and prototype interfaces, what’s left for the humans?
The orchestration.
Anyone who’s used these tools seriously knows this already. The models are capable enough. The bottleneck is the person at the keyboard—knowing what to ask for, how to break the work into pieces the model can handle, and when to reject what it gives back. The orchestrator matters more than the model. Kyle Zantos, a designer who now spends 70% of his working hours inside terminals, put it well on Dive Club: learn the philosophy and the approach more than the literal setup, because the tools change so fast that specific recommendations from four months ago are already outdated. What doesn’t change is the skill of directing the work.
Quality in AI-powered products means something different than it used to. Surface polish isn’t enough when the system underneath is unreliable. It goes back to building the right thing. Arin Bhowmick, SAP’s Chief Design Officer, made this point well:
A visually polished interface can mask deeper issues: unreliable outputs, opaque decision-making, brittle behavior at the edges. Design leaders must stop measuring quality just by surface-level polish and instead treat trust, clarity, and reliability as first-class design outcomes.
Can users rely on the outputs? Do they understand why the system made a decision? Does it fail safely when it’s wrong? Those are UX design questions, and they require human judgment to answer.
There’s also the question of where AI should actually be applied in a design leader’s day. Vlad Derdeicea wrote about this—design leads spend about 80% of their time on communication, alignment, and justification. Not on hands-on design work. Every design decision carries a “justification tax”: the time spent explaining, documenting, and defending choices that other disciplines make in a quick conversation. AI should be targeting that 80%, not the mockup work. Use it to synthesize meeting notes, draft stakeholder communications, generate research summaries, and build quick prototypes that settle debates with data instead of opinions.
AI is getting very good at the 20%—the mockups, the prototypes, the visual production. What it can’t do is the 80%.
The most forward-thinking framing I’ve seen comes from Jan Tegze: don’t try to be better at your current job. Find the constraint in your domain that exists because of human limitations, then use agents to remove it—not to speed up your current tasks, but to do things that were previously impossible.
You’re not competing with the agent. You’re creating a new capability that requires both you and the agent.
Unfortunately, this means less experienced designers are at greater risk here. They lack the judgment to evaluate AI outputs. They don’t have enough reps to know when the model is wrong. If you have five or fewer years of experience, you’re getting the short end of the stick. The skill floor is rising. Junior designers who can’t critically assess AI-generated work will find their roles shrinking fast.
Small Teams, Big Leverage
Most software companies are organized wrong for this moment. They’re PM-heavy feature factories where each squad gets a product manager regardless of whether it has dedicated design support. PMs multiplied during the ZIRP era because they sit closer to revenue and headcount scales with organizational complexity. Marty Cagan calls this “product management theater”—a surplus of ineffective PMs who resemble overpaid project managers, cranking out roadmaps and running standups.
Andrew Ng predicted at Davos that the PM-to-engineer ratio will flip from 1:8 toward 1:1 as AI explodes engineering productivity. If AI agents can write most production code, the wide engineering base shrinks. Specification and judgment become the scarce resources—not implementation.
There’s a better model already in production. Airbnb merged product management with product marketing into a single “full-stack” role. Brian Chesky has said, “you can’t develop products unless you know how to talk about the products,” making storytelling and outward communication a first-class part of the PM job. More importantly, Chesky elevated designers to “architects” who sit alongside engineers and help drive the product—not a downstream service that catches tickets thrown over the wall. The coordination work that used to bloat PM headcount got moved to dedicated program managers.
This mirrors Apple’s functional model: experts lead experts, the CEO is the integration point, and there are no “mini-CEO” product managers running business units. Both companies treat design as a co-owner of product direction, not an execution layer.
The ideal AI-era team is small: two or three engineers, a PM, and a designer. Empowered, fast, iterating constantly. Design systems become critical infrastructure—the backbone that makes AI-assisted design possible at scale. Without your design system and associated documentation in code, the AI is going to make all sorts of bad decisions about your UI and its implementation. The companies that invest in machine-readable design systems and small, empowered teams will ship circles around the ones still running feature factories with 15-person squads and three layers of approvals.
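To make “design system and associated documentation in code” concrete, here is a hypothetical sketch of an agent-facing rules file that might live alongside the component library. The contents are invented for illustration, not a real company’s conventions:

```
# design-system/AGENT_RULES.md (hypothetical example)

- Import all UI components from the design-system package; never write
  raw HTML buttons, inputs, or modals.
- Use spacing tokens for layout; never hardcode pixel values.
- All colors come from the token file; literal hex values in component
  code fail review.
- Every interactive element needs a keyboard path and an accessible label.
- When no existing component fits, stop and ask instead of inventing one.
```

Whether this lives in a rules file, an MCP server, or structured documentation matters less than that it exists in a form the agent can read; without it, the model falls back on generic patterns from its training data.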
The Compounding Bet
That experiment I mentioned at the top—pointing Claude Code at our design system and getting working screens in three prompts—was version one. Since then I’ve been iterating on the setup: better documentation, tighter component rules, clearer instructions for how the system fits together. Each round gets faster and the output gets closer to production-ready. The models improve, their skills get refined, and I get better at directing them. That all compounds.
This is the part that should make designers pay attention. The gap between “designer who orchestrates AI” and “designer who pushes pixels in Figma” is going to be enormous within 12 months. Not because the pixel-pushers are bad at their jobs. Because the orchestrators will be operating at a fundamentally different speed and scope—shipping working UI while others are still exporting mockups for a handoff meeting.
I’m teaching my team to work this way now. Not because the job is dying. But because it’s becoming a job where taste, judgment, and the ability to direct the work matter more than the ability to draw pictures of it. I’d rather have that job. I think most designers would.

