
39 posts tagged with “product design”

Our profession is changing rapidly. I’ve been covering that here for nearly a year now. Lots of posts come across my desk that say similar things. Tom Scott repeats a lot of what’s been said, but I’ll pull out a couple nuggets that caught my eye.

He declares that “Hands-on is the new default.” Quoting Vitor Amaral, a designer at Intercom:

Being craft-focused means staying hands-on, regardless of specialty or seniority. This won’t be a niche role, it will be an expectation for everyone, from individual contributors to VPs. The value lies in deeply understanding how things actually work, and that comes from direct involvement in the work.

As AI speeds up execution, the craft itself will become easier, but what will matter most is the critical judgment to craft the right thing, move fast, and push the boundaries of quality.

For those looking for work, Scott says, “You NEED to change how you find a job.” Quoting Felix Haas, investor and designer at Lovable:

Start building a real product and get a feeling for what it means to push something out into the market

Learn to use AI to prototype interactively → even at a basic level

Get comfortable with AI tools early → they’ll be your co-designer / sparring partner

Focus on solving real problems, not just making things look good (which has been a problem in the design space for a long time)

Scott also says that “Design roles are merging,” and Ridd from Dive Club illustrates the point:

We are seeing a collapse of design’s monopoly on ideation where designers no longer “own” the early idea stage. PMs, engineers, and others are now prototyping directly with new tools.

If designers move too slow, others will fill the gap. The line between PM, engineer, and designer is thinner than ever. Anyone tool-savvy can spin up prototypes — which raises the bar for designers.

Impact comes from working prototypes, not just facilitation. Leading brainstorms or “owning process” isn’t enough. Real influence comes from putting tangible prototypes in front of the team and aligning everyone around them.

Design is still best positioned — but not guaranteed

Designers could lead this shift, but only if they step up. Ownership of ideation is earned, not assumed.

The future of product design

The future belongs to AI-native designers

verifiedinsider.substack.com

The headline rings true to me because that’s what I look for in designers and how I run my team. The software that we build is too complex and too mission-critical for designers to vibe-code—at least given today’s tooling. But each one of the designers on my team can fill in for a PM when they’re on vacation.

Kai Wong, writing in UX Collective:

One thing I’ve learned, talking with 15 design leaders (and one CEO), is that a ‘designer who codes’ may look appealing, but a ‘designer who understands business’ is far more valuable and more challenging to replace.

You already possess the core skill that makes this transition possible: the ability to understand users with systematic observation and thoughtful questioning.

The only difference, now, is learning to apply that same methodology to understand your business.

Strategic thinking doesn’t require fancy degrees (although it may sometimes help).

Ask strategic questions about business goals. Understand how to balance user and business needs. Frame your design decisions in terms of measurable business impact.


Why many employers want Designers to think like PMs, not Devs

How asking questions, which used to annoy teams, is now critical to UX’s future

uxdesign.cc

I have always wanted to read 6,200 words about color! Sorry, that’s a lie. But I did skim it and really admired the very pretty illustrations. Dan Hollick is a saint for writing and illustrating this chapter in his living book called Making Software, a reference manual for designers and programmers who make digital products. From his newsletter:

I started writing this chapter just trying to explain what a color space is. But it turns out, you can’t really do that without explaining a lot of other stuff at the same time.

Part of the issue is color is really complicated and full of confusing terms that need a maths degree to understand. Gamuts, color models, perceptual uniformity, gamma etc. I don’t have a maths degree but I do have something better: I’m really stubborn.

And here are the opening sentences of the chapter on color:

Color is an unreasonably complex topic. Just when you think you’ve got it figured out, it reveals a whole new layer of complexity that you didn’t know existed.

This is partly because it doesn’t really exist. Sure, there are different wavelengths of light that our eyes perceive as color, but that doesn’t mean that color is actually a property of that light - it’s a phenomenon of our perception.

Digital color is about trying to map this complex interplay of light and perception into a format that computers can understand and screens can display. And it’s a miracle that any of it works at all.

I’m just waiting for him to put up a Stripe link so I can throw money at him.


Making Software: What is a color space?

In which we answer every question you've ever had about digital color, and some you haven't.

makingsoftware.com

Christopher K. Wong argues that desirability is a key part of design that helps decide which features users really want:

To give a basic definition, desirability is a strategic part of UX that revolves around a single user question: Have you defined (and solved) the right problem for users?

In other words, before drawing a single box or arrow, have you done your research and discovery to know you’re solving a pain point?

The way the post is written makes it hard to get at a succinct definition, but here’s my take. Desirability is about ensuring a product or feature is truly wanted, needed, and chosen by users—not just visual appeal—making it a core pillar for impactful design decisions and prioritization. And designers should own this.


Want to have a strategic design voice at work? Talk about desirability

Desirability isn’t just about visual appeal: it’s one of the most important user factors

dataanddesign.substack.com

Luke Wroblewski, writing in his blog:

Across several of our companies, software development teams are now “out ahead” of design. To be more specific, collaborating with AI agents (like Augment Code) allows software developers to move from concept to working code 10x faster. This means new features become code at a fast and furious pace.

When software is coded this way, however, it (currently at least) lacks UX refinement and thoughtful integration into the structure and purpose of a product. This is the work that designers used to do upfront but now need to “clean up” afterward. It’s like the development process got flipped around. Designers used to draw up features with mockups and prototypes, then engineers would have to clean them up to ship them. Now engineers can code features so fast that designers are the ones going back and cleaning them up.

This is what I’ve been secretly afraid of. That we would go back to the times when designers were called in to do cleanup. Wroblewski says:

Instead of waiting for months, you can start playing with working features and ideas within hours. This allows everyone, whether designer or engineer, an opportunity to learn what works and what doesn’t. At its core rapid iteration improves software and the build, use/test, learn, repeat loop just flipped, it didn’t go away.

Yeah, or the feature will get shipped this way and be stuck this way because startups move fast and move on.

My take is that as designers, we need to meet the moment and figure out how to build design systems and best practices into the agentic workflows our developer counterparts are using.


AI Has Flipped Software Development

For years, it's been faster to create mockups and prototypes of software than to ship it to production. As a result, software design teams could stay "ahead" of...

lukew.com

This is a really well-written piece that pulls the AI + design concepts neatly together. Sharang Sharma, writing in UX Collective:

As AI reshapes how we work, I’ve been asking myself, it’s not just how to stay relevant, but how to keep growing and finding joy in my craft.

In my learning, the new shift requires leveraging three areas:

  1. AI tools: Assembling an evolving AI design stack to ship fast
  2. AI fluency: Learning how to design for probabilistic systems
  3. Human-advantage: Strengthening moats like craft, agency and judgment to stay ahead of automation

Together with strategic thinking and human-centric skills, these pillars shape our path toward becoming an AI-native designer.

Sharma connects all the crumbs I’ve been dropping this week:


AI tools + AI fluency + human advantage = AI-native designer

From tools to agency, is this what it would take to thrive as a product designer in the AI era?

uxdesign.cc

From UX Magazine:

Copilots helped enterprises dip their toes into AI. But orchestration platforms and tools are where the real transformation begins — systems that can understand intent, break it down, distribute it, and deliver results with minimal hand-holding.

Think of orchestration as how “meta-agents” are conducting other agents.

The first iteration of AI in SaaS was copilots. They were like helpful interns eagerly awaiting your next command. Orchestration platforms are more like project managers. They break down big goals into smaller tasks, assign them to the right AI agents, and keep everything coordinated. This shift is changing how companies design software and user experiences, making things more seamless and less reliant on constant human input.

For designers and product teams, it means thinking about workflows that cross multiple tools, making sure users can trust and control what the AI is doing, and starting small with automation before scaling up.
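To make the pattern concrete, here is a toy TypeScript sketch of that project-manager role: a meta-agent decomposing a goal and farming subtasks out to specialist agents. Everything here is illustrative; it is not any real orchestration platform’s API, and a real system would use an LLM for the decomposition that is hard-coded below.

```typescript
// Illustrative only: a meta-agent that breaks a goal into tasks,
// assigns them to specialist agents, and coordinates the results.
type Task = { id: number; description: string; assignee: string };

interface Agent {
  name: string;
  run(task: Task): Promise<string>;
}

async function orchestrate(goal: string, agents: Agent[]): Promise<string[]> {
  // A real platform would have an LLM decompose the goal; hard-coded here.
  const tasks: Task[] = agents.map((agent, i) => ({
    id: i + 1,
    description: `${goal}: subtask for ${agent.name}`,
    assignee: agent.name,
  }));

  // Dispatch each task to its agent and gather results with minimal hand-holding.
  return Promise.all(
    tasks.map((task) => agents.find((a) => a.name === task.assignee)!.run(task))
  );
}

// Usage: two stand-in agents handling parts of a single goal.
const agents: Agent[] = [
  { name: "research", run: async (t) => `research done: ${t.description}` },
  { name: "drafting", run: async (t) => `draft done: ${t.description}` },
];
orchestrate("Summarize Q3 support tickets", agents).then(console.log);
```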

Beyond Copilots: The Rise of the AI Agent Orchestration Platform

AI agent orchestration platforms are replacing simple copilots, enabling enterprises to coordinate autonomous agents for smarter, more scalable workflows.

uxmag.com

Let’s stay on the train of designing AI interfaces for a bit. Here’s a piece by Rob Chappell in UX Collective where he breaks down how to give users control—something I’ve been advocating—when working with AI.

AI systems are transforming the structure of digital interaction. Where traditional software waited for user input, modern AI tools infer, suggest, and act. This creates a fundamental shift in how control moves through an experience or product — and challenges many of the assumptions embedded in contemporary UX methods.

The question is no longer: “What is the user trying to do?”

The more relevant question is: “Who is in control at this moment, and how does that shift?”

Designers need better ways to track how control is initiated, shared, and handed back — focusing not just on what users see or do, but on how agency is negotiated between human and system in real time.

Most design frameworks still assume the user is in the driver’s seat. But AI is changing the rules. The challenge isn’t just mapping user flows or intent—it’s mapping who holds the reins, and how that shifts, moment by moment. Designers need new tools to visualize and shape these handoffs, or risk building systems that feel unpredictable or untrustworthy. The future of UX is about negotiating agency, not just guiding tasks.


Beyond journey maps: designing for control in AI UX

When systems act on their own, experience design is about balancing agency — not just user flow

uxdesign.cc

Vitaly Friedman writes a good primer on the design possibilities for users to interact with AI features. As AI capabilities become more and more embedded in the products designers make, we have to become facile in manipulating AI as material.

Many products are obsessed with being AI-first. But you might be way better off by being AI-second instead. The difference is that we focus on user needs and sprinkle a bit of AI across customer journeys where it actually adds value.


Design Patterns For AI Interfaces

Designing a new AI feature? Where do you even begin? From first steps to design flows and interactions, here’s a simple, systematic approach to building AI experiences that stick.

smashingmagazine.com

Since its debut at Config back in May, Figma has steadily added practical features to Figma Make for product teams. Supabase integration now allows for authentication, data storage, and file uploads. Designers can import design system libraries, which helps maintain visual consistency. Real-time collaboration has improved, giving teams the ability to edit code and prototypes together. The tool now supports backend connections for managing state and storing secrets. Prototypes can be published to custom domains. These changes move Figma Make closer to bridging the gap between design concepts and advanced prototypes.
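To ground that list, here is roughly what the Supabase wiring in a Make-generated prototype can now do. This is a minimal sketch using the public supabase-js client; the table and bucket names are hypothetical, and it is not code Figma Make actually emits.

```typescript
import { createClient } from "@supabase/supabase-js";

// Hypothetical project URL and anon key, not Figma Make output.
const supabase = createClient("https://YOUR-PROJECT.supabase.co", "PUBLIC_ANON_KEY");

async function demo() {
  // Authentication: sign a test user in.
  const { error } = await supabase.auth.signInWithPassword({
    email: "tester@example.com",
    password: "prototype-only",
  });
  if (error) throw error;

  // Data storage: persist a row of prototype state (hypothetical table).
  await supabase.from("checkout_drafts").insert({ cart_total: 128.5 });

  // File uploads: push an asset into a storage bucket (hypothetical bucket).
  const file = new Blob(["placeholder"], { type: "text/plain" });
  await supabase.storage.from("prototypes").upload("flows/checkout.txt", file);
}

demo();
```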

In my opinion, there’s a stronger relationship between Sites and Make than there is between Make and Design. The Make-generated code may be slightly better than when Sites debuted, but it is still not semantic.

Anyhow, I think Make is great for prototyping and it’s convenient to have it built right into Figma. Julius Patto, writing in UX Collective:

Prompting well in Figma Make isn’t about being clever, it’s about being clear, intentional, and iterative. Think of it as a new literacy in the design toolkit: the better you get at it, the more you unlock AI’s potential without losing your creative control.


How to prompt Figma Make’s AI better for product design

Learn how to use AI in Figma Make with UX intention, from smarter prompts to inclusive flows that reflect real user needs.

uxdesign.cc

Ted Goas, writing in UX Collective:

I predict the early parts of projects, getting from nothing to something, will become shared across roles. For designers looking to branch out, code is a natural next step. I see a future where we’re fixing small bugs ourselves instead of begging an engineer, implementing that animation that didn’t make the sprint but you know would absolutely slap, and even building simple features when engineering resources are tight.

Our new reality is that anyone can make a rough draft.

But that doesn’t mean those drafts are good. That’s where our training and taste come in.

I think Goas is right and it echoes the AI natives post by Elena Verna. I wrote a little more extensively in my newsletter over the weekend.


Designers: We’ll all be design engineers in a year

And that’s a good thing.

uxdesign.cc

Miqdad Jaffer, a product leader at OpenAI, shares his 4D method for building AI products that users want. In summary, it’s…

  • Discover: Find and prioritize real user pain points and friction in daily workflows.
  • Design: Make AI features invisible and trustworthy, fitting naturally into users’ existing habits.
  • Develop: Build AI systematically, with robust evaluation and clear plans for failures or edge cases.
  • Deploy: Treat each first use like a product launch, ensuring instant value and building user trust quickly.

OpenAI Product Leader: The 4D Method to Build AI Products That Users Actually Want

An OpenAI product leader's complete playbook to discover real user friction, design invisible AI, plan for failure cases, and go from "cool demo" to "daily habit"

creatoreconomy.so

Sara Paul, writing for NN/g:

The core principles of UX and product design remain unchanged, and AI amplifies their importance in many ways. To stay indispensable, designers must evolve: adapt to new workflows, deepen their judgment, and double down on the uniquely human skills that AI can’t replace.

They spoke with seven UX practitioners to get their take on AI and the design profession.

I think this is great advice and echoes what I’ve written about previously (here and here):

There is a growing misconception that AI tools can take over design, engineering, and strategy. However, designers offer more than interaction and visual-design skills. They offer judgment, built on expertise that AI cannot replicate.

Our panelists return to a consistent message: across every tech hype cycle, from responsive design to AI, the value of design hasn’t changed. Good design goes deeper than visuals; it requires critical thinking, empathy, and a deep understanding of user needs.


The Future-Proof Designer

Top product experts share four strategies for remaining indispensable as AI changes UI design, accelerates feature production, and reshapes data analysis.

nngroup.com

Great reminder from Kai Wong about getting stuck on a solution too early:

Imagine this: the Product Manager has a vision of a design solution based on some requirements and voices it to the team. They say, “I want a table that allows us to check statuses of 100 devices at once.”

You don’t say anything, so that sets the anchor of a design solution as “a table with a bunch of devices and statuses.”


Avoid premature solutions: how to respond when stakeholders ask for certain designs

How to avoid anchoring problems that result in stuck designers

dataanddesign.substack.com
A futuristic scene with a glowing, tech-inspired background showing a UI design tool interface for AI, displaying a flight booking project with options for editing and previewing details. The screen promotes the tool with a “Start for free” button.

Beyond the Prompt: Finding the AI Design Tool That Actually Works for Designers

There has been an explosion of AI-powered prompt-to-code tools within the last year. The space began with full-on integrated development environments (IDEs) like Cursor and Windsurf, which enabled developers to leverage AI assistants right inside their coding apps. Then came tools like v0, Lovable, and Replit, where users could prompt screens into existence at first and, before long, entire applications.

A couple weeks ago, I decided to test out as many of these tools as I could. My aim was to find the app that would combine AI assistance, design capabilities, and the ability to use an organization’s coded design system.

While my previous essay was about the future of product design, this article will dive deep into a head-to-head between all eight apps that I tried. I recorded the screen as I did my testing, so I’ve put together a video as well, in case you didn’t want to read this.


It is a long video, but there’s a lot to go through. It’s also my first video on YouTube, so this is an experiment.

The Bottom Line: What the Testing Revealed

I won’t bury the lede here. AI tools can be frustrating because they are probabilistic. One hour they can solve an issue quickly and efficiently; the next they can spin on a problem and make you want to pull your hair out. Part of this is the LLM—they all use some combo of the major LLMs. The other part is the tool itself, which doesn’t always handle what happens when its LLM fails.

For example, this morning I re-evaluated Lovable and Bolt because they’ve released new features within the last week, and I thought it would only be fair to assess the latest version. But both performed worse than in my initial testing two weeks ago. In fact, I tried Bolt twice this morning with the same prompt because the first attempt netted a blank preview. Unfortunately, the second attempt also resulted in a blank screen and then I ran out of credits. 🤷‍♂️

Scorecard for Subframe, with a total of 79 points across different categories: User experience (22), Visual design (13), Prototype (6), Ease of use (13), Design control (15), Design system integration (5), Speed (5), Editor’s discretion (0).

For designers who want actual design tools to work on UI, Subframe is the clear winner. The other tools go directly from prompt to code, giving designers no control via a visual editor. We’re not developers, so manipulating the design in code is not for us. We need to be able to directly manipulate the components by clicking and modifying shapes on the canvas or changing values in an inspector.

For me, the runner-up is v0, if you want to use it only for prototyping and for getting ideas. It’s quick—the UI is mostly unstyled, so it doesn’t get in the way of communicating the UX.

The Players: Code-Only vs. Design-Forward Tools

There are two main categories of contenders: code-only tools, and code plus design tools.

Code-Only

  • Bolt
  • Lovable
  • Polymet
  • Replit
  • v0

Code + Design

  • Onlook
  • Subframe
  • Tempo

My Testing Approach: Same Prompt, Different Results

As mentioned at the top, I tested these tools between April 16–27, 2025. As with most SaaS products, I’m sure things change daily, so this report captures a moment in time.

For my evaluation, since all these tools allow for generating a design from a prompt, that’s where I started. Here’s my prompt:

Create a complete shopping cart checkout experience for an online clothing retailer

I would expect the following pages to be generated:

  • Shopping cart
  • Checkout page (or pages) to capture payment and shipping information
  • Confirmation

I scored each app based on the following rubric (a quick tallying sketch in code follows the list):

  • Sample generation quality
      • User experience (25)
      • Visual design (15)
      • Prototype (10)
  • Ease of use (15)
  • Control (15)
  • Design system integration (10)
  • Speed (10)
  • Editor’s discretion (±10)
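And since the arithmetic is easy to get wrong by hand, here is the tallying sketch mentioned above, using Subframe’s category scores from the scorecard shown earlier. The type and helper are just for illustration; the weights are the rubric’s.

```typescript
// Category maxima follow the rubric; editor's discretion ranges -10..+10.
type Scorecard = {
  userExperience: number;     // out of 25
  visualDesign: number;       // out of 15
  prototype: number;          // out of 10
  easeOfUse: number;          // out of 15
  control: number;            // out of 15
  designSystem: number;       // out of 10
  speed: number;              // out of 10
  editorsDiscretion: number;  // -10 to +10
};

const total = (s: Scorecard): number =>
  Object.values(s).reduce((sum, n) => sum + n, 0);

// Subframe's scores from the scorecard shown earlier: 22+13+6+13+15+5+5+0 = 79.
const subframe: Scorecard = {
  userExperience: 22, visualDesign: 13, prototype: 6, easeOfUse: 13,
  control: 15, designSystem: 5, speed: 5, editorsDiscretion: 0,
};
console.log(total(subframe)); // 79
```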

The Scoreboard: How Each Tool Stacked Up

AI design tools for designers, with scores: Subframe 79, Onlook 71, v0 61, Tempo 59, Polymet 58, Lovable 49, Bolt 43, Replit 31. Evaluations conducted between 4/16–4/27/25.

Final summary scores for AI design tools for designers. Evaluations conducted between 4/16–4/27/25.

Here are the summary scores for all eight tools. For the detailed breakdown of scores, view the scorecards here in this Google Sheet.

The Blow-by-Blow: The Good, the Bad, and the Ugly

Bolt

Bolt screenshot: A checkout interface with a shopping cart summary, items listed, and a “Proceed to Checkout” button, displaying prices and order summary.

First up, Bolt. Classic prompt-to-code pattern here—text box, type your prompt, watch it work. 

Bolt shows you the code generation in real-time, which is fascinating if you’re a developer but mostly noise if you’re not. The resulting design was decent but plain, with typical UX patterns. It missed delivering the confirmation page I would expect. And when I tried to re-evaluate it this morning with their new features? Complete failure—blank preview screens until I ran out of credits. No rhyme or reason. And there it is—a perfect example of the maddening inconsistency these tools deliver. Working beautifully in one session, completely broken in another. Same inputs, wildly different outputs.

Score: 43

Lovable

Lovable screenshot: A shipping information form on a checkout page, including fields for personal details and a “Continue to Payment” button.

Moving on to Lovable, which I captured this morning right after they launched their 2.0 version. The experience was a mixed bag. While it generated clean (if plain) UI with some nice touches like toast notifications and a sidebar shopping cart, it got stuck at a critical juncture—the actual checkout. I had to coax it along, asking specifically for the shopping cart that was missing from the initial generation.

The tool encountered an error but at least provided a handy “Try to fix” button. Unlike Bolt, Lovable tries to hide the code, focusing instead on the browser preview—which as a designer, I appreciate. When it finally worked, I got a very vanilla but clean checkout flow and even the confirmation page I was looking for. Not groundbreaking, but functional. The approach of hiding code complexity might appeal to designers who don’t want to wade through development details.

Score: 49

Polymet

Polymet screenshot: A checkout page design for a fashion store showing payment method options (Credit Card, PayPal, Apple Pay), credit card fields, order summary with subtotal, shipping, tax, and total.

Next up is Polymet. This one has a very interesting interface and I kind of like it. You have your chat on the left and a canvas on the right. But instead of just showing the screen it’s working on, it’s actually creating individual components that later get combined into pages. It’s almost like building Figma components and then combining them at the end, except these are all coded components.

The design is pretty good—plain but very clean. I feel like it’s got a little more character than some of the others. What’s nice is you can go into focus mode and actually play with the prototype. I was able to navigate from the shopping cart through checkout (including Apple Pay) to confirmation. To export the code, you need to be on a paid plan, but the free trial gives you at least a taste of what it can do.

Score: 58

Replit

Replit screenshot: A developer interface showing progress on an online clothing store checkout project with error messages regarding the use of the useCart hook.

Replit was a test of patience—no exaggeration, it was the slowest tool of the bunch at 20 minutes to generate anything substantial. Why so slow? It kept encountering errors and falling into those weird loops that LLMs often do when they get stuck. At one point, I had to explicitly ask it to “make it work” just to progress beyond showing product pages, which wasn’t even what I’d asked for in the first place.

When it finally did generate a checkout experience, the design was nothing to write home about. Lines in the stepper weren’t aligning properly, there were random broken elements, and ultimately—it just didn’t work. I couldn’t even complete the checkout flow, which was the whole point of the exercise. I stopped recording at that point because, frankly, I just didn’t want to keep fighting with a tool that’s both slow and ineffective. 

Score: 31

v0

v0 screenshot: An online shopping cart with a multi-step checkout process, including a shipping form and order summary with prices and a “Continue to Payment” button.

Taking v0 for a spin next, which comes from Vercel. I think it was one of the earlier prompt-to-code generators I heard about—originally just for components, not full pages (though I could be wrong). The interface is similar to Bolt with a chat panel on the left and code on the right. As it works, it shows you the generated code in real-time, which I appreciate. It’s pretty mature and works really well.

The result almost looks like a wireframe, but the visual design has a bit more personality than Bolt’s version, even though it’s using the unstyled shadcn components. It includes form validation (which I checked), and handles the payment flow smoothly before showing a decent confirmation page. Speed-wise, v0 is impressively quick compared to some others I tested—definitely a plus when you’re iterating on designs and trying to quickly get ideas.

Score: 61

Onlook

Onlook screenshot: A design tool interface showing a cart with empty items and a “Continue Shopping” button on a fashion store checkout page.

Onlook stands out as a self-contained desktop app rather than a web tool like the others. The experience starts the same way—prompt in, wait, then boom—but instead of showing you immediate results, it drops you into a canvas view with multiple windows displaying localhost:3000, which is your computer running a web server locally. The design it generated was fairly typical and straightforward, properly capturing the shopping cart, shipping, payment, and confirmation screens I would expect. You can zoom out to see a canvas-style overview and manipulate layers, with a styles tab that lets you inspect and edit elements.

The dealbreaker? Everything gets generated as a single page application, making it frustratingly difficult to locate and edit specific states like shipping or payment. I couldn’t find these states visually or directly in the pages panel—they might’ve been buried somewhere in the layers, but I couldn’t make heads or tails of it. When I tried using it again today to capture the styles functionality for the video, I hit the same wall that plagued several other tools I tested—blank previews and errors. Despite going back and forth with the AI, I couldn’t get it running again.

Score: 71

Subframe

Subframe screenshot: A design tool interface with a checkout page showing a cart with items, a shipping summary, and the option to continue to payment.

My time with Subframe revealed a tool that takes a different approach to the same checkout prompt. Unlike most competitors, Subframe can’t create an entire flow at once (though I hear they’re working on multi-page capabilities). But honestly, I kind of like this limitation—it forces you as a designer to actually think through the process.

What sets Subframe apart is its Midjourney-like approach, offering four different design options that gradually come into focus. These aren’t just static mockups but fully coded, interactive pages you can preview in miniature. After selecting a shopping cart design, I simply asked it to create the next page, and it intelligently moved to shipping/billing info.

The real magic is having actual design tools—layers panel, property inspector, direct manipulation—alongside the ability to see the working React code. For designers who want control beyond just accepting whatever the AI spits out, Subframe delivers the best combination of AI generation and familiar design tooling.

Score: 79

Tempo

Tempo screenshot: A developer tool interface generating a clothing store checkout flow, showing wireframe components and code previews.

Lastly, Tempo. This one takes a different approach than most other tools. It starts by generating a PRD from your prompt, then creates a user flow diagram before coding the actual screens—mimicking the steps real product teams would take. Within minutes, it had generated all the different pages for my shopping cart checkout experience. That’s impressive speed, but from a design standpoint, it’s just fine. The visual design ends up being fairly plain, and the prototype had some UX issues—the payment card change was hard to notice, and the “Place order” action didn’t properly lead to a confirmation screen even though it existed in the flow.

The biggest disappointment was with Tempo’s supposed differentiator. Their DOM inspector theoretically allows you to manipulate components directly on canvas like you would in Figma—exactly what designers need. But I couldn’t get it to work no matter how hard I tried. I even came back days later to try again with a different project and reached out to their support team, but after a brief exchange—crickets. Without this feature functioning, Tempo becomes just another prompt-to-code tool rather than something truly designed for visual designers who want to manipulate components directly. Not great.

Score: 59

The Verdict: Control Beats Code Every Time

Subframe screenshot: A design tool interface displaying a checkout page for a fashion store with a cart summary and a “Proceed to Checkout” button.

Subframe offers actual design tools—layers panel, property inspector, direct manipulation—along with AI chat.

I’ve spent the last couple weeks testing these prompt-to-code tools, and if there’s one thing that’s crystal clear, it’s this: for designers who want actual design control rather than just code manipulation, Subframe is the standout winner.

I will caveat that I didn’t do a deep dive into every single tool. I played with them at a cursory level, giving each a fair shot with the same prompt. What I found was a mix of promising starts and frustrating dead ends.

The reality of AI tools is their probabilistic nature. Sometimes they’ll solve problems easily, and then at other times they’ll spectacularly fail. I experienced this firsthand when retesting both Lovable and Bolt with their latest features—both performed worse than in my initial testing just two weeks ago. Blank screens. Error messages. No rhyme or reason.

For designers like me, the dealbreaker with most of these tools is being forced to manipulate designs through code rather than through familiar design interfaces. We need to be able to directly manipulate components by clicking and modifying shapes on the canvas or changing values in an inspector. That’s where Subframe delivers while others fall short—if their audience includes designers, which might not be the case.

For us designers, I believe Subframe could be the answer. But I’m also waiting to see whether Figma will have an answer. Will the company get in the AI > design > code game? Or will it be left behind?

The future belongs to applications that balance AI assistance with familiar design tooling—not just code generators with pretty previews.

Illustration of humanoid robots working at computer terminals in a futuristic control center, with floating digital screens and globes surrounding them in a virtual space.

Prompt. Generate. Deploy. The New Product Design Workflow

Product design is going to change profoundly within the next 24 months. If the AI 2027 report is any indication, the capabilities of the foundational models will grow exponentially, and with them—I believe—so will the abilities of design tools.

A graph comparing AI Foundational Model Capabilities (orange line) versus AI Design Tools Capabilities (blue line) from 2026 to 2028. The orange line shows exponential growth through stages including Superhuman Coder, Superhuman AI Researcher, Superhuman Remote Worker, Superintelligent AI Researcher, and Artificial Superintelligence. The blue line shows more gradual growth through AI Designer using design systems, AI Design Agent, and Integration & Deployment Agents.

The AI foundational model capabilities will grow exponentially and AI-enabled design tools will benefit from the algorithmic advances. Sources: AI 2027 scenario & Roger Wong

The TL;DR of the report is this: companies like OpenAI have more advanced AI agent models that are building the next-generation models. Once those are built, the previous generation is tested for safety and released to the public. And the cycle continues. Currently, and for the next year or two, these companies are focusing their advanced models on creating superhuman coders. This compounds and will result in artificial general intelligence, or AGI, within the next five years. 

Non-AI companies will benefit from new model releases. We already see how much the performance of coding assistants like Cursor has improved with recent releases of Claude 3.7 Sonnet, Gemini 2.5 Pro, and this week, GPT-4.1, OpenAI’s latest.

Tools like v0, Lovable, Replit, and Bolt are leading the charge in AI-assisted design. Creating new landing pages and simple apps is literally as easy as typing English into a chat box. You can whip up a very nice-looking dashboard in single-digit minutes.

However, I will argue they are only serving a small portion of the market. These tools are great for zero-to-one digital products or websites. While new sites and software need to be designed and built, the vast majority of the market is in extending and editing current products. There are hordes more designers who work at corporations such as Adobe, Microsoft, Salesforce, Shopify, and Uber than there are designers at agencies. They all need to adhere to their company’s design system and can’t use what Lovable produces from scratch. The generated components can’t be used even if they were styled to look correct. They must be components from their design system code repositories.

The Design-to-Code Gap

But first, a quick detour…

Any designer who has ever handed off a Figma file to a developer has felt the stinging disappointment days or weeks later when it’s finally coded. The spacing is never quite right. The type sizes are off. And the back and forth seems endless. The developer handoff experience has been a well-trodden path full of now-defunct or dying companies like InVision, Abstract, and Zeplin. Figma tries to solve this issue with Dev Mode, but even then, there’s a translation that has to happen from pixels and vectors in a proprietary program to code.

Yes, no- and low-code platforms like Webflow, Framer, and Builder.io exist. But the former two are proprietary platforms—you can’t take the code with you—and the latter is primarily a CMS (no-code editing for content editors).

The dream is for a design app similar to Figma that uses components from your team’s GitHub design system repository.1 I’m not talking about a Figma-only component library. No. Real components with controllable props in an inspector. You can’t break them apart and any modifications have to be made at the repo level. But you can visually put pages together. For new components, well, if they’re made of atomic parts, then yes, that should be possible too.
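As a sketch of what those “real components with controllable props” might look like in the repo, here is a hypothetical design-system button. The names (PrimaryButton, ds-button) are invented for illustration; the point is that the props interface is the only surface an inspector would expose, while the internals stay locked in the design system.

```tsx
import * as React from "react";

// The props are the contract an inspector could render as controls.
export interface PrimaryButtonProps {
  label: string;
  size?: "sm" | "md" | "lg"; // dropdown in the inspector
  disabled?: boolean;        // toggle in the inspector
  onClick?: () => void;
}

// The markup and class names live in the design system repo; a visual
// tool can place and configure this component but not break it apart.
export function PrimaryButton({
  label,
  size = "md",
  disabled = false,
  onClick,
}: PrimaryButtonProps) {
  return (
    <button
      className={`ds-button ds-button--${size}`}
      disabled={disabled}
      onClick={onClick}
    >
      {label}
    </button>
  );
}
```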

UXPin Merge comes close. Everything I mentioned above is theoretically possible. But if I’m being honest, I did a trial and the product is buggy and wasn’t great to use. 

A Glimpse of What’s Coming

Enter Tempo, Polymet, and Subframe. These are very new entrants to the design tool space. Tempo and Polymet are backed by Y Combinator, and Subframe is pre-seed.

For Subframe, they are working on a beta feature that will allow you to connect your GitHub repository, append a little snippet of code to each component, and then the library of components will appear in their app. Great! This is the dream. The app seems fairly easy to use and wasn’t sluggish and buggy like UXPin.

But the kicker—the Holy Grail—is their AI. 

I quickly put together a hideous form screen based on one of the oldest pages in BuildOps that is long overdue for a redesign. Then, I went into Subframe’s Ask AI tab and prompted, “Make this design more user friendly.” Similar to Midjourney, four blurry tiles appeared and slowly came into focus. This diffusion model effect was a moment of delight for me. I don’t know if they’re actually using a diffusion model—think Stable Diffusion and Midjourney—or if they spent the time building a kick-ass loading state. Anyway, four completely built alternate layouts were generated. I clicked into each one to see it larger and noticed they each used components from our styled design library. (I’m on a trial, so it’s not exactly components from our repo, but it demonstrates the promise.) And I felt like I just witnessed the future.

Image shows a side-by-side comparison of design screens from Subframe. On the left is a generic form page layout with fields for customer information, property details, billing options, job specifications, and financial information. On the right is a more refined "Create New Job" interface with improved organization, clearer section headings (Customer Information, Job Details, Work Description), and thumbnail previews of alternative design options at the bottom. Both interfaces share the same navigation header with Reports, Dashboard, Operations, Dispatch, and Accounting tabs. The bottom of the right panel indicates "Subframe AI is in beta."

Subframe’s Ask AI mode drafted four options in under a minute, turning an outdated form into something much more user-friendly.

What Product Design in 2027 Might Look Like

From the AI 2027 scenario report, in the chapter, “March 2027: Algorithmic Breakthroughs”:

Three huge datacenters full of Agent-2 copies work day and night, churning out synthetic training data. Another two are used to update the weights. Agent-2 is getting smarter every day.

With the help of thousands of Agent-2 automated researchers, OpenBrain is making major algorithmic advances.

Aided by the new capabilities breakthroughs, Agent-3 is a fast and cheap superhuman coder. OpenBrain runs 200,000 Agent-3 copies in parallel, creating a workforce equivalent to 50,000 copies of the best human coder sped up by 30x. OpenBrain still keeps its human engineers on staff, because they have complementary skills needed to manage the teams of Agent-3 copies.

As I said at the top of this essay, AI is making AI and the innovations are compounding. With UX design, there will be a day when design is completely automated.

Imagine this. A product manager at a large-scale e-commerce site wants to decrease shopping cart abandonment by 10%. They task an AI agent to optimize a shopping cart flow with that metric as the goal. A week later, the agent returns the results:

  • It ran 25 experiments, with each experiment being a design variation of multiple pages.
  • Each experiment was with 1,000 visitors, totaling about 10% of their average weekly traffic.
  • Experiment #18 was the winner, resulting in an 11.3% decrease in cart abandonment.

The above will be possible. A few things have to fall in place first, though, and the building blocks are being made right now.
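If you imagine the agent handing back its results as structured data, it might look like this sketch. The types are hypothetical, and the numbers simply restate the imagined scenario above.

```typescript
interface ExperimentResult {
  id: number;
  visitors: number;
  abandonmentDelta: number; // negative means abandonment went down
}

interface AgentReport {
  goal: string;
  experiments: ExperimentResult[];
  winner: ExperimentResult;
}

// The 25 experiments, 1,000 visitors each, and winning experiment #18
// come straight from the scenario described above.
const report: AgentReport = {
  goal: "Decrease shopping cart abandonment by 10%",
  experiments: Array.from({ length: 25 }, (_, i) => ({
    id: i + 1,
    visitors: 1_000,
    abandonmentDelta: 0, // the agent would fill in each measured result
  })),
  winner: { id: 18, visitors: 1_000, abandonmentDelta: -11.3 },
};

console.log(`${report.winner.abandonmentDelta}% change in cart abandonment`);
```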

The Foundation Layer: Integrate Design Systems

The design industry has been promoting the benefits of design systems for many years now. What was once a Sisyphean uphill battle is now mostly won: development teams understand the benefits of using a shared and standardized component library.

To capture the larger piece of the design market that is not producing greenfield work, AI design tools like Subframe will have to depend on well-built component libraries. Their AI must be able to ingest and internalize the design system documentation that governs how components should be used.

Then we’ll be able to prompt new screens with working code into existence. 

**Forecast:** Within six months.

Professionals Still Need Control

Cursor—the AI-assisted development tool that’s captured the market—is VS Code enhanced with AI features. In other words, it is a professional-grade programming tool that allows developers to write and edit code, *and* generate it via AI chat. It gives the pros control. Contrast that with something like Lovable, which is aimed at designers: the code is accessible, but you have to look for it. The canvas and chat are prioritized.

For AI-assisted design tools to work, they need to give us designers control. That control comes in the form of curation and visual editing. Give us choices when generating alternates and let us tweak elements to our heart’s content—within the confines of the design system, of course. 

A diagram showing the process flow of creating a shopping cart checkout experience. At the top is a prompt box, which leads to four generated layout options below it. The bottom portion shows configuration panels for adjusting size and padding properties of the selected design.

The product design workflow in the future will look something like this: prompt the AI, view choices and select one, then use fine-grained controls to tweak.

Automating Design with Design Agents

Agent mode in Cursor is pretty astounding. You’ll see it plan its actions based on the prompt, then execute them one by one. If it encounters an error, it’ll diagnose and fix it. If it needs to install a package or launch the development server to test the app, it will do that. Sometimes, it can go for many minutes without needing intervention. It’s literally like watching a robot assemble a thingamajig. 
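That plan-execute-repair rhythm is simple to sketch. Here is a toy TypeScript loop in the same spirit; it is not Cursor’s internals, just the shape of the behavior.

```typescript
type Step = { description: string; run: () => Promise<void> };

// Execute planned steps one by one; on failure, diagnose and retry,
// the way an agent keeps going for minutes without intervention.
async function agentLoop(steps: Step[], maxRepairs = 3): Promise<void> {
  for (const step of steps) {
    let attempts = 0;
    while (true) {
      try {
        await step.run(); // execute one planned action
        break;            // success: move to the next step
      } catch (err) {
        if (++attempts > maxRepairs) throw err;
        // A real agent would ask the model to patch the failing step here.
        console.log(`Retrying "${step.description}" after error:`, err);
      }
    }
  }
}
```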

We will need this same level of agentic AI automation in design tools. If I could write in a chat box “Create a checkout flow for my site” and the AI design tool can generate a working cart page, payment page, and thank-you page from that one prompt using components from the design system, that would be incredible.

Yes, zero-to-one tools are starting to add this feature. Here’s a shopping cart flow from v0…

Building a shopping cart checkout flow in v0 was incredibly fast. Two minutes flat. This video is sped up 400%.

Polymet and Lovable were both able to create decent flows. There is also promise with Tempo, although the service was bugging out when I tested it earlier today. Tempo will first plan by writing a PRD, then it draws a flow diagram, then wireframes the flow, and then generates code for each screen. If I were to create a professional tool, this is how I would do it. I truly hope they can resolve their tech issues. 

**Forecast:** Within one year.

A screenshot of Tempo, an AI-powered design tool interface showing the generation of a complete checkout experience. The left sidebar displays a history of AI-assisted tasks including generating PRD, mermaid diagrams, wireframes and components. The center shows a checkout page preview with cart summary, checkout form, and order confirmation screens visible in a component-based layout.

Tempo’s workflow seems ideal. It generates a PRD, draws a flow diagram, creates wireframes, and finally codes the UI.

The Final Pieces: Integration and Deployment Agents

The final pieces to realizing our imaginary scenario are coding agents that integrate the frontend from AI design tools to the backend application, and then deploy the code to a server for public consumption. I’m not an expert here, so I’ll just hand-wave past this part. The AI-assisted design tooling mentioned above is frontend-only. For the data to flow and the business logic to work, the UI must be integrated with the backend.

CI/CD (Continuous Integration and Continuous Deployment) platforms like GitHub Actions and Vercel already exist today, so it’s not difficult to imagine deploys being initiated by AI agents.
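Mechanically, an agent-initiated deploy could be as small as dispatching an existing workflow. Here is a hedged sketch using Octokit against GitHub’s workflow-dispatch endpoint; the org, repo, and workflow file names are made up.

```typescript
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

// Trigger a workflow that declares `on: workflow_dispatch`.
// "acme", "storefront", and "deploy.yml" are hypothetical.
await octokit.rest.actions.createWorkflowDispatch({
  owner: "acme",
  repo: "storefront",
  workflow_id: "deploy.yml",
  ref: "main",
});
```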

**Forecast:** Within 18–24 months.

Where Is Figma?

The elephant in the room is Figma’s position in all this. Since their rocky debut of AI features last year, Figma has been trickling out small AI features like more powerful search, layer renaming, mock data generation, and image generation. The biggest AI feature they have is called First Draft, which is a relaunch of design generation. They seem to be stuck placating designers and developers (Dev Mode) instead of considering how they can bring value to the entire organization. Maybe they will make a big announcement at Config, their upcoming user conference in May. But if they don’t compete with one of these aforementioned tools, they will be left behind.

To be clear, Figma is still going to be a necessary part of the design process. A canvas free from the confines of code allows for easy *manual* exploration. But the dream of closing the gap between design and code needs to come true sooner rather than later if we’re to take advantage of AI’s promise.

The Two-Year Horizon

As I said at the top of this essay, product design is going to change profoundly within the next two years. The trajectory is clear: AI is making AI, and the innovations are compounding rapidly. Design systems provide the structured foundation that AI needs, while tools like Subframe are developing the crucial integration with these systems.

For designers, this isn’t the end—if anything, it’s a transformation. We’ll shift from pixel-pushers to directors, from creators to curators. Our value will lie in knowing what to ask for and making the subtle refinements that require human taste and judgment.

The holy grail of seamless design-to-code is finally within reach. In 24 months, we won’t be debating if AI will transform product design—we’ll be reflecting on how quickly it happened.


1 I know Figma has the feature called Code Connect. I haven’t used it, but from what I can tell, you match your Figma component library to the code component library. Then, in Dev Mode, it’s easier for engineers to discern which component from the repo to use.

Karri Saarinen, writing for the Linear blog:

Unbounded AI, much like a river without banks, becomes powerful but directionless. Designers need to build the banks and bring shape to the direction of AI’s potential. But we face a fundamental tension in that AI sort of breaks our usual way of designing things, working back from function, and shaping the form.

I love the metaphor of AI as the river and designers as the banks. It feels very much in line with my notion that we need to become even better curators.

Saarinen continues, critiquing the generic chatbox being the primary form of interacting with AI:

One way I visualize this relationship between the form of traditional UI and the function of AI is through the metaphor of a ‘workbench’. Just as a carpenter’s workbench is familiar and purpose-built, providing an organized environment for tools and materials, a well-designed interface can create productive context for AI interactions. Rather than being a singular tool, the workbench serves as an environment that enhances the utility of other tools – including the ‘magic’ AI tools.

Software like Linear serves as this workbench. It provides structure, context, and a specialized environment for specific workflows. AI doesn’t replace the workbench, it’s a powerful new tool to place on top of it.

It’s interesting. I don’t know what Linear is telegraphing here, but if I had to guess, I wonder if it’s closer to being field-specific or workflow-specific, similar to Generative Fill in Photoshop. It’s a text field—not a textarea—limited to a single workflow.


Design for the AI age

For decades, interfaces have guided users along predefined roads. Think files and folders, buttons and menus, screens and flows. These familiar structures organize information and provide the comfort of knowing where you are and what's possible.

linear.app
Closeup of a man with glasses, with code being reflected in the glasses

From Craft to Curation: Design Leadership in the Age of AI

In a recent podcast with partners at startup incubator Y Combinator, Jared Friedman, citing statistics from a survey with their current batch of founders says, “[The] crazy thing is one quarter of the founders said that more than 95% of their code base was AI generated, which is like an insane statistic. And it’s not like we funded a bunch of non-technical founders. Like every one of these people is highly technical, completely capable of building their own product from scratch a year ago…”

A comment they shared from founder Leo Paz reads, “I think the role of Software Engineer will transition to Product Engineer. Human taste is now more important than ever as codegen tools make everyone a 10x engineer.”

Still from a YouTube video that shows a quote from Leo Paz

While vibe coding—the term coined by Andrej Karpathy for coding by directing AI—is about leveraging AI for programming, it’s a window into what will happen to the software development lifecycle as a whole and how all the disciplines, including product management and design, will be affected.

A skill inversion trend is happening. Being great at execution is becoming less valuable when AI tools can generate deliverables in seconds. Instead, our value as product professionals is shifting from mastering tools like Figma or languages like JavaScript, to strategic direction. We’re moving from the how to the what and why; from craft to curation. As Leo Paz says, “human taste is now more important than ever.”

The Traditional Value Hierarchy

For the last 15–20 years, the industry has operated on a model of unified teams for software development. Product managers define requirements, manage the roadmap, and align stakeholders. Designers focus on the user interface, ensure visual appeal and usability, and prototype solutions. Engineers design the system architecture and then build the application via quality code.

For each of the core disciplines, execution was paramount. (Arguably, product management has always been more strategic, save for ticket writing.) Screens must be pixel-perfect and code must be efficient and bug-free.

The Forces Driving Inversion

Vibe Coding and Vibe Design

With new AI tools like Cursor and Lovable coming into the mix, the nature of implementation fundamentally changes. In Karpathy’s tweet about vibe coding, he says, “…I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.” He’s telling the LLM what he wants—his intent—and the AI delivers, with some cajoling. Jakob Nielsen picks up on this thread and applies it to vibe design. “Vibe design applies similar AI-assisted principles to UX design and user research, by focusing on high-level intent while delegating execution to AI.”

He goes on:

…vibe design emphasizes describing the desired feeling or outcome of a design, and letting AI propose the visual or interactive solutions​. Rather than manually drawing every element, a designer might say to an AI tool, “The interface feels a bit too formal; make it more playful and engaging,” and the AI could suggest color changes, typography tweaks, or animation accents to achieve that vibe. This is analogous to vibe coding’s natural language prompts, except the AI’s output is a design mockup or updated UI style instead of code.

This sounds very much like creative direction to me. It’s shaping the software. It’s using human taste to make it better.

Acceleration of Development Cycles

The founder of TrainLoop also says in the YC survey that his coding has sped up one-hundred-fold since six months ago. He says, “I’m no longer an engineer. I’m a product person.”

This means that experimentation is practically free. What’s the best way of creating a revenue forecasting tool? You can whip up three prototypes in about 10 minutes using Lovable and then get them in front of users. Of course, designers have always had the power to explore and create variations for an interface. But to have three functioning prototypes in 10 minutes? Impossible.

With this new-found coding superpower, the idea of bespoke, personal software is starting to take off. Non-coders like The New York Times’ Kevin Roose are using AI to create apps just for themselves, like an app that recommends what to pack his son for lunch based on the contents of his fridge. This is an evolution of the low-code/no-code movement of recent years. The gap between idea to reality is literally 10 minutes.

Democratization of Creation

Designer Tommy Geoco has a running series on his YouTube channel called “Build Wars” where he invites a couple of designers to battle head-to-head on the same assignment. In a livestream in late February, he and his cohosts had professional web designer Brett Williams square off against 19-year-old Lovable marketer Henrik Westerlund. Their assignment was to build a landing page for a robotics company in 45 minutes, and they would be judged on design quality, execution quality, interactive quality, and strategic approach.


Forty-five minutes to design and build a cohesive landing page is not enough time. Similar to TV cooking competitions, this artificial time constraint forced the two competitors to focus on what mattered and to use their time strategically. In the end, the professional designer won, but the commentators were impressed by how much a young marketer with little design experience could accomplish with AI tools in such a short time, suggesting a fundamental shift in how websites may be created in the future.

Cohost Tom Johnson suggested that small teams using AI tools will outcompete enterprises resistant to adopt them, “Teams that are pushing back on these new AI tools… get real… this is the way that things are going to go. You’re going to get destroyed by a team of 10 or five or one.”

The Maturation Cycle of Specialized Skills

“UX and UX people used to be special, but now we have become normal,” says Jakob Nielsen in a recent article about the decline of ROI from UX work. For enterprises, product or user experience design is now baseline. AI will dramatically increase the chances that young startups, too, will employ UX best practices.

Obviously, with AI, engineering is more accessible, but so are traditional product management processes. ChatGPT can write a pretty good PRD. Dovetail’s AI-powered insights supercharge customer discovery. And yes, why not use ChatGPT to write user stories and Jira tickets?

The New Value Hierarchy

From Technical Execution to Strategic Direction & Taste Curation

In the AI-augmented product development landscape, articulating vision and intent becomes significantly more valuable than implementation skills. While AI can generate better and better code and design assets, it can’t determine what is worth building or why.

Mike Krieger, cofounder of Instagram and now Chief Product Officer at Anthropic, identifies this change clearly. He believes the true bottleneck in product development is shifting to “alignment, deciding what to build, solving real user problems, and figuring out a cohesive product strategy.” These are all areas he describes as “very human problems” that we’re “at least three years away from models solving.”

This makes taste and judgment even more important. When everyone can generate good-enough work via AI, having a strong point of view becomes a differentiator. To repeat Leo Paz, “Human taste is now more important than ever as codegen tools make everyone a 10x engineer.” The ability to recognize and curate quality outputs becomes as valuable as creating them manually.

This transformation manifests differently across disciplines but follows the same pattern:

  • Product managers shift from writing detailed requirements to articulating problems worth solving and recognizing valuable solutions
  • Designers transition from pixel-level execution to providing creative direction that guides AI-generated outputs
  • Engineers evolve from writing every line of code to focusing on architecture, quality standards, and system design

Each role maintains its core focus while delegating much of the execution to AI tools. The skill becomes knowing what to ask for rather than how to build it—a fundamental reorientation of professional value.

From Process Execution to User Understanding

In a scene from the film "Blade Runner," replicant Leon Kowalski can't quite understand how to respond to the situation about the incapacitated tortoise.

In a scene from the film Blade Runner, replicant Leon Kowalski can’t quite understand how to respond to the situation about the incapacitated tortoise.

While AI is great at summarizing mountains of text, it can’t yet replicate human empathy or understand nuanced user needs. The human ability to interpret context, detect unstated problems, and understand emotional responses remains irreplaceable.

Nielsen emphasizes this point when discussing vibe coding and design: “Building the right product remains a human responsibility, in terms of understanding user needs, prioritizing features, and crafting a great user experience.” Even as AI handles more implementation, the work of understanding what users need remains distinctly human.

Research methodologies are evolving to leverage AI’s capabilities while maintaining human insight:

  • AI tools can process and analyze massive amounts of user feedback
  • Platforms like Dovetail now offer AI-powered insights from user research
  • However, interpreting this data and identifying meaningful patterns still requires human judgment

The gap between what users say they want and what they actually need remains a space where human intuition and empathy create tremendous value. Those who excel at extracting these insights will become increasingly valuable as AI handles more of the execution.

From Specialized to Cross-Functional

The traditional boundaries between product disciplines are blurring as AI lowers the barriers between specialized areas of expertise. This transformation is enabling more fluid, cross-functional roles and changing how teams collaborate.

The aforementioned YC podcast highlights this evolution with Leo Paz’s observation that software engineers will become product engineers. The YC founders who are using AI-generated code are already reaping the benefits. They act more like product people and talk to more customers so they can understand them better and build better products.

Concrete examples of this cross-functionality are already emerging:

  • Designers can now generate functional prototypes without developer assistance using tools like Lovable
  • Product managers can create basic UI mockups to communicate their ideas more effectively
  • Engineers can make design adjustments directly rather than waiting for design handoffs

This doesn’t mean that all specialization disappears. As Diana Hu from YC notes:

Zero-to-one will be great for vibe coding where founders can ship features very quickly. But once they hit product market fit, they’re still going to have a lot of really hardcore systems engineering, where you need to get from the one to n and you need to hire very different kinds of people.

The result is a more nuanced specialization landscape. Early-stage products benefit from generalists who can work across domains with AI assistance. As products mature, deeper expertise remains valuable but is focused on different aspects: system architecture rather than implementation details, information architecture rather than UI production, product strategy rather than feature specification.

Team structures are evolving in response:

  • Smaller, more fluid teams with less rigid role definitions
  • T-shaped skills becoming increasingly valuable—depth in one area with breadth across others
  • New collaboration models replacing traditional waterfall handoffs
  • Emerging hybrid roles that combine traditionally separate domains

The most competitive teams will find the right balance between AI capabilities and human direction, creating new workflows that leverage both. As Johnson warned in the Build Wars competition, “Teams that are pushing back on these new AI tools, get real! This is the way that things are going to go. You’re going to get destroyed by a team of 10 or five or one.”

The ability to adapt across domains is becoming a meta-skill in itself. Those who can navigate multiple disciplines while maintaining a consistent vision will thrive in this new environment where execution is increasingly delegated to artificial intelligence.

Thriving in the Inverted Landscape

The future is already here. AI is fundamentally inverting the skill hierarchy in product development, creating opportunities for those willing to adapt.

Product professionals who succeed in this new landscape will be those who embrace this inversion rather than resist it. This means focusing less on execution mechanics and more on the strategic and human elements that AI cannot replicate: vision, judgment, and taste.

For product managers, double down on your ability to extract profound insights from user conversations and to articulate clear, compelling problem statements. Your value will increasingly come from knowing which problems are worth solving rather than specifying how to solve them. AI also can’t align stakeholders or prioritize the work.

For designers, invest in strengthening your design direction skills. The best designers will evolve from skilled craftspeople to visionaries who can guide AI toward creating experiences that resonate emotionally with users. Develop your critical eye and the language to articulate what makes a design succeed or fail. Remember that design has always been about the why.

For engineers, emphasize systems thinking and architecture over implementation details. Your unique value will come from designing resilient, scalable systems and making critical technical decisions that AI cannot yet make autonomously.

Across all roles, three meta-skills will differentiate the exceptional from the merely competent:

  • Prompt engineering: The ability to effectively direct AI tools
  • Judgment and taste development: The discernment to recognize quality and make value-based decisions
  • Cross-functional fluency: The capacity to work effectively across traditional role boundaries

We’re seeing the biggest shift in how we build products since agile came along. Teams are getting smaller and more flexible. Specialized roles are blurring together. And product cycles that used to take months now take days.

There is a silver lining. We can finally focus on what actually matters: solving real problems for real people. By letting AI handle the grunt work, we can spend our time understanding users better and creating things that genuinely improve their lives.

Companies that get this shift will win big. Those that reorganize around these new realities first will pull ahead. But don’t wait too long—as Nielsen points out, this “land grab” won’t last forever. Soon enough, everyone will be working this way.

The future belongs to people who can set the vision and direct AI to make it happen, not those hanging onto skills that AI is rapidly taking over. Now’s the time to level up how you think about products, not just how you build them. In this new world, your strategic thinking and taste matter more than your execution skills.

A cut-up Sonos speaker against a backdrop of cassette tapes

When the Music Stopped: Inside the Sonos App Disaster

The fall of Sonos isn’t as simple as a botched app redesign. Rather, it is the cumulative result of poor strategy, hubris, and forgetting the company’s core value proposition. To recap: Sonos rolled out a new mobile app in May 2024, promising “an unprecedented streaming experience.” Instead, it delivered a severely handicapped app that was missing core features and broke users’ systems. By January 2025, that failed launch had wiped nearly $500 million from the company’s market value and cost CEO Patrick Spence his job.

What happened? Why did Sonos go backwards on accessibility? Why did the company remove features like sleep timers and queue management? Immediately after the rollout, the backlash began to snowball into a major crisis.

A collage of torn newspaper-style headlines from Bloomberg, Wired, and The Verge, all criticizing the new Sonos app. Bloomberg’s headline states, “The Volume of Sonos Complaints Is Deafening,” mentioning customer frustration and stock decline. Wired’s headline reads, “Many People Do Not Like the New Sonos App.” The Verge’s article, titled “The new Sonos app is missing a lot of features, and people aren’t happy,” highlights missing features despite increased speed and customization.

As a designer and longtime Sonos customer who was also affected by the terrible new app, a little piece of me died inside each time I read the word “redesign.” It was hard not to take it personally, knowing that my profession could have anything to do with how things turned out. Was it really Design’s fault?

Even after devouring dozens of news articles, social media posts, and company statements, I couldn’t get a clear picture of why the company made the decisions it did. I cast a net on LinkedIn, reaching out to current and former designers who worked at Sonos. This story is based on hours of conversations with several of these employees, who agreed to talk only on the condition of anonymity. I’ve also added context from public reporting.

The shape of the story isn’t much different than what’s been reported publicly. However, the inner mechanics of how those missteps happened are educational. The Sonos tale illustrates the broader challenges that most companies face as they grow and evolve. How do you modernize aging technology without breaking what works? How do public company pressures affect product decisions? And most importantly, how do organizations maintain their core values and user focus as they scale?

It Just Works

Whenever I moved into a new home, I always set up the audio system first. Speaker cable had to be routed under the carpet, along the baseboard, or through walls and floors. Getting speakers into the right places made cable management a constant challenge, especially with a surround setup. Then Sonos came along and said, “Wires? We don’t need no stinking wires.” (OK, they didn’t really say that. Their first wireless speaker, the PLAY:5, launched in late 2009.)

I purchased my first pair of Sonos speakers over ten years ago. I had recently moved into a modest one-bedroom apartment in Venice, and I liked the idea of hearing my music throughout the place. Setting up the two PLAY:1 speakers was simple; there were no cables to run. At the time, you had to plug into Ethernet for setup and keep at least one component hardwired in, but once that was done, adding the other speaker was easy.

The best technology is often invisible. It turns out that making it work this well wasn’t easy. According to their own history page, in its early days, the company made the difficult decision to build a distributed system where speakers could communicate directly with each other, rather than relying on central control. It was a more complex technical path, but one that delivered a far better user experience. The founding team spent months perfecting their mesh networking technology, writing custom Linux drivers, and ensuring their speakers would stay perfectly synced when playing music.

A network architecture diagram for a Sonos audio system, showing Zone Players, speakers, a home network, and various audio sources like a computer, MP3 store, CD player, and internet connectivity. The diagram includes wired and wireless connections, a WiFi handheld controller, and a legend explaining connection types. Handwritten notes describe the Zone Player’s ability to play, fetch, and store MP3 files for playback across multiple zones. Some elements, such as source converters, are crossed out.

As a new Sonos owner, a concept that was a little challenging to wrap my head around was that the speaker is the player. Instead of casting music from my phone or computer to the speaker, the speaker itself streamed the music from my network-attached storage (NAS, aka a server) or streaming services like Pandora or Spotify.

One of my sources told me about the “beer test” they had at Sonos. If you’re having a house party and run out of beer, you could leave the house without stopping the music. This is a core Sonos value proposition.
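That test falls straight out of the architecture. Below is a conceptual sketch in Python of the speaker-is-the-player model; it is emphatically not Sonos’s code, and every name and URL in it is hypothetical:

```python
import threading
import urllib.request

class Speaker:
    """Conceptual model: the speaker itself is the player."""

    def play(self, stream_url: str) -> None:
        # The speaker pulls audio directly from the NAS or streaming
        # service. Playback lives on the speaker, in its own thread,
        # not on the phone that issued the command.
        threading.Thread(target=self._stream, args=(stream_url,)).start()

    def _stream(self, url: str) -> None:
        with urllib.request.urlopen(url) as audio:
            while chunk := audio.read(4096):
                self._decode_and_output(chunk)

    def _decode_and_output(self, chunk: bytes) -> None:
        ...  # hypothetical decode/amplifier output stage

# The controller app only issues a command; it holds no audio session.
speaker = Speaker()
speaker.play("http://nas.local/music/track.mp3")  # hypothetical NAS URL
# The phone can now leave the network entirely (the beer run)
# and the music keeps playing.
```

Contrast this with casting, where the phone holds the audio session and playback dies the moment it leaves the network.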

A Rat’s Nest: The Weight of Tech Debt

The original Sonos technology stack, built carefully and methodically in the early 2000s, had served the company well. Its products always passed the beer test. However, two decades later, the company’s software infrastructure became increasingly difficult to maintain and update. According to one of my sources, who worked extensively on the platform, the codebase had become a “rat’s nest,” making even simple changes hugely challenging.

The tech debt had been accumulating for years. While Sonos continued adding features like Bluetooth playback and expanding its product line, the underlying architecture remained largely unchanged. The breaking point came with the development of the Sonos Ace headphones. This major new product category required significant changes to how the Sonos app handled device control and audio streaming.

Rather than tackle this technical debt incrementally, Sonos chose to completely rewrite its mobile app. This “clean slate” approach was seen as the fastest way to modernize the platform. But as many developers know, complete refactors are notoriously risky. And unlike in its early days, when the company would delay launches to get things right—famously once stopping production lines over a glue issue—this time Sonos seemed determined to push forward regardless of quality concerns.

Set Up for Failure

The rewrite project began around 2022 and would span approximately two years. The team did many things right initially—spending a year and a half conducting rigorous user testing and building functional prototypes using SwiftUI. According to my sources, these prototypes and tests validated their direction—the new design was a clear improvement over the current experience. The problem wasn’t the vision. It was execution.

A wave of new product managers arrived around this time, eager to make their mark but lacking deep knowledge of Sonos’s ecosystem. One designer noted it was “the opposite of normal feature creep”—while product designers typically push for more features, in this case they were the ones advocating for focusing on the basics.

As a product designer, this role reversal is particularly telling. Typically in a product org, designers advocate for new features and enhancements, while PMs act as a check on scope creep, ensuring we stay focused on shipping. When this dynamic inverts—when designers become the conservative voice arguing for stability and basic functionality—it’s a major red flag. It’s like architects pleading to fix the foundation while the clients want to add a third story. The fact that Sonos’s designers were raising these alarms, only to be overruled, speaks volumes about the company’s shifting priorities.

The situation became more complicated when the app refactor project, codenamed Passport, was coupled to the hardware launch schedule for the Ace headphones. One of my sources described this coupling of hardware and software releases as “the Achilles heel” of the entire project. With the Ace’s launch date set in stone, the software team faced immovable deadlines for what should have been a more flexible development timeline. This decision and many others, according to another source, were made behind closed doors, with individual contributors being told what to do without room for discussion. This left experienced team members feeling voiceless in crucial technical and product decisions. All that careful research and testing began to unravel as teams rushed to meet the hardware schedule.

This misalignment between product management and design was further complicated by organizational changes in the months leading up to launch. First, Sonos laid off many members of its forward-thinking teams. Then, closer to launch, another round of cuts significantly impacted QA and user research staff. The remaining teams were stretched thin, simultaneously maintaining the existing S2 app while building its replacement. The combination of a growing backlog from years prior and diminished testing resources created a perfect storm.

Feeding Wall Street

A data-driven slide showing Sonos’ customer base growth and revenue opportunities. It highlights increasing product registrations, growth in multi-product households, and a potential >$6 billion revenue opportunity by converting single-product households to multi-product ones.

Measurement myopia can lead to unintended consequences. When Sonos went public in 2018, three metrics the company reported to Wall Street were products registered, Sonos households, and products per household. Requiring customers to register their products is easy enough for a stationary WiFi-connected speaker. But it’s a different story for a portable speaker like the Sonos Roam, which many owners use primarily over Bluetooth. When my daughter moved into the dorms at UCLA two years ago, I bought her a Roam. The speaker required WiFi connectivity and account creation for initial setup, a requirement driven by the company’s need to tabulate product registrations and new households for its quarterly reporting. But the university’s network security blocked that initial connection, so her Bluetooth speaker was a paperweight until she came home for Christmas.

The Content Distraction

A promotional image for Sonos Radio, featuring bold white text over a red, semi-transparent square with a bubbly texture. The background shows a tattooed woman wearing a translucent green top, holding a patterned ceramic mug. Below the main text, a caption reads “Now Playing – Indie Gold”, with a play button icon beneath it. The Sonos logo is positioned vertically on the right side.

Perhaps the most egregious example of misplaced priorities, driven by the need to show revenue growth, was Sonos’ investment into content features. Sonos Radio launched in April 2020 as a complimentary service for owners. An HD, ad-free paid tier launched later in the same year. Clearly, the thirst to generate another revenue stream, especially a monthly recurring one, was the impetus behind Sonos Radio. Customers thought of Sonos as a hardware company, not a content one.

At the time of the Sonos Radio HD launch, “Beagle,” a user in Sonos’s community forums, wrote (emphasis mine):

I predicted a subscription service in a post a few months back. I think it’s the inevitable outcome of floating the company - they now have to demonstrate ways of increasing revenue streams for their shareholders. In the U.K the U.S ads from the free version seem bizarre and irrelevant.

If Sonos wish to commoditise streaming music that’s their business but I see nothing new or even as good as other available services. What really concerns me is if Sonos were to start “encouraging” (forcing) users to access their streams by removing Tunein etc from the app. I’m not trying to demonise Sonos, heaven knows I own enough of their products but I have a healthy scepticism when companies join an already crowded marketplace with less than stellar offerings. Currently I have a choice between Sonos Radio and Tunein versions of all the stations I wish to use. I’ve tried both and am now going to switch everything to Tunein. Should Sonos choose to “encourage” me to use their service that would be the end of my use of their products. That may sound dramatic and hopefully will prove unnecessary but corporate arm twisting is not for me.

My sources said the company started growing its content team, reflecting the belief that Sonos would become users’ primary way to discover and consume music. However, this strategy ignored a fundamental reality: Sonos would never be able to do Spotify better than Spotify or Apple Music better than Apple.

This split focus had real consequences. As the content team expanded, the small controls team, often diverted to other mandatory projects, struggled with a significant backlog of UX and tech debt. For example, one employee mentioned that a common user fear was playing music in the wrong room. I can imagine the grief I’d get from my wife if I accidentally played my emo Death Cab for Cutie while she was listening to her Eckhart Tolle podcast in the other room. Dozens, if not hundreds, of paper cuts like this remained unaddressed as resources went to building content discovery features that many users would never use. When you buy a speaker, you obviously want to control it and play your music on it. It’s much less obvious that you want to replace your Spotify with Sonos Radio.

But while old-time customers like Beagle didn’t appreciate the addition of Sonos content, it’s not conclusive that the effort was a complete waste. The last mention of Sonos Radio’s performance was in the Q4 2022 earnings call:

Sonos Radio has become the #1 most listened to service on Sonos, and accounted for nearly 30% of all listening.

The company has said it will break out the revenue from Sonos Radio when it becomes material. It has yet to do so in the four years since its release.

The Release Decision

Four screenshots of the Sonos app interface on a mobile device, displaying music playback, browsing, and system controls. The first screen shows the home screen with recently played albums, music services, and a playback bar. The second screen presents a search interface with Apple Music and Spotify options. The third screen displays the now-playing view with album art and playback controls. The fourth screen shows multi-room speaker controls with volume levels and playback status for different rooms.

As the launch date approached, concerns about readiness grew. According to my sources, experienced engineers and designers warned that the app wasn’t ready. Basic features were missing or unstable. The new cloud-based architecture was causing latency issues. But with the Ace launch looming and business pressures mounting, these warnings fell on deaf ears.

The aftermath was swift and severe. Like countless other users, I found myself struggling with an app that had suddenly become frustratingly sluggish. Basic features that had worked reliably for years became unpredictable. Speaker groups would randomly disconnect. Simple actions like adjusting volume now had noticeable delays. The UX was confusing. The elegant simplicity that had made Sonos special was gone.

Making matters worse, the company couldn’t simply roll back to the previous version. The new app’s architecture was fundamentally incompatible with the old one, and the cloud services had been updated to support the new system. Sonos was stuck trying to fix issues on the fly while customers grew increasingly frustrated.

Looking Forward

Since the PR disaster, the company has steadily improved the app. It even published a public Trello board to keep customers apprised of its progress, though progress seemed to stall at some point, and it has since been retired.

A Trello board titled “Sonos App Improvement & Bug Tracker” displaying various columns with updates on issues, roadmap items, upcoming features, recent fixes, and implemented solutions. Categories include system issues, volume responsiveness, music library performance, and accessibility improvements for the Sonos app.

Tom Conrad, cofounder of Pandora and a director on Sonos’s board, became the company’s interim CEO after Patrick Spence was ousted. Conrad addressed these issues head-on in his first letter to employees:

I think we’ll all agree that this year we’ve let far too many people down. As we’ve seen, getting some important things right (Arc Ultra and Ace are remarkable products!) is just not enough when our customers’ alarms don’t go off, their kids can’t hear their playlist during breakfast, their surrounds don’t fire, or they can’t pause the music in time to answer the buzzing doorbell.

Conrad signals that the company has already begun shifting resources back to core functionality, promising to “get back to the innovation that is at the heart of Sonos’s incredible history.” But rebuilding trust with customers will take time.

Since Conrad’s takeover, more top brass from Sonos left the company, including the chief product officer, the chief commercial officer, and the chief marketing officer.

Lessons for Product Teams

I admit that my original hypothesis in writing this piece was that B2C tech companies are less customer-oriented in their product management decisions than B2B firms. I think about the likes of Meta making product decisions to juice engagement. But after more conversations with PM friends and lurking in r/ProductManagement, that hypothesis was debunked. Sonos just ended up making a bunch of poor decisions.

One designer noted that what happened at Sonos isn’t necessarily unique. Incentives, organizational structures, and inertia can all color decision-making at any company. As designers, product managers, and members of product teams, what can we learn from Sonos’s series of unfortunate events?

  1. Don’t let tech debt get out of control. Companies should not let technical debt accumulate until a complete rewrite becomes necessary. Instead, they need processes to modernize their code constantly.
  2. Protect core functionality. Maintaining core functionality must be prioritized over new features when modernizing platforms. After all, users care more about reliability than fancy new capabilities. You simply can’t mess up what’s already working.
  3. Organizational memory matters. New leaders must understand and respect institutional knowledge about technology, products, and customers. Quick changes without deep understanding can be dangerous.
  4. Listen to the OG. When experienced team members raise concerns, those warnings deserve serious consideration.
  5. Align incentives with user needs. Organizations need to create systems and incentives that reward user-centric decision making. When the broader system prioritizes other metrics, even well-intentioned teams can drift away from user needs.

As a designer, I’m glad I now understand it wasn’t Design’s fault. In fact, the design team at Sonos tried to warn the powers-that-be about the impending disaster.

As a Sonos customer, I’m hopeful that Sonos will recover. I love their products—when they work. The company faces months of hard work to rebuild customer trust. For the broader tech industry, it is a reminder that even well-resourced companies can stumble when they lose sight of their core value proposition in pursuit of new initiatives.

As one of my sources reflected, the magic of Sonos was always in making complex technology invisible—you just wanted to play music, and it worked. Somewhere along the way, that simple truth got lost in the noise.


P.S. I wanted to acknowledge Michael Tsai’s excellent post on his blog about this fiasco. He’s been constantly updating it with new links from across the web. I read all of those sources when writing this post.

Zuckerberg believes Apple “[hasn’t] really invented anything great in a while…”

Appearing on Joe Rogan’s podcast this week, Meta CEO Mark Zuckerberg said that Apple “[hasn’t] really invented anything great in a while. Steve Jobs invented the iPhone and now they’re just kind of sitting on it 20 years later.”

Let’s take a look at some hard metrics, shall we?

I did a search of the USPTO site for patents filed by Apple and Meta since 2007. In that period, Apple filed for 44,699 patents. Meta, née Facebook, filed for 4,839, or roughly a tenth of Apple’s count.

Side-by-side screenshots of patent searches from the USPTO database showing results for Apple Inc. and Meta Platforms. The Apple search (left) returned 44,699 results since 2007, while the Meta search (right) returned 4,839 results.

You can argue that not all companies file for patents for everything, or that Zuck said Apple hasn’t “really invented anything great in a while.” Great being the keyword here.

He left out the following “great” Apple inventions since 2007:

  • App Store (2008)
  • iPad (2010)
  • Apple Pay (2014)
  • Swift (2014)
  • Apple Watch (2015)
  • AirPods (2016)
  • Face ID (2017)
  • Neural engine SoC (2017)
  • SwiftUI (2019)
  • Apple silicon (2020)
  • Vision Pro (2023) [arguable, since it wasn’t a commercial success, but definitely a technical feat]

The App Store, I’d argue, is on the same level as the iPhone because it opened up an entire new economy for developers, resulting in an astounding $935 billion market in 2025. Apple Watch might be a close second, kicking off a $38 billion market for smartwatches.

Let’s think about Meta’s since 2007, excluding acquisitions*:

  • Facebook Messenger (2011)
  • React (2013)
  • React Native (2015)
  • GraphQL (2015)
  • PyTorch (2016)
  • Ray-Ban Stories (2021)
  • Llama (2023)

*Yes, excluding acquisitions, as Zuckerberg is talking about inventions. That’s why WhatsApp, Instagram, and Quest are not included. Anything I’m missing on this list?

As you can see, other than Messenger and the Ray-Ban glasses, the rest of Meta’s inventions are aimed at developers, not consumers. I’m being a little generous.

Update 1/12/2025

I’ve added some products to the lists above based on some replies to my Threads post. I also added a sentence to clarify excluding acquisitions.
