
If you’re a designer who feels the ground shifting but doesn’t know where to step, Erika Flowers built a free, structured curriculum for exactly that moment. Zero-Vector Design is her framework for collapsing the handoff between design and engineering, using AI agents as crew rather than replacements. The distinction she draws between this and vibe coding is worth internalizing:

You bring the systems thinking, the architecture, the years of knowing what good looks like. The AI extends your reach, not your judgment. Speed without intention is just faster failure. Speed with intention is leverage.

Six levels, 60+ lessons, all free. Worth bookmarking.

Zero-Vector Design brand card on dark background with tagline "From intent to artifact, directly." and website zerovector.design

Zero-Vector Design

A design philosophy for the age of AI. No intermediary. No translation layer. No friction. From intent to artifact, directly.

zerovector.design

Weber Wong’s “artifact thinking” names the problem: creative work that produces one-off outputs, each beginning from scratch. Prompts are artifacts. Skills are not.

Nick Babich, following up his earlier roundup of Claude skills, looks at Anthropic’s skill-creator, a meta-skill that generates and evaluates new skills. His framing of what a skill actually is:

Many people explain the role of a skill as a set of instructions that Claude automatically activates for a particular task. While this is a correct way to describe its behavior, it’s better to think of a skill as a recipe. Just like when we cook something, we rely on a recipe to do the job correctly, Claude will rely on a dedicated skill.

Recipes compound. You refine them, share them, adapt them for new contexts. Prompts are disposable. Skills persist.

And now skills can write other skills. Babich walks through the full skill-creator setup, and the most interesting detail is the self-evaluation loop:

The great thing about Skill Creator is that it triggers a process that evaluates the quality of output a newly created skill will produce. This evaluation is exactly what helps you achieve better results with your skill.

Worth following along if you’re building your own. (And you should be!)
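Babich’s recipe metaphor maps directly onto what a skill physically is: a folder containing a SKILL.md file, where YAML frontmatter tells Claude when to reach for it and the body holds the instructions. A minimal sketch of that shape (the skill name and steps here are illustrative, not taken from Babich’s walkthrough):

```markdown
---
name: changelog-writer
description: Use when the user asks to draft or update a product changelog entry
---

# Changelog Writer

1. Ask for the release version and the list of merged changes.
2. Group changes under Added / Changed / Fixed headings.
3. Write one line per entry, past tense, linking the PR where provided.
```

The `description` field is what Claude scans when deciding whether a skill applies; everything below the frontmatter is the recipe it follows once activated.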

Title graphic for "Claude Skills 2.0" featuring a terracotta square with a white silhouetted head containing a flower or starburst design.

Claude Skills 2.0 for Product Designers

Anthropic has recently improved the process of creating new Claude Skills, and this improvement is so significant that it almost feels like…

uxplanet.org

Most design teams treat the design system as the starting point. Open a new project, pull in the component library, start assembling. It’s efficient. It’s also a trap, according to one designer.

David Hoang, writing for Proof of Concept:

I start without a design system. This is deliberate. Production-grade components carry assumptions—spacing, hierarchy, interaction patterns—that narrow the solution space before you’ve had a chance to explore it. If I’m proposing a feature, the design system is the right starting point. But in exploration mode, the system comes later. Sketches are for divergence; design systems are instruments of convergence.

Design systems exist to create consistency, not ideas. When you reach for them too early, you may be converging before you’ve diverged.

Hoang’s workflow inverts the order: sketch unconstrained in code, dial up technical fidelity first, bring the design system in only after you’ve found directions worth pursuing. LLMs make that final step nearly free:

The design system isn’t a starting point—it’s a finishing move. You sketch unconstrained to explore the problem space, then snap your best ideas onto the system’s rails to see if they hold up. The LLM makes that snap nearly instant, so I can run the full loop—sketch, evaluate, systemize—multiple times in a single session. Ideas that break under the system’s constraints get caught early. Ideas that survive get stronger.

The designer makes every structural decision. The LLM handles the re-skinning. Production work, not judgment work.

And ideas that break the system’s constraints surface gaps worth contributing back. That’s the part most design system teams miss. The system should learn from the exploration it constrains, not just gate it.

Hand-drawn diagram showing multiple "Code slides" feeding into a central "Draw tool" grid, which outputs to a "Solution" box on the right.

Sketching with code

Issue 286: Treating code like a pencil, not a blueprint

proofofconcept.pub

Director. Orchestrator. Architect. Different words for the same shift. Stop making things one at a time. Start building systems that make things.

Weber Wong, writing for Every, gives this shift a useful name: artifact thinking.

I call this mental model artifact thinking: creative work that produces discrete outputs, one at a time, each beginning from scratch. Traditional tools like Photoshop and Illustrator, which demand endless hand-tuned adjustments and manual refinements to produce a single polished image, trap you in this way of working. Midjourney and DALL-E feel like liberation because they generate outputs so quickly, and you can communicate with them in the same language you speak every day. But visual prompts, too, are one-time, disposable things. You can’t hand them to a colleague and be confident you will get the same result. The magic of near-instantaneous generation masks the fact that you are still in artifact thinking.

That last line is the sharp one. Adopting Midjourney doesn’t mean you’ve left artifact thinking. You’re still producing one-offs—just faster ones. The orchestrator gap isn’t about which tool you use. It’s about whether you’re building systems or pressing buttons.

Wong’s proposed fix is node-based visual programming—workflows you can inspect, modify, and share. He knows it sounds like he’s asking designers to become engineers:

I understand the resistance to this idea. Some people hear “visual programming” and think we’re trying to turn designers into engineers. That’s backwards. We’re trying to give creative professionals the power that programmers have always had: the ability to build systems that work while you sleep, that can be stored as multiple versions and shared and improved, and that take what people already know how to do and make it something anyone can run.

I’ve been asking for canvas-first tools, not chatbox-first ones. Wong is right that chat alone isn’t enough for professional creative work. “Artifact thinking” is a concept worth keeping—regardless of whether Flora is the tool that finally kills it.

Person wearing a "node-pilled" cap typing at a keyboard with red strings tangled around their fingers, overlaid with the word "THESIS."

Creative Work Is About to Look a Lot More Like Programming

Flora’s Weber Wong on why creative professionals need to stop thinking in artifacts and start thinking in systems

every.to

Designers have been saying this for years. Cameras don’t take pictures, photographers do. Tools don’t make you a better designer. Now the PM world is arriving at the same conclusion.

Shreyas Doshi argues that AI tools will commoditize across companies—any effective tool becomes common knowledge—and the only durable career moat is the human judgment applied on top of AI outputs. He calls it “Product Sense.”

Tools have never been a significant source of alpha in product success and that is not changing with AI tools. What this means for you personally is that - while you can and should use all the AI tools you can - you cannot bank on your use of those AI tools today to provide you a long-term advantage in your product career.

Replace “product people” with “designers” and this could be a post on my blog. The five skills Shreyas decomposes Product Sense into—empathy, simulation, strategic thinking, taste, creative execution—are skills good designers have cultivated under different names for decades.

The piece includes an appended Claude conversation that stress-tests the argument. The sharpest exchange challenges the Silicon Valley orthodoxy that fast B+ beats slow A+:

In practice, the B+ decision made quickly tends to create a cascade of follow-on decisions, each of which is also slightly off, and you end up significantly off-course in ways that are expensive to correct. Whereas the A+ decision, even if it takes longer, tends to set you on a trajectory where subsequent decisions are easier and more obvious. The compounding effect favors quality of judgment, not speed of judgment.

Good judgment compounds. Bad judgment compounds too, just in the wrong direction.

Definition slide: "Product Sense is the ability to make correct product decisions, both macro & micro, in the presence of significant ambiguity."

Why Product Sense is the only product skill that will matter in the AI age

I get asked all the time:

shreyasdoshi.substack.com

Eugene O’Neill had a line: “Critics? I love every bone in their heads.” I think about it whenever someone proposes that what design really needs is more people who understand it without doing it.

Jon Kolko, writing for Interactions Magazine, argues that design is experiencing a disciplinary “turn”—away from making and toward literacy. Drawing on Richard Buchanan’s 1992 framework of design as a “liberal art of technological culture,” he proposes a future with fewer practitioners and more people who can read, critique, and discuss designed artifacts without designing them.

Rather than viewing design as an applied craft, a liberal art of technological culture recasts design as a way of understanding our role in the designed world around us. It’s difficult for many practitioners to imagine this, because making things is so integral to the idea of design, and embedding design in the humanities is very different from viewing it as an organizing principle like the humanities. But if design is not about making things, but instead about understanding the things that are made, vocation is no longer a goal of design education.

Kolko’s diagnosis is sharp—the layoffs, the AI anxiety, the assembly-line feeling of modern product design. And he sits with the discomfort rather than cheerleading:

As a craftsperson and a maker, I don’t like the way this turn feels, because it appears threatening to the fundamentals of the profession. Understanding design without making things seems impossible. I don’t like this development as an educator either, because it means my students, trained to be practitioners, may find no design jobs, despite getting a very expensive education. But as someone observing the various trends chipping away at what is actually meaningful about being a designer—our ability to humanize the dysfunction of technological change—I am drawn to this turn.

I respect the honesty. But I have a love/hate relationship with critics. It’s easy to throw stones from a perch. When you’re in it—fighting organizational politics, staring at data, listening to customers, compromising with engineering—the outcomes are never as clean as you’d hoped. Design literacy matters. But literacy divorced from practice produces critics, not designers. The world doesn’t need more critics. It needs more people who understand, from lived experience, why the compromises were made.

Jon Kolko - A Design Turn

Designers are anxious. Layoffs have not let up, AI has seemingly trivialized our magic skill of making things, and practicing designers describe the assembly-style nature of software design as soul-crushing.

jonkolko.com

Three people at three different companies, same conclusion. Former Apple designer Jason Yuan calls intelligence “the new materiality” in the previously linked Fast Company piece. Brian Lovin says Notion’s design team can’t design AI products in Figma because the material doesn’t live there. Jenny Blackburn, Google’s VP of UX for Gemini, puts it most directly.

Eli Woolery and Aarron Walter, writing for Design Better, synthesized interviews they’ve done with Google design leaders across YouTube, Search, and Gemini. Blackburn’s framing:

The model is the material that we are designing with, and the more you understand the material, the more you can innovate with it.

You can only direct as well as you understand. But this material behaves unlike anything designers have worked with before. Blackburn on the risk of over-constraining it:

One of the challenges is that these models are so capable. In many ways, they’re actually more capable than you even expect as a designer, and so the risk is that you actually add too much UI that limits the value that the model can provide that would come if you just facilitated a direct conversation between the user and the model.

The Gemini team’s response is smart. When users wrote too-short prompts for custom Gems, they didn’t add a tutorial. They added a “magic wand” that expands the prompt but doesn’t submit it. The user reviews, edits, learns. Teaching without lecturing.

Every previous design material—pixels, paper, aluminum—is deterministic. You shape it, it stays shaped. AI models are probabilistic. Same prompt, different results. Understanding this material isn’t like understanding clay. It’s like understanding weather.

The piece also covers YouTube’s disciplined “bundles” strategy and Search’s AI reimagining. Worth the full read.

Illustrated map of scattered islands in a blue ocean, each hosting different ecosystems and creatures including dinosaurs, large mammals, birds, and desert cacti.

The Roundup (in depth): Google’s 3 design strategies shaping their most popular products

We go deep into YouTube, Gemini, and Search design strategy

designbetterpodcast.com

I believe in the shokunin mentality. Obsessive iteration, pursuing mastery across decades. But the designers building at the frontier right now are telling a different story.

Mark Wilson, writing for Fast Company, visited Cursor, Anthropic, OpenAI, and Krea in San Francisco. Former Apple designer Jason Yuan, now building his own AI startup:

“You can’t do the old school Apple thing of like, create lickable craft and interface. You can’t because, by the time you’ve done the best interface for ChatGPT 3, you’re on GPT 6.”

That stings a little. The Apple tradition assumes the target holds still long enough to polish. When the platform shifts every few months, polish is a liability.

Anthropic’s head of design Joel Lewenstein is making the same bet:

“Things are moving so fast that we just have to experiment faster. Convergence is hard. Because you have to figure out what’s shared. You have to build that shared path. You have all of the fringe things that people loved on these other systems. And there’s too much changing too quickly.”

Anthropic built Cowork in five or ten days (depending on who you ask). Ship first, converge later.

What’s telling is who’s embracing this. Yuan and Abs Chowdhury—both former Apple designers, trained in the tradition of lickable craft—have each gone all-in on vibecoding at their startups. Chowdhury transferred rough designs from Photoshop(!) straight into AI code tools. Yuan built his first product mostly alongside AI:

“There’s a new reason to raise lots of money, which is compute. If you have lots of conviction, and you know exactly what you want, like, why would you hire another 20 other people right now to tell you what you’re doing? It’s a coordination cost.”

Yuan calls this the best time to be an “auteur.” The designer who doesn’t wait for engineering to realize the vision, who directs AI the way a film director directs a crew. It’s the orchestrator gap playing out in real time.

I’m not ready to abandon the shokunin mentality. But I’m starting to think the object of obsession needs to shift, from polishing pixels to refining judgment. The craft isn’t in the surface anymore. It’s in knowing what to build.

Wilson’s full piece covers a dozen people across the industry and is worth reading end to end.

Abstract illustration of a chat bubble filled with layered geometric shapes and AI sparkle icons in yellow, blue, and red on a dark background.

‘We just have to experiment faster’: AI’s changed design forever. Now what?

Designers are now coders—or better be. Your interface is a moat—or irrelevant. Inside the dizzying chaos of how AI is upending the design profession, starring its high priests at Anthropic, OpenAI, Cursor, Krea, and more.

fastcompany.com

Notion built a prototype playground for their designers. It’s a single Next.js repo with shared styles and slash commands for deployment. The infrastructure is solid. The adoption question is harder.

Brian Lovin, talking to Claire Vo on How I AI:

It’s still a Next.js app. It’s still React and TypeScript and Git and branches and it’s just a lot of concepts to throw at someone who maybe is used to only prototyping in Figma or they’re intimidated by a terminal or code. And so I’m trying to figure out like how do we make this thing more approachable? How do we make it easier to onboard but also not dumbed down, right? I want people to learn how to use computers. I want people to even subconsciously absorb the ideas of git and branching and pull requests and merging.

“Make it easier but not dumbed down” is the tension every team building AI design tooling is going to hit. Lovin wants designers to actually learn Git, not just have it abstracted away. That’s a bet on long-term capability over short-term adoption. If Notion, with its engineering culture and resources, is still working through this, the rest of the industry has a longer road than the demos suggest.

But Lovin makes a sharp case for why the effort is worth it, especially for AI product design:

I don’t think you can design a good chat experience in Figma. You can design what the chat input looks like. You could design a little chat bubble and a send button and a dropdown for model picker. I think all that’s fine in Figma, but what you can’t design in Figma is what it actually will feel like to use that thing. I probably should have said this at the very beginning, but the reason Prototype Playground existed is because when I started working on Notion AI, I was literally designing conversations in Figma — the user’s going to say this, and then the AI is going to say this, and then it’s going to work perfectly and create a page or a database. You mock these golden paths in Figma and then the engineers go and they build it. And it just doesn’t work that way, right? You send a message, the AI gets stuck, or asks a follow-up question, or does the wrong thing and you need to correct it.

This is the strongest argument I’ve heard for code-first prototyping of AI features. Static mocks enforce golden-path thinking. Real models surface the messy middle: the weird follow-ups, the latency that changes how an interaction feels, the error states you’d never think to mock up.

And yet:

I still use Figma. I probably still spend 60 to 70% of my time in Figma. There’s just certain things that you’re making that don’t need to be in the browser. They don’t need to be coded up. You can just look at it and be like, “Yeah, that’s roughly right. We should just ship that.”

So even the person who built the Prototype Playground still spends most of his time in Figma. Figma isn’t dying just yet, but its scope is narrowing. But for AI features specifically, Lovin’s case is hard to argue with: you need the real model running to know if the design works.

The interview gets most interesting when Lovin describes his operating philosophy for AI agents and how to get them to run longer:

My philosophy on this has been anytime the AI asks you to do something, you should, before responding, try your best to see if you could teach the AI to answer that question for itself. […] So, for example, I’ve taught Claude, “Hey, check your work. One, you can run commands like eslint, right? And like look for actual TypeScript errors.” The second is you can give it access to MCP tools. […] Before installing this, Claude would say to you, “Hey, I’ve implemented your feature. Go take a look at it and let me know what you think.” And remember, our rule is anytime Claude tells you to do something? Ask if you can teach it to do that thing for itself. So, I don’t want to have to look at the browser every time to see if I did it correctly. So, instead, I teach Claude, “Actually, you should be the one to go and open the browser.”

Every interruption from the AI breaks your flow state. That’s orchestration in practice: building infrastructure that lets the AI handle its own quality checks so that you, the designer, stay focused on deciding what to build and whether it’s right.
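Lovin’s rule—anytime Claude tells you to do something, teach it to do that thing for itself—can be encoded once in a project instructions file so every session inherits it. A rough sketch of the idea as CLAUDE.md-style instructions, paraphrasing the behaviors he describes rather than quoting Notion’s actual setup:

```markdown
# Working agreements

Before reporting that a feature is done:

1. Run `npx eslint .` and fix any lint errors you introduced.
2. Run `npx tsc --noEmit` and resolve any TypeScript errors.
3. Open the running app yourself (via the browser MCP tool) and
   verify the change visually before telling me it works.

Never ask me to check something you can check yourself.
```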

Lovin again:

You want your designs to encounter reality as early as possible. And if you imagine this gradient of like I’m scribbling on a napkin on one side to I’m shipping to production and showing customers on the other side, our goal as designers is to move up that gradient towards prod as quickly as possible. […] I just find that when you’re designing something in Figma and then you actually try it in the browser, in the browser you notice a ton of problems. All of a sudden you’re clicking things, you notice loading states, you notice “ah, that didn’t quite work on this screen size.”

Encounter reality as early as possible. That’s the whole argument in six words. There’s a lot more in this conversation, and it’s worth the full watch.

How Notion designers ship live prototypes in minutes | Brian Lovin (Product designer)

Brian Lovin is a designer at Notion AI who has transformed how the design team builds prototypes, by creating a shared code environment powered by Claude Code. Instead of designers working in isolated repositories or limited to static Figma designs, Brian built a collaborative “prototype…

youtube.com

On Jayneil Dalal’s Sneak Peek, Domingo Widen, a staff designer at Intercom, walks through their version of an AI-native design org: Figma MCP plus Claude Code plus Code Connect, producing prototypes that deploy as PRs to GitHub. Designers never check the code. Engineers get real components, not AI hallucinations.

The trick is in the plumbing:

This is something that designers don’t understand, that sometimes they use the MCP without an actual proper code connection, which is good, right? Like the link that you’re sending to AI, it’s going to include a lot of information around the spacing, the token, the color. But it’s not real code connection. The real power that you find is that when you actually connect these components. […] You’re actually giving Claude the actual path to that component in the codebase, so that when you send the link, the button already exists under this path. You don’t need to create it again. You can just import it.

Without Code Connect mapping every component to its import path, AI gets visual information but reinvents components from scratch. The judgment is encoded in the infrastructure, not the model.
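Widen’s “real code connection” has a concrete shape: Code Connect works by publishing a small mapping file per component that ties a Figma node to the component’s actual source location. A sketch using the React API from Figma’s Code Connect docs—these files are published, not executed, and the node URL, prop names, and `Button` component here are placeholders:

```tsx
// Button.figma.tsx — Code Connect mapping file (illustrative names and URL)
import React from 'react'
import figma from '@figma/code-connect'
import { Button } from './Button'

figma.connect(Button, 'https://figma.com/design/abc123?node-id=1-2', {
  props: {
    // Map Figma component properties onto real code props
    label: figma.string('Label'),
    variant: figma.enum('Variant', {
      Primary: 'primary',
      Secondary: 'secondary',
    }),
  },
  // The snippet served to the agent for this node
  example: ({ label, variant }) => <Button variant={variant}>{label}</Button>,
})
```

Because the mapping file points at `./Button`, the agent receives an import of the existing component rather than a regenerated lookalike.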

Widen again:

In the background, every single pattern that we add to the system, every single component that we add to the system, it becomes a new piece of code that designers can use to prototype, that PMs can use to prototype, that engineers can use to prototype and build. And it’s kind of like a compounding effect essentially. So the more things we add to our design system in terms of components and patterns, the better cleanups that we do and the more tunings that we do, everybody kind of can benefit from them.

The compounding is real, but so is the upfront cost. Intercom needed a dedicated team, a prototyping hub, documentation, tutorials, and months of skills engineering to get here. A 20-person startup isn’t replicating this workflow anytime soon.

I wrote about this gap after getting pushback on my own AI-in-design arguments. The tooling works if you already have the infrastructure and the experience. For most designers, that’s not where they are yet.

How I Vibe Code as a Designer at Intercom

👋 Welcome to Sneak Peek with Jay, a series where you will see how top design teams use AI. In this interview Jay chats with Domingo Widen (Staff Product Designer) who shows the AI design process at Intercom!

youtube.com

AI tools made designers faster. The question nobody’s answering is whether their organizations can keep up.

Cameron Worboys, head of product design at Cash App, talking to Michael Riddering on Dive Club:

I think the biggest blockers across all of the tech industry in the next 2 years will not be the speed of building. It’s going to be the operational side and being able to move something from like we have built this thing. How does it move through the operational cogs of product development in order to like get it live to customers? So my view is like how do we set ourselves up for the new world? You have to make sure that your organization is capable at running at the same speed as the AI tools. And these AI tools move fucking fast.

The bottleneck migrated. Building isn’t the constraint anymore. Getting what you’ve built through approvals, reviews, compliance, and deployment is. Cash App’s response has been radical: they’ve flattened to three management layers (they call it “core plus three”), deleted design crits, and are pushing every designer to ship production code.

Worboys on what quality actually looks like at this speed:

The quality piece, there’s a misconception that it comes from a designer sitting in some cave for 3 months and pontificating about the future of software. It literally doesn’t. It comes from reps and the speed which you can be wrong and the speed that you can go again and experiment and experiment and experiment. And I think that’s what we’ve seen change, is the amount designers can produce has exponentially increased and the amount of like bureaucracy and layers you need to run an organization has changed a lot as well.

Quality through iteration, not pontification. That’s always been true, but when each iteration takes minutes instead of days, the gap between teams that ship and teams that sit in review becomes enormous.

Worboys on where this leads:

I believe one of the primary ways which you will create lock-in in the new world is creating apps that feel completely one of one. […] When you think about the future of software development and where it’s going with generative UI, there is nothing in the future that’s going to prevent us from creating these completely one of one experiences. So that’s what is top of mind for me at the moment. And I do think we will get there relatively quickly, that every Cash App does feel unique and completely designed around the person. And then from a business perspective, it creates this deeper, harder to quantify emotional connection with a product that is the same as like your wardrobe. Clothes are by and large like an expression of personal identity.

This is the most concrete product bet I’ve seen on generative UI. Not widgets inside a chat window. Entire apps shaped around the individual. I still think core app chrome should stay stable. But Worboys is betting that consumer fintech is where that line starts to blur.

Cameron Worboys - Inside an AI-native design org

Today’s episode with Cameron Worboys (https://x.com/camworboys) (Head of Product Design at Cash App) is an inside look at how an AI-native design org operates and the ways designers can thrive in this new world.

youtube.com

I’ve been playing around with Pencil along with Paper, both newer agentic design tools. The multi-agent demo is genuinely impressive—six AI agents designing an app simultaneously, each with its own cursor, name, and chat on the canvas.

Tom Krcha, Pencil’s CEO, speaking on Peter Yang’s channel, on the format bet at the center of the product:

It’s generating basically a descriptor for the design. And then what you can do, you can essentially ask it what kind of code you want to convert it into. Because we wanted to make sure that it’s sort of platform agnostic. […] So we have this platform agnostic file format. We call it .pen. It’s essentially just JSON-based format. We wanted to really build this format to be agentic from the ground up.

Krcha frames it as “agentic PDF.” I could get behind platform agnosticism as a philosophy, but I need more convincing. The .pen format is still a translation layer between the design and the code. That means migration from Figma, especially for teams with established design systems. And I’m skeptical that a button in Pencil’s built-in design system will correctly map to the right reusable code component when the agent translates .pen to production code. I need to test it out more for myself.

Krcha again:

We have enterprises using that for this specific purpose, to convert their design systems into pen format and make sure that it lives in the Git. This is the source of truth for everybody now.

“Source of truth” is doing heavy lifting in that sentence. For teams with mature design systems, the source of truth is the code component, not a JSON representation of it.

This is a pretty impressive demo nonetheless, and it’s a moment of delight to give agents a name and a “face” if you will. Krcha:

Those cursors, it seems like a small touch, but it’s the first time I have seen AI humanized. It feels like there’s someone there. It’s crazy, it’s just a cursor.

I Watched 6 AI Agents Design an App Together And It Blew My Mind | Tom Krcha

Tom is the CEO of Pencil, one of the coolest AI design tools that I’ve ever tried. Watching 6 AI agents design a beautiful app in real-time will genuinely blow your mind. Tom showed me how it all works under the hood (a simple JSON file?!) and how you can use Pencil to design right where you code…

youtube.com

Designers aren’t leaving Figma. They’re outgrowing what Figma was built to do.

Punit Chawla, writing for Bootcamp:

Designers are slowly shifting to a building first mindset. Which means that a good chunk of UI designers are moving quickly to AI coding platforms to bring their ideas to life. The “Vibe Coding” trend wasn’t just another tech bubble, but a wake up call for designers to create life like prototypes and MVPs from day zero. In fact, PMs and designers at Meta have publicly stated how they are showing working products instead of UI prototypes.

The shift is real, but “leaving” is the wrong word. Designers aren’t abandoning Figma. They’re adding tools that do things Figma was never designed to do. Figma’s role is narrowing from everything-tool to exploration-and-iteration tool. That’s not the same as dying.

Chawla’s strongest point is structural:

Some companies are built different with a completely separate infrastructure. For Figma to change their infrastructure from the bottom-up will be very difficult. Let’s not forget they are a publicly traded company. Risking major changes can mean risking billions in stakeholder investments. Companies like Cursor on the other hand are built to be building first/coding first products, hence a major advantage.

This is right. Figma’s architecture was purpose-built for collaborative vector editing, not code generation. Bolting on AI code output is a fundamentally different engineering problem. When Figma Make launched, I scored it at 58 out of 100, and it’s getting better, but it’s competing against tools that were born for this.

Where I’d push back is on the builder framing. Designers aren’t becoming coders. They’re becoming directors. A designer who orchestrates AI agents against a design system solves the handoff problem more fundamentally than one who vibe-codes an MVP. One eliminates the bottleneck. The other just moves which side of it you’re standing on.

Chawla hedges his own headline:

Don’t get me wrong, Figma is still the best tool for a majority of creatives and has a strong hold on our day-to-day workflow. Making any strong predictions at this point will be very ill-informed and it’s best to avoid making any conclusions as of now.

Fair enough. But the question worth tracking is whether Figma can expand fast enough to remain relevant as the deliverable shifts from mockups to working software.

Figma app icon being dropped into a recycling bin by a cursor, illustrating uninstalling or abandoning Figma.

Why Are Designers Leaving Figma? The Great Transition.

The Creative Industry Is Changing Rapidly & So Is Figma’s Future

medium.com iconmedium.com

Prototypes have always been alignment tools. Whether you’re testing with users or convincing leadership, the prototype’s job is to make the abstract concrete. That part isn’t new.

What’s worth noticing in Emma Webster’s case study roundup on the Figma blog is who’s doing the prototyping. Three stories. Three product managers. Zero designer protagonists.

ServiceNow’s Ram Devanathan explains the dynamic:

“They have a big portfolio, so they can’t always pivot directly to my project.”

So Ram built it himself in Make. His designer’s mockup missed the nuance he was after, so he took a crack at it:

“Make helped me show what I meant rather than trying to describe it in the abstract. I’m able to explain my ideas better. I’m able to convince people faster. That reduces the whole cycle for me.”

Ticketmaster PM Brian Muehlenkamp prototyped an AI assistant that wasn’t even on the roadmap and shipped it. Affirm’s SVP of Product Vishal Kapoor puts the value in craft terms:

“The real work lives in the variations, rabbit holes, and edge cases. It requires a lot of thinking, a lot of precision, and a lot of love.”

All three stories follow the same arc: PM has an idea, designer is unavailable or the mockup misses the mark, PM builds it in Make, team aligns faster. Designers aren’t the heroes of these stories. They’re the bottleneck the tool routes around.

I don’t think that’s Figma’s intended message. But it’s the one that came through to me.

Colorful abstract illustration mixing UI elements like toggles, cursors, and image placeholders with decorative floral patterns on a purple background.

3 Ways Teams Are Building Conviction Faster With Figma Make | Figma Blog

Product managers at ServiceNow, Ticketmaster, and Affirm are using Figma Make to prototype their way forward.

figma.com iconfigma.com

Designers are builders by nature. We break problems apart, iterate through uncertainty, and treat process itself as something to be shaped. That instinct is exactly what Pete Pachal, writing for Fast Company, identifies as the dividing line in the age of agents:

We’ve trained a generation of office workers to work within software with clear boundaries and reusable templates. If there’s an issue, they call IT. Any feature request gets filtered and, if you’re lucky, put on a roadmap that pushes it out 6-12 months.

In short, most people don’t have a builder mentality to begin with, and expecting them to suddenly be comfortable working and creating with agents is unrealistic.

Pachal draws the line at mindset, not coding ability:

Builders don’t need to be coders, but they do have characteristics that most workers don’t: They seek to understand the process beneath their tasks, and treat that process as modifiable and programmable. More importantly, they see failure and iteration as tolerable, even fun. They thrive in uncertainty.

That’s the design process. What Pachal frames as rare in the broader workforce is default operating mode for most designers. We want to make things. We fiddle with tools and rebuild workflows for fun. The builder mentality isn’t something designers need to acquire; it’s the reason most of us got into this field.

Pachal again:

You don’t have to build agents to matter in an agent-driven workplace. But you do have to understand the systems being built around you, because soon enough, your job will be defined by defaults someone else designed. Most professionals will not build agents. But everyone will work inside systems builders create.

Pachal is describing the orchestrator gap at scale, not just in design but across all knowledge work. And it suggests designers are uniquely positioned to be on the right side of it. Shaping how people interact with systems has always been the job description.

Person viewed from behind facing a large blue screen displaying an AI prompt interface with an "Enter prompt" text field and "Generate" button.

The agent boom is splitting the workforce in two

Most people don’t have a builder mentality and expecting them to suddenly be comfortable working and creating with agents is unrealistic.

fastcompany.com iconfastcompany.com

Every design system is an exercise in compression. You take contextual reasoning—why this spacing, why this type scale—and flatten it into tokens and components that can ship without the backstory.

Mark Anthony Cianfrani:

the reason that your line height is set to 1.1 is because your application is, or was at one point, very data-intensive and thus you needed to optimize for information density. Because one time someone complained about not being able to see a very important row in a table and that mistake cost so much money that you were hired to redesign the whole system. But that’s a mouthful. You can’t throw that over the wall. An engineer can’t implement that. So we make little boxes with all batteries included.

All of that reasoning gets flattened into line-height: 1.1. The token ships. The reasoning doesn’t. Every design system makes this trade-off: you lose the why to gain portability.

Cianfrani argues we don’t have to accept that trade-off anymore:

LLMs give us the ability to ship our exact train of thought, uncompressed, a little bit lossy but still significantly useful. Full context that is instantly digestable. Instead of shipping <Boxes>, ship a factory.

Design systems were never the end goal. They were the best compression format we had. Components and tokens became the shipping containers because the full reasoning was too unwieldy to hand off. That constraint is loosening. In spec-driven development, that factory looks like a structured document: design intent expressed in plain language that AI agents build against directly. The spec is the reasoning, uncompressed.
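As a sketch of what "uncompressed" could look like in practice, here is a hypothetical token format that keeps the rationale attached to the value. The schema and field names are invented for illustration, not from Cianfrani's post or any real design-token spec:

```typescript
// Hypothetical token entry that ships its reasoning alongside its value,
// so an agent (or a human) can decide when deviating is justified.
// All field names here are invented for illustration.
const tokens = {
  "line-height/body": {
    value: 1.1,
    rationale:
      "The app is data-intensive; tight leading maximizes row density. " +
      "A missed table row once cost real money, hence the redesign.",
    deviateWhen: "Long-form reading surfaces, where comfort beats density.",
  },
};

// The compressed artifact still falls out of the uncompressed one:
const css = `line-height: ${tokens["line-height/body"].value};`;
```

The point is the direction of the trade: the portable value is derived from the reasoning, instead of the reasoning being discarded to produce the value.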

Even if the AI bet doesn’t pay off:

And if this whole AI thing turns out to burst, at least you’ve improved the one skill that some of the best designers I’ve ever worked with had in common—the ability to communicate their design decisions into words.

The compression problem was always worth solving, with or without LLMs.

Pale cream background with four small colored squares—teal, burgundy, orange-red, and mustard—aligned along the bottom-right edge.

Designing in English

Components are dead. Use your words.

cianfrani.dev iconcianfrani.dev

The transparency question in autonomous interfaces—what to surface, what to simplify, what to explain—needs a concrete framework. Daniel Ruston offers one.

Ruston names the next layer: the Orchestrated User Interface, where the user states intent and the system generates the right interface and executes across multiple agents. The label is less interesting than what it demands from designers:

We can no longer design rigid for “Happy Paths.” We must design for Probabilistic UX. The designer’s job is no longer drawing the buttons; the designer’s job is defining the thresholds for when the button “presses itself” or when the system needs user to clarify, correct or control.

Ruston makes this concrete with a confidence-threshold pattern:

Low Confidence (<60%): The system asks the user for clarification or provides a vague response requiring follow-up (“Which Jane do you want me to schedule with?”).

Medium Confidence (60–90%): The system makes a tentative suggestion (“Shall I draft a reply based on your last meeting?”).

High Confidence (>90%): The system acts and informs (“I’ve blocked this time on your calendar to prevent conflicts”).

That’s the design lever most AI products skip. They either act without explaining or ask permission for everything. The threshold gives designers something to actually spec: not “should the system do this?” but “how sure does it need to be before it does this without asking?”
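That spec can be written down almost directly. A minimal sketch, assuming Ruston's 60%/90% cut-offs; the types, function name, and messages are invented for illustration, not any product's real API:

```typescript
// Hypothetical dispatcher for Ruston's confidence-threshold pattern.
// The 0.6 and 0.9 thresholds come from the quoted bands; everything
// else (types, messages) is illustrative.
type AgentResponse =
  | { mode: "clarify"; question: string }    // <60%: ask the user
  | { mode: "suggest"; suggestion: string }  // 60–90%: tentative suggestion
  | { mode: "act"; notice: string };         // >90%: act and inform

function dispatch(confidence: number, intent: string): AgentResponse {
  if (confidence < 0.6) {
    return { mode: "clarify", question: `Can you clarify "${intent}"?` };
  }
  if (confidence <= 0.9) {
    return { mode: "suggest", suggestion: `Shall I ${intent}?` };
  }
  return { mode: "act", notice: `Done: ${intent}.` };
}
```

The code is trivial on purpose: once the thresholds are explicit, they become a reviewable design artifact rather than an implicit model behavior.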

Ruston borrows a metaphor from aviation to describe what this visibility should look like:

Analogue cockpits require pilots to look at individual gauges and mentally build a picture of the aircraft’s “system” state. The glass cockpit philosophy shifts the focus to a human-centered design that processes and integrates this data into an intuitive, graphical “picture” of flight.

Same problem, different domain. Most AI products today are analogue cockpits: individual agent outputs, raw status messages, no integrated picture. The confidence thresholds tell the system when to act. The glass cockpit tells the user what’s happening while it acts.

Colorful illustration of a laptop surrounded by keyboards, chat bubbles, sliders, graphs and emoji, connected by flowing ribbons.

The rise of the Orchestrated User Interface (OUI)

Designing for intent in a brave new world.

uxdesign.cc iconuxdesign.cc

The shift from mockups to code is one thing. The shift from designing tools to designing autonomous behavior is another. Sergio Ortega proposes expanding Human-Computer Interaction into Human-Machine Interaction. The label is less interesting than what it points at.

The part that matters for working designers is the transparency problem:

This is where design must decide what to show, what to simplify, and what to explain. Absolute transparency is unfeasible, total opacity should be unacceptable. In short, designing for autonomous systems means finding a balance between technological complexity and human trust.

When a system makes decisions the user didn’t ask for, someone has to decide what gets surfaced. Ortega:

The focus does not abandon user experience, but expands toward system behavior and its influence on human and organizational decisions. Design is no longer only about defining how technology is used, but about establishing the limits of its behavior.

And the implication for design teams:

When the machine acts, design becomes a mechanism of continuous balance.

Brass steampunk robot typing on a gear-driven computer in a cluttered workshop while a goggled inventor watches nearby

Human-Machine Interaction: the evolution of design and user experience

Human-Machine Interaction expands the traditional Human-Computer Interaction framework. An analysis of how autonomous systems and acting technologies are reshaping design and user experience.

sortega.com iconsortega.com

The pitch for generative UI is simple: stop making users navigate menus and let them say what they want. Every AI product demo shows the same thing: type a prompt, get a result, skip the 47-click workflow. It looks like progress.

Jakob Nielsen names what gets lost in the trade:

However, eliminating the Navigation Tax imposes a new Articulation Tax. In a menu-driven GUI, features are visible and therefore discoverable; a user can find a tool they didn’t know existed simply by browsing. In an intent-based AI interface, the user can only access what they can clearly describe.

“Articulation Tax” is the right frame. Menus are clunky, but they show you what’s possible. A blank prompt field assumes you already know what to ask for. That’s fine for power users. It’s a problem for everyone else. Nielsen:

The shift from WIMP to World Models represents a transition from Deterministic to Probabilistic interaction. In a WIMP interface, clicking an icon is deterministic: it produces the exact same result 100% of the time. In a generative world model, the system is probabilistic: the same prompt may yield different results on different attempts.

Deterministic to probabilistic is a trust problem. Users learned to trust GUIs because the same action always produced the same result. That contract is gone. Users will adjust eventually, but most aren’t there yet.

Comic-style History of the GUI showing Xerox Alto, Macintosh, windows/icons, mouse, touch phone, and holographic globe.

History of the Graphical User Interface: The Rise (and Fall?) of WIMP Design

Summary: The GUI’s success wasn’t about any single invention, but a synergy of 4 elements: Window, Icon, Menu, and Pointer, through a 60-year history of usability improvements.

jakobnielsenphd.substack.com iconjakobnielsenphd.substack.com

The design industry spent a decade burying skeuomorphism. Flat won. And now that AI can generate any flat interface in seconds, physicality is interesting again.

Daniel Rodrigues and Lucas Fischer, writing for Every, describe designing the iOS app for Monologue, a smart dictation tool. Rodrigues studied Braun radios and Teenage Engineering synthesizers, and at one point found himself crouched beside his apartment light switch watching how the shadow moved. His defense of skeuomorphism:

Skeuomorphism has been accused of being overdone, and fairly so, but I think of it as borrowing the credibility that physical things naturally have, like weight, shadow, and texture. Something similar to the way a real button communicates—without explicit explanation—that it can be pressed.

This isn’t a texture pack in Photoshop. Rodrigues studied how light behaves on a physical button and rebuilt that behavior in SwiftUI. The texture is functional, not decorative: it tells you the thing is pressable. Rodrigues and Fischer:

Not every AI product needs skeuomorphic buttons and custom sound effects, but the bar for what “functional” means is shifting. AI is making it faster and cheaper to build “functional” products, so the ones that endure are those where someone thought about what it feels like to use them. For us, that meant studying physical objects, exploring 20 wrong directions to find one right one, and hiring a musician to build sounds we could have pulled from a stock library.

Black glossy light switch plate with a teal rocker labeled "M" on a textured teal wall, flanked by ornate black-and-white classical engravings.

How to Design Software With Weight

A look at the design principles that guided our smart dictation app from desktop to iPhone

every.to iconevery.to

Set some type in Illustrator. Print it out on a laser printer. Crumple the paper, really manhandle it. Rub it on the sidewalk. Scratch it with the back of an X-acto blade. Now scan it back in. That was the real analogue way I distressed type back in the 1990s.

That analogue look is trendy again. Hand-rendered type, ink textures, visible grain. All in search of “authenticity.”

Elizabeth Goodspeed, writing for It’s Nice That, has a name for what’s actually happening:

But if analogue only matters as a foil to the digital, why are analogue aesthetics being embraced without analogue tools? If the goal is to prove something wasn’t made by AI, faking “realness” on a computer doesn’t really get us anywhere new. It just reflects a different kind of dissonance (call it fauxbi-sabi). Case in point: I noticed that one vendor selling “analogue” Photoshop actions advertises them with the tagline “Save time, focus on being creative”, a promise suspiciously similar to every argument made in favour of AI.

“Fauxbi-sabi” is the whole scam in one word. AI and digital tools made polish free, so imperfection became the new signal for authenticity. But most of the “handmade” work in those trend reports was made in Photoshop with purchased texture packs. Goodspeed again:

You can think of adding in fake ink splatters a bit like penciling in a beauty mark: an intentional imperfection done to signal authenticity, rather than the byproduct of a real nuisance.

The whole essay is sharp, especially the historical parallels. When Kodak made photography easy in 1888, art photographers retreated to difficult, slow processes to prove human involvement. We’re running the same play 138 years later with different tools. The piece is worth reading in full.

‘THE END OF ANALOGUE’ large black headline on yellow, author ‘ELIZABETH GOODSPEED’ below, columns of text at sides.

“Faking ‘realness’ on a computer doesn’t get us anywhere new.” – Elizabeth Goodspeed on imperfection as design strategy

As AI and digital tools make polish effortless, analogue imperfection has taken on new cultural weight. But what does “analogue” actually mean when most things are made, shared, and consumed digitally?

itsnicethat.com iconitsnicethat.com

I’ve rebuilt my personal website more times than I can count. The tools and platforms change; the principle doesn’t: I own my content, and nobody gets to take it away. I have a Substack, but it’s a digest, a syndication channel. The canonical content lives on my site, on my domain. My website can’t be enshittified by anyone but me.

Henry Desroches makes the case through Ivan Illich’s Tools for Conviviality:

In his book Tools For Conviviality, technology philosopher and social critic Ivan Illich identifies these two critical moments, the optimistic arrival & the deadening industrialization, as watersheds of technological advent. Tools are first created to enhance our capacities to spend our energy more freely and in turn spend our days more freely, but as their industrialization increases, their manipulation & usurpation of society increases in tow.

Illich also describes the concept of radical monopoly, which is that point where a technological tool is so dominant that people are excluded from society unless they become its users. We saw this with the automobile, we saw it with the internet, and we even see it with social media.

That’s social media in one paragraph. You don’t join Instagram because you want to; you join because opting out means opting out of the conversation. Desroches argues personal websites are the answer:

Hand-coded, syndicated, and above all personal websites are exemplary: They let users of the internet to be autonomous, experiment, have ownership, learn, share, find god, find love, find purpose. Bespoke, endlessly tweaked, eternally redesigned, built-in-public, surprising UI and delightful UX. The personal website is a staunch undying answer to everything the corporate and industrial web has taken from us.

The practical argument is strong enough on its own. Own your content. Own your platform. Syndicate outward. The moment you frame it as reclaiming the soul of the internet, you lose the people who most need to hear the boring version: just put your stuff on a domain you control.

Headline “A website to destroy all websites.” above a central dark horse etching; side caption: “How to win the war for the soul of the internet.”

A Website To End All Websites

How to win the war for the soul of the internet, and build the Web We Want.

henry.codes iconhenry.codes

The design process isn’t dead. It’s changing. My belief is that the high-level steps are exactly the same, but where designers spend their time is being redistributed.

Jenny Wen, head of design for Claude at Anthropic (formerly at Figma), on Lenny’s Podcast:

This design process that designers have been taught, we sort of treat it as gospel. That’s basically dead. I think it was sort of dying before the age of AI, but given now that engineers can go off and spin off their seven Claudes, I think as designers, we really have to let go of that process.

It’s a strong headline. But Wen then describes her actual day-to-day, and it sounds familiar:

We are still prototyping stuff. I’m still mocking stuff up. I think it’s just I have a wider set of tools now, and I think the proportion of time I spend doing each thing just has changed.

So the process isn’t dead. The proportions shifted. Wen breaks it down:

A few years ago, 60 to 70% of it was mocking and prototyping, but now I feel the mocking up part of it is 30 to 40%. And then there’s that other 30 to 40% there that is now jamming and pairing directly with engineers. And then there’s a slice of it that is now implementation as well.

What’s missing from that breakdown is user research and discovery. Wen mentions having a researcher on the team, mentions reading studies and feedback, but those activities don’t factor into the breakdown at all. For a team building products where, by Wen’s own admission, “you can’t mock up all the states” and “you actually discover use cases as you see people using them,” you’d think research would be eating a larger share of the pie, not disappearing from the conversation entirely. In my day-to-day, the designers on my team spend 30–40% on discovery and flows. Maybe 40–50% on mockups and prototypes. We’re basically already at her breakdown.

There’s also a context problem. Wen’s “ship fast, iterate publicly, build trust through speed” approach makes sense for Anthropic. They’re building greenfield AI products where nobody knows the right interaction patterns yet. The models are non-deterministic. Labeling something a “research preview” and iterating in public is the right call when the design space is that undefined.

That approach gets harder with a product that has an established install base. When you’re updating features that millions of people depend on, “ship it and iterate” has real costs. Sonos learned this. So did Figma, when it shipped UI3 and designers revolted. It’s worse still for essential services like CRMs or operational software. The slow, unglamorous work of discovery and user testing exists because breaking what already works is expensive. Wen has the advantage of building greenfield: there’s no install base to protect. Not every team has that luxury.

The interview gets more interesting when Wen turns to hiring. She describes three archetypes: the “block-shaped” strong generalist who’s 80th percentile across multiple skills, the deep T-shaped specialist who’s in the top 10% of their area, and then a third she says the industry is overlooking:

My last one is probably the one that I think we’re all overlooking, which is what I call the crack new grad. It’s just somebody who’s early career and feels, like, wise and experienced beyond their years, but is also just very humble and very eager to learn. I think this person is really interesting right now because I think most companies are just hiring senior talent, folks that have done things before, are super experienced, but given how much the roles are changing and what we’re expected to do is changing, I think having somebody who almost has a blank slate, and is just a really quick learner and is really eager to learn new tactics and stuff like that, and doesn’t have all these baked in processes and rituals in their mind, that’s super valuable.

Wen’s “crack new grad” maps closely to the strategies I wrote for entry-level designers: build things, get comfortable with AI tools, be what Josh Silverman calls the “dangerous generalist.” Someone without baked-in rituals who learns fast and ships. That a design leader at a frontier lab is actively looking for this profile matters, because most of the industry is still filtering for ten years of experience.

The design process is dead. Here’s what’s replacing it. | Jenny Wen (head of design at Claude)

Jenny Wen leads design for Claude at Anthropic. Prior to this, she was Director of Design at Figma, where she led the teams behind FigJam and Slides. Before that, she was a designer at Dropbox, Square, and Shopify.

youtube.com iconyoutube.com

The behavioral gap, the calcified companies, the startups shipping while incumbents argue about roadmap slides: there’s an economic force underneath all of it. Andy Coenen names it. He picks up from Matt Shumer’s “Something Big Is Happening” and builds the case that we’re living through a Software Industrial Revolution, where the cost of producing software collapses the way textiles did in the 18th century.

His thesis on what survives the cost collapse:

Because while the act of building software will fundamentally change, software engineering has never really been about producing code. It’s about understanding and modeling domains, managing complexity (especially over time), and the dynamic interplay between software and the real world as the system evolves. And while the ability to produce code by hand is rapidly becoming irrelevant, the core skills of software engineering will only become more important as we radically scale up the amount of software in the world.

Replace “software engineering” with “product design” and “producing code” with “producing mockups” and you have the argument I made in Product Design Is Changing. The artifact was never the job. The judgment was.

Coenen again, on what abundance looks like in practice:

My friend, Dr. Steve Blum, is a brilliant cancer researcher. Steve’s work deals with massive amounts of data, and analyzing that data is a major bottleneck. But writing software to do so is extremely difficult, and there’s no world where Steve’s limited attention ought to be spent on python venv management.

The Software Industrial Revolution means that Dr. Blum and thousands of his colleagues have all, suddenly, almost magically, been given massive new leverage via the ability to conjure up almost any tool imaginable, on demand. This is like giving every cancer researcher in the world a team of world-class software engineers on staff overnight, for less than the price of Netflix. Frankly, I think this is nothing short of miraculous.

Now do that thought experiment for design. Every small business owner who needs a custom tool, every nonprofit that can’t afford a design team. The Industrial Revolution didn’t just make cloth cheap. It made good cloth cheap. That’s the part designers should be paying attention to.

Isometric pixel-art tech campus with factories, conveyor belts, data servers, robots, wind turbines and workers.

The Software Industrial Revolution

Late 2025 marked a true inflection point in the history of AI. Between increased frontier model capabilities and the maturation of agentic harnesses, AI coding agents just _clicked_. And just like that, it just works.

cannoneyed.com iconcannoneyed.com

Darragh Curran’s 2× goal reads like a halftime speech. We can do this. The tools are here. The gap is behavioral. Double your output in twelve months.

Claire Vo wrote the post-game report:

If AI adoption had 7 stages of grief, almost all of you would be in denial. No matter how many AI memos your CEO sends, the amount of Claude that’s being Coded, the chatbots in app and the evals in data—I’m here to tell you: you’re not competing. In fact, you probably can’t anymore.

Vo’s target is the company that thinks it’s adapting: AI features shipped, internal power users, a natural-language interface named after a gem. She’s not buying it:

While they try on the bows and ribbons of an AI-native team, they ignore the fact that their bones are old and the company has calcified. For the most part: sales still sells the same and marketing is still talking about channels and CAC and product says “prioritize” and eng says “capacity” and the board is endlessly asking either about Q1 perf and Q2 projections or the ever-elusive “increase in product velocity.”

“Bows and ribbons” versus “bones.” That’s the whole post in one sentence.

I have some sympathy for the incumbents, though. Vo’s startup-swagger framing undersells how much gravitational pull a $100M business carries. Enterprise contracts, compliance obligations, a customer base that didn’t sign up for a pivot. The companies she’s diagnosing aren’t stupid. They’re heavy. And heavy things don’t accelerate the same way light things do, even when both see the cliff.

None of that makes her wrong. It just means even the companies that want to change are fighting physics. But they’ll have to figure it out sooner rather than later.

You’ve been kicked out of the arena, you just don’t know it yet

No matter how many AI memos your CEO sends, the amount of Claude that’s being Coded, the chatbots in app and the evals in data--I’m here to tell you: you’re not competing. In fact, you probably can’t anymore.

x.com iconx.com