
116 posts tagged with “process”

The first time I wrote about Jenny Wen, I pushed back. She said the design process was dead, and I argued the proportions had shifted but the process itself was intact. I also noted a context problem: her “ship fast, iterate publicly” approach makes sense for greenfield AI products at Anthropic but gets harder with established install bases.

Wen has been making the rounds, and in a new interview, I'm finding a lot to nod along to.

Jenny Wen, speaking on Tommy Geoco’s State of Play:

Often design needs to follow what the model is capable of and design from there, as opposed to starting from a design vision first. I think that can feel tough as a designer because you’re like, oh, I want to be design-led, we should be designing it first and then the technology should follow. But I think that’s just the reality of working at a research lab where the technology is emergent and you have to sort of decide what to do with it.

“Design follows the model” is an interesting phrase from a design leader. It inverts the dogma that design should lead and engineering should follow. But Wen isn’t being defeatist. She’s describing a practical reality at a leading AI lab where the models’ capabilities are changing faster than any roadmap can account for.

This shows up concretely in how her team works:

The big thing is designers are implementing code, through using Claude Code. That has been the biggest difference from working at Anthropic versus back when I worked at Figma. […] Even today, we were reporting some bugs and some quality issues, and one of the designers was like, “Cool, let me just fix them.” And that was cool to just not have to tag an engineer for them to do anything.

A designer casually fixing production bugs without tagging an engineer. Just another Tuesday at Anthropic.

Geoco’s summary of Wen’s argument crystallizes something we’ve all been thinking quietly about:

She said, having taste versus being able to execute are two completely different things. They’re usually bundled together, but they don’t have to be. And in a world where AI can increasingly execute, the question becomes, and it’s kind of uncomfortable, do you actually have good taste or are you just pushing pixels around?

That’s the thread tying all of this together. When designers are closer to the product, fixing bugs in production, prototyping against the live model, the judgment they’re applying isn’t visual. It’s product sense: knowing which of those 12 options is worth shipping, which edge case will break trust, when the model’s output is good enough for real users. That’s the taste Wen is describing, and it has very little to do with pixels.

A lot of designers have been coasting on execution skills that felt like taste. They debate corner radii and label centering in buttons via amateur-vs-pro designer memes. Who cares! AI is about to make the difference visible.

The New Era of UX Designers

Jenny Wen led design on FigJam, one of the most playful tools to hit design in a decade. Now she’s at Anthropic designing Claude. Not just the model, but the product that millions use daily.

youtube.com

When I was a younger designer, I always started with a pen and sketchbook. Sketch first, think with your hands. Now I write first to understand the problem space, then sketch. The images come after the words.

Elizabeth Goodspeed, speaking on Nicola Hamilton’s DesignThinkers podcast, takes this further than I ever would—she can barely picture images at all:

I am far more towards aphantasia. I have a very limited view of things in my mind. I think the analogy I use is it’s looking at an apple in a dark room and the lights are turning on and off and I’m wearing sunglasses and also the apple’s moving.

Her ideas don’t start as images. They start as words:

My ideas are usually very conceptual verbal, not even sentences. I guess I’m a robot—I don’t have an inner voice either. It’s just a pure void concept up there.

That might explain why Goodspeed is one of the sharpest design writers working. When you can’t conjure images internally, language becomes your primary tool for developing ideas. The archives and ephemera she’s known for aren’t aesthetic mood boards—they’re external memory for a mind that processes concepts before forms.

Goodspeed on the myth of the visually inspired designer:

That to me is damaging to creatives because it has this idea that we’re this noble savage where these images just move through us and we see everything in this Willy Wonka kind of way. In reality, I think it’s a process just like any other making process, whether that’s a carpenter or writer or anything else. It actually, I think at its best, is methodical and not just this inspired bolt of lightning.

The best design work starts with a concept, not a visual. Goodspeed just happens to have a neurological reason for working that way. The rest of us had to learn it. Worth listening to the full conversation—she also covers teaching, thesis panic, and why she calls her own work “graphic design fan art.”

RGD DesignThinkers Podcast episode 041 cover featuring Elizabeth Goodspeed, with a green-tinted portrait of a woman with dark curly hair and bangs.

DesignThinkers: Elizabeth Goodspeed

Elizabeth Goodspeed discusses how research, design history, and close attention to visual culture can help creatives develop deeper, more original work beyond trends.

printmag.com

Gui Seiz designs at Figma. His team uses Claude Code to bridge design and code. And he still reaches for the canvas when precision matters.

Seiz, speaking on Claire Vo’s How I AI podcast:

I don’t think we’re there yet in general with these code tools in terms of the precision editing that you want to do. […] I think still the gold standard for me is just being able to drag stuff around. And you can do a lot with a click that would take you a hundred words to write and to really precisely nail. No one wants to prompt for the exact hex code or the shade of yellow and that kind of stuff. That’s just easier to just quickly do and directly manipulate.

Seiz isn’t anti-AI. His team pulls production code into Figma via MCP, edits it visually, and pushes it back to the codebase. He’s bullish on what that does to the old workflow:

It’s definitely changed our workflows in a way that it’s really blown up what a workflow even is. Before, for the majority of our careers, we’ve had a very linear, agreed-upon workflow where you increase fidelity as you go on. Because it’s really expensive to work in code, and it’s really cheap just to trade ideas and sketch them out. But AI basically collapsed that, and it’s just as cheap to riff in code as it is to riff in design.

The cost of exploration collapsed. The need for direct manipulation didn’t. Both can be true.

How Figma engineers sync designs with Claude Code and Codex

Most teams are still passing static design files back and forth, and most Figma files are already out of date by the time they reach engineering. Gui Seiz (designer) and Alex Kern (engineer) from Figma walk through the exact workflow their team uses to bridge that gap with AI, live onscreen. They…

youtube.com

Sarah Gibbons and Huei-Hsin Wang, writing for Nielsen Norman Group:

What looks like “skipping the process” is just compressing it — running faster through the stages and using experience as a guide. […] What gets called “intuition” is really process, compressed and internalized through years of doing the work. The intuition designers trust was built by the very process they dismiss.

Gibbons and Wang on what comes after you stop pretending you’re not using one:

The real skill in modern design is not the ability to abandon process — it’s process literacy: picking the right approach and tool for the problem. Know which process fits the job and understand the risks of not following it. Better yet, don’t claim you’re not using a process if you’re just applying it differently.

The article responds directly to Jenny Wen’s interview. Wen’s advice works because she’s a senior designer inside a well-resourced AI company with a strong design culture. But we only hear about the wins. The solution-first prototypes that went nowhere, the features that shipped and saw no adoption, don’t make it into any public interviews. Most teams don’t have Wen’s conditions. And even inside teams that do, the advice assumes seniority. Junior designers haven’t accumulated the experience that makes compression possible. They’re being told to skip a step they haven’t taken yet.

Two overlapping diamond shapes in purple and violet with dashed outlines illustrate compression, alongside the title "Design Process Isn't Dead, It's Compressed" from NN/G.

Design Process Isn’t Dead, It’s Compressed

As AI speeds up design work, the argument to “throw out the process” misrepresents how experienced designers work.

nngroup.com

The Sonos app disaster taught me something about roadmaps. Leadership kept adding initiatives—Sonos Radio, the Ace headphones—without ever naming what those additions displaced. QA got squeezed. Stability testing got cut. The designers who warned them were overruled. No leader said out loud what was being sacrificed to make room.

Yusuf Aytas names exactly this failure:

People like to talk about priorities as if the main problem is choosing what matters. In practice, the deterministic factor is capacity. Team capacity. System capacity. The share you lose to maintenance, interruptions, coordination, and keeping the machine fit to run. Ignoring these physical limits turns an ambitious roadmap into a collective illusion.

“Collective illusion.” That’s the right name for it. Aytas on where the dishonesty starts:

A new customer request appears. Leadership wants a visible bet. Sales needs something for a deal. Everyone talks about importance. Almost nobody says what gets pushed out. That is the real decision. They have only added pressure and left the team to absorb the contradiction later.

Aytas builds the whole piece around a carpentry metaphor—one saw, limited operators, timber that needs oiling and adjustment before it can be cut. Software hides the constraint better, but the physics are the same. There’s more in the piece on shaping work before it competes for capacity, using visible investment buckets, and why reallocation is never free.

A green manual press machine surrounded by bulging white sacks inside a rustic mud-walled storage shed with a corrugated metal roof.

Capacity Is the Roadmap

Most roadmap problems are capacity problems. Make investment buckets visible, budget interrupts, and force trade-offs into the open.

yusufaytas.com

David Hoang, writing for Proof of Concept, proposes a squad model for tackling a company’s hardest, most ambiguous problems:

The squad: a forward deployed engineer, a forward deployed designer, and a researcher. Three people. That’s it. They operate like a startup-within-the-company, deployed against a specific, ambiguous problem. […] This is a product discovery team with teeth — they don’t just produce insights and hand them off. They produce working prototypes and validated direction. […] Three people don’t need standups, retros, or Jira boards. They need a shared problem and a whiteboard.

No PM. The shared problem replaces the roadmap, and a researcher replaces the product manager. Hoang borrows the concept from Palantir’s Forward Deployed Engineers and extends it to design. His argument: AI tools have given designers enough technical leverage to prototype at engineering speed, so the designer who finds the problem can build the first cut of the solution.

A three-person team with AI tools in 2026 can cover the ground that used to require a ten-person cross-functional team. That’s the direct result of collapsing the build cost of exploration.

Hoang argues that the rotation model matters as much as the squad composition. Four to eight weeks, then disband. The team doesn’t calcify into a feature factory. Designers rotate through the company’s hardest problems instead of sitting on the same product team filing tickets for years.

My counter would be that designers sitting in the same problem space gain deeper knowledge and context. Rotation could be counterproductive if not handled deliberately.

Hand-drawn Venn diagram showing three overlapping circles labeled Researcher, Design Engineer, and GTM, with the center intersection labeled "Forward Deployed Designer."

Forward deployed designer

In the early 2010s, Palantir coined a role that didn’t exist before: the Forward Deployed Software Engineer. These weren’t engineers building features on a roadmap. They were engineers embedded directly at client companies — sitting with analysts, operators, and decision-makers — to discover the problem and build the solution in the same motion. The role spread. Databricks, Scale AI, and OpenAI adopted variations.

proofofconcept.pub

I’ve argued that design tools should be canvas-first, not chatbox-first. Jeff, writing in Abduzeedo, makes the case for the opposite:

Designers have always borrowed from developers. Version control, component systems, token-based design — these ideas crossed the aisle from engineering and reshaped how visual work gets done. Vibe designing follows the same logic. Instead of opening Figma and reaching for a drag-and-drop panel, designers drop into the terminal. They prompt an AI model directly from the CLI, pipe the output into a file, and iterate without ever touching a mouse.

He isn’t theorizing. He published this article using browser automation and AI, with minimal manual clicking.

I don’t think the answer is CLI or canvas. It’s both. Designers are visual thinkers—that’s the cognitive foundation of the discipline, not a limitation to engineer away. Going fully terminal assumes we can be retrained to work without seeing what we’re making, or that the profession will attract people with entirely different skills.

What does look right is the plumbing underneath. Jeff on Paper.design’s MCP integration:

Its canvas is built natively on web standards — HTML and CSS — which means AI agents working through Paper’s MCP server can read and write design files directly. Tools like get_screenshot, get_jsx, write_html, and update_styles give Claude Code or Cursor direct read-write access to the design canvas.

HyperCard figured this out in 1987: direct manipulation on top of a scripting layer. The tools are finally catching up, with AI as the scripting engine.
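For the curious, the plumbing here is unglamorous: MCP tool invocations are JSON-RPC 2.0 messages. Here is a minimal sketch of the request a client would send to call a tool like get_jsx; the tool name comes from the article, but the node_id argument is my own illustration, not Paper.design’s actual schema.

```python
import json

def mcp_tool_call(tool: str, arguments: dict, request_id: int = 1) -> str:
    """Build the JSON-RPC 2.0 request an MCP client sends to invoke a tool.

    The envelope (jsonrpc/id/method/params with name and arguments) follows
    the Model Context Protocol spec; the argument payload is illustrative.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Ask the (hypothetical) canvas server for the JSX behind one frame.
msg = mcp_tool_call("get_jsx", {"node_id": "frame-1"})
```

The server replies with a matching JSON-RPC result carrying the tool output, which is what lets Claude Code or Cursor round-trip the canvas.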

VS Code editor with a browser preview showing the "Abduzeedo Editor" app, displaying a portrait photo with a VHS glitch shader effect applied.

Vibe Designing with Bash Access

Vibe designing is the design equivalent of vibe coding — where bash scripts, AI tools, and CLI commands are finally replacing traditional GUI-only tools.

abduzeedo.com

Intercom’s design team published numbers that show what happens when agents take over the build. John Moriarty, writing for Fin Ideas:

At Intercom, how we design and build software is unrecognizable from 12 months ago. Our engineering team is already at the point where 90% of pull requests are authored by Claude Code, part of an internal initiative called 2x, where the explicit goal is to double productivity using AI.

When 90% of your pull requests are AI-authored, the designer’s job changes whether you update the title or not. Moriarty’s framework for what comes next:

As the rate of execution accelerates, the role of design becomes sharper. Agents can generate artefacts, but they cannot decide which problems matter, set intent, resolve trade-offs, or hold the bar for quality. Our craft shifts with that reality. […] Agents will own the middle, the build. Design’s value concentrates at the edges, deciding what to build and then determining whether the output is good enough.

Design’s value lands at the edges, not the middle, and Intercom is already adapting their infrastructure to match. They’ve repositioned their design system as what Moriarty calls “agentic infrastructure”:

In a world where Agents write most of the code, design systems become the infrastructure that protects quality. Components, libraries and guidelines are the foundation that Agents and teams build on top of. The better the system, the better everything produced. Strong systems allow quality to scale without adding review overhead.

This tracks with the argument that design systems are becoming AI infrastructure—and Intercom is running it in production. The design system is the quality control layer that lets agents ship at speed without designers reviewing every screen.

Moriarty’s full piece covers how they’re restructuring day-to-day work—moving designers into code, treating Figma as a whiteboard, running structured AI fluency training. Worth a full read.

A paintbrush dissolves into digital code lines and circuitry, with the text "How we design when the code writes itself" and "Fin/ideas" logo.

How we design when the code writes itself

AI isn’t just increasing the speed of building, it’s changing how we work

ideas.fin.ai

Karo Zieminski spent nine days breaking Claude Cowork before writing this guide:

I’ve seen enough of shallow tutorials that simply rephrase the official docs to know I wanted to do something different. So I rebuilt some of my workflows from scratch, tracked what failed, measured what saved time, and mapped 56 practical tips into the resource I wish existed when I started.

I appreciate her methodical breakdown of the app, especially on when to use which flavor of Claude, which, TBH, has been a point of confusion for me.

Comparison table of Claude Chat, Cowork, and Code modes across six aspects: interface, best for, output, sub-agents, file access, and target user.

Zieminski’s nice breakdown of the differences between Claude Chat, Cowork, and Code.

The guide barely talks about prompting. It’s almost entirely about the pre-work: dedicated folder structures, global instructions via CLAUDE.md, chunked skills, delegation patterns that define end-states instead of steps. The distinction Karo draws between Chat skills and Cowork skills:

Skills in Chat were useful. Skills in Cowork are operational. They shape autonomous work. Your brand guidelines skill doesn’t just influence a reply. It governs every file Claude creates. Your writing guidelines skill doesn’t just shape a draft. It governs every article Claude writes autonomously.

Zieminski on skill architecture:

Chunk your skills instead of building one giant skill that tries to handle everything. I’ve tested both approaches and the results from one giant skill were much worse. For example, I use three separate writing skills instead of one: an overall voice skill, a corporate writing skill, and a newsletter writing skill. Each handles its own context. Claude never confuses who I’m writing for.
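To make the chunking concrete, here is a hypothetical folder layout in the Agent Skills style: one directory per skill, each with its own SKILL.md so each handles its own context. The three skill names mirror Zieminski’s example, but the files themselves are my illustration, not her actual setup.

```shell
# One directory per skill, each self-contained (illustrative names).
mkdir -p skills/voice skills/corporate-writing skills/newsletter
printf '%s\n' '# Overall voice'       > skills/voice/SKILL.md
printf '%s\n' '# Corporate writing'   > skills/corporate-writing/SKILL.md
printf '%s\n' '# Newsletter writing'  > skills/newsletter/SKILL.md
ls skills
```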

If you’re already using Claude Cowork, or just Cowork-curious, bookmark this one.

Cartoon girl with a ponytail standing on a stool, hammering a nail into a wall to hang a blank canvas or paper.

Claude Cowork Guide for Power Users: 50+ Tested Tips on Plugins, Skills, Sub-Agents, and Memory

What works, what breaks, and how to make Claude Cowork genuinely useful in 2026.

karozieminski.substack.com

In high school and through college, I worked at a desktop publishing service bureau in San Francisco. We had Macintosh computers and Linotronic imagesetters (super hi-res laser printers), not Linotype machines. Down the street, those traditional type shops still existed, but their business was already thinning out. Occasionally a graphic designer would send us type to set, and we’d do it in QuarkXPress. The fact that the job landed on our desk at all told you everything about where the industry was headed. The shop’s real business was pre-press and color separations, and eventually direct-to-plate eliminated even that.

Erika Flowers has been building out her Zero-Vector Design framework, and two of her pieces read as a pair. “Zero Stage to Orbit” on UX Magazine uses the rocket equation as a structural lens for the design-to-development pipeline. “The Last Typesetter” on her Substack uses the death of the typesetting profession to make the same argument from a different direction. Together they make the case that the design role, not the skill, is dissolving.

In “The Last Typesetter,” Flowers draws on Sennett:

When suddenly everyone could set type, the difference between good typography and bad typography went from an industry concern to a public epidemic. Bad kerning everywhere. Rivers running through justified text. Orphaned words dangling at the tops of columns like socks left on a clothesline. The people who understood typography were needed more than ever.

But not as typesetters.

Richard Sennett wrote about this in The Craftsman: the difference between a skill and the institutional container built around that skill. Containers look permanent until they are not. The skill outlives every container it has ever occupied.

That’s what happened at the service bureau. The skill—color, typography, print production—survived. The container—the shop, the role, the apprenticeship—did not.

In “Zero Stage to Orbit,” Flowers maps the pipeline onto rocket science:

Each stage in the traditional pipeline is designed to compensate for the limitations of the previous one. Research to inform design. Design to spec for developers. Specs to survive handoff. QA to catch what handoff broke. Retros to discuss why QA caught so much. Process to manage process.

Fuel to carry fuel. The modern development pipeline is not a solution. It is a multi-stage rocket. And most of the energy is going to overhead.

The overhead diagnosis is sharp, and the launch pad economy—consultancies, workflow tools, Agile coaching certifications—has a financial interest in keeping the rocket grounded.

Flowers addresses why the “unicorn” solution failed:

The design technologist did not fail because no one person can possess all the skills. The design technologist failed because no one can hold all the skills while still fighting gravity. They were still launching from the ground, still hauling the translation overhead, just with one person doing all the hauling instead of a team.

The problem was never the number of stages. It was the gravity well.

A product manager I work with recently told me he could think of a solution to a user need, but not a creative solution the way the designer on his team could. Specialization produces real expertise. The design technologist wasn’t wrong about the vision. They were wrong about the physics. AI changes the gravity, not the skills.

What separates both pieces from the standard “AI changes everything” take:

I am also uncertain here, also mid-journey, also discovering orbit’s real constraints in real time. My career, work, and livelihood are just as much at risk as everyone else’s. But that doesn’t discount the facts about the transition to new capabilities.

She’s out on a limb, reflecting a shift the entire industry can feel, without pretending she has the map. In “The Last Typesetter,” she puts it more bluntly: “Defend the role, or follow the skill.”

The skill will survive. It always has. But the transition is real, and not everyone can afford to be mid-journey. Truthfully, I’m uncertain too. The thing I’ve loved to do since the 7th grade, the thing that has been my identity for most of my life, is changing, possibly dissolving into something else.

Shiny metallic rocket launching diagonally upward against a blue sky, with the text "Design never had a process problem but a gravity one."

Zero Stage to Orbit

What if the pipeline was never broken — it was just never meant to get you to orbit? From handoff docs to sprint ceremonies, every tool and role we built was rational until Orbit became available. Find out what it really means to ship from there.

uxmag.com

After nine years of failed attempts at his typeface Nave, Jamie Clarke did something counterintuitive: he threw out the files and started drawing from memory.

Jamie Clarke, writing for I Love Typography:

I began again from scratch, drawing from memory rather than reworking the old outlines (a great tip from Gerry Leonidas), and the results were instantly better.

Memory is a taste filter. When you draw from memory, you keep only the ideas that have lodged deep enough to matter. The cruft—the half-committed decisions, the accumulated compromises—falls away. Clarke’s breakthrough came not from refining what he had, but from forgetting most of it.

The second breakthrough was lateral. While flipping through specimen books, he landed on something unrelated to his project:

One day, while flicking through some specimen books, I came across a specimen of Futura Black. It had little in common with what I was trying to do, but it sparked an idea for the capitals. Paul Renner’s stencil forms look as if they were carved out of solid blocks, which puts all the emphasis on the negative shapes. Thinking this way allowed me to keep the outer shapes formal while letting the internal cuts be more playful. That balance finally gave me the capital forms I had been searching for and brought the design back in line with my original aim.

That recognition only works after enough reps. Clarke spent a decade shipping other typefaces—Brim Narrow, Rig Shaded, Span—before he had the vocabulary to see what Futura Black was telling him.

A type specimen sheet displaying large-scale serif typeface characters set in multiple lines, annotated with handwritten red critique notes. The text reads pangram fragments ("nymph blitz quick vex / dwarf jogs an walts jo / b veaenexeneaeed a qu / ick frong ingk duniper"). Red ink annotations point out design issues including "imbalanced," "different," "too shy," "rounds seem wide," "still wobbles," "bigger," "n has thick shoulder / a doesn't," and "dark," with corresponding arrows and underlines marking specific letterforms.

How Not to Take 10 Years to Design a Typeface

I have often heard type designers talk about the many years they spend developing a typeface. I would listen with awe and think, “That must have been a real challenge. It must be exquisitely crafted and probably a little bit groundbreaking too.” So it feels slightly absurd to admit that […]

ilovetypography.com

If you’re a designer who feels the ground shifting but doesn’t know where to step, Erika Flowers built a free, structured curriculum for exactly that moment. Zero-Vector Design is her framework for collapsing the handoff between design and engineering, using AI agents as crew rather than replacements. The distinction she draws between this and vibe coding is worth internalizing:

You bring the systems thinking, the architecture, the years of knowing what good looks like. The AI extends your reach, not your judgment. Speed without intention is just faster failure. Speed with intention is leverage.

Six levels, 60+ lessons, all free. Worth bookmarking.

Zero-Vector Design brand card on dark background with tagline "From intent to artifact, directly." and website zerovector.design

Zero-Vector Design

A design philosophy for the age of AI. No intermediary. No translation layer. No friction. From intent to artifact, directly.

zerovector.design

Most design teams treat the design system as the starting point. Open a new project, pull in the component library, start assembling. It’s efficient. It’s also, according to one designer, a trap.

David Hoang, writing for Proof of Concept:

I start without a design system. This is deliberate. Production-grade components carry assumptions—spacing, hierarchy, interaction patterns—that narrow the solution space before you’ve had a chance to explore it. If I’m proposing a feature, the design system is the right starting point. But in exploration mode, the system comes later. Sketches are for divergence; design systems are instruments of convergence.

Design systems exist to create consistency, not ideas. When you reach for them too early, you may be converging before you’ve diverged.

Hoang’s workflow inverts the order: sketch unconstrained in code, dial up technical fidelity first, bring the design system in only after you’ve found directions worth pursuing. LLMs make that final step nearly free:

The design system isn’t a starting point—it’s a finishing move. You sketch unconstrained to explore the problem space, then snap your best ideas onto the system’s rails to see if they hold up. The LLM makes that snap nearly instant, so I can run the full loop—sketch, evaluate, systemize—multiple times in a single session. Ideas that break under the system’s constraints get caught early. Ideas that survive get stronger.

The designer makes every structural decision. The LLM handles the re-skinning. Production work, not judgment work.

And ideas that break the system’s constraints surface gaps worth contributing back. That’s the part most design system teams miss. The system should learn from the exploration it constrains, not just gate it.

Hand-drawn diagram showing multiple "Code slides" feeding into a central "Draw tool" grid, which outputs to a "Solution" box on the right.

Sketching with code

Issue 286: Treating code like a pencil, not a blueprint

proofofconcept.pub

Director. Orchestrator. Architect. Different words for the same shift. Stop making things one at a time. Start building systems that make things.

Weber Wong, writing for Every, gives this shift a useful name: artifact thinking.

I call this mental model artifact thinking: creative work that produces discrete outputs, one at a time, each beginning from scratch. Traditional tools like Photoshop and Illustrator, which demand endless hand-tuned adjustments and manual refinements to produce a single polished image, trap you in this way of working. Midjourney and DALL-E feel like liberation because they generate outputs so quickly, and you can communicate with them in the same language you speak every day. But visual prompts, too, are one-time, disposable things. You can’t hand them to a colleague and be confident you will get the same result. The magic of near-instantaneous generation masks the fact that you are still in artifact thinking.

That last line is the sharp one. Adopting Midjourney doesn’t mean you’ve left artifact thinking. You’re still producing one-offs—just faster ones. The orchestrator gap isn’t about which tool you use. It’s about whether you’re building systems or pressing buttons.

Wong’s proposed fix is node-based visual programming—workflows you can inspect, modify, and share. He knows it sounds like he’s asking designers to become engineers:

I understand the resistance to this idea. Some people hear “visual programming” and think we’re trying to turn designers into engineers. That’s backwards. We’re trying to give creative professionals the power that programmers have always had: the ability to build systems that work while you sleep, that can be stored as multiple versions and shared and improved, and that take what people already know how to do and make it something anyone can run.

I’ve been asking for canvas-first tools, not chatbox-first ones. Wong is right that chat alone isn’t enough for professional creative work. “Artifact thinking” is a concept worth keeping—regardless of whether Flora is the tool that finally kills it.

Person wearing a "node-pilled" cap typing at a keyboard with red strings tangled around their fingers, overlaid with the word "THESIS."

Creative Work Is About to Look a Lot More Like Programming

Flora’s Weber Wong on why creative professionals need to stop thinking in artifacts and start thinking in systems

every.to iconevery.to

Three people at three different companies, same conclusion. Former Apple designer Jason Yuan calls intelligence “the new materiality” in the previously linked Fast Company piece. Brian Lovin says Notion’s design team can’t design AI products in Figma because the material doesn’t live there. Jenny Blackburn, Google’s VP of UX for Gemini, puts it most directly.

Eli Woolery and Aarron Walter, writing for Design Better, synthesized interviews they’ve done with Google design leaders across YouTube, Search, and Gemini. Blackburn’s framing:

The model is the material that we are designing with, and the more you understand the material, the more you can innovate with it.

You can only direct as well as you understand. But this material behaves unlike anything designers have worked with before. Blackburn on the risk of over-constraining it:

One of the challenges is that these models are so capable. In many ways, they’re actually more capable than you even expect as a designer, and so the risk is that you actually add too much UI that limits the value that the model can provide that would come if you just facilitated a direct conversation between the user and the model.

The Gemini team’s response is smart. When users wrote too-short prompts for custom Gems, they didn’t add a tutorial. They added a “magic wand” that expands the prompt but doesn’t submit it. The user reviews, edits, learns. Teaching without lecturing.

Every previous design material—pixels, paper, aluminum—is deterministic. You shape it, it stays shaped. AI models are probabilistic. Same prompt, different results. Understanding this material isn’t like understanding clay. It’s like understanding weather.

The piece also covers YouTube’s disciplined “bundles” strategy and Search’s AI reimagining. Worth the full read.

Illustrated map of scattered islands in a blue ocean, each hosting different ecosystems and creatures including dinosaurs, large mammals, birds, and desert cacti.

The Roundup (in depth): Google’s 3 design strategies shaping their most popular products

We go deep into YouTube, Gemini, and Search design strategy

designbetterpodcast.com icondesignbetterpodcast.com

I believe in the shokunin mentality. Obsessive iteration, pursuing mastery across decades. But the designers building at the frontier right now are telling a different story.

Mark Wilson, writing for Fast Company, visited Cursor, Anthropic, OpenAI, and Krea in San Francisco. Former Apple designer Jason Yuan, now building his own AI startup:

“You can’t do the old school Apple thing of like, create lickable craft and interface. You can’t because, by the time you’ve done the best interface for ChatGPT 3, you’re on GPT 6.”

That stings a little. The Apple tradition assumes the target holds still long enough to polish. When the platform shifts every few months, polish is a liability.

Anthropic’s head of design Joel Lewenstein is making the same bet:

“Things are moving so fast that we just have to experiment faster. Convergence is hard. Because you have to figure out what’s shared. You have to build that shared path. You have all of the fringe things that people loved on these other systems. And there’s too much changing too quickly.”

Anthropic built Cowork in five or ten days (depending on who you ask). Ship first, converge later.

What’s telling is who’s embracing this. Yuan and Abs Chowdhury—both former Apple designers, trained in the tradition of lickable craft—have each gone all-in on vibecoding at their startups. Chowdhury transferred rough designs from Photoshop(!) straight into AI code tools. Yuan built his first product mostly alongside AI:

“There’s a new reason to raise lots of money, which is compute. If you have lots of conviction, and you know exactly what you want, like, why would you hire another 20 other people right now to tell you what you’re doing? It’s a coordination cost.”

Yuan calls this the best time to be an “auteur.” The designer who doesn’t wait for engineering to realize the vision, who directs AI the way a film director directs a crew. It’s the orchestrator gap playing out in real time.

I’m not ready to abandon the shokunin mentality. But I’m starting to think the object of obsession needs to shift, from polishing pixels to refining judgment. The craft isn’t in the surface anymore. It’s in knowing what to build.

Wilson’s full piece covers a dozen people across the industry and is worth reading end to end.

Abstract illustration of a chat bubble filled with layered geometric shapes and AI sparkle icons in yellow, blue, and red on a dark background.

‘We just have to experiment faster’: AI’s changed design forever. Now what?

Designers are now coders—or better be. Your interface is a moat—or irrelevant. Inside the dizzying chaos of how AI is upending the design profession, starring its high priests at Anthropic, OpenAI, Cursor, Krea, and more.

fastcompany.com iconfastcompany.com

Notion built a prototype playground for their designers. It’s a single Next.js repo with shared styles and slash commands for deployment. The infrastructure is solid. The adoption question is harder.

Brian Lovin, talking to Claire Vo on How I AI:

It’s still a Next.js app. It’s still React and TypeScript and Git and branches and it’s just a lot of concepts to throw at someone who maybe is used to only prototyping in Figma or they’re intimidated by a terminal or code. And so I’m trying to figure out like how do we make this thing more approachable? How do we make it easier to onboard but also not dumbed down, right? I want people to learn how to use computers. I want people to even subconsciously absorb the ideas of git and branching and pull requests and merging.

“Make it easier but not dumbed down” is the tension every team building AI design tooling is going to hit. Lovin wants designers to actually learn Git, not just have it abstracted away. That’s a bet on long-term capability over short-term adoption. If Notion, with its engineering culture and resources, is still working through this, the rest of the industry has a longer road than the demos suggest.

But Lovin makes a sharp case for why the effort is worth it, especially for AI product design:

I don’t think you can design a good chat experience in Figma. You can design what the chat input looks like. You could design a little chat bubble and a send button and a dropdown for model picker. I think all that’s fine in Figma, but what you can’t design in Figma is what it actually will feel like to use that thing. I probably should have said this at the very beginning, but the reason Prototype Playground existed is because when I started working on Notion AI, I was literally designing conversations in Figma — the user’s going to say this, and then the AI is going to say this, and then it’s going to work perfectly and create a page or a database. You mock these golden paths in Figma and then the engineers go and they build it. And it just doesn’t work that way, right? You send a message, the AI gets stuck, or asks a follow-up question, or does the wrong thing and you need to correct it.

This is the strongest argument I’ve heard for code-first prototyping of AI features. Static mocks enforce golden-path thinking. Real models surface the messy middle: the weird follow-ups, the latency that changes how an interaction feels, the error states you’d never think to mock up.

And yet:

I still use Figma. I probably still spend 60 to 70% of my time in Figma. There’s just certain things that you’re making that don’t need to be in the browser. They don’t need to be coded up. You can just look at it and be like, “Yeah, that’s roughly right. We should just ship that.”

So even the person who built the Prototype Playground still spends most of his time in Figma. Figma isn’t dying just yet, but its scope is narrowing. For AI features specifically, though, Lovin’s case is hard to argue with: you need the real model running to know if the design works.

The interview gets most interesting when Lovin describes his operating philosophy for AI agents and how to get them to run longer:

My philosophy on this has been anytime the AI asks you to do something, you should, before responding, try your best to see if you could teach the AI to answer that question for itself. […] So, for example, I’ve taught Claude, “Hey, check your work. One, you can run commands like eslint, right? And like look for actual TypeScript errors.” The second is you can give it access to MCP tools. […] Before installing this, Claude would say to you, “Hey, I’ve implemented your feature. Go take a look at it and let me know what you think.” And remember, our rule is anytime Claude tells you to do something? Ask if you can teach it to do that thing for itself. So, I don’t want to have to look at the browser every time to see if I did it correctly. So, instead, I teach Claude, “Actually, you should be the one to go and open the browser.”

Every interruption from the AI breaks your flow state. That’s orchestration in practice: building infrastructure that lets the AI handle its own quality checks so you, the designer, stay in the flow of deciding what to build and whether it’s right.
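As a sketch, the teaching Lovin describes could be written down as standing rules in a project’s `CLAUDE.md`. The wording and specific commands here are mine, not his; he describes the rules verbally in the interview:

```markdown
## Check your own work before reporting back

- Run `eslint` and the TypeScript compiler; fix any errors before
  saying the feature is done.
- Use the browser MCP tool to open the running app and verify the
  change visually yourself.
- Only ask the human a question after confirming you cannot answer
  it with the tools above.
```

The point of writing it down is that the rule survives the session: the next agent run starts with the same standards instead of re-asking.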

Lovin again:

You want your designs to encounter reality as early as possible. And if you imagine this gradient of like I’m scribbling on a napkin on one side to I’m shipping to production and showing customers on the other side, our goal as designers is to move up that gradient towards prod as quickly as possible. […] I just find that when you’re designing something in Figma and then you actually try it in the browser, in the browser you notice a ton of problems. All of a sudden you’re clicking things, you notice loading states, you notice “ah, that didn’t quite work on this screen size.”

Encounter reality as early as possible. That’s the whole argument in six words. There’s a lot more in this conversation, and it’s worth the full watch.

How Notion designers ship live prototypes in minutes | Brian Lovin (Product designer)

Brian Lovin is a designer at Notion AI who has transformed how the design team builds prototypes, by creating a shared code environment powered by Claude Code. Instead of designers working in isolated repositories or limited to static Figma designs, Brian built a collaborative “prototype…

youtube.com iconyoutube.com

AI tools made designers faster. The question nobody’s answering is whether their organizations can keep up.

Cameron Worboys, head of product design at Cash App, talking to Michael Riddering on Dive Club:

I think the biggest blockers across all of the tech industry in the next 2 years will not be the speed of building. It’s going to be the operational side and being able to move something from like we have built this thing. How does it move through the operational cogs of product development in order to like get it live to customers? So my view is like how do we set ourselves up for the new world? You have to make sure that your organization is capable at running at the same speed as the AI tools. And these AI tools move fucking fast.

The bottleneck migrated. Building isn’t the constraint anymore. Getting what you’ve built through approvals, reviews, compliance, and deployment is. Cash App’s response has been radical: they’ve flattened to three management layers (they call it “core plus three”), deleted design crits, and are pushing every designer to ship production code.

Worboys on what quality actually looks like at this speed:

The quality piece, there’s a misconception that it comes from a designer sitting in some cave for 3 months and pontificating about the future of software. It literally doesn’t. It comes from reps and the speed which you can be wrong and the speed that you can go again and experiment and experiment and experiment. And I think that’s what we’ve seen change, is the amount designers can produce has exponentially increased and the amount of like bureaucracy and layers you need to run an organization has changed a lot as well.

Quality through iteration, not pontification. That’s always been true, but when each iteration takes minutes instead of days, the gap between teams that ship and teams that sit in review becomes enormous.

Worboys on where this leads:

I believe one of the primary ways which you will create lock-in in the new world is creating apps that feel completely one of one. […] When you think about the future of software development and where it’s going with generative UI, there is nothing in the future that’s going to prevent us from creating these completely one of one experiences. So that’s what is top of mind for me at the moment. And I do think we will get there relatively quickly, that every Cash App does feel unique and completely designed around the person. And then from a business perspective, it creates this deeper, harder to quantify emotional connection with a product that is the same as like your wardrobe. Clothes are by and large like an expression of personal identity.

This is the most concrete product bet I’ve seen on generative UI. Not widgets inside a chat window. Entire apps shaped around the individual. I still think core app chrome should stay stable. But Worboys is betting that consumer fintech is where that line starts to blur.

Cameron Worboys - Inside an AI-native design org

Today’s episode with Cameron Worboys (https://x.com/camworboys) (Head of Product Design at Cash App) is an inside look at how an AI-native design org operates and the ways designers can thrive in this new world.

youtube.com iconyoutube.com
A red-crowned crane soaring over misty mountain waterfalls in a Japanese ink-wash style illustration with pink-blossomed trees and teal rocky cliffs.

Spec-Driven Development: It Looks Like Waterfall (And I Feel Fine)

We’ve been talking a lot about agentic engineering, how software is now getting built with AI. As I look to see how design can complement this new development paradigm, a newish methodology called spec-driven development caught my eye. The idea is straightforward: you write a detailed specification first, then AI agents generate the code from it. The specification becomes the source of truth, not the code.

My first reaction when I started reading about SDD was: wait, isn’t this just waterfall?

Seriously. You gather requirements. You write them down in a structured document. You hand that document to someone (or something) that builds to spec. That’s the waterfall pattern. We spent two decades running away from it, and now it’s back wearing a blue Patagonia vest and calling itself a methodology.

The design process isn’t dead. It’s changing. My belief is that the high-level steps are exactly the same, but where designers spend their time is being redistributed.

Jenny Wen, head of design for Claude at Anthropic (formerly at Figma), on Lenny’s Podcast:

This design process that designers have been taught, we sort of treat it as gospel. That’s basically dead. I think it was sort of dying before the age of AI, but given now that engineers can go off and spin off their seven Claudes, I think as designers, we really have to let go of that process.

It’s a strong headline. But Wen then describes her actual day-to-day, and it sounds familiar:

We are still prototyping stuff. I’m still mocking stuff up. I think it’s just I have a wider set of tools now, and I think the proportion of time I spend doing each thing just has changed.

So the process isn’t dead. The proportions shifted. Wen breaks it down:

A few years ago, 60 to 70% of it was mocking and prototyping, but now I feel the mocking up part of it is 30 to 40%. And then there’s that other 30 to 40% there that is now jamming and pairing directly with engineers. And then there’s a slice of it that is now implementation as well.

What’s missing from that breakdown is user research and discovery. Wen mentions having a researcher on the team, mentions reading studies and feedback, but those activities don’t factor into the breakdown at all. For a team building products where, by Wen’s own admission, “you can’t mock up all the states” and “you actually discover use cases as you see people using them,” you’d think research would be eating a larger share of the pie, not disappearing from the conversation entirely. In my day-to-day, the designers on my team spend 30–40% on discovery and flows. Maybe 40–50% on mockups and prototypes. We’re basically already at her breakdown.

There’s also a context problem. Wen’s “ship fast, iterate publicly, build trust through speed” approach makes sense for Anthropic. They’re building greenfield AI products where nobody knows the right interaction patterns yet. The models are non-deterministic. Labeling something a “research preview” and iterating in public is the right call when the design space is that undefined.

That approach gets harder with a product that has an established install base. When you’re updating features that millions of people depend on, “ship it and iterate” has real costs. Sonos learned this. Or if your product is mission-critical, as Figma learned when it shipped UI3 and designers revolted. Or worse, an essential service like a CRM or operational software. The slow, unglamorous work of discovery and user testing exists because breaking what already works is expensive. Wen has the advantage of building greenfield — there’s no install base to protect. Not every team has that luxury.

The interview gets more interesting when Wen turns to hiring. She describes three archetypes: the “block-shaped” strong generalist who’s 80th percentile across multiple skills, the deep T-shaped specialist who’s in the top 10% of their area, and then a third she says the industry is overlooking:

My last one is probably the one that I think we’re all overlooking, which is what I call the crack new grad. It’s just somebody who’s early career and feels, like, wise and experienced beyond their years, but is also just very humble and very eager to learn. I think this person is really interesting right now because I think most companies are just hiring senior talent, folks that have done things before, are super experienced, but given how much the roles are changing and what we’re expected to do is changing, I think having somebody who almost has a blank slate, and is just a really quick learner and is really eager to learn new tactics and stuff like that, and doesn’t have all these baked in processes and rituals in their mind, that’s super valuable.

Wen’s “crack new grad” maps closely to the strategies I wrote for entry-level designers: build things, get comfortable with AI tools, be what Josh Silverman calls the “dangerous generalist.” Someone without baked-in rituals who learns fast and ships. That a design leader at a frontier lab is actively looking for this profile matters, because most of the industry is still filtering for ten years of experience.

The design process is dead. Here’s what’s replacing it. | Jenny Wen (head of design at Claude)

Jenny Wen leads design for Claude at Anthropic. Prior to this, she was Director of Design at Figma, where she led the teams behind FigJam and Slides. Before that, she was a designer at Dropbox, Square, and Shopify.

youtube.com iconyoutube.com

The instinct when working with AI agents is to write more. More instructions, more constraints. Turns out that’s exactly wrong.

Addy Osmani, writing for O’Reilly, digs into the research:

Research has confirmed what many devs anecdotally saw: as you pile on more instructions or data into the prompt, the model’s performance in adhering to each one drops significantly. One study dubbed this the “curse of instructions”, showing that even GPT-4 and Claude struggle when asked to satisfy many requirements simultaneously. In practical terms, if you present 10 bullet points of detailed rules, the AI might obey the first few and start overlooking others.

So the answer is a smarter spec, not a longer one. Osmani pulls from GitHub’s analysis of over 2,500 agent configuration files and finds that effective specs cover six areas: commands, testing, project structure, code style, git workflow, and boundaries.

The boundaries piece is worth lingering on. Osmani recommends a three-tier system:

Always do: Actions the agent should take without asking. “Always run tests before commits.” “Always follow the naming conventions in the style guide.”

Ask first: Actions that require human approval. “Ask before modifying database schemas.” “Ask before adding new dependencies.”

Never do: Hard stops. “Never commit secrets or API keys.” “Never edit node_modules/ or vendor/.” “Never remove a failing test without explicit approval.”

That framing—always, ask first, never—gives the AI a decision framework instead of a wall of instructions. It maps to how you’d manage a person, too. Osmani quotes Simon Willison on the comparison: getting good results from a coding agent feels “uncomfortably close to managing a human intern.”
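As a sketch, the three tiers might live in an agent configuration file such as a `CLAUDE.md` or `AGENTS.md`. The file name and the specific rules below are illustrative, drawn from Osmani’s examples rather than any particular repo:

```markdown
## Boundaries

### Always do (no need to ask)
- Run the test suite before every commit.
- Follow the naming conventions in the style guide.

### Ask first (require human approval)
- Modifying database schemas or migrations.
- Adding or upgrading dependencies.

### Never do (hard stops)
- Committing secrets, API keys, or `.env` files.
- Editing `node_modules/` or `vendor/`.
- Removing a failing test without explicit approval.
```

Three short tiers beat ten bullet points of prose rules: the agent gets a decision framework it can apply to situations the spec never anticipated.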

Klaassen’s compound engineering is one version of this. Osmani’s spec framework is another. The principle underneath both: teach fewer things well rather than everything at once.

Two humanoid robots inspect a giant iridescent aqua scroll unrolling from a metal roller in a sunlit hall.

How to Write a Good Spec for AI Agents

This post first appeared on Addy Osmani’s Elevate Substack newsletter and is being republished here with the author’s permission.TL;DR: Aim for a clear

oreilly.com iconoreilly.com

Most people using AI to write code are still reviewing every line. Kieran Klaassen stopped doing that months ago.

Kieran Klaassen, CTO of Cora at Every, on Peter Yang’s channel. He calls his approach compound engineering:

AI can learn. If you invest time to have the AI learn what you like and learn what it does wrong, it won’t do it the next time. So that’s the seed for compound engineering. There are four steps: planning first, working—which is just doing the work from the plan—then assessing and reviewing, making sure the work that’s done is correct, and then taking the learnings from that process and codifying them. So the next time you create a plan, it’s there. It learned.

Plan, build, review, codify. Each cycle teaches the AI something it keeps. You hit a problem, you capture the fix, and that fix lives in your repo as documentation the AI reads next time. The learnings compound across sessions.
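A minimal sketch of what a codified learning could look like as repo documentation the agent reads on its next planning pass. The file path and entry format are hypothetical, not from Klaassen’s actual setup:

```markdown
<!-- docs/learnings.md — loaded by the agent during the planning step -->

## 2026-02: Background jobs
- Mistake: sent emails directly from the controller; duplicates on retry.
- Fix: all email sends go through a job with an idempotency key.
- Rule: never call the mailer outside a background job.
```

Each entry is cheap to write at review time, and because it lives in the repo, it compounds: every future plan starts from the accumulated rules instead of a blank slate.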

The result: Klaassen says 100% of his code is now AI-written. He hasn’t opened Cursor in three months. But he’s not winging it. On what that trust actually requires:

It’s a little bit more of like, I trust you. I don’t need to look at all the code. I don’t need to read all the code, but I have systems and ways I work with AI that I trust, and through that I can let AI do things.

That trust is earned through the loop. Mistakes get caught, codified, and they don’t happen twice. Klaassen compares it to onboarding:

It’s similar to onboarding a person on your team. You need to get them on board, get them used to your code. But once that is done, you can let them go and really just run with it.

How to Make Claude Code Better Every Time You Use It (50 Min Tutorial) | Kieran Klaassen

Kieran is my favorite Claude Code power user and teacher. In our interview, he walked through his Compound Engineering system that makes Claude Code better every time you use it. This same system has been embraced by the Claude Code team and others. Kieran is like Morpheus introducing me to the matrix, so don’t miss this episode 🙂

youtube.com iconyoutube.com

Why isn’t AI showing up in productivity data? Chetan Dube offers one answer in Fast Company: most companies are bolting AI onto existing roles instead of redesigning the work.

Most managers are using AI the same way they use any productivity tool: to move faster. It summarizes meetings, drafts responses, and clears small tasks off the plate. That helps, but it misses the real shift. The real change begins when AI stops assisting and starts acting. When systems resolve issues, trigger workflows, and make routine decisions without human involvement, the work itself changes. And when the work changes, the job has to change too.

McKinsey data backs this up—78% of organizations now use AI in at least one function, “though some are still applying it on top of existing roles rather than redesigning work around it.” That’s the Solow paradox in one sentence.

Dube’s lost luggage example is a good one:

Generative AI can explain what steps to take to recover a lost bag. Agentic AI aims to actually find the bag, reroute it, and deliver it. The person that was working in lost luggage, doing these easily automated tasks, can now be freed to become more of a concierge for these disgruntled passengers.

The job goes from processing to judgment. And if leaders don’t get ahead of it:

If leaders don’t redesign the job intentionally, it will be redesigned for them, by the technology, by urgent failures, and by the slow erosion of clarity inside their teams.

That slow erosion of clarity is already visible: people less and less sure what they’re supposed to be doing, because the tasks they were hired for are quietly handled by a system nobody put in charge.

Four-person open-plan desk with monitors, keyboards, office chairs and potted plants on a white oval amid colorful isometric cubes

If AI is doing the work, leaders need to redesign jobs

AI is taking a lot of work off of employees’ plates, but that doesn’t mean work has vanished. Now, there’s different work, and leaders need to craft jobs to match this new reality.

fastcompany.com iconfastcompany.com

The software development process has accumulated decades of ceremony. Boris Tane argues AI agents are collapsing the whole thing.

On engineers who started their careers after Cursor:

They don’t know what the software development lifecycle is. They don’t know what’s DevOps or what’s an SRE. Not because they’re bad engineers. Because they never needed it. They’ve never sat through sprint planning. They’ve never estimated story points. They’ve never waited three days for a PR review.

I read that and thought about design. How much of our process is ceremony too? The Figma-to-developer handoff. The pixel-perfect QA pass. The design review where six people debate border radius. If an AI agent can generate working UI from a design system in three prompts—which I’ve done—a lot of what we treat as process is friction we’ve institutionalized.

Tane’s conclusion:

The quality of what you build with agents is directly proportional to the quality of context you give them. Not the process. Not the ceremony. The context.

For engineering, context means specs, tests, architectural constraints. For design, it means your design system—the component docs and the rules for how things fit together. If that context is thin, the agent produces garbage. If it’s thorough and machine-readable, the output lands close to production-ready.
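To make “machine-readable design system” concrete, here is a hedged sketch of what a component doc an agent could consume might contain. The component, values, and rules are invented for illustration:

```markdown
## Button

- Variants: primary, secondary, destructive. Do not invent new variants.
- Sizes: sm (32px), md (40px). Default to md.
- Spacing: 8px gap between adjacent buttons.
- Rule: never place two primary buttons in the same dialog.
```

Notice it reads as constraints, not inspiration. That is the difference between a design system an agent can execute against and a style guide a human interprets.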

Tane again:

Requirements aren’t a phase anymore. They’re a byproduct of iteration.

Same for mockups. When you can generate and evaluate working UI faster than you can annotate a Figma frame, the mockup stops being a deliverable and becomes a sketch you might skip entirely. The design system becomes the spec. Context engineering becomes the job.

The Software Development Lifecycle Is Dead — Feb 21, 2026; Boris Tane Blog


AI agents didn’t make the SDLC faster. They killed it.

boristane.com iconboristane.com

I’ve been arguing that the designer’s job is shifting from execution to orchestration—directing AI agents rather than pushing pixels. I made that case from the design side. Addy Osmani just made it from the engineering side, based on what he’s seeing.

Osmani draws a hard line between vibe coding and what he calls “agentic engineering.” On vibe coding:

Vibe coding means going with the vibes and not reviewing the code. That’s the defining characteristic. You prompt, you accept, you run it, you see if it works. If it doesn’t, you paste the error back and try again. You keep prompting. The human is a prompt DJ, not an engineer.

“Prompt DJ” is good. But Osmani’s description of the disciplined version is what caught my attention—it’s the same role I’ve been arguing designers need to grow into:

You’re orchestrating AI agents - coding assistants that can execute, test, and refine code - while you act as architect, reviewer, and decision-maker.

Osmani again:

AI didn’t cause the problem; skipping the design thinking did.

An engineer wrote that. The spec-first workflow Osmani describes is design process applied to code. Designers have been saying “define the problem before you jump to solutions” for decades. AI just made that advice load-bearing for engineers too.

The full piece goes deep on skill gaps, testing discipline, and evaluation frameworks—worth a complete read.

White serif text reading "Agentic Engineering" centered on a black background.

Agentic Engineering

Agentic Engineering is a disciplined approach to AI-assisted software development that emphasizes human oversight and engineering rigor, distinguishing it fr...

addyosmani.com iconaddyosmani.com