
127 posts tagged with “process”

14 min read
Pointillist-style painting of a formally dressed figure in a black top hat holding a glowing green laptop, surrounded by a crowd of early 20th-century people.

A Sunday Afternoon with Claude Design

It’s really hard to get momentum on a side project when you have a full-time job with lots of travel, an active blog, and a newsletter. But I had to recapture that momentum because this side project is important. It’s a website for my cousin’s preschool.

Walking into My Little Learning Tree is like stepping into pure warmth. Yes, yes, preschools are inherently fun environments, but the kids and the teachers there create a visceral energy that is simply special. I wanted to capture that specialness in a long-overdue website redesign project.

Looking at my in-progress design, something felt off. I had these long horizontal lines preceding the eyebrows—the small text above a heading that names the section—that didn’t feel right. First, they were straight. Second, the lines only occurred before the text, not also after. I clicked on the Comment button to enter Comment mode, then clicked on the eyebrow and prompted, “These lines aren’t playful enough. Let’s make them squiggles and have them before and after the eyebrow text.”

And then Claude Design did its thing.

“Taste is the scarce thing” has become shorthand for what designers still own in the AI era. I’ve written about it in the abstract more than once. Chris R Becker, writing for UX Collective, opens with an old Marshall McLuhan-era line—“we shape our tools and then our tools shape us”—and then shows how to keep doing the shaping.

Becker cites the Steve Jobs-attributed 10-80-10 rule:

Start away from any AI. Use the 10–80–10 rule. 10% away thinking, defining, establishing vision. 80% making use of AI to assist the vision. 10% away from AI critiquing, testing, and evaluating the solution.

The bookends are the work. Both 10% slots sit explicitly away from the model, which is another way of saying they’re the judgment layer. The first defines what good looks like before inviting AI in. The second evaluates what came out. AI collapses the cost of the 80%, which is the whole productivity story. But that collapse means the bookends are no longer preamble and postscript. They’re most of the job.

Becker gets at why the closing 10% matters:

The authority bestowed on institutions, educators, and SMEs (subject matter experts) is being absorbed by AI and spread thin like butter on toast. An AI appears to slather knowledge evenly, but the quality of the knowledge butter is deliberately made opaque.

AI output arrives looking uniformly authoritative, the same confident tone whether the underlying source is a peer-reviewed paper or a forum post from 2013. Provenance gets flattened. Without a prior standard to judge against, the designer reviewing output has nothing to push back on. That’s Becker’s larger point:

The irony, I suppose, is that Designers are, hopefully, trained not to be “yes men” but rather to ask hard questions, challenge the prevailing motivations of business over our users, and, most importantly, find the root cause of the problem, rather than just the surface reaction. AI, unfortunately, is not built to push back; it will not say… “I don’t know,” or “I think that is a bad idea,” or “what if you did this… instead,” or “I understand YOU (CEO) wants this feature, but the user research and ‘our users’ want something different.” AI is designed to serve, and in the hands of people in an organization who are looking for the least amount of pushback, it is a recipe for deep institutional implementation and, frankly, a lot of bad ideas, fast.

“A recipe for deep institutional implementation.” A sycophantic tool plus an organization that wants frictionless agreement equals speed in the wrong direction. The 10-80-10 rule is a personal discipline. What’s still unresolved is how teams build that discipline into the process before the wrong direction becomes the default.

Pen-and-ink illustration of a thoughtful man seated in a chair holding a hammer, with rows of large server racks filling a data center behind him.

We become what we behold

A discussion of AI + Design and our shifting roles.

uxdesign.cc

When generation gets cheap, craft becomes judgment. Raj Nandan Sharma, writing on his blog, puts it bluntly:

Before AI, mediocre work often reflected a lack of time, resources, or execution skill. Today mediocre work often means something else: the person stopped at the first acceptable draft. That is the economic shift AI introduces. It compresses the cost of first drafts, which means the value moves downstream… In other words, the scarce skill is not generation. It is refusal.

Refusal—knowing what to throw out and why—is what’s scarce in a world where anyone can generate ten competent drafts before lunch.

But Sharma doesn’t stop there. He warns that elevating taste alone can quietly corner humans into an end-of-pipeline selector role:

There is a strong version of the “taste matters” argument that quietly pushes humans into a narrow role. In that version, AI generates many outputs and the human stands at the end of the pipeline selecting the best one. That is a useful role, but it is also too small… The warning is not that taste has no value. It does. The warning is that taste without authorship, stake, or construction can become a narrow and eventually fragile role.

The warning Sharma adds is the part the “taste is the moat” conversation tends to skip. Refusal without authorship is still selector work, and selector work has a ceiling. The durable position pairs refined taste with authorship—owning what ships and the stake for getting it wrong.

Abstract swirling ink or fluid art in dark and pink tones with white text reading “Good Taste: The Only Real Moat Left.”

Good Taste: The Only Real Moat Left

AI makes competent output cheap. That makes taste more valuable, but also more incomplete. The real edge comes from pairing judgment with context, stakes, and the willingness to build.

rajnandan.com

Two podcast conversations with frontier-lab design leaders on what designing at an AI lab looks like day-to-day. I previously linked to Lenny Rachitsky’s interview with Jenny Wen, head of design for Claude, where she described a redistribution of designer hours: less mocking, more pairing with engineers, a sliver of direct implementation. The activities themselves still look like design.

Ian Silber, head of product design at OpenAI, on Michael Riddering’s Dive Club, describes work that doesn’t fit the same list:

Designers working on this are hopefully spending a lot less time in Figma or whatever tool you use to draw pixels, and more time really thinking about how you interact with this thing, and the fact that the model really is the core product.

Silber’s concrete example is onboarding. Instead of building a first-run tutorial, his team shapes what the model already knows about the person:

We have this super intelligent model that could probably do a much better job trying to understand what this person’s goals are […] We’re really stripping back a lot of what you might traditionally do and trying to say, “Well, actually […] let’s think about like how we should give this context to the model that this person is brand new and they might need some handholding.”

The traditional response adds UI around the problem. Silber’s team takes it out and gives the model enough context to meet the user where they are.

That kind of work needs its own scaffolding, and OpenAI is building it:

We have a whole system called the Dynamic User Interface Library, which allows us to design things that the model can then interpret.

Primitives the model composes at runtime, shaped by system prompts and context rather than drawn flow by flow. Wen is describing a redistribution of designer hours inside activities that still look recognizable. Silber is describing activities that don’t quite have names yet. And yes, that is still design.
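To make that idea concrete, here is a thought-experiment sketch of what a runtime-composed UI might look like. Everything below—the primitive names, the spec schema, the renderer—is my own invention for illustration, not OpenAI’s actual Dynamic User Interface Library:

```python
# Hypothetical sketch: a small set of UI primitives a model could
# compose at runtime by emitting a spec (a dict), which the client
# then renders. Primitive set and schema are invented for illustration.

PRIMITIVES = {"card", "list", "confirm", "text"}

def render(spec):
    """Render a model-emitted spec into flat text.

    A real system would map primitives to native components;
    this just validates the spec and joins the pieces."""
    kind = spec.get("type")
    if kind not in PRIMITIVES:
        raise ValueError(f"unknown primitive: {kind!r}")
    if kind == "text":
        return spec["value"]
    if kind == "card":
        body = "\n".join(render(child) for child in spec.get("children", []))
        return f"[{spec['title']}]\n{body}"
    if kind == "list":
        return "\n".join(f"- {render(item)}" for item in spec["items"])
    if kind == "confirm":
        return f"({spec['label']}: yes/no)"

# Given context like "this person is brand new and might need some
# handholding," the model might emit an onboarding card rather than
# the client showing a hardcoded first-run tutorial:
spec = {
    "type": "card",
    "title": "Welcome",
    "children": [
        {"type": "text", "value": "Let's set up your first project."},
        {"type": "confirm", "label": "Want a quick tour?"},
    ],
}
print(render(spec))
```

The design work in this world is choosing the primitive vocabulary and the context the model sees, not drawing the card itself.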

Ian Silber - What it’s like designing at OpenAI

If you’re like me you gotta be curious... what’s it like designing at OpenAI?

youtube.com

The gap between an AI-produced prototype and a shippable product has a shape. Most of us assume it’s the visual 20%: the polish AI output drifts on. Chad Johnson’s case is that the 20% is the trivial part, and the real gap sits upstream of everything visible.

Chad Johnson, writing in his newsletter:

The deeper issue was that nobody had asked whether a prototype was even the right artifact to produce at that stage. The PM had made three assumptions about user intent that we hadn’t validated. They’d skipped past a critical question about whether this flow needed to exist at all, or whether the real problem was upstream in the information architecture. They’d built a beautiful answer to a question nobody had confirmed was worth asking. That’s the part that stuck with me. Not the visual gaps. The thinking gaps.

That lines up with what I’ve been calling C+ out of the box: artifacts that read well and seem credible until you apply critical thinking. Johnson gets specific about what’s actually missing, and none of it is visual: the assumption nobody validated, the upstream question nobody asked. The interface was fine. The thinking was absent from the (probably) AI-generated PRD.

Johnson again:

…design production got democratized, but design judgment didn’t. Anyone can make something now. Almost nobody new learned how to think well about what should be made, why, and for whom. And that gap, between what’s possible to produce and what’s actually been thought through, is now the entire playing field for our profession. Designers aren’t becoming obsolete. They’re becoming stewards.

Judgment still takes years to build, and no tool compresses that.

The last 20% is rarely the gap that matters. The first question—should we build this?—almost always is. Very few teams have the muscle to ask it.

Abstract digital art featuring curved, layered surfaces with fine parallel lines in warm orange, red, and deep blue gradients.

The Last 20% and Who’s Asking Why?

Everyone can build now. Almost nobody stops to ask if they should.

chadsnewsletter.substack.com
A sleek high-speed bullet train with glowing headlights crossing a bridge through dense fog over a misty landscape.

Acceleration Is Not Automation

I’ve been wandering the wilderness to understand where the software design profession is going. Via this blog and my newsletter, I’ve been exploring the possibilities by reading, commenting, and writing. Many other designers are in the same boat, with Erika Flowers’s Zero Vector design methodology being the most defined. Kudos to her for being one of the first—if not the first—to plant the flag.

Directionally, Flowers is right. But for me, working in a team and on B2B software, it feels too simplistic and ignores the realities of working with customers and counterparts in product management and engineering. (That’s her whole point: one person to do it all, no handoff.)

The destination is within view. But it’s hazy and distant. The path to get there is unclear, like driving through soupy fog when your headlights reflecting off the mist are all you can see.

I’ve written that AI-era design work reduces to taste and judgment. Elizabeth Goodspeed’s case for designer-writers gets there from a different direction.

Elizabeth Goodspeed, writing for It’s Nice That:

You can get away with a lot in design: conceptual ideas are able to sit inside a visual piece of work without ever being fully spelled out. They’re gestured at rather than articulated. Writing forces you to figure out exactly what your idea is; if it isn’t working, you’ll know immediately. Where design is like a ballet – implicit ideas carried through form – then writing is closer to a theatre – your thinking has to be explicitly spoken.

Goodspeed’s point is that design lets you gesture at an idea without ever articulating it, and writing forces you to name it. A designer who can’t explain why a choice works has taste they can’t grow or pass on.

Goodspeed’s second point goes further:

Writing is to graphic design what clay is to pottery. It’s the material designers shape and massage into form. To work with text well, you have to really be able to read and understand what you’re setting – not just how it looks and basics like not hyphenating a word in a bad spot, but what it means on a deeper level. Just as reading makes you a better writer, writing makes you a better reader.

Product designers don’t usually think of themselves as writers. But user stories are writing, and articulating what a user should be able to do through an experience and why is essential.

Worth reading in full. She makes writing feel like a design discipline.

Bold black text reading "Placeholder Text" and "Elizabeth Goodspeed" on a pink background, flanked by columns of lorem ipsum-style body copy.

Elizabeth Goodspeed on why design writing needs designers writing

Without designers writing about their own work, design is easy to misunderstand. Writing helps designers work through what they think – and makes that thinking visible to others.

itsnicethat.com

Every few weeks, another essay or YouTube video announces that AI has killed craft. One of my favorite designers writing about design, Christopher Butler, goes the other way:

No knowledge I possess about design—the incorporeal understanding that makes what I create better than an off-the-shelf template or something done by someone without my experience—is made irrelevant by AI. Nor is it contradicted by my use of AI tools. Structure still communicates before content. Visual hierarchy still guides attention. Negative space still creates rhythm. These principles don’t vanish because I’m working through AI rather than directly manipulating pixels. The craft migrates to a different level of abstraction. But it remains craft.

Butler’s claim is that the principles don’t vanish; they operate at a higher altitude. The unfinished part is naming where that altitude actually is. For product designers, it’s concept and hierarchy: the decisions that require knowing the user and the stake someone is willing to carry. The generated layout and the choice of components are still outputs. What’s left of design is the judgment that picks between them.

Butler’s sharper line is the binary between consumption and practice:

Someone who generates an interface with AI and calls it done isn’t practicing craft. They’re consuming convenience. Someone who generates an interface, inspects it, questions what it’s actually communicating, refines the structure, generates again, compares variations, understands why one serves the user better than another—they’re practicing craft. They’re building knowledge through iteration. The tool doesn’t determine whether you’re working with craft. Your approach does.

That’s Jiro Ono’s shokunin applied to interfaces: craft as lifelong practice, not manual labor. A camera doesn’t take a picture, and a model doesn’t make a design. That decision is the craft.

Butler’s argument reassures me. What worries me is how optional that decision is becoming. The output already looks finished. The designers who keep asking why one version serves the user better than another will still be designers in five years. The rest may still have jobs, as operators of a tool doing the work their taste used to do.

Close-up of a vibrant fingerprint with swirling ridge patterns in orange, red, blue, and yellow iridescent colors with glittery highlights.

Craft is Untouchable

I have a vested interest in the title of this piece being true. I’ve spent decades developing craft—not just making things, but understanding systems, seeing patterns, making judgments that can’t be reduced to prompts. If AI eliminates the need for that expertise, I’m in trouble.

chrbutler.com

Specialization is the whole game. Give an agent a specific role and clear constraints, and the quality of the output changes completely. I’ve been learning this firsthand with Claude Code skills.

Marie Claire Dean took that principle and scaled it into an open-source system called Designpowers. Her reasoning:

Most AI tools give you one assistant. You ask it something, it answers, and you figure out what to do next. That’s not how design teams work.

Design teams work because a strategist thinks differently from a visual designer, who thinks differently from a content writer, who thinks differently from someone doing accessibility review. The handoffs between those perspectives are where the work gets better. The friction is productive.

Her team of ten covers the full pipeline from discovery through shipping, with dedicated specialists for strategy, visual design, content, motion, accessibility, and critique. All sharing one design state document, with the human directing.

On what she learned building it:

The act of encoding a design process forces you to decide what the handoffs actually are. When does strategy end and visual design begin? What does the content writer need from the strategist before they can start? What happens when the accessibility reviewer and the design critic disagree?

That’s the same clarity I’ve found writing Claude Code skills: what does this agent need to know, and where does its scope end? On where the human stays essential:

The idea is simple: agents can verify that a design is correct, aligned to the brief, accessible, consistent. They can’t tell you whether it’s beautiful. That’s your job.

The full system is on GitHub.
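The pattern Dean describes is worth sketching: specialist roles, one shared state document, and explicit handoff contracts. The sketch below is mine, not Designpowers—the role names, the `needs`/`produces` contract, and the pipeline runner are all invented to illustrate the shape:

```python
# Hypothetical sketch of a specialist-agent pipeline sharing one
# design-state document. Each agent declares what it needs before it
# can start and what it contributes. Names and structure are mine,
# not Designpowers'.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Specialist:
    role: str
    needs: list       # state keys that must exist before this agent runs
    produces: str     # state key this agent writes
    run: Callable     # takes the shared state, returns its contribution

def run_pipeline(specialists, state):
    """Run each specialist in order, enforcing the handoff contract."""
    for s in specialists:
        missing = [k for k in s.needs if k not in state]
        if missing:
            raise RuntimeError(f"{s.role} blocked: missing {missing}")
        state[s.produces] = s.run(state)
    return state

team = [
    Specialist("strategist", needs=["brief"], produces="strategy",
               run=lambda st: f"strategy for: {st['brief']}"),
    Specialist("content", needs=["strategy"], produces="copy",
               run=lambda st: "headline + body derived from strategy"),
    Specialist("critic", needs=["strategy", "copy"], produces="critique",
               run=lambda st: "notes on alignment to brief"),
]

state = run_pipeline(team, {"brief": "preschool site redesign"})
```

Encoding the process this way forces exactly the questions Dean raises: where strategy ends, what the content writer needs before starting, and what happens when an upstream key is missing.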

3D illustration of abstract biological structures resembling a protein or molecule, with colorful folded shapes, helices, and spheres floating against a dark blue background.

I Built a Design Team Out of AI Agents

...and they’re free!

marieclairedean.substack.com

I’ve watched design team values die in a Confluence page. The offsite happens, the Post-Its get transcribed, the principles get written up with care, and then everyone goes back to their desks and ships exactly the way they did before. I’ve seen it with product principles and brand values too. The deck gets built, implementation starts, and the deck gets forgotten.

Vitaly Friedman, writing for Smashing Magazine, on why this matters more than ever:

We often see design principles as rigid guidelines that dictate design decisions. But actually, they are an incredible tool to rally the team around a shared purpose and document the values and beliefs that an organization embodies. They align teams and inform decision-making. They also keep us afloat amidst all the hype, big assumptions, desire for faster delivery, and AI workslop.

Friedman again:

In times when we can generate any passable design and code within minutes, we need to decide better what’s worth designing and building — and what values we want our products to embody. It’s similar to voice and tone. You might not design it intentionally, but then end users will define it for you. And so, without principles, many company initiatives are random, sporadic, ad-hoc — and feel vague, inconsistent, or simply dull to the outside world.

You might not write principles intentionally, but your product will have them anyway. The question is whether you chose them or inherited them by default.

Friedman closes with the part most teams skip:

Creating principles is only a small portion of the work; most work is about effectively sharing and embedding them. It’s difficult to get anywhere without finding ways to make design principles a default — by revisiting settings, templates, naming conventions, and output. Principles help avoid endless discussions that often stem from personal preferences or taste. But design should not be a matter of taste; it must be guided by our goals and values.

Creating principles feels productive. But alignment without embedding is a Confluence page nobody opens twice. Principles have to show up in the Figma component library, the ticket template, the review rubric. They have to be repeated so that they are ingrained. They have to become the path of least resistance.

Smashing Magazine article title card: "A Practical Guide To Design Principles" by Vitaly Friedman, tagged Design, UX, UI.

A Practical Guide To Design Principles — Smashing Magazine

Design principles with references, examples, and methods for quick look-up. Brought to you by Design Patterns For AI Interfaces, friendly video courses on UX and design patterns by Vitaly.

smashingmagazine.com

Silicon Valley’s pitch to designers is that AI is the more knowledgeable partner now, so they should get good at prompting it. Write better instructions, get better output.

Peter Zakrzewski, writing for UX Collective, pushes back:

The current Silicon Valley pitch to designers is essentially this: AI is your MKO now. It knows more patterns than you do. It executes faster than you do. It can code. Your job is to learn how to give it good instructions — to become a fluent prompter of a more capable system. I want to challenge that framing directly.

His challenge starts with a concrete test. He asked three leading AI systems to render a dining table with a concrete slab top resting on dry spaghetti legs, then show the scene five seconds after the legs gave way. All three rendered the impossibility with total confidence. None could feel that the physics don’t work.

That test illustrates what Zakrzewski calls the Inversion Error:

We have built a Symbolic Giant resting on an Enactive Void. These systems can write about gravity with technical or even poetic fluency but cannot feel it. They can describe a structure but cannot tell you whether it will stand or fall. The ground is shaking because the floor is missing.

“Symbolic Giant resting on an Enactive Void” is a mouthful, but the floor metaphor does the work: AI’s language fluency masks a total absence of spatial, embodied reasoning. The kind designers rely on every day without naming it. Zakrzewski on what that means for the prompting pitch:

Designers do not think primarily in sentences. Our human cognition is deeply embodied. We think in diagrams, in spatial relationships, in load paths and sight lines and in the non-discursive logic of things that must connect to other things in three-dimensional space. […] We are being asked to compress years of embodied cognition and our three-dimensional spatial judgment into a text prompt and then accept whatever the machine generates as an adequate rendering of our intent. We are, in other words, being asked to abandon the very capability that the AI lacks and that our projects require.

When someone tells designers to compress spatial judgment into a text prompt, they’re asking designers to throw away the one capability AI genuinely lacks and the one we’re genuinely great at.

There was a theme to some of the posts on this blog last week—about how words should come before the pixels. I made a similar argument in the newsletter: the work is getting more verbal and conceptual, but the eye stays. Zakrzewski makes the case for what words alone can’t carry: the spatial, embodied judgment that tells you whether the thing will actually stand.

A mechanical robotic hand reaching upward against a stormy sky, overlaid with a bold red banner reading “Form follows nothing.”

The ground is shaking: Why designers must flip the script on AI

Something has shifted in the way the design field operates, and I think most of us can sense it even if we haven’t yet found the words or…

uxdesign.cc

The first time I wrote about Jenny Wen, I pushed back. She said the design process was dead, and I argued the proportions had shifted but the process itself was intact. I also noted a context problem: her “ship fast, iterate publicly” approach makes sense for greenfield AI products at Anthropic but gets harder with established install bases.

Wen has been making the rounds and in a new interview, I’m finding a lot that I’m nodding my head to.

Jenny Wen, speaking on Tommy Geoco’s State of Play:

Often design needs to follow what the model is capable of and design from there, as opposed to starting from a design vision first. I think that can feel tough as a designer because you’re like, oh, I want to be design-led, we should be designing it first and then the technology should follow. But I think that’s just the reality of working at a research lab where the technology is emergent and you have to sort of decide what to do with it.

“Design follows the model” is an interesting phrase from a design leader. It inverts the dogma that design should lead and engineering should follow. But Wen isn’t being defeatist. She’s describing a practical reality at a leading AI lab where the models’ capabilities are changing faster than any roadmap can account for.

This shows up concretely in how her team works:

The big thing is designers are implementing code, through using Claude Code. That has been the biggest difference from working at Anthropic versus back when I worked at Figma. […] Even today, we were reporting some bugs and some quality issues, and one of the designers was like, “Cool, let me just fix them.” And that was cool to just not have to tag an engineer for them to do anything.

A designer casually fixing production bugs without tagging an engineer. Just another Tuesday at Anthropic.

Geoco’s summary of Wen’s argument crystallizes something we’ve all been thinking quietly about:

She said, having taste versus being able to execute are two completely different things. They’re usually bundled together, but they don’t have to be. And in a world where AI can increasingly execute, the question becomes, and it’s kind of uncomfortable, do you actually have good taste or are you just pushing pixels around?

That’s the thread tying all of this together. When designers are closer to the product, fixing bugs in production, prototyping against the live model, the judgment they’re applying isn’t visual. It’s product sense: knowing which of those 12 options is worth shipping, which edge case will break trust, when the model’s output is good enough for real users. That’s the taste Wen is describing, and it has very little to do with pixels.

A lot of designers have been coasting on execution skills that felt like taste. They debate corner radii and centering labels in a button with amateur vs pro designer memes. Who cares! AI is about to make the difference visible.

The New Era of UX Designers

Jenny Wen led design on FigJam, one of the most playful tools to hit design in a decade. Now she’s at Anthropic designing Claude. Not just the model, but the product that millions use daily.

youtube.com

When I was a younger designer, I always started with a pen and sketchbook. Sketch first, think with your hands. Now I write first to understand the problem space, then sketch. The images come after the words.

Elizabeth Goodspeed, speaking on Nicola Hamilton’s DesignThinkers podcast, takes this further than I ever would—she can barely picture images at all:

I am far more towards aphantasia. I have a very limited view of things in my mind. I think the analogy I use is it’s looking at an apple in a dark room and the lights are turning on and off and I’m wearing sunglasses and also the apple’s moving.

Her ideas don’t start as images. They start as words:

My ideas are usually very conceptual verbal, not even sentences. I guess I’m a robot—I don’t have an inner voice either. It’s just a pure void concept up there.

That might explain why Goodspeed is one of the sharpest design writers working. When you can’t conjure images internally, language becomes your primary tool for developing ideas. The archives and ephemera she’s known for aren’t aesthetic mood boards—they’re external memory for a mind that processes concepts before forms.

Goodspeed on the myth of the visually inspired designer:

That to me is damaging to creatives because it has this idea that we’re this noble savage where these images just move through us and we see everything in this Willy Wonka kind of way. In reality, I think it’s a process just like any other making process, whether that’s a carpenter or writer or anything else. It actually, I think at its best, is methodical and not just this inspired bolt of lightning.

The best design work starts with a concept, not a visual. Goodspeed just happens to have a neurological reason for working that way. The rest of us had to learn it. Worth listening to the full conversation—she also covers teaching, thesis panic, and why she calls her own work “graphic design fan art.”

RGD DesignThinkers Podcast episode 041 cover featuring Elizabeth Goodspeed, with a green-tinted portrait of a woman with dark curly hair and bangs.

DesignThinkers: Elizabeth Goodspeed

Elizabeth Goodspeed discusses how research, design history, and close attention to visual culture can help creatives develop deeper, more original work beyond trends.

printmag.com

Gui Seiz designs at Figma. His team uses Claude Code to bridge design and code. And he still reaches for the canvas when precision matters.

Seiz, speaking on Claire Vo’s How I AI podcast:

I don’t think we’re there yet in general with these code tools in terms of the precision editing that you want to do. […] I think still the gold standard for me is just being able to drag stuff around. And you can do a lot with a click that would take you a hundred words to write and to really precisely nail. No one wants to prompt for the exact hex code or the shade of yellow and that kind of stuff. That’s just easier to just quickly do and directly manipulate.

Seiz isn’t anti-AI. His team pulls production code into Figma via MCP, edits it visually, and pushes it back to the codebase. He’s bullish on what that does to the old workflow:

It’s definitely changed our workflows in a way that it’s really blown up what a workflow even is. Before, for the majority of our careers, we’ve had a very linear, agreed-upon workflow where you increase fidelity as you go on. Because it’s really expensive to work in code, and it’s really cheap just to trade ideas and sketch them out. But AI basically collapsed that, and it’s just as cheap to riff in code as it is to riff in design.

The cost of exploration collapsed. The need for direct manipulation didn’t. Both can be true.

How Figma engineers sync designs with Claude Code and Codex

Most teams are still passing static design files back and forth, and most Figma files are already out of date by the time they reach engineering. Gui Seiz (designer) and Alex Kern (engineer) from Figma walk through the exact workflow their team uses to bridge that gap with AI, live onscreen. They…

youtube.com

Sarah Gibbons and Huei-Hsin Wang, writing for Nielsen Norman Group:

What looks like “skipping the process” is just compressing it — running faster through the stages and using experience as a guide. […] What gets called “intuition” is really process, compressed and internalized through years of doing the work. The intuition designers trust was built by the very process they dismiss.

Gibbons and Wang on what comes after you stop pretending you’re not using one:

The real skill in modern design is not the ability to abandon process — it’s process literacy: picking the right approach and tool for the problem. Know which process fits the job and understand the risks of not following it. Better yet, don’t claim you’re not using a process if you’re just applying it differently.

The article responds directly to Anthropic designer Jenny Wen’s interview. Wen’s advice works because she’s a senior designer inside a well-resourced AI company with strong design culture. But we only hear about the wins. The solution-first prototypes that went nowhere, the features that shipped and saw no adoption, don’t make it into any public interviews. Most teams don’t have Wen’s conditions. And even inside teams that do, the advice assumes seniority. Junior designers haven’t accumulated the experience that makes compression possible. They’re being told to skip a step they haven’t taken yet.

Two overlapping diamond shapes in purple and violet with dashed outlines illustrate compression, alongside the title "Design Process Isn't Dead, It's Compressed" from NN/G.

Design Process Isn’t Dead, It’s Compressed

As AI speeds up design work, the argument to “throw out the process” misrepresents how experienced designers work.

nngroup.com

The Sonos app disaster taught me something about roadmaps. Leadership kept adding initiatives—Sonos Radio, the Ace headphones—without ever naming what those additions displaced. QA got squeezed. Stability testing got cut. The designers who warned them were overruled. No leader said out loud what was being sacrificed to make room.

Yusuf Aytas names exactly this failure:

People like to talk about priorities as if the main problem is choosing what matters. In practice, the deterministic factor is capacity. Team capacity. System capacity. The share you lose to maintenance, interruptions, coordination, and keeping the machine fit to run. Ignoring these physical limits turns an ambitious roadmap into a collective illusion.

“Collective illusion.” That’s the right name for it. Aytas on where the dishonesty starts:

A new customer request appears. Leadership wants a visible bet. Sales needs something for a deal. Everyone talks about importance. Almost nobody says what gets pushed out. That is the real decision. They have only added pressure and left the team to absorb the contradiction later.

Aytas builds the whole piece around a carpentry metaphor—one saw, limited operators, timber that needs oiling and adjustment before it can be cut. Software hides the constraint better, but the physics are the same. There’s more in the piece on shaping work before it competes for capacity, using visible investment buckets, and why reallocation is never free.

A green manual press machine surrounded by bulging white sacks inside a rustic mud-walled storage shed with a corrugated metal roof.

Capacity Is the Roadmap

Most roadmap problems are capacity problems. Make investment buckets visible, budget interrupts, and force trade-offs into the open.

yusufaytas.com

David Hoang, writing for Proof of Concept, proposes a squad model for tackling a company’s hardest, most ambiguous problems:

The squad: a forward deployed engineer, a forward deployed designer, and a researcher. Three people. That’s it. They operate like a startup-within-the-company, deployed against a specific, ambiguous problem. […] This is a product discovery team with teeth — they don’t just produce insights and hand them off. They produce working prototypes and validated direction. […] Three people don’t need standups, retros, or Jira boards. They need a shared problem and a whiteboard.

No PM. The shared problem replaces the roadmap, and a researcher replaces the product manager. Hoang borrows the concept from Palantir’s Forward Deployed Engineers and extends it to design. His argument: AI tools have given designers enough technical leverage to prototype at engineering speed, so the designer who finds the problem can build the first cut of the solution.

A three-person team with AI tools in 2026 can cover the ground that used to require a ten-person cross-functional team. That’s the direct result of collapsing the build cost of exploration.

Hoang argues that the rotation model matters as much as the squad composition. Four to eight weeks, then disband. The team doesn’t calcify into a feature factory. Designers rotate through the company’s hardest problems instead of sitting on the same product team filing tickets for years.

My counter: designers who sit in the same problem space gain deeper knowledge and context. Rotation could be counterproductive if not handled deliberately.

Hand-drawn Venn diagram showing three overlapping circles labeled Researcher, Design Engineer, and GTM, with the center intersection labeled "Forward Deployed Designer."

Forward deployed designer

In the early 2010s, Palantir coined a role that didn’t exist before: the Forward Deployed Software Engineer. These weren’t engineers building features on a roadmap. They were engineers embedded directly at client companies — sitting with analysts, operators, and decision-makers — to discover the problem and build the solution in the same motion. The role spread. Databricks, Scale AI, and OpenAI adopted variations.

proofofconcept.pub

I’ve argued that design tools should be canvas-first, not chatbox-first. Jeff, writing in Abduzeedo, makes the case for the opposite:

Designers have always borrowed from developers. Version control, component systems, token-based design — these ideas crossed the aisle from engineering and reshaped how visual work gets done. Vibe designing follows the same logic. Instead of opening Figma and reaching for a drag-and-drop panel, designers drop into the terminal. They prompt an AI model directly from the CLI, pipe the output into a file, and iterate without ever touching a mouse.

He isn’t theorizing. He published this article using browser automation and AI, with minimal manual clicking.

I don’t think the answer is CLI or canvas. It’s both. Designers are visual thinkers—that’s the cognitive foundation of the discipline, not a limitation to engineer away. Going fully terminal assumes we can be retrained to work without seeing what we’re making, or that the profession will attract people with entirely different skills.

What does look right is the plumbing underneath. Jeff on Paper.design’s MCP integration:

Its canvas is built natively on web standards — HTML and CSS — which means AI agents working through Paper’s MCP server can read and write design files directly. Tools like get_screenshot, get_jsx, write_html, and update_styles give Claude Code or Cursor direct read-write access to the design canvas.

HyperCard figured this out in 1987: direct manipulation on top of a scripting layer. The tools are finally catching up, with AI as the scripting engine.
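To make that plumbing concrete, here is a minimal sketch of the request shape involved. MCP servers speak JSON-RPC 2.0, and agents invoke tools via the `tools/call` method; the `update_styles` tool name comes from the quote above, while the `selector` and `styles` arguments are hypothetical, invented for illustration:

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP `tools/call` request as a JSON-RPC 2.0 string."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# e.g. an agent asking the canvas to restyle an element.
# The tool name is from the article; the arguments are hypothetical.
request = mcp_tool_call(1, "update_styles", {
    "selector": ".eyebrow",
    "styles": {"border-style": "dashed"},
})
```

Because the canvas is plain HTML and CSS, the agent’s read-write surface is the same one the designer sees, which is what makes the HyperCard comparison apt.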

VS Code editor with a browser preview showing the "Abduzeedo Editor" app, displaying a portrait photo with a VHS glitch shader effect applied.

Vibe Designing with Bash Access

Vibe designing is the design equivalent of vibe coding — where bash scripts, AI tools, and CLI commands are finally replacing traditional GUI-only tools.

abduzeedo.com

Intercom’s design team published numbers that show what happens when agents take over the build. John Moriarty, writing for Fin Ideas:

At Intercom, how we design and build software is unrecognizable from 12 months ago. Our engineering team is already at the point where 90% of pull requests are authored by Claude Code, part of an internal initiative called 2x, where the explicit goal is to double productivity using AI.

When 90% of your pull requests are AI-authored, the designer’s job changes whether you update the title or not. Moriarty’s framework for what comes next:

As the rate of execution accelerates, the role of design becomes sharper. Agents can generate artefacts, but they cannot decide which problems matter, set intent, resolve trade-offs, or hold the bar for quality. Our craft shifts with that reality. […] Agents will own the middle, the build. Design’s value concentrates at the edges, deciding what to build and then determining whether the output is good enough.

Design’s value lands at the edges, not the middle, and Intercom is already adapting their infrastructure to match. They’ve repositioned their design system as what Moriarty calls “agentic infrastructure”:

In a world where Agents write most of the code, design systems become the infrastructure that protects quality. Components, libraries and guidelines are the foundation that Agents and teams build on top of. The better the system, the better everything produced. Strong systems allow quality to scale without adding review overhead.

This tracks with the argument that design systems are becoming AI infrastructure—and Intercom is running it in production. The design system is the quality control layer that lets agents ship at speed without designers reviewing every screen.

Moriarty’s full piece covers how they’re restructuring day-to-day work—moving designers into code, treating Figma as a whiteboard, running structured AI fluency training. Worth a full read.

A paintbrush dissolves into digital code lines and circuitry, with the text "How we design when the code writes itself" and "Fin/ideas" logo.

How we design when the code writes itself

AI isn’t just increasing the speed of building, it’s changing how we work

ideas.fin.ai

Karo Zieminski spent nine days breaking Claude Cowork before writing this guide:

I’ve seen enough of shallow tutorials that simply rephrase the official docs to know I wanted to do something different. So I rebuilt some of my workflows from scratch, tracked what failed, measured what saved time, and mapped 56 practical tips into the resource I wish existed when I started.

I appreciate her methodical breakdown of the app, especially on when to use which flavor of Claude, which, honestly, has been a sticking point for me.

Comparison table of Claude Chat, Cowork, and Code modes across six aspects: interface, best for, output, sub-agents, file access, and target user.

Zieminski’s nice breakdown of the differences between Claude Chat, Cowork, and Code.

The guide barely talks about prompting. It’s almost entirely about the pre-work: dedicated folder structures, global instructions via CLAUDE.md, chunked skills, delegation patterns that define end states instead of steps. The distinction Zieminski draws between Chat skills and Cowork skills:

Skills in Chat were useful. Skills in Cowork are operational. They shape autonomous work. Your brand guidelines skill doesn’t just influence a reply. It governs every file Claude creates. Your writing guidelines skill doesn’t just shape a draft. It governs every article Claude writes autonomously.

Zieminski on skill architecture:

Chunk your skills instead of building one giant skill that tries to handle everything. I’ve tested both approaches and the results from one giant skill were much worse. For example, I use three separate writing skills instead of one: an overall voice skill, a corporate writing skill, and a newsletter writing skill. Each handles its own context. Claude never confuses who I’m writing for.
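The chunking she describes can be sketched as a folder layout. This is a hypothetical sketch assuming Anthropic’s skills convention, where each skill is a folder containing a SKILL.md with a short frontmatter header; the folder names and instruction text here are illustrative, not taken from the guide:

```shell
# Three narrow writing skills instead of one catch-all skill.
# Names and contents are hypothetical.
mkdir -p skills/voice skills/corporate-writing skills/newsletter

cat > skills/voice/SKILL.md <<'EOF'
---
name: voice
description: Overall authorial voice, applied to every piece of writing.
---
Write plainly. Prefer short sentences. Avoid jargon.
EOF

cat > skills/newsletter/SKILL.md <<'EOF'
---
name: newsletter
description: Formatting and tone rules for the weekly newsletter only.
---
Open with a one-line summary. Keep sections under 150 words.
EOF
```

Each skill carries only its own context, which is the point: the model loads the narrow instruction set that matches the task instead of reconciling one sprawling document.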

If you’re already using Claude Cowork or just Cowork curious, bookmark this one.

Cartoon girl with a ponytail standing on a stool, hammering a nail into a wall to hang a blank canvas or paper.

Claude Cowork Guide for Power Users: 50+ Tested Tips on Plugins, Skills, Sub-Agents, and Memory

What works, what breaks, and how to make Claude Cowork genuinely useful in 2026.

karozieminski.substack.com

In high school and through college, I worked at a desktop publishing service bureau in San Francisco. We had Macintosh computers and Linotronic imagesetters (super hi-res laser printers), not Linotype machines. Down the street, those traditional type shops still existed, but their business was already thinning out. Occasionally a graphic designer would send us type to set, and we’d do it in QuarkXPress. The fact that the job landed on our desk at all told you everything about where the industry was headed. The shop’s real business was pre-press and color separations, and eventually direct-to-plate eliminated even that.

Erika Flowers has been building out her Zero-Vector Design framework, and two of her pieces read as a pair. “Zero Stage to Orbit” on UX Magazine uses the rocket equation as a structural lens for the design-to-development pipeline. “The Last Typesetter” on her Substack uses the death of the typesetting profession to make the same argument from a different direction. Together they make the case that the design role, not the skill, is dissolving.

In “The Last Typesetter,” Flowers draws on Sennett:

When suddenly everyone could set type, the difference between good typography and bad typography went from an industry concern to a public epidemic. Bad kerning everywhere. Rivers running through justified text. Orphaned words dangling at the tops of columns like socks left on a clothesline. The people who understood typography were needed more than ever.

But not as typesetters.

Richard Sennett wrote about this in The Craftsman: the difference between a skill and the institutional container built around that skill. Containers look permanent until they are not. The skill outlives every container it has ever occupied.

That’s what happened at the service bureau. The skill—color, typography, print production—survived. The container—the shop, the role, the apprenticeship—did not.

In “Zero Stage to Orbit,” Flowers maps the pipeline onto rocket science:

Each stage in the traditional pipeline is designed to compensate for the limitations of the previous one. Research to inform design. Design to spec for developers. Specs to survive handoff. QA to catch what handoff broke. Retros to discuss why QA caught so much. Process to manage process.

Fuel to carry fuel. The modern development pipeline is not a solution. It is a multi-stage rocket. And most of the energy is going to overhead.

The overhead diagnosis is sharp, and the launch pad economy—consultancies, workflow tools, Agile coaching certifications—has a financial interest in keeping the rocket grounded.

Flowers addresses why the “unicorn” solution failed:

The design technologist did not fail because no one person can possess all the skills. The design technologist failed because no one can hold all the skills while still fighting gravity. They were still launching from the ground, still hauling the translation overhead, just with one person doing all the hauling instead of a team.

The problem was never the number of stages. It was the gravity well.

A product manager I work with recently told me he could think of a solution to a user need, but not a creative solution the way the designer on his team could. Specialization produces real expertise. The design technologist wasn’t wrong about the vision. They were wrong about the physics. AI changes the gravity, not the skills.

What separates both pieces from the standard “AI changes everything” take:

I am also uncertain here, also mid-journey, also discovering orbit’s real constraints in real time. My career, work, and livelihood are just as much at risk as everyone else’s. But that doesn’t discount the facts about the transition to new capabilities.

She’s out on a limb, reflecting a shift the entire industry can feel, without pretending she has the map. In “The Last Typesetter,” she puts it more bluntly: “Defend the role, or follow the skill.”

The skill will survive. It always has. But the transition is real, and not everyone can afford to be mid-journey. Truthfully, I am uncertain too. The thing I’ve loved to do since the 7th grade, the thing that has been my identity for most of my life, is changing, possibly dissolving into something else.

Shiny metallic rocket launching diagonally upward against a blue sky, with the text "Design never had a process problem but a gravity one."

Zero Stage to Orbit

What if the pipeline was never broken — it was just never meant to get you to orbit? From handoff docs to sprint ceremonies, every tool and role we built was rational until Orbit became available. Find out what it really means to ship from there.

uxmag.com

After nine years of failed attempts at his typeface Nave, Jamie Clarke did something counterintuitive: he threw out the files and started drawing from memory.

Jamie Clarke, writing for I Love Typography:

I began again from scratch, drawing from memory rather than reworking the old outlines (a great tip from Gerry Leonidas), and the results were instantly better.

Memory is a taste filter. When you draw from memory, you keep only the ideas that have lodged deep enough to matter. The cruft—the half-committed decisions, the accumulated compromises—falls away. Clarke’s breakthrough came not from refining what he had, but from forgetting most of it.

The second breakthrough was lateral. While flipping through specimen books, he landed on something unrelated to his project:

One day, while flicking through some specimen books, I came across a specimen of Futura Black. It had little in common with what I was trying to do, but it sparked an idea for the capitals. Paul Renner’s stencil forms look as if they were carved out of solid blocks, which puts all the emphasis on the negative shapes. Thinking this way allowed me to keep the outer shapes formal while letting the internal cuts be more playful. That balance finally gave me the capital forms I had been searching for and brought the design back in line with my original aim.

That recognition only works after enough reps. Clarke spent a decade shipping other typefaces—Brim Narrow, Rig Shaded, Span—before he had the vocabulary to see what Futura Black was telling him.

A type specimen sheet displaying large-scale serif typeface characters set in multiple lines, annotated with handwritten red critique notes. The text reads pangram fragments ("nymph blitz quick vex / dwarf jogs an walts jo / b veaenexeneaeed a qu / ick frong ingk duniper"). Red ink annotations point out design issues including "imbalanced," "different," "too shy," "rounds seem wide," "still wobbles," "bigger," "n has thick shoulder / a doesn't," and "dark," with corresponding arrows and underlines marking specific letterforms.

How Not to Take 10 Years to Design a Typeface

I have often heard type designers talk about the many years they spend developing a typeface. I would listen with awe and think, “That must have been a real challenge. It must be exquisitely crafted and probably a little bit groundbreaking too.” So it feels slightly absurd to admit that […]

ilovetypography.com

If you’re a designer who feels the ground shifting but doesn’t know where to step, Erika Flowers built a free, structured curriculum for exactly that moment. Zero-Vector Design is her framework for collapsing the handoff between design and engineering, using AI agents as crew rather than replacements. The distinction she draws between this and vibe coding is worth internalizing:

You bring the systems thinking, the architecture, the years of knowing what good looks like. The AI extends your reach, not your judgment. Speed without intention is just faster failure. Speed with intention is leverage.

Six levels, 60+ lessons, all free. Worth bookmarking.

Zero-Vector Design brand card on dark background with tagline "From intent to artifact, directly." and website zerovector.design

Zero-Vector Design

A design philosophy for the age of AI. No intermediary. No translation layer. No friction. From intent to artifact, directly.

zerovector.design

Most design teams treat the design system as the starting point. Open a new project, pull in the component library, start assembling. It’s efficient. It’s also a trap, according to one designer.

David Hoang, writing for Proof of Concept:

I start without a design system. This is deliberate. Production-grade components carry assumptions—spacing, hierarchy, interaction patterns—that narrow the solution space before you’ve had a chance to explore it. If I’m proposing a feature, the design system is the right starting point. But in exploration mode, the system comes later. Sketches are for divergence; design systems are instruments of convergence.

Design systems exist to create consistency, not ideas. When you reach for them too early, you may be converging before you’ve diverged.

Hoang’s workflow inverts the order: sketch unconstrained in code, dial up technical fidelity first, bring the design system in only after you’ve found directions worth pursuing. LLMs make that final step nearly free:

The design system isn’t a starting point—it’s a finishing move. You sketch unconstrained to explore the problem space, then snap your best ideas onto the system’s rails to see if they hold up. The LLM makes that snap nearly instant, so I can run the full loop—sketch, evaluate, systemize—multiple times in a single session. Ideas that break under the system’s constraints get caught early. Ideas that survive get stronger.

The designer makes every structural decision. The LLM handles the re-skinning. Production work, not judgment work.

And ideas that break the system’s constraints surface gaps worth contributing back. That’s the part most design system teams miss. The system should learn from the exploration it constrains, not just gate it.

Hand-drawn diagram showing multiple "Code slides" feeding into a central "Draw tool" grid, which outputs to a "Solution" box on the right.

Sketching with code

Issue 286: Treating code like a pencil, not a blueprint

proofofconcept.pub

Director. Orchestrator. Architect. Different words for the same shift. Stop making things one at a time. Start building systems that make things.

Weber Wong, writing for Every, gives this shift a useful name: artifact thinking.

I call this mental model artifact thinking: creative work that produces discrete outputs, one at a time, each beginning from scratch. Traditional tools like Photoshop and Illustrator, which demand endless hand-tuned adjustments and manual refinements to produce a single polished image, trap you in this way of working. Midjourney and DALL-E feel like liberation because they generate outputs so quickly, and you can communicate with them in the same language you speak every day. But visual prompts, too, are one-time, disposable things. You can’t hand them to a colleague and be confident you will get the same result. The magic of near-instantaneous generation masks the fact that you are still in artifact thinking.

That last line is the sharp one. Adopting Midjourney doesn’t mean you’ve left artifact thinking. You’re still producing one-offs—just faster ones. The orchestrator gap isn’t about which tool you use. It’s about whether you’re building systems or pressing buttons.

Wong’s proposed fix is node-based visual programming—workflows you can inspect, modify, and share. He knows it sounds like he’s asking designers to become engineers:

I understand the resistance to this idea. Some people hear “visual programming” and think we’re trying to turn designers into engineers. That’s backwards. We’re trying to give creative professionals the power that programmers have always had: the ability to build systems that work while you sleep, that can be stored as multiple versions and shared and improved, and that take what people already know how to do and make it something anyone can run.

I’ve been asking for canvas-first tools, not chatbox-first ones. Wong is right that chat alone isn’t enough for professional creative work. “Artifact thinking” is a concept worth keeping—regardless of whether Flora is the tool that finally kills it.
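The system-versus-artifact distinction Wong draws can be sketched in miniature. This is a toy illustration, not Flora’s model: every function and node name below is hypothetical. The idea is that the shareable asset is the workflow itself, which anyone can rerun, inspect, or extend, rather than any single output it produces:

```python
from typing import Callable

# A node transforms a state dict; a pipeline chains nodes into one
# reusable workflow. All node names here are hypothetical.
Node = Callable[[dict], dict]

def pipeline(*nodes: Node) -> Node:
    """Chain nodes into a single reusable workflow."""
    def run(state: dict) -> dict:
        for node in nodes:
            state = node(state)
        return state
    return run

def load_brief(state):  return {**state, "brief": f"brief for {state['client']}"}
def draft_copy(state):  return {**state, "copy": state["brief"].upper()}
def apply_brand(state): return {**state, "final": f"[BRAND] {state['copy']}"}

make_landing_page = pipeline(load_brief, draft_copy, apply_brand)

# The workflow, not the output, is the asset: rerun it for any client.
result = make_landing_page({"client": "acme"})
```

Hand a colleague the prompt behind one Midjourney image and you have shared an artifact; hand them `make_landing_page` and you have shared a system that produces the same result every time.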

Person wearing a "node-pilled" cap typing at a keyboard with red strings tangled around their fingers, overlaid with the word "THESIS."

Creative Work Is About to Look a Lot More Like Programming

Flora’s Weber Wong on why creative professionals need to stop thinking in artifacts and start thinking in systems

every.to