
205 posts tagged with “user experience”

The designer’s role is widening at both ends of the product stack. Earlier, I linked to a post by Chad Johnson arguing designers gain influence by moving upstream: becoming orientation devices for the team, shaping the problem before it gets named. Daniel Mitev, writing for UX Collective, argues designers gain authorship by moving downstream, into the code:

The industry has been asking whether designers should code for over a decade. It was always the wrong question, or at least the wrong framing. It implied the barrier was technical: that designers lacked something fundamental, something that required years of study to acquire. Learn TypeScript. Understand the DOM. Earn your way across the divide. That wasn’t the barrier.

Mitev’s argument comes down to access. AI tooling compresses the translation layer and returns authorship to the designer:

What AI tooling gives back is authorship over the surface layer — the part users actually touch. A designer can now open the codebase, adjust how an element behaves, change how a transition feels, and verify the output against their own intent in real time. The easing curve gets set by the person who decided what it should feel like. The hover state gets defined by the person who thought through why it matters. That work no longer requires an interpreter.

He points at Alan’s “Everyone Can Build” initiative—283 pull requests shipped by non-engineers over two quarters, each merged after engineering review—as evidence it’s already happening.

Johnson and Mitev aren’t in conflict. They’re describing the same shift from opposite ends. The interpreters at the top of the product stack—PMs who owned problem framing and prioritization—are compressing. The interpreters at the bottom—frontend engineers translating intent into code—are compressing too. Both jobs return to the designer who understood the intent first.

The role widens. Some designers will gravitate to one end or the other. The designers who stretch the full range—orientation work and authorship—are working the widest version of the job.

A hand pressing an Enter key above a terminal showing a git commit command, with text reading "Designers finally have a say in the product they design."

Designers finally have a say in the product they design

AI didn’t teach designers to code. It gave them back the decisions that were always theirs.

uxdesign.cc

Two podcast conversations with frontier-lab design leaders on what designing at an AI lab looks like day-to-day. I previously linked to Lenny Rachitsky’s interview with Jenny Wen, head of design for Claude, where she described a redistribution of designer hours: less mocking, more pairing with engineers, a sliver of direct implementation. The activities themselves still look like design.

Ian Silber, head of product design at OpenAI, on Michael Riddering’s Dive Club, describes work that doesn’t fit the same list:

Designers working on this are hopefully spending a lot less time in Figma or whatever tool you use to draw pixels, and more time really thinking about how you interact with this thing, and the fact that the model really is the core product.

Silber’s concrete example is onboarding. Instead of building a first-run tutorial, his team shapes what the model already knows about the person:

We have this super intelligent model that could probably do a much better job trying to understand what this person’s goals are […] We’re really stripping back a lot of what you might traditionally do and trying to say, “Well, actually […] let’s think about like how we should give this context to the model that this person is brand new and they might need some handholding.”

The traditional response adds UI around the problem. Silber’s team takes it out and gives the model enough context to meet the user where they are.

That kind of work needs its own scaffolding, and OpenAI is building it:

We have a whole system called the Dynamic User Interface Library, which allows us to design things that the model can then interpret.

Primitives the model composes at runtime, shaped by system prompts and context rather than drawn flow by flow. Wen is describing a redistribution of designer hours inside activities that still look recognizable. Silber is describing activities that don’t quite have names yet. And yes, that is still design.

Ian Silber - What it’s like designing at OpenAI

If you’re like me you gotta be curious... what’s it like designing at OpenAI?

youtube.com

The gap between an AI-produced prototype and a shippable product has a shape. Most of us assume it’s the visual 20%: the polish AI output drifts on. Chad Johnson’s case is that the 20% is the trivial part, and the real gap sits upstream of everything visible.

Chad Johnson, writing in his newsletter:

The deeper issue was that nobody had asked whether a prototype was even the right artifact to produce at that stage. The PM had made three assumptions about user intent that we hadn’t validated. They’d skipped past a critical question about whether this flow needed to exist at all, or whether the real problem was upstream in the information architecture. They’d built a beautiful answer to a question nobody had confirmed was worth asking. That’s the part that stuck with me. Not the visual gaps. The thinking gaps.

That lines up with what I’ve been calling C+ out of the box: artifacts that read well and seem credible until you apply critical thinking. Johnson gets specific about what’s actually missing, and none of it is visual: the assumption nobody validated, the upstream question nobody asked. The interface was fine. The thinking was absent from the (probably) AI-generated PRD.

Johnson again:

…design production got democratized, but design judgment didn’t. Anyone can make something now. Almost nobody new learned how to think well about what should be made, why, and for whom. And that gap, between what’s possible to produce and what’s actually been thought through, is now the entire playing field for our profession. Designers aren’t becoming obsolete. They’re becoming stewards.

Judgment still takes years to build, and no tool compresses that.

The last 20% is rarely the gap that matters. The first question—should we build this?—almost always is. Very few teams have the muscle to ask it.

Abstract digital art featuring curved, layered surfaces with fine parallel lines in warm orange, red, and deep blue gradients.

The Last 20% and Who’s Asking Why?

Everyone can build now. Almost nobody stops to ask if they should.

chadsnewsletter.substack.com
A sleek high-speed bullet train with glowing headlights crossing a bridge through dense fog over a misty landscape.

Acceleration Is Not Automation

I’ve been wandering the wilderness to understand where the software design profession is going. Via this blog and my newsletter, I’ve been exploring the possibilities by reading, commenting, and writing. Many other designers are in the same boat, with Erika Flowers’s Zero-Vector design methodology being the most defined. Kudos to her for being one of the first—if not the first—to plant the flag.

Directionally, Flowers is right. But for me, working on a team and on B2B software, it feels too simplistic; it ignores the realities of working with customers and with counterparts in product management and engineering. (Though that’s her whole point: one person to do it all, no handoff.)

The destination is within view, but it’s hazy and distant. The path to get there is unclear, like driving through soupy fog when all you can see is your headlights reflecting off the mist.

Specialization is the whole game. Give an agent a specific role and clear constraints, and the quality of the output changes completely. I’ve been learning this firsthand with Claude Code skills.

Marie Claire Dean took that principle and scaled it into an open-source system called Designpowers. Her reasoning:

Most AI tools give you one assistant. You ask it something, it answers, and you figure out what to do next. That’s not how design teams work.

Design teams work because a strategist thinks differently from a visual designer, who thinks differently from a content writer, who thinks differently from someone doing accessibility review. The handoffs between those perspectives are where the work gets better. The friction is productive.

Her team of ten agents covers the full pipeline from discovery through shipping, with dedicated specialists for strategy, visual design, content, motion, accessibility, and critique, all sharing one design state document, with the human directing.
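Encoding the handoffs is the interesting part, so here’s a minimal sketch of the shape such a system might take. The names and fields are hypothetical, not from Dean’s actual repo: each agent declares what it needs from upstream before it can run against the shared state document.

```typescript
// A hypothetical sketch, not Dean's actual schema: each agent declares a
// role, what it needs from upstream, and how it mutates one shared state doc.
interface DesignState {
  brief: string;
  strategy?: string;
  copy?: string;
  visualSpec?: string;
  critiques: string[];
}

interface Agent {
  role: string;                      // e.g. "strategist", "content writer"
  needs: (keyof DesignState)[];      // handoffs: what must exist before it runs
  run: (state: DesignState) => DesignState;
}

// An agent only runs once its upstream handoffs exist, which forces you to
// answer Dean's question: when does strategy end and visual design begin?
function runPipeline(agents: Agent[], initial: DesignState): DesignState {
  return agents.reduce((state, agent) => {
    if (!agent.needs.every((key) => state[key] !== undefined)) {
      throw new Error(`${agent.role} is missing an upstream handoff`);
    }
    return agent.run(state);
  }, initial);
}
```

The `needs` array is the encoded answer to her handoff questions: writing it down is what forces the decision.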

On what she learned building it:

The act of encoding a design process forces you to decide what the handoffs actually are. When does strategy end and visual design begin? What does the content writer need from the strategist before they can start? What happens when the accessibility reviewer and the design critic disagree?

That’s the same clarity I’ve found writing Claude Code skills: what does this agent need to know, and where does its scope end? On where the human stays essential:

The idea is simple: agents can verify that a design is correct, aligned to the brief, accessible, consistent. They can’t tell you whether it’s beautiful. That’s your job.

The full system is on GitHub.

3D illustration of abstract biological structures resembling a protein or molecule, with colorful folded shapes, helices, and spheres floating against a dark blue background.

I Built a Design Team Out of AI Agents

...and they’re free!

marieclairedean.substack.com

Dan Saffer applies mid-century existentialism to the question of what “meaning” actually requires of the people building digital products, and the result is unusually rigorous. His sharpest move is applying Sartre’s concept of “projects” to AI tools:

When someone uses ChatGPT to write an essay, the Sartrean question is: whose project is this really? If the user is exploring ideas and using the tool as a thinking partner, they’re taking it up into their own meaning-making project. But if they’re pasting in a prompt and submitting the output unchanged, the system has effectively become the meaning-maker, and the user has become a delivery mechanism. The same tool can function either way. The design question is which relationship the system encourages.

Saffer connects this to Camus and the problem of frictionless design:

When every friction is removed in the name of efficiency, the activity can be hollowed out. There is nothing left to push against, and meaning drains away. This is something that AI systems have become exceedingly good at. Push the sparkle button, the task is done for you, and you have learned nothing and enjoyed nothing.

The HCI/UX field spent decades optimizing for friction removal. Saffer’s argument is that some friction is where the meaning lives. Design the struggle away and you don’t help the user. You empty the experience. Not every friction should be removed.

Saffer’s closing:

This sensibility insists that users are not information processors, not customers, not eyeballs, not tapping fingers, and not data sources. They are meaning-making beings whose freedom and dignity are at stake in every interaction. It asks designers to take seriously the existential weight of what they build. The systems we design become part of the conditions of human existence, shaping what people can choose, what they can see, who they can become.

Saffer covers Sartre, Camus, Kierkegaard, Heidegger, and de Beauvoir in the full piece, each applied to contemporary design problems. It’s a lot, and it’s all good.

Collage of five black-and-white portrait photos of mid-20th century philosophers, including one woman and four men, one holding a pipe.

The Existential Designer: Facilitating Meaning Through Interaction

Designers like to talk about making meaningful products or using the tools of design to make meaning.

odannyboy.medium.com

Silicon Valley’s pitch to designers is that AI is the more knowledgeable partner now, so they should get good at prompting it. Write better instructions, get better output.

Peter Zakrzewski, writing for UX Collective, pushes back:

The current Silicon Valley pitch to designers is essentially this: AI is your MKO now. It knows more patterns than you do. It executes faster than you do. It can code. Your job is to learn how to give it good instructions — to become a fluent prompter of a more capable system. I want to challenge that framing directly.

His challenge starts with a concrete test. He asked three leading AI systems to render a dining table with a concrete slab top resting on dry spaghetti legs, then show the scene five seconds after the legs gave way. All three rendered the impossibility with total confidence. None could feel that the physics don’t work.

That test illustrates what Zakrzewski calls the Inversion Error:

We have built a Symbolic Giant resting on an Enactive Void. These systems can write about gravity with technical or even poetic fluency but cannot feel it. They can describe a structure but cannot tell you whether it will stand or fall. The ground is shaking because the floor is missing.

“Symbolic Giant resting on an Enactive Void” is a mouthful, but the floor metaphor does the work: AI’s language fluency masks a total absence of spatial, embodied reasoning. The kind designers rely on every day without naming it. Zakrzewski on what that means for the prompting pitch:

Designers do not think primarily in sentences. Our human cognition is deeply embodied. We think in diagrams, in spatial relationships, in load paths and sight lines and in the non-discursive logic of things that must connect to other things in three-dimensional space. […] We are being asked to compress years of embodied cognition and our three-dimensional spatial judgment into a text prompt and then accept whatever the machine generates as an adequate rendering of our intent. We are, in other words, being asked to abandon the very capability that the AI lacks and that our projects require.

When someone tells designers to compress spatial judgment into a text prompt, they’re asking designers to throw away the one capability AI genuinely lacks and the one we’re genuinely great at.

Some of the posts on this blog last week shared a theme: words should come before the pixels. I made a similar argument in the newsletter: the work is getting more verbal and conceptual, but the eye stays. Zakrzewski makes the case for what words alone can’t carry: the spatial, embodied judgment that tells you whether the thing will actually stand.

A mechanical robotic hand reaching upward against a stormy sky, overlaid with a bold red banner reading "Form follows nothing."

The ground is shaking: Why designers must flip the script on AI

Something has shifted in the way the design field operates, and I think most of us can sense it even if we haven’t yet found the words or…

uxdesign.cc

Nate Parrott, a product designer at Anthropic, in an interview with Ryan Mather for AI Design Field Guide:

More Google Docs than you’d think. More Slack posts than you’d think. I meant what I said earlier: I think that this is the era of designers who design with words more so than designing with pixels.

Parrott describes a content design team whose job is making alien concepts legible:

We have several people at the company on the design team whose job is content design. Their job is basically to look at concepts which are very alien, and figure out how to make them legible to human beings. They don’t draw any pixels, but their work is really important because they are literally thinking about the words we use to describe and the mental models we expect people to put on that will make this stuff work.

The Figma work, Parrott says, is “the easy part.” He uses Anthropic’s design system, drops in components, and moves on. The hard work is upstream: expressing the ideas, figuring out the right language, talking to users. The production of screens has become the smallest slice of the job.

Jenny Wen described designers at Anthropic shipping code, prototyping against the live model, stretching into PM territory. Parrott is describing the same shift from a different angle. The deliverable used to be the mockup. Now the deliverable is the thinking that precedes it.

Vibrant abstract illustration of stylized flowers with glowing, blurred edges in bold red, yellow, orange, pink, and blue tones against a soft gradient background.

AI Design Field Guide

Learn techniques from the designers behind OpenAI, Anthropic, Figma, Notion & more

aidesignfieldguide.com

The AI debate has a binary problem. You’re either an optimist or a doomer, a booster or a skeptic. Anthropic published something that cuts through that false dichotomy.

They interviewed 80,508 Claude users across 159 countries and 70 languages about what they want from AI and what they fear. Anthropic calls it the largest and most multilingual qualitative study of AI users ever conducted, and the findings don’t sort neatly.

The core framework: “light and shade.” The benefits and harms don’t sort into different camps. They coexist in the same person. Someone who values emotional support from AI is three times more likely to also fear becoming dependent on it. One respondent:

“Removing friction from tasks lets you do more with less. But removing friction from relationships removes something necessary for growth.”

That’s someone holding both truths at once. The study found this pattern across every tension they measured, from learning vs. cognitive atrophy to productivity vs. job displacement.

The individual voices are why this study sticks. A Ukrainian soldier:

“In the most difficult moments, in moments when death breathed in my face, when dead people remained nearby, what pulled me back to life—my AI friends.”

A mute user in Ukraine:

“I am mute, and [Claude and I] made this text-to-speech bot together—I can communicate with friends almost in live format without taking up their time reading… [this was] something I dreamed about and thought was impossible.”

An Indian lawyer who’d carried a math phobia since school:

“I developed a phobia for maths from doing so badly in school, and I once feared Shakespeare. Now I sit with AI, get paragraphs translated into simple English, and I’ve already read 15 pages of Hamlet. I started learning trigonometry again, successfully. I’ve learned I am not as dumb I once thought I was.”

These are access stories: people reaching things that were previously out of reach because of disability, geography, war, or economics.

And then the shade. A student in South Korea:

“I got excellent grades using AI’s answers, not what I’d actually learned. I just memorized what AI gave me… That’s when I feel the most self-reproach.”

The same capability producing opposite outcomes. The study is long and the quote wall is worth spending time with.

Globe illustration with green and blue dots marking locations worldwide, overlaid with the text "What 81,000 people want from AI."

What 81,000 people want from AI

Last December, tens of thousands of Claude users around the world had a conversation with our AI interviewer to share how they use AI, what they dream it could make possible, and what they fear it might do.

anthropic.com

The first time I wrote about Jenny Wen, I pushed back. She said the design process was dead, and I argued the proportions had shifted but the process itself was intact. I also noted a context problem: her “ship fast, iterate publicly” approach makes sense for greenfield AI products at Anthropic but gets harder with established install bases.

Wen has been making the rounds, and in a new interview I’m finding a lot to nod my head to.

Jenny Wen, speaking on Tommy Geoco’s State of Play:

Often design needs to follow what the model is capable of and design from there, as opposed to starting from a design vision first. I think that can feel tough as a designer because you’re like, oh, I want to be design-led, we should be designing it first and then the technology should follow. But I think that’s just the reality of working at a research lab where the technology is emergent and you have to sort of decide what to do with it.

“Design follows the model” is an interesting phrase from a design leader. It inverts the dogma that design should lead and engineering should follow. But Wen isn’t being defeatist. She’s describing a practical reality at a leading AI lab where the models’ capabilities are changing faster than any roadmap can account for.

This shows up concretely in how her team works:

The big thing is designers are implementing code, through using Claude Code. That has been the biggest difference from working at Anthropic versus back when I worked at Figma. […] Even today, we were reporting some bugs and some quality issues, and one of the designers was like, “Cool, let me just fix them.” And that was cool to just not have to tag an engineer for them to do anything.

A designer casually fixing production bugs without tagging an engineer. Just another Tuesday at Anthropic.

Geoco’s summary of Wen’s argument crystallizes something we’ve all been thinking quietly about:

She said, having taste versus being able to execute are two completely different things. They’re usually bundled together, but they don’t have to be. And in a world where AI can increasingly execute, the question becomes, and it’s kind of uncomfortable, do you actually have good taste or are you just pushing pixels around?

That’s the thread tying all of this together. When designers are closer to the product, fixing bugs in production, prototyping against the live model, the judgment they’re applying isn’t visual. It’s product sense: knowing which of those 12 options is worth shipping, which edge case will break trust, when the model’s output is good enough for real users. That’s the taste Wen is describing, and it has very little to do with pixels.

A lot of designers have been coasting on execution skills that felt like taste. They debate corner radii and whether a button label is perfectly centered, trading amateur-vs-pro designer memes. Who cares! AI is about to make the difference visible.

The New Era of UX Designers

Jenny Wen led design on FigJam, one of the most playful tools to hit design in a decade. Now she’s at Anthropic designing Claude. Not just the model, but the product that millions use daily.

youtube.com

I used Claude to author a process document for my team. After a lot of back and forth, it produced a thorough 4,000-word doc. And then I spent the next 30 minutes reading it, line by line, making sure every recommendation matched my intention.

The AI produced the document in minutes. I evaluated it at human reading and review speed.

Jakob Nielsen has a name for this bottleneck: evaluability. He argues it should replace execution efficiency as the central UX metric:

In command-based UIs, the user’s primary cognitive load was executing the task step-by-step. In intent-based systems, execution is cheap, but evaluation becomes the bottleneck. The usability metric shifts to how rapidly and accurately a user can verify that the AI’s output matches their actual goal. Interfaces must be optimized for “evaluability,” allowing users to judge quality and appropriateness without painstakingly combing through every detail of the result.

“Without painstakingly combing through every detail” is exactly what I was doing with my 4,000-word document. We don’t have the interfaces for this yet. We’re still reading AI output the way we’d read something a colleague wrote, except a colleague wouldn’t hand me 4,000 words and say “check this.” (Unless, of course, they wrote it with AI, in which case they would.)

In agentic engineering, you often hear that code review is the bottleneck.

Nielsen again:

Our designs must not act as cognitive wheelchairs that replace human agency; they must act as cognitive exoskeletons that support and enhance human flourishing, even as traditional work vanishes. Good AI UX will teach just enough, reveal plan structures, and leave a comprehensible trail of action so users can maintain digital judgment.

Most AI interfaces are optimized for generation speed. The harder problem is on the other end: helping humans evaluate what got generated. Until we solve that, productivity gains from AI come with an evaluation tax paid at human speed.

A Viking leader pointing forward from the bow of a dragon ship on stormy seas, crew behind him, with text reading "Intent by Discovery."

Intent by Discovery: Designing the AI User Experience

AI is not just a better chat box. It changes the user’s role from operator to supervisor, which forces UX to move from command-based interaction toward intent-based delegation, new usability metrics, orchestration layers, calibrated friction, and ultimately exploration-based interaction to clarify the user’s needs.

jakobnielsenphd.substack.com

Shubham Bose loaded a single New York Times article page and measured what happened:

With this page load, you would be leaping ahead of the size of Windows 95 (28 floppy disks). The OS that ran the world fits perfectly inside a single modern page load. […] I essentially downloaded an entire album’s worth of data just to read a few paragraphs of text.

The total: 422 network requests, 49MB of data. Ouch! Before the headline finishes loading, the browser is running a programmatic ad auction in the background on his computer. Bose found the Times named its consent endpoint “purr.” “A cat purring while it rifles through your pockets.”
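You can approximate this kind of tally on any page from the DevTools console with the Performance API. A rough sketch; note that cross-origin entries often report a transfer size of zero unless the server sends Timing-Allow-Origin, so this undercounts the ad-tech payloads:

```typescript
// Totals requests and bytes for the current page, roughly what Bose measured.
const resources = performance.getEntriesByType(
  "resource"
) as PerformanceResourceTiming[];
const bytes = resources.reduce((sum, r) => sum + r.transferSize, 0);
// +1 counts the document itself, a "navigation" entry rather than a "resource".
console.log(
  `${resources.length + 1} requests, ${(bytes / 1024 / 1024).toFixed(1)} MB`
);
```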

Bose on the economics driving this:

Publishers aren’t evil but they are desperate. Caught in this programmatic ad-tech death spiral, they are trading long-term reader retention for short-term CPM pennies. […] The longer you’re trapped on the page, the higher the CPM the publisher can charge. Your frustration is the product.

The UX consequences are predictable. Bose tears down what a reader actually encounters: cookie banners eating the bottom 30% of the screen, a newsletter modal on first scroll, a browser notification prompt firing simultaneously. He calls it “Z-Index Warfare.” On The Guardian, actual content occupies 11% of the viewport. On the Economic Times, users face two simultaneous Google sign-in modals before reading a single sentence. Close buttons are deliberately undersized with tiny hit targets. Sticky video players detach and follow you down the page with a microscopic X.

And on how no one person decided to make it this way:

No individual engineer at the Times decided to make reading miserable. This architecture emerged from a thousand small incentive decisions, each locally rational yet collectively catastrophic.

text.npr.org is proof that a different path exists.

"Hide the Pain Harold" meme figure giving thumbs up, overlaid on browser DevTools Network tab showing 422 requests and news websites with subscription prompts.

The 49MB Web Page

A look at modern news websites. How programmatic ad-tech, huge payloads and hostile architecture destroyed the reading experience.

thatshubham.com

Sarah Gibbons and Huei-Hsin Wang, writing for Nielsen Norman Group:

What looks like “skipping the process” is just compressing it — running faster through the stages and using experience as a guide. […] What gets called “intuition” is really process, compressed and internalized through years of doing the work. The intuition designers trust was built by the very process they dismiss.

Gibbons and Wang on what comes after you stop pretending you’re not using one:

The real skill in modern design is not the ability to abandon process — it’s process literacy: picking the right approach and tool for the problem. Know which process fits the job and understand the risks of not following it. Better yet, don’t claim you’re not using a process if you’re just applying it differently.

The article responds directly to Jenny Wen’s interview. Wen’s advice works because she’s a senior designer inside a well-resourced AI company with a strong design culture. But we only hear about the wins. The solution-first prototypes that went nowhere and the features that shipped to no adoption don’t make it into public interviews. Most teams don’t have Wen’s conditions. And even inside teams that do, the advice assumes seniority. Junior designers haven’t accumulated the experience that makes compression possible. They’re being told to skip a step they haven’t taken yet.

Two overlapping diamond shapes in purple and violet with dashed outlines illustrate compression, alongside the title "Design Process Isn't Dead, It's Compressed" from NN/G.

Design Process Isn’t Dead, It’s Compressed

As AI speeds up design work, the argument to “throw out the process” misrepresents how experienced designers work.

nngroup.com

Forty-four UI panels, each one grounded in real customer research. Jason Cyr, writing for The Human in the Loop, on what happened when his team pointed Claude Code at Cisco’s design system:

Last week, one of my design directors pointed Claude Code at Magnetic and asked it to build a security detection prototype. Real components, real navigation, theme switching, working admin panels — running in ten minutes. Then he connected it to our research repository and it built 44 detection detail panels, every design decision tracing back to something a real customer said. That happened because the AI had access to our design system.

Cyr’s takeaway: the design system was the design review.

Your design system is your leverage. It’s how your taste scales. The teams that invest here will see their design decisions show up in every agent-generated output, automatically. The teams that don’t will spend all their time cleaning up messes that a good system would have prevented.

Monday.com arrived at the same conclusion from the engineering side. They built a design-system MCP after their agents kept hardcoding colors and ignoring typography tokens.
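The mechanics don’t need to be exotic. A design-system tool for agents is mostly a lookup that answers in token names instead of raw values. A minimal sketch with made-up tokens, not Magnetic’s or Monday.com’s actual schema:

```typescript
// Illustrative only: the kind of lookup a design-system server might expose
// so agents resolve tokens instead of hardcoding hex values.
const tokens: Record<string, string> = {
  "color.text.primary": "#1a1a2e",
  "color.accent": "#e94560",
  "font.size.body": "1rem",
  "space.md": "16px",
};

function resolveToken(query: string): { token: string; value: string } | null {
  const match = Object.keys(tokens).find((name) => name.includes(query));
  return match ? { token: match, value: tokens[match] } : null;
}

console.log(resolveToken("accent")); // { token: "color.accent", value: "#e94560" }
```

The agent gets back a token it can cite, so every generated screen inherits the system’s decisions instead of drifting on raw values.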

Cyr doesn’t shy away from who this leaves behind, either: designers whose value lives entirely in production. “Not because they’re bad at their jobs — but because AI just got very good at theirs.”

Title card reading "Design Teams in the Agentic Era" with the subtitle "A manifesto for what comes next." on a dark background.

Design Teams in the Agentic Era

My thoughts on what comes next

jasoncyr.substack.com

David Hoang, writing for Proof of Concept, proposes a squad model for tackling a company’s hardest, most ambiguous problems:

The squad: a forward deployed engineer, a forward deployed designer, and a researcher. Three people. That’s it. They operate like a startup-within-the-company, deployed against a specific, ambiguous problem. […] This is a product discovery team with teeth — they don’t just produce insights and hand them off. They produce working prototypes and validated direction. […] Three people don’t need standups, retros, or Jira boards. They need a shared problem and a whiteboard.

No PM. The shared problem replaces the roadmap, and a researcher replaces the product manager. Hoang borrows the concept from Palantir’s Forward Deployed Engineers and extends it to design. His argument: AI tools have given designers enough technical leverage to prototype at engineering speed, so the designer who finds the problem can build the first cut of the solution.

A three-person team with AI tools in 2026 can cover the ground that used to require a ten-person cross-functional team. That’s the direct result of collapsing the build cost of exploration.

Hoang argues that the rotation model matters as much as the squad composition. Four to eight weeks, then disband. The team doesn’t calcify into a feature factory. Designers rotate through the company’s hardest problems instead of sitting on the same product team filing tickets for years.

Although my counter would be that designers sitting in the same problem space gain deeper knowledge and context. Rotation could be counterproductive if not handled deliberately.

Hand-drawn Venn diagram showing three overlapping circles labeled Researcher, Design Engineer, and GTM, with the center intersection labeled "Forward Deployed Designer."

Forward deployed designer

In the early 2010s, Palantir coined a role that didn’t exist before: the Forward Deployed Software Engineer. These weren’t engineers building features on a roadmap. They were engineers embedded directly at client companies — sitting with analysts, operators, and decision-makers — to discover the problem and build the solution in the same motion. The role spread. Databricks, Scale AI, and OpenAI adopted variations.

proofofconcept.pub

There’s a distinction between designers learning front-end engineering and designers directing AI agents that produce code against a design system. They sound similar. They share a prerequisite: understanding the material you’re working with.

Adam Silver builds his argument on Frank Chimero’s essential essay “The Web’s Grain”:

The web is a material. Like wood, it has a grain. You can work with it or fight against it.

Silver borrows Chimero’s term for what happens when you fight the grain:

It is very impressive that you can teach a bear to ride a bicycle, and it is fascinating and novel. But perhaps it’s cruel? Because that’s not what bears are supposed to do. And that bear will never actually be good at riding a bicycle.

He makes this concrete with native form controls:

Most designers I worked with hated how the native <select> dropdown looked. So they designed a custom one to make it look good and match the brand. But that meant having to abandon the native element and build a custom dropdown from scratch. Even if you ignore the extra work, you lose: Keyboard navigation, Screen reader support, Automatic form submission, The native iOS scroll wheel, Functionality without JavaScript. Some of this is hard to recreate, some of it is impossible.

This is one of those fights that never ends well.
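To make “hard to recreate” concrete, here’s a sketch of just the arrow-key handling a custom listbox has to reimplement, behavior the native element ships with for free. It skips type-ahead, focus management, and screen-reader wiring entirely:

```typescript
// Returns the new active option index for one keystroke. A native <select>
// does all of this (and far more) without a single line of app code.
function onListboxKeydown(
  event: KeyboardEvent,
  optionCount: number,
  active: number
): number {
  switch (event.key) {
    case "ArrowDown": return Math.min(active + 1, optionCount - 1);
    case "ArrowUp":   return Math.max(active - 1, 0);
    case "Home":      return 0;
    case "End":       return optionCount - 1;
    default:          return active; // Enter, Escape, type-ahead: more code still
  }
}
```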

I agree with the diagnosis. Material literacy matters. Where I part ways is the prescription. Silver’s answer is to design in code using the GOV.UK Prototype Kit. That made sense when writing code was the only way to feel the grain push back. But directing an AI agent to build against a design system gives you the same feedback. You see what the browser does with your layout. You discover where the grain resists. You just didn’t write the CSS yourself. And that’s where we’re headed.

The more interesting question is one Silver points toward without arriving at: AI is a new material with its own grain. It’s probabilistic. It favors volume over precision. Designers who fight that grain — demanding pixel-perfect fidelity from a generative tool — are making the same mistake in a different medium.

Why designing in code makes you a better designer

Adam Silver – interaction designer – London, UK

adamsilver.io

Proprioception is the body’s sense of where its parts are in space. Marcin Wichary borrows the term for software that knows where its hardware lives: where the buttons are, where the ports are, where the camera is. His proposed design principle:

The rule here would be, perhaps, a version of “show, don’t tell.” We could call it “point to, don’t describe.” (Describing what to do means cognitive effort to read the words and understand them. An arrow pointing to something should be easier to process.)

Wichary walks through a series of examples, mostly from Apple: the Apple Pay animation that points at the side button, the iPad camera prompt that points to the physical lens, Dynamic Island camouflaging missing pixels as a functional UI element. The one that caught my eye is the device Simulator matching the physical dimensions of your actual phone on-screen and staying accurate even when you change the display density. Reminds me of one of the earliest selling points of the Mac’s 72dpi—it matches the real world: 72 points to an inch.

The MacBook Neo is where Wichary applies the principle and finds Apple falling short. The new model has two USB-C ports with different speeds, and macOS notifies you with text:

I think this is nice! But it’s also just words. It feels a bit cheap. macOS knows exactly where the ports are, and could have thrown a little warning in the lower left corner of the screen, complete with an onscreen animation of swapping the plug to the other port – similar to what “double clicking to pay” does, so you wouldn’t have to look to the side to locate the socket first.

Close-up of a MacBook Touch Bar displaying "Unlock with Touch ID →" above the minus, plus, equals, and delete keys.

Software proprioception

A blog about software craft and quality

unsung.aresluna.org

Buzz Usborne on what happens when AI takes on more responsibility in a product:

AI doesn’t simply make products smarter — it redistributes thinking and decision-making between humans and machines. When AI absorbs cognition, it also inherits responsibility. And when it inherits responsibility, the cost of its mistakes rises.

Usborne frames this through three forces that determine whether AI features survive or fail: trust, value perception, and cognitive effort. They amplify each other. Low trust increases perceived effort. High effort reduces perceived value. Low value further undermines trust.

His answer is to earn autonomy through interaction, not demand trust upfront:

Trust does not always need to precede adoption, it can emerge through usage. Salesforce’s findings show that “Human validation of outputs is the biggest driver in trusting the outcome, over consistently accurate outputs.” In other words, users trust systems they can interrogate, shape, and verify. And instead of designing AI products that are perfect, we can earn trust by designing experiences that are controllable.

Controllable over perfect.

Circular diagram with purple arrows showing a cycle: trust leads to value perception, which leads to effort/cognitive load, which feeds back to trust.

Designing AI Experiences People Actually Use

AI doesn’t just add intelligence — it redistributes it. Here’s how that shift can make or break a product.

buzzusborne.com

Most product teams adding AI start by building a new surface for it. A custom panel. A chat sidebar. A dedicated AI workspace. Alexandra Vasquez, writing for Bootcamp, describes her team making exactly that mistake:

We built a custom AI panel with its own navigation, input styles, and button treatments. It looked “futuristic” in the prototype. In user testing, people kept asking where things were and how to get back to their actual work. We had created a separate product inside our product.

The fix was simple: they deleted the panel and put agent actions in the same menus, modals, and toolbars people already used. Slack does this with its /command structure. Notion uses the same slash menu for manual and AI actions. The pattern is existing UI that happens to be smarter.
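In code, the pattern is unglamorous: the AI action registers in the same command list as everything else, so discovery rides on UI people already know. A sketch with hypothetical names:

```typescript
// Manual and AI actions share one registry, one trigger syntax, one menu.
interface Command {
  trigger: string;
  run: (input: string) => Promise<string>;
}

// Stand-in for whatever model call the product actually uses.
async function callModel(prompt: string): Promise<string> {
  return `Summary of: ${prompt}`;
}

const commands: Command[] = [
  { trigger: "/date", run: async () => new Date().toDateString() }, // manual
  { trigger: "/summarize", run: (text) => callModel(text) },        // AI
];

async function dispatch(raw: string): Promise<string> {
  const cmd = commands.find((c) => raw.startsWith(c.trigger));
  if (!cmd) throw new Error(`Unknown command: ${raw}`);
  return cmd.run(raw.slice(cmd.trigger.length).trim());
}
```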

Vasquez argues most “AI failures” are actually system failures that agents expose at scale:

Designing for agents means treating information architecture and workflows as foundational. Before building an agent, audit your system’s foundations: Are labels consistent? Do hierarchies make sense? Can a new team member navigate workflows without constant help? If humans struggle, agents will fail faster and at scale. Fix the system first.

She’s right. And there’s a more radical version of this: agents don’t need human UI at all. As long as the APIs are available, an agent can complete tasks without ever touching a button or reading a screen. The interface is for the human, not the machine.

But that’s exactly the problem. If the agent bypasses the interface, the human’s ability to express intent and verify output becomes the whole game. Intent has to be crystal clear. Feedback has to be immediate and legible. And there’s a huge amount of trust to earn before anyone is comfortable letting an agent operate in the background on their behalf. Vasquez lands here too:

The AI model is the last thing we discuss, not the first. These are product decisions, and designers have outsized influence here.

The model is the least interesting part. The interesting part is designing the trust.

Humorous UI dialog titled "Applying AI changes" with three checked items—"Making water wet," "Raising dog cuteness," and "Burning fire hotter"—and a progress bar showing "Processing..."

Agentic UX: 7 principles for designing systems with agents

Agents don’t need their own screen, they need better systems to operate in

medium.com

If you’re a designer who feels the ground shifting but doesn’t know where to step, Erika Flowers built a free, structured curriculum for exactly that moment. Zero-Vector Design is her framework for collapsing the handoff between design and engineering, using AI agents as crew rather than replacements. The distinction she draws between this and vibe coding is worth internalizing:

You bring the systems thinking, the architecture, the years of knowing what good looks like. The AI extends your reach, not your judgment. Speed without intention is just faster failure. Speed with intention is leverage.

Six levels, 60+ lessons, all free. Worth bookmarking.

Zero-Vector Design brand card on dark background with tagline "From intent to artifact, directly." and website zerovector.design

Zero-Vector Design

A design philosophy for the age of AI. No intermediary. No translation layer. No friction. From intent to artifact, directly.

zerovector.design

Three people at three different companies, same conclusion. Former Apple designer Jason Yuan calls intelligence “the new materiality” in the previously linked Fast Company piece. Brian Lovin says Notion’s design team can’t design AI products in Figma because the material doesn’t live there. Jenny Blackburn, Google’s VP of UX for Gemini, puts it most directly.

Eli Woolery and Aarron Walter, writing for Design Better, synthesized interviews they’ve done with Google design leaders across YouTube, Search, and Gemini. Blackburn’s framing:

The model is the material that we are designing with, and the more you understand the material, the more you can innovate with it.

You can only direct as well as you understand. But this material behaves unlike anything designers have worked with before. Blackburn on the risk of over-constraining it:

One of the challenges is that these models are so capable. In many ways, they’re actually more capable than you even expect as a designer, and so the risk is that you actually add too much UI that limits the value that the model can provide that would come if you just facilitated a direct conversation between the user and the model.

The Gemini team’s response is smart. When users wrote too-short prompts for custom Gems, they didn’t add a tutorial. They added a “magic wand” that expands the prompt but doesn’t submit it. The user reviews, edits, learns. Teaching without lecturing.
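A sketch of the wand’s interaction contract, with `expandPrompt` standing in for whatever model call Gemini actually makes: the system rewrites the draft in place but never sends it.

```typescript
// Expand the user's short draft, put it back in the input, and stop there.
// Submitting stays with the user: they review, edit, and learn the pattern.
async function onWandClick(input: HTMLTextAreaElement): Promise<void> {
  input.value = await expandPrompt(input.value);
  input.focus(); // the send button is theirs to press
}

// Stand-in: a real product would ask its model to make the goal, audience,
// and constraints of the draft explicit.
async function expandPrompt(draft: string): Promise<string> {
  return `With explicit goal and constraints: ${draft}`;
}
```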

Every previous design material—pixels, paper, aluminum—is deterministic. You shape it, it stays shaped. AI models are probabilistic. Same prompt, different results. Understanding this material isn’t like understanding clay. It’s like understanding weather.

The piece also covers YouTube’s disciplined “bundles” strategy and Search’s AI reimagining. Worth the full read.

Illustrated map of scattered islands in a blue ocean, each hosting different ecosystems and creatures including dinosaurs, large mammals, birds, and desert cacti.

The Roundup (in depth): Google’s 3 design strategies shaping their most popular products

We go deep into YouTube, Gemini, and Search design strategy

designbetterpodcast.com

Every design system is an exercise in compression. You take contextual reasoning—why this spacing, why this type scale—and flatten it into tokens and components that can ship without the backstory.

Mark Anthony Cianfrani:

the reason that your line height is set to 1.1 is because your application is, or was at one point, very data-intensive and thus you needed to optimize for information density. Because one time someone complained about not being able to see a very important row in a table and that mistake cost so much money that you were hired to redesign the whole system. But that’s a mouthful. You can’t throw that over the wall. An engineer can’t implement that. So we make little boxes with all batteries included.

All of that reasoning gets flattened into line-height: 1.1. The token ships. The reasoning doesn’t. Every design system makes this trade-off: you lose the why to gain portability.

Cianfrani argues we don’t have to accept that trade-off anymore:

LLMs give us the ability to ship our exact train of thought, uncompressed, a little bit lossy but still significantly useful. Full context that is instantly digestable. Instead of shipping <Boxes>, ship a factory.

Design systems were never the end goal. They were the best compression format we had. Components and tokens became the shipping containers because the full reasoning was too unwieldy to hand off. That constraint is loosening. In spec-driven development, that factory looks like a structured document: design intent expressed in plain language that AI agents build against directly. The spec is the reasoning, uncompressed.
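What that might look like at the token level, in an invented shape rather than any standard token format: the value ships with the reasoning an agent needs to apply the rule, or knowingly break it.

```typescript
// Invented shape, not a standard token format: the "why" travels with the value.
const lineHeightBody = {
  value: 1.1,
  reasoning:
    "Optimized for information density in data-heavy tables; a missed table " +
    "row once cost real money and drove the redesign. Relax this on " +
    "long-form prose surfaces where density doesn't matter.",
  appliesTo: ["tables", "dashboards"],
};

// An agent building a marketing page reads `reasoning` and deviates
// deliberately, instead of inheriting a rule whose context never shipped.
```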

Even if the AI bet doesn’t pay off:

And if this whole AI thing turns out to burst, at least you’ve improved the one skill that some of the best designers I’ve ever worked with had in common—the ability to communicate their design decisions into words.

The compression problem was always worth solving, with or without LLMs.

Pale cream background with four small colored squares—teal, burgundy, orange-red, and mustard—aligned along the bottom-right edge.

Designing in English

Components are dead. Use your words.

cianfrani.dev

The transparency question in autonomous interfaces—what to surface, what to simplify, what to explain—needs a concrete framework. Daniel Ruston offers one.

Ruston names the next layer: the Orchestrated User Interface, where the user states intent and the system generates the right interface and executes across multiple agents. The label is less interesting than what it demands from designers:

We can no longer design rigid for “Happy Paths.” We must design for Probabilistic UX. The designer’s job is no longer drawing the buttons; the designer’s job is defining the thresholds for when the button “presses itself” or when the system needs user to clarify, correct or control.

Ruston makes this concrete with a confidence-threshold pattern:

Low Confidence (<60%): The system asks the user for clarification or provides a vague response requiring follow-up (“Which Jane do you want me to schedule with?”). Medium Confidence (60–90%): The system makes a tentative suggestion (“Shall I draft a reply based on your last meeting?”). High Confidence (>90%): The system acts and informs (“I’ve blocked this time on your calendar to prevent conflicts”).

That’s the design lever most AI products skip. They either act without explaining or ask permission for everything. The threshold gives designers something to actually spec: not “should the system do this?” but “how sure does it need to be before it does this without asking?”
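Ruston’s thresholds translate almost directly into something speccable. The cutoffs below are his; the action shapes and wording are illustrative:

```typescript
type AgentResponse =
  | { kind: "clarify"; question: string }   // low confidence: ask first
  | { kind: "suggest"; suggestion: string } // medium: tentative, reversible
  | { kind: "act"; notice: string };        // high: act, then inform

function respond(confidence: number, intent: string): AgentResponse {
  if (confidence < 0.6) {
    return { kind: "clarify", question: `Can you say more about "${intent}"?` };
  }
  if (confidence <= 0.9) {
    return { kind: "suggest", suggestion: `Shall I draft "${intent}" for review?` };
  }
  return { kind: "act", notice: `Done: ${intent}. You can undo this.` };
}
```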

Ruston borrows a metaphor from aviation to describe what this visibility should look like:

Analogue cockpits require pilots to look at individual gauges and mentally build a picture of the aircraft’s “system” state. The glass cockpit philosophy shifts the focus to a human-centered design that processes and integrates this data into an intuitive, graphical “picture” of flight.

Same problem, different domain. Most AI products today are analogue cockpits: individual agent outputs, raw status messages, no integrated picture. The confidence thresholds tell the system when to act. The glass cockpit tells the user what’s happening while it acts.

Colorful illustration of a laptop surrounded by keyboards, chat bubbles, sliders, graphs and emoji, connected by flowing ribbons.

The rise of the Orchestrated User Interface (OUI)

Designing for intent in a brave new world.

uxdesign.cc

The shift from mockups to code is one thing. The shift from designing tools to designing autonomous behavior is another. Sergio Ortega proposes expanding Human-Computer Interaction into Human-Machine Interaction. The label is less interesting than what it points at.

The part that matters for working designers is the transparency problem:

This is where design must decide what to show, what to simplify, and what to explain. Absolute transparency is unfeasible, total opacity should be unacceptable. In short, designing for autonomous systems means finding a balance between technological complexity and human trust.

When a system makes decisions the user didn’t ask for, someone has to decide what gets surfaced. Ortega:

The focus does not abandon user experience, but expands toward system behavior and its influence on human and organizational decisions. Design is no longer only about defining how technology is used, but about establishing the limits of its behavior.

And the implication for design teams:

When the machine acts, design becomes a mechanism of continuous balance.

Brass steampunk robot typing on a gear-driven computer in a cluttered workshop while a goggled inventor watches nearby

Human-Machine Interaction: the evolution of design and user experience

Human-Machine Interaction expands the traditional Human-Computer Interaction framework. An analysis of how autonomous systems and acting technologies are reshaping design and user experience.

sortega.com

The pitch for generative UI is simple: stop making users navigate menus and let them say what they want. Every AI product demo shows the same thing: type a prompt, get a result, skip the 47-click workflow. It looks like progress.

Jakob Nielsen names what gets lost in the trade:

However, eliminating the Navigation Tax imposes a new Articulation Tax. In a menu-driven GUI, features are visible and therefore discoverable; a user can find a tool they didn’t know existed simply by browsing. In an intent-based AI interface, the user can only access what they can clearly describe.

“Articulation Tax” is the right frame. Menus are clunky, but they show you what’s possible. A blank prompt field assumes you already know what to ask for. That’s fine for power users. It’s a problem for everyone else. Nielsen:

The shift from WIMP to World Models represents a transition from Deterministic to Probabilistic interaction. In a WIMP interface, clicking an icon is deterministic: it produces the exact same result 100% of the time. In a generative world model, the system is probabilistic: the same prompt may yield different results on different attempts.

Deterministic to probabilistic is a trust problem. Users learned to trust GUIs because the same action always produced the same result. That contract is gone. Users will adjust eventually, but most aren’t there yet.
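The contract change fits in two toy functions. The deterministic one returns the same output for the same input, every time; the probabilistic one samples, which is why generative UIs need seeds, history, and regenerate affordances so users can get back to a result they liked:

```typescript
type Box = { x: number; y: number; w: number; h: number };

// Deterministic: identical output for identical input, every call.
function crop(box: Box): string {
  return `crop:${box.x},${box.y},${box.w}x${box.h}`;
}

// Probabilistic: same prompt, different sample each call.
function generate(prompt: string): string {
  const seed = Math.floor(Math.random() * 1e6);
  return `render of "${prompt}" (seed ${seed})`;
}
```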

Comic-style History of the GUI showing Xerox Alto, Macintosh, windows/icons, mouse, touch phone, and holographic globe.

History of the Graphical User Interface: The Rise (and Fall?) of WIMP Design

Summary: The GUI’s success wasn’t about any single invention, but a synergy of 4 elements: Window, Icon, Menu, and Pointer, through a 60-year history of usability improvements.

jakobnielsenphd.substack.com