
217 posts tagged with “user experience”

Forty-four UI panels generated in ten minutes, each one grounded in real customer research. Jason Cyr, writing for The Human in the Loop, on what happened when his team pointed Claude Code at Cisco’s design system:

Last week, one of my design directors pointed Claude Code at Magnetic and asked it to build a security detection prototype. Real components, real navigation, theme switching, working admin panels — running in ten minutes. Then he connected it to our research repository and it built 44 detection detail panels, every design decision tracing back to something a real customer said. That happened because the AI had access to our design system.

Cyr’s takeaway: the design system was the design review.

Your design system is your leverage. It’s how your taste scales. The teams that invest here will see their design decisions show up in every agent-generated output, automatically. The teams that don’t will spend all their time cleaning up messes that a good system would have prevented.

Monday.com arrived at the same conclusion from the engineering side. They built a design-system MCP after their agents kept hardcoding colors and ignoring typography tokens.

Cyr doesn’t shy away from who this leaves behind, either: designers whose value lives entirely in production. “Not because they’re bad at their jobs — but because AI just got very good at theirs.”

Title card reading "Design Teams in the Agentic Era" with the subtitle "A manifesto for what comes next." on a dark background.

Design Teams in the Agentic Era

My thoughts on what comes next

jasoncyr.substack.com

David Hoang, writing for Proof of Concept, proposes a squad model for tackling a company’s hardest, most ambiguous problems:

The squad: a forward deployed engineer, a forward deployed designer, and a researcher. Three people. That’s it. They operate like a startup-within-the-company, deployed against a specific, ambiguous problem. […] This is a product discovery team with teeth — they don’t just produce insights and hand them off. They produce working prototypes and validated direction. […] Three people don’t need standups, retros, or Jira boards. They need a shared problem and a whiteboard.

No PM. The shared problem replaces the roadmap, and a researcher replaces the product manager. Hoang borrows the concept from Palantir’s Forward Deployed Engineers and extends it to design. His argument: AI tools have given designers enough technical leverage to prototype at engineering speed, so the designer who finds the problem can build the first cut of the solution.

A three-person team with AI tools in 2026 can cover the ground that used to require a ten-person cross-functional team. That’s the direct result of collapsing the build cost of exploration.

Hoang argues that the rotation model matters as much as the squad composition. Four to eight weeks, then disband. The team doesn’t calcify into a feature factory. Designers rotate through the company’s hardest problems instead of sitting on the same product team filing tickets for years.

My counter: designers who sit in the same problem space gain deeper knowledge and context. Rotation could be counterproductive if not handled deliberately.

Hand-drawn Venn diagram showing three overlapping circles labeled Researcher, Design Engineer, and GTM, with the center intersection labeled "Forward Deployed Designer."

Forward deployed designer

In the early 2010s, Palantir coined a role that didn’t exist before: the Forward Deployed Software Engineer. These weren’t engineers building features on a roadmap. They were engineers embedded directly at client companies — sitting with analysts, operators, and decision-makers — to discover the problem and build the solution in the same motion. The role spread. Databricks, Scale AI, and OpenAI adopted variations.

proofofconcept.pub

There’s a distinction between designers learning front-end engineering and designers directing AI agents that produce code against a design system. They sound similar. They share a prerequisite: understanding the material you’re working with.

Adam Silver builds his argument on Frank Chimero’s essential essay “The Web’s Grain”:

The web is a material. Like wood, it has a grain. You can work with it or fight against it.

Silver borrows Chimero’s term for what happens when you fight the grain:

It is very impressive that you can teach a bear to ride a bicycle, and it is fascinating and novel. But perhaps it’s cruel? Because that’s not what bears are supposed to do. And that bear will never actually be good at riding a bicycle.

He makes this concrete with native form controls:

Most designers I worked with hated how the native <select> dropdown looked. So they designed a custom one to make it look good and match the brand. But that meant having to abandon the native element and build a custom dropdown from scratch. Even if you ignore the extra work, you lose: Keyboard navigation, Screen reader support, Automatic form submission, The native iOS scroll wheel, Functionality without JavaScript. Some of this is hard to recreate, some of it is impossible.

This is one of those fights that never ends well.
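To make the cost concrete: even just the arrow-key behavior a native `<select>` provides for free becomes code you own. A minimal sketch of that one slice (the function and its name are illustrative, not from Silver's post):

```typescript
// A fragment of the keyboard handling that comes free with a native
// <select> but must be rebuilt and maintained in a custom dropdown.
// This covers only arrow/Home/End navigation; type-ahead, screen reader
// announcements, and form semantics are all further work on top.
function nextIndex(current: number, key: string, optionCount: number): number {
  switch (key) {
    case "ArrowDown":
      return Math.min(current + 1, optionCount - 1); // clamp at last option
    case "ArrowUp":
      return Math.max(current - 1, 0); // clamp at first option
    case "Home":
      return 0;
    case "End":
      return optionCount - 1;
    default:
      return current; // Enter, Escape, type-ahead: not handled here
  }
}
```

And this is before focus management, ARIA roles, or touch behavior: each one is another piece of the grain you chose to fight.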

I agree with the diagnosis. Material literacy matters. Where I part ways is the prescription. Silver’s answer is to design in code using the GOV.UK Prototype Kit. That made sense when writing code was the only way to feel the grain push back. But directing an AI agent to build against a design system gives you the same feedback. You see what the browser does with your layout. You discover where the grain resists. You just didn’t write the CSS yourself. And that’s where we’re headed.

The more interesting question is one Silver points toward without arriving at: AI is a new material with its own grain. It’s probabilistic. It favors volume over precision. Designers who fight that grain — demanding pixel-perfect fidelity from a generative tool — are making the same mistake in a different medium.

Why designing in code makes you a better designer

Adam Silver – interaction designer – London, UK

adamsilver.io

Proprioception is the body’s sense of where its parts are in space. Marcin Wichary borrows the term for software that knows where its hardware lives: where the buttons are, where the ports are, where the camera is. His proposed design principle:

The rule here would be, perhaps, a version of “show, don’t tell.” We could call it “point to, don’t describe.” (Describing what to do means cognitive effort to read the words and understand them. An arrow pointing to something should be easier to process.)

Wichary walks through a series of examples, mostly from Apple: the Apple Pay animation that points at the side button, the iPad camera prompt that points to the physical lens, Dynamic Island camouflaging missing pixels as a functional UI element. The one that caught my eye is the device Simulator matching the physical dimensions of your actual phone on-screen and staying accurate even when you change the display density. Reminds me of one of the earliest selling points of the Mac’s 72dpi—it matches the real world: 72 points to an inch.

The MacBook Neo is where Wichary applies the principle and finds Apple falling short. The new model has two USB-C ports with different speeds, and macOS notifies you with text:

I think this is nice! But it’s also just words. It feels a bit cheap. macOS knows exactly where the ports are, and could have thrown a little warning in the lower left corner of the screen, complete with an onscreen animation of swapping the plug to the other port – similar to what “double clicking to pay” does, so you wouldn’t have to look to the side to locate the socket first.

Close-up of a MacBook Touch Bar displaying "Unlock with Touch ID →" above the minus, plus, equals, and delete keys.

Software proprioception

A blog about software craft and quality

unsung.aresluna.org

Buzz Usborne on what happens when AI takes on more responsibility in a product:

AI doesn’t simply make products smarter — it redistributes thinking and decision-making between humans and machines. When AI absorbs cognition, it also inherits responsibility. And when it inherits responsibility, the cost of its mistakes rises.

Usborne frames this through three forces that determine whether AI features survive or fail: trust, value perception, and cognitive effort. They amplify each other. Low trust increases perceived effort. High effort reduces perceived value. Low value further undermines trust.

His answer is to earn autonomy through interaction, not demand trust upfront:

Trust does not always need to precede adoption, it can emerge through usage. Salesforce’s findings show that “Human validation of outputs is the biggest driver in trusting the outcome, over consistently accurate outputs.” In other words, users trust systems they can interrogate, shape, and verify. And instead of designing AI products that are perfect, we can earn trust by designing experiences that are controllable.

Controllable over perfect.

Circular diagram with purple arrows showing a cycle: trust leads to value perception, which leads to effort/cognitive load, which feeds back to trust.

Designing AI Experiences People Actually Use

AI doesn’t just add intelligence — it redistributes it. Here’s how that shift can make or break a product.

buzzusborne.com

Most product teams adding AI start by building a new surface for it. A custom panel. A chat sidebar. A dedicated AI workspace. Alexandra Vasquez, writing for Bootcamp, describes her team making exactly that mistake:

We built a custom AI panel with its own navigation, input styles, and button treatments. It looked “futuristic” in the prototype. In user testing, people kept asking where things were and how to get back to their actual work. We had created a separate product inside our product.

The fix was simple: they deleted the panel and put agent actions in the same menus, modals, and toolbars people already used. Slack does this with its /command structure. Notion uses the same slash menu for manual and AI actions. The pattern is existing UI that happens to be smarter.
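One way to picture "existing UI that happens to be smarter" is a single command registry where AI-backed actions are just entries alongside manual ones, invoked through the same menu. A hedged sketch; the shape and all names are mine, not from Vasquez's article or Slack's or Notion's implementations:

```typescript
// One registry serves both manual and AI-backed slash commands, so the
// agent gets no separate surface. "kind" is metadata only; the
// invocation path is identical. All names here are illustrative.
type Command = {
  name: string;
  kind: "manual" | "ai";
  run: (input: string) => string;
};

const registry = new Map<string, Command>();

function register(cmd: Command): void {
  registry.set(cmd.name, cmd);
}

function invoke(name: string, input: string): string {
  const cmd = registry.get(name);
  if (!cmd) throw new Error(`unknown command: /${name}`);
  return cmd.run(input); // same code path whether kind is "manual" or "ai"
}

// A manual command and an AI command side by side in the same menu.
register({ name: "divider", kind: "manual", run: () => "---" });
register({ name: "summarize", kind: "ai", run: (text) => `summary of: ${text}` });
```

The design choice this encodes: adding an agent means adding an entry, not a panel.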

Vasquez argues most “AI failures” are actually system failures that agents expose at scale:

Designing for agents means treating information architecture and workflows as foundational. Before building an agent, audit your system’s foundations: Are labels consistent? Do hierarchies make sense? Can a new team member navigate workflows without constant help? If humans struggle, agents will fail faster and at scale. Fix the system first.

She’s right. And there’s a more radical version of this: agents don’t need human UI at all. As long as the APIs are available, an agent can complete tasks without ever touching a button or reading a screen. The interface is for the human, not the machine.

But that’s exactly the problem. If the agent bypasses the interface, the human’s ability to express intent and verify output becomes the whole game. Intent has to be crystal clear. Feedback has to be immediate and legible. And there’s a huge amount of trust to earn before anyone is comfortable letting an agent operate in the background on their behalf. Vasquez lands here too:

The AI model is the last thing we discuss, not the first. These are product decisions, and designers have outsized influence here.

The model is the least interesting part. The interesting part is designing the trust.

Humorous UI dialog titled "Applying AI changes" with three checked items—"Making water wet," "Raising dog cuteness," and "Burning fire hotter"—and a progress bar showing "Processing..."

Agentic UX: 7 principles for designing systems with agents

Agents don’t need their own screen, they need better systems to operate in

medium.com

If you’re a designer who feels the ground shifting but doesn’t know where to step, Erika Flowers built a free, structured curriculum for exactly that moment. Zero-Vector Design is her framework for collapsing the handoff between design and engineering, using AI agents as crew rather than replacements. The distinction she draws between this and vibe coding is worth internalizing:

You bring the systems thinking, the architecture, the years of knowing what good looks like. The AI extends your reach, not your judgment. Speed without intention is just faster failure. Speed with intention is leverage.

Six levels, 60+ lessons, all free. Worth bookmarking.

Zero-Vector Design brand card on dark background with tagline "From intent to artifact, directly." and website zerovector.design

Zero-Vector Design

A design philosophy for the age of AI. No intermediary. No translation layer. No friction. From intent to artifact, directly.

zerovector.design

Three people at three different companies, same conclusion. Former Apple designer Jason Yuan calls intelligence “the new materiality” in the previously linked Fast Company piece. Brian Lovin says Notion’s design team can’t design AI products in Figma because the material doesn’t live there. Jenny Blackburn, Google’s VP of UX for Gemini, puts it most directly.

Eli Woolery and Aarron Walter, writing for Design Better, synthesized interviews they’ve done with Google design leaders across YouTube, Search, and Gemini. Blackburn’s framing:

The model is the material that we are designing with, and the more you understand the material, the more you can innovate with it.

You can only direct as well as you understand. But this material behaves unlike anything designers have worked with before. Blackburn on the risk of over-constraining it:

One of the challenges is that these models are so capable. In many ways, they’re actually more capable than you even expect as a designer, and so the risk is that you actually add too much UI that limits the value that the model can provide that would come if you just facilitated a direct conversation between the user and the model.

The Gemini team’s response is smart. When users wrote too-short prompts for custom Gems, they didn’t add a tutorial. They added a “magic wand” that expands the prompt but doesn’t submit it. The user reviews, edits, learns. Teaching without lecturing.

Every previous design material—pixels, paper, aluminum—is deterministic. You shape it, it stays shaped. AI models are probabilistic. Same prompt, different results. Understanding this material isn’t like understanding clay. It’s like understanding weather.

The piece also covers YouTube’s disciplined “bundles” strategy and Search’s AI reimagining. Worth the full read.

Illustrated map of scattered islands in a blue ocean, each hosting different ecosystems and creatures including dinosaurs, large mammals, birds, and desert cacti.

The Roundup (in depth): Google’s 3 design strategies shaping their most popular products

We go deep into YouTube, Gemini, and Search design strategy

designbetterpodcast.com

Every design system is an exercise in compression. You take contextual reasoning—why this spacing, why this type scale—and flatten it into tokens and components that can ship without the backstory.

Mark Anthony Cianfrani:

the reason that your line height is set to 1.1 is because your application is, or was at one point, very data-intensive and thus you needed to optimize for information density. Because one time someone complained about not being able to see a very important row in a table and that mistake cost so much money that you were hired to redesign the whole system. But that’s a mouthful. You can’t throw that over the wall. An engineer can’t implement that. So we make little boxes with all batteries included.

All of that reasoning gets flattened into line-height: 1.1. The token ships. The reasoning doesn’t. Every design system makes this trade-off: you lose the why to gain portability.

Cianfrani argues we don’t have to accept that trade-off anymore:

LLMs give us the ability to ship our exact train of thought, uncompressed, a little bit lossy but still significantly useful. Full context that is instantly digestible. Instead of shipping <Boxes>, ship a factory.

Design systems were never the end goal. They were the best compression format we had. Components and tokens became the shipping containers because the full reasoning was too unwieldy to hand off. That constraint is loosening. In spec-driven development, that factory looks like a structured document: design intent expressed in plain language that AI agents build against directly. The spec is the reasoning, uncompressed.
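One way to picture shipping the reasoning with the token: make the rationale a first-class field, so an agent (or a new teammate) can query the why alongside the value. A speculative sketch; the shape is mine, not Cianfrani's, and the rationale text paraphrases his example:

```typescript
// A design token that carries its rationale instead of discarding it.
// Classic pipelines emit only `value`; here the uncompressed "why"
// travels with the token, available to any agent building against it.
type Token = {
  name: string;
  value: string;
  rationale: string; // the reasoning that compression normally drops
};

const lineHeightBase: Token = {
  name: "line-height-base",
  value: "1.1",
  rationale:
    "Optimized for information density: the application was data-intensive, " +
    "and a missed table row once cost enough money to trigger a redesign.",
};

// Legacy consumers still get plain CSS; the rationale stays queryable.
function toCss(token: Token): string {
  return `--${token.name}: ${token.value};`;
}
```

The CSS output is unchanged; what's new is that the spec-driven side of the system never had to throw the reasoning away.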

Even if the AI bet doesn’t pay off:

And if this whole AI thing turns out to burst, at least you’ve improved the one skill that some of the best designers I’ve ever worked with had in common—the ability to communicate their design decisions into words.

The compression problem was always worth solving, with or without LLMs.

Pale cream background with four small colored squares—teal, burgundy, orange-red, and mustard—aligned along the bottom-right edge.

Designing in English

Components are dead. Use your words.

cianfrani.dev

The transparency question in autonomous interfaces—what to surface, what to simplify, what to explain—needs a concrete framework. Daniel Ruston offers one.

Ruston names the next layer: the Orchestrated User Interface, where the user states intent and the system generates the right interface and executes across multiple agents. The label is less interesting than what it demands from designers:

We can no longer design rigid for “Happy Paths.” We must design for Probabilistic UX. The designer’s job is no longer drawing the buttons; the designer’s job is defining the thresholds for when the button “presses itself” or when the system needs user to clarify, correct or control.

Ruston makes this concrete with a confidence-threshold pattern:

Low Confidence (<60%): The system asks the user for clarification or provides a vague response requiring follow-up (“Which Jane do you want me to schedule with?”). Medium Confidence (60–90%): The system makes a tentative suggestion (“Shall I draft a reply based on your last meeting?”). High Confidence (>90%): The system acts and informs (“I’ve blocked this time on your calendar to prevent conflicts”).

That’s the design lever most AI products skip. They either act without explaining or ask permission for everything. The threshold gives designers something to actually spec: not “should the system do this?” but “how sure does it need to be before it does this without asking?”
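Ruston's bands translate almost directly into a dispatch function, which is what makes them specifiable. A sketch using his cutoffs (the 60% and 90% thresholds are from the article; the function shape and names are mine):

```typescript
type Action = "clarify" | "suggest" | "act";

// Map Ruston's confidence bands to system behavior:
// <60% ask for clarification, 60–90% make a tentative suggestion,
// >90% act and inform the user afterward.
function dispatch(confidence: number): Action {
  if (confidence < 0.6) return "clarify"; // "Which Jane do you want me to schedule with?"
  if (confidence <= 0.9) return "suggest"; // "Shall I draft a reply?"
  return "act"; // "I've blocked this time on your calendar"
}
```

The thresholds themselves are the designable surface: a cautious product might move the "act" line to 0.95, a low-stakes one might drop it to 0.8.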

Ruston borrows a metaphor from aviation to describe what this visibility should look like:

Analogue cockpits require pilots to look at individual gauges and mentally build a picture of the aircraft’s “system” state. The glass cockpit philosophy shifts the focus to a human-centered design that processes and integrates this data into an intuitive, graphical “picture” of flight.

Same problem, different domain. Most AI products today are analogue cockpits: individual agent outputs, raw status messages, no integrated picture. The confidence thresholds tell the system when to act. The glass cockpit tells the user what’s happening while it acts.

Colorful illustration of a laptop surrounded by keyboards, chat bubbles, sliders, graphs and emoji, connected by flowing ribbons.

The rise of the Orchestrated User Interface (OUI)

Designing for intent in a brave new world.

uxdesign.cc

The shift from mockups to code is one thing. The shift from designing tools to designing autonomous behavior is another. Sergio Ortega proposes expanding Human-Computer Interaction into Human-Machine Interaction. The label is less interesting than what it points at.

The part that matters for working designers is the transparency problem:

This is where design must decide what to show, what to simplify, and what to explain. Absolute transparency is unfeasible, total opacity should be unacceptable. In short, designing for autonomous systems means finding a balance between technological complexity and human trust.

When a system makes decisions the user didn’t ask for, someone has to decide what gets surfaced. Ortega:

The focus does not abandon user experience, but expands toward system behavior and its influence on human and organizational decisions. Design is no longer only about defining how technology is used, but about establishing the limits of its behavior.

And the implication for design teams:

When the machine acts, design becomes a mechanism of continuous balance.

Brass steampunk robot typing on a gear-driven computer in a cluttered workshop while a goggled inventor watches nearby

Human-Machine Interaction: the evolution of design and user experience

Human-Machine Interaction expands the traditional Human-Computer Interaction framework. An analysis of how autonomous systems and acting technologies are reshaping design and user experience.

sortega.com

The pitch for generative UI is simple: stop making users navigate menus and let them say what they want. Every AI product demo shows the same thing: type a prompt, get a result, skip the 47-click workflow. It looks like progress.

Jakob Nielsen names what gets lost in the trade:

However, eliminating the Navigation Tax imposes a new Articulation Tax. In a menu-driven GUI, features are visible and therefore discoverable; a user can find a tool they didn’t know existed simply by browsing. In an intent-based AI interface, the user can only access what they can clearly describe.

“Articulation Tax” is the right frame. Menus are clunky, but they show you what’s possible. A blank prompt field assumes you already know what to ask for. That’s fine for power users. It’s a problem for everyone else. Nielsen:

The shift from WIMP to World Models represents a transition from Deterministic to Probabilistic interaction. In a WIMP interface, clicking an icon is deterministic: it produces the exact same result 100% of the time. In a generative world model, the system is probabilistic: the same prompt may yield different results on different attempts.

Deterministic to probabilistic is a trust problem. Users learned to trust GUIs because the same action always produced the same result. That contract is gone. Users will adjust eventually, but most aren’t there yet.

Comic-style History of the GUI showing Xerox Alto, Macintosh, windows/icons, mouse, touch phone, and holographic globe.

History of the Graphical User Interface: The Rise (and Fall?) of WIMP Design

Summary: The GUI’s success wasn’t about any single invention, but a synergy of 4 elements: Window, Icon, Menu, and Pointer, through a 60-year history of usability improvements.

jakobnielsenphd.substack.com

The design process isn’t dead. It’s changing. My belief is that the high-level steps are exactly the same, but where designers spend their time is being redistributed.

Jenny Wen, head of design for Claude at Anthropic (formerly at Figma), on Lenny’s Podcast:

This design process that designers have been taught, we sort of treat it as gospel. That’s basically dead. I think it was sort of dying before the age of AI, but given now that engineers can go off and spin off their seven Claudes, I think as designers, we really have to let go of that process.

It’s a strong headline. But Wen then describes her actual day-to-day, and it sounds familiar:

We are still prototyping stuff. I’m still mocking stuff up. I think it’s just I have a wider set of tools now, and I think the proportion of time I spend doing each thing just has changed.

So the process isn’t dead. The proportions shifted. Wen breaks it down:

A few years ago, 60 to 70% of it was mocking and prototyping, but now I feel the mocking up part of it is 30 to 40%. And then there’s that other 30 to 40% there that is now jamming and pairing directly with engineers. And then there’s a slice of it that is now implementation as well.

What’s missing from that breakdown is user research and discovery. Wen mentions having a researcher on the team, mentions reading studies and feedback, but those activities don’t factor into the breakdown at all. For a team building products where, by Wen’s own admission, “you can’t mock up all the states” and “you actually discover use cases as you see people using them,” you’d think research would be eating a larger share of the pie, not disappearing from the conversation entirely. In my day-to-day, the designers on my team spend 30–40% on discovery and flows. Maybe 40–50% on mockups and prototypes. We’re basically already at her breakdown.

There’s also a context problem. Wen’s “ship fast, iterate publicly, build trust through speed” approach makes sense for Anthropic. They’re building greenfield AI products where nobody knows the right interaction patterns yet. The models are non-deterministic. Labeling something a “research preview” and iterating in public is the right call when the design space is that undefined.

That approach gets harder with a product that has an established install base. When you're updating features that millions of people depend on, "ship it and iterate" has real costs. Sonos learned this. So did Figma, when it shipped UI3 to a mission-critical tool and designers revolted. Worse still is an essential service like a CRM or operational software. The slow, unglamorous work of discovery and user testing exists because breaking what already works is expensive. Wen has the advantage of building greenfield — there's no install base to protect. Not every team has that luxury.

The interview gets more interesting when Wen turns to hiring. She describes three archetypes: the “block-shaped” strong generalist who’s 80th percentile across multiple skills, the deep T-shaped specialist who’s in the top 10% of their area, and then a third she says the industry is overlooking:

My last one is probably the one that I think we’re all overlooking, which is what I call the crack new grad. It’s just somebody who’s early career and feels, like, wise and experienced beyond their years, but is also just very humble and very eager to learn. I think this person is really interesting right now because I think most companies are just hiring senior talent, folks that have done things before, are super experienced, but given how much the roles are changing and what we’re expected to do is changing, I think having somebody who almost has a blank slate, and is just a really quick learner and is really eager to learn new tactics and stuff like that, and doesn’t have all these baked in processes and rituals in their mind, that’s super valuable.

Wen’s “crack new grad” maps closely to the strategies I wrote for entry-level designers: build things, get comfortable with AI tools, be what Josh Silverman calls the “dangerous generalist.” Someone without baked-in rituals who learns fast and ships. That a design leader at a frontier lab is actively looking for this profile matters, because most of the industry is still filtering for ten years of experience.

The design process is dead. Here’s what’s replacing it. | Jenny Wen (head of design at Claude)

Jenny Wen leads design for Claude at Anthropic. Prior to this, she was Director of Design at Figma, where she led the teams behind FigJam and Slides. Before that, she was a designer at Dropbox, Square, and Shopify.

youtube.com

Geoffrey Huntley makes a claim that should bother every designer. He’s listing what isn’t a moat in the AI era:

Any product features or platforms that were designed for humans. I know that’s going to sound really wild, but understand these days I go window-shopping on SaaS companies’ websites for product features, rip a screenshot into Claude Code, and it rebuilds that product feature/platform. As we enter the era of hyper-personalised software, I think this will be the case more and more. In my latest creation, I have cloned Posthog, Jira, Pipedrive, and Calendly, and the list just keeps on growing because I want to build a hyper-personalised business that meets all my needs, with full control and everything first-party.

“Features designed for humans” aren’t a moat. Not because design doesn’t matter—because the implementation can be cloned from a screenshot. Huntley himself rebuilt versions of Posthog, Jira, Pipedrive, and Calendly.

Huntley invented the Ralph loop—a technique for running AI coding agents in continuous loops that ship production software at a fraction of the old cost. He’s been tracking the economic fallout for a year:

The cost of software development is $10.42 an hour, which is less than minimum wage and a burger flipper at macca’s gets paid more than that. What does it mean to be a software developer when everyone in the world can develop software? Just two nights ago, I was at a Cursor meetup, and nearly everyone in the room was not a software developer, showing off their latest and greatest creations.

Well, they just became software developers because Cursor enabled them to become one. You see, the knowledge and skill of being a software developer has been commoditised.

Swap “software developer” for “designer.” Anton Sten rebuilt his website and invoicing system without writing code. Édouard Wautier’s team skips Figma after the initial sketch and prototypes directly in code. The commoditization Huntley describes is already arriving for design:

AI erases traditional developer identities—backend, frontend, Ruby, or Node.js. Anyone can now perform these roles, creating emotional challenges for specialists with decades of experience.

“UI designer,” “UX designer,” “interaction designer”—these specializations made sense when each required distinct tools and workflows. When an AI agent can handle the execution across all three, the labels stop carrying weight.

So if the implementation layer isn’t the moat, what is? Huntley’s answer for business is distribution, utility pricing, and operating model-first. The design answer is adjacent: knowing what to build and what to leave out. Taste. Judgment. The ability to look at what Claude generated from a screenshot and know it’s solving the wrong problem.

Dark shipping container with painted pink roses on its closed doors, standing in heavy rain with puddles.

Software development now costs less than the wage of a minimum-wage worker

Hey folks, the last year I’ve been pondering about this and doing game theory around the discovery of Ralph, how good the models are getting and how that’s going to intersect with society. What follows is a cold, stark write-up of how I think it’s going to go down. And

ghuntley.com

“People are change averse,” Duolingo’s CEO Luis von Ahn said when users revolted against the app’s 2022 redesign. He refused to offer a revert option. The backlash was just resistance to change, and users would get over it, he argued.

Dora Czerna, writing for UX Collective, makes the case that von Ahn got it wrong. Users weren’t afraid of change. They’d lost something:

That old interface isn’t just a collection of buttons and menus–it’s ours. We’ve invested time learning it, built workflows around it, developed preferences and shortcuts. The new design might be objectively superior in controlled testing, but it requires us to surrender something we’ve claimed as our own.

That’s the endowment effect applied to software. The hours you spent learning an interface have real value, and a redesign zeroes them out. Calling that “change aversion” dismisses the investment.

Czerna points to Sonos as the worst-case scenario—users who’d spent thousands on home audio systems suddenly couldn’t adjust the volume after an app update. But even smaller changes trigger the same psychology. Google changed its crop tool from square corners to rounded ones and got enough backlash to reverse it.

Czerna on what happens when you tell users the new version tested better:

Telling users “we tested this, and it’s better” when they’re actively experiencing it as worse creates a disconnect. Acknowledging that change is difficult, explaining what you’re trying to achieve, and being responsive to legitimate concerns about lost functionality builds more goodwill than insisting everything is fine when it clearly isn’t.

What’s less common is teams treating the transition itself as a design problem worth solving. And of course it is.

Vintage Mac displays "OLD INTERFACE - OUTDATED" beside a tablet with a colorful "NEW UPDATE!" dialog; support tickets and charts on the desk.

Why your brain rebels against redesigns — even good ones

The redesign tested well. Users hate it anyway. Welcome to the paradox that costs companies millions and leaves everyone baffled.

uxdesign.cc iconuxdesign.cc

Claude skills are structured markdown files that tell Claude how to handle a specific type of task. It is—as the name suggests—a new skill Claude or any AI agent can “learn.” Each one defines a role for Claude to adopt, the inputs it needs, a step-by-step workflow, and a quality bar for the output. You can build them for anything—research synthesis, writing, code review, design critique. Once loaded, Claude follows the workflow instead of improvising.

Nick Babich, writing for UX Planet, put together 10 skills aimed at product designers. The three I’d reach for first are the UX Heuristic Review, the Design Critique Partner, and the Competitor Analysis Generator. All three give a solo designer a structured second opinion on demand: a heuristic eval against Nielsen’s 10, a senior-level design critique, or a competitive feature matrix.

Babich’s skill format is clean and worth studying even if you end up building your own from scratch. (Hint: or use Claude Code to write its own skills.)
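To give a sense of the shape, here’s a minimal skill file in the spirit of the format described above. The frontmatter fields and section headings are illustrative, not Babich’s exact template:

```markdown
---
name: ux-heuristic-review
description: Evaluate a screen or flow against Nielsen's 10 usability heuristics.
---

# Role
You are a senior UX reviewer giving a structured second opinion.

# Inputs
- A screenshot, prototype link, or written description of the flow
- The primary task the user is trying to complete

# Workflow
1. Walk through each of Nielsen's 10 heuristics against the flow.
2. For each violation, note severity (low/medium/high) and the affected element.
3. Suggest one concrete fix per finding.

# Quality bar
Every finding cites a specific heuristic and a specific UI element. No generic advice.
```

The point isn’t the exact fields; it’s that the role, inputs, workflow, and quality bar are written down once, so Claude follows them instead of improvising.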

Stylized black profile with hand-on-chin and white neuron-like network inside the head on terracotta background

Top 10 Claude Skills You Should Try in Product Design

Claude, Anthropic’s AI assistant, has become one of the most versatile tools in a product designer’s toolkit, capable of far more than…

uxplanet.org iconuxplanet.org

Boris Cherny, head of Claude Code at Anthropic, on Lenny’s Podcast:

I think at this point it’s safe to say that coding is largely solved. At least for the kind of programming that I do, it’s just a solved problem because Claude can do it. And so now we’re starting to think about what’s next, what’s beyond this. Claude is starting to come up with ideas. It’s looking through feedback. It’s looking at bug reports. It’s looking at telemetry for bug fixes and things to ship—a little more like a co-worker or something like that.

“Largely solved” is a big claim from the person running the tool that’s solving it. And then he goes further—Claude is starting to decide what to build. That’s product management work.

Cherny on what his team at Anthropic already looks like:

On the Claude Code team, everyone codes. Our product manager codes, our engineering manager codes, our designer codes, our finance guy codes, our data scientist codes.

And on where the role boundaries are heading:

There’s maybe a 50% overlap in these roles where a lot of people are actually just doing the same thing and some people have specialties. I think by the end of the year the title software engineer is going to start to go away and it’s just going to be replaced by builder. Or maybe everyone’s going to be a product manager and everyone codes.

But where does design fit in all this? A PM can define the problem, maybe even come up with a good solution. But does Cherny think that AI will be the designer?

Lenny ran polls asking engineers, PMs, and designers whether they enjoy their jobs more or less since adopting AI. Engineers and PMs: 70% said more. Designers went the other direction: only 55% said they were enjoying their jobs more, and 18%—nearly twice as many as engineers—said they were enjoying their jobs less.

Cherny’s reaction:

Our designers largely code. So I think for them this is something that they have enjoyed because they can unblock themselves.

That’s an engineer’s answer to a design question. Designers at Anthropic are happy because they can ship without waiting on a developer. But “unblocking yourself” isn’t the same as “AI can do the design.” Cherny doesn’t touch the user experience, visual thinking, the spatial reasoning.

My theory: Designers are visual people. Typing to design doesn’t really compute. And who can blame us?

Head of Claude Code: What happens after coding is solved | Boris Cherny

Boris Cherny is the creator and head of Claude Code at Anthropic. What began as a simple terminal-based prototype just a year ago has transformed the role of software engineering and is increasingly transforming all professional work.

youtube.com iconyoutube.com

Victor Yocco lays out a UX research playbook for agentic AI in Smashing Magazine—autonomy taxonomy, research methods, metrics, the works. It’s one of the more practical pieces I’ve seen on designing AI that acts on behalf of users.

The autonomy framework is useful. Yocco maps four modes from passive monitoring to full autonomy, and the key insight is that trust isn’t binary:

A user might trust an agent to act autonomously for scheduling, but keep it in “suggestion mode” for financial transactions.

That tracks with how I think about designing AI features. The same user will want different levels of control depending on what’s at stake. Autonomy settings should be per-domain, not global.

On measuring whether it’s working:

For autonomous agents, we measure success by silence. If an agent executes a task and the user does not intervene or reverse the action within a set window, we count that as acceptance.

That’s a different and interesting way to think about design metrics—success as the absence of correction. Yocco pairs this with microsurveys on the undo action so you’re not just counting rollbacks but understanding why they happen.
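Silence-as-acceptance is concrete enough to compute. Here’s a hypothetical sketch of the metric; the 24-hour window and the event shapes are my assumptions, not from Yocco’s article:

```python
from datetime import datetime, timedelta

# Hypothetical "silence as acceptance" metric: an agent action counts as
# accepted if the user doesn't undo or reverse it within a review window.
WINDOW = timedelta(hours=24)  # assumed window length

def acceptance_rate(actions, undos):
    """actions: list of (action_id, timestamp); undos: {action_id: undo timestamp}."""
    if not actions:
        return 0.0
    accepted = sum(
        1 for action_id, ts in actions
        if action_id not in undos or undos[action_id] - ts > WINDOW
    )
    return accepted / len(actions)

t0 = datetime(2025, 1, 1)
actions = [("a", t0), ("b", t0), ("c", t0)]
undos = {"b": t0 + timedelta(hours=2)}  # reversed quickly, so not accepted
print(acceptance_rate(actions, undos))  # two of three accepted
```

Pairing this with the undo-moment microsurveys Yocco suggests is what turns the rollback count into an explanation.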

The cautionary section is worth flagging. Yocco introduces “agentic sludge”—where traditional dark patterns add friction to trap users, agentic sludge removes friction so users agree to things that benefit the business without thinking. Pair that with LLMs that sound authoritative even when wrong, and you have a system that can quietly optimize against the user’s interests. We’ve watched this happen before with social media. The teams that skip the research Yocco describes are the ones most likely to build it again.

Beyond Generative: The Rise Of Agentic AI And User-Centric Design — Smashing Magazine header with author photo and red cat.

Beyond Generative: The Rise Of Agentic AI And User-Centric Design — Smashing Magazine

Developing effective agentic AI requires a new research playbook. When systems plan, decide, and act on our behalf, UX moves beyond usability testing into the realm of trust, consent, and accountability. Victor Yocco outlines the research methods needed to design agentic AI systems responsibly.

smashingmagazine.com iconsmashingmagazine.com

Most people know what a molly guard is, even if they don’t know the name—it’s the plastic cover over an important button that forces you to be deliberate before you press it. Marcin Wichary flips the concept:

it’s also worth thinking of reverse molly guards: buttons that will press themselves if you don’t do anything after a while.

Think OS update dialogs that restart your machine after a countdown, or mobile setup screens that auto-advance. Wichary on why these matter:

There is no worse feeling than waking up, walking up to the machine that was supposed to work through the night, and seeing it did absolutely nothing, stupidly waiting for hours for a response to a question that didn’t even matter.

This is the kind of observation you only make after years of staring at buttons, as Wichary has.
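The pattern is simple enough to sketch for a command-line tool. The timeout length and the yes/no parsing here are my assumptions, and select-on-stdin is POSIX-only:

```python
import select
import sys

# A reverse molly guard for a CLI: ask the question, but fall through to a
# sensible default instead of stupidly waiting all night for an answer.

def decide(answer, default):
    """Interpret a typed answer; None (a timeout) falls back to the default."""
    if answer is None:
        return default
    return answer.strip().lower().startswith("y")

def confirm_with_timeout(prompt, timeout=30, default=True):
    print(f"{prompt} [auto-{'yes' if default else 'no'} in {timeout}s]", flush=True)
    ready, _, _ = select.select([sys.stdin], [], [], timeout)
    answer = sys.stdin.readline() if ready else None
    return decide(answer, default)
```

The key design choice is which default is safe to press on the user’s behalf, which is exactly the judgment call Wichary is pointing at.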

Close-up of a red rectangular guard inside a dark metal casing; caption below reads "Molly guard in reverse" and "Unsung."

Molly guard in reverse

A blog about software craft and quality

unsung.aresluna.org iconunsung.aresluna.org
Person wearing glasses typing at a computer keyboard, surrounded by flowing code and a halftone glitch effect

ASCII Me

Over the past couple months, I’ve noticed a wave of ASCII-related projects show up on my feeds. WTH is ASCII? It’s the basic set of letters, numbers, and symbols that old-school computers agreed to use for text.

ASCII (American Standard Code for Information Interchange) has 128 characters:

  • 95 printable characters: digits 0–9, uppercase A–Z, lowercase a–z, space, and common punctuation and symbols.
  • 33 control characters: non-printing codes like NUL, LF (line feed), CR (carriage return), and DEL used historically for devices like teletypes and printers.
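The 95/33 split is easy to verify in a couple of lines of Python:

```python
# ASCII printables are code points 32-126 (space through tilde);
# everything else, 0-31 plus 127 (DEL), is a control code.
printable = [chr(c) for c in range(128) if 32 <= c <= 126]
control = [c for c in range(128) if c < 32 or c == 127]

print(len(printable), len(control))  # 95 33
print(repr(printable[0]), printable[-1])  # ' ' ~
```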

Early internet users who remember plain text-only email and Usenet newsgroups would have encountered ASCII art like these:

 /\_/\
( o.o )
 > ^ <

It’s a cat. Artist unknown.

   __/\\\\\\\\\\\\\____/\\\\\\\\\\\\\_______/\\\\\\\\\\\___
    _\/\\\/////////\\\_\/\\\/////////\\\___/\\\/////////\\\_
     _\/\\\_______\/\\\_\/\\\_______\/\\\__\//\\\______\///__
      _\/\\\\\\\\\\\\\\__\/\\\\\\\\\\\\\\____\////\\\_________
       _\/\\\/////////\\\_\/\\\/////////\\\______\////\\\______
        _\/\\\_______\/\\\_\/\\\_______\/\\\_________\////\\\___
         _\/\\\_______\/\\\_\/\\\_______\/\\\__/\\\______\//\\\__
          _\/\\\\\\\\\\\\\/__\/\\\\\\\\\\\\\/__\///\\\\\\\\\\\/___
           _\/////////////____\/////////////______\///////////_____

Dimensional lettering.

Anyway, you’ve seen it before and get the gist. My guess is that with Claude Code’s halo effect, the terminal is making a comeback and renewing interest in this long-lost art form. And it’s text-based, which makes it perfect fuel for AI.

I’ve seen this at every company past a certain size: you spot a disjointed UX problem across the product, you know what needs to happen, and then you spend three months in alignment meetings trying to get six teams to agree on a button style.

A recent piece from Laura Klein at Nielsen Norman Group examines why most product teams aren’t actually empowered, despite what the org chart claims. Klein on fragmentation:

When you have dozens of empowered teams, each optimizing its own metrics and building its own features, you get a product that feels like it was designed by dozens of different companies. One team’s area uses a modal dialog for confirmations. Another team uses an inline message. A third team navigates to a new page. The buttons say Submit in one place, Save in another, and Continue in a third. The tone of the microcopy varies wildly from formal to casual.

Users don’t see teams. They don’t see component boundaries. They just see a confusing, inconsistent product that seems to have been designed by people who never talked to each other, because, in a sense, it was.

Each team was empowered to make the best decisions for their area, and it did! But nobody was empowered to maintain coherence across the whole experience.

That last line is the whole problem. “Coherence,” as Klein calls it, is a design leadership responsibility, and it gets harder as AI lets individual teams ship faster without coordinating with each other. If every squad can generate production UI in hours instead of weeks, the fragmentation described here accelerates. Design systems become the only thing standing between your product and a Frankenstein experience.

The article is also sharp on what happens to PMs inside this dysfunction:

Picture a PM who spends 70% of her time in meetings coordinating with other teams, getting buy-in for a small change, negotiating priorities, trying to align roadmaps, escalating conflicts, chasing down dependencies, and attending working groups created to solve coordination problems. She spends a tiny fraction of her time with users. The rest is spent writing documents that explain her team’s work to other teams, updating roadmaps, reporting status, and attending planning meetings. She was hired to be a strategic product thinker, but she’s become a project manager, focused entirely on logistics and coordination.

I’ve watched this happen to PMs I’ve worked with. The coordination tax eats the strategic work. Marty Cagan calls this “product management theater”—a surplus of PMs who function as overpaid project managers. If AI compresses the engineering work but the coordination overhead stays the same, that ratio gets even more lopsided.

The fix is smaller teams with real ownership and strong design systems that enforce coherence without requiring 14 alignment meetings. But that requires organizational courage most companies don’t have.

Why Most Product Teams Aren't Really Empowered' headline with three hands untangling a ball of dark-blue yarn and NN/G logo.

Why Most Product Teams Aren’t Really Empowered

Although product teams say they’re empowered, many still function as feature factories and must follow orders.

nngroup.com iconnngroup.com

My essay yesterday was about the mechanics of how product design is changing—designing in code, orchestrating AI agents, collapsing the Figma-to-production handoff. That piece got into specifics. This piece by Pavel Bukengolts, writing for UX Magazine, is about the mindset:

AI is changing the how — the tools, the workflows, the speed. But the why of UX? That’s timeless.

Bukengolts is right. UX as a discipline isn’t going anywhere. But I worry that articles like this—well-intentioned and directionally correct—give designers permission to keep doing exactly what they’re doing now. “Sharpen your critical thinking” and “be the conscience in the room” is good advice. It’s also the kind of advice that lets you nod along without changing anything about your Tuesday.

The article lists the skills designers need: critical thinking, systems thinking, AI literacy, ethical awareness, strategic communication. All valid. But none of that addresses what the actual production work looks like six months from now. Bukengolts again:

In a world where AI does the work, your value is knowing why it matters and who it affects.

I agree with this in principle. The problem is the gap between “UX matters” and “your current UX role is secure.” Those are very different statements. UX will absolutely matter in an AI-powered world—someone has to shape the experience, evaluate whether it actually works for people, catch the things the model gets wrong. But the number of people doing that work, and what the job requires of them, is changing fast. I wrote in my essay that junior designers who can’t critically assess AI-generated work will find their roles shrinking fast. The skill floor is rising. Saying “stay curious and principled” isn’t wrong, but it’s not enough.

The piece closes with reassurance:

Yes, this moment is big. Yes, you’ll need to adapt. But no, you are not obsolete.

I’d feel better about that line if the article spent more time on how to adapt—not in terms of thinking skills, but in terms of the actual work. Learn to design in code. Get comfortable directing AI agents. Understand your design system well enough to make it machine-readable. Those are the specific steps that will separate designers who thrive from designers who got the mindset right but missed the shift happening underneath them.

Black 3D letters spelling CHANGE on warm backdrop; caption reads: AI can design interfaces; humans provide empathy and ethics.

Design Smarter: Future-Proof Your UX Career in the Age of AI

Is UX still a thing? AI is rising fast, but UX isn’t disappearing. It’s evolving. The big shift isn’t just tools, it’s how we think: critical thinking to spot gaps, systems thinking to map complexity, and AI literacy to understand capabilities without pretending we build it all. Empathy and ethics become the edge: designers must ask who’s affected, what’s left out, and what unintended consequences might arise. In practice, we translate data and research into a story that matters, bridging users, business, and tech, with strategic communication that keeps everyone aligned. In an AI-powered world, human judgment, why it matters, and to whom, stays central. Stay curious, sharp, and principled.

uxmag.com iconuxmag.com

In my previous post about Google Reader, I wrote about Chris Wetherell’s original vision—a polymorphic information tool, not a feed reader. But even Google Reader ended up as a three-pane inbox. That layout didn’t originate with Reader, though. It’s older than that.

Terry Godier traces that layout to a single decision. In 2002, Brent Simmons released NetNewsWire, the first RSS reader that looked like an email client. Godier asked him why, and Simmons’ answer was pragmatic:

“I was actually thinking about Usenet, not email, but whatever. The question I asked myself then was how would I design a Usenet app for (then-new) Mac OS X in the year 2002?”

“The answer was pretty clear to me: instead of multiple windows, a single window with a sidebar, list of posts, and detail view.”

A reasonable choice in 2002. But then Godier shares Simmons reflecting on why everyone kept copying him twenty-two years later:

“But every new RSS reader ought to consider not being yet another three-paned-aggregator. There are surely millions of users who might prefer a river of news or other paradigms.”

“Why not have some fun and do something new, or at least different?”

The person who designed the original paradigm was asking, twenty-two years later, why everyone was still copying him.

Godier’s argument is that when Simmons borrowed the inbox layout, he inadvertently imported the inbox’s psychology. Unread counts. Bold text for new items. A backlog that accumulates. The visual language of social debt, applied to content nobody sent you:

When you dress a new thing in old clothes, people don’t just learn the shape. They inherit the feelings, the assumptions, the emotional weight. You can’t borrow the layout of an inbox without also borrowing some of its psychology.

He calls this “phantom obligation”—the guilt you feel for something no one asked you to do. And I’ll admit, I feel it. I open Inoreader every morning and when that number isn’t zero, some part of my brain registers it as a task. It shouldn’t. Nobody is waiting. But the interface says otherwise.

Godier’s best line is the one that frames the whole piece:

We’ve been laundering obligation. Each interface inherits legitimacy from the last, but the social contract underneath gets hollowed out.

The red dot on a game has the same visual weight as a text from your kid. We kept the weight and dropped the reason.

PHANTOM OBLIGATION — noun: The guilt you feel for something no one asked you to do.

Phantom Obligation

Why RSS readers look like email clients, and what that’s doing to us.

terrygodier.com iconterrygodier.com

Every article I share on this blog starts the same way: in my RSS reader. I use Inoreader to follow about a hundred feeds—design blogs, tech publications, and independent newsletters. Every morning I scroll through what’s new, mark what’s interesting, and the best stuff eventually becomes a link post here. It’s not a fancy workflow. It’s an RSS reader and a notes app. But it works because the format works.

This is a 2023 article, but I’m fascinated by it because Google Reader was so influential in my life. David Pierce, writing for The Verge, chronicles how Google Reader came to be and why Google killed it.

Chris Wetherell, who built the first prototype, wasn’t thinking about an RSS reader. He was thinking about a universal information layer:

“I drew a big circle on the whiteboard,” he recalls. “And I said, ‘This is information.’ And then I drew spokes off of it, saying, ‘These are videos. This is news. This is this and that.’” He told the iGoogle team that the future of information might be to turn everything into a feed and build a way to aggregate those feeds.

Jason Shellen, the product manager, saw the same thing:

“We were trying to avoid saying ‘feed reader,’” Shellen says, “or reading at all. Because I think we built a social product.”

Google couldn’t see it. Reader had 30 million users, many of them daily, but that was a rounding error by Google standards. Pierce captures the absurdity well:

Almost nothing ever hits Google scale, which is why Google kills almost everything.

So Google poured its resources into Google Plus instead. That product was dead within months of launch. Reader, the thing they killed to make room for it, had been a working social network the whole time. Jenna Bilotta, a designer on the team:

“They could have taken the resources that were allocated for Google Plus, invested them in Reader, and turned Reader into the amazing social network that it was starting to be.”

What gets me is that the vision Wetherell drew on that whiteboard—a single place to follow everything you care about, organized by your taste, shared with people you trust, and non-algorithmic—still doesn’t fully exist. RSS readers are the closest thing we have, and they’re good enough that I’ve built my entire reading and writing practice around one. But the curation layer Wetherell imagined is still unfinished.

Framed memorial reading IN LOVING MEMORY (2005–2013) with three colorful app icons, lit candles and white roses.

Who killed Google Reader?

Google Reader was supposed to be much more than a tool for nerds. But it never got the chance.

theverge.com icontheverge.com

Many designers I’ve worked with want to get to screens as fast as possible. Open Figma, start laying things out, figure out the structure as they go. It works often enough that nobody questions it. But Daniel Rosenberg makes a case for why it shouldn’t be the default.

Rosenberg, writing for the Interaction Design Foundation, argues that the conceptual model—the objects users manipulate, the actions they perform, and the attributes they change—should be designed before anyone touches a screen:

Even before you sketch your first screen it is beneficial to develop a designer’s conceptual model and use it as the baseline for guiding all future interaction design decisions.

Rosenberg maps this to natural language. Objects are nouns. Actions are verbs. Attributes are adjectives. The way these elements relate to each other is the grammar of your interface. Get the grammar wrong and no amount of visual polish will save you.

His example is painfully simple. A tax e-sign system asked him to “ENTER a PIN” when he’d never used the system before. There was no PIN to enter. The action should have been “CREATE.” One wrong verb and a UX expert with 40 years of experience couldn’t complete the task. His accountant confirmed that dozens of clients had called thinking the system was broken.

Rosenberg on why this cascades:

A suboptimal decision on any lower layer will cascade through all the layers above. This is why designing the conceptual model grammar with the lowest cognitive complexity at the very start… is so powerful.

This is the part I want my team to internalize. When you jump straight to screens, you’re making grammar decisions implicitly—choosing verbs for buttons, deciding which objects to surface, grouping attributes in panels. You’re doing conceptual modeling whether you know it or not. The question is whether you’re doing it deliberately.
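One hypothetical way to make that grammar explicit: write the nouns, verbs, and adjectives down as data before any screens exist. The class and field names here are mine, not Rosenberg’s:

```python
from dataclasses import dataclass, field

# A conceptual-model object: a noun, its adjectives, and its verbs,
# decided deliberately rather than implicitly via button labels.
@dataclass
class ConceptualObject:
    noun: str
    attributes: list[str] = field(default_factory=list)  # adjectives
    actions: list[str] = field(default_factory=list)     # verbs

# Rosenberg's tax e-sign example: for a first-time user the right verb
# is "create", not "enter". That decision belongs here, not in the UI.
pin = ConceptualObject(
    noun="PIN",
    attributes=["value"],
    actions=["create", "enter"],
)
print(pin.actions[0])  # create
```

Whether or not you’d literally write code for this, the exercise forces the verb choice out into the open before a screen makes it for you.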

Article title "The MAGIC of Semantic Interaction Design" with small "Article" label and Interaction Design Foundation logo at bottom left.

The MAGIC of Semantic Interaction Design

Blame the user: me, a UX expert with more than 40 years of experience, who has designed more than 100 successful commercial products and evaluated the inadequate designs of nearly 1,000 more.

interaction-design.org iconinteraction-design.org