
273 posts tagged with “product design”

Tommaso Nervegna writes about LinkedIn killing its Associate Product Manager program and replacing it with a new role called the “Full Stack Builder.” The structural bet is interesting, but the finding from their rollout is what matters:

The expectation was that AI would be a great equalizer: juniors would benefit most because AI would close their skill gaps, while seniors would resist the change. The reality was the opposite. Top performers adopted AI fastest and derived the most value from it. Why? Because they had the judgment and experience to know what to ask for, how to evaluate the output, and where to apply it for maximum leverage.

That tracks with everything I’ve predicted, experienced, and seen. The skill that makes AI useful is knowing what good looks like before and after the model generates something. That ability comes from reps.

Nervegna distills LinkedIn CPO Tomer Cohen’s thesis to five skills AI cannot automate:

The five skills that AI cannot automate, according to Cohen, are Vision, Empathy, Communication, Creativity, and Judgment. As he puts it: “I’m working hard to automate everything else.”

The operational version:

The critical insight: the builder orchestrates the agents. The agents execute. Judgment stays human. This is not about replacing people with AI. It’s about compressing the team needed to ship something meaningful from fifteen people to three - or even one.

I’ve been calling this the orchestrator gap: the distance between a designer who uses AI and one who directs it. LinkedIn just gave it a job title. I think we will see more companies go this way. Whether or not it’s a good idea remains to be seen.

A Renaissance-era man studies blueprint sketches on a glowing drafting table while a giant mechanical lobster draws on the plans with an ornate pen.

The Full Stack Builder: The End of the Design Process as We Know It

The double diamond is a liability. Engineers ship faster than designers can explore. The PM role is dissolving and the three profiles that will survive this era look nothing like who we’ve been hiring

nervegna.substack.com

I’ve watched design team values die in a Confluence page. The offsite happens, the Post-Its get transcribed, the principles get written up with care, and then everyone goes back to their desks and ships exactly the way they did before. I’ve seen it with product principles and brand values too. The deck gets built, implementation starts, and the deck gets forgotten.

Vitaly Friedman, writing for Smashing Magazine, on why this matters more than ever:

We often see design principles as rigid guidelines that dictate design decisions. But actually, they are an incredible tool to rally the team around a shared purpose and document the values and beliefs that an organization embodies. They align teams and inform decision-making. They also keep us afloat amidst all the hype, big assumptions, desire for faster delivery, and AI workslop.

Friedman again:

In times when we can generate any passable design and code within minutes, we need to decide better what’s worth designing and building — and what values we want our products to embody. It’s similar to voice and tone. You might not design it intentionally, but then end users will define it for you. And so, without principles, many company initiatives are random, sporadic, ad-hoc — and feel vague, inconsistent, or simply dull to the outside world.

You might not write principles intentionally, but your product will have them anyway. The question is whether you chose them or inherited them by default.

Friedman closes with the part most teams skip:

Creating principles is only a small portion of the work; most work is about effectively sharing and embedding them. It’s difficult to get anywhere without finding ways to make design principles a default — by revisiting settings, templates, naming conventions, and output. Principles help avoid endless discussions that often stem from personal preferences or taste. But design should not be a matter of taste; it must be guided by our goals and values.

Creating principles feels productive. But alignment without embedding is a Confluence page nobody opens twice. Principles have to show up in the Figma component library, the ticket template, the review rubric. They have to be repeated so that they are ingrained. They have to become the path of least resistance.

Smashing Magazine article title card: "A Practical Guide To Design Principles" by Vitaly Friedman, tagged Design, UX, UI.

A Practical Guide To Design Principles — Smashing Magazine

Design principles with references, examples, and methods for quick look-up. Brought to you by Design Patterns For AI Interfaces, friendly video courses on UX and design patterns by Vitaly.

smashingmagazine.com

Dan Saffer applies mid-century existentialism to the question of what “meaning” actually requires of the people building digital products, and the result is unusually rigorous. His sharpest move is applying Sartre’s concept of “projects” to AI tools:

When someone uses ChatGPT to write an essay, the Sartrean question is: whose project is this really? If the user is exploring ideas and using the tool as a thinking partner, they’re taking it up into their own meaning-making project. But if they’re pasting in a prompt and submitting the output unchanged, the system has effectively become the meaning-maker, and the user has become a delivery mechanism. The same tool can function either way. The design question is which relationship the system encourages.

Saffer connects this to Camus and the problem of frictionless design:

When every friction is removed in the name of efficiency, the activity can be hollowed out. There is nothing left to push against, and meaning drains away. This is something that AI systems have become exceedingly good at. Push the sparkle button, the task is done for you, and you have learned nothing and enjoyed nothing.

The HCI/UX field spent decades optimizing for friction removal. Saffer’s argument is that some friction is where the meaning lives. Design the struggle away and you don’t help the user. You empty the experience. Not every friction should be removed.

Saffer’s closing:

This sensibility insists that users are not information processors, not customers, not eyeballs, not tapping fingers, and not data sources. They are meaning-making beings whose freedom and dignity are at stake in every interaction. It asks designers to take seriously the existential weight of what they build. The systems we design become part of the conditions of human existence, shaping what people can choose, what they can see, who they can become.

Saffer covers Sartre, Camus, Kierkegaard, Heidegger, and de Beauvoir in the full piece, each applied to contemporary design problems. It’s a lot, and it’s all good.

Collage of five black-and-white portrait photos of mid-20th century philosophers, including one woman and four men, one holding a pipe.

The Existential Designer: Facilitating Meaning Through Interaction

Designers like to talk about making meaningful products or using the tools of design to make meaning.

odannyboy.medium.com

Yours truly got quoted in Fast Company. Grace Snelling, surveying the industry reaction to Lenny Rachitsky’s TrueUp hiring data, pulled a comment I left under Rachitsky’s original Twitter post:

Designers have designed themselves out of the equation because of design systems. But, IMHO, the secret sauce has never been the UI. It was the workflows and looking across the experience holistically.

Let me expand on that. The UI has always been the easiest part of product design. Design systems made that even more true. What separates a great product from a mediocre one is understanding our users deeply enough to create experiences that actually delight them. That understanding is the work AI can’t do, and it’s the work too many teams were already skipping before any standoff started.

The data behind the standoff: Rachitsky’s analysis of TrueUp’s job market tracker shows design roles have been flat since early 2023 while PM and engineering roles surged. (Quick side note: this data is for tech startups, not the general tech industry or design industry at large.) His theory:

I don’t know exactly what’s going on here, but it does feel AI-related. […] Unlike PM and eng, which started growing in 2024 (two years post-ChatGPT), design didn’t. If I had to venture a theory, I’d say that because AI is allowing engineers to move so quickly, there’s less opportunity—and less desire—to involve the traditional design process.

Claire Vo, founder of ChatPRD, puts the harder version of why:

Often design teams & designers are the most resistant to change org in the EPD triad, with highly vocal AI opponents, and little skill or interest in the art of campaigning for influence or resources. […] If a PM or engineer can get 85% there with tailwind and a dream, you better come to the table with more than ‘I represent the user.’

“I represent the user” was never enough on its own. It just went unchallenged when designers were the only ones who could ship polished interfaces.

Anthropic’s chief design officer Joel Lewenstein on where the EPD triad actually lands:

I think there’s a lot of role collapse at the very beginning, but there are still pretty clear swim lanes as things get into the later stages of product development. […] It’s like a Venn diagram that’s coming closer together.

Three hands pointing toward a central point on a red background, surrounded by colorful lightning bolt shapes in green, blue, and pink.

Why are designers, engineers, and product managers in a ‘three-way standoff’?

New data has the design community in a debate about the future of their jobs.

fastcompany.com

Nate Parrott, a product designer at Anthropic, in an interview with Ryan Mather for AI Design Field Guide:

More Google Docs than you’d think. More Slack posts than you’d think. I meant what I said earlier: I think that this is the era of designers who design with words more so than designing with pixels.

Parrott describes a content design team whose job is making alien concepts legible:

We have several people at the company on the design team whose job is content design. Their job is basically to look at concepts which are very alien, and figure out how to make them legible to human beings. They don’t draw any pixels, but their work is really important because they are literally thinking about the words we use to describe and the mental models we expect people to put on that will make this stuff work.

The Figma work, Parrott says, is “the easy part.” He uses Anthropic’s design system, drops in components, and moves on. The hard work is upstream: expressing the ideas, figuring out the right language, talking to users. The production of screens has become the smallest slice of the job.

Jenny Wen described designers at Anthropic shipping code, prototyping against the live model, stretching into PM territory. Parrott is describing the same shift from a different angle. The deliverable used to be the mockup. Now the deliverable is the thinking that precedes it.

Vibrant abstract illustration of stylized flowers with glowing, blurred edges in bold red, yellow, orange, pink, and blue tones against a soft gradient background.

AI Design Field Guide

Learn techniques from the designers behind OpenAI, Anthropic, Figma, Notion & more

aidesignfieldguide.com

The first time I wrote about Jenny Wen, I pushed back. She said the design process was dead, and I argued the proportions had shifted but the process itself was intact. I also noted a context problem: her “ship fast, iterate publicly” approach makes sense for greenfield AI products at Anthropic but gets harder with established install bases.

Wen has been making the rounds, and in a new interview there’s a lot I find myself nodding along to.

Jenny Wen, speaking on Tommy Geoco’s State of Play:

Often design needs to follow what the model is capable of and design from there, as opposed to starting from a design vision first. I think that can feel tough as a designer because you’re like, oh, I want to be design-led, we should be designing it first and then the technology should follow. But I think that’s just the reality of working at a research lab where the technology is emergent and you have to sort of decide what to do with it.

“Design follows the model” is an interesting phrase from a design leader. It inverts the dogma that design should lead and engineering should follow. But Wen isn’t being defeatist. She’s describing a practical reality at a leading AI lab, where the models’ capabilities are changing faster than any roadmap can account for.

This shows up concretely in how her team works:

The big thing is designers are implementing code, through using Claude Code. That has been the biggest difference from working at Anthropic versus back when I worked at Figma. […] Even today, we were reporting some bugs and some quality issues, and one of the designers was like, “Cool, let me just fix them.” And that was cool to just not have to tag an engineer for them to do anything.

A designer casually fixing production bugs without tagging an engineer. Just another Tuesday at Anthropic.

Geoco’s summary of Wen’s argument crystallizes something we’ve all been thinking quietly about:

She said, having taste versus being able to execute are two completely different things. They’re usually bundled together, but they don’t have to be. And in a world where AI can increasingly execute, the question becomes, and it’s kind of uncomfortable, do you actually have good taste or are you just pushing pixels around?

That’s the thread tying all of this together. When designers are closer to the product, fixing bugs in production, prototyping against the live model, the judgment they’re applying isn’t visual. It’s product sense: knowing which of those 12 options is worth shipping, which edge case will break trust, when the model’s output is good enough for real users. That’s the taste Wen is describing, and it has very little to do with pixels.

A lot of designers have been coasting on execution skills that felt like taste. They debate corner radii and centering labels in a button with amateur vs pro designer memes. Who cares! AI is about to make the difference visible.

The New Era of UX Designers

Jenny Wen led design on FigJam, one of the most playful tools to hit design in a decade. Now she’s at Anthropic designing Claude. Not just the model, but the product that millions use daily.

youtube.com

Stripe design manager Kris Puckett, speaking on Michael Riddering’s Dive Club, spent the first half of the conversation demoing metal shaders, custom ocean animations, and a full iOS reading app he built with Claude Code. Then he stopped himself:

AI native has to be beyond just “I made a really cool shader” or “I made this dither effect that every other person is making.” I was doing that today and then I was like, “Oh my gosh, this is… why am I doing this? There’s a hundred of these that are way better than what I’m making right now.”

So what does AI-native design actually look like? Puckett’s answer is “soul”—the quality that makes work feel specifically, unmistakably yours:

I think what people are going to be desperate for is more of that human side of things. They’re going to be longing for […] an era they’ve never experienced because they’re younger, that MySpace generation where your MySpace page was deeply personal to you. My MySpace page was complete custom Kris Puckett perfection at that time. And I think that we’re going to want to see that come back. And I think people are going to want more of those—your portfolio looks and feels like you.

“Soul” is doing a lot of work as a concept there. What Puckett is describing sounds a lot like taste—the ability to make something that feels intentional and specific rather than procedurally generated. His workflow backs that up: contrarian as it sounds, he explicitly rejects the “let the agent run” approach:

I want off that cycle. I do not want to be riding that bike race with anyone else because that’s not how I view these things. They are a force multiplier, but I want them to be focused. I want it to be something that I feel is still authentically me.

What unlocked all of this for Puckett wasn’t technical skill—he’s a designer, not an engineer. It was admitting “I don’t know” and starting anyway. He’d been dreaming of building his own software for 20 years. Claude Code’s blinking cursor was enough to get him started.

Kris Puckett - Becoming an AI-native designer

Today’s episode is with Kris Puckett (https://x.com/krispuckett) who has led design at Mercury, Dropbox, and now as a design manager at Stripe. His journey is the perfect example of what it looks like to lean into this moment in time with AI.

youtube.com

Figma is opening its canvas as a writeable surface for AI agents. Matt Colyer, product director at Figma, on why this matters:

Design decisions—from color palettes and button padding, to typography and interactivity—have always defined how products take shape. No matter how small, those decisions add up. They make your product and user experience stand out from the rest. To date, AI agents haven’t had this context, which is why so many designs created by AI often feel unfamiliar and generic.

The fix is beefing up skills files, by encoding a team’s design decisions, conventions, and sequencing rules. Agents read them before they touch the canvas. The use_figma tool lets Claude Code, Codex, and other MCP clients create and update assets tied to your design system. Colyer on what that changes:

Your conventions are no longer static documentation. They become rules agents follow as they work—applied through components, variables, and the structure you’ve already defined.

The detail worth paying attention to is what Colyer describes as a self-healing loop. When an agent generates a screen, it screenshots the result, checks it against the design system, and iterates. Because it’s working with real components and auto layout, those corrections compound through the system itself, not just the pixels on screen.
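That generate–screenshot–check cycle can be sketched in a few lines. To be clear, this is a hypothetical illustration: every function name and data shape below is invented, not Figma’s actual API, standing in for agent actions that would run through an MCP tool like use_figma.

```python
# Hypothetical sketch of the "self-healing" loop Colyer describes.
# All names here are stand-ins for agent actions, not real Figma APIs.

def screenshot(screen):
    return screen  # stand-in: a real agent captures the rendered canvas

def check_against_system(shot, design_system):
    # Flag any style value that isn't a token from the design system.
    return [v for v in shot["styles"] if v not in design_system["tokens"]]

def apply_fixes(screen, issues):
    # Replace off-system values with a system token (here: a default).
    fixed = [s if s not in issues else "color/primary"
             for s in screen["styles"]]
    return {**screen, "styles": fixed}

def self_healing_loop(screen, design_system, max_iterations=5):
    """Screenshot, check against the system, and correct until it passes."""
    for _ in range(max_iterations):
        issues = check_against_system(screenshot(screen), design_system)
        if not issues:
            break  # compliant with the design system: done
        screen = apply_fixes(screen, issues)
    return screen
```

The point of the loop, per Colyer, is that because it works against real components and tokens rather than raw pixels, each correction nudges the file back onto the system instead of just patching one screen.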

It’s free during beta, with plans to move to a paid API. Figma is finally joining the party; Subframe, Paper, and Pencil already offer this workflow.

Terminal window titled "earthling — zsh" showing an AI prompt to build a component set from a button.tsx file, with output confirming 72 button variants created, overlaid on a Figma canvas with UI components.

Agents, Meet the Figma Canvas

Starting today, you can use AI agents to design directly on the Figma canvas. And with skills, you can guide agents with context about your team’s decisions and intent.

figma.com

Gui Seiz designs at Figma. His team uses Claude Code to bridge design and code. And he still reaches for the canvas when precision matters.

Seiz, speaking on Claire Vo’s How I AI podcast:

I don’t think we’re there yet in general with these code tools in terms of the precision editing that you want to do. […] I think still the gold standard for me is just being able to drag stuff around. And you can do a lot with a click that would take you a hundred words to write and to really precisely nail. No one wants to prompt for the exact hex code or the shade of yellow and that kind of stuff. That’s just easier to just quickly do and directly manipulate.

Seiz isn’t anti-AI. His team pulls production code into Figma via MCP, edits it visually, and pushes it back to the codebase. He’s bullish on what that does to the old workflow:

It’s definitely changed our workflows in a way that it’s really blown up what a workflow even is. Before, for the majority of our careers, we’ve had a very linear, agreed-upon workflow where you increase fidelity as you go on. Because it’s really expensive to work in code, and it’s really cheap just to trade ideas and sketch them out. But AI basically collapsed that, and it’s just as cheap to riff in code as it is to riff in design.

The cost of exploration collapsed. The need for direct manipulation didn’t. Both can be true.

How Figma engineers sync designs with Claude Code and Codex

Most teams are still passing static design files back and forth, and most Figma files are already out of date by the time they reach engineering. Gui Seiz (designer) and Alex Kern (engineer) from Figma walk through the exact workflow their team uses to bridge that gap with AI, live onscreen. They…

youtube.com

I published an article about the design talent crisis in Fast Company! The setup is ground I’ve covered extensively on this blog. But I draw a connection to the trades: the construction industry has a solution the design industry could learn from.

In the article, I write:

Construction has been running formal apprenticeship programs since the National Apprenticeship Act of 1937, and informally for centuries before that. The Department of Labor’s Registered Apprenticeship Programs enrolled roughly 940,000 people nationwide in fiscal year 2024. These aren’t casual internships. They’re structured, multi-year pathways that pair inexperienced workers with seasoned professionals and build skills through graduated responsibility. The retention numbers tell you everything: Apprenticeship programs report a 93% employee retention rate. For every $100 employers invest, they see an estimated $144 return.

The contractors I work with don’t debate whether to invest in their pipeline during a downturn. They know that if they stop training apprentices, they won’t have journeymen in four years, and they won’t have master tradespeople in 10. The pipeline is the business.

There’s a three-point plan to dig us out of this hole. But of course, it requires commitments from design leaders and the C-suite:

  1. Stop tying junior hiring to project demand
  2. Formalize mentorship
  3. Accept the short-term cost

There is more to the article. Please take a read and share!

Smiling woman with short hair and round glasses looking down at a tablet, wearing a floral patterned blouse, with FC Executive Board branding.

Hire junior designers today or risk a broken pipeline

The tech industry keeps telling itself the pipeline will refill on its own. Construction figured out a century ago why that thinking is wrong.

fastcompany.com

Forty-four UI panels generated in ten minutes, each one grounded in real customer research. Jason Cyr, writing for The Human in the Loop, on what happened when his team pointed Claude Code at Cisco’s design system:

Last week, one of my design directors pointed Claude Code at Magnetic and asked it to build a security detection prototype. Real components, real navigation, theme switching, working admin panels — running in ten minutes. Then he connected it to our research repository and it built 44 detection detail panels, every design decision tracing back to something a real customer said. That happened because the AI had access to our design system.

Cyr’s takeaway: the design system was the design review.

Your design system is your leverage. It’s how your taste scales. The teams that invest here will see their design decisions show up in every agent-generated output, automatically. The teams that don’t will spend all their time cleaning up messes that a good system would have prevented.

Monday.com arrived at the same conclusion from the engineering side. They built a design-system MCP after their agents kept hardcoding colors and ignoring typography tokens.

Cyr doesn’t shy away from who this leaves behind, either: designers whose value lives entirely in production. “Not because they’re bad at their jobs — but because AI just got very good at theirs.”

Title card reading "Design Teams in the Agentic Era" with the subtitle "A manifesto for what comes next." on a dark background.

Design Teams in the Agentic Era

My thoughts on what comes next

jasoncyr.substack.com

David Hoang, writing for Proof of Concept, proposes a squad model for tackling a company’s hardest, most ambiguous problems:

The squad: a forward deployed engineer, a forward deployed designer, and a researcher. Three people. That’s it. They operate like a startup-within-the-company, deployed against a specific, ambiguous problem. […] This is a product discovery team with teeth — they don’t just produce insights and hand them off. They produce working prototypes and validated direction. […] Three people don’t need standups, retros, or Jira boards. They need a shared problem and a whiteboard.

No PM. The shared problem replaces the roadmap, and a researcher replaces the product manager. Hoang borrows the concept from Palantir’s Forward Deployed Engineers and extends it to design. His argument: AI tools have given designers enough technical leverage to prototype at engineering speed, so the designer who finds the problem can build the first cut of the solution.

A three-person team with AI tools in 2026 can cover the ground that used to require a ten-person cross-functional team. That’s the direct result of collapsing the build cost of exploration.

Hoang argues that the rotation model matters as much as the squad composition. Four to eight weeks, then disband. The team doesn’t calcify into a feature factory. Designers rotate through the company’s hardest problems instead of sitting on the same product team filing tickets for years.

My counter: designers who sit in the same problem space gain deeper knowledge and context. Rotation could be counterproductive if it isn’t handled deliberately.

Hand-drawn Venn diagram showing three overlapping circles labeled Researcher, Design Engineer, and GTM, with the center intersection labeled "Forward Deployed Designer."

Forward deployed designer

In the early 2010s, Palantir coined a role that didn’t exist before: the Forward Deployed Software Engineer. These weren’t engineers building features on a roadmap. They were engineers embedded directly at client companies — sitting with analysts, operators, and decision-makers — to discover the problem and build the solution in the same motion. The role spread. Databricks, Scale AI, and OpenAI adopted variations.

proofofconcept.pub

I’ve argued that design tools should be canvas-first, not chatbox-first. Jeff, writing in Abduzeedo, makes the case for the opposite:

Designers have always borrowed from developers. Version control, component systems, token-based design — these ideas crossed the aisle from engineering and reshaped how visual work gets done. Vibe designing follows the same logic. Instead of opening Figma and reaching for a drag-and-drop panel, designers drop into the terminal. They prompt an AI model directly from the CLI, pipe the output into a file, and iterate without ever touching a mouse.

He isn’t theorizing. He published this article using browser automation and AI, with minimal manual clicking.

I don’t think the answer is CLI or canvas. It’s both. Designers are visual thinkers—that’s the cognitive foundation of the discipline, not a limitation to engineer away. Going fully terminal assumes we can be retrained to work without seeing what we’re making, or that the profession will attract people with entirely different skills.

What does look right is the plumbing underneath. Jeff on Paper.design’s MCP integration:

Its canvas is built natively on web standards — HTML and CSS — which means AI agents working through Paper’s MCP server can read and write design files directly. Tools like get_screenshot, get_jsx, write_html, and update_styles give Claude Code or Cursor direct read-write access to the design canvas.

HyperCard figured this out in 1987: direct manipulation on top of a scripting layer. The tools are finally catching up, with AI as the scripting engine.

VS Code editor with a browser preview showing the "Abduzeedo Editor" app, displaying a portrait photo with a VHS glitch shader effect applied.

Vibe Designing with Bash Access

Vibe designing is the design equivalent of vibe coding — where bash scripts, AI tools, and CLI commands are finally replacing traditional GUI-only tools.

abduzeedo.com

Intercom’s design team published numbers that show what happens when agents take over the build. John Moriarty, writing for Fin Ideas:

At Intercom, how we design and build software is unrecognizable from 12 months ago. Our engineering team is already at the point where 90% of pull requests are authored by Claude Code, part of an internal initiative called 2x, where the explicit goal is to double productivity using AI.

When 90% of your pull requests are AI-authored, the designer’s job changes whether you update the title or not. Moriarty’s framework for what comes next:

As the rate of execution accelerates, the role of design becomes sharper. Agents can generate artefacts, but they cannot decide which problems matter, set intent, resolve trade-offs, or hold the bar for quality. Our craft shifts with that reality. […] Agents will own the middle, the build. Design’s value concentrates at the edges, deciding what to build and then determining whether the output is good enough.

Design’s value lands at the edges, not the middle, and Intercom is already adapting their infrastructure to match. They’ve repositioned their design system as what Moriarty calls “agentic infrastructure”:

In a world where Agents write most of the code, design systems become the infrastructure that protects quality. Components, libraries and guidelines are the foundation that Agents and teams build on top of. The better the system, the better everything produced. Strong systems allow quality to scale without adding review overhead.

This tracks with the argument that design systems are becoming AI infrastructure—and Intercom is running it in production. The design system is the quality control layer that lets agents ship at speed without designers reviewing every screen.

Moriarty’s full piece covers how they’re restructuring day-to-day work—moving designers into code, treating Figma as a whiteboard, running structured AI fluency training. Worth a full read.

A paintbrush dissolves into digital code lines and circuitry, with the text "How we design when the code writes itself" and "Fin/ideas" logo.

How we design when the code writes itself

AI isn’t just increasing the speed of building, it’s changing how we work

ideas.fin.ai

Proprioception is the body’s sense of where its parts are in space. Marcin Wichary borrows the term for software that knows where its hardware lives: where the buttons are, where the ports are, where the camera is. His proposed design principle:

The rule here would be, perhaps, a version of “show, don’t tell.” We could call it “point to, don’t describe.” (Describing what to do means cognitive effort to read the words and understand them. An arrow pointing to something should be easier to process.)

Wichary walks through a series of examples, mostly from Apple: the Apple Pay animation that points at the side button, the iPad camera prompt that points to the physical lens, Dynamic Island camouflaging missing pixels as a functional UI element. The one that caught my eye is the device Simulator matching the physical dimensions of your actual phone on-screen and staying accurate even when you change the display density. Reminds me of one of the earliest selling points of the Mac’s 72dpi—it matches the real world: 72 points to an inch.

The MacBook Neo is where Wichary applies the principle and finds Apple falling short. The new model has two USB-C ports with different speeds, and macOS notifies you with text:

I think this is nice! But it’s also just words. It feels a bit cheap. macOS knows exactly where the ports are, and could have thrown a little warning in the lower left corner of the screen, complete with an onscreen animation of swapping the plug to the other port – similar to what “double clicking to pay” does, so you wouldn’t have to look to the side to locate the socket first.

Close-up of a MacBook Touch Bar displaying "Unlock with Touch ID →" above the minus, plus, equals, and delete keys.

Software proprioception

A blog about software craft and quality

unsung.aresluna.org iconunsung.aresluna.org

Thu Do set up Figma MCP + Claude Code and audited her entire design system in 10 minutes. The setup took 4 hours. But the reframe she arrives at matters more than the tooling:

Design tokens used to be “nice to have” for consistency. Now they’re infrastructure for AI-to-code-to-design workflows. AI agents read tokens to understand design intent. Proper tokenization = accurate code generation. Inconsistent systems = AI making wrong assumptions.

The bar for design systems just shifted from visual consistency to machine readability.
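To make "machine readability" concrete, here's a minimal sketch — mine, not from Do's piece — of why an agent benefits from tokens it can resolve rather than guess. The token names and the `$type`/`$value` shape loosely follow the W3C Design Tokens draft; everything here is illustrative:

```python
# Tokens as a machine-readable contract. All names and values below
# are invented for illustration.

tokens = {
    "color.brand.primary": {"$type": "color", "$value": "#0A66C2"},
    "color.text.default":  {"$type": "color", "$value": "#1D1D1F"},
    "space.md":            {"$type": "dimension", "$value": "16px"},
}

def resolve(name: str) -> str:
    """Resolve a token name to its concrete value, failing loudly.

    An agent restricted to what resolve() returns cannot invent
    off-system values -- the "AI making wrong assumptions" failure mode.
    """
    if name not in tokens:
        raise KeyError(f"unknown token {name!r}: ask the system, don't guess")
    return tokens[name]["$value"]
```

When the system is inconsistent or a token is missing, the failure is explicit instead of a plausible-looking hex code quietly shipped to production.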

3D illustration of a large red X shape constructed from hundreds of small red geometric block pieces on a dark background.

Your Design System Isn’t a Style Guide Anymore — It’s AI Infrastructure

I humbled myself quickly. Six months ago, I managed design systems the way most teams do: make and isolate small changes, coordinate with developers on implementation, write documentation manually, run audits when time allowed, and hand off specs for each new feature.

linkedin.com iconlinkedin.com

Buzz Usborne on what happens when AI takes on more responsibility in a product:

AI doesn’t simply make products smarter — it redistributes thinking and decision-making between humans and machines. When AI absorbs cognition, it also inherits responsibility. And when it inherits responsibility, the cost of its mistakes rises.

Usborne frames this through three forces that determine whether AI features survive or fail: trust, value perception, and cognitive effort. They amplify each other. Low trust increases perceived effort. High effort reduces perceived value. Low value further undermines trust.

His answer is to earn autonomy through interaction, not demand trust upfront:

Trust does not always need to precede adoption, it can emerge through usage. Salesforce’s findings show that “Human validation of outputs is the biggest driver in trusting the outcome, over consistently accurate outputs.” In other words, users trust systems they can interrogate, shape, and verify. And instead of designing AI products that are perfect, we can earn trust by designing experiences that are controllable.

Controllable over perfect.

Circular diagram with purple arrows showing a cycle: trust leads to value perception, which leads to effort/cognitive load, which feeds back to trust.

Designing AI Experiences People Actually Use

AI doesn’t just add intelligence — it redistributes it. Here’s how that shift can make or break a product.

buzzusborne.com iconbuzzusborne.com

Most product teams adding AI start by building a new surface for it. A custom panel. A chat sidebar. A dedicated AI workspace. Alexandra Vasquez, writing for Bootcamp, describes her team making exactly that mistake:

We built a custom AI panel with its own navigation, input styles, and button treatments. It looked “futuristic” in the prototype. In user testing, people kept asking where things were and how to get back to their actual work. We had created a separate product inside our product.

The fix was simple: they deleted the panel and put agent actions in the same menus, modals, and toolbars people already used. Slack does this with its /command structure. Notion uses the same slash menu for manual and AI actions. The pattern is existing UI that happens to be smarter.
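As a sketch of that pattern — hypothetical code, not Slack's or Notion's actual implementation — manual and AI-backed actions can share one registry and one invocation path, so the AI appears inside the menu users already know instead of on a new surface:

```python
import datetime
from dataclasses import dataclass
from typing import Callable

# Hypothetical command palette: one registry for all actions, so an
# AI action is just another entry in the existing slash menu.

@dataclass
class Command:
    slug: str                  # what the user types after "/"
    run: Callable[[str], str]  # same signature for manual and AI actions

registry: dict[str, Command] = {}

def register(cmd: Command) -> None:
    registry[cmd.slug] = cmd

def invoke(slug: str, payload: str = "") -> str:
    return registry[slug].run(payload)

# A manual action and an AI-backed one, side by side in the same menu.
register(Command("date", lambda _: datetime.date.today().isoformat()))
register(Command("summarize", lambda text: f"[AI summary of {len(text)} chars]"))
```

`invoke("summarize", draft)` travels the exact path as `invoke("date")`, which is the point: the AI action inherits the discoverability and muscle memory of the UI that already exists.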

Vasquez argues most “AI failures” are actually system failures that agents expose at scale:

Designing for agents means treating information architecture and workflows as foundational. Before building an agent, audit your system’s foundations: Are labels consistent? Do hierarchies make sense? Can a new team member navigate workflows without constant help? If humans struggle, agents will fail faster and at scale. Fix the system first.

She’s right. And there’s a more radical version of this: agents don’t need human UI at all. As long as the APIs are available, an agent can complete tasks without ever touching a button or reading a screen. The interface is for the human, not the machine.

But that’s exactly the problem. If the agent bypasses the interface, the human’s ability to express intent and verify output becomes the whole game. Intent has to be crystal clear. Feedback has to be immediate and legible. And there’s a huge amount of trust to earn before anyone is comfortable letting an agent operate in the background on their behalf. Vasquez lands here too:

The AI model is the last thing we discuss, not the first. These are product decisions, and designers have outsized influence here.

The model is the least interesting part. The interesting part is designing the trust.

Humorous UI dialog titled "Applying AI changes" with three checked items—"Making water wet," "Raising dog cuteness," and "Burning fire hotter"—and a progress bar showing "Processing...

Agentic UX: 7 principles for designing systems with agents

Agents don’t need their own screen, they need better systems to operate in

medium.com iconmedium.com

Jason Lemkin, writing for SaaStr, identifies a structural problem with niche SaaS vendors: the TAM is too small to fund the engineering team that would make the product great. His argument is about what happens when customers can finally do something about it:

Before vibe coding, building a custom app almost never made sense. Custom development cost $50K-$100K minimum, took months, and you owned a buggy codebase forever with no support. The math didn’t work. Vibe coding changes the math. When you can build a working application in hours instead of months, the question stops being “can we afford to build this?” and becomes “can we afford to keep using a product that doesn’t do what we need?”

Lemkin’s SaaStr team replaced a $10K/year sponsor portal in days. Then they built “10K,” an AI marketing agent that ingests four years of their data to run Monday meetings and generate a daily executable marketing plan. No vendor built it because the TAM for “exactly Jason Lemkin’s Monday meeting” is one.

The threat gradient for vendors:

Small niche tools with $5K-$50K contracts — thin markets, thin engineering teams, products that evolve slowly. Your customers now have a real alternative to waiting for your roadmap. They’ll build around you.

But Lemkin is honest about the other side:

We now manage 10+ vibe coded apps and 20+ AI agents. That’s real overhead. It’s manageable because the apps pull their weight. But be honest about what you’re taking on.

Three humans and 20+ agents is an impressive ratio and a fragile one. Maintenance is yours permanently. No support ticket. Complexity compounds. The vendors most at risk are the $10K-$50K niche tools whose moat was the cost of custom development. That moat is gone. The ones that survive will be the ones whose value lives in accumulated domain data, not in features a customer can rebuild over a weekend.

SaaStr AI 2026 Annual campus map showing a 3D overhead view of the 40+ acre event grounds with numbered locations including Hanger West, Hanger East, sponsor expo halls, stages, and registration areas.

The Rise of the “N=1” App: When Building It Yourself Really Beats Buying It.

The Rise of the “N=1” App: When Building It Yourself Really Beats Buying It So we built 2 more vibe coded app for SaaStr. Even though we didn’t want to. We’re already managing 20+ AI ag…

saastr.com iconsaastr.com

The question for vertical SaaS used to be: how do I make a better tool for this professional? Julien Bek, writing for Sequoia Capital, argues the question has changed:

If you sell the tool, you’re in a race against the model. But if you sell the work, every improvement in the model makes your service faster, cheaper, and harder to compete with. A company might spend $10K a year for QuickBooks and $120K on an accountant to close the books. The next legendary company will just close the books.

Bek draws a clean line between intelligence work (rule-based execution AI can already handle) and judgment work (experience, taste, strategic calls):

Writing code is mostly intelligence. Knowing what to build next is judgement. […] Deciding which feature to build next, whether to take on tech debt, when to ship before it’s ready.

That split tells product builders where to start: outsourced, intelligence-heavy tasks where a budget line already exists and the buyer is already purchasing an outcome. Replacing an outsourcing contract is a vendor swap. Replacing headcount is a reorg. Start with the swap.

But the part that should reshape how designers think about product strategy is the convergence thesis:

Today’s judgement will become tomorrow’s intelligence. As AI systems accumulate proprietary data about what good judgement looks like in their domain, the frontier will shift. Copilots and autopilots will converge.

This is data recipes given a business model. The moat for the next generation of vertical products won’t be the interface or even the model underneath it. It’ll be the compounding dataset of domain-specific decisions—what “good” looks like in insurance brokerage or medical coding or contract law. Every task the autopilot completes teaches it something the copilot never learns, because the copilot hands that knowledge back to the human.

Bek maps this across a dozen verticals with TAM estimates. Worth reading the full piece if you’re thinking about how to build the next generation of AI tools.

Silhouetted conductor's hand raising a baton and a cat watching an explosive burst of glowing data streams and network connections on a dark background.

Services: The New Software

The next $1T company will be a software company masquerading as a services firm.

sequoiacap.com iconsequoiacap.com

In high school and through college, I worked at a desktop publishing service bureau in San Francisco. We had Macintosh computers and Linotronic imagesetters (super hi-res laser printers), not Linotype machines. Down the street, those traditional type shops still existed, but their business was already thinning out. Occasionally a graphic designer would send us type to set, and we’d do it in QuarkXPress. The fact that the job landed on our desk at all told you everything about where the industry was headed. The shop’s real business was pre-press and color separations, and eventually direct-to-plate eliminated even that.

Erika Flowers has been building out her Zero-Vector Design framework, and two of her pieces read as a pair. “Zero Stage to Orbit” on UX Magazine uses the rocket equation as a structural lens for the design-to-development pipeline. “The Last Typesetter” on her Substack uses the death of the typesetting profession to make the same argument from a different direction. Together they make the case that the design role, not the skill, is dissolving.

In “The Last Typesetter,” Flowers draws on Sennett:

When suddenly everyone could set type, the difference between good typography and bad typography went from an industry concern to a public epidemic. Bad kerning everywhere. Rivers running through justified text. Orphaned words dangling at the tops of columns like socks left on a clothesline. The people who understood typography were needed more than ever.

But not as typesetters.

Richard Sennett wrote about this in The Craftsman: the difference between a skill and the institutional container built around that skill. Containers look permanent until they are not. The skill outlives every container it has ever occupied.

That’s what happened at the service bureau. The skill—color, typography, print production—survived. The container—the shop, the role, the apprenticeship—did not.

In “Zero Stage to Orbit,” Flowers maps the pipeline onto rocket science:

Each stage in the traditional pipeline is designed to compensate for the limitations of the previous one. Research to inform design. Design to spec for developers. Specs to survive handoff. QA to catch what handoff broke. Retros to discuss why QA caught so much. Process to manage process.

Fuel to carry fuel. The modern development pipeline is not a solution. It is a multi-stage rocket. And most of the energy is going to overhead.

The overhead diagnosis is sharp, and the launch pad economy—consultancies, workflow tools, Agile coaching certifications—has a financial interest in keeping the rocket grounded.

Flowers addresses why the “unicorn” solution failed:

The design technologist did not fail because no one person can possess all the skills. The design technologist failed because no one can hold all the skills while still fighting gravity. They were still launching from the ground, still hauling the translation overhead, just with one person doing all the hauling instead of a team.

The problem was never the number of stages. It was the gravity well.

A product manager I work with recently told me he could think of a solution to a user need, but not a creative solution the way the designer on his team could. Specialization produces real expertise. The design technologist wasn’t wrong about the vision. They were wrong about the physics. AI changes the gravity, not the skills.

What separates both pieces from the standard “AI changes everything” take:

I am also uncertain here, also mid-journey, also discovering orbit’s real constraints in real time. My career, work, and livelihood are just as much at risk as everyone else’s. But that doesn’t discount the facts about the transition to new capabilities.

She’s out on a limb, reflecting a shift the entire industry can feel, without pretending she has the map. In “The Last Typesetter,” she puts it more bluntly: “Defend the role, or follow the skill.”

The skill will survive. It always has. But the transition is real, and not everyone can afford to be mid-journey. Truthfully, I am uncertain too. The thing I’ve loved to do since the 7th grade, the thing that has been my identity for most of my life, is changing, possibly dissolving into something else.

Shiny metallic rocket launching diagonally upward against a blue sky, with the text "Design never had a process problem but a gravity one."

Zero Stage to Orbit

What if the pipeline was never broken — it was just never meant to get you to orbit? From handoff docs to sprint ceremonies, every tool and role we built was rational until Orbit became available. Find out what it really means to ship from there.

uxmag.com iconuxmag.com

If you’re a designer who feels the ground shifting but doesn’t know where to step, Erika Flowers built a free, structured curriculum for exactly that moment. Zero-Vector Design is her framework for collapsing the handoff between design and engineering, using AI agents as crew rather than replacements. The distinction she draws between this and vibe coding is worth internalizing:

You bring the systems thinking, the architecture, the years of knowing what good looks like. The AI extends your reach, not your judgment. Speed without intention is just faster failure. Speed with intention is leverage.

Six levels, 60+ lessons, all free. Worth bookmarking.

Zero-Vector Design brand card on dark background with tagline "From intent to artifact, directly." and website zerovector.design

Zero-Vector Design

A design philosophy for the age of AI. No intermediary. No translation layer. No friction. From intent to artifact, directly.

zerovector.design iconzerovector.design

Weber Wong’s “artifact thinking” names the problem: creative work that produces one-off outputs, each beginning from scratch. Prompts are artifacts. Skills are not.

Nick Babich, following up his earlier roundup of Claude skills, looks at Anthropic’s skill-creator, a meta-skill that generates and evaluates new skills. His framing of what a skill actually is:

Many people explain the role of a skill as a set of instructions that Claude automatically activates for a particular task. While this is a correct way to describe its behavior, it’s better to think of a skill as a recipe. Just like when we cook something, we rely on a recipe to do the job correctly, Claude will rely on a dedicated skill.

Recipes compound. You refine them, share them, adapt them for new contexts. Prompts are disposable. Skills persist.
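Concretely, a skill is a folder containing a `SKILL.md` file: YAML frontmatter that tells Claude when to activate it, followed by the instructions themselves. The skill name and steps below are invented for illustration; treat the specifics as approximate:

```markdown
---
name: audit-design-tokens
description: Checks a design file's values against the team's token library
---

# Audit design tokens

1. Load the token library from `tokens.json`.
2. Flag any hard-coded value that should reference a token.
3. Report findings grouped by severity.
```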

And now skills can write other skills. Babich walks through the full skill-creator setup, and the most interesting detail is the self-evaluation loop:

The great thing about Skill Creator is that it triggers a process that evaluates the quality of output a newly created skill will produce. This evaluation is exactly what helps you achieve better results with your skill.

Worth following along if you’re building your own. (And you should be!)

Title graphic for "Claude Skills 2.0" featuring a terracotta square with a white silhouetted head containing a flower or starburst design.

Claude Skills 2.0 for Product Designers

Anthropic has recently improved the process of creating new Claude Skills, and this improvement is so significant that it almost feels like…

uxplanet.org iconuxplanet.org

Designers have been saying this for years. Cameras don’t take pictures, photographers do. Tools don’t make you a better designer. Now the PM world is arriving at the same conclusion.

Shreyas Doshi argues that AI tools will commoditize across companies—any effective tool becomes common knowledge—and the only durable career moat is the human judgment applied on top of AI outputs. He calls it “Product Sense.”

Tools have never been a significant source of alpha in product success and that is not changing with AI tools. What this means for you personally is that - while you can and should use all the AI tools you can - you cannot bank on your use of those AI tools today to provide you a long-term advantage in your product career.

Replace “product people” with “designers” and this could be a post on my blog. The five skills Shreyas decomposes Product Sense into—empathy, simulation, strategic thinking, taste, creative execution—are skills good designers have cultivated under different names for decades.

The piece includes an appended Claude conversation that stress-tests the argument. The sharpest exchange challenges the Silicon Valley orthodoxy that fast B+ beats slow A+:

In practice, the B+ decision made quickly tends to create a cascade of follow-on decisions, each of which is also slightly off, and you end up significantly off-course in ways that are expensive to correct. Whereas the A+ decision, even if it takes longer, tends to set you on a trajectory where subsequent decisions are easier and more obvious. The compounding effect favors quality of judgment, not speed of judgment.

Good judgment compounds. Bad judgment compounds too, just in the wrong direction.
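A toy calculation — mine, not Doshi's — makes the compounding intuition tangible. Treat each decision in a dependent chain as multiplying overall quality by a per-decision factor (the 0.90 and 0.99 factors are invented for illustration):

```python
# Toy model: per-decision quality compounds multiplicatively across a
# chain of dependent decisions.

def trajectory(per_decision_quality: float, n_decisions: int) -> float:
    """Overall quality after n dependent decisions."""
    return per_decision_quality ** n_decisions

fast_b_plus = trajectory(0.90, 20)  # quick, slightly-off calls
slow_a_plus = trajectory(0.99, 20)  # slower, higher-quality calls
```

After twenty dependent calls the fast-B+ chain sits near 0.12 while the A+ chain holds about 0.82. The gap comes from compounding, not from any single decision.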

Definition slide: "Product Sense is the ability to make correct product decisions, both macro & micro, in the presence of significant ambiguity.

Why Product Sense is the only product skill that will matter in the AI age

I get asked all the time:

shreyasdoshi.substack.com iconshreyasdoshi.substack.com

Eugene O’Neill had a line: “Critics? I love every bone in their heads.” I think about it whenever someone proposes that what design really needs is more people who understand it without doing it.

Jon Kolko, writing for Interactions Magazine, argues that design is experiencing a disciplinary “turn”—away from making and toward literacy. Drawing on Richard Buchanan’s 1992 framework of design as a “liberal art of technological culture,” he proposes a future with fewer practitioners and more people who can read, critique, and discuss designed artifacts without designing them.

Rather than viewing design as an applied craft, a liberal art of technological culture recasts design as a way of understanding our role in the designed world around us. It’s difficult for many practitioners to imagine this, because making things is so integral to the idea of design, and embedding design in the humanities is very different from viewing it as an organizing principle like the humanities. But if design is not about making things, but instead about understanding the things that are made, vocation is no longer a goal of design education.

Kolko’s diagnosis is sharp—the layoffs, the AI anxiety, the assembly-line feeling of modern product design. And he sits with the discomfort rather than cheerleading:

As a craftsperson and a maker, I don’t like the way this turn feels, because it appears threatening to the fundamentals of the profession. Understanding design without making things seems impossible. I don’t like this development as an educator either, because it means my students, trained to be practitioners, may find no design jobs, despite getting a very expensive education. But as someone observing the various trends chipping away at what is actually meaningful about being a designer—our ability to humanize the dysfunction of technological change—I am drawn to this turn.

I respect the honesty. But I have a love/hate relationship with critics. It’s easy to throw stones from a perch. When you’re in it—fighting organizational politics, staring at data, listening to customers, compromising with engineering—the outcomes are never as clean as you’d hoped. Design literacy matters. But literacy divorced from practice produces critics, not designers. The world doesn’t need more critics. It needs more people who understand, through lived experience, why the compromises were made.

Jon Kolko - A Design Turn

Designers are anxious. Layoffs have not let up, AI has seemingly trivialized our magic skill of making things, and practicing designers describe the assembly-style nature of software design as soul-crushing.

jonkolko.com iconjonkolko.com