
46 posts tagged with “design systems”

14 min read
Pointillist-style painting of a formally dressed figure in a black top hat holding a glowing green laptop, surrounded by a crowd of early 20th-century people.

A Sunday Afternoon with Claude Design

It’s really hard to get momentum on a side project when you have a full-time job with lots of travel, an active blog, and a newsletter. But I had to recapture that momentum because this side project is important. It’s a preschool website for my cousin.

Walking into My Little Learning Tree is like stepping into pure warmth. Yes, yes, preschools are inherently fun environments, but the kids and the teachers there create a visceral energy that is simply special. I wanted to capture that specialness in a long-overdue website redesign project.

Looking at my in-progress design, something felt off. I had these long horizontal lines preceding the eyebrows—the small text above a heading that names the section—that didn’t feel right. First, they were straight. Second, the lines only occurred before the text, not also after. I clicked on the Comment button to enter Comment mode, then clicked on the eyebrow and prompted, “These lines aren’t playful enough. Let’s make them squiggles and have them before and after the eyebrow text.”

And then Claude Design did its thing.

The designer’s role is widening at both ends of the product stack. Earlier, I linked to a post by Chad Johnson arguing designers gain influence by moving upstream: becoming orientation devices for the team, shaping the problem before it gets named. Daniel Mitev, writing for UX Collective, argues designers gain authorship by moving downstream, into the code:

The industry has been asking whether designers should code for over a decade. It was always the wrong question, or at least the wrong framing. It implied the barrier was technical: that designers lacked something fundamental, something that required years of study to acquire. Learn TypeScript. Understand the DOM. Earn your way across the divide. That wasn’t the barrier.

Mitev’s argument comes down to access. AI tooling compresses the translation layer and returns authorship to the designer:

What AI tooling gives back is authorship over the surface layer — the part users actually touch. A designer can now open the codebase, adjust how an element behaves, change how a transition feels, and verify the output against their own intent in real time. The easing curve gets set by the person who decided what it should feel like. The hover state gets defined by the person who thought through why it matters. That work no longer requires an interpreter.

He points at Alan’s “Everyone Can Build” initiative—283 pull requests shipped by non-engineers over two quarters, each merged after engineering review—as evidence it’s already happening.

Johnson and Mitev aren’t in conflict. They’re describing the same shift from opposite ends. The interpreters at the top of the product stack—PMs who owned problem framing and prioritization—are compressing. The interpreters at the bottom—frontend engineers translating intent into code—are compressing too. Both jobs return to the designer who understood the intent first.

The role widens. Some designers will gravitate to one end or the other. The designers who stretch the full range—orientation work and authorship—are working the widest version of the job.

A hand pressing an Enter key above a terminal showing a git commit command, with text reading "Designers finally have a say in the product they design."

Designers finally have a say in the product they design

AI didn’t teach designers to code. It gave them back the decisions that were always theirs.

uxdesign.cc

Two podcast conversations with frontier-lab design leaders on what designing at an AI lab looks like day-to-day. I previously linked to Lenny Rachitsky’s interview with Jenny Wen, head of design for Claude, where she described a redistribution of designer hours: less mocking, more pairing with engineers, a sliver of direct implementation. The activities themselves still look like design.

Ian Silber, head of product design at OpenAI, on Michael Riddering’s Dive Club, describes work that doesn’t fit the same list:

Designers working on this are hopefully spending a lot less time in Figma or whatever tool you use to draw pixels, and more time really thinking about how you interact with this thing, and the fact that the model really is the core product.

Silber’s concrete example is onboarding. Instead of building a first-run tutorial, his team shapes what the model already knows about the person:

We have this super intelligent model that could probably do a much better job trying to understand what this person’s goals are […] We’re really stripping back a lot of what you might traditionally do and trying to say, “Well, actually […] let’s think about like how we should give this context to the model that this person is brand new and they might need some handholding.”

The traditional response adds UI around the problem. Silber’s team takes it out and gives the model enough context to meet the user where they are.

That kind of work needs its own scaffolding, and OpenAI is building it:

We have a whole system called the Dynamic User Interface Library, which allows us to design things that the model can then interpret.

Primitives the model composes at runtime, shaped by system prompts and context rather than drawn flow by flow. Wen is describing a redistribution of designer hours inside activities that still look recognizable. Silber is describing activities that don’t quite have names yet. And yes, that is still design.

Ian Silber - What it’s like designing at OpenAI

If you’re like me you gotta be curious... what’s it like designing at OpenAI?

youtube.com

Tara Tan surveyed more than a dozen AI design tools for The Review. Her field audit sits alongside the design-process compression argument:

In working with these tools, one insight emerged for me: the tools that understand your design system produce better output than the ones that don’t. […] The competitive moat in this market is not generative quality, which is commoditizing fast. The moat is the design system graph: the tokens, components, spacing scales, typography rules, and conventions that make your product look like your product and not a generic template. Whoever makes that system machine-readable for agents will win the enterprise.

That’s the operational reason my proposal for an agent design team hinges on a rock-solid design system. What distinguishes output across the tools Tan surveyed is whether the generator respects your existing design system or treats every request as a fresh mood board.

Tan’s other finding is the role-shift:

The same shift is happening in design. At Uber, Ian Guisard didn’t stop being a design systems lead when uSpec automated his spec-writing. His job shifted from producing documentation to encoding expertise, writing agent skills, defining validation rules, deciding what “correct” means for each component across seven platforms. The human became the system designer, not the system operator. […] The canary is singing. And the song is about the work shifting from execution to judgment, from operating the system to designing the system itself.

Same title, different job. Ian Guisard’s taste still matters; it lives in the skills and validation rules now, not the deliverables. That’s “follow the skill, not the role” made concrete. Guisard used to write specs; now he writes the rules the system follows to validate them.

The infrastructure is catching up to the process. Tan’s implicit prescription is straightforward: make the design system machine-readable, win the enterprise. Some of that tooling is already out in the open. Southleft’s Figma Console MCP (which Uber’s uSpec is built on) lets agents operate on tokens and components without a custom platform.

But tooling alone isn’t enough. Most of us aren’t Uber. The path for teams without a dedicated design systems lead still needs someone to do the work Guisard did: encoding the expertise and defining what “correct” looks like across platforms. That’s where the next round of tooling needs to land.

"The Design Agent Landscape" diagram categorizing AI design tools into three groups: Agent-first canvas (Pencil, Paper, OpenPencil), Design system-first (Figma MCP, Console MCP, Google Stitch), and Code-native (Subframe, MagicPath, Tempo, Polymet, Magic Patterns, Lovable, Bolt, v0, Replit).

The Design-Build Loop

Design is where AI product workflows meet their hardest test: an audience that will always, primarily, be human. A look at the tools, teams, and infrastructure emerging around AI design agents.

thereview.strangevc.com

I’ve watched design team values die in a Confluence page. The offsite happens, the Post-Its get transcribed, the principles get written up with care, and then everyone goes back to their desks and ships exactly the way they did before. I’ve seen it with product principles and brand values too. The deck gets built, implementation starts, and the deck gets forgotten.

Vitaly Friedman, writing for Smashing Magazine, on why this matters more than ever:

We often see design principles as rigid guidelines that dictate design decisions. But actually, they are an incredible tool to rally the team around a shared purpose and document the values and beliefs that an organization embodies. They align teams and inform decision-making. They also keep us afloat amidst all the hype, big assumptions, desire for faster delivery, and AI workslop.

Friedman again:

In times when we can generate any passable design and code within minutes, we need to decide better what’s worth designing and building — and what values we want our products to embody. It’s similar to voice and tone. You might not design it intentionally, but then end users will define it for you. And so, without principles, many company initiatives are random, sporadic, ad-hoc — and feel vague, inconsistent, or simply dull to the outside world.

You might not write principles intentionally, but your product will have them anyway. The question is whether you chose them or inherited them by default.

Friedman closes with the part most teams skip:

Creating principles is only a small portion of the work; most work is about effectively sharing and embedding them. It’s difficult to get anywhere without finding ways to make design principles a default — by revisiting settings, templates, naming conventions, and output. Principles help avoid endless discussions that often stem from personal preferences or taste. But design should not be a matter of taste; it must be guided by our goals and values.

Creating principles feels productive. But alignment without embedding is a Confluence page nobody opens twice. Principles have to show up in the Figma component library, the ticket template, the review rubric. They have to be repeated so that they are ingrained. They have to become the path of least resistance.

Smashing Magazine article title card: "A Practical Guide To Design Principles" by Vitaly Friedman, tagged Design, UX, UI.

A Practical Guide To Design Principles — Smashing Magazine

Design principles with references, examples, and methods for quick look-up. Brought to you by Design Patterns For AI Interfaces, friendly video courses on UX and design patterns by Vitaly.

smashingmagazine.com

Jessica Deseo, writing for PRINT Magazine, reports on a talk by Ric Edwards, VP of Brand Design at LA28. His challenge: branding an Olympics for a city that resists a single identity. Edwards on LA:

“There’s no one version of it. You would do a disservice if you limited it to one story.”

I spent a few years in Los Angeles and visit regularly. It’s sprawling and each area is distinct. Edwards is right. So instead of a fixed logo, LA28 built a system. The “A” in the emblem is a canvas, reinterpreted by athletes, artists, and communities. The L, 2, and 8 are set in different typefaces. The brand holds many narratives rather than collapsing into one.

“We’re trying to be a stage for all of those stories.”

That word, “stage,” is the whole strategy in one sentence. A stage doesn’t perform. It creates the conditions for others to perform on it. That’s a fundamentally different job than traditional branding, which is usually about control: one mark, one voice, one set of guidelines. LA28 is designing for distributed authorship at global scale, and Edwards is honest about what that costs:

“Operationally, it’s a nightmare.”

Every variation of the emblem has to work across stadiums, broadcast, merchandise, and digital. And then each creative contribution has to pass through legal, production, and brand governance. The ambition is real, and so is the complexity behind it. The Olympics is…well…the Olympics of branding.

LA28 Olympics logo with three colorful tiles against a blurred bird of paradise flower background.

Beyond the Logo: How LA28 Turns Branding into a Platform for Culture

At SEGD, LA28’s design lead, Ric Edwards unpacked the challenge of creating an Olympic identity for a city defined by so much heritage and culture.

printmag.com

Figma is opening its canvas as a writeable surface for AI agents. Matt Colyer, product director at Figma, on why this matters:

Design decisions—from color palettes and button padding, to typography and interactivity—have always defined how products take shape. No matter how small, those decisions add up. They make your product and user experience stand out from the rest. To date, AI agents haven’t had this context, which is why so many designs created by AI often feel unfamiliar and generic.

The fix is beefing up skills files, by encoding a team’s design decisions, conventions, and sequencing rules. Agents read them before they touch the canvas. The use_figma tool lets Claude Code, Codex, and other MCP clients create and update assets tied to your design system. Colyer on what that changes:

Your conventions are no longer static documentation. They become rules agents follow as they work—applied through components, variables, and the structure you’ve already defined.

The detail worth paying attention to is what Colyer describes as a self-healing loop. When an agent generates a screen, it screenshots the result, checks it against the design system, and iterates. Because it’s working with real components and auto layout, those corrections compound through the system itself, not just the pixels on screen.
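To make that loop concrete, here is a minimal sketch in TypeScript. Everything in it is a stand-in: `auditAgainstTokens`, `applyFixes`, and the `{color.primary}` token are invented for illustration, not Figma APIs. The point is only the shape of the cycle: generate, check against the system, correct, repeat.

```typescript
// A toy model of a generated screen: each string is a "node" with styling.
type Violation = { node: string; rule: string };

// Hypothetical design-system check: flag any node using a raw hex color
// instead of a token reference.
function auditAgainstTokens(screen: string[]): Violation[] {
  return screen
    .filter((node) => /#[0-9a-f]{6}/i.test(node))
    .map((node) => ({ node, rule: "use color tokens, not raw hex" }));
}

// Hypothetical fix step: swap the raw hex for a token reference.
function applyFixes(screen: string[], violations: Violation[]): string[] {
  const bad = new Set(violations.map((v) => v.node));
  return screen.map((node) =>
    bad.has(node) ? node.replace(/#[0-9a-f]{6}/i, "{color.primary}") : node
  );
}

// Generate → audit → correct, looping until the audit passes or we give up.
function selfHealingLoop(initial: string[], maxIterations = 5): string[] {
  let screen = initial;
  for (let i = 0; i < maxIterations; i++) {
    const violations = auditAgainstTokens(screen);
    if (violations.length === 0) break;
    screen = applyFixes(screen, violations);
  }
  return screen;
}

console.log(selfHealingLoop(["Button fill=#ff0000", "Text fill={color.text}"]));
```

In the real workflow the "audit" is a screenshot compared against the design system and the "fix" is the agent regenerating with real components; the compounding comes from corrections landing in components and auto layout rather than in loose pixels.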

It’s free during beta, with plans to move to a paid API. Figma is finally joining the party: Subframe, Paper, and Pencil already offer this workflow.

Terminal window titled "earthling — zsh" showing an AI prompt to build a component set from a button.tsx file, with output confirming 72 button variants created, overlaid on a Figma canvas with UI components.

Agents, Meet the Figma Canvas

Starting today, you can use AI agents to design directly on the Figma canvas. And with skills, you can guide agents with context about your team’s decisions and intent.

figma.com

Forty-four UI panels generated in ten minutes, each one grounded in real customer research. Jason Cyr, writing for The Human in the Loop, on what happened when his team pointed Claude Code at Cisco’s design system:

Last week, one of my design directors pointed Claude Code at Magnetic and asked it to build a security detection prototype. Real components, real navigation, theme switching, working admin panels — running in ten minutes. Then he connected it to our research repository and it built 44 detection detail panels, every design decision tracing back to something a real customer said. That happened because the AI had access to our design system.

Cyr’s takeaway: the design system was the design review.

Your design system is your leverage. It’s how your taste scales. The teams that invest here will see their design decisions show up in every agent-generated output, automatically. The teams that don’t will spend all their time cleaning up messes that a good system would have prevented.

Monday.com arrived at the same conclusion from the engineering side. They built a design-system MCP after their agents kept hardcoding colors and ignoring typography tokens.

Cyr doesn’t shy away from who this leaves behind, either: designers whose value lives entirely in production. “Not because they’re bad at their jobs — but because AI just got very good at theirs.”

Title card reading "Design Teams in the Agentic Era" with the subtitle "A manifesto for what comes next." on a dark background.

Design Teams in the Agentic Era

My thoughts on what comes next

jasoncyr.substack.com

Intercom’s design team published numbers that show what happens when agents take over the build. John Moriarty, writing for Fin Ideas:

At Intercom, how we design and build software is unrecognizable from 12 months ago. Our engineering team is already at the point where 90% of pull requests are authored by Claude Code, part of an internal initiative called 2x, where the explicit goal is to double productivity using AI.

When 90% of your pull requests are AI-authored, the designer’s job changes whether you update the title or not. Moriarty’s framework for what comes next:

As the rate of execution accelerates, the role of design becomes sharper. Agents can generate artefacts, but they cannot decide which problems matter, set intent, resolve trade-offs, or hold the bar for quality. Our craft shifts with that reality. […] Agents will own the middle, the build. Design’s value concentrates at the edges, deciding what to build and then determining whether the output is good enough.

Design’s value lands at the edges, not the middle, and Intercom is already adapting their infrastructure to match. They’ve repositioned their design system as what Moriarty calls “agentic infrastructure”:

In a world where Agents write most of the code, design systems become the infrastructure that protects quality. Components, libraries and guidelines are the foundation that Agents and teams build on top of. The better the system, the better everything produced. Strong systems allow quality to scale without adding review overhead.

This tracks with the argument that design systems are becoming AI infrastructure—and Intercom is running it in production. The design system is the quality control layer that lets agents ship at speed without designers reviewing every screen.

Moriarty’s full piece covers how they’re restructuring day-to-day work—moving designers into code, treating Figma as a whiteboard, running structured AI fluency training. Worth a full read.

A paintbrush dissolves into digital code lines and circuitry, with the text "How we design when the code writes itself" and "Fin/ideas" logo.

How we design when the code writes itself

AI isn’t just increasing the speed of building, it’s changing how we work

ideas.fin.ai

Thu Do set up Figma MCP + Claude Code and audited her entire design system in 10 minutes. The setup took 4 hours. But the reframe she arrives at matters more than the tooling:

Design tokens used to be “nice to have” for consistency. Now they’re infrastructure for AI-to-code-to-design workflows. AI agents read tokens to understand design intent. Proper tokenization = accurate code generation. Inconsistent systems = AI making wrong assumptions.

The bar for design systems just shifted from visual consistency to machine readability.
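A toy illustration of what "machine-readable" buys you: token names carry intent, so an agent can resolve a request like "the primary action color" through the system instead of guessing a plausible-looking hex code. The names and values below are invented for illustration, not any particular team's tokens.

```typescript
// A minimal token map. The names encode design intent; the values are
// the flattened output that ships to code.
const tokens = {
  "color.action.primary": "#0f62fe",
  "color.action.danger": "#da1e28",
  "space.inline.sm": "8px",
  "space.inline.md": "16px",
} as const;

type TokenName = keyof typeof tokens;

// An agent generating UI resolves intent → token → value. If the name
// doesn't exist, the request fails loudly instead of producing a
// hardcoded value that merely looks right.
function resolve(name: TokenName): string {
  return tokens[name];
}

console.log(resolve("color.action.primary")); // "#0f62fe"
```

Inconsistent or unnamed tokens break exactly this lookup, which is what Do means by AI "making wrong assumptions": with nothing to resolve against, the model invents.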

3D illustration of a large red X shape constructed from hundreds of small red geometric block pieces on a dark background.

Your Design System Isn’t a Style Guide Anymore — It’s AI Infrastructure

I humbled myself quickly. Six months ago, I managed design systems the way most teams do: make and isolate small changes, coordinate with developers on implementation, write documentation manually, run audits when time allowed, and hand off specs for each new feature.

linkedin.com

Most design teams treat the design system as the starting point. Open a new project, pull in the component library, start assembling. It’s efficient. It’s also a trap, according to one designer.

David Hoang, writing for Proof of Concept:

I start without a design system. This is deliberate. Production-grade components carry assumptions—spacing, hierarchy, interaction patterns—that narrow the solution space before you’ve had a chance to explore it. If I’m proposing a feature, the design system is the right starting point. But in exploration mode, the system comes later. Sketches are for divergence; design systems are instruments of convergence.

Design systems exist to create consistency, not ideas. When you reach for them too early, you may be converging before you’ve diverged.

Hoang’s workflow inverts the order: sketch unconstrained in code, dial up technical fidelity first, bring the design system in only after you’ve found directions worth pursuing. LLMs make that final step nearly free:

The design system isn’t a starting point—it’s a finishing move. You sketch unconstrained to explore the problem space, then snap your best ideas onto the system’s rails to see if they hold up. The LLM makes that snap nearly instant, so I can run the full loop—sketch, evaluate, systemize—multiple times in a single session. Ideas that break under the system’s constraints get caught early. Ideas that survive get stronger.

The designer makes every structural decision. The LLM handles the re-skinning. Production work, not judgment work.

And ideas that break the system’s constraints surface gaps worth contributing back. That’s the part most design system teams miss. The system should learn from the exploration it constrains, not just gate it.

Hand-drawn diagram showing multiple "Code slides" feeding into a central "Draw tool" grid, which outputs to a "Solution" box on the right.

Sketching with code

Issue 286: Treating code like a pencil, not a blueprint

proofofconcept.pub

On Jayneil Dalal’s Sneak Peek, Domingo Widen, a staff designer at Intercom, walks through their version of an AI-native design org: Figma MCP plus Claude Code plus Code Connect, producing prototypes that deploy as PRs to GitHub. Designers never check the code. Engineers get real components, not AI hallucinations.

The trick is in the plumbing:

This is something that designers don’t understand, that sometimes they use the MCP without an actual proper code connection, which is good, right? Like the link that you’re sending to AI, it’s going to include a lot of information around the spacing, the token, the color. But it’s not real code connection. The real power that you find is that when you actually connect these components. […] You’re actually giving Claude the actual path to that component in the codebase, so that when you send the link, the button already exists under this path. You don’t need to create it again. You can just import it.

Without Code Connect mapping every component to its import path, AI gets visual information but reinvents components from scratch. The judgment is encoded in the infrastructure, not the model.
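The mapping itself is simple to sketch. This is not the real Code Connect API, just a toy TypeScript illustration of the idea Widen describes: each design-system component points at its import path, so an agent imports the existing component instead of regenerating it. The `@acme/...` package paths are made up.

```typescript
// Hypothetical component-to-codebase mapping, in the spirit of Code Connect.
const componentMap: Record<string, { importPath: string; exportName: string }> = {
  Button: { importPath: "@acme/design-system/button", exportName: "Button" },
  Modal: { importPath: "@acme/design-system/modal", exportName: "Modal" },
};

// Given a component name recognized from a design link, emit the import
// the agent should write. An unknown component returns undefined, which
// surfaces a gap in the system instead of silently reinventing the part.
function importFor(component: string): string | undefined {
  const entry = componentMap[component];
  if (!entry) return undefined;
  return `import { ${entry.exportName} } from "${entry.importPath}";`;
}

console.log(importFor("Button"));
// import { Button } from "@acme/design-system/button";
```

The design choice is that absence is a signal: a component with no mapping is exactly the case where an agent would otherwise hallucinate a from-scratch lookalike.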

Widen again:

In the background, every single pattern that we add to the system, every single component that we add to the system, it becomes a new piece of code that designers can use to prototype, that PMs can use to prototype, that engineers can use to prototype and build. And it’s kind of like a compounding effect essentially. So the more things we add to our design system in terms of components and patterns, the better cleanups that we do and the more tunings that we do, everybody kind of can benefit from them.

The compounding is real, but so is the upfront cost. Intercom needed a dedicated team, a prototyping hub, documentation, tutorials, and months of skills engineering to get here. A 20-person startup isn’t replicating this workflow anytime soon.

I wrote about this gap after getting pushback on my own AI-in-design arguments. The tooling works if you already have the infrastructure and the experience. For most designers, that’s not where they are yet.

How I Vibe Code as a Designer at Intercom

👋 Welcome to Sneak Peek with Jay, a series where you will see how top design teams use AI. In this interview Jay chats with Domingo Widen (Staff Product Designer) who shows the AI design process at Intercom!

youtube.com

Every design system is an exercise in compression. You take contextual reasoning—why this spacing, why this type scale—and flatten it into tokens and components that can ship without the backstory.

Mark Anthony Cianfrani:

the reason that your line height is set to 1.1 is because your application is, or was at one point, very data-intensive and thus you needed to optimize for information density. Because one time someone complained about not being able to see a very important row in a table and that mistake cost so much money that you were hired to redesign the whole system. But that’s a mouthful. You can’t throw that over the wall. An engineer can’t implement that. So we make little boxes with all batteries included.

All of that reasoning gets flattened into line-height: 1.1. The token ships. The reasoning doesn’t. Every design system makes this trade-off: you lose the why to gain portability.

Cianfrani argues we don’t have to accept that trade-off anymore:

LLMs give us the ability to ship our exact train of thought, uncompressed, a little bit lossy but still significantly useful. Full context that is instantly digestible. Instead of shipping <Boxes>, ship a factory.

Design systems were never the end goal. They were the best compression format we had. Components and tokens became the shipping containers because the full reasoning was too unwieldy to hand off. That constraint is loosening. In spec-driven development, that factory looks like a structured document: design intent expressed in plain language that AI agents build against directly. The spec is the reasoning, uncompressed.
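One hedged sketch of what shipping the reasoning could look like: a token that carries its rationale as data, so the why travels with the value instead of being flattened away. The field names and wording here are invented for illustration.

```typescript
// A token that keeps its backstory attached, machine- and human-readable.
interface Token {
  value: string;
  rationale: string; // the uncompressed "why" from the paragraph above
}

const lineHeight: Token = {
  value: "1.1",
  rationale:
    "Optimized for information density: the app was data-intensive, and a " +
    "buried table row once cost real money.",
};

// The traditional artifact flattens the token to its value alone…
const css = `line-height: ${lineHeight.value};`;

// …while an agent consuming the full token can weigh the rationale before
// overriding the value in a context where density no longer matters.
console.log(css); // "line-height: 1.1;"
```

A spec document is this same move at larger scale: the whole system's rationale expressed in plain language that agents build against directly.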

Even if the AI bet doesn’t pay off:

And if this whole AI thing turns out to burst, at least you’ve improved the one skill that some of the best designers I’ve ever worked with had in common—the ability to communicate their design decisions into words.

The compression problem was always worth solving, with or without LLMs.

Pale cream background with four small colored squares—teal, burgundy, orange-red, and mustard—aligned along the bottom-right edge.

Designing in English

Components are dead. Use your words.

cianfrani.dev

Jonny Burch argued that design’s source of truth is moving from Figma to code. Édouard Wautier is already there. He wrote up a field report on how Dust’s design team prototypes directly in code.

After the initial analysis and quick sketchbook phase, when I need to give the idea shape and pressure-test it, I don’t open Figma. I open my development environment, pull the latest version of our repo, and create a branch. Then I ask an agent to scaffold a new prototype, and I describe what I’m trying to make.

The prototype isn’t a picture of the product—it’s built from the same design system components and tokens. So what is Wautier optimizing for at this stage?

At this point I mostly care about trying the idea and seeing whether the interaction holds. I’ll build small flows, prototype the transitions, and sanity-check the parts that static screens often hide (state changes, error cases, motion, empty states, keyboard/navigation/accessibility basics).

He’s honest about the trade-offs. You occasionally lose 30 minutes to a tooling issue. Prototypes can invite premature polish because they look real too early. And handoff clarity gets muddy—engineers aren’t always sure what’s prototype-only versus reusable.

Wautier’s closing:

More like clay than drafting: you shape, you test, you feel, you adjust — with an instantaneous feedback loop. The artifact is no longer a description of the thing. It starts to become the thing, or at least a runnable slice of it.

I believe this is the future.

3D avatar with glasses and hand on chin between a UI canvas of colorful rounded shapes and a JavaScript code editor.

Field study: prototypes over mockups

A practical guide to designing with code in 2026

uxdesign.cc

The source of truth for product design is shifting from Figma to code. I’ve been making that argument from the design side. Jonny Burch is making it from the tooling side, with a sharper prediction about what replaces Figma: nothing owned by one company.

Burch on where design interfaces are headed:

As product, design and engineering collapse together, design interfaces will start to look more like dependencies in the code itself.

A mature design system already lives in code—the Figma components are a mirror, not the original. Once AI agents can read and build against that code directly, the mirror becomes optional. Burch sees this leading to a fragmented ecosystem of code-first plugins and open tools rather than a single Figma replacement. I think he’s right about the direction, if aggressive on the timeline.

On why the pressure is building:

In modern teams it’s no longer acceptable for a designer to spend 2 weeks in their mind palace creating the perfect UI.

It’s starting to happen on my own team. Engineers with AI agents are producing working features in hours. The design phase—the Figma phase—is now the slowest part of the cycle. That’s a new and uncomfortable feeling for designers who grew up in a world where engineering was always the bottleneck.

Burch on Figma’s position in all of this:

They’re suddenly the slow incumbent with the wrong tech stack and a large enterprise customer-base adding drag.

I watched the same dynamic play out when Figma displaced Sketch. The dominant tool doesn’t always adapt fast enough. Sometimes the market just routes around it.

To be sure, I don’t wish for the death of Figma. Designers are visual thinkers and that’s what makes us different than PMs and engineers. I’m sure we’ll continue to use Figma for initial UI explorations. But instead of building out 40-screen flows, we’ll quickly move into code and generate a prototype that’ll look and feel like what we’re going to ship.

Life after Figma is coming (and it will be glorious). Subtext: As code becomes source of truth. Author: Jonny Burch.

Life after Figma is coming (and it will be glorious)

As code becomes source of truth, design tools become interfaces on code, not the other way round.

jonnyburch.com

I’ve seen this at every company past a certain size: you spot a disjointed UX problem across the product, you know what needs to happen, and then you spend three months in alignment meetings trying to get six teams to agree on a button style.

A recent piece from Laura Klein at Nielsen Norman Group examines why most product teams aren’t actually empowered, despite what the org chart claims. Klein on fragmentation:

When you have dozens of empowered teams, each optimizing its own metrics and building its own features, you get a product that feels like it was designed by dozens of different companies. One team’s area uses a modal dialog for confirmations. Another team uses an inline message. A third team navigates to a new page. The buttons say Submit in one place, Save in another, and Continue in a third. The tone of the microcopy varies wildly from formal to casual.

Users don’t see teams. They don’t see component boundaries. They just see a confusing, inconsistent product that seems to have been designed by people who never talked to each other, because, in a sense, it was.

Each team was empowered to make the best decisions for their area, and it did! But nobody was empowered to maintain coherence across the whole experience.

That last line is the whole problem. “Coherence,” as Klein calls it, is a design leadership responsibility, and it gets harder as AI lets individual teams ship faster without coordinating with each other. If every squad can generate production UI in hours instead of weeks, the fragmentation described here accelerates. Design systems become the only thing standing between your product and a Frankenstein experience.

The article is also sharp on what happens to PMs inside this dysfunction:

Picture a PM who spends 70% of her time in meetings coordinating with other teams, getting buy-in for a small change, negotiating priorities, trying to align roadmaps, escalating conflicts, chasing down dependencies, and attending working groups created to solve coordination problems. She spends a tiny fraction of her time with users. The rest is spent writing documents that explain her team’s work to other teams, updating roadmaps, reporting status, and attending planning meetings. She was hired to be a strategic product thinker, but she’s become a project manager, focused entirely on logistics and coordination.

I’ve watched this happen to PMs I’ve worked with. The coordination tax eats the strategic work. Marty Cagan calls this “product management theater”—a surplus of PMs who function as overpaid project managers. If AI compresses the engineering work but the coordination overhead stays the same, that ratio gets even more lopsided.

The fix is smaller teams with real ownership and strong design systems that enforce coherence without requiring 14 alignment meetings. But that requires organizational courage most companies don’t have.

“Why Most Product Teams Aren’t Really Empowered” headline with three hands untangling a ball of dark-blue yarn and NN/g logo.

Why Most Product Teams Aren’t Really Empowered

Although product teams say they’re empowered, many still function as feature factories and must follow orders.

nngroup.com

My essay yesterday was about the mechanics of how product design is changing—designing in code, orchestrating AI agents, collapsing the Figma-to-production handoff. That piece got into specifics. This piece by Pavel Bukengolts, writing for UX Magazine, is about the mindset:

AI is changing the how — the tools, the workflows, the speed. But the why of UX? That’s timeless.

Bukengolts is right. UX as a discipline isn’t going anywhere. But I worry that articles like this—well-intentioned and directionally correct—give designers permission to keep doing exactly what they’re doing now. “Sharpen your critical thinking” and “be the conscience in the room” is good advice. It’s also the kind of advice that lets you nod along without changing anything about your Tuesday.

The article lists the skills designers need: critical thinking, systems thinking, AI literacy, ethical awareness, strategic communication. All valid. But none of that addresses what the actual production work looks like six months from now. Bukengolts again:

In a world where AI does the work, your value is knowing why it matters and who it affects.

I agree with this in principle. The problem is the gap between “UX matters” and “your current UX role is secure.” Those are very different statements. UX will absolutely matter in an AI-powered world—someone has to shape the experience, evaluate whether it actually works for people, catch the things the model gets wrong. But the number of people doing that work, and what the job requires of them, is changing fast. I wrote in my essay that junior designers who can’t critically assess AI-generated work will find their roles shrinking fast. The skill floor is rising. Saying “stay curious and principled” isn’t wrong, but it’s not enough.

The piece closes with reassurance:

Yes, this moment is big. Yes, you’ll need to adapt. But no, you are not obsolete.

I’d feel better about that line if the article spent more time on how to adapt—not in terms of thinking skills, but in terms of the actual work. Learn to design in code. Get comfortable directing AI agents. Understand your design system well enough to make it machine-readable. Those are the specific steps that will separate designers who thrive from designers who got the mindset right but missed the shift happening underneath them.

Black 3D letters spelling CHANGE on warm backdrop; caption reads: AI can design interfaces; humans provide empathy and ethics.

Design Smarter: Future-Proof Your UX Career in the Age of AI

Is UX still a thing? AI is rising fast, but UX isn’t disappearing. It’s evolving. The big shift isn’t just tools, it’s how we think: critical thinking to spot gaps, systems thinking to map complexity, and AI literacy to understand capabilities without pretending we build it all. Empathy and ethics become the edge: designers must ask who’s affected, what’s left out, and what unintended consequences might arise. In practice, we translate data and research into a story that matters, bridging users, business, and tech, with strategic communication that keeps everyone aligned. In an AI-powered world, human judgment, why it matters, and to whom, stays central. Stay curious, sharp, and principled.

uxmag.com

If building is cheap and the real bottleneck is knowing what to build, interface design faces the same squeeze. Nielsen Norman Group’s annual State of UX report argues that UI is no longer a differentiator.

Kate Moran, Raluca Budiu, and Sarah Gibbons, writing for Nielsen Norman Group:

UI is still important, but it’ll gradually become less of a differentiator. Equating UX with UI today doesn’t just mislabel our work — it can lead to the mistaken conclusion that UX is becoming irrelevant, simply because the interface is becoming less central.

Design systems standardized the components. AI-mediated interactions now sit on top of the interface itself. The screen matters less when users talk to an agent instead of navigating pages. The report lays out where that leaves designers:

As AI-powered design tools improve, the power of standardization will be amplified and anyone will be able to make a decent-looking UI (at least from a distance). If you’re just slapping together components from a design system, you’re already replaceable by AI. What isn’t easy to automate? Curated taste, research-informed contextual understanding, critical thinking, and careful judgment.

The whole report is worth reading. The thread through all of it—job market, AI fatigue, UI commodification—is that surface-level work won’t survive leaner teams and stronger scrutiny. The value is in depth.

State of UX 2026: Design Deeper to Differentiate headline, NN/g logo, red roller-coaster with stick-figure riders flying off a loop.

State of UX in 2026

UX faced instability from layoffs, hiring freezes, and AI hype; now, the field is stabilizing, but differentiation and business impact are vital.

nngroup.com

Brand guidelines have always been a compromise. You document the rules—colors, typography, spacing, logo usage—and hope people follow them. They don’t, or they follow the letter while missing the spirit. Every designer who’s inherited a brand system knows the drift: assets that are technically on-brand but feel wrong, or interpretations that stretch “flexibility” past recognition.

Luke Wroblewski is pointing at something different:

Design projects used to end when “final” assets were sent over to a client. If more assets were needed, the client would work with the same designer again or use brand guidelines to guide the work of others. But with today’s AI software development tools, there’s a third option: custom tools that create assets on demand, with brand guidelines encoded directly in.

The key word is encoded. Not documented. Not explained in a PDF that someone skims once. Built into software that enforces the rules automatically.

Wroblewski again:

So instead of handing over static assets and static guidelines, designers can deliver custom software. Tools that let clients create their own on-brand assets whenever they need them.

That is a super interesting way of looking at it.

He built a proof of concept—the LukeW Character Maker—where an LLM rewrites user requests to align with brand style before the image model generates anything. The guidelines aren’t a reference document; they’re guardrails in the code.
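To make the “encoded” idea concrete, here is a minimal sketch of the pattern Wroblewski describes: guidelines living in code that rewrites every request before it reaches the image model. All the names here (`BrandGuidelines`, `rewriteForBrand`) and the template-based rewrite are my own illustration, not his actual implementation, which uses an LLM for the rewrite step.

```typescript
// Brand guidelines as data, enforced in code rather than documented in a PDF.
interface BrandGuidelines {
  style: string;     // e.g. "flat, friendly, rounded shapes"
  palette: string[]; // approved colors
  avoid: string[];   // off-brand elements to exclude
}

// Rewrite the user's request so it always carries the brand rules.
// In a real tool an LLM would do this rewrite; a template shows the idea.
function rewriteForBrand(userRequest: string, brand: BrandGuidelines): string {
  return [
    userRequest + ".",
    `Render in this style: ${brand.style}.`,
    `Use only these colors: ${brand.palette.join(", ")}.`,
    `Do not include: ${brand.avoid.join(", ")}.`,
  ].join(" ");
}

const prompt = rewriteForBrand("a character waving hello", {
  style: "flat, friendly, rounded shapes",
  palette: ["#1B4332", "#95D5B2"],
  avoid: ["photorealism", "gradients"],
});
```

The point is that the client never has to remember the rules; every request passes through them on its way to the model.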

This isn’t purely theoretical. When Pentagram designed Performance.gov in 2024, they delivered a library of 1,500 AI-generated icons that any federal agency could use going forward. Paula Scher defended the approach by calling it “self-sustaining”—the deliverable wasn’t a fixed set of illustrations but a system that could produce more:

The problem that’s plagued government publishing is the inability to put together a program because of the interference of different people with different ideas. This solved that.

I think this is an interesting glimpse into the future. Brand guidelines might come with software attached. I can even see a day when AI generates new design system components straight from the guidelines.

Timeline showing three green construction-worker mascots growing larger from 2000 to 2006, final one with red hard hat reading a blueprint.

Design Tools Are The New Design Deliverables

Design projects used to end when “final” assets were sent over to a client. If more assets were needed, the client would work with the same designer again or us...

lukew.com

I started my career in print. I remember specifying designs in fractional inches and points, and expecting the printed piece to match the comp exactly. When I moved to the web in the late ’90s, I brought that same expectation with me because that’s how we worked back then. Our Photoshop files were precise. But the web is an interactive, endlessly malleable medium, and if we’re being honest, that expectation is misplaced. I’ve long since changed my mind, of course.

Web developer Amit Sheen, writing for Smashing Magazine, articulates the problem with “pixel perfect” better than I’ve seen anyone do it:

When a designer asks for a “pixel-perfect” implementation, what are they actually asking for? Is it the colors, the spacing, the typography, the borders, the alignment, the shadows, the interactions? Take a moment to think about it. If your answer is “everything”, then you’ve just identified the core issue… When we say “make it pixel perfect,” we aren’t giving a directive; we’re expressing a feeling.

According to Sheen, “pixel perfect” sounds like a specification but functions as a vibe. It tells the developer nothing actionable.

He traces the problem back to print’s influence on early web design:

In the print industry, perfection was absolute. Once a design was sent to the press, every dot of ink had a fixed, unchangeable position on a physical page. When designers transitioned to the early web, they brought this “printed page” mentality with them. The goal was simple: The website must be an exact, pixel-for-pixel replica of the static mockup created in design applications like Photoshop and QuarkXPress.

Sheen doesn’t just tear down the old model. He offers replacement language. Instead of demanding “pixel perfect,” teams should ask for things like “visually consistent with the design system” or “preserves proportions and alignment logic.” These phrases describe actual requirements rather than feelings.

Sheen again, addressing designers directly:

When you hand over a design, don’t give us a fixed width, but a set of rules. Tell us what should stretch, what should stay fixed, and what should happen when the content inevitably overflows. Your “perfection” lies in the logic you define, not the pixels you draw.

I’m certain advanced designers and design teams know all of the above already. I just appreciated Sheen’s historical take. A Figma file is a hypothesis, a picture of what to build. The browser is the truth.
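Sheen’s “set of rules” could even be handed over as data rather than prose. This sketch is mine, not his; the type names and the example card are invented, and real teams might encode the same intent in design tokens or component props instead.

```typescript
// What should stretch, what should stay fixed, and what happens on overflow:
// the three questions Sheen says a handoff should answer.
type Sizing =
  | { behavior: "fixed"; px: number }
  | { behavior: "stretch"; minPx: number; maxPx?: number };

interface LayoutSpec {
  sizing: Sizing;
  overflow: "wrap" | "truncate" | "scroll";
}

// A hypothetical card component, specified as rules instead of pixels.
const cardSpec: Record<string, LayoutSpec> = {
  avatar: { sizing: { behavior: "fixed", px: 48 }, overflow: "truncate" },
  title:  { sizing: { behavior: "stretch", minPx: 120 }, overflow: "truncate" },
  body:   { sizing: { behavior: "stretch", minPx: 200, maxPx: 640 }, overflow: "wrap" },
};
```

A spec like this answers the questions a static mockup can’t: the avatar never resizes, the title absorbs extra width, and long body copy wraps instead of breaking the layout.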

Smashing Magazine article header: "Rethinking 'Pixel Perfect' Web Design" with tags, author Amit Sheen and a red cat-and-bird illustration.

Rethinking “Pixel Perfect” Web Design — Smashing Magazine

Amit Sheen takes a hard look at the “Pixel Perfect” legacy concept, explaining why it’s failing us and redefining what “perfection” actually looks like in a multi-device, fluid world.

smashingmagazine.com

One of the most interesting things about design systems is how many of them are public—maybe not open source, but public so that we can all learn from them.

The earliest truly public, documented design systems showed up in the early 2010s. There isn’t a single “first,” but a few set the tone. GOV.UK published openly and became the public‑sector benchmark. Google’s Material landed in 2014 with a comprehensive spec. Salesforce’s Lightning started surfacing around 2013–2014 and matured mid‑decade. IBM’s Carbon followed soon after. Earlier frameworks like Bootstrap and Foundation (2011) acted like de facto systems for many teams, but they weren’t a company’s product design system made public.

PJ Onori says that public design systems are a “marketplace of ideas.”

Public design systems have lifted all boats in the harbor. Most design system teams do the rounds to see how other teams have tackled problems. Every system that raises the bar puts healthy pressure on others to meet or exceed it. This shared ecosystem may be the most important facet of the design systems practice.

Onori also says that there may be a growing trend to shut down public design systems:

There’s a growing trend to close down public systems. Funny enough, the first thing I did when I left Pinterest was clone the Gestalt repo. I had this spidey sense it wouldn’t be around forever. Yes, their web codebase is still open source, but the docs have gone private. That one stung. Gestalt wasn’t the first design system to be public. It wasn’t the best one either. But its hat was in the ring–and that’s what mattered.

But that’s only one design system, right? Sadly, I’m hearing more chatter about mounting pressure on teams to privatize their systems.

This is an incredibly shitty idea.

Why? Because that’s how we all learn from each other. That’s how something like the Component Gallery can exist as a resource for all of us.

Open design systems are the library for people wanting to get into design systems. They’re a free resource to expand their understanding. There’s no college of design systems. Bootcamps exist, but they’re bootcamps–and I’ll leave it at that. The generation who shaped design systems didn’t create universities–they built libraries. Those libraries can train the next generation once people like me age out. When the libraries go, so does the transfer of knowledge.

Public design systems are worth it

Public design systems are worth it

It’s incredibly valuable to make a design system available to all–no matter what the bean-counters say.

pjonori.blog

I’ve linked to a footer gallery, a navbar gallery, and now to round us out, here is a full-on Component Gallery. Web developer Iain Bean has been maintaining this library since 2019.

Bean writes in the about page:

The original idea for this site came from A Pattern Language, a 1977 book focused on architecture, building and planning, which describes over 250 ‘patterns’: forms which fit specific contexts, or to put it another way, solutions to design problems. Examples include: ‘Beer hall’, ‘Positive outdoor space’ and ‘Light on two sides of every room’.

Whereas the book focuses on the physical world, my original aim with this site was to focus on those patterns that appear on the web; these often borrow the word ‘pattern’ (see Patterns on the GOV.UK design system), but are more commonly called components, hence ‘the component gallery’ — unlike a component library, most of these components aren’t ready to use off-the-shelf, but they’ll hopefully inspire you to design your own solution to the problem you’re working to solve.

So if you ever need a reference for how different design systems handle certain components (e.g., combobox, segmented control, or toast), this is your site.

The Component Gallery

The Component Gallery

An up-to-date repository of interface components based on examples from the world of design systems, designed to be a reference for anyone building user interfaces.

component.gallery

I love this piece in The Pudding by Michelle Pera-McGhee, where she breaks down what motifs are and how they’re used in musicals. Using audio samples from Wicked, Les Misérables, and Hamilton, it’s a fun, interactive—sound on!—essay.

Music is always telling a story, but here that is quite literal. This is especially true in musicals like Les Misérables or Hamilton where the entire story is told through song, with little to no dialogue. These musicals rely on motifs to create structure and meaning, to help tell the story.

So a motif doesn’t just exist, it represents something. This creates a musical storytelling shortcut: when the audience hears a motif, that something is evoked. The audience can feel this information even if they can’t consciously perceive how it’s being delivered.

If you think about it, motifs are the design systems of musicals.

Pera-McGhee lists out the different use cases and techniques for motifs:

  • Representing a character with a recurring musical idea, often updated as the character evolves.
  • Representing an abstract idea (love, struggle, hope) via leitmotifs that recur across scenes.
  • Creating emotional layers by repeating the same motif in contrasting contexts (joy vs. grief).
  • Weaving multiple motifs together at key structural moments (end-of-act ensembles like “One Day More” and “Non-Stop”).

I’m also reminded of this excellent video about the motifs in Hamilton.

Explore 80+ motifs at left; Playbill covers for Hamilton, Wicked, Les Misérables center; yellow motif arcs over timeline labeled Act 1 | Act 2.

How musicals use motifs to tell stories

Explore motifs from Hamilton, Wicked, and Les Misérables.

pudding.cool

Designer and front-end dev Ondřej Konečný has a lovely presentation of his book collection.

My favorites that I’ve read include:

  • Creative Selection by Ken Kocienda (my review)
  • Grid Systems in Graphic Design by Josef Müller-Brockmann
  • Steve Jobs by Walter Isaacson
  • Don’t Make Me Think by Steve Krug
  • Responsive Web Design by Ethan Marcotte

(h/t Jeffrey Zeldman)

Books page showing a grid of colorful book covers with titles, authors, and years on a light background.

Ondřej Konečný | Books

Ondřej Konečný’s personal website.

ondrejkonecny.com

Ryan Feigenbaum created a fun Teenage Engineering-inspired color palette generator he calls “ColorPalette Pro.” Back in 2023, he was experimenting with programmatic palette generation. But he didn’t like his work, calling the resulting palettes “gross, with luminosity all over the place, clashing colors, and garish combinations.”

So Feigenbaum went back to the drawing board:

That set off a deep dive into color theory, reading various articles and books like Josef Albers’ Interaction of Color (1963), understanding color space better, all of which coincided with an explosion of new color methods and technical support on the web.

These frustrations and browser improvements culminated in a realization and an app.
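The fix for “luminosity all over the place” maps to a simple idea that can be sketched in a few lines: work in a perceptual color space, pin lightness and chroma, and let only the hue move. This sketch is mine, not Feigenbaum’s; the function name and default values are invented, and it just emits `oklch()` CSS strings.

```typescript
// Generate `count` evenly spaced hues at a constant perceptual lightness
// and chroma, expressed as oklch() CSS color strings.
function evenHuePalette(count: number, lightness = 0.7, chroma = 0.15): string[] {
  const step = 360 / count;
  return Array.from({ length: count }, (_, i) => {
    const hue = Math.round(i * step);
    return `oklch(${lightness} ${chroma} ${hue})`;
  });
}

const palette = evenHuePalette(5);
// Every swatch shares the same perceived lightness, so nothing clashes
// on luminosity even though the hues span the whole wheel.
```

Because OKLCH is perceptually uniform (unlike HSL), holding lightness constant actually looks constant, which is exactly the property the early “gross” palettes lacked.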

Here he is, demoing his app:

COLORPALETTE PRO UI showing Vibrant Violet: color wheel, purple-to-orange swatch grid, and lightness/chroma/hue sliders.

Color Palette Pro — A Synthesizer for Color Palettes

Generate customizable color palettes in advanced color spaces that can be easily shared, downloaded, or exported.

colorpalette.pro