
Brand guidelines have always been a compromise. You document the rules—colors, typography, spacing, logo usage—and hope people follow them. They don’t, or they follow the letter while missing the spirit. Every designer who’s inherited a brand system knows the drift: assets that are technically on-brand but feel wrong, or interpretations that stretch “flexibility” past recognition.

Luke Wroblewski is pointing at something different:

Design projects used to end when “final” assets were sent over to a client. If more assets were needed, the client would work with the same designer again or use brand guidelines to guide the work of others. But with today’s AI software development tools, there’s a third option: custom tools that create assets on demand, with brand guidelines encoded directly in.

The key word is encoded. Not documented. Not explained in a PDF that someone skims once. Built into software that enforces the rules automatically.

Wroblewski again:

So instead of handing over static assets and static guidelines, designers can deliver custom software. Tools that let clients create their own on-brand assets whenever they need them.

That is a super interesting way of looking at it.

He built a proof of concept—the LukeW Character Maker—where an LLM rewrites user requests to align with brand style before the image model generates anything. The guidelines aren’t a reference document; they’re guardrails in the code.
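Wroblewski hasn’t published the Character Maker’s internals, but the pattern itself is simple to sketch. Below is a minimal, hypothetical version in Python: the brand rules live in code, and every user request passes through an LLM rewrite step before it ever reaches the image model. All names, rules, and function signatures here are invented for illustration.

```python
# Hypothetical sketch of "guidelines as guardrails": brand rules are encoded
# in software, and every request is rewritten to comply before generation.
# This is an illustration, not Wroblewski's actual implementation.
from typing import Callable

BRAND_RULES = [
    "Flat vector style, no gradients or photorealism",
    "Palette limited to #1B365D, #F4B223, and white",
    "Characters always drawn with rounded, friendly shapes",
]

def rewrite_for_brand(user_request: str, llm: Callable[[str], str]) -> str:
    """Ask an LLM to restate the request so it satisfies every brand rule."""
    instructions = "\n".join(f"- {rule}" for rule in BRAND_RULES)
    prompt = (
        "Rewrite the following image request so it follows these brand "
        f"guidelines:\n{instructions}\n\nRequest: {user_request}"
    )
    return llm(prompt)

def generate_asset(user_request: str, llm, image_model):
    # The user never talks to the image model directly; the guardrail
    # rewrite is enforced in code, not left to a PDF nobody reads.
    on_brand_prompt = rewrite_for_brand(user_request, llm)
    return image_model(on_brand_prompt)
```

In a real deployment, `llm` and `image_model` would be actual API calls; the point is that the guidelines are a mandatory step in the pipeline, not a reference document.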

This isn’t purely theoretical. When Pentagram designed Performance.gov in 2024, they delivered a library of 1,500 AI-generated icons that any federal agency could use going forward. Paula Scher defended the approach by calling it “self-sustaining”—the deliverable wasn’t a fixed set of illustrations but a system that could produce more:

The problem that’s plagued government publishing is the inability to put together a program because of the interference of different people with different ideas. This solved that.

I think this is an interesting glimpse into the future. Brand guidelines might ship with software. I can even see a day when AI generates new design system components based on guidelines.

Timeline showing three green construction-worker mascots growing larger from 2000 to 2006, final one with red hard hat reading a blueprint.

Design Tools Are The New Design Deliverables

Design projects used to end when "final" assets were sent over to a client. If more assets were needed, the client would work with the same designer again or us...

lukew.com

I spent all of last week linking to articles that say designers need to be more strategic. I still stand by that. But that doesn’t mean we shouldn’t understand the technical side of things.

Benhur Senabathi, writing for UX Collective, shipped 3 apps and 15+ working prototypes in 2025 using Claude Code and Cursor. His takeaway:

I didn’t learn to code this year. I learned to orchestrate. The difference matters. Coding is about syntax. Orchestration is about intent, systems, and knowing what ‘done’ looks like. Designers have been doing that for years. The tools finally caught up.

The skills that make someone good at design—defining outcomes, anticipating edge cases, communicating intent to people who don’t share your context—are exactly what AI-assisted building requires.

Senabathi again:

Prompting well isn’t about knowing how to code. It’s about articulating the ‘what’ and ‘why’ clearly enough that the AI can handle the ‘how.’

This echoes how Boris Cherny uses Claude Code. Cherny runs 10–15 parallel sessions, treating AI as capacity to orchestrate rather than a tool to use. Same insight, different vantage point: Cherny from engineering, Senabathi from design.

GitHub contributions heatmap reading "701 contributions in the last year" with Jan–Sep labels and varying green activity squares

Designers as agent orchestrators: what I learnt shipping with AI in 2025

Why shipping products matters in the age of AI and what designers can learn from it

uxdesign.cc

One of my favorite parts of shipping a product is finding out how people actually use it. Not how we intended them to use it—how they bend it, repurpose it, surprise us with it. That’s when you learn what you really built.

Karo Zieminski, writing for Product with Attitude, captures a great example of this in her breakdown of Anthropic’s Cowork launch. She quotes Anthropic engineer Boris Cherny:

Since we launched Claude Code, we saw people using it for all sorts of non-coding work: conducting vacation research, creating slide presentations, organizing emails, cancelling subscriptions, retrieving wedding photos from hard drives, tracking plant growth, and controlling ovens.

Controlling ovens. I love it. Users took a coding tool and turned it into a general-purpose assistant because that’s what they needed it to be.

Simon Willison had already spotted this:

Claude Code is a general agent disguised as a developer tool. What it really needs is a UI that doesn’t involve the terminal and a name that doesn’t scare away non-developers.

That’s exactly what Anthropic shipped in Cowork. Same engine, new packaging, name that doesn’t say “developers only.”

This is the beauty of what we do. Once you create something, it’s really up to users to show you how it should be used. Your job is to pay attention—and have the humility to build what the behavior is asking for, not what your roadmap says.

Cartoon girl with ponytail wearing an oversized graduation cap with yellow tassel, carrying books and walking while pointing ahead.

Anthropic Shipped Claude Cowork in 10 Days Using Its Own AI. Here’s Why That Changes Everything.

The acceleration that should make product leaders sit up.

open.substack.com

When I managed over 40 creatives at a digital agency, the hardest part wasn’t the work itself—it was resource allocation. Who’s got bandwidth? Who’s blocked waiting on feedback? Who’s deep in something and shouldn’t be interrupted? You learn to think of your team not as individuals you assign tasks to, but as capacity you orchestrate.

I was reminded of that when I read about Boris Cherny’s approach to Claude Code. Cherny is a Staff Engineer at Anthropic who helped build Claude Code. Karo Zieminski, writing in her Product with Attitude Substack, breaks down how Cherny actually uses his own tool:

He keeps ~10–15 concurrent Claude Code sessions alive: 5 in terminal (tabbed, numbered, with OS notifications). 5–10 in the browser. Plus mobile sessions he starts in the morning and checks in on later. He hands off sessions between environments and sometimes teleports them back and forth.

Zieminski’s analysis is sharp:

Boris doesn’t see AI as a tool you use, but as a capacity you schedule. He’s distributing cognition like compute: allocate it, queue it, keep it hot, switch contexts only when value is ready. The bottleneck isn’t generation; it’s attention allocation.

Most people treat AI assistants like a single very smart coworker. You give it a task, wait for the answer, evaluate, iterate. Cherny treats Claude like a team—multiple parallel workers, each holding different context, each making progress while he’s focused elsewhere.

Zieminski again:

Each session is a separate worker with its own context, not a single assistant that must hold everything. The “fleet” approach is basically: don’t make one brain do all jobs; run many partial brains.

I’ve been using Claude Code for months, but mostly one session at a time. Reading this, I realize I’ve been thinking too small. The parallel session model is about working efficiently. Start a research task in one session, let it run while you code in another, check back when it’s ready.
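Cherny’s fleet pattern maps loosely onto ordinary concurrency: start independent tasks up front, keep your attention elsewhere, and collect results only when each is ready. A toy asyncio sketch of that shape, with session names invented for illustration:

```python
# A loose code analogy for the "fleet" model: many partial workers running
# concurrently, each with its own context, gathered when value is ready.
import asyncio

async def session(name: str, seconds: float) -> str:
    """Stand-in for one long-running agent session."""
    await asyncio.sleep(seconds)
    return f"{name}: done"

async def fleet() -> list[str]:
    # Start everything first; nothing blocks until we ask for results.
    tasks = [
        asyncio.create_task(session("research", 0.02)),
        asyncio.create_task(session("refactor", 0.01)),
        asyncio.create_task(session("docs", 0.03)),
    ]
    # ...your own attention goes elsewhere here...
    return await asyncio.gather(*tasks)

results = asyncio.run(fleet())
```

The bottleneck, as Zieminski says, isn’t generation; it’s when and where you spend attention collecting the results.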

Looks like the new skill on the block is orchestration.

Cartoon avatar in an orange cap beside text "I'm Boris and I created Claude Code." with "6.4M Views" in a sketched box.

How Boris Cherny Uses Claude Code

An in-depth analysis of how Boris Cherny, creator of Claude Code, uses it — and what it reveals about AI agents, responsibility, and product thinking.

open.substack.com

I started my career in print. I remember specifying designs in fractional inches and points, and expecting the printed piece to match the comp exactly. When I moved to the web in the late ’90s, I brought that same expectation with me because that’s how we worked back then. Our Photoshop files were precise. But the web is an interactive, endlessly malleable medium, and that expectation was misplaced. I’ve long since changed my mind, of course.

Web developer Amit Sheen, writing for Smashing Magazine, articulates the problem with “pixel perfect” better than I’ve seen anyone do it:

When a designer asks for a “pixel-perfect” implementation, what are they actually asking for? Is it the colors, the spacing, the typography, the borders, the alignment, the shadows, the interactions? Take a moment to think about it. If your answer is “everything”, then you’ve just identified the core issue… When we say “make it pixel perfect,” we aren’t giving a directive; we’re expressing a feeling.

According to Sheen, “pixel perfect” sounds like a specification but functions as a vibe. It tells the developer nothing actionable.

He traces the problem back to print’s influence on early web design:

In the print industry, perfection was absolute. Once a design was sent to the press, every dot of ink had a fixed, unchangeable position on a physical page. When designers transitioned to the early web, they brought this “printed page” mentality with them. The goal was simple: The website must be an exact, pixel-for-pixel replica of the static mockup created in design applications like Photoshop and QuarkXPress.

Sheen doesn’t just tear down the old model. He offers replacement language. Instead of demanding “pixel perfect,” teams should ask for things like “visually consistent with the design system” or “preserves proportions and alignment logic.” These phrases describe actual requirements rather than feelings.

Sheen again, addressing designers directly:

When you hand over a design, don’t give us a fixed width, but a set of rules. Tell us what should stretch, what should stay fixed, and what should happen when the content inevitably overflows. Your “perfection” lies in the logic you define, not the pixels you draw.

I’m certain advanced designers and design teams know all of the above already. I just appreciated Sheen’s historical take. A Figma file is a hypothesis, a picture of what to build. The browser is the truth.

Smashing Magazine article header: "Rethinking 'Pixel Perfect' Web Design" with tags, author Amit Sheen and a red cat-and-bird illustration.

Rethinking “Pixel Perfect” Web Design — Smashing Magazine

Amit Sheen takes a hard look at the “Pixel Perfect” legacy concept, explaining why it’s failing us and redefining what “perfection” actually looks like in a multi-device, fluid world.

smashingmagazine.com

I became an associate creative director (ACD) in 2005, ten years after I started working professionally, when the digital agency Organic hired me into that role. I remember struggling mightily with trusting my team to do the work. In my previous job as an art director, I hated it when my ACD or CD would go into my files after I’d gone home and just redo stuff. I didn’t do that to my team, but it was very difficult to fight the urge to redo work or to avoid pushing my own design direction. (I failed on the latter.) That’s an intrinsic problem.

Sometimes, the issue is extrinsic, especially when you’re promoted into a leadership role from being an individual contributor (IC). The transition is a struggle. You get promoted because you were great at the work, and then the organization keeps pulling you back to do the work instead of leading at the level your new role demands.

Sabina Nawaz, writing for Harvard Business Review, explains why promotions grant potential but not always permission:

Research shows many midlevel and senior leaders still spend a disproportionate amount of time on tactical work rather than enterprise leadership. In my coaching work with senior leaders, I’ve found that while promotions provide the potential to lead strategically, they don’t always grant permission to do so. To gain that, you must do the hidden (and harder) work of redefining how you think, behave, and interact within the system.

That phrase, “potential but not permission,” is the whole problem in four words. You have the title, but the org’s muscle memory keeps treating you like your old self.

Nawaz identifies a common culprit: bosses who can’t let go of their former role:

Because the SVP had personally run my client’s division for years, he struggled to let go of intervening in the VP’s work. Six months into the transition, the SVP was still reviewing every decision, overriding calls, and re-engaging in tactical discussions he no longer needed to oversee. While he explained his involvement as giving feedback and advice, he was “overhelping,” a seemingly benign act that research suggests can ultimately erode trust, autonomy, and performance.

I’ve watched this pattern derail design organizations. A new creative director gets promoted, but the VP who used to hold that role keeps jumping into design reviews, redlining layouts, second-guessing type choices. The CD never develops their own judgment because their boss never leaves the room.

Nawaz’s advice for breaking the cycle is direct:

Take a quick glance at your calendar and ask yourself if it still reflects the activities, information flow, and ownership items of your prior role. Just as you need your boss to step back to empower you, you must redesign where you spend your time and which decisions to let your team fully own.

Your calendar doesn’t lie. If it’s packed with the same meetings you attended before your promotion, you haven’t actually made the transition. You’ve just added a new title to your old job.

Older person with short gray hair and glasses in profile, hand on chin, overlaid with orange dots and black swirling line.

Your New Role Requires Strategic Thinking…But You’re Stuck in the Weeds

Senior-level promotions are an opportunity for leaders to impact a company’s strategy, but it’s easy to get pulled back into the tactical weeds. A visibly higher spot on the organizational chart doesn’t guarantee time for strategic thinking. To gain that, you must do the hidden (and harder) work of redefining how you think, behave, and interact within the system, and be adaptable to the unpredictable needs of stakeholders you need to influence. Here’s how to protect your ability to lead at the altitude your new role requires—and that your team needs to succeed.

hbr.org

Nice mini-site from Figma showcasing the “iconic interactions” of the last 20 years. It explores how software has become inseparable from how we think and connect—and how AI is accelerating that shift toward adaptive, conversational interfaces. Made with Figma Make, of course.

Centered bold white text "Software is culture" on a soft pastel abstract gradient background (pink, purple, green, blue).

Software Is Culture

Yesterday's software has shaped today's generation. To understand what's next as software grows more intelligent, we look back on 20 years of interaction design.

figma.com

Every designer has noticed that specific seafoam green in photos of mid-century control rooms. It shows up in nuclear plants, NASA mission control, old hospitals. Wasn’t the hospital in 1975’s One Flew Over the Cuckoo’s Nest that color? It’s too consistent to be coincidence.

Beth Mathews traced the origin back to color theorist Faber Birren, who consulted for DuPont and created the industrial color safety codes still in use today. His reasoning:

“The importance of color in factories is first to control brightness in the general field of view for an efficient seeing condition. Interiors can then be conditioned for emotional pleasure and interest, using warm, cool, or luminous hues as working conditions suggest. Color should be functional and not merely decorative.”

Color should be functional and not merely decorative. These weren’t aesthetic choices—they were human factors engineering decisions, made in environments where one mistake could be catastrophic. The seafoam green was specifically chosen to reduce visual fatigue. Kinda cool.

Vintage teal industrial control room with wall-mounted analog gauges and switches, wooden swivel chair and yellow rope barrier.

Why So Many Control Rooms Were Seafoam Green

The Color Theory Behind Industrial Seafoam Green

open.substack.com

“Taste” gets invoked constantly in conversations about what AI can’t replace. But it’s often left undefined—a hand-wave toward something ineffable that separates good work from average work.

Yan Liu offers a working definition:

Product taste is the ability to quickly recognize whether something is high quality or not.

That’s useful because it frames taste as judgment, not aesthetics. Can you tell if a feature addresses a real problem? Can you sense what’s off about an AI-generated PRD even when it’s formatted correctly? Can you distinguish short-term growth tactics from long-term product health?

Liu cites Rick Rubin’s formula:

Great taste = Sensitivity × Standards

Sensitivity is how finely you perceive—noticing friction, asking why a screen exists, catching the moment something feels wrong. Standards are your internal reference system for what “good” actually looks like. Both can be trained.

This connects to something Dan Ramsden wrote in his piece on design’s value in product organizations: “taste without a rationale is just an opinion.” Liu’s framework gives taste a rationale. It’s not magic. It’s pattern recognition built through deliberate exposure and reflection.

The closing line is the one that sticks:

The real gap won’t be between those who use AI well and those who don’t. It will be between those who already know what “good” looks like before they ever open an AI tool.

Yellow background with centered black text "Product: It's all about Taste!" and thin black corner brackets.

Everyone Talks about “Taste”. What Is It? Why It Matters?

In 2025, you may have heard a familiar line repeated across the product world:

uxplanet.org

If design’s value isn’t execution—and AI is making that argument harder to resist—then what is it? Dan Ramsden offers a framework I find useful.

He breaks thinking into three types: deduction (drawing conclusions from data), induction (building predictions from patterns), and abduction—generating something new. Design’s unique contribution is abductive thinking:

When we use deduction, we discover users dropping off during a registration flow. Induction might tell us why. Abduction would help us imagine new flows to fix it.

Product managers excel at sense-making (aka “Why?”). Engineers build the thing. Design makes the difference—moving from “what is” to “what could be.”

On AI and the temptation to retreat to “creativity” or “taste” as design’s moat, Ramsden is skeptical:

Some might argue that it comes down to “taste”. I don’t think that’s quite right — taste without a rationale is just an opinion. I think designers are describers.

I appreciate that distinction. Taste without rationale is just preference. Design’s value is translating ideas through increasing levels of fidelity—from sketch to prototype to tested solution—validating along the way.

His definition of design in a product context:

Design is a set of structured processes to translate intent into experiments.

That’s a working definition I can use. It positions design not as the source of ideas (those can come from anywhere, including AI), but as the discipline that manages ideas through validation. The value isn’t in generating the concept—it’s in making it real while managing risk.

Two overlapping blue circles: left text "Making sense to generate a problem"; right text "Making a difference to generate value".

The value of Design in a product organisation

Clickbait opening: There’s no such thing as Product Design

medium.com

This piece cites my own research on the collapse of entry-level design hiring, but it goes further—arguing that AI didn’t cause the crisis. It exposed one that’s been building for over a decade.

Dolphia, writing for UX Collective:

We told designers they didn’t need technical knowledge. Then we eliminated their jobs when they couldn’t influence technical decisions. That’s not inclusion. That’s malpractice.

The diagnosis is correct. The design industry spent years telling practitioners they didn’t need to understand implementation. And now those same designers can’t evaluate AI-generated output, can’t participate in architecture discussions, can’t advocate effectively when technical decisions are being made.

Dolphia’s evidence is damning. When Figma Sites launched, it generated 210 WCAG accessibility violations on demo sites—and designers couldn’t catch it because they didn’t know what to look for:

The paradox crystalizes: tools marketed as democratization require more technical knowledge than traditional workflows, not less.

Where I’d add nuance: the answer isn’t “designers should learn to code.” It’s that designers need to understand the medium they’re designing for. There’s a difference between writing production code and understanding what code does, between implementing a database schema and knowing why data models influence user workflows.

I’ve been rebuilding my own site with AI assistance for over a year now. I can’t write JavaScript from scratch. But I understand enough about static site generation, database trade-offs, and performance constraints to make informed architectural decisions and direct AI effectively. That’s the kind of technical literacy that matters—not syntax, but systems thinking.

In “From Craft to Curation,” I argued that design value is shifting from execution to direction. Dolphia’s piece is the corollary: you can’t provide direction if you don’t understand what you’re directing.

Speaker on stage wearing a black "Now with AI" T-shirt and headset mic, against a colorful sticky-note presentation backdrop.

Why AI is exposing design’s craft crisis

AI didn’t create the craft crisis in design — it exposed the technical literacy gap that’s been eroding strategic influence for over a…

uxdesign.cc

The data from Lenny’s Newsletter’s AI productivity survey showed PMs ranking prototyping as their #2 use case for AI, ahead of designers. Here’s what that looks like in practice.

Figma is now teaching PMs to build prototypes instead of writing PRDs. Using Figma Make, product managers can go from idea to interactive prototype without waiting on design. Emma Webster writing in Figma’s blog:

By turning early directions into interactive, high-fidelity prototypes, you can more easily explore multiple concepts and take ideas further. Instead of spending time writing documentation that may not capture the nuances of a product, prototypes enable you to show, rather than tell.

The piece walks through how Figma’s own PMs use Make for exploration, validation, and decision-making. One PM prototyped a feature flow and ran five user interviews—all within two days. Another used it to workshop scrolling behavior options that were “almost impossible to describe” in words.

The closing is direct about what this means for roles:

In this new landscape, the PMs who thrive will be those who embrace real-time iteration, moving fluidly across traditional role boundaries.

“Traditional role boundaries” being design’s territory.

This isn’t a threat if designers are already operating upstream—defining what to build, not just how it looks. But if your value proposition is “I make the mockups,” PMs now have tools to do that themselves.

Abstract blue scene with potted plants and curving vines, birds perched, a trumpet and ladder amid geometric icons.

Prototypes Are the New PRDs

Inside Figma Make, product managers are pressure-testing assumptions early, building momentum, and rallying teams around something tangible.

figma.com

The optimistic case for designers in an AI-driven world is that design becomes strategy—defining what to build, not just how it looks. But are designers actually making that shift?

Noam Segal and Lenny Rachitsky, writing for Lenny’s Newsletter, share results from a survey of 1,750 tech workers. The headline is that AI is “overdelivering”—55% say it exceeded expectations, and most report saving at least half a day per week. But the findings by role tell a different story for designers:

Designers are seeing the fewest benefits. Only 45% report a positive ROI (compared with 78% of founders), and 31% report that AI has fallen below expectations, triple the rate among founders.

Meanwhile, founders are using AI to think—for decision support, product ideation, and strategy. They treat it as a thought partner, not a production tool. And product managers are building prototypes themselves:

Compare prototyping: PMs have it at #2 (19.8%), while designers have it at #4 (13.2%). AI is unlocking skills for PMs outside of their core work, whereas designers aren’t seeing the marginal improvement benefits from AI doing their core work.

The survey found that AI helps designers with work around design—research synthesis, copy, ideation—but visual design ranks #8 at just 3.3%. As Segal puts it:

AI is helping designers with everything around design, but pushing pixels remains stubbornly human.

This is the gap. The strategic future is available, but designers aren’t capturing it at the same rate as other roles. The question is why—and what to do about it.

Checked clipboard showing items like Speed, Quality and Research, next to headline "How AI is impacting productivity for tech workers".

AI tools are overdelivering: results from our large-scale AI productivity survey

What exactly AI is doing for people, which AI tools have product-market fit, where the biggest opportunities remain, and what it all means

lennysnewsletter.com

Previously, I linked to Doug O’Laughlin’s piece arguing that UIs are becoming worthless—that AI agents, not humans, will be the primary consumers of software. It’s a provocative claim, and as a designer, I’ve been chewing on it.

Jeff Veen offers the counterpoint. Veen—a design veteran who cofounded Typekit and led products at Adobe—argues that an agentic future doesn’t diminish design. It clarifies it:

An agentic future elevates design into pure strategy, which is what the best designers have wanted all along. Crafting a great user experience is impossible if the way in which the business expresses its capabilities is muddied, vague or deceptive.

This is a more optimistic take than O’Laughlin’s, but it’s rooted in the same observation: when agents strip applications down to their primitives—APIs, CLI commands, raw capabilities, (plus data structures, I’d argue)—what’s left is the truth of what a business actually does.

Veen’s framing through responsive design is useful. Remember “mobile first”? The constraint of the small screen forced organizations to figure out what actually mattered. Everything else was cruft. Veen again:

We came to realize that responsive design wasn’t just about layouts, it was about forcing organizations to confront what actually mattered.

Agentic workflows do the same thing, but more radically. If your product can only be expressed through its API, there’s no hiding behind a slick dashboard or clever microcopy.

His closing question is great:

If an agent used your product tomorrow, what truths would it uncover about your organization?

For designers, this is the strategic challenge. The interface layer may become ephemeral—generated on the fly, tailored to the user, disposable. But someone still has to define what the product is. That’s design work. It’s just not pixel work.

Three smartphone screens showing search-result lists of app shortcuts: Wells Fargo actions, Contacts actions, and KAYAK trip/flight actions.

On Coding Agents and the Future of Design

How Claude Code is showing us what apps may become

veen.com

The rise of micro apps describes what’s happening from the bottom up—regular people building their own tools instead of buying software. But there’s a top-down story too: the structural obsolescence of traditional software companies.

Doug O’Laughlin makes the case using a hardware analogy—the memory hierarchy. AI agents are fast, ephemeral memory (like DRAM), while traditional software companies need to become persistent storage (like NAND, or ROM if you’re old school like me). The implication:

Human-oriented consumption software will likely become obsolete. All horizontal software companies oriented at human-based consumption are obsolete.

That’s a bold claim. O’Laughlin goes further:

Faster workflows, better UIs, and smoother integrations will all become worthless, while persistent information, a la an API, will become extremely valuable.

As a designer, this is where I start paying close attention. The argument is that if AI agents become the primary consumers of software—not humans—then the entire discipline of UI design is in question. O’Laughlin names names:

Figma could be significantly disrupted if UIs, as a concept humans create for other humans, were to disappear.

I’m not ready to declare UIs dead. People still want direct manipulation, visual feedback, and the ability to see what they’re doing. But the shift O’Laughlin describes is real: software’s value is migrating from presentation to data. The interface becomes ephemeral—generated on the fly, tailored to the task—while the source of truth persists.

This is what I was getting at in my HyperCard essay: the tools we build tomorrow won’t look like the apps we buy today. They’ll be temporary, personal, and assembled by AI from underlying APIs and data. The SaaS companies that survive will be the ones who make their data accessible to agents, not the ones with the prettiest dashboards.

Memory hierarchy pyramid: CPU registers and cache (L1–L3) top; RAM; SSD flash; file-based virtual memory bottom; speed/cost/capacity notes.

The Death of Software 2.0 (A Better Analogy!)

The age of PDF is over. The time of markdown has begun. Why Memory Hierarchies are the best analogy for how software must change. And why software is unlikely to command the most value.

fabricatedknowledge.com

Almost a year ago, I linked to Lee Robinson’s essay “Personal Software” and later explored why we need a HyperCard for the AI era. The thesis: people would stop searching the App Store and start building what they need. Disposable tools for personal problems.

That future is arriving. Dominic-Madori Davis, writing for TechCrunch, documents the trend:

It is a new era of app creation that is sometimes called micro apps, personal apps, or fleeting apps because they are intended to be used only by the creator (or the creator plus a select few other people) and only for as long as the creator wants to keep the app. They are not intended for wide distribution or sale.

What I find compelling here is the word “fleeting.” We’ve been conditioned to think of software as permanent infrastructure—something you buy, maintain, and eventually migrate away from. But these micro apps are disposable by design. One founder built a gaming app for his family to play over the holidays, then shut it down when vacation ended. That’s not a failed product. That’s software that did exactly what it needed to do.

Howard University professor Legand L. Burge III frames it well:

It’s similar to how trends on social media appear and then fade away. But now, [it’s] software itself.

The examples in the piece range from practical (an allergy tracker, a parking ticket auto-payer) to whimsical (a “vice tracker” for monitoring weekend hookah consumption). But the one that stuck with me was the software engineer who built his friend a heart palpitation logger so she could show her doctor her symptoms. That’s software as a favor. Software as care.

Christina Melas-Kyriazi from Bain Capital Ventures offers what I think is the most useful framing:

It’s really going to fill the gap between the spreadsheet and a full-fledged product.

This is exactly right. For years, spreadsheets have been the place where non-developers build their own tools—janky, functional, held together with VLOOKUP formulas and conditional formatting. Micro apps are the evolution of that impulse, but with real interfaces and actual logic.

The quality concerns are real—bugs, security flaws, apps that only their creator can debug. But for personal tools that handle personal problems, “good enough for one” is genuinely good enough.

Woman with white angel wings holding a glowing wand, wearing white dress and boots, hovering above a glowing smartphone.

The rise of ‘micro’ apps: non-developers are writing apps instead of buying them

A new era of app creation is here. It’s fun, it’s fast, and it’s fleeting.

techcrunch.com

Claude Code is having a moment. Anthropic’s agentic coding tool has gone viral over the past few weeks, with engineers and non-engineers alike discovering what it feels like to hand real work over to an AI and watch it execute autonomously. The popular tech podcast Hard Fork has already had two segments on it in the last two weeks. In the first, hosts Kevin Roose and Casey Newton share their Claude Code projects. And in the second, they highlight some from their listeners. (Alas, my Severance fan project did not make the cut.)

I’ve been using Cursor and Claude Code to build and rebuild this site for over a year now, so when I read this piece and see coders describing their experience with it, I understand the feeling.

Bradley Olson (gift link), writing for the Wall Street Journal:

Some described a feeling of awe followed by sadness at the realization that the program could easily replicate expertise they had built up over an entire career.

“It’s amazing, and it’s also scary,” said Andrew Duca, chief executive of Awaken Tax, a cryptocurrency tax platform. Duca has been coding since he was in middle school. “I spent my whole life developing this skill, and it’s literally one-shotted by Claude Code.”

Duca decided not to hire the engineers he’d been planning to bring on. He thinks Claude makes him five times more productive.

The productivity numbers throughout the piece are striking:

Malte Ubl is chief technology officer at Vercel, which helps develop and host websites and apps for users of Claude Code and other such tools. He said he used the tool to finish a complex project in a week that would’ve taken him about a year without AI. Ubl spent 10 hours a day on his vacation building new software and said each run gave him an endorphin rush akin to playing a Vegas slot machine.

But what caught my attention is what people are using it for beyond code—analyzing MRI data, recovering wedding photos from corrupted drives, monitoring tomato plants with a webcam. Olson again:

Unlike most app- or web-bound chatbots now in wide use, it can operate autonomously, with broad access to user files, a web browser and other applications. While technologists have predicted a coming era of AI “agents” capable of doing just about anything for humans, that future has been slow to develop. Using Claude Code was the first time many users interacted with this kind of AI, offering an inkling of what may be in store.

Anthropic took notice, of course, and launched a beta of Cowork last week.

Instead of the MS-DOS-like “command line” interface that the core app has, Cowork displays a more friendly, graphical user interface. They built the product in about 10 days—using Claude Code.

The closing question is the right one:

“The bigger story here is going to be when this goes beyond software engineering,” said David Hsu, chief executive of Retool, a business-AI startup. Software engineers make up a tiny fraction of the U.S. labor force. “How far does it go?”

Replace “software engineering” with “design” and you have the question I’m exploring this week.

Claude Code v2.0.0 terminal greeting “Welcome back Meaghan!” with orange pixel mascot; right column lists recent activity and new commands.

Claude Is Taking the AI World by Storm, and Even Non-Nerds Are Blown Away

(Gift link) Developers and hobbyists are comparing the viral moment for Anthropic’s Claude Code to the launch of generative AI

wsj.com

My wife is an obesity medicine and women’s health specialist, so she’s been in my ear talking about ultraprocessed foods for years. That’s why the processed food analogy for AI-generated software resonates. We industrialized agriculture and got abundance, yes—but also obesity, diabetes, and 318 million people still experiencing acute hunger. The problem was never production capacity.

Chris Loy applies this lens to where software is heading:

Industrial systems reliably create economic pressure toward excess, low quality goods. This is not because producers are careless, but because once production is cheap enough, junk is what maximises volume, margin, and reach. The result is not abundance of the best things, but overproduction of the most consumable ones.

Loy introduces the term “disposable software”—software created with no expectation of ownership, maintenance, or long-term understanding. Vibe-coded apps. AI slop. Whatever you want to call it, the economics are different: easy reproducibility means each output has less value, which means volume becomes the only game. Just look in the App Store for any popular category such as todo lists, notetakers, and word puzzles. Or look in r/SaaS and notice the glut of solo founders building and selling their own products.

Loy goes on to compare this movement with mass-produced fashion as well:

For example, prior to industrialisation, clothing was largely produced by specialised artisans, often coordinated through guilds and manual labour, with resources gathered locally, and the expertise for creating durable fabrics accumulated over years, and frequently passed down in family lines. Industrialisation changed that completely, with raw materials being shipped intercontinentally, fabrics mass produced in factories, clothes assembled by machinery, all leading to today’s world of fast, disposable, exploitative fashion.

Disposable fashion leads to vast overproduction, with estimates that 20–40% of garments (up to 30–60 billion pieces) go unsold. AI enables a similar waste of people’s time, tokens, electricity, and ultimately consumer dollars.

The silver lining Loy observes is innovation: the answer isn’t entirely human-written code, but doing the research and development necessary to genuinely innovate. My take is that’s exactly where designers need to be sitting.

Sepia-toned scene of a stone watermill with a large wooden wheel by a river, small rowboat and ducks, arched bridge and distant smokestacks.

The rise of industrial software

For most of its history, software has been closer to craft than manufacture: costly, slow, and dominated by the need for skills and experience. AI coding is changing that, by making available paths of production which are cheaper, faster, and increasingly disconnected from the expertise of humans.

chrisloy.dev

Last December, Cursor announced their visual editor—a way to edit UI directly in the browser. Karri Saarinen, the designer who co-founded Linear, saw it and called it a trap. Ryo Lu, the head of design at Cursor, pushed back. The Twitter back-and-forth went on for a couple of days until they conceded they mostly agreed. Tommy Geoco digs into what the debate actually surfaced.

The traditional way we talk about design tools is floor versus ceiling—does the tool make good design more accessible, or does it push what’s possible? Geoco argues the Saarinen/Lu exchange revealed a second axis: unconstrained exploration versus material exploration. Sketching on napkins versus building in code.

Saarinen’s concern:

Whenever a designer becomes more of a builder, some idealism and creativity dies. It’s not because building is bad, but because you start introducing constraints earlier in the process than you should.

Lu’s counter:

The truth only reveals itself once you start to build. Not when you think about building, not when you sketch possibilities in a protected space, but when you actually make the thing real and let reality talk back.

Both are right, and Geoco’s reframing is useful:

The question is not should designers code. It’s are you using the new speed to explore more territory or just arriving at the same destination faster?

That’s the question I keep asking myself. When I use AI tools, am I discovering ideas I wouldn’t have found otherwise, or am I just getting to obvious ideas faster? The tools make iteration cheap, but cheap iteration on the same territory isn’t progress.

I think about it this way—back when I was starting out, sketching thumbnails was the technique I used. It was very quick and easy to sketch out dozens of ideas in a sketchbook, especially when they were logo or poster ideas. When sketching interaction ideas, the technique is closer to a storyboard—connected thumbnails. But for me, once I get into a high-fidelity design or prototype, there is tremendous pull to just keep tweaking the design rather than coming up with multiple options. In other words, convergence is happening rather than continued divergence.

This was the biggest debate in design [last] year

Two designers: One built Linear. One leads design at Cursor. They got into it on Twitter for 48 hours about the use of AI coding tools in design work. This debate perfectly captures both sides of what's happening in software design right now. I've spent the year exploring how designers are experimenting on both sides of this argument. This is what I've found.

youtube.com

I’ve spent a lot of my product design career pushing for metrics—proving ROI, showing impact, making the case for design in business terms. But I’ve also seen how metrics become the goal rather than a signal pointing toward the goal. When the number goes up, we celebrate. When it doesn’t, we tweak the collection process. Meanwhile, the user becomes secondary. Last week’s big idea was about metrics; this piece piles on.

Pavel Samsonov calls this out:

Managers can only justify their place in value chains by inventing metrics for those they manage to make it look like they are managing.

I’ve sat in meetings where we debated which numbers to report to leadership—not which work to prioritize for users. The metrics become theater. So-called “vanity metrics” that always go up and to the right.

But here’s where Pavel goes somewhere unexpected. He doesn’t let designers off the hook either:

Defining success by a metric of beauty offers a useful kind of vagueness, one that NDS seems to hide behind despite the slow loading times or unnavigability that seem to define their output; you can argue with slow loading times or difficulty finding a form, but you cannot meaningfully argue with “beautiful.”

“Taste” and “beauty” are just another avoidance strategy. That’s a direct challenge to the design discourse that’s been dominant lately—the return to craft, the elevation of aesthetic judgment. Pavel’s saying it’s the same disease, different symptom. Both metrics obsession and taste obsession are ways to avoid the ambiguity of actually defining user success.

So what’s the alternative? Pavel again:

Fundamentally, the work of design is intentionally improving conditions under uncertainty. The process necessarily involves a lot of arguments over the definition and parameters of “improvement”, but the primary barrier to better is definitely not how long it takes to make artifacts.

The work is the argument. The work is facing the ambiguity rather than hiding behind numbers or aesthetics. Neither Figma velocity nor visual polish is a substitute for the uncomfortable conversation about what “better” actually means for the people using your product.

Bold "Product Picnic" text over a black-and-white rolling hill and cloudy sky, with a large outlined "50" on the right.

Your metrics are an avoidance strategy

Being able to quantify outcomes doesn’t make them meaningful. Moving past artificial metrics requires building shared intention with colleagues.

productpicnic.beehiiv.com

“I want my MTV!” That is the line that many music artists spoke to camera in a famous campaign by George Lois to get fans to call their cable companies to ask for MTV. It worked.

While MTV’s international music-only channels went off the air at the end of 2025, its US channels still exist. They’re just not all-music all the time like they were in the 1980s.

That’s where MTV Rewind comes in. It’s a virtual TV set where you can relive MTV programming as it was. Built by an artist going by FlexasaurusRex, it’s an archive of Day 1 programming, plus channels (YouTube playlists) that shuffle through different shows, including 120 Minutes.

MTV Rewind logo: yellow M with red "tv" and REWIND gradient text on a blue background patterned with pink wavy stripes.

MTV REWIND

Celebrating 44 years of continuous music videos. Stream classic music videos 24/7.

wantmymtv.vercel.app

Secretary of State Marco Rubio’s State Department font switch is a political signal dressed up as design rationale. At least that’s what Chenyang “Platy” Hsu argues. In her deep dive into the decision and with a detour into the history of certain fonts, Hsu says Times New Roman is a newspaper workhorse made for economy, not ceremony. And many U.S. institutions favor stronger serif families or purpose-built sans-serifs.

Hsu:

…the design and historical reasons cited in Rubio’s memo don’t hold up. The formality and authority of serif typefaces are largely socially constructed, and Times New Roman’s origin story and design constraints don’t express these qualities. If Times New Roman carries authority at all, it’s primarily borrowed from the authority of institutions that have adhered to it. If the sincere goal were to “return to tradition” by returning to a serif, there are many choices with deeper pedigree and more fitting gravitas.

Times New American: A Tale of Two Fonts

A less romantic truth is that aesthetic standards rarely travel alone; power tends to follow in their wake. An episode at the U.S. State Department this month makes exactly this point.

hsu.cy

It’s January, and by now millions of us have made resolutions and probably already broken them. The second Friday of January is known as “Quitter’s Day.”

OKRs—objectives and key results—are a method for businesses to set and align company goals. The objective is your goal, and the key results are the measurable milestones that tell you whether you’re reaching it. Venture capitalist John Doerr learned about OKRs while working at Intel, brought them to Google, and later became the framework’s leading evangelist.

Christina Wodtke talks about how to use OKRs for your personal life, and maybe as a way to come up with better New Year’s resolutions. She looked at her past three years of personal OKRs:

Looking at the pattern laid out in front of me, I finally saw what I’d been missing. My problem wasn’t work-life balance. My problem was that I didn’t like the kind of work I was doing.

The key results kept failing because the objective was wrong. It wasn’t about balance. It was about joy.

This is the second thing key results do for you: when they consistently fail, they’re telling you something. Not that you lack discipline—that you might be chasing the wrong goal entirely.

And I love Wodtke’s line here: “New Year’s resolutions fail because they’re wishes, not plans.” She continues:

They fail because “eat better” and “be healthier” and “find balance” are too vague to act on and too fuzzy to measure.

Key results fix this. Not because measurement is magic, but because the act of measuring forces clarity. It makes you confront what you actually want. And sometimes, when the data piles up, it reveals that what you wanted wasn’t the thing you needed at all.
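Wodtke’s point about measurement forcing clarity can be made concrete. Here’s a minimal sketch—the objective, the specific key results, and the numbers are illustrative assumptions, not from her piece—of how “eat better” turns into something you can actually score, using the common OKR convention of grading each key result from 0.0 to 1.0:

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    description: str
    target: float       # the number you committed to
    actual: float = 0.0  # what you actually did

    def score(self) -> float:
        # Cap at 1.0: overshooting a key result earns no extra credit.
        return min(self.actual / self.target, 1.0) if self.target else 0.0

@dataclass
class Objective:
    goal: str
    key_results: list[KeyResult] = field(default_factory=list)

    def score(self) -> float:
        # An objective's score is the average of its key results' scores.
        krs = self.key_results
        return sum(kr.score() for kr in krs) / len(krs) if krs else 0.0

# "Eat better" as a wish vs. as measurable key results (illustrative):
health = Objective("Feel energetic and healthy", [
    KeyResult("Cook dinner at home (nights/week)", target=5, actual=3),
    KeyResult("Vegetable servings per day", target=3, actual=3),
])
print(f"{health.goal}: {health.score():.2f}")  # → Feel energetic and healthy: 0.80
```

In Doerr-style OKR practice, scores around 0.7 are considered healthy; per Wodtke, the more interesting signal is a key result that fails cycle after cycle—it may mean the objective itself is wrong.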

Your Resolution Isn’t the Problem. Your Measurement Is.

It’s January, and millions of people have made the same resolution: “Eat better.” By February, most will have abandoned it. Not because they lack willpower or discipline. Because …

eleganthack.com

Building on our earlier link about measuring the impact of features, how can we keep track of the overall health of the product? That’s where a North Star Metric comes in.

Julia Sholtz writes an introduction to North Star Metrics on analytics provider Amplitude’s blog:

Your North Star Metric should be the key measure of success for your company’s product team. It defines the relationship between the customer problems your product team is trying to solve and the revenue you aim to generate by doing so.

How is it done? The first step is to figure out the “game” your business is playing—that is, how it engages with customers:

  1. The Attention Game: How much time are your customers willing to spend in your product?
  2. The Transaction Game: How many transactions does your user make on your platform?
  3. The Productivity Game: How efficiently and effectively can someone get their work done in your product?

They have a whole resource section on this topic that’s worth exploring.

Every Product Needs a North Star Metric: Here’s How to Find Yours

Get an introduction to product strategy with examples of North Star Metrics across industries.

amplitude.com