240 posts tagged with “ai”

When I managed over 40 creatives at a digital agency, the hardest part wasn’t the work itself—it was resource allocation. Who’s got bandwidth? Who’s blocked waiting on feedback? Who’s deep in something and shouldn’t be interrupted? You learn to think of your team not as individuals you assign tasks to, but as capacity you orchestrate.

I was reminded of that when I read about Boris Cherny’s approach to Claude Code. Cherny is a Staff Engineer at Anthropic who helped build Claude Code. Karo Zieminski, writing in her Product with Attitude Substack, breaks down how Cherny actually uses his own tool:

He keeps ~10–15 concurrent Claude Code sessions alive: 5 in terminal (tabbed, numbered, with OS notifications). 5–10 in the browser. Plus mobile sessions he starts in the morning and checks in on later. He hands off sessions between environments and sometimes teleports them back and forth.

Zieminski’s analysis is sharp:

Boris doesn’t see AI as a tool you use, but as a capacity you schedule. He’s distributing cognition like compute: allocate it, queue it, keep it hot, switch contexts only when value is ready. The bottleneck isn’t generation; it’s attention allocation.

Most people treat AI assistants like a single very smart coworker. You give it a task, wait for the answer, evaluate, iterate. Cherny treats Claude like a team—multiple parallel workers, each holding different context, each making progress while he’s focused elsewhere.

Zieminski again:

Each session is a separate worker with its own context, not a single assistant that must hold everything. The “fleet” approach is basically: don’t make one brain do all jobs; run many partial brains.

I’ve been using Claude Code for months, but mostly one session at a time. Reading this, I realize I’ve been thinking too small. The parallel session model is about working efficiently. Start a research task in one session, let it run while you code in another, check back when it’s ready.

Looks like the new skill on the block is orchestration.

Cartoon avatar in an orange cap beside text "I'm Boris and I created Claude Code." with "6.4M Views" in a sketched box.

How Boris Cherny Uses Claude Code

An in-depth analysis of how Boris Cherny, creator of Claude Code, uses it — and what it reveals about AI agents, responsibility, and product thinking.

open.substack.com

Nice mini-site from Figma showcasing the “iconic interactions” of the last 20 years. It explores how software has become inseparable from how we think and connect—and how AI is accelerating that shift toward adaptive, conversational interfaces. Made with Figma Make, of course.

Centered bold white text "Software is culture" on a soft pastel abstract gradient background (pink, purple, green, blue).

Software Is Culture

Yesterday’s software has shaped today’s generation. To understand what’s next as software grows more intelligent, we look back on 20 years of interaction design.

figma.com

“Taste” gets invoked constantly in conversations about what AI can’t replace. But it’s often left undefined—a hand-wave toward something ineffable that separates good work from average work.

Yan Liu offers a working definition:

Product taste is the ability to quickly recognize whether something is high quality or not.

That’s useful because it frames taste as judgment, not aesthetics. Can you tell if a feature addresses a real problem? Can you sense what’s off about an AI-generated PRD even when it’s formatted correctly? Can you distinguish short-term growth tactics from long-term product health?

Liu cites Rick Rubin’s formula:

Great taste = Sensitivity × Standards

Sensitivity is how finely you perceive—noticing friction, asking why a screen exists, catching the moment something feels wrong. Standards are your internal reference system for what “good” actually looks like. Both can be trained.

This connects to something Dan Ramsden wrote in his piece on design’s value in product organizations: “taste without a rationale is just an opinion.” Liu’s framework gives taste a rationale. It’s not magic. It’s pattern recognition built through deliberate exposure and reflection.

The closing line is the one that sticks:

The real gap won’t be between those who use AI well and those who don’t. It will be between those who already know what “good” looks like before they ever open an AI tool.

Yellow background with centered black text "Product: It's all about Taste!" and thin black corner brackets.

Everyone Talks about “Taste”. What Is It? Why It Matters?

In 2025, you may have heard a familiar line repeated across the product world:

uxplanet.org

This piece cites my own research on the collapse of entry-level design hiring, but it goes further—arguing that AI didn’t cause the crisis. It exposed one that’s been building for over a decade.

Dolphia, writing for UX Collective:

We told designers they didn’t need technical knowledge. Then we eliminated their jobs when they couldn’t influence technical decisions. That’s not inclusion. That’s malpractice.

The diagnosis is correct. The design industry spent years telling practitioners they didn’t need to understand implementation. And now those same designers can’t evaluate AI-generated output, can’t participate in architecture discussions, can’t advocate effectively when technical decisions are being made.

Dolphia’s evidence is damning. When Figma Sites launched, it generated 210 WCAG accessibility violations on demo sites—and designers couldn’t catch them because they didn’t know what to look for:

The paradox crystalizes: tools marketed as democratization require more technical knowledge than traditional workflows, not less.

Where I’d add nuance: the answer isn’t “designers should learn to code.” It’s that designers need to understand the medium they’re designing for. There’s a difference between writing production code and understanding what code does, between implementing a database schema and knowing why data models influence user workflows.

I’ve been rebuilding my own site with AI assistance for over a year now. I can’t write JavaScript from scratch. But I understand enough about static site generation, database trade-offs, and performance constraints to make informed architectural decisions and direct AI effectively. That’s the kind of technical literacy that matters—not syntax, but systems thinking.

In “From Craft to Curation,” I argued that design value is shifting from execution to direction. Dolphia’s piece is the corollary: you can’t provide direction if you don’t understand what you’re directing.

Speaker on stage wearing a black "Now with AI" T-shirt and headset mic, against a colorful sticky-note presentation backdrop.

Why AI is exposing design’s craft crisis

AI didn’t create the craft crisis in design — it exposed the technical literacy gap that’s been eroding strategic influence for over a…

uxdesign.cc

The data from Lenny’s Newsletter’s AI productivity survey showed PMs ranking prototyping as their #2 use case for AI, ahead of designers. Here’s what that looks like in practice.

Figma is now teaching PMs to build prototypes instead of writing PRDs. Using Figma Make, product managers can go from idea to interactive prototype without waiting on design. Emma Webster writing in Figma’s blog:

By turning early directions into interactive, high-fidelity prototypes, you can more easily explore multiple concepts and take ideas further. Instead of spending time writing documentation that may not capture the nuances of a product, prototypes enable you to show, rather than tell.

The piece walks through how Figma’s own PMs use Make for exploration, validation, and decision-making. One PM prototyped a feature flow and ran five user interviews—all within two days. Another used it to workshop scrolling behavior options that were “almost impossible to describe” in words.

The closing is direct about what this means for roles:

In this new landscape, the PMs who thrive will be those who embrace real-time iteration, moving fluidly across traditional role boundaries.

“Traditional role boundaries” being design’s territory.

This isn’t a threat if designers are already operating upstream—defining what to build, not just how it looks. But if your value proposition is “I make the mockups,” PMs now have tools to do that themselves.

Abstract blue scene with potted plants and curving vines, birds perched, a trumpet and ladder amid geometric icons.

Prototypes Are the New PRDs

Inside Figma Make, product managers are pressure-testing assumptions early, building momentum, and rallying teams around something tangible.

figma.com

The optimistic case for designers in an AI-driven world is that design becomes strategy—defining what to build, not just how it looks. But are designers actually making that shift?

Noam Segal and Lenny Rachitsky, writing for Lenny’s Newsletter, share results from a survey of 1,750 tech workers. The headline is that AI is “overdelivering”—55% say it exceeded expectations, and most report saving at least half a day per week. But the findings by role tell a different story for designers:

Designers are seeing the fewest benefits. Only 45% report a positive ROI (compared with 78% of founders), and 31% report that AI has fallen below expectations, triple the rate among founders.

Meanwhile, founders are using AI to think—for decision support, product ideation, and strategy. They treat it as a thought partner, not a production tool. And product managers are building prototypes themselves:

Compare prototyping: PMs have it at #2 (19.8%), while designers have it at #4 (13.2%). AI is unlocking skills for PMs outside of their core work, whereas designers aren’t seeing the marginal improvement benefits from AI doing their core work.

The survey found that AI helps designers with work around design—research synthesis, copy, ideation—but visual design ranks #8 at just 3.3%. As Segal puts it:

AI is helping designers with everything around design, but pushing pixels remains stubbornly human.

This is the gap. The strategic future is available, but designers aren’t capturing it at the same rate as other roles. The question is why—and what to do about it.

Checked clipboard showing items like Speed, Quality and Research, next to headline "How AI is impacting productivity for tech workers."

AI tools are overdelivering: results from our large-scale AI productivity survey

What exactly AI is doing for people, which AI tools have product-market fit, where the biggest opportunities remain, and what it all means

lennysnewsletter.com

Previously, I linked to Doug O’Laughlin’s piece arguing that UIs are becoming worthless—that AI agents, not humans, will be the primary consumers of software. It’s a provocative claim, and as a designer, I’ve been chewing on it.

Jeff Veen offers the counterpoint. Veen—a design veteran who cofounded Typekit and led products at Adobe—argues that an agentic future doesn’t diminish design. It clarifies it:

An agentic future elevates design into pure strategy, which is what the best designers have wanted all along. Crafting a great user experience is impossible if the way in which the business expresses its capabilities is muddied, vague or deceptive.

This is a more optimistic take than O’Laughlin’s, but it’s rooted in the same observation: when agents strip applications down to their primitives—APIs, CLI commands, raw capabilities, (plus data structures, I’d argue)—what’s left is the truth of what a business actually does.

Veen’s framing through responsive design is useful. Remember “mobile first”? The constraint of the small screen forced organizations to figure out what actually mattered. Everything else was cruft. Veen again:

We came to realize that responsive design wasn’t just about layouts, it was about forcing organizations to confront what actually mattered.

Agentic workflows do the same thing, but more radically. If your product can only be expressed through its API, there’s no hiding behind a slick dashboard or clever microcopy.

His closing question is great:

If an agent used your product tomorrow, what truths would it uncover about your organization?

For designers, this is the strategic challenge. The interface layer may become ephemeral—generated on the fly, tailored to the user, disposable. But someone still has to define what the product is. That’s design work. It’s just not pixel work.

Three smartphone screens showing search-result lists of app shortcuts: Wells Fargo actions, Contacts actions, and KAYAK trip/flight actions.

On Coding Agents and the Future of Design

How Claude Code is showing us what apps may become

veen.com

The rise of micro apps describes what’s happening from the bottom up—regular people building their own tools instead of buying software. But there’s a top-down story too: the structural obsolescence of traditional software companies.

Doug O’Laughlin makes the case using a hardware analogy—the memory hierarchy. AI agents are fast, ephemeral memory (like DRAM), while traditional software companies need to become persistent storage (like NAND, or ROM if you’re old school like me). The implication:

Human-oriented consumption software will likely become obsolete. All horizontal software companies oriented at human-based consumption are obsolete.

That’s a bold claim. O’Laughlin goes further:

Faster workflows, better UIs, and smoother integrations will all become worthless, while persistent information, a la an API, will become extremely valuable.

As a designer, this is where I start paying close attention. The argument is that if AI agents become the primary consumers of software—not humans—then the entire discipline of UI design is in question. O’Laughlin names names:

Figma could be significantly disrupted if UIs, as a concept humans create for other humans, were to disappear.

I’m not ready to declare UIs dead. People still want direct manipulation, visual feedback, and the ability to see what they’re doing. But the shift O’Laughlin describes is real: software’s value is migrating from presentation to data. The interface becomes ephemeral—generated on the fly, tailored to the task—while the source of truth persists.

This is what I was getting at in my HyperCard essay: the tools we build tomorrow won’t look like the apps we buy today. They’ll be temporary, personal, and assembled by AI from underlying APIs and data. The SaaS companies that survive will be the ones who make their data accessible to agents, not the ones with the prettiest dashboards.

Memory hierarchy pyramid: CPU registers and cache (L1–L3) top; RAM; SSD flash; file-based virtual memory bottom; speed/cost/capacity notes.

The Death of Software 2.0 (A Better Analogy!)

The age of PDF is over. The time of markdown has begun. Why Memory Hierarchies are the best analogy for how software must change. And why software is unlikely to command the most value.

fabricatedknowledge.com

Almost a year ago, I linked to Lee Robinson’s essay “Personal Software“ and later explored why we need a HyperCard for the AI era. The thesis: people would stop searching the App Store and start building what they need. Disposable tools for personal problems.

That future is arriving. Dominic-Madori Davis, writing for TechCrunch, documents the trend:

It is a new era of app creation that is sometimes called micro apps, personal apps, or fleeting apps because they are intended to be used only by the creator (or the creator plus a select few other people) and only for as long as the creator wants to keep the app. They are not intended for wide distribution or sale.

What I find compelling here is the word “fleeting.” We’ve been conditioned to think of software as permanent infrastructure—something you buy, maintain, and eventually migrate away from. But these micro apps are disposable by design. One founder built a gaming app for his family to play over the holidays, then shut it down when vacation ended. That’s not a failed product. That’s software that did exactly what it needed to do.

Howard University professor Legand L. Burge III frames it well:

It’s similar to how trends on social media appear and then fade away. But now, [it’s] software itself.

The examples in the piece range from practical (an allergy tracker, a parking ticket auto-payer) to whimsical (a “vice tracker” for monitoring weekend hookah consumption). But the one that stuck with me was the software engineer who built his friend a heart palpitation logger so she could show her doctor her symptoms. That’s software as a favor. Software as care.

Christina Melas-Kyriazi from Bain Capital Ventures offers what I think is the most useful framing:

It’s really going to fill the gap between the spreadsheet and a full-fledged product.

This is exactly right. For years, spreadsheets have been the place where non-developers build their own tools—janky, functional, held together with VLOOKUP formulas and conditional formatting. Micro apps are the evolution of that impulse, but with real interfaces and actual logic.

The quality concerns are real—bugs, security flaws, apps that only their creator can debug. But for personal tools that handle personal problems, “good enough for one” is genuinely good enough.
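For a sense of scale, here’s roughly what one of these disposable tools amounts to. This is a toy sketch in the spirit of the heart palpitation logger mentioned above—the file name, functions, and structure are all invented for illustration, not taken from any product in the article:

```python
# A toy "micro app": a personal symptom logger that appends timestamped
# entries to a local CSV and produces a one-line summary for a doctor.
import csv
from datetime import datetime
from pathlib import Path

LOG = Path("palpitations.csv")  # hypothetical file name

def log_event(note, when=None):
    """Append one timestamped symptom entry to the local CSV file."""
    when = when or datetime.now()
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp", "note"])  # header on first write
        writer.writerow([when.isoformat(timespec="minutes"), note])

def summary():
    """A doctor-friendly one-liner: how many events, and when the last was."""
    if not LOG.exists():
        return "No events logged."
    with LOG.open() as f:
        rows = list(csv.reader(f))[1:]  # skip header row
    return f"{len(rows)} event(s) logged, most recent: {rows[-1][0]}"
```

Thirty lines, no database, no accounts, no deployment. “Good enough for one” really is the whole spec.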

Woman with white angel wings holding a glowing wand, wearing white dress and boots, hovering above a glowing smartphone.

The rise of ‘micro’ apps: non-developers are writing apps instead of buying them

A new era of app creation is here. It’s fun, it’s fast, and it’s fleeting.

techcrunch.com

Claude Code is having a moment. Anthropic’s agentic coding tool has gone viral over the past few weeks, with engineers and non-engineers alike discovering what it feels like to hand real work over to an AI and watch it execute autonomously. The popular tech podcast Hard Fork has already had two segments on it in the last two weeks. In the first, hosts Kevin Roose and Casey Newton share their Claude Code projects. And in the second, they highlight some from their listeners. (Alas, my Severance fan project did not make the cut.)

I’ve been using Cursor and Claude Code to build and rebuild this site for over a year now, so when I read this piece and see coders describing their experience with it, I understand the feeling.

Bradley Olson (gift link), writing for the Wall Street Journal:

Some described a feeling of awe followed by sadness at the realization that the program could easily replicate expertise they had built up over an entire career.

“It’s amazing, and it’s also scary,” said Andrew Duca, chief executive of Awaken Tax, a cryptocurrency tax platform. Duca has been coding since he was in middle school. “I spent my whole life developing this skill, and it’s literally one-shotted by Claude Code.”

Duca decided not to hire the engineers he’d been planning to bring on. He thinks Claude makes him five times more productive.

The productivity numbers throughout the piece are striking:

Malte Ubl is chief technology officer at Vercel, which helps develop and host websites and apps for users of Claude Code and other such tools. He said he used the tool to finish a complex project in a week that would’ve taken him about a year without AI. Ubl spent 10 hours a day on his vacation building new software and said each run gave him an endorphin rush akin to playing a Vegas slot machine.

But what caught my attention is what people are using it for beyond code—analyzing MRI data, recovering wedding photos from corrupted drives, monitoring tomato plants with a webcam. Olson again:

Unlike most app- or web-bound chatbots now in wide use, it can operate autonomously, with broad access to user files, a web browser and other applications. While technologists have predicted a coming era of AI “agents” capable of doing just about anything for humans, that future has been slow to develop. Using Claude Code was the first time many users interacted with this kind of AI, offering an inkling of what may be in store.

Anthropic took notice, of course, and launched a beta of Cowork last week.

Instead of the MS-DOS-like “command line” interface that the core app has, Cowork displays a more friendly, graphical user interface. They built the product in about 10 days—using Claude Code.

The closing question is the right one:

“The bigger story here is going to be when this goes beyond software engineering,” said David Hsu, chief executive of Retool, a business-AI startup. Software engineers make up a tiny fraction of the U.S. labor force. “How far does it go?”

Replace “software engineering” with “design” and you have the question I’m exploring this week.

"Claude Code v2.0.0" terminal greeting "Welcome back Meaghan!" with orange pixel mascot; right column lists recent activity and new commands.

Claude Is Taking the AI World by Storm, and Even Non-Nerds Are Blown Away

(Gift link) Developers and hobbyists are comparing the viral moment for Anthropic’s Claude Code to the launch of generative AI

wsj.com

My wife is an obesity medicine and women’s health specialist, so she’s been in my ear talking about ultraprocessed foods for years. That’s why the processed food analogy for AI-generated software resonates. We industrialized agriculture and got abundance, yes—but also obesity, diabetes, and 318 million people still experiencing acute hunger. The problem was never production capacity.

Chris Loy applies this lens to where software is heading:

Industrial systems reliably create economic pressure toward excess, low quality goods. This is not because producers are careless, but because once production is cheap enough, junk is what maximises volume, margin, and reach. The result is not abundance of the best things, but overproduction of the most consumable ones.

Loy introduces the term “disposable software”—software created with no expectation of ownership, maintenance, or long-term understanding. Vibe-coded apps. AI slop. Whatever you want to call it, the economics are different: easy reproducibility means each output has less value, which means volume becomes the only game. Just look in the App Store for any popular category such as todo lists, notetakers, and word puzzles. Or look in r/SaaS and notice the glut of solo founders building and selling their own products.

Loy goes on to compare this movement with mass-produced fashion as well:

For example, prior to industrialisation, clothing was largely produced by specialised artisans, often coordinated through guilds and manual labour, with resources gathered locally, and the expertise for creating durable fabrics accumulated over years, and frequently passed down in family lines. Industrialisation changed that completely, with raw materials being shipped intercontinentally, fabrics mass produced in factories, clothes assembled by machinery, all leading to today’s world of fast, disposable, exploitative fashion.

Disposable fashion leads to vast overproduction, with estimates that 20–40% of garments produced (up to 30–60 billion pieces) go unsold. AI enables an analogous waste in software: people’s time, tokens, electricity, and ultimately consumer dollars.

The silver lining that Loy observes is in innovation. The answer isn’t entirely human-written code; it’s doing the necessary research and development to innovate. My take is that’s exactly where designers need to be sitting.

Sepia-toned scene of a stone watermill with a large wooden wheel by a river, small rowboat and ducks, arched bridge and distant smokestacks.

The rise of industrial software

For most of its history, software has been closer to craft than manufacture: costly, slow, and dominated by the need for skills and experience. AI coding is changing that, by making available paths of production which are cheaper, faster, and increasingly disconnected from the expertise of humans.

chrisloy.dev

Product manager Adrian Raudaschl offered some reflections on 2025 from his point of view. It’s a mixture of life advice, product recommendations, and thoughts about the future of tech work.

The first quote I’ll pull out is this one, about creativity and AI:

Ultimately, if we fail to maintain active engagement with the creative process and merely delegate tasks to AI without reflection, there is a risk that delegation becomes abdication of responsibility and authorship.

“Active engagement” with the tasks that we delegate to AI. This reminds me of the humble machines argument by Dr. Maya Ackerman.

On vibe coding:

The most important thing, I think, that most people in knowledge work should be doing is learning to vibe code. Vibe code anything: a diary, a picture book for your mum, a fan page for your local farm. Anything. It’s not about learning to code, but rather appreciating how much more we could do with machines than before. This is what I mean about the generalist product manager: being able to prototype, test, and build without being held back by technical constraints.

I concur 100%. Even if you don’t think you’re a developer, even if you don’t quite understand code, vibe coding something will be illuminating. I think it’s different than asking ChatGPT for a bolognese sauce recipe or how to change a tire. Building something that will instantly run on your computer and seeing the adjustments made in real-time from your plain English prompts is very cool and gives you a glimpse into how LLMs problem-solve.

A product manager’s 48 reflections on 2025

and why I’ve been making Bob Dylan songs about Sonic the Hedgehog

uxdesign.cc

Yesterday, Anthropic launched Cowork, a research preview that is essentially Claude Code but for non-coders.

From the blog announcement:

How is using Cowork different from a regular conversation? In Cowork, you give Claude access to a folder of your choosing on your computer. Claude can then read, edit, or create files in that folder. It can, for example, re-organize your downloads by sorting and renaming each file, create a new spreadsheet with a list of expenses from a pile of screenshots, or produce a first draft of a report from your scattered notes.

In Cowork, Claude completes work like this with much more agency than you’d see in a regular conversation. Once you’ve set it a task, Claude will make a plan and steadily complete it, while looping you in on what it’s up to. If you’ve used Claude Code, this will feel familiar—Cowork is built on the very same foundations. This means Cowork can take on many of the same tasks that Claude Code can handle, but in a more approachable form for non-coding tasks.
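The downloads tidy-up Anthropic describes is the kind of task you could sketch yourself in a dozen lines. This is an illustrative stand-in for that category of chore—not how Cowork actually works—sorting a folder’s files into subfolders named after their extensions:

```python
# Illustrative sketch: sort every file in a folder into a subfolder
# named after its extension (e.g. report.pdf -> pdf/report.pdf).
from pathlib import Path

def organize(folder):
    """Move each file into an extension-named subfolder.
    Returns a mapping of subfolder name -> number of files moved."""
    root = Path(folder)
    moved = {}
    for item in list(root.iterdir()):  # materialize before mutating the dir
        if not item.is_file():
            continue
        bucket = item.suffix.lstrip(".").lower() or "no-extension"
        dest = root / bucket
        dest.mkdir(exist_ok=True)
        item.rename(dest / item.name)
        moved[bucket] = moved.get(bucket, 0) + 1
    return moved
```

The point of an agent, of course, is that you describe this in a sentence instead of writing it—but it helps to see how small the underlying work is.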

Apparently, Cowork was built very quickly using—naturally—Claude Code. Michael Nuñez in VentureBeat:

…according to company insiders, the team built the entire feature in approximately a week and a half, largely using Claude Code itself.

Alas, this is only available to Claude Max subscribers ($100–200 per month). I will need to check it out when it’s more widely available.

White jagged lightning-shape on a terracotta background with a black zigzag line connecting three solid black dots.

Introducing Cowork | Claude | Claude

Claude Code’s agentic capabilities, now for everyone. Give Claude access to your files and let it organize, create, and edit documents while you focus on what matters.

claude.com

AI threatens to let product teams ship faster. Faster PRDs, faster designs, and faster code. But going too fast often leads to design and tech debt, or worse, shipping the wrong thing.

Anton Sten sagely warns:

The biggest pattern I have seen across startups is that skipping clarity never saves time. It costs time. The fastest teams are not the ones shipping the most. They are the ones who understand why they are shipping. That is the difference between moving for the sake of movement and moving with purpose. It is the difference between speed and true velocity.

How do you avoid this? Sten:

The reset is simple and almost always effective. Before building anything, pause long enough to ask, “What problem am I solving, and for whom?” It sounds basic, but this question forces alignment. It replaces assumptions with clarity and shifts attention back to the user instead of internal preferences. When teams do this consistently, the entire atmosphere changes. Decisions become easier. Roadmaps make more sense. People contribute more of themselves. You can feel momentum return.

The hidden cost of shipping too fast

Speed often gets treated as progress even when no one has agreed on what progress actually means. Here’s why clarity matters more than velocity.

antonsten.com

Imagine working for seven years designing the prototyping features at Figma, then seeing GPT-4 and realizing what AI would soon be able to do. That’s the story of Figma designer–turned–product manager Nikolas Klein. He shares his journey via a lovely illustrated comic—Webtoon style.

Klein emphasizes:

The truth is: There will always be new problems to solve. New ideas to take further. Even with AI, hard problems are still hard. An answer may come faster, but it’s not always right.

Hard Problems Are Still Hard: A Story About the Tools That Change and the Work That Doesn’t | Figma Blog

Figma designer–turned–product manager Nikolas Klein worked on building prototyping tools for seven years. Then AI changed the game.

figma.com

We’ve been feeling it for a while. AI-generated posts and comments filling up the feeds on LinkedIn. Em dashes were said to be the tell that AI wrote the content. Other patterns are easy to spot, like overuse of emojis in headings and my personal most-hated, the “it’s not X, it’s Y.” That type of construction is called an antithesis and it’s exploded. And now that I’ve pointed it out, I’m sure you’ll notice it everywhere too. Sorry, not sorry.
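The “it’s not X, it’s Y” tell is mechanical enough that you can flag it with a toy regex. This is a crude heuristic I’m inventing for illustration—nothing like a real AI-text detector, and it will happily miss variants and flag human writing:

```python
# Toy heuristic for the "it's not X, it's Y" antithesis construction.
import re

ANTITHESIS = re.compile(
    r"\b(?:it'?s|this is|that'?s)\s+not\s+[^.,;]+,\s*(?:it'?s|this is|that'?s)\s+",
    re.IGNORECASE,
)

def count_antitheses(text):
    """Count occurrences of the not-X-comma-it's-Y pattern."""
    return len(ANTITHESIS.findall(text))
```

Run it over your LinkedIn feed at your own risk.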

Sam Kriss, exploring why AI writes the way it does:

A lot of A.I.’s choices make sense when you understand that it’s…trying to write well. It knows that good writing involves subtlety: things that are said quietly or not at all, things that are halfway present and left for the reader to draw out themselves. So to reproduce the effect, it screams at the top of its voice about how absolutely everything in sight is shadowy, subtle and quiet. Good writing is complex. A tapestry is also complex, so A.I. tends to describe everything as a kind of highly elaborate textile. Everything that isn’t a ghost is usually woven. Good writing takes you on a journey, which is perhaps why I’ve found myself in coffee shops that appear to have replaced their menus with a travel brochure. “Step into the birthplace of coffee as we journey to the majestic highlands of Ethiopia.” This might also explain why A.I. doesn’t just present you with a spreadsheet full of data but keeps inviting you, like an explorer standing on the threshold of some half-buried temple, to delve in.

All of this contributes to the very particular tone of A.I.-generated text, always slightly wide-eyed, overeager, insipid but also on the verge of some kind of hysteria. But of course, it’s not just the words — it’s what you do with them. As well as its own repertoire of words and symbols, A.I. has its own fundamentally manic rhetoric. For instance, A.I. has a habit of stopping midway through a sentence to ask itself a question. This is more common when the bot is in conversation with a user, rather than generating essays for them: “You just made a great point. And honestly? That’s amazing.”

Why Does A.I. Write Like … That?

(Gift Link) If only they were robotic! Instead, chatbots have developed a distinctive — and grating — voice.

nytimes.com
Storyboard grid showing a young man and family: kitchen, driving, airplane, supermarket, night house, grill, dinner.

Directing AI: How I Made an Animated Holiday Short

My first taste of generating art with AI was back in 2021 with Wombo Dream. I even used it to create very trippy illustrations for a series I wrote on getting a job as a product designer. To be sure, the generations were weird, if not outright ugly. But it was my first test of getting an image by typing in some words. Both Stable Diffusion and Midjourney gained traction the following year, and I tried both as well. The results were never great or satisfactory. Years upon years of being an art director had made me very, very picky—or put another way, I had developed taste.

I didn’t touch generative AI art again until I saw a series of photos by Lars Bastholm playing with Midjourney.

Child in yellow jacket smiling while holding a leash to a horned dragon by a park pond in autumn.

Lars Bastholm created this in Midjourney, prompting “What if, in the 1970s, they had a ‘Bring Your Monster’ festival in Central Park?”

That’s when I went back to Midjourney and started to illustrate my original essays with images generated by it, but usually augmented by me in Photoshop.

In the intervening years, generative AI art tools had developed a common set of functionality that was all very new to me: inpainting, style, chaos, seed, and more. Beyond closed systems like Midjourney and OpenAI’s DALL-E, open source models from Stable Diffusion, Flux, and now a plethora of Chinese models offer even better prompt adherence and controllability via even more opaque-sounding functionality like control nets, LoRAs, CFG, and other parameters. It’s funny to me that for a very artistic field, the associated products to enable these creations are very technical.

Foggy impressionist painting of a steam train crossing a bridge, plume of steam and a small rowboat on the river below.

The Year AI Changed Design

At the beginning of this year, AI prompt-to-code tools were still very new to the market. Lovable had just relaunched in December and Bolt had debuted a couple of months before that. Cursor was my first taste of using AI to code, back in November 2024. As we sit here in December, just 12 months later, our profession and the discipline of design have materially changed. Of course, the core is still the same. But how we work, how we deliver, and how we achieve results are different.

When ChatGPT got good (around GPT-4), I began using it as a creative sounding board. Design is never a solitary activity and feedback from peers and partners has always been a part of the process. To be able to bounce ideas off of an always-on, always-willing creative partner was great. To be sure, I didn’t share sketches or mockups; I was playing with written ideas.

Now, ChatGPT or Gemini’s deep research features are often where I start when I begin to tackle a new feature. And after the chatbot has written the report, I’ll read it and ask a lot of questions as a way of learning and internalizing the material. I’ll then use that as a jumping off point for additional research. Many designers on my team do the same.

Huei-Hsin Wang at NN/g published a post about how to write better prompts for AI prompt-to-code tools.

When we asked AI-prototyping tools to generate a live-training profile page for NN/G course attendees, a detailed prompt yielded quality results resembling what a human designer created, whereas a vague prompt generated inconsistent and unpredictable outcomes across the board.

The article spends a lot of time detailing what can go wrong. Personally, I don’t need to read about what I experience daily, so the interesting bit for me comes about two-thirds of the way in, where Wang lists five strategies for getting better results.

  • Visual intent: Name the style precisely—use concrete design vocabulary or frameworks instead of vague adjectives. Anchor prompts with recognizable patterns so the model locks onto the look and structure, not “clean/modern” fluff.
  • Lightweight references: Drop in moodboards, screenshots, or system tokens to nudge aesthetics without pixel-pushing. Expect resemblance, not perfection; judge outcomes on hierarchy and clarity, not polish alone.
  • Text-led visual analysis: Have AI describe a reference page’s layout and style in natural language, then distill those characteristics into a tighter prompt. Combine with an image when possible to reinforce direction.
  • Mock data first: Provide realistic sample content or JSON so the layout respects information architecture. Content-driven prompts produce better grouping, hierarchy, and actionable UI than filler lorem ipsum.
  • Code snippets for precision: Attach component or layout code from your system or open-source libraries to reduce ambiguity. It’s the most exact context, but watch length; use selectively to frame structure.
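The “mock data first” strategy above is the easiest to try. A minimal sketch, assuming a generic prompt-to-code tool with a text prompt field (the `generate` function and the attendee data here are hypothetical stand-ins, not from the NN/g article):

```python
import json

# Hypothetical stand-in for whatever prompt-to-code tool or API you use.
def generate(prompt: str) -> str:
    ...

# Realistic sample content so the layout respects information architecture,
# rather than lorem ipsum the model can't group meaningfully.
mock_attendee = {
    "name": "Dana Kim",
    "role": "Senior Product Designer",
    "courses": [
        {"title": "UX Basic Training", "date": "2025-03-12", "status": "completed"},
        {"title": "Measuring UX", "date": "2025-06-04", "status": "registered"},
    ],
    "certifications": ["UXC Level 1"],
}

# Combine visual intent (concrete vocabulary, not "clean/modern" fluff)
# with the mock data that should drive grouping and hierarchy.
prompt = (
    "Build a live-training profile page as a card-based dashboard "
    "(visual intent: Material-style elevation, 8px grid). "
    "Use this sample data to drive the hierarchy:\n"
    + json.dumps(mock_attendee, indent=2)
)
```

The point is that the structure of the JSON, not the adjectives, is what tells the model which fields group together and which deserve prominence.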
Prompt to Design Interfaces: Why Vague Prompts Fail and How to Fix Them

Create better AI-prototyping designs by using precise visual keywords, references, analysis, as well as mock data and code snippets.

nngroup.com

On the heels of OpenAI’s report “The state of enterprise AI,” Anthropic published a blog post detailing research about how AI is being used by the employees building AI. The researchers surveyed 132 engineers and researchers, conducted 53 interviews, and looked at Claude usage data.

Our research reveals a workplace facing significant transformations: Engineers are getting a lot more done, becoming more “full-stack” (able to succeed at tasks beyond their normal expertise), accelerating their learning and iteration speed, and tackling previously-neglected tasks. This expansion in breadth also has people wondering about the trade-offs—some worry that this could mean losing deeper technical competence, or becoming less able to effectively supervise Claude’s outputs, while others embrace the opportunity to think more expansively and at a higher level. Some found that more AI collaboration meant they collaborated less with colleagues; some wondered if they might eventually automate themselves out of a job.

The post highlights several interesting patterns.

  • Employees say Claude now touches about 60% of their work and boosts output by roughly 50%.
  • Employees say that 27% of AI‑assisted tasks represent work that wouldn’t have happened otherwise—like papercut fixes, tooling, and exploratory prototypes.
  • Engineers increasingly use it for new feature implementation and even design/planning.

Perhaps most provocative is career trajectory. Many engineers describe becoming managers of AI agents, taking accountability for fleets of instances and spending more time reviewing than writing net‑new code. Short‑term optimism meets long‑term uncertainty: productivity is up, ambition expands, but the profession’s future shape—levels of abstraction, required skills, and pathways for growth—remains unsettled. See also my series on the design talent crisis.

Two stylized black line-drawn hands over a white rectangle on a pale green background, suggesting typing.

How AI Is Transforming Work at Anthropic

anthropic.com

This is a fascinating watch. Ryo Lu, Head of Design at Cursor, builds a retro Mac calculator using Cursor agents while being interviewed. Lu’s personal website is an homage to Mac OS X, complete with Aqua-style UI elements. He runs multiple local background agents without them stepping on each other, fixes bugs live, and themes the UI to match system styles so it feels designed—not “purple AI slop,” as he calls it.

Lu, as interviewed by Peter Yang, on how engineers and designers work together at Cursor (lightly edited for clarity):

So at Cursor, the roles between designers, PM, and engineers are really muddy. We kind of do the part [that is] our unique strength. We use the agent to tie everything. And when we need help, we can assemble people together to work on the thing.

Maybe some of [us] focus more on the visuals or interactions. Some focus more on the infrastructure side of things, where you design really robust architecture to scale the thing. So yeah, there is a lot less separation between roles and teams or even tools that we use. So for doing designs…we will maybe just prototype in Cursor, because that lets us really interact with the live states of the app. It just feels a lot more real than some pictures in Figma.

And surprisingly, they don’t have official product managers at Cursor. Yang asks, “Did you actually hire a PM because last time I talked to Lee [Robinson] there was like no PMs.”

Lu again, and edited lightly for clarity:

So we did not hire a PM yet, but we do have an engineer who used to be a founder. He took a lot more of the PM-y side of the job, and then became the first PM of the company. But I would still say a lot of the PM jobs are kind of spread across the builders in the team.

That mostly makes sense because it’s engineers building tools for engineers. You are your audience, which is rare.

Full Tutorial: Design to Code in 45 Min with Cursor’s Head of Design | Ryo Lu

Design-to-code tutorial: Watch Cursor’s Head of Design Ryo Lu build a retro Mac calculator with agents - a 45-minute, hands-on walkthrough to prototype and ship

youtube.com

It’s always interesting for me to read how other designers use AI to vibe code their projects. I think using Figma Make to conjure a prototype is one thing, but vibe coding something into production is entirely different. Personally, I’ve been through it a couple of times, which I’ve already detailed here and here.

Anton Sten recently wrote about his process. Like me, he starts in Figma:

This might be the most important part: I don’t start by talking to AI. I start in Figma.

I know Figma. I can move fast there. So I sketch out the scaffolding first—general theme, grids, typography, color. Maybe one or two pages. Nothing polished, just enough to know what I’m building.

Why does this matter? Because AI will happily design the wrong thing for you. If you open Claude Code with a vague prompt and no direction, you’ll get something—but it probably won’t be what you needed. AI is a builder, not an architect. You still have to be the architect.

I appreciate Sten’s conclusion to not let the AI do all of it for you, echoing Dr. Maya Ackerman’s sentiment of humble creative machines:

But—and this is important—you still need design thinking and systems thinking. AI handles the syntax, but you need to know what you’re building, why you’re building it, and how the pieces fit together. The hard part was never the code. The hard part is the decisions.

Vibe coding for designers: my actual process | Anton Sten

An honest breakdown of how I built and maintain antonsten.com using AI—what actually works, where I’ve hit walls, and why designers should embrace this approach.

antonsten.com

Economics PhD student Prashant Garg performed a fascinating analysis of Bob Dylan’s lyrics from 1962 to 2012 using AI. He detailed his project in Aeon:

So I fed Dylan’s official discography from 1962 to 2012 into a large language model (LLM), building a network of the concepts and connections in his songs. The model combed through each lyric, extracting pairs of related ideas or images. For example, it might detect a relationship between ‘wind’ and ‘answer’ in ‘Blowin’ in the Wind’ (1962), or between ‘joker’ and ‘thief’ in ‘All Along the Watchtower’ (1967). By assembling these relationships, we can construct a network of how Dylan’s key words and motifs braid together across his songs.
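The pipeline Garg describes, an LLM extracting related concept pairs, then assembling them into a weighted network, can be sketched in a few lines. This is a toy illustration under my own assumptions, not Garg’s actual code; the `extracted_pairs` list stands in for whatever the LLM emitted per lyric:

```python
from collections import Counter

# Toy stand-in for LLM-extracted concept pairs; in the real project an
# LLM reads each lyric and emits related idea/image pairs like these.
extracted_pairs = [
    ("wind", "answer"),   # "Blowin' in the Wind"
    ("joker", "thief"),   # "All Along the Watchtower"
    ("wind", "road"),
    ("road", "home"),
    ("wind", "answer"),   # motifs recur across songs
]

# Weighted undirected network: edge weight counts how often the model
# linked the two concepts across the discography. Sorting each pair
# makes (a, b) and (b, a) the same edge.
edges = Counter(tuple(sorted(pair)) for pair in extracted_pairs)

# Weighted degree: concepts that braid into the most motifs.
degree = Counter()
for (a, b), weight in edges.items():
    degree[a] += weight
    degree[b] += weight

print(degree.most_common(3))  # "wind" ranks highest in this toy data
```

From a network like this you can read off exactly the kind of thing the node graphs in the article visualize: which images are hubs, and which pairings recur across decades.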

The resulting dataset is visualized in a series of node graphs and bar charts. What’s interesting is that AI can see Dylan’s work through a new lens, surfacing patterns that prior scholarship may have missed.

…Yet, when used as a lens rather than an oracle, the same models can jolt even seasoned critics out of interpretive ruts and reveal themes they might have missed. Far from reducing Dylan to numbers, this approach highlights how intentionally intricate his songwriting is: a restless mind returning to certain images again and again, recombining them in ever-new mosaics. In short, AI lets us test the folklore around Dylan, separating the theories that data confirm from those they quietly refute.

Black-and-white male portrait overlaid by colorful patterned strips radiating across the face, each strip bearing small single-word labels.

Can AI tell us anything meaningful about Bob Dylan’s songs?

Generative AI sheds new light on the underlying engines of metaphor, mood and reinvention in six decades of songs

aeon.co

T-shaped, M-shaped, and now Σ-shaped designers?! Feels like a personality quiz or something. Or maybe designers are overanalyzing as usual.

Here’s Darren Yeo telling us what it means:

The Σ-shape defines the new standard for AI expertise: not deep skills, but deep synthesis. This integrator manages the sum of complex systems (Σ) by orchestrating the continuous, iterative feedback loops (σ), ensuring system outputs align with product outcomes and ethical constraints.

Whether you subscribe to the Three Lens framework as proposed by Oliver West, or this sigma-shaped one being proposed by Darren Yeo, just be yourself and don’t bring it up in interviews.

Large purple sigma-shaped graphic on a grid-paper background with the text "Sigma shaped designer".

The AI era needs Sigma (Σ) shaped designers (Not T or π)

For years, design and tech teams have relied on shape metaphors to describe expertise. We had T-shaped people (one deep skill, broad…

uxdesign.cc

Hey designer, how are you? What is distracting you? Who are you having trouble working with?

Those are a few of the questions designer Nikita Samutin and UX researcher Elizaveta Demchenko asked 340 product designers in a survey and in 10 interviews. They published their findings in a report called “State of Product Design: An Honest Conversation About the Profession.”

When I look at the calendars of the designers on my team, I see loads of meetings scheduled. So it’s no surprise to me that 64% of respondents said that switching between tasks distracted them. “Multitasking and unpredictable communication are among the main causes of distraction and stress for product designers,” the researchers wrote.

Most interesting to me are the results in the section “How Designers See Their Role.” Sixty percent of respondents want to develop leadership skills, and 47% want to improve at presenting ideas.

For many, “leadership” doesn’t mean managing people—it means scaling influence: shaping strategy, persuading stakeholders, and leading high-impact projects. In other words, having a stronger voice in what gets built and why.

It’s telling because I don’t see pixel-pushing in the responses. And that’s a good thing in the age of AI.

Speaking of which, 77% of designers aren’t afraid that AI may replace them. “Nearly half of respondents (49%) say AI has already influenced their work, and many are actively integrating new tools into their processes. This reflects the state of things in early 2025.”

I’m sure that number would be bigger if the survey were conducted today.

State of Product Design: An Honest Conversation About the Profession — ’25; author avatars and summary noting a survey of 340 designers and 10 interviews.

State of Product Design 2025

2025 Product Design report: workflows, burnout, AI impact, career growth, and job market insights across regions and company types.

sopd.design