Product manager Adrian Raudaschl offered some reflections on 2025 from his point of view. It’s a mixture of life advice, product recommendations, and thoughts about the future of tech work.

The first quote I’ll pull out is this one, about creativity and AI:

Ultimately, if we fail to maintain active engagement with the creative process and merely delegate tasks to AI without reflection, there is a risk that delegation becomes abdication of responsibility and authorship.

“Active engagement” with the tasks that we delegate to AI. This reminds me of the humble machines argument by Dr. Maya Ackerman.

On vibe coding:

The most important thing, I think, that most people in knowledge work should be doing is learning to vibe code. Vibe code anything: a diary, a picture book for your mum, a fan page for your local farm. Anything. It’s not about learning to code, but rather appreciating how much more we could do with machines than before. This is what I mean about the generalist product manager: being able to prototype, test, and build without being held back by technical constraints.

I concur 100%. Even if you don’t think you’re a developer, even if you don’t quite understand code, vibe coding something will be illuminating. I think it’s different than asking ChatGPT for a bolognese sauce recipe or how to change a tire. Building something that will instantly run on your computer and seeing the adjustments made in real-time from your plain English prompts is very cool and gives you a glimpse into how LLMs problem-solve.

A product manager’s 48 reflections on 2025

and why I’ve been making Bob Dylan songs about Sonic the Hedgehog

uxdesign.cc

Yesterday, Anthropic launched Cowork, a research preview that is essentially Claude Code but for non-coders.

From the blog announcement:

How is using Cowork different from a regular conversation? In Cowork, you give Claude access to a folder of your choosing on your computer. Claude can then read, edit, or create files in that folder. It can, for example, re-organize your downloads by sorting and renaming each file, create a new spreadsheet with a list of expenses from a pile of screenshots, or produce a first draft of a report from your scattered notes.

In Cowork, Claude completes work like this with much more agency than you’d see in a regular conversation. Once you’ve set it a task, Claude will make a plan and steadily complete it, while looping you in on what it’s up to. If you’ve used Claude Code, this will feel familiar—Cowork is built on the very same foundations. This means Cowork can take on many of the same tasks that Claude Code can handle, but in a more approachable form for non-coding tasks.

Apparently, Cowork was built very quickly using—naturally—Claude Code. Michael Nuñez in VentureBeat:

…according to company insiders, the team built the entire feature in approximately a week and a half, largely using Claude Code itself.

Alas, this is only available to Claude Max subscribers ($100–200 per month). I will need to check it out when it’s more widely available.

White jagged lightning-shape on a terracotta background with a black zigzag line connecting three solid black dots.

Introducing Cowork | Claude

Claude Code’s agentic capabilities, now for everyone. Give Claude access to your files and let it organize, create, and edit documents while you focus on what matters.

claude.com

AI threatens to let product teams ship faster. Faster PRDs, faster designs, and faster code. But going too fast often leads to design and tech debt, or even worse, shipping the wrong thing.

Anton Sten sagely warns:

The biggest pattern I have seen across startups is that skipping clarity never saves time. It costs time. The fastest teams are not the ones shipping the most. They are the ones who understand why they are shipping. That is the difference between moving for the sake of movement and moving with purpose. It is the difference between speed and true velocity.

How do you avoid this? Sten:

The reset is simple and almost always effective. Before building anything, pause long enough to ask, “What problem am I solving, and for whom?” It sounds basic, but this question forces alignment. It replaces assumptions with clarity and shifts attention back to the user instead of internal preferences. When teams do this consistently, the entire atmosphere changes. Decisions become easier. Roadmaps make more sense. People contribute more of themselves. You can feel momentum return.

The hidden cost of shipping too fast

Speed often gets treated as progress even when no one has agreed on what progress actually means. Here’s why clarity matters more than velocity.

antonsten.com

Graphic designer Emily Sneddon performed some impressive typographic anthropology to digitize the font used on San Francisco’s MUNI light rail cars. She tracked down the original engineer who worked on the destination display signs for a company called Trans-Lite.

Sneddon, on her blog:

Learning that the alphabet came from an engineer really explains its temperament and why I was drawn to it in the first place. The signs were designed for sufficiency: fixed segments, fixed grid, and no extras. Characters were created only as destinations required them, while other characters, like the Q, X, and much of the punctuation, were never programmed into the signs. In reducing everything to its bare essentials, somehow character emerged, and it’s what inspired me to design Fran Sans.

Rows of rectangular black-and-white tiled blocks forming a geometric uppercase alphabet, digits, and symbols using white grid lines.

Fran Sans by Emily Sneddon

Fran Sans: Emily Sneddon digitizes Muni's Breda 3x5 display alphabet, traces Trans-Lite engineer Gary Wallberg, preserving a fading San Francisco type.

emilysneddon.com

Echoing my series on the design talent crisis and other articles warning against the practice of cutting back on junior talent, Vaughn Tan offers yet another dimension: subjective decision-making skills are only honed through practice. But the opportunities given to junior staff for this type of decision-making are slim.

But to back up, here’s Tan explaining what subjective decision-making is:

These are decisions where there’s no one “correct” answer and the answers that work can’t be known in advance. Subjective decisionmaking requires critical thinking skills to make strongly reasoned arguments, identify appropriate evidence, understand the tradeoffs of different arguments, make decisions that may (or may not) be correct, and develop compelling narratives for those decisions.

While his article is about neither AI nor companies not hiring juniors, it is about companies failing to develop juniors and to let them practice this type of decision-making in low-stakes situations.

Critical thinking and judgment require practice. Practice needs to be frequent, and needs to begin at a low level with very few consequences that are important. This small-scale training in subjective decisionmaking and critical thinking is the best way to learn how to do it properly in more consequential situations.

If you wait until someone is senior to teach judgment, their first practice attempts have serious consequences. High-stakes decisionmaking pressure cannot be simulated realistically; learning how to deal with it requires actual practice with real consequences that progressively increases in scope and consequentiality.

And why is this all important? Not developing junior staff means there will be a bottleneck issue—only seniors can make these judgment calls—and one day, there will be a succession problem, i.e., who takes over when the seniors leave or retire.

Judgment from the ground up

tl;dr: Critical thinking is foundational for making decisions that require subjective judgment. People learn how to do subjective decisionmaking

vaughntan.org

This short article could easily fall under the motivational category, but I couldn’t help but draw parallels to what we do as designers when working as part of a product team.

Hazel Weakly says that people who see systems also tend to end up in charge of them, sooner or later. And to be a leader is to “understand that you’ll find yourself stranded in the middle of the ocean one day.”

Not just you, but everyone you lead. And you’ll need to chart a course. In the ever-changing winds, the ever-shifting tides, the unknown weather, and with an inability to see up or down or basically anywhere except a few minutes away. You won’t have the time to find your bearings even if you could. Yet, somehow, in this sea of swirling and infinite complexities and probabilities, in the midst of incalculable odds, you will find yourself needing to have simultaneously several different things…

The first thing is knowing that you will fail. I equate this to knowing that design is about trial and error, testing and measuring, and then adjusting.

The second thing is the unshakable conviction that “you will succeed.” I see success as solving the problem, coming up with a solution that helps users do what they need. And you know what? Designers will succeed when they follow the design process.

Finally, the third thing is to “prepare and make ready everyone around you,” which means convincing your product and engineering counterparts and other stakeholders that the solution you’re advocating for is the right one.

To Be a Leader of Systems

Picture with me, if you will, the absurdity of finding yourself swimming in the middle of the ocean. First think about the ocean and how deep and infinitely…

hazelweakly.me

One of the most interesting things about design systems is how many of them are public—maybe not open source, but public so that we can all learn from them.

The earliest truly public, documented design systems showed up in the early 2010s. There isn’t a single “first,” but a few set the tone. GOV.UK published openly and became the public‑sector benchmark. Google’s Material landed in 2014 with a comprehensive spec. Salesforce’s Lightning started surfacing around 2013–2014 and matured mid‑decade. IBM’s Carbon followed soon after. Earlier frameworks like Bootstrap and Foundation (2011) acted like de facto systems for many teams, but they weren’t a company’s product design system made public.

PJ Onori says that public design systems are a “marketplace of ideas.”

Public design systems have lifted all boats in the harbor. Most design system teams do the rounds to see how other teams have tackled problems. Every system that raises the bar puts healthy pressure on others to meet or exceed it. This shared ecosystem may be the most important facet of the design systems practice.

Onori also says that there may be a growing trend to shut down public design systems:

There’s a growing trend to close down public systems. Funny enough, the first thing I did when I left Pinterest was clone the Gestalt repo. I had this spidey sense it wouldn’t be around forever. Yes, their web codebase is still open source, but the docs have gone private. That one stung. Gestalt wasn’t the first design system to be public. It wasn’t the best one either. But its hat was in the ring–and that’s what mattered.

But that’s only one design system, right? Sadly, I’m hearing more chatter about mounting pressure to privatize their systems.

This is an incredibly shitty idea.

Why? Because that’s how we all learn from each other. That’s how something like the Component Gallery can exist as a resource for all of us.

Open design systems are the library for people wanting to get into design systems. They’re a free resource to expand their understanding. There’s no college of design systems. Bootcamps exist, but they’re bootcamps–and I’ll leave it at that. The generation who shaped design systems didn’t create universities–they built libraries. Those libraries can train the next generation once people like me age out. When the libraries go, so does the transfer of knowledge.

Public design systems are worth it

It’s incredibly valuable to make a design system available to all–no matter what the bean-counters say.

pjonori.blog

There’s a myth that B2B marketing needs to be boring. Wrong. I’ve long believed that B2B advertising and marketing can and should be more consumer-like because at the end of the day, it’s a human on the other side of that message that needs to receive it. Sure, the buying cycle and decision-making are different, but the initial recipient is one person.

Creative director Scott McGuffie agrees, arguing in PRINT Magazine:

The best B2B work today doesn’t look different for the sake of it; it feels relevant to the world around it. Whether through wit, humanity, storytelling, or design, great B2B work connects to the same sensibilities that drive consumer creativity, allowing B2B to show up in new spaces, such as entertainment streaming services, once considered only a B2C space. It proves that professionalism and imagination are not mutually exclusive.

B2B Doesn’t Need to Be Dull

Expectations say that B2B campaigns must be rational and serious, while B2C are creative and emotional. Yet that no longer reflects the world we live in.

printmag.com

Imagine working for seven years designing the prototyping features at Figma and then seeing GPT-4 and realizing what AI would soon be able to do. That’s the story of Figma designer–turned–product manager Nikolas Klein. He shares his journey via a lovely illustrated comic—Webtoon style.

Klein emphasizes:

The truth is: There will always be new problems to solve. New ideas to take further. Even with AI, hard problems are still hard. An answer may come faster, but it’s not always right.

Hard Problems Are Still Hard: A Story About the Tools That Change and the Work That Doesn’t | Figma Blog

Figma designer–turned–product manager Nikolas Klein worked on building prototyping tools for seven years. Then AI changed the game.

figma.com

We’ve been feeling it for a while. AI-generated posts and comments filling up the feeds on LinkedIn. Em dashes were said to be the tell that AI wrote the content. Other patterns are easy to spot, like overuse of emojis in headings and my personal most-hated, the “it’s not X, it’s Y.” That type of construction is called an antithesis and it’s exploded. And now that I’ve pointed it out, I’m sure you’ll notice it everywhere too. Sorry, not sorry.

Sam Kriss, exploring why AI writes the way it does:

A lot of A.I.’s choices make sense when you understand that it’s…trying to write well. It knows that good writing involves subtlety: things that are said quietly or not at all, things that are halfway present and left for the reader to draw out themselves. So to reproduce the effect, it screams at the top of its voice about how absolutely everything in sight is shadowy, subtle and quiet. Good writing is complex. A tapestry is also complex, so A.I. tends to describe everything as a kind of highly elaborate textile. Everything that isn’t a ghost is usually woven. Good writing takes you on a journey, which is perhaps why I’ve found myself in coffee shops that appear to have replaced their menus with a travel brochure. “Step into the birthplace of coffee as we journey to the majestic highlands of Ethiopia.” This might also explain why A.I. doesn’t just present you with a spreadsheet full of data but keeps inviting you, like an explorer standing on the threshold of some half-buried temple, to delve in.

All of this contributes to the very particular tone of A.I.-generated text, always slightly wide-eyed, overeager, insipid but also on the verge of some kind of hysteria. But of course, it’s not just the words — it’s what you do with them. As well as its own repertoire of words and symbols, A.I. has its own fundamentally manic rhetoric. For instance, A.I. has a habit of stopping midway through a sentence to ask itself a question. This is more common when the bot is in conversation with a user, rather than generating essays for them: “You just made a great point. And honestly? That’s amazing.”

Why Does A.I. Write Like … That?

(Gift Link) If only they were robotic! Instead, chatbots have developed a distinctive — and grating — voice.

nytimes.com

Storyboard grid showing a young man and family: kitchen, driving, airplane, supermarket, night house, grill, dinner.

Directing AI: How I Made an Animated Holiday Short

My first taste of generating art with AI was back in 2021 with Wombo Dream. I even used it to create very trippy illustrations for a series I wrote on getting a job as a product designer. To be sure, the generations were weird, if not outright ugly. But it was my first test of getting an image by typing in some words. Both Stable Diffusion and Midjourney gained traction the following year, and I tried both as well. The results were never great or satisfactory. Years upon years of being an art director had made me very, very picky—or put another way, I had developed taste.

I didn’t touch generative AI art again until I saw a series of photos by Lars Bastholm playing with Midjourney.

Child in yellow jacket smiling while holding a leash to a horned dragon by a park pond in autumn.

Lars Bastholm created this in Midjourney, prompting “What if, in the 1970s, they had a ‘Bring Your Monster’ festival in Central Park?”

That’s when I went back to Midjourney and started to illustrate my original essays with images generated by it, but usually augmented by me in Photoshop.

In the intervening years, generative AI art tools developed a common set of functionality that was all very new to me: inpainting, style, chaos, seed, and more. Beyond closed systems like Midjourney and OpenAI’s DALL-E, open-source models like Stable Diffusion, Flux, and now a plethora of Chinese models offer even better prompt adherence and controllability via even more opaque-sounding features like control nets, LoRAs, CFG, and other parameters. It’s funny to me that for such an artistic field, the tools that enable these creations are so technical.
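
If you’ve never touched the open-model side of this, here’s roughly what those knobs look like in code. This is a minimal sketch using Hugging Face’s diffusers library with Stable Diffusion (the model ID and values are illustrative choices of mine, not a recommendation): guidance_scale is the “CFG” setting, and a seeded generator makes a generation reproducible.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load an open-weights text-to-image model (illustrative model ID)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a 1970s 'Bring Your Monster' festival in Central Park, film photo",
    guidance_scale=7.5,      # CFG: how strictly the model follows the prompt
    num_inference_steps=30,  # denoising steps
    generator=torch.Generator("cuda").manual_seed(42),  # fixed seed
).images[0]
image.save("monster_festival.png")
```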

My Site Stats for 2025

In 2025, I published 328 posts with a total of 118,445 words on this blog. Of course, in most of the posts, I’m quoting others, so excluding block quotes—those quoted passages greater than a sentence—I’m down to 76,226 words. Still pretty impressive, I’d say.

Post analysis 2025 - 328 posts. Top months: Oct 45, Jul 42, Mar 4. Link posts 283 (86%). Total words 118,445, avg 361.

I used Claude Code to write a little script that analyzed my posts from last year.
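
For the curious, the core of such a script is simple. Here’s a minimal sketch in Python (my reconstruction, not the actual script), assuming the posts are Markdown files in a hypothetical content/posts/2025 folder, with blockquotes marked by a leading >:

```python
from pathlib import Path

POSTS_DIR = Path("content/posts/2025")  # hypothetical location of the posts

post_count = 0
total_words = 0
words_excluding_quotes = 0

for post in POSTS_DIR.glob("*.md"):
    post_count += 1
    for line in post.read_text(encoding="utf-8").splitlines():
        words = len(line.split())
        total_words += words
        # Markdown blockquotes start with ">"; skip them for the second tally
        if not line.lstrip().startswith(">"):
            words_excluding_quotes += words

print(f"{post_count} posts, {total_words:,} words total")
print(f"{words_excluding_quotes:,} words excluding blockquotes")
print(f"average {total_words / post_count:.0f} words per post")
```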

In reviewing data from my analytics package, Umami, it’s also interesting to see which posts received the most views. By far, the most-viewed was “Beyond the Prompt,” my AI prompt-to-code shootout article. The others in the top five were:

That last one has always surprised me. I must’ve hit the Google lottery on it for some reason.

Speaking of links, since April—no data before—visitors clicked on links mentioned on this blog 2,949 times. I also wanted to see which linked items were most popular, by outbound clicks:

  1. AI 2027, naturally
  2. Smith & Diction’s catalog of “Usable Google Fonts”
  3. Matt Webb’s post on Do What I Mean
  4. A visualization called “The Authoritarian Stack” that shows how power, money, and companies connect
  5. The New York Times list of the “25 Most Influential Magazine Covers of All Time” (sadly, the gift link has since expired)

And finally, the year’s totals: 58,187 views from 42,075 visitors. That works out to an average of about 3,500 visitors per month. Tiny compared with other blogs out there. But my readers mean the world to me.

Anyway, some interesting stats, at least to me. Here’s to more in 2026.

Foggy impressionist painting of a steam train crossing a bridge, plume of steam and a small rowboat on the river below.

The Year AI Changed Design

At the beginning of this year, AI prompt-to-code tools were still very new to the market. Lovable had just relaunched in December and Bolt debuted just a couple months before that. Cursor was my first taste of using AI to code back in November of 2024. As we sit here in December, just 12 months later, our profession and the discipline of design have materially changed. Now, of course, the core is still the same. But how we work, how we deliver, and how we achieve results are different.

When ChatGPT got good (around GPT-4), I began using it as a creative sounding board. Design is never a solitary activity and feedback from peers and partners has always been a part of the process. To be able to bounce ideas off of an always-on, always-willing creative partner was great. To be sure, I didn’t share sketches or mockups; I was playing with written ideas.

Now, ChatGPT or Gemini’s deep research features are often where I start when I begin to tackle a new feature. And after the chatbot has written the report, I’ll read it and ask a lot of questions as a way of learning and internalizing the material. I’ll then use that as a jumping-off point for additional research. Many designers on my team do the same.

I’ve linked to a footer gallery, a navbar gallery, and now to round us out, here is a full-on Component Gallery. Web developer Iain Bean has been maintaining this library since 2019.

Bean writes in the about page:

The original idea for this site came from A Pattern Language, a 1977 book focused on architecture, building and planning, which describes over 250 ‘patterns’: forms which fit specific contexts, or to put it another way, solutions to design problems. Examples include: ‘Beer hall’, ‘Positive outdoor space’ and ‘Light on two sides of every room’.

Whereas the book focuses on the physical world, my original aim with this site was to focus on those patterns that appear on the web; these often borrow the word ‘pattern’ (see Patterns on the GOV.UK design system), but are more commonly called components, hence ‘the component gallery’ — unlike a component library, most of these components aren’t ready to use off-the-shelf, but they’ll hopefully inspire you to design your own solution to the problem you’re working to solve.

So if you ever need a reference for how different design systems handle certain components (e.g., combobox, segmented control, or toast), this is your site.

The Component Gallery

An up-to-date repository of interface components based on examples from the world of design systems, designed to be a reference for anyone building user interfaces.

component.gallery

One more post down memory lane. Phil Gyford chronicled his first few months online, thirty years ago in 1995. He talks of modems, floppies, email, Usenet, IRC, and friendly strangers on the internet.

I had forgotten how onerous it was to get online back then. Gyford writes:

It’s hard to convey how difficult it was to set things up. So new and alien to me. When reading computer magazines I’d always skipped articles about networking and while the computers at university had been connected together, that was only for the purposes of printing, scanning and transferring files.

First there was the issue of getting online at all. The Internet Starter Kit spent 59 pages explaining how to set up MacTCP, and PPP or SLIP, two different methods of connecting to the internet, the differences of which happily escape me now. I spent a lot of late nights fiddling with control panels and extensions, learning about IP addresses, domain name servers, etc.

And Gyford reminds us just how marvelous the invention of the internet was:

Before the web – and all the rest of it – how could you have shared your words with anyone? Write a letter to a newspaper or magazine and hope they published it a few days or months later? Create your own fanzine and distribute copies one-by-one to strangers, and posted in individually addressed and stamped envelopes? That was it, unless you were going to become a successful journalist or writer. Your reach, your world, was tiny.

But now, then, you could put anything you wanted on your own website and instantly it was visible by anyone in the world. OK, anyone in the world who was also online, which wasn’t many then, and they were all quite similar, but, still… they could be anywhere! And their number was growing.

And you could chat to people in real time and it didn’t matter where they were, they were here in front of you. Send emails back-and-forth to friends without writing letters, and buying stamps, and waiting days or weeks for a response. Instant! Weightless!

The post is worth a read. It’s complete with pictures of some artifacts from that time, including newspaper clippings, invoices, and journal entries.

My first months in cyberspace

Recalling the difficulties and wonder of getting online for the first time in 1995, including diary extracts from the time.

gyford.com

If you were into computers like I was between 1975 and 1998, you read Byte magazine. It wasn’t just product reviews and spec sheets—Byte offered serious technical depth, covering everything from assembly language programming to hardware architecture to the philosophy of human-computer interaction. The magazine documented the PC revolution as it happened, becoming required reading for anyone building or thinking deeply about the future of computing. It was also thick as hell.

Someone made a visual archive of Byte magazine, showing every printed page in a zoomable interface:

Before Hackernews, before Twitter, before blogs, before the web had been spun, when the internet was just four universities in a trenchcoat, there was BYTE. A monthly mainline of the entire personal computing universe, delivered on dead trees for a generation of hackers. Running from September 1975 to July 1998, its 277 issues chronicled the Cambrian explosion of the microcomputer, from bare-metal kits to the dawn of the commercial internet. Forget repackaged corporate press releases—BYTE was for the builders.

It’s a fun glimpse into the past before thin laptops, smartphones, and disco-colored gaming PCs.

Grid collage of vintage technology magazine pages and ads, featuring colorful retro layouts, BYTE covers and articles.

Byte - a visual archive

Explore a zoomable visual archive of BYTE magazine: all 277 issues (Sep 1975 - Jul 1998) scanned page-by-page, a deep searchable glimpse into the PC revolution.

byte.tsundoku.io

The Whole Earth Catalog, published by Stewart Brand several times a year between 1968 and 1972 (and occasionally until 1998), was the internet before the internet existed. It curated tools, books, and resources for self-education and DIY living, embodying an ethos of access to information that would later define the early web. Steve Jobs famously called it “one of the bibles of my generation,” and for good reason—its approach to democratizing knowledge and celebrating user agency directly influenced the philosophy of personal computing and the participatory culture we associate with the web’s early days.

Curated by Barry Threw and collaborators, the Whole Earth Index is a near-complete archive of the issues of the Whole Earth Catalog.

Here lies a nearly-complete archive of Whole Earth publications, a series of journals and magazines descended from the Whole Earth Catalog, published by Stewart Brand and the POINT Foundation between 1968 and 2002. They are made available here for scholarship, education, and research purposes.

The info page also includes a quote from Stewart Brand:

“Dateline Oct 2023, Exactly 55 years ago, in 1968, the Whole Earth Catalog first came to life. Thanks to the work of an ongoing community of people, it prospered in various forms for 32 years—sundry editions of the Whole Earth Catalog, CoEvolution Quarterly, The WELL, the Whole Earth Software Catalog, Whole Earth Review, etc. Their impact in the world was considerable and sustained. Hundreds of people made that happen—staff, editors, major contributors, board members, funders, WELL conference hosts, etc. Meet them here.” —Stewart Brand

Brand’s mention of The WELL is particularly relevant here—he founded that pioneering online community in 1985 as a digital extension of the Whole Earth ethos, creating one of the internet’s first thriving social networks.

View of Earth against black space with large white serif text "Whole Earth Index" overlaid across the globe.

Whole Earth Index

Here lies a nearly-complete archive of Whole Earth publications, a series of journals and magazines descended from the Whole Earth Catalog, published by Stewart Brand and the POINT Foundation between 1968 and 2002.

wholeearth.info

Huei-Hsin Wang at NN/g published a post about how to write better prompts for AI prompt-to-code tools.

When we asked AI-prototyping tools to generate a live-training profile page for NN/G course attendees, a detailed prompt yielded quality results resembling what a human designer created, whereas a vague prompt generated inconsistent and unpredictable outcomes across the board.

The article details much of what can go wrong. Personally, I don’t need to read about what I experience daily, so the interesting bit for me is about two-thirds of the way in, where Wang lists five strategies for getting better results:

  • Visual intent: Name the style precisely—use concrete design vocabulary or frameworks instead of vague adjectives. Anchor prompts with recognizable patterns so the model locks onto the look and structure, not “clean/modern” fluff.
  • Lightweight references: Drop in moodboards, screenshots, or system tokens to nudge aesthetics without pixel-pushing. Expect resemblance, not perfection; judge outcomes on hierarchy and clarity, not polish alone.
  • Text-led visual analysis: Have AI describe a reference page’s layout and style in natural language, then distill those characteristics into a tighter prompt. Combine with an image when possible to reinforce direction.
  • Mock data first: Provide realistic sample content or JSON so the layout respects information architecture. Content-driven prompts produce better grouping, hierarchy, and actionable UI than filler lorem ipsum.
  • Code snippets for precision: Attach component or layout code from your system or open-source libraries to reduce ambiguity. It’s the most exact context, but watch length; use selectively to frame structure.
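
To make the “mock data first” strategy concrete, here’s a hedged sketch of what such a prompt might look like. The page type echoes the NN/g example above, but the fields, values, and wording are hypothetical, not taken from the article:

```python
import json

# Hypothetical sample content for a live-training attendee profile page
mock_data = {
    "name": "Ada Park",
    "role": "Senior UX Researcher",
    "courses": [
        {"title": "Measuring UX", "date": "2025-03-14", "status": "completed"},
        {"title": "Facilitating Workshops", "date": "2025-06-02", "status": "upcoming"},
    ],
    "certifications": ["UXC #12345"],
}

# Content-driven prompt: the model sees real structure, not lorem ipsum
prompt = f"""
Build a profile page for a live-training course attendee.
Use a card layout with clear hierarchy: name and role first,
then a table of courses, then certifications.
Render exactly this data:

{json.dumps(mock_data, indent=2)}
"""
print(prompt)
```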

Prompt to Design Interfaces: Why Vague Prompts Fail and How to Fix Them

Create better AI-prototyping designs by using precise visual keywords, references, analysis, as well as mock data and code snippets.

nngroup.com

On the heels of OpenAI’s report “The state of enterprise AI,” Anthropic published a blog post detailing research about how AI is being used by the employees building AI. The researchers surveyed 132 engineers and researchers, conducted 53 interviews, and looked at Claude usage data.

Our research reveals a workplace facing significant transformations: Engineers are getting a lot more done, becoming more “full-stack” (able to succeed at tasks beyond their normal expertise), accelerating their learning and iteration speed, and tackling previously-neglected tasks. This expansion in breadth also has people wondering about the trade-offs—some worry that this could mean losing deeper technical competence, or becoming less able to effectively supervise Claude’s outputs, while others embrace the opportunity to think more expansively and at a higher level. Some found that more AI collaboration meant they collaborated less with colleagues; some wondered if they might eventually automate themselves out of a job.

The post highlights several interesting patterns.

  • Employees say Claude now touches about 60% of their work and boosts output by roughly 50%.
  • Employees say that 27% of AI‑assisted tasks is work that wouldn’t have happened otherwise—like papercut fixes, tooling, and exploratory prototypes.
  • Engineers increasingly use it for new feature implementation and even design/planning.

Perhaps most provocative is the effect on career trajectories. Many engineers describe becoming managers of AI agents, taking accountability for fleets of instances and spending more time reviewing than writing net‑new code. Short‑term optimism meets long‑term uncertainty: productivity is up, ambition expands, but the profession’s future shape—levels of abstraction, required skills, and pathways for growth—remains unsettled. See also my series on the design talent crisis.

Two stylized black line-drawn hands over a white rectangle on a pale green background, suggesting typing.

How AI Is Transforming Work at Anthropic

anthropic.com

This is a fascinating watch. Ryo Lu, Head of Design at Cursor, builds a retro Mac calculator using Cursor agents while being interviewed. Lu’s personal website is an homage to Mac OS X, complete with Aqua-style UI elements. He runs multiple local background agents without letting them step on each other, fixes bugs live, and themes the UI to match system styles so it feels designed—not “purple AI slop,” as he calls it.

Lu, as interviewed by Peter Yang, on how engineers and designers work together at Cursor (lightly edited for clarity):

So at Cursor, the roles between designers, PM, and engineers are really muddy. We kind of do the part [that is] our unique strength. We use the agent to tie everything. And when we need help, we can assemble people together to work on the thing.

Maybe some of [us] focus more on the visuals or interactions. Some focus more on the infrastructure side of things, where you design really robust architecture to scale the thing. So yeah, there is a lot less separation between roles and teams or even tools that we use. So for doing designs…we will maybe just prototype in Cursor, because that lets us really interact with the live states of the app. It just feels a lot more real than some pictures in Figma.

And surprisingly, they don’t have official product managers at Cursor. Yang asks, “Did you actually hire a PM? Because last time I talked to Lee [Robinson] there was like no PMs.”

Lu again, lightly edited for clarity:

So we did not hire a PM yet, but we do have an engineer who used to be a founder. He took a lot more of the PM-y side of the job, and then became the first PM of the company. But I would still say a lot of the PM jobs are kind of spread across the builders in the team.

That mostly makes sense because it’s engineers building tools for engineers. You are your audience, which is rare.

Full Tutorial: Design to Code in 45 Min with Cursor's Head of Design | Ryo Lu

Design-to-code tutorial: Watch Cursor's Head of Design Ryo Lu build a retro Mac calculator with agents - a 45-minute, hands-on walkthrough to prototype and ship

youtube.com

It’s always interesting for me to read how other designers use AI to vibe code their projects. I think using Figma Make to conjure a prototype is one thing, but vibe coding something in production is entirely different. Personally, I’ve been through it a couple of times, which I’ve already detailed here and here.

Anton Sten recently wrote about his process. Like me, he starts in Figma:

This might be the most important part: I don’t start by talking to AI. I start in Figma.

I know Figma. I can move fast there. So I sketch out the scaffolding first—general theme, grids, typography, color. Maybe one or two pages. Nothing polished, just enough to know what I’m building.

Why does this matter? Because AI will happily design the wrong thing for you. If you open Claude Code with a vague prompt and no direction, you’ll get something—but it probably won’t be what you needed. AI is a builder, not an architect. You still have to be the architect.

I appreciate Sten’s conclusion not to let the AI do all of it for you, echoing Dr. Maya Ackerman’s sentiment of humble creative machines:

But—and this is important—you still need design thinking and systems thinking. AI handles the syntax, but you need to know what you’re building, why you’re building it, and how the pieces fit together. The hard part was never the code. The hard part is the decisions.

Vibe coding for designers: my actual process | Anton Sten

An honest breakdown of how I built and maintain antonsten.com using AI—what actually works, where I’ve hit walls, and why designers should embrace this approach.

antonsten.com

A new documentary called The Age of Audio traces the history and impact of podcasting, exploring the resurgence of audio storytelling in the 21st century. In a clip from the doc in the form of a short, Ben Hammersley tells the story of how he coined the term “podcast.”

I’m Ben Hammersley, and I do many things, but mostly I’m the person who invented the word podcast. And I am very sorry.

I can tell you the story. This was in 2004, and I was a writer for the Guardian newspaper in the UK. And at the time, the newspaper was paper-centric, which meant that all of the deadlines were for the print presses to run. And I’d written this article about this sort of emerging idea of downloadable audio content that was automatically downloaded because of an RSS feed.

I submitted the article on time, but then I got a phone call from my editor about 15 minutes before the presses were due to roll saying, “Hey, that piece is about a sentence short for the shape of the page. We don’t have time to move the page around. Can you just write us another sentence?”

And so I just made up a sentence which says something like, “But what do we call this phenomenon?” And then I made up some silly words. It went out, it went into the article, didn’t think any more of it.

And then about six months later or so, I got an email from the Oxford American Dictionary saying, “Hey, where did you get that word from that was in the article you wrote? It seems to be the first citation of the word ‘podcast.’” Now here we are almost 20 years later, and it became part of the discourse. I’m totally fine with it now.

(h/t Jason Kottke / Kottke.org)

Older man with glasses and mustache in plaid shirt looking right beside a green iPod-style poster labeled “Age of Audio.”

Age of Audio – A documentary about podcasting

Explore the rise of podcasting through intimate conversations with industry pioneers including Marc Maron, Ira Glass, Kevin Smith, and more. A seven-year journey documenting the audio revolution that changed how we tell stories.

aoamovie.com

I love this piece in The Pudding by Michelle Pera-McGhee, where she breaks down what motifs are and how they’re used in musicals. Using audio samples from Wicked, Les Misérables, and Hamilton, it’s a fun, interactive—sound on!—essay.

Music is always telling a story, but here that is quite literal. This is especially true in musicals like Les Misérables or Hamilton where the entire story is told through song, with little to no dialogue. These musicals rely on motifs to create structure and meaning, to help tell the story.

So a motif doesn’t just exist, it represents something. This creates a musical storytelling shortcut: when the audience hears a motif, that something is evoked. The audience can feel this information even if they can’t consciously perceive how it’s being delivered.

If you think about it, motifs are the design systems of musicals.

Pera-McGhee lists out the different use cases and techniques for motifs:

  • Representing a character with a recurring musical idea, often updated as the character evolves.
  • Representing an abstract idea (love, struggle, hope) via leitmotifs that recur across scenes.
  • Creating emotional layers by repeating the same motif in contrasting contexts (joy vs. grief).
  • Weaving multiple motifs together at key structural moments (end-of-act ensembles like “One Day More” and “Non-Stop”).

I’m also reminded of this excellent video about the motifs in Hamilton.

Explore 80+ motifs at left; Playbill covers for Hamilton, Wicked, Les Misérables center; yellow motif arcs over timeline labeled Act 1 | Act 2.

How musicals use motifs to tell stories

Explore motifs from Hamilton, Wicked, and Les Misérables.

pudding.cool

Economics PhD student Prashant Garg performed a fascinating analysis of Bob Dylan’s lyrics from 1962 to 2012 using AI. He detailed his project in Aeon:

So I fed Dylan’s official discography from 1962 to 2012 into a large language model (LLM), building a network of the concepts and connections in his songs. The model combed through each lyric, extracting pairs of related ideas or images. For example, it might detect a relationship between ‘wind’ and ‘answer’ in ‘Blowin’ in the Wind’ (1962), or between ‘joker’ and ‘thief’ in ‘All Along the Watchtower’ (1967). By assembling these relationships, we can construct a network of how Dylan’s key words and motifs braid together across his songs.
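
As a sketch of what that assembly step might look like, here is a minimal reconstruction in Python. It is not Garg’s actual code: the two pairs are just the examples he cites, and networkx is my choice of graph library, not necessarily his.

```python
import networkx as nx

# Concept pairs as an LLM-extraction step might return them:
# (concept_a, concept_b, song)
pairs = [
    ("wind", "answer", "Blowin' in the Wind"),
    ("joker", "thief", "All Along the Watchtower"),
]

G = nx.Graph()
for a, b, song in pairs:
    if G.has_edge(a, b):
        # Repeated pairings across songs strengthen a connection
        G[a][b]["weight"] += 1
        G[a][b]["songs"].append(song)
    else:
        G.add_edge(a, b, weight=1, songs=[song])

# Motifs Dylan returns to again and again surface as high-degree nodes
print(sorted(G.degree, key=lambda node_deg: node_deg[1], reverse=True))
```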

The resulting dataset is visualized in a series of node graphs and bar charts. What’s interesting is that AI is able to see Dylan’s work through a new lens, something that prior scholarship may have missed.

…Yet, when used as a lens rather than an oracle, the same models can jolt even seasoned critics out of interpretive ruts and reveal themes they might have missed. Far from reducing Dylan to numbers, this approach highlights how intentionally intricate his songwriting is: a restless mind returning to certain images again and again, recombining them in ever-new mosaics. In short, AI lets us test the folklore around Dylan, separating the theories that data confirm from those they quietly refute.

Black-and-white male portrait overlaid by colorful patterned strips radiating across the face, each strip bearing small single-word labels.

Can AI tell us anything meaningful about Bob Dylan’s songs?

Generative AI sheds new light on the underlying engines of metaphor, mood and reinvention in six decades of songs

aeon.co