
40 posts tagged with “product management”

Marcus Moretti’s guide to agent-native product management, in Every, is the orchestration shift showing up on the PM side of the team. The guide opens with the 1930s Procter & Gamble origin story: someone owns the product. The job has been rewritten so many times since then that PMs are now expected to be design partners, diplomats, sales people, and statisticians on top of running the 100+ software subscriptions the average company buys. What’s interesting is that the piece is describing the old role, finally legible again now that agents can absorb the administrative debt that piled up on top of it.

Now, much of the interdisciplinary work that goes into product management can be done by an LLM in minutes, sometimes seconds. What used to be a three-hour-long analytics investigation is now a simple back-and-forth with Claude. A product review that used to be a fortnightly chore emerges from a single typo-ridden chat message. This has been my recent experience, at least. I no longer struggle with semicolons in SQL queries or even write tickets. All of my product management work happens in conversation with, in my case, Claude Code. The conversation is the work.

“The conversation is the work” sounds like a description of the new job. Read it next to the 1930s origin story and it’s a description of the old one. The Brand Man at P&G wasn’t writing SQL; he was deciding what the product should be and who it was for. The intervening ninety years of accumulated tooling—agile ceremonies and ticket hygiene, analytics dashboards on top of those—was friction PMs had to push through to get back to the actual work. Moretti’s /ce-strategy command, modeled on Richard Rumelt’s Good Strategy Bad Strategy, isn’t a new artifact either. Strategy documents predate LLMs by decades. What’s new, Moretti says, is the cadence: every few months, the agent re-runs the strategy interview with the accumulated context of everything you’ve shipped.

Writing a strategy document cold is hard. The best way to do it, I’ve found, is to have an agent interview you. The ce-strategy skill does this. It runs through the sections in order and has built-in guidance about what makes a good answer (and what kinds of answers to push back on). […] The interview is deliberately conversational. If the first answer to, “What’s the core problem this product solves” is vague, the agent drills down: “Whose situation specifically? What do they try today, and why doesn’t it work?” The guidance here is taken from personal experience and from the Rumelt book.
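For readers who haven't set one up: Claude Code custom slash commands are plain markdown files in a project's `.claude/commands/` directory, and the file's contents become the prompt when the command is invoked. A minimal sketch of what a strategy-interview command could look like (the filename, wording, and section structure here are my illustrative guesses, not Moretti's actual skill):

```markdown
<!-- .claude/commands/ce-strategy.md (hypothetical sketch, not Moretti's file) -->
Interview me to produce a product strategy document, one section at a time,
following the kernel from Rumelt's Good Strategy Bad Strategy:
diagnosis, guiding policy, coherent actions.

Ground rules for the interview:
- Ask one question, then wait for my answer before moving on.
- If an answer is vague, drill down: whose situation specifically?
  What do they try today, and why doesn't it work?
- Push back on goals dressed up as strategy.
- Use the full context of what we've shipped in this repo so far.

When every section has a concrete answer, write the result to strategy.md.
```

The re-run cadence Moretti describes falls out of this setup for free: because the command runs inside the project, each invocation starts from the accumulated context of everything shipped since the last interview.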

The guide assumes a PM who has the taste to recognize when the agent’s follow-up has exposed a gap. The ones who don’t will end up with a strategy.md full of confident-sounding nonsense, generated quickly and reviewed lightly. Agent-native PM removes the alibi that you were too busy with tickets to do the actual thinking. That maps to a warning from Raj Nandan Sharma that when generation gets cheap, the scarce skill is refusal: knowing what to throw out and why. Moretti’s PM is doing exactly that, sentence by sentence, in the strategy interview.

Moretti closes:

LLMs have allowed our tools to catch up with the multifaceted duties of product managers. For me, product management has been reduced to the interesting parts: dreaming up features, thinking through designs, looking at interesting data, and talking to users. We all feel the economic imperative to embrace AI tools, but the better reason, I think, is to make work more fun.

Hand-drawn letter "G" in black chalk-style script on a light blue background, with a black bookmark icon in the top-left corner.

A Guide to Agent-native Product Management

A step-by-step guide to using agentic capabilities for better product management

every.to

Cat Wu, Anthropic’s Head of Product for Claude Code, describes the hiring filter on her team in her interview with Lenny Rachitsky:

I think all of the roles are merging. PMs are doing some engineering work. Engineers are doing PM work. Designers are PMing and also landing code. You can either hire a lot more engineers who have great product taste, or you can keep your engineering hiring the same and hire a lot more PMs to help guide some of their work. On our team, we’re pretty focused on hiring engineers with great product taste. This way we can reduce the amount of overhead for shipping any product. Like there are many engineers on our team who are fully able to end to end go from see user feedback on Twitter through to like ship a product at the end of the week with almost no product involvement. And this, I think, is actually like the most efficient way to ship something. So I think like engineer and PM are kind of overlapping and you will get a lot of benefit from having more of either. I think product taste is still a very rare skill to have and we’ll pretty much hire anyone who we feel has demonstrated this strongly.

This is what the Full Stack Builder pattern looks like as a hiring filter. The headline is the merging of roles. Wu’s own background says where the bench comes from:

Yeah, I was an engineer for many years. I was then a VC very briefly before joining Anthropic. And actually almost all the PMs on our team have either been engineers or ship code here on Claude Code. And so that’s one of the things that I think helps build trust with the team and also just enables us to move a lot faster. And then actually our designers also have been front-end engineers before.

To be clear, Wu doesn’t say the roles have merged; what she’s describing is the continued blurring of the lines between them.

How Anthropic’s product team moves faster than anyone else | Cat Wu (Head of Product, Claude Code)

Cat Wu is Head of Product for Claude Code and Cowork at Anthropic, building one of the most important AI products of this generation. Before joining Anthropic, Cat spent years as an engineer and briefly worked in VC. Today, she’s interviewing hundreds of product managers who are trying to break…

youtube.com

Ant Murphy opens with an eyebrow-raising McKinsey number:

McKinsey reports that 88% of organisations say they “use AI” but only about 1% have mature AI deployments delivering real value.

Murphy’s explanation for the gap is familiar: the diffusion of innovation, Geoffrey Moore’s chasm between early adopters and the majority, now applied to AI. What’s less common in the AI discourse is a behavioral explanation for why adoption keeps stalling. Murphy:

AI is personal. It’s not another tool, to some it’s viewed as a replacement. “AI attacks our identity in a way that most software doesn’t” — Vikram Sreekanti

That resistance shows up in the record: a friend’s “I didn’t sign up for this”. Claire Vo described designers as the most resistant to change in the EPD triad, vocal AI opponents with little appetite for campaigning for resources. None of it is irrational. Daniel Kahneman and Amos Tversky found that humans weigh losses about twice as heavily as equivalent gains. Years of accumulated craft become our identity. AI doesn’t ask you to learn new tools; it asks you to renegotiate what made you worth hiring in the first place. The reskilling conversation treats that as a capability problem. Identity problems don’t resolve themselves through training on new tools.

Murphy on what that requires:

Surviving a paradigm shift like this is less about what your product does […] Instead it’s about you adapting to the change.

The 88% are held back by what AI is asking them to let go of. Murphy’s argument is that organizations clearing the chasm are doing the internal work first—on process, on how teams function—before it shows up in the product.

There’s an old relationship adage that you can’t be a good partner to someone until you’ve worked out your own stuff first. I think Murphy’s argument is the organizational equivalent.

Diagram labeled "The AI Bubble" with a red arrow pointing to a tiny red dot inside a large circle labeled "Everyone Else," illustrating how small the AI bubble is relative to the general population.

The AI Chasm — Ant Murphy

I challenge the hype around AI and share a more grounded perspective on how adoption actually works. Drawing on real data and firsthand experience, I break down why most companies are still early in the AI journey—and what product leaders should focus on instead.

antmurphy.me

I’ve been pro-prototype: PMs replacing PRDs, designers prototyping interactions in code. Pavel Samsonov, writing at Product Picnic, aims at exactly that position. He opens by borrowing a distinction from Andy Polaine:

Demos and prototypes sit on a continuum, but I consider demos something to help you show a concept to other people in a form that looks and feels like the real thing. Prototypes are things you create to test something you don’t know until you build and test it.

Correct distinction. A demo succeeds on stakeholder approval; a prototype succeeds on learning. Both artifacts can be interactive and polished. What separates them is what counts as success. Samsonov on what happens when teams conflate them:

The only thing these demos are helping you test is whether your stakeholder likes what they see (the first loop) and as soon as they say “yes,” it becomes good enough to ship. Whether that second loop (releases go out, measurements come in) ever gets tracked or not is not something I’d be willing to put money on. Because once the demo is productionized, it goes from the realm of delivery velocity (which gets you shoutouts and promotions) into the realm of maintenance (which tends to be ignored even as it eats up more than half of the team’s bandwidth).

AI makes it easier to produce both, and Samsonov’s read on what happens when teams use the speedup wrong:

Shoving out more prototypes is not a heuristic for success; it is a heuristic for failure because it shows that you don’t know what you are trying to learn.

Agreed. Samsonov goes further:

This is exactly why AI-generated prototypes are not working, and have not helped anyone do anything ever. Some have accused me of going too far with this assertion, but I stand by it, because it is rooted in the very nature of what a prototype is (and is not), and what makes it successful (or does not).

Here’s where I differ. Brian Lovin’s Notion prototype playground exists because static mocks enforce golden-path thinking. The playground surfaces the messy middle of AI chat: follow-ups and latency changes no one mocks up. Édouard Wautier’s Dust team prototypes state changes and motion Figma can’t show. Figma PMs ran five user interviews in two days off an AI-built prototype, which is a textbook closed second loop. All three count as prototype work.

Samsonov’s diagnosis is right. His absolute stance is, well, too absolute. AI-generated prototypes haven’t helped anyone only if you assume they’re all demos, which is exactly what the distinction he just drew tells us not to assume.

Product Picnic 64 title card over a vintage black-and-white photo of three people eating and drinking outdoors on rocky terrain.

Designers will never have influence without understanding how organizations learn

We confuse prototypes with demos, and validation with confirmation bias. As a result, we cannot lead — instead, we are led.

productpicnic.beehiiv.com

(Second link to Chad Johnson this week, but I just discovered his Substack, so ¯\_(ツ)_/¯.)

Chad Johnson, writing in his newsletter, argues that designer influence in product decisions comes from something other than craft output. He lays out the underlying dynamic:

Roadmaps are shaped less by who has the best ideas and more by who controls the framing of tradeoffs. Every roadmap decision is a bet: build this instead of that, now instead of later, for these users instead of those. Whoever makes the risk feel smaller tends to win.

So where does the designer fit? Johnson:

The most influential designers at startups do not position themselves as makers of screens. They act as orientation devices for the team. Orientation is the ability to help a group understand where they are, what matters, and what tradeoffs are real. It precedes prioritization, and it makes decision-making possible.

A designer whose output stops at screens is working on the wrong layer of the problem. Johnson lists the skills that back the orientation role:

Designers who shape direction invest in strategic framing, business literacy, and narrative construction. They learn to say no with evidence and to disagree without drama.

Johnson’s list is right as far as it goes. He understates one skill: legibility. A lot of design influence breaks down at translation. The thinking is strategic; the communication stays in design vocabulary. A sharp problem statement understandable only to other designers stays in the design review. Designers who change the conversation make their analysis readable in product and business terms without flattening it. That’s the same move Johnson gestures at when he describes “decision-ready artifacts” as “tools for comparison… designed to provoke judgment, not admiration.”

Johnson’s closer calls the future of design leadership “quieter, more rigorous, and deeply strategic.” That’s right. It’s also a role that depends on being read by the people making the call.

Large-scale flowchart on a white wall with quirky decision questions including "Have you ever missed an airplane flight?" and "Are you good with names?"

Why Most Designers Will Never Influence Product Roadmaps

A practical explanation of how roadmap decisions are really made, and how designers can gain influence

chadsnewsletter.substack.com

Tommaso Nervegna writes about LinkedIn killing its Associate Product Manager program and replacing it with a new role called the “Full Stack Builder.” The structural bet is interesting, but the finding from their rollout is what matters:

The expectation was that AI would be a great equalizer: juniors would benefit most because AI would close their skill gaps, while seniors would resist the change. The reality was the opposite. Top performers adopted AI fastest and derived the most value from it. Why? Because they had the judgment and experience to know what to ask for, how to evaluate the output, and where to apply it for maximum leverage.

That tracks with everything I’ve predicted, experienced, and seen. The skill that makes AI useful is knowing what good looks like before and after the model generates something. That ability comes from reps.

Nervegna distills LinkedIn CPO Tomer Cohen’s thesis to five skills AI cannot automate:

The five skills that AI cannot automate, according to Cohen, are Vision, Empathy, Communication, Creativity, and Judgment. As he puts it: “I’m working hard to automate everything else.”

The operational version:

The critical insight: the builder orchestrates the agents. The agents execute. Judgment stays human. This is not about replacing people with AI. It’s about compressing the team needed to ship something meaningful from fifteen people to three - or even one.

I’ve been calling this the orchestrator gap: the distance between a designer who uses AI and one who directs it. LinkedIn just gave it a job title. I think we will see more companies go this way. Whether or not it’s a good idea remains to be seen.

A Renaissance-era man studies blueprint sketches on a glowing drafting table while a giant mechanical lobster draws on the plans with an ornate pen.

The Full Stack Builder: The End of the Design Process as We Know It

The double diamond is a liability. Engineers ship faster than designers can explore. The PM role is dissolving and the three profiles that will survive this era look nothing like who we’ve been hiring

nervegna.substack.com

The Sonos app disaster taught me something about roadmaps. Leadership kept adding initiatives—Sonos Radio, the Ace headphones—without ever naming what those additions displaced. QA got squeezed. Stability testing got cut. The designers who warned them were overruled. No leader said out loud what was being sacrificed to make room.

Yusuf Aytas names exactly this failure:

People like to talk about priorities as if the main problem is choosing what matters. In practice, the deterministic factor is capacity. Team capacity. System capacity. The share you lose to maintenance, interruptions, coordination, and keeping the machine fit to run. Ignoring these physical limits turns an ambitious roadmap into a collective illusion.

“Collective illusion.” That’s the right name for it. Aytas on where the dishonesty starts:

A new customer request appears. Leadership wants a visible bet. Sales needs something for a deal. Everyone talks about importance. Almost nobody says what gets pushed out. That is the real decision. They have only added pressure and left the team to absorb the contradiction later.

Aytas builds the whole piece around a carpentry metaphor—one saw, limited operators, timber that needs oiling and adjustment before it can be cut. Software hides the constraint better, but the physics are the same. There’s more in the piece on shaping work before it competes for capacity, using visible investment buckets, and why reallocation is never free.

A green manual press machine surrounded by bulging white sacks inside a rustic mud-walled storage shed with a corrugated metal roof.

Capacity Is the Roadmap

Most roadmap problems are capacity problems. Make investment buckets visible, budget interrupts, and force trade-offs into the open.

yusufaytas.com

Designers have been saying this for years. Cameras don’t take pictures, photographers do. Tools don’t make you a better designer. Now the PM world is arriving at the same conclusion.

Shreyas Doshi argues that AI tools will commoditize across companies—any effective tool becomes common knowledge—and the only durable career moat is the human judgment applied on top of AI outputs. He calls it “Product Sense.”

Tools have never been a significant source of alpha in product success and that is not changing with AI tools. What this means for you personally is that - while you can and should use all the AI tools you can - you cannot bank on your use of those AI tools today to provide you a long-term advantage in your product career.

Replace “product people” with “designers” and this could be a post on my blog. The five skills Shreyas decomposes Product Sense into—empathy, simulation, strategic thinking, taste, creative execution—are skills good designers have cultivated under different names for decades.

The piece includes an appended Claude conversation that stress-tests the argument. The sharpest exchange challenges the Silicon Valley orthodoxy that fast B+ beats slow A+:

In practice, the B+ decision made quickly tends to create a cascade of follow-on decisions, each of which is also slightly off, and you end up significantly off-course in ways that are expensive to correct. Whereas the A+ decision, even if it takes longer, tends to set you on a trajectory where subsequent decisions are easier and more obvious. The compounding effect favors quality of judgment, not speed of judgment.

Good judgment compounds. Bad judgment compounds too, just in the wrong direction.

Definition slide: "Product Sense is the ability to make correct product decisions, both macro & micro, in the presence of significant ambiguity."

Why Product Sense is the only product skill that will matter in the AI age

I get asked all the time:

shreyasdoshi.substack.com

Prototypes have always been alignment tools. Whether you’re testing with users or convincing leadership, the prototype’s job is to make the abstract concrete. That part isn’t new.

What’s worth noticing in Emma Webster’s case study roundup on the Figma blog is who’s doing the prototyping. Three stories. Three product managers. Zero designer protagonists.

ServiceNow’s Ram Devanathan explains the dynamic:

“They have a big portfolio, so they can’t always pivot directly to my project.”

So Ram built it himself in Make. His designer’s mockup missed the nuance he was after, so he took a crack at it:

“Make helped me show what I meant rather than trying to describe it in the abstract. I’m able to explain my ideas better. I’m able to convince people faster. That reduces the whole cycle for me.”

Ticketmaster PM Brian Muehlenkamp prototyped an AI assistant that wasn’t even on the roadmap and shipped it. Affirm’s SVP of Product Vishal Kapoor puts the value in craft terms:

“The real work lives in the variations, rabbit holes, and edge cases. It requires a lot of thinking, a lot of precision, and a lot of love.”

All three stories follow the same arc: PM has an idea, designer is unavailable or the mockup misses the mark, PM builds it in Make, team aligns faster. Designers aren’t the heroes of these stories. They’re the bottleneck the tool routes around.

I don’t think that’s Figma’s intended message. But it’s the one that came through to me.

Colorful abstract illustration mixing UI elements like toggles, cursors, and image placeholders with decorative floral patterns on a purple background.

3 Ways Teams Are Building Conviction Faster With Figma Make | Figma Blog

Product managers at ServiceNow, Ticketmaster, and Affirm are using Figma Make to prototype their way forward.

figma.com

Darragh Curran’s 2× goal reads like a halftime speech. We can do this. The tools are here. The gap is behavioral. Double your output in twelve months.

Claire Vo wrote the post-game report:

If AI adoption had 7 stages of grief, almost all of you would be in denial. No matter how many AI memos your CEO sends, the amount of Claude that’s being Coded, the chatbots in app and the evals in data—I’m here to tell you: you’re not competing. In fact, you probably can’t anymore.

Vo’s target is the company that thinks it’s adapting: AI features shipped, internal power users, a natural-language interface named after a gem. She’s not buying it:

While they try on the bows and ribbons of an AI-native team, they ignore the fact that their bones are old and the company has calcified. For the most part: sales still sells the same and marketing is still talking about channels and CAC and product says “prioritize” and eng says “capacity” and the board is endlessly asking either about Q1 perf and Q2 projections or the ever-elusive “increase in product velocity.”

“Bows and ribbons” versus “bones.” That’s the whole post in one sentence.

I have some sympathy for the incumbents, though. Vo’s startup-swagger framing undersells how much gravitational pull a $100M business carries. Enterprise contracts, compliance obligations, a customer base that didn’t sign up for a pivot. The companies she’s diagnosing aren’t stupid. They’re heavy. And heavy things don’t accelerate the same way light things do, even when both see the cliff.

None of that makes her wrong. It just means even the companies that want to change are fighting physics. But they’ll have to figure it out sooner than later.

You’ve been kicked out of the arena, you just don’t know it yet

No matter how many AI memos your CEO sends, the amount of Claude that’s being Coded, the chatbots in app and the evals in data--I’m here to tell you: you’re not competing. In fact, you probably can’t anymore.

x.com

I’ve seen this at every company past a certain size: you spot a disjointed UX problem across the product, you know what needs to happen, and then you spend three months in alignment meetings trying to get six teams to agree on a button style.

A recent piece from Laura Klein at Nielsen Norman Group examines why most product teams aren’t actually empowered, despite what the org chart claims. Klein on fragmentation:

When you have dozens of empowered teams, each optimizing its own metrics and building its own features, you get a product that feels like it was designed by dozens of different companies. One team’s area uses a modal dialog for confirmations. Another team uses an inline message. A third team navigates to a new page. The buttons say Submit in one place, Save in another, and Continue in a third. The tone of the microcopy varies wildly from formal to casual.

Users don’t see teams. They don’t see component boundaries. They just see a confusing, inconsistent product that seems to have been designed by people who never talked to each other, because, in a sense, it was.

Each team was empowered to make the best decisions for their area, and it did! But nobody was empowered to maintain coherence across the whole experience.

That last line is the whole problem. “Coherence,” as Klein calls it, is a design leadership responsibility, and it gets harder as AI lets individual teams ship faster without coordinating with each other. If every squad can generate production UI in hours instead of weeks, the fragmentation described here accelerates. Design systems become the only thing standing between your product and a Frankenstein experience.

The article is also sharp on what happens to PMs inside this dysfunction:

Picture a PM who spends 70% of her time in meetings coordinating with other teams, getting buy-in for a small change, negotiating priorities, trying to align roadmaps, escalating conflicts, chasing down dependencies, and attending working groups created to solve coordination problems. She spends a tiny fraction of her time with users. The rest is spent writing documents that explain her team’s work to other teams, updating roadmaps, reporting status, and attending planning meetings. She was hired to be a strategic product thinker, but she’s become a project manager, focused entirely on logistics and coordination.

I’ve watched this happen to PMs I’ve worked with. The coordination tax eats the strategic work. Marty Cagan calls this “product management theater”—a surplus of PMs who function as overpaid project managers. If AI compresses the engineering work but the coordination overhead stays the same, that ratio gets even more lopsided.

The fix is smaller teams with real ownership and strong design systems that enforce coherence without requiring 14 alignment meetings. But that requires organizational courage most companies don’t have.

"Why Most Product Teams Aren't Really Empowered" headline with three hands untangling a ball of dark-blue yarn and NN/G logo.

Why Most Product Teams Aren’t Really Empowered

Although product teams say they’re empowered, many still function as feature factories and must follow orders.

nngroup.com

Every article I share on this blog starts the same way: in my RSS reader. I use Inoreader to follow about a hundred feeds—design blogs, tech publications, and independent newsletters. Every morning I scroll through what’s new, mark what’s interesting, and the best stuff eventually becomes a link post here. It’s not a fancy workflow. It’s an RSS reader and a notes app. But it works because the format works.

This is a 2023 article, but I’m fascinated by it because Google Reader was so influential in my life. David Pierce, writing for The Verge, chronicles how Google Reader came to be and why Google killed it.

Chris Wetherell, who built the first prototype, wasn’t thinking about an RSS reader. He was thinking about a universal information layer:

“I drew a big circle on the whiteboard,” he recalls. “And I said, ‘This is information.’ And then I drew spokes off of it, saying, ‘These are videos. This is news. This is this and that.’” He told the iGoogle team that the future of information might be to turn everything into a feed and build a way to aggregate those feeds.

Jason Shellen, the product manager, saw the same thing:

“We were trying to avoid saying ‘feed reader,’” Shellen says, “or reading at all. Because I think we built a social product.”

Google couldn’t see it. Reader had 30 million users, many of them daily, but that was a rounding error by Google standards. Pierce captures the absurdity well:

Almost nothing ever hits Google scale, which is why Google kills almost everything.

So Google poured its resources into Google Plus instead. That product was effectively dead within months of launch. Reader, the thing they killed to make room for it, had been a working social network the whole time. Jenna Bilotta, a designer on the team:

“They could have taken the resources that were allocated for Google Plus, invested them in Reader, and turned Reader into the amazing social network that it was starting to be.”

What gets me is that the vision Wetherell drew on that whiteboard—a single place to follow everything you care about, organized by your taste, shared with people you trust, and non-algorithmic—still doesn’t fully exist. RSS readers are the closest thing we have, and they’re good enough that I’ve built my entire reading and writing practice around one. But the curation layer Wetherell imagined is still unfinished.

Framed memorial reading IN LOVING MEMORY (2005–2013) with three colorful app icons, lit candles and white roses.

Who killed Google Reader?

Google Reader was supposed to be much more than a tool for nerds. But it never got the chance.

theverge.com

What’s Next in Vertical SaaS

After posting my essay about Wall Street and the B2B software stocks tumbling, I came across a few items that pull on the thread even further, toward something forward-looking.

First, my old colleague Shawn Smith had a more nuanced reaction to the story. Smith has been a Salesforce customer many times over, and also a product manager there.

On the customer side, without exception, the sentiment was that Salesforce is an expensive partial solution. There were always gaps in what it could do, which were filled by janky workarounds. In every case, the organization at least considered building an in-house solution which would cover all the bases *and* cost less than the Salesforce contract. I think the threat of AI to Salesforce is very real in this sense. Companies will use it to build their own solutions, but this outcome is probably at least 2-5 years out in many cases because switching costs are real, and contracts are an obstacle.

He is less convinced about the threat to something like Adobe, where individual preferences around tooling are more of a determining factor. The underlying threat in Smith’s analysis—that companies will build their own solutions—points to a deeper question about which software businesses have real moats, especially against newer, AI-native upstarts.

There’s a version of product thinking that lives in frameworks and planning docs. And then there’s the version that shows up when someone looks at a screen and immediately knows something is off. That second version—call it product sense, call it taste or judgment—comes from doing the work, not reading about it.

Peter Yang, writing in his Behind the Craft newsletter, shares 25 product beliefs from a decade at Roblox, Reddit, Amazon, and Meta. The whole list is worth reading, but a few items stood out.

On actually using your own product:

I estimate that less than 10% of PMs actually dogfood their product on a weekly basis. Use your product like a first-time user and write a friction log of how annoying the experience is. Nobody is too senior to test their own shit.

Ten percent. If that number is even close to accurate, it’s damning. You can’t develop good product judgment if you’re not paying attention to the thing you ship. And this applies to designers just as much as PMs.

Yang again, on where that judgment actually shows up:

Default states, edge cases, and good copy — these details are what separates a great product from slop. It doesn’t matter how senior you are, you have to give a damn about the tiniest details to ship something that you can be proud of.

Knowing that default states matter, knowing which edge cases to care about, knowing when copy is doing too much or too little—you can’t learn that from a framework. That’s pattern recognition from years of seeing what good looks like and what falls apart.

And on what qualifies someone to do this work:

Nobody cares about your FAANG pedigree or AI product certificate. Hire high agency people who have built great side projects or demonstrated proof of work. The only credential that matters is what you’ve shipped and your ideas to improve the product.

Reps and shipped work, not reading and credentials. The people who’ve done the reps are the ones who can see the details everyone else misses.

Person with glasses centered, hands clasped; red text reads "10 years of PM lessons in 12 minutes"; logos for Meta, Amazon, Reddit, Roblox.

25 Things I Believe In to Build Great Products

What I believe in is often the opposite of how big companies like to work

creatoreconomy.so

Every designer I’ve managed who made the leap from good to great had one thing in common: they understood why things work, not just how to make them look right. They had product sense. And most of them didn’t learn it from a PM book.

Christina Wodtke, writing for Eleganthack, frames product sense as “compressed experience”:

Product sense works the same way. When a seasoned PM looks at a feature and immediately knows it’s wrong, they’re not being mystical. Their brain is processing hundreds of micro-signals: user flow friction, business model misalignment, technical complexity, competitive dynamics. Years of experience get compressed into a split-second gut reaction.

Swap “PM” for “designer” and this is exactly how design leadership works. The best design critiques I’ve been in aren’t about color choices or spacing—they’re about someone sensing that a flow is wrong before they can articulate why. That’s compressed experience doing its job.

Wodtke’s piece is aimed at product managers, but I think designers need it more. PMs at least have the business context baked into their role. Designers can spend years getting really good at craft without ever building the pattern recognition that tells them what to design, not just how.

This is the part that should be required reading for every designer:

Most people use apps passively — they open Spotify, play music, done. Product people need to use apps actively; not as a user but like a UX designer. They notice the three-tap onboarding flow. They see how the paywall appears after exactly the right amount of value demonstration. They understand why the search bar is positioned there, not there.

Wodtke literally says “like a UX designer.” That’s the standard she’s holding PMs to. So what’s our excuse?

She also nails why reading about product thinking isn’t enough:

Most people try to build product sense by reading about it. That’s like trying to learn tennis by studying physics. You need reps.

The designers on my team who do this—who actively pull apart flows, question trade-offs, study what real products actually ship—are the ones I can’t live without. They don’t need a spec to have an opinion. They already have the reps and consistently impress their PM counterparts.

Wodtke built a nine-week curriculum for her Stanford students that walks through onboarding, checkout, search, paywalls, error states, personalization, UGC, accessibility, and growth mechanics. Each week compares how three different products solve the same problem differently. It’s the kind of thing I wish I could assign to every junior designer on my team.

If you’re a designer and you’re only studying visual references on Dribbble, you’re doing half the work. Go do these exercises.

Building Product Sense: Why Your Gut Needs an Education

When AI researchers started obsessing over “taste” last year, I had to laugh. They’d discovered what product people have known forever: the ability to quickly distinguish good from bad, elegant fro…

eleganthack.com

For as long as I’ve been in startups, execution speed has been the thing teams optimized for. The assumption was always that if you could just build faster, you’d win. That’s your moat. AI has mostly delivered on that promise—teams can now ship in weeks (see Claude Cowork) what used to take months. And the result is that a lot of teams are building the wrong things faster than ever.

Gale Robins, writing for UX Collective, opens with a scene I’ve lived through from both sides of the table:

I watched a talented software team present three major features they’d shipped on time, hitting all velocity metrics. When I asked, “What problem do these features solve?” silence followed. They could describe what they’d built and how they’d built it. But they couldn’t articulate why any of it mattered to customers.

Robins argues that judgment has replaced execution as the real constraint on product teams. And AI is making this worse, not better:

What once took six months of misguided effort now takes six weeks, or with AI, six days.

Six days to build the wrong thing. The build cycle compressed but the thinking didn’t. Teams are still skipping the same discovery steps, still assuming they know what users want. They’re just doing it at a pace that makes the waste harder to catch.

Robins again:

AI doesn’t make bad judgment cheaper or less damaging — it just accelerates how quickly those judgment errors compound.

She illustrates this with a cascade example: a SaaS company interviews only enterprise clients despite SMBs making up 70% of revenue. That one bad call—who to talk to—ripples through problem framing, solution design, feature prioritization, and evidence interpretation, costing $315K over ten months. With AI-accelerated development, the same cascade plays out in five months at the same cost. You just fail twice as fast.

The article goes on to map 19 specific judgment points across the product discovery process. The framework itself is worth a read, but the underlying argument is the part I keep coming back to: as execution gets cheaper, the quality of your decisions is the only thing that scales.

Circle split in half: left teal circuit-board lines with tech icons, right orange hands pointing to a central flowchart.

The anatomy of product discovery judgment

The 19 critical decision moments where human judgment determines whether teams build the right things.

uxdesign.cc

The data from Lenny’s Newsletter’s AI productivity survey showed PMs ranking prototyping as their #2 use case for AI, ahead of designers. Here’s what that looks like in practice.

Figma is now teaching PMs to build prototypes instead of writing PRDs. Using Figma Make, product managers can go from idea to interactive prototype without waiting on design. Emma Webster, writing in Figma’s blog:

By turning early directions into interactive, high-fidelity prototypes, you can more easily explore multiple concepts and take ideas further. Instead of spending time writing documentation that may not capture the nuances of a product, prototypes enable you to show, rather than tell.

The piece walks through how Figma’s own PMs use Make for exploration, validation, and decision-making. One PM prototyped a feature flow and ran five user interviews—all within two days. Another used it to workshop scrolling behavior options that were “almost impossible to describe” in words.

The closing is direct about what this means for roles:

In this new landscape, the PMs who thrive will be those who embrace real-time iteration, moving fluidly across traditional role boundaries.

“Traditional role boundaries” being design’s territory.

This isn’t a threat if designers are already operating upstream—defining what to build, not just how it looks. But if your value proposition is “I make the mockups,” PMs now have tools to do that themselves.

Abstract blue scene with potted plants and curving vines, birds perched, a trumpet and ladder amid geometric icons.

Prototypes Are the New PRDs

Inside Figma Make, product managers are pressure-testing assumptions early, building momentum, and rallying teams around something tangible.

figma.com

The optimistic case for designers in an AI-driven world is that design becomes strategy—defining what to build, not just how it looks. But are designers actually making that shift?

Noam Segal and Lenny Rachitsky, writing for Lenny’s Newsletter, share results from a survey of 1,750 tech workers. The headline is that AI is “overdelivering”—55% say it exceeded expectations, and most report saving at least half a day per week. But the findings by role tell a different story for designers:

Designers are seeing the fewest benefits. Only 45% report a positive ROI (compared with 78% of founders), and 31% report that AI has fallen below expectations, triple the rate among founders.

Meanwhile, founders are using AI to think—for decision support, product ideation, and strategy. They treat it as a thought partner, not a production tool. And product managers are building prototypes themselves:

Compare prototyping: PMs have it at #2 (19.8%), while designers have it at #4 (13.2%). AI is unlocking skills for PMs outside of their core work, whereas designers aren’t seeing the marginal improvement benefits from AI doing their core work.

The survey found that AI helps designers with work around design—research synthesis, copy, ideation—but visual design ranks #8 at just 3.3%. As Segal puts it:

AI is helping designers with everything around design, but pushing pixels remains stubbornly human.

This is the gap. The strategic future is available, but designers aren’t capturing it at the same rate as other roles. The question is why—and what to do about it.

Checked clipboard showing items like Speed, Quality and Research, next to headline "How AI is impacting productivity for tech workers".

AI tools are overdelivering: results from our large-scale AI productivity survey

What exactly AI is doing for people, which AI tools have product-market fit, where the biggest opportunities remain, and what it all means

lennysnewsletter.com

It’s January and by now millions of us have made resolutions and probably broken them already. The second Friday of January is known as “Quitter’s Day.”

OKRs—objectives and key results—are a method for businesses to set and align company goals. The objective is your goal and the key results are the measurable outcomes that tell you whether you’re reaching it. Venture capitalist John Doerr learned about OKRs while working at Intel, brought the framework to Google, and later became its leading evangelist.

Christina Wodtke talks about how to use OKRs for your personal life, and maybe as a way to come up with better New Year’s resolutions. She looked at her past three years of personal OKRs:

Looking at the pattern laid out in front of me, I finally saw what I’d been missing. My problem wasn’t work-life balance. My problem was that I didn’t like the kind of work I was doing.

The key results kept failing because the objective was wrong. It wasn’t about balance. It was about joy.

This is the second thing key results do for you: when they consistently fail, they’re telling you something. Not that you lack discipline—that you might be chasing the wrong goal entirely.

And I love Wodtke’s line here: “New Year’s resolutions fail because they’re wishes, not plans.” She continues:

They fail because “eat better” and “be healthier” and “find balance” are too vague to act on and too fuzzy to measure.

Key results fix this. Not because measurement is magic, but because the act of measuring forces clarity. It makes you confront what you actually want. And sometimes, when the data piles up, it reveals that what you wanted wasn’t the thing you needed at all.

Your Resolution Isn’t the Problem. Your Measurement Is.

It’s January, and millions of people have made the same resolution: “Eat better.” By February, most will have abandoned it. Not because they lack willpower or discipline. Because …

eleganthack.com

Building on our earlier link about measuring the impact of features, how can we keep track of the overall health of the product? That’s where a North Star Metric comes in.

Julia Sholtz writes an introduction to North Star Metrics on the blog of analytics provider Amplitude:

Your North Star Metric should be the key measure of success for your company’s product team. It defines the relationship between the customer problems your product team is trying to solve and the revenue you aim to generate by doing so.

How is it done? The first step is to figure out the “game” your business is playing, that is, how it engages with customers:

  1. The Attention Game: How much time are your customers willing to spend in your product?
  2. The Transaction Game: How many transactions does your user make on your platform?
  3. The Productivity Game: How efficiently and effectively can someone get their work done in your product?

They have a whole resource section on this topic that’s worth exploring.

Every Product Needs a North Star Metric: Here’s How to Find Yours

Get an introduction to product strategy with examples of North Star Metrics across industries.

amplitude.com

How do we know what we designed is working as intended? We measure. Vitaly Friedman shares something called the TARS framework to measure the impact of features.

We need UX metrics to understand and improve user experience. What I love most about TARS is that it’s a neat way to connect customers’ usage and customers’ experience with relevant product metrics.

Here’s TARS in a nutshell:

  • Target Audience (%): Measures the percentage of all product users who have the specific problem that a feature aims to solve.
  • Adoption (%): Tracks the percentage of the target audience that successfully and meaningfully engages with the feature.
  • Retention (%): Assesses how many users who adopted the feature continue to use it repeatedly over time.
  • Satisfaction Score (CES): Gauges the level of satisfaction, specifically how easy it was for retained users to solve their problem after using the feature.

Friedman has more details in the article, including how to use TARS to measure how well a feature is performing for your intended target audience.
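Since the four TARS rates reduce to simple ratios over sets of users, they are easy to sketch in code. Here is a minimal Python illustration—the function, set logic, and all sample numbers are hypothetical, not Friedman’s actual methodology:

```python
def tars(all_users, target, adopted, retained, ces_scores):
    """Compute the four TARS rates from sets of user IDs.

    ces_scores: Customer Effort Scores collected from retained users
    (a hypothetical stand-in for the satisfaction survey).
    """
    target_pct = 100 * len(target) / len(all_users)               # Target Audience %
    adoption_pct = 100 * len(adopted & target) / len(target)      # Adoption %
    retention_pct = 100 * len(retained & adopted) / len(adopted)  # Retention %
    satisfaction = sum(ces_scores) / len(ces_scores)              # average CES
    return target_pct, adoption_pct, retention_pct, satisfaction

# Hypothetical example: 100 users, 40 have the problem the feature
# solves, 25 of those adopted it, 10 kept using it.
t, a, r, s = tars(
    all_users=set(range(100)),
    target=set(range(40)),
    adopted=set(range(25)),
    retained=set(range(10)),
    ces_scores=[5, 6, 7],
)
print(t, a, r, s)  # 40.0 62.5 40.0 6.0
```

The intersections (`adopted & target`, `retained & adopted`) reflect that each rate is measured against the previous stage’s population, which is what makes TARS read like a funnel.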

How To Measure The Impact Of Features — Smashing Magazine

Meet TARS — a simple, repeatable, and meaningful UX metric designed specifically to track the performance of product features. Upcoming part of the Measure UX & Design Impact (use the code 🎟 IMPACT to save 20% off today).

smashingmagazine.com

I really appreciate the perspective of Lai-Jing Chu here as a Silicon Valley veteran. The struggle to prove the value of design is real.

I don’t know another function or role in the tech industry where it seems like we have to do our jobs at the same time as — and I will avoid saying “demonstrating value” here because it’s more than that — we carry out some sort of divine duty to make the product (let alone the world) a better place through our creativity.

Instead of more numbers like ROI calculations, Chu argues for counterintuitive approaches for advocacy, “not more left-brain exercises.”

Chu introduces us to W. Edwards Deming, an influential management consultant who wrote:

The most important figures needed for management of any organization are unknown and unknowable, but successful management must nevertheless take account of them.

One strategy she offers is to ask leadership a common-sense question: How would you grade the design?

Because when was the last time anyone did the most basic thing — to stop for a moment, hold the product in their hands, and take a good hard look at it? These questions throw the ball back in their court. It makes them wonder what they can do to help. Because chances are, most leaders want their product to have a good user or customer experience and understand that it makes a difference to their business success. You don’t just want buy-in — you want them to have true ownership.

I admire this approach, because chances are, leaders are already hearing about UX issues from customers. But putting this into practice at, say, any startup post-Series A will be a challenge. There’s a lot of coordination and alignment that needs to happen, because exec-level attention is much harder to come by.

What can’t be measured could break your business

Burned out from proving design’s value? Let’s change the conversation

uxdesign.cc

Here is a good reminder from B. Prendergast to “stop asking users what they want—and start watching what they do.”

Asking people what they want is one of the most natural instincts in product work. Surveys, interviews, and feature wish lists feel accessible, social, and collaborative. They open channels to understand and empathise with the user base. They help teams feel closer to the people they serve. For teams under pressure, a stack of opinions can feel like solid data.

But this breaks when we compare what users say to what they actually do (the say-do gap).

We all want to present ourselves a certain way. We want to seem more competent than confused (social desirability bias). Our memories can be fuzzy, especially about routine tasks (recall bias). Standards for what feels “easy” or “intuitive” can vary wildly between people (reference bias).

And of course, as soon as we start to ask users to imagine what they’d want, they’ll solve based on their personal experiences—which might be the right solution for them, but might not be for other users in the same situation.

Prendergast goes on to suggest “watch what people do, measure what matters, and use what they say to add context.” This approach involves watching user interactions, analyzing real behaviors through analytics, and treating feature requests as signals of underlying problems to uncover genuine needs. Prioritizing decisions based on observed patterns and desired outcomes leads to more effective solutions than relying on user opinions alone.

Stop asking users what they want — and start watching what they do.

People’s opinions about themselves and the things they use rarely match real behaviour.

renderghost.leaflet.pub

Product manager Adrian Raudaschl offered some reflections on 2025 from his point of view. It’s a mixture of life advice, product recommendations, and thoughts about the future of tech work.

The first quote I’ll pull out is this one, about creativity and AI:

Ultimately, if we fail to maintain active engagement with the creative process and merely delegate tasks to AI without reflection, there is a risk that delegation becomes abdication of responsibility and authorship.

“Active engagement” with the tasks that we delegate to AI. This reminds me of the humble machines argument by Dr. Maya Ackerman.

On vibe coding:

The most important thing, I think, that most people in knowledge work should be doing is learning to vibe code. Vibe code anything: a diary, a picture book for your mum, a fan page for your local farm. Anything. It’s not about learning to code, but rather appreciating how much more we could do with machines than before. This is what I mean about the generalist product manager: being able to prototype, test, and build without being held back by technical constraints.

I concur 100%. Even if you don’t think you’re a developer, even if you don’t quite understand code, vibe coding something will be illuminating. I think it’s different than asking ChatGPT for a bolognese sauce recipe or how to change a tire. Building something that will instantly run on your computer and seeing the adjustments made in real-time from your plain English prompts is very cool and gives you a glimpse into how LLMs problem-solve.

A product manager’s 48 reflections on 2025

and why I’ve been making Bob Dylan songs about Sonic the Hedgehog

uxdesign.cc

AI promises to let product teams ship faster: faster PRDs, faster designs, and faster code. But going too fast often means incurring design and tech debt, or even worse, shipping the wrong thing.

Anton Sten sagely warns:

The biggest pattern I have seen across startups is that skipping clarity never saves time. It costs time. The fastest teams are not the ones shipping the most. They are the ones who understand why they are shipping. That is the difference between moving for the sake of movement and moving with purpose. It is the difference between speed and true velocity.

How do you avoid this? Sten:

The reset is simple and almost always effective. Before building anything, pause long enough to ask, “What problem am I solving, and for whom?” It sounds basic, but this question forces alignment. It replaces assumptions with clarity and shifts attention back to the user instead of internal preferences. When teams do this consistently, the entire atmosphere changes. Decisions become easier. Roadmaps make more sense. People contribute more of themselves. You can feel momentum return.

The hidden cost of shipping too fast

Speed often gets treated as progress even when no one has agreed on what progress actually means. Here’s why clarity matters more than velocity.

antonsten.com