
Jonny Burch argued that design’s source of truth is moving from Figma to code. Édouard Wautier is already there. He wrote up a field report on how Dust’s design team prototypes directly in code.

After the initial analysis and quick sketchbook phase, when I need to give the idea shape and pressure-test it, I don’t open Figma. I open my development environment, pull the latest version of our repo, and create a branch. Then I ask an agent to scaffold a new prototype, and I describe what I’m trying to make.

The prototype isn’t a picture of the product—it’s built from the same design system components and tokens. So what is Wautier optimizing for at this stage?

At this point I mostly care about trying the idea and seeing whether the interaction holds. I’ll build small flows, prototype the transitions, and sanity-check the parts that static screens often hide (state changes, error cases, motion, empty states, keyboard/navigation/accessibility basics).

He’s honest about the trade-offs. You occasionally lose 30 minutes to a tooling issue. Prototypes can invite premature polish because they look real too early. And handoff clarity gets muddy—engineers aren’t always sure what’s prototype-only versus reusable.

Wautier’s closing:

More like clay than drafting: you shape, you test, you feel, you adjust — with an instantaneous feedback loop. The artifact is no longer a description of the thing. It starts to become the thing, or at least a runnable slice of it.

I believe this is the future.

3D avatar with glasses and hand on chin between a UI canvas of colorful rounded shapes and a JavaScript code editor.

Field study: prototypes over mockups

A practical guide to designing with code in 2026

uxdesign.cc

The source of truth for product design is shifting from Figma to code. I’ve been making that argument from the design side. Jonny Burch is making it from the tooling side, with a sharper prediction about what replaces Figma: nothing owned by one company.

Burch on where design interfaces are headed:

As product, design and engineering collapse together, design interfaces will start to look more like dependencies in the code itself.

A mature design system already lives in code—the Figma components are a mirror, not the original. Once AI agents can read and build against that code directly, the mirror becomes optional. Burch sees this leading to a fragmented ecosystem of code-first plugins and open tools rather than a single Figma replacement. I think he’s right about the direction, if aggressive on the timeline.

On why the pressure is building:

In modern teams it’s no longer acceptable for a designer to spend 2 weeks in their mind palace creating the perfect UI.

It’s starting to happen on my own team. Engineers with AI agents are producing working features in hours. The design phase—the Figma phase—is now the slowest part of the cycle. That’s a new and uncomfortable feeling for designers who grew up in a world where engineering was always the bottleneck.

Burch on Figma’s position in all of this:

They’re suddenly the slow incumbent with the wrong tech stack and a large enterprise customer-base adding drag.

I watched the same dynamic play out when Figma displaced Sketch. The dominant tool doesn’t always adapt fast enough. Sometimes the market just routes around it.

To be sure, I don’t wish for the death of Figma. Designers are visual thinkers and that’s what makes us different than PMs and engineers. I’m sure we’ll continue to use Figma for initial UI explorations. But instead of building out 40-screen flows, we’ll quickly move into code and generate a prototype that’ll look and feel like what we’re going to ship.

Life after Figma is coming (and it will be glorious). Subtext: As code becomes source of truth. Author: Jonny Burch.

Life after Figma is coming (and it will be glorious)

As code becomes source of truth, design tools become interfaces on code, not the other way round.

jonnyburch.com

The software development process has accumulated decades of ceremony. Boris Tane argues AI agents are collapsing the whole thing.

On engineers who started their careers after Cursor:

They don’t know what the software development lifecycle is. They don’t know what’s DevOps or what’s an SRE. Not because they’re bad engineers. Because they never needed it. They’ve never sat through sprint planning. They’ve never estimated story points. They’ve never waited three days for a PR review.

I read that and thought about design. How much of our process is ceremony too? The Figma-to-developer handoff. The pixel-perfect QA pass. The design review where six people debate border radius. If an AI agent can generate working UI from a design system in three prompts—which I’ve done—a lot of what we treat as process is friction we’ve institutionalized.

Tane’s conclusion:

The quality of what you build with agents is directly proportional to the quality of context you give them. Not the process. Not the ceremony. The context.

For engineering, context means specs, tests, architectural constraints. For design, it means your design system—the component docs and the rules for how things fit together. If that context is thin, the agent produces garbage. If it’s thorough and machine-readable, the output lands close to production-ready.
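To make "thorough and machine-readable" concrete, here's a minimal sketch of what design-system context for an agent might look like. The token names, component entries, and `context_for` helper are all hypothetical illustrations, not any real system's format:

```python
# Hypothetical, minimal machine-readable design-system context an agent
# could be pointed at. Token names and component rules are illustrative.
DESIGN_CONTEXT = {
    "tokens": {
        "color.action.primary": "#2563eb",
        "space.sm": "8px",
        "radius.default": "6px",
    },
    "components": {
        "Button": {
            "variants": ["primary", "secondary", "ghost"],
            "rules": ["use 'Save' for persistence actions, never 'Submit'"],
        },
    },
}

def context_for(component: str) -> str:
    """Render the slice of context relevant to one component as prompt text."""
    spec = DESIGN_CONTEXT["components"][component]
    lines = [f"Component: {component}",
             f"Variants: {', '.join(spec['variants'])}"]
    lines += [f"Rule: {r}" for r in spec["rules"]]
    return "\n".join(lines)

print(context_for("Button"))
```

The point isn't the format; it's that rules like "never 'Submit'" live somewhere an agent can read, instead of in a designer's head.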

Tane again:

Requirements aren’t a phase anymore. They’re a byproduct of iteration.

Same for mockups. When you can generate and evaluate working UI faster than you can annotate a Figma frame, the mockup stops being a deliverable and becomes a sketch you might skip entirely. The design system becomes the spec. Context engineering becomes the job.

The Software Development Lifecycle Is Dead — Feb 21, 2026; Boris Tane Blog

The Software Development Lifecycle Is Dead

AI agents didn’t make the SDLC faster. They killed it.

boristane.com

Most people know what a molly guard is, even if they don’t know the name—it’s the plastic cover over an important button that forces you to be deliberate before you press it. Marcin Wichary flips the concept:

it’s also worth thinking of reverse molly guards: buttons that will press themselves if you don’t do anything after a while.

Think OS update dialogs that restart your machine after a countdown, or mobile setup screens that auto-advance. Wichary on why these matter:

There is no worse feeling than waking up, walking up to the machine that was supposed to work through the night, and seeing it did absolutely nothing, stupidly waiting for hours for a response to a question that didn’t even matter.

This is the kind of observation you only make after years of staring at buttons, as Wichary has.
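The pattern itself is simple to state in code. Here's a toy sketch of a reverse molly guard, assuming a hypothetical `default_action` callback; a real UI toolkit would use its own timer and event APIs:

```python
import threading

class ReverseMollyGuard:
    """A prompt that presses its own default button after a timeout.

    Illustrative sketch: default_action fires unless cancel() is
    called first (i.e., a human answered the question in time).
    """

    def __init__(self, timeout_s: float, default_action):
        self._timer = threading.Timer(timeout_s, default_action)

    def arm(self):
        self._timer.start()

    def cancel(self):
        # The human showed up; don't auto-press.
        self._timer.cancel()

pressed = []
guard = ReverseMollyGuard(0.05, lambda: pressed.append("default"))
guard.arm()
guard._timer.join()  # for this demo, wait out the countdown
print(pressed)  # the machine answered for us: ['default']
```

The overnight-batch scenario Wichary describes is exactly the uncancelled path: nobody calls `cancel()`, so the work proceeds anyway.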

Close-up of a red rectangular guard inside a dark metal casing; caption below reads "Molly guard in reverse" and "Unsung."

Molly guard in reverse

A blog about software craft and quality

unsung.aresluna.org

I’ve been arguing that the designer’s job is shifting from execution to orchestration—directing AI agents rather than pushing pixels. I made that case from the design side. Addy Osmani just made it from engineering based on what he’s seeing.

Osmani draws a hard line between vibe coding and what he calls “agentic engineering.” On vibe coding:

Vibe coding means going with the vibes and not reviewing the code. That’s the defining characteristic. You prompt, you accept, you run it, you see if it works. If it doesn’t, you paste the error back and try again. You keep prompting. The human is a prompt DJ, not an engineer.

“Prompt DJ” is good. But Osmani’s description of the disciplined version is what caught my attention—it’s the same role I’ve been arguing designers need to grow into:

You’re orchestrating AI agents - coding assistants that can execute, test, and refine code - while you act as architect, reviewer, and decision-maker.

Osmani again:

AI didn’t cause the problem; skipping the design thinking did.

An engineer wrote that. The spec-first workflow Osmani describes is design process applied to code. Designers have been saying “define the problem before you jump to solutions” for decades. AI just made that advice load-bearing for engineers too.

The full piece goes deep on skill gaps, testing discipline, and evaluation frameworks—worth a complete read.

White serif text reading "Agentic Engineering" centered on a black background.

Agentic Engineering

Agentic Engineering is a disciplined approach to AI-assisted software development that emphasizes human oversight and engineering rigor, distinguishing it fr...

addyosmani.com

Nolan Lawson opens with a line that’s hard to argue with:

The worst fact about these tools is that they work. They can write code better than you or I can, and if you don’t believe me, wait six months.

He’s right. They do work.

Lawson again:

I didn’t ask for the role of a programmer to be reduced to that of a glorified TSA agent, reviewing code to make sure the AI didn’t smuggle something dangerous into production.

It’s a vivid image. But the people I know doing this work well look more like film directors than airport security—they’re deciding what gets built and when to throw the model’s work away. That’s a different job.

Lawson on economic gravity:

Ultimately if you have a mortgage and a car payment and a family you love, you’re going to make your decision. It’s maybe not the decision that your younger, more idealistic self would want you to make, but it does keep your car and your house and your family safe inside it.

I’ve seen this play out with every industry shift I’ve lived through—desktop publishing, print to web, responsive design. Each time, the people with financial obligations adapted first and mourned later. The idealism erodes fast when the market moves.

Where I part ways with Lawson is the framing. He presents two options: abstain on principle, or capitulate for the paycheck. There’s a third path—use the tools to expand what your craft can produce. The grief is real. So is the third path.

We mourn our craft

I didn’t ask for this and neither did you. I didn’t ask for a robot to consume every blog post and piece of code I ever wrote and parrot it back so that some hack could make money off o…

nolanlawson.com

I’ve been watching the design community fracture over the past year. Not over tools or methodologies—over what it means to be a designer at all. One camp is excited about AI-assisted workflows, shipping working UI from terminals. The other is doubling down on pixel-craft in Figma, treating the shift as a threat to everything they’ve built their careers on. Dave Gauer published a piece that puts words to this feeling better than anything I’ve read from the design side:

It’s weird to say I’ve lost it when I’m still every bit the computer programmer (in both the professional and hobby sense) I ever was. My love for computers and programming them hasn’t diminished at all. But a social identity isn’t about typing on a keyboard. It’s about belonging to a group, a community, a culture.

He hasn’t lost the skill. He’s lost the tribe. I recognize that grief. When I wrote about these same changes hitting design, a former colleague responded: “I didn’t sign up for this.” None of us did. And I think UX and product designers are less than twelve months behind programmers in feeling this exact thing.

He describes what drove the wedge:

When I identified with the programmer culture, it was about programming. Now programming is a means to an end (“let’s see how fast we can build a surveillance state!”) or simply an unwanted chore to be avoided.

Swap “programming” for “design” and you’re looking at the trajectory I wrote about in “Product Design Is Changing.” When the craft becomes something an AI agent can approximate, the culture around it shifts. The conversation moves from “how do we make this great?” to “how fast can we ship this?” The designers who cared about the craft are watching their community become unrecognizable. I get it.

And then there’s this, on what the programming community actually lost:

We should have been chopping the cruft away and replacing it with deterministic abstractions like we’ve always done. That’s what that Larry Wall quote about good programmers being lazy was about. It did not mean that we would be okay with pulling a damn slot machine lever a couple times to generate the boilerplate.

That “slot machine lever” is the programmer’s version of the vibe coding critique. The craft people—in programming and in design—wanted better tools. What they got was a culture that treats the craft itself as an obstacle to speed.

The identity split I described in my essay is already visible: designers who orchestrate AI and ship working software versus designers who push pixels in Figma. The deeper question Gauer is circling is whether the craft was ever the point for you, or just the bottleneck.

A programmer’s loss of a social identity

Dave Gauer reflects on losing his social identity as a “computer programmer” as the culture shifts toward surveillance capitalism and fear-driven agendas, even though his love of programming and learning remains intact.

ratfactor.com
Person wearing glasses typing at a computer keyboard, surrounded by flowing code and a halftone glitch effect

ASCII Me

Over the past couple months, I’ve noticed a wave of ASCII-related projects show up on my feeds. WTH is ASCII? It’s the basic set of letters, numbers, and symbols that old-school computers agreed to use for text.

ASCII (American Standard Code for Information Interchange) has 128 characters:

  • 95 printable characters: digits 0–9, uppercase A–Z, lowercase a–z, space, and common punctuation and symbols.
  • 33 control characters: non-printing codes like NUL, LF (line feed), CR (carriage return), and DEL used historically for devices like teletypes and printers.
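The split is easy to verify from a terminal with Python's built-in `chr` and `range`:

```python
# The 128 ASCII code points, partitioned into printable and control.
printable = [chr(c) for c in range(128) if 32 <= c < 127]  # space..'~'
control = [c for c in range(128) if c < 32 or c == 127]    # NUL..US, plus DEL

print(len(printable), len(control))  # 95 33
print(printable[:16])  # space and punctuation first; digits start at code 48
```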

Early internet users who remember plain text-only email and Usenet newsgroups would have encountered ASCII art like these:

 /\_/\
( o.o )
 > ^ <

It’s a cat. Artist unknown.

   __/\\\\\\\\\\\\\____/\\\\\\\\\\\\\_______/\\\\\\\\\\\___
    _\/\\\/////////\\\_\/\\\/////////\\\___/\\\/////////\\\_
     _\/\\\_______\/\\\_\/\\\_______\/\\\__\//\\\______\///__
      _\/\\\\\\\\\\\\\\__\/\\\\\\\\\\\\\\____\////\\\_________
       _\/\\\/////////\\\_\/\\\/////////\\\______\////\\\______
        _\/\\\_______\/\\\_\/\\\_______\/\\\_________\////\\\___
         _\/\\\_______\/\\\_\/\\\_______\/\\\__/\\\______\//\\\__
          _\/\\\\\\\\\\\\\/__\/\\\\\\\\\\\\\/__\///\\\\\\\\\\\/___
           _\/////////////____\/////////////______\///////////_____

Dimensional lettering.

Anyway, you’ve seen it before and get the gist. My guess is that with Claude Code’s halo effect, the terminal is making a comeback and generating interest in this long-lost art form again. And it’s text-based, which is now fuel for AI.

Reactions to “Product Design Is Changing”

I posted my essay “Product Design Is Changing” earlier this week and shared it on both LinkedIn and Reddit. The reactions split in a way that was entirely predictable: LinkedIn was largely in agreement, Reddit was largely hostile (including some downvotes!). Debate is healthy and I’m glad people are talking about it. What I don’t want is designers willfully ignoring what is happening. To me, this is similar to the industry-wide shifts when graphic design went from paste-up to desktop publishing, and then again from print to web. Folks have to adapt. To quote a previous essay of mine from August 2025:

The AI revolution mirrors the previous shifts in our industry, but with a crucial difference: it’s bigger and faster. Unlike the decade-long transitions from paste-up to desktop publishing and from print to web, AI’s impact is compressing adaptation timelines into months rather than years.

Anyway, I want to highlight some comments that widen the aperture a bit.

“I Didn’t Sign Up for This”

Julian Quayle, a brilliant creative director I worked with a long time ago in my agency years, left a comment on LinkedIn: “So much for years of craft and imagination… I didn’t sign up for this.”

He’s right. None of us signed up for it. And I don’t want to be glib about that. There’s a real grief in watching skills you spent years developing get compressed into a prompt. I’ve been doing this for 30 years. I know what it feels like to be proud of a pixel-perfect mockup, to care about the craft of visual design at a level that most people can’t even perceive. That craft isn’t worthless now. But the market is repricing it in real time, and pretending otherwise doesn’t help anyone.

And to be sure, my essay was about software design. I’m sure there’s an equivalent happening in the branding/graphic side of the house, but I can’t speak to it.

(BTW, Julian is one of the funnest and nicest Brits I’ve ever worked with. When we talk about taste, his is insanely good. And he got to work with David Bowie. Yes.)

Tommaso Nervegna, a Design Director at Accenture Song, gives one of the clearest practitioner accounts I’ve seen of what using Claude Code as a designer looks like day to day.

The guide is detailed—installation steps, terminal commands, deployment. This is essential reading for any designer interested in Claude Code. But for me, the interesting part isn’t the how-to. It’s his argument that raw AI coding tools aren’t enough without structure on top:

Claude Code is powerful, but without proper context engineering, it degrades as the conversation gets longer.

Anyone who’s used these tools seriously has experienced this. You start a session and the output is sharp. Forty minutes in, it’s forgotten your constraints and is hallucinating component names. Nervegna uses a meta-prompting framework called Get Shit Done that breaks work into phases with fresh contexts—research, planning, execution, verification—each getting its own 200K token window. No accumulated garbage.
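I haven’t used Get Shit Done myself, but the fresh-context idea is simple to sketch: each phase gets its own session, and only the distilled artifact crosses the boundary, not the whole transcript. The phase names come from Nervegna’s description; the `run_agent` function below is a stand-in, not a real API:

```python
# Illustrative sketch of phase-per-context orchestration. run_agent is a
# hypothetical stand-in for an actual agent call; each invocation is assumed
# to start with a fresh context window.
PHASES = ["research", "planning", "execution", "verification"]

def run_agent(prompt: str) -> str:
    """Stand-in for a real agent session; returns a summary artifact."""
    return f"[artifact from: {prompt[:50]}]"

def run_phased(task: str) -> dict:
    results = {}
    carry = task  # only the previous phase's artifact carries forward
    for phase in PHASES:
        # Fresh session per phase: the prompt contains the distilled
        # artifact, not forty minutes of accumulated conversation.
        carry = run_agent(f"{phase} phase for: {carry}")
        results[phase] = carry
    return results

out = run_phased("empty-state design for the billing table")
print(list(out))  # ['research', 'planning', 'execution', 'verification']
```

The structure is the point: nothing accumulated in the research phase can pollute the verification phase, because it was never in that context to begin with.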

The framework ends up looking a lot like good design process applied to AI:

Instead of immediately generating code, it asks:

“What happens when there’s no data to display?” “Should this work on mobile?” “What’s the error state look like?” “How do users undo this action?”

Those are the questions a senior designer asks in a review. Nervegna calls it “spec-driven development,” but it’s really the discipline of defining the problem before jumping to solutions—something our profession has always preached and often ignored when deadlines hit.

Nervegna again:

This is spec-driven development, but the spec is generated through conversation, not written in Jira by a project manager.

The specification work that used to live in PRDs and handoff docs is happening conversationally now, between a designer and an AI agent. The designer’s value is in the questions asked before any code gets written.

Terminal-style window reading "CLAUDE CODE FOR DESIGNERS — A PRACTICAL GUIDE" over coral background with black design-tool icons.

Claude Code for Designers: A Practical Guide

A Step-by-Step Guide to Designing and Shipping with Claude Code

nervegna.substack.com

Earlier this week I published an essay on how product design is changing, and one of the sources I referenced was Jan Tegze’s piece on job shrinkage. I quoted him on the orchestrator model—using agents to create new capabilities rather than speeding up old tasks. But there’s another section of his article that deserves its own post. It’s the part nobody wants to talk about.

Jan Tegze, writing for his Thinking Out Loud newsletter:

Many people currently doing “strategic” knowledge work aren’t actually that strategic.

When agents started handling the execution layer, everyone assumed humans would naturally move up to higher-order thinking. Strategy, judgment, and vision.

But a different reality is emerging—many senior people with years of experience can’t actually operate at that level. Their expertise was mostly pattern matching and process execution dressed up in strategic language.

That’s a hard paragraph to read if you’re a senior IC or a manager who’s built a career on being thorough and diligent. Tegze isn’t being cruel—he’s describing a structural problem. We built evaluation systems that rewarded execution and called it strategy.

He shares a quote from a CEO of a mid-sized Canadian company:

“We’re discovering that our senior people and our junior people are equally lost when we ask them what we should do, not just how to do it. The seniors are just more articulate about their uncertainty.”

Tegze illustrates the pattern with a story about a friend he calls Jane—a senior research analyst billing at $250/hour at a consulting firm where they deployed an AI research agent:

The agent could do Jane’s initial research in 90 minutes—it would scan thousands of sources, identify patterns, generate a first-draft report.

Month one: Jane was relieved and thought she could focus on high-value synthesis work. She’d take the agent’s output and refine it, add strategic insights, make it client-ready.

Month three: A partner asked her, “Why does this take you a week now? The AI gives us 80% of what we need in an hour. What’s the other 20% worth?”

Jane couldn’t answer clearly. Because sometimes the agent’s output only needed light editing. Sometimes her “strategic insights” were things the agent had already identified, just worded differently.

The firm restructured Jane into a “Quality Reviewer” role at $150/hour. Six months later she left. They replaced her with two junior analysts at $65K each who, with the AI, were 85% as effective.

And then the kicker:

You often hear from AI vendors that, thanks to their AI tools, people can focus on higher-value work. But when pressed on what that meant specifically, they’d go vague. Strategic thinking, client relationships, creative problem solving.

Nobody could define what higher-value work actually looked like in practice. Nobody could describe the new role. So they defaulted to the only thing they could measure: cost reduction.

Tegze again:

We promoted people for the wrong reasons. We confused “does the work well” with “thinks strategically about the work.”

Tegze’s framing of the orchestrator model is the most useful I’ve seen—stop defending your current role and start building one that didn’t exist six months ago. But this section on the strategy gap is worth sitting with on its own. The automation isn’t just changing what we do. It’s revealing what we were actually good at all along.

Person in a suit standing on an isolated ice floe holding a resume aloft, surrounded by scattered icebergs.

Your Job Isn’t Disappearing. It’s Shrinking Around You in Real Time

AI isn’t taking your job. It’s making your expertise worthless while you watch. The three things everyone tries that fail, and the one strategy that actually works.

newsletter.jantegze.com

In a Jason Lemkin piece on SaaStr, Intercom CPO Paul Adams describes what happened to his design team over the last 18 months:

Every single designer at Intercom now ships code to production. Zero did 18 months ago. The mandate was clear: this is now part of your job. If you don’t like it, find somewhere that doesn’t require it, and they’ll hire designers who love the idea.

Not a pilot program nor an optional workshop. It was a mandate. Adams basically said, “This is your job now, or it isn’t your job here anymore.” (I do note the language here is indifferent to the real human cost.)

But the designers-shipping-code mandate is one piece of a larger consolidation. Adams applies a simple test across the entire org: what would a brand new startup incorporated today do here?

Would they have separate product marketers and content marketers? Or is that the same job now? Would they have both product managers and product designers as distinct roles? The answer usually points to consolidation, not specialization.

There it is again, the compression of roles.

But Adams isn’t just asking the question. He took over two-thirds of Intercom’s marketing six months ago and rebuilt it from scratch—teams, roadmaps, calendars, gone.

All of the above is a glimpse of what Matt Shumer was talking about in “Something Big Is Happening.”

The way the product gets built has changed too. Adams describes Intercom’s old process versus the new one:

The old way: Pick a job to be done → Listen to customers → Design a solution → Build and ship. Execution was certain. Technology was stable. Design was the hard part. The new way: Ask what AI makes possible → Prototype to see if you can build it reliably → Build the UX later → Ship → Learn at scale.

“Build the UX later” is a scary thought, isn’t it? In many ways, we must unlearn what we have learned, to quote Yoda. Honestly though, that’s easier said than done and is highly dependent on how forgiving your userbase is.


Why Most B2B Companies Are Failing at AI (And How to Avoid It) with Intercom’s CPO

How Intercom Bet Everything on AI—And Built Fin to 1M+ Resolutions Per Week Paul Adams is Chief Product Officer at Intercom, leading Product Management, Product Design, Data Science, and Research. …

saastr.com

Jeff Bezos introduced the two-pizza rule in 2002: if a team needs more than two pizzas to eat, it’s too big. It became gospel for how to organize product teams. Dan Shipper thinks the number just got a lot smaller:

We have four software products, each run by a single person. Ninety-nine percent of our code is written by AI agents. Overall, we have six business units with just 20 full-time employees.

Two pizzas down to two slices per person. One person per product. And these aren’t demos or side projects. Shipper’s numbers on one of them:

Monologue, our smart dictation app run by Naveen Naidu, is used about 30,000 times a day to transcribe 1.5 million words. The codebase totals 143,000 lines of code and Naveen’s written almost every single line of it himself with the help of Codex and Opus.

A year ago that would have been a team of four or five engineers plus a PM plus a designer. Shipper himself built a separate product—a Markdown editor—and describes the compression:

An editor like this would have previously taken 3-4 engineers six months to build. Instead, I made it in my spare time.

“In my spare time” is doing a lot of work in that sentence. This is what the small teams, big leverage argument looks like when you stop theorizing and start counting.

Two classical statue profiles exchange pepperoni pizza slices over a blue sky, with a small temple in the background.

The Two-slice Team

Amazon’s “two-pizza rule” worked for the past twenty-four years. We need a new heuristic for the next twenty-four.

every.to

Steve Yegge has been talking to nearly 40 people at Anthropic over the past four months. What he describes looks nothing like the feature factory world that NN/g catalogs. No 47-page alignment documents. No 14-meeting coordination cycles. Instead, campfires:

Everyone sits around a campfire together, and builds. The center of the campfire is a living prototype. There is no waterfall. There is no spec. There is a prototype that simply evolves, via group sculpting, into the final product: something that finally feels right. You know it when you finally find it.

As evidence of this, Anthropic, from what I’m told, does not produce an operating plan ahead more than 90 days, and that is their outermost planning cycle. They are vibing, on the shortest cycles and fastest feedback loops imaginable for their size.

No roadmap beyond 90 days. They group-sculpt a living prototype. Someone told Yegge that Claude Cowork shipped 10 days after the idea first came up. Ten days. A small team with real ownership, shipping at the speed the tools now allow.

Yegge argues this works partly because of a cultural requirement most companies would struggle with. He describes a three-person startup called SageOx that operates the same way:

A lot of engineers like to work in relative privacy, or even secrecy. They don’t want people to see all the false starts, struggles, etc. They just want people to see the finished product. It’s why we have git squash and send dignified PRs instead of streaming every compile error to our entire team.

But my SageOx friends Ajit and Ryan actually want the entire work stream to be public, because it’s incredibly valuable for forensics: figuring out exactly how and why a teammate, human or agent, got to a particular spot. It’s valuable because merging is a continuous activity and the forensics give the models the tools and context they need to merge intelligently.

So at SageOx they all see each other’s work all the time, and act on that info. It’s like the whole team is pair programming at once. They course-correct each other in real time.

Yegge calls this “the death of the ego.” Everyone sees your mistakes, your wrong turns, how fast you work. Nothing to hide. Most designers and engineers I know would be deeply uncomfortable with that. We like to polish before we share. We present finished comps, not the 13 variations we tried and abandoned.

But if the campfire model is where things are heading—and the speed advantage over the feature factory is hard to argue with—then the culture has to change before the process can. That’s the part nobody wants to talk about.

Five bees in goggles on a wooden stage assembling a glowing steampunk orb, surrounded by tools, blueprints, gears and theater seats

The Anthropic Hive Mind

As you’ve probably noticed, something is happening over at Anthropic. They are a spaceship that is beginning to take off.

steve-yegge.medium.com

I’ve seen this at every company past a certain size: you spot a disjointed UX problem across the product, you know what needs to happen, and then you spend three months in alignment meetings trying to get six teams to agree on a button style.

A recent piece from Laura Klein at Nielsen Norman Group examines why most product teams aren’t actually empowered, despite what the org chart claims. Klein on fragmentation:

When you have dozens of empowered teams, each optimizing its own metrics and building its own features, you get a product that feels like it was designed by dozens of different companies. One team’s area uses a modal dialog for confirmations. Another team uses an inline message. A third team navigates to a new page. The buttons say Submit in one place, Save in another, and Continue in a third. The tone of the microcopy varies wildly from formal to casual.

Users don’t see teams. They don’t see component boundaries. They just see a confusing, inconsistent product that seems to have been designed by people who never talked to each other, because, in a sense, it was.

Each team was empowered to make the best decisions for their area, and it did! But nobody was empowered to maintain coherence across the whole experience.

That last line is the whole problem. “Coherence,” as Klein calls it, is a design leadership responsibility, and it gets harder as AI lets individual teams ship faster without coordinating with each other. If every squad can generate production UI in hours instead of weeks, the fragmentation described here accelerates. Design systems become the only thing standing between your product and a Frankenstein experience.
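That enforcement can even be mechanical. Here’s a toy sketch of a coherence lint over component copy, assuming a hypothetical inventory of screen/label pairs; a real system would run this against actual component usage in the codebase:

```python
# Hypothetical inventory: (screen, primary-action label) pairs pulled
# from three "empowered" teams' areas, echoing Klein's examples.
usage = [
    ("settings", "Save"),
    ("billing", "Submit"),
    ("onboarding", "Continue"),
]

# The design system's one blessed label for persistence actions.
APPROVED = {"Save"}

def coherence_violations(pairs):
    """Flag screens whose primary-action label drifts from the system."""
    return [(screen, label) for screen, label in pairs
            if label not in APPROVED]

for screen, label in coherence_violations(usage):
    print(f"{screen}: '{label}' should be 'Save'")
```

A check like this runs in CI with no alignment meeting required, which is exactly the appeal: the coherence rule is enforced by the system, not renegotiated team by team.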

The article is also sharp on what happens to PMs inside this dysfunction:

Picture a PM who spends 70% of her time in meetings coordinating with other teams, getting buy-in for a small change, negotiating priorities, trying to align roadmaps, escalating conflicts, chasing down dependencies, and attending working groups created to solve coordination problems. She spends a tiny fraction of her time with users. The rest is spent writing documents that explain her team’s work to other teams, updating roadmaps, reporting status, and attending planning meetings. She was hired to be a strategic product thinker, but she’s become a project manager, focused entirely on logistics and coordination.

I’ve watched this happen to PMs I’ve worked with. The coordination tax eats the strategic work. Marty Cagan calls this “product management theater”—a surplus of PMs who function as overpaid project managers. If AI compresses the engineering work but the coordination overhead stays the same, that ratio gets even more lopsided.

The fix is smaller teams with real ownership and strong design systems that enforce coherence without requiring 14 alignment meetings. But that requires organizational courage most companies don’t have.

“Why Most Product Teams Aren’t Really Empowered” headline with three hands untangling a ball of dark-blue yarn and NN/G logo.

Why Most Product Teams Aren’t Really Empowered

Although product teams say they’re empowered, many still function as feature factories and must follow orders.

nngroup.com iconnngroup.com

My essay yesterday was about the mechanics of how product design is changing—designing in code, orchestrating AI agents, collapsing the Figma-to-production handoff. That piece got into specifics. This piece by Pavel Bukengolts, writing for UX Magazine, is about the mindset:

AI is changing the how — the tools, the workflows, the speed. But the why of UX? That’s timeless.

Bukengolts is right. UX as a discipline isn’t going anywhere. But I worry that articles like this—well-intentioned and directionally correct—give designers permission to keep doing exactly what they’re doing now. “Sharpen your critical thinking” and “be the conscience in the room” is good advice. It’s also the kind of advice that lets you nod along without changing anything about your Tuesday.

The article lists the skills designers need: critical thinking, systems thinking, AI literacy, ethical awareness, strategic communication. All valid. But none of that addresses what the actual production work looks like six months from now. Bukengolts again:

In a world where AI does the work, your value is knowing why it matters and who it affects.

I agree with this in principle. The problem is the gap between “UX matters” and “your current UX role is secure.” Those are very different statements. UX will absolutely matter in an AI-powered world—someone has to shape the experience, evaluate whether it actually works for people, catch the things the model gets wrong. But the number of people doing that work, and what the job requires of them, is changing fast. I wrote in my essay that junior designers who can’t critically assess AI-generated work will find their roles shrinking fast. The skill floor is rising. Saying “stay curious and principled” isn’t wrong, but it’s not enough.

The piece closes with reassurance:

Yes, this moment is big. Yes, you’ll need to adapt. But no, you are not obsolete.

I’d feel better about that line if the article spent more time on how to adapt—not in terms of thinking skills, but in terms of the actual work. Learn to design in code. Get comfortable directing AI agents. Understand your design system well enough to make it machine-readable. Those are the specific steps that will separate designers who thrive from designers who got the mindset right but missed the shift happening underneath them.

Black 3D letters spelling CHANGE on warm backdrop; caption reads: AI can design interfaces; humans provide empathy and ethics.

Design Smarter: Future-Proof Your UX Career in the Age of AI

Is UX still a thing? AI is rising fast, but UX isn’t disappearing. It’s evolving. The big shift isn’t just tools, it’s how we think: critical thinking to spot gaps, systems thinking to map complexity, and AI literacy to understand capabilities without pretending we build it all. Empathy and ethics become the edge: designers must ask who’s affected, what’s left out, and what unintended consequences might arise. In practice, we translate data and research into a story that matters, bridging users, business, and tech, with strategic communication that keeps everyone aligned. In an AI-powered world, human judgment, why it matters, and to whom, stays central. Stay curious, sharp, and principled.

uxmag.com iconuxmag.com

I sent this article to both of my kids this week. My daughter is in college studying publishing. My son is a high school senior planning to go into real estate. Neither of them works in tech. That’s exactly why they need to read it.

Matt Shumer has spent six years building an AI startup and investing in the space. He wrote this piece for the people in his life who keep asking “so what’s the deal with AI?”—and getting the sanitized answer:

I keep giving them the polite version. The cocktail-party version. Because the honest version sounds like I’ve lost my mind. And for a while, I told myself that was a good enough reason to keep what’s truly happening to myself. But the gap between what I’ve been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy.

I know this feeling. I wrote yesterday about how AI is collapsing the gap between design and code and shifting the designer’s value toward taste and orchestration. That essay was for the software design industry. Shumer is writing for everyone else.

His core argument: tech workers have already lived through the disruption that’s coming for every other knowledge-work profession. He explains why tech got hit first:

The AI labs made a deliberate choice. They focused on making AI great at writing code first… because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself. A smarter version, which writes better code, which builds an even smarter version. Making AI great at coding was the strategy that unlocks everything else. That’s why they did it first.

Christina Wodtke agrees something big is happening but thinks Shumer’s timeline for everyone else is off. Programming, she argues, is a near-ideal use case for AI—there’s an ocean of public training data, and code has a built-in quality check: it runs or it doesn’t. Hallucinations get caught by the compiler. Other fields aren’t so clean-cut.

Her diagnosis: Shumer makes the classic tech-insider mistake of assuming his experience generalizes to everyone else’s. It doesn’t. Ethan Mollick’s “jagged frontier” of AI capability is as jagged as ever. AI is spectacular at some tasks and embarrassingly bad at others, and the pattern doesn’t map to human intuitions about difficulty.

She makes another point that matters for anyone in a creative field:

A nuance Shumer completely misses: industries where there isn’t one right answer but there are better and worse answers may actually fare better with AI. When you’re writing strategy, designing an experience, or crafting a narrative, a “hallucination” isn’t necessarily a bug. It might be an interesting idea.

That maps to what I know is true in design. A wrong answer in code crashes the app. A wrong answer in a design brainstorm might be the seed of something good.

This is why I sent Shumer’s piece to my kids but didn’t tell them to panic. Publishing runs on editorial judgment, taste, and relationships with authors. Real estate depends on physical presence, local knowledge, and trust built over handshakes. Neither field has the clean training data and binary pass/fail that made coding so vulnerable so fast. But that doesn’t mean nothing changes. Wodtke again:

Your job probably won’t disappear. But parts of it will shift, and the timeline depends on your field’s specific relationship to data, verification, and ambiguity. Prepare thoughtfully instead of panicking.

Shumer’s practical advice is modest: use AI one hour a day, experiment with it. Not reading about it, but really using it. I’d add Wodtke’s framing to that: spend the hour figuring out which parts of your work sit on the easy side of the jagged frontier, and which parts don’t. That’s more useful than assuming the whole thing collapses overnight.

I said yesterday that the gap between “designer who orchestrates AI” and “designer who pushes pixels” will be enormous within 12 months. Shumer is making that same argument for every knowledge-work profession. The whole piece is worth your time and maybe worth sharing with someone who’s been resistant to AI. Just keep in mind Wodtke’s nuance.

“Matt Shumer” card with gold title, subheading “notes on building ai products, models, and demos”, shumer.dev logo and @mattshumer_

Something Big Is Happening

A personal note for non-tech friends and family on what AI is starting to change.

shumer.dev iconshumer.dev
Silhouette of a meditating person beneath a floating iridescent crystal-like structure emitting vertical rainbow light

Product Design Is Changing

I made my first website in Macromedia Dreamweaver in 1999. Its claim to fame was an environment with code on one side and a rudimentary WYSIWYG editor on the other. Mine was a simple portfolio with a couple of animated GIFs thrown in for interest. Over the years I used other tools to create for the web, but I usually left the coding to the experts: I’d design in Photoshop, Illustrator, Sketch, or Figma and then hand off to a developer. That changed recently, when I rebuilt this site a couple of times and worked on a Severance fan project.

A couple weeks ago, as an experiment, I pointed Claude Code at our BuildOps design system repo and asked it to generate a screen using our components. It worked after about three prompts. Not one-shotted, but close. I sat there looking at a functioning UI—built from our actual components—and realized I’d just skipped the entire part of my job that I’ve spent many years doing: drawing pictures of apps and websites in a design tool, then handing them to someone else to build.

That moment crystallized something I’d been circling all last year. I wrote last spring about how execution skills were being commoditized and the designer’s value was shifting toward taste and strategic direction. A month later I mapped out a timeline for how design systems would become the infrastructure that AI tools generate against—prompt, generate, deploy. That was ten months ago, and most of it is already happening. Product design is changing. Not in the way most people are talking about it, but in a way that’s more fundamental and more interesting.

In my previous post about Google Reader, I wrote about Chris Wetherell’s original vision—a polymorphic information tool, not a feed reader. But even Google Reader ended up as a three-pane inbox. That layout didn’t originate with Reader, though. It’s older than that.

Terry Godier traces that layout to a single decision. In 2002, Brent Simmons released NetNewsWire, the first RSS reader that looked like an email client. Godier asked him why, and Simmons’ answer was pragmatic:

“I was actually thinking about Usenet, not email, but whatever. The question I asked myself then was how would I design a Usenet app for (then-new) Mac OS X in the year 2002?”

“The answer was pretty clear to me: instead of multiple windows, a single window with a sidebar, list of posts, and detail view.”

A reasonable choice in 2002. But then Godier shares Simmons reflecting on why everyone kept copying him twenty-two years later:

“But every new RSS reader ought to consider not being yet another three-paned-aggregator. There are surely millions of users who might prefer a river of news or other paradigms.”

“Why not have some fun and do something new, or at least different?”

The person who designed the original paradigm was asking, twenty-two years later, why everyone was still copying him.

Godier’s argument is that when Simmons borrowed the inbox layout, he inadvertently imported the inbox’s psychology. Unread counts. Bold text for new items. A backlog that accumulates. The visual language of social debt, applied to content nobody sent you:

When you dress a new thing in old clothes, people don’t just learn the shape. They inherit the feelings, the assumptions, the emotional weight. You can’t borrow the layout of an inbox without also borrowing some of its psychology.

He calls this “phantom obligation”—the guilt you feel for something no one asked you to do. And I’ll admit, I feel it. I open Inoreader every morning and when that number isn’t zero, some part of my brain registers it as a task. It shouldn’t. Nobody is waiting. But the interface says otherwise.

Godier’s best line is the one that frames the whole piece:

We’ve been laundering obligation. Each interface inherits legitimacy from the last, but the social contract underneath gets hollowed out.

The red dot on a game has the same visual weight as a text from your kid. We kept the weight and dropped the reason.

PHANTOM OBLIGATION — noun: The guilt you feel for something no one asked you to do.

Phantom Obligation

Why RSS readers look like email clients, and what that’s doing to us.

terrygodier.com iconterrygodier.com

Every article I share on this blog starts the same way: in my RSS reader. I use Inoreader to follow about a hundred feeds—design blogs, tech publications, and independent newsletters. Every morning I scroll through what’s new, mark what’s interesting, and the best stuff eventually becomes a link post here. It’s not a fancy workflow. It’s an RSS reader and a notes app. But it works because the format works.

This is a 2023 article, but I’m fascinated by it because Google Reader was so influential in my life. David Pierce, writing for The Verge, chronicles how Google Reader came to be and why Google killed it.

Chris Wetherell, who built the first prototype, wasn’t thinking about an RSS reader. He was thinking about a universal information layer:

“I drew a big circle on the whiteboard,” he recalls. “And I said, ‘This is information.’ And then I drew spokes off of it, saying, ‘These are videos. This is news. This is this and that.’” He told the iGoogle team that the future of information might be to turn everything into a feed and build a way to aggregate those feeds.

Jason Shellen, the product manager, saw the same thing:

“We were trying to avoid saying ‘feed reader,’” Shellen says, “or reading at all. Because I think we built a social product.”

Google couldn’t see it. Reader had 30 million users, many of them daily, but that was a rounding error by Google standards. Pierce captures the absurdity well:

Almost nothing ever hits Google scale, which is why Google kills almost everything.

So Google poured its resources into Google Plus instead. That product lost momentum within months of launch and was eventually shut down. Reader, the thing they killed to make room for it, had been a working social network the whole time. Jenna Bilotta, a designer on the team:

“They could have taken the resources that were allocated for Google Plus, invested them in Reader, and turned Reader into the amazing social network that it was starting to be.”

What gets me is that the vision Wetherell drew on that whiteboard—a single place to follow everything you care about, organized by your taste, shared with people you trust, and non-algorithmic—still doesn’t fully exist. RSS readers are the closest thing we have, and they’re good enough that I’ve built my entire reading and writing practice around one. But the curation layer Wetherell imagined is still unfinished.

Framed memorial reading IN LOVING MEMORY (2005–2013) with three colorful app icons, lit candles and white roses.

Who killed Google Reader?

Google Reader was supposed to be much more than a tool for nerds. But it never got the chance.

theverge.com icontheverge.com

What’s Next in Vertical SaaS

After posting my essay about Wall Street and the B2B software stocks tumbling, I came across a few items that pull on the thread even more and point to something forward-looking.

First, my old colleague Shawn Smith had a more nuanced reaction to the story. Smith has been both a longtime Salesforce customer and a product manager there.

On the customer side, without exception, the sentiment was that Salesforce is an expensive partial solution. There were always gaps in what it could do, which were filled by janky workarounds. In every case, the organization at least considered building an in-house solution which would cover all the bases *and* cost less than the Salesforce contract. I think the threat of AI to Salesforce is very real in this sense. Companies will use it to build their own solutions, but this outcome is probably at least 2-5 years out in many cases because switching costs are real, and contracts are an obstacle.

He is less convinced about something like Adobe where individual preferences around tooling are more of the determining factor. The underlying threat in Smith’s analysis—that companies will build their own solutions—points to a deeper question about which software businesses have real moats. Especially with newer, AI-native upstarts.

Anthropic published a study that puts numbers to something I’ve been writing about in the design context for a while now. They ran a randomized controlled trial with 52 junior software engineers learning a new Python library. Half used AI assistance. Half coded by hand.

Judy Hanwen Shen and Alex Tamkin, writing for Anthropic Research:

Participants in the AI group scored 17% lower than those who coded by hand, or the equivalent of nearly two letter grades. Using AI sped up the task slightly, but this didn’t reach the threshold of statistical significance.

So the AI group didn’t finish meaningfully faster, but they understood meaningfully less. And the biggest gap was in debugging—the ability to recognize when code is wrong and figure out why. That’s the exact skill you need most when your job is to oversee AI-generated output.

The largest gap in scores between the two groups was on debugging questions, suggesting that the ability to understand when code is incorrect and why it fails may be a particular area of concern if AI impedes coding development.

This is the same dynamic I fear in design. When I wrote about the design talent crisis, educators like Eric Heiman told me “we internalize so much by doing things slower… learning through tinkering with our process, and making mistakes.” Bradford Prairie put it more bluntly: “If there’s one thing that AI can’t replace, it’s your sense of discernment for what is good and what is not good.” But discernment comes from reps, and AI is eating the reps.

The honest framing from Anthropic’s own researchers:

It is possible that AI both accelerates productivity on well-developed skills and hinders the acquisition of new ones.

Credit to Anthropic for publishing research that complicates the case for their own product. And the study’s footnote is worth noting: they used a chat-based AI assistant, not an agentic tool like Claude Code. Their expectation is that “the impacts of such programs on skill development are likely to be more pronounced.”

I can certainly attest that when I use Claude Code, I have no idea what’s going on!

The one bright spot: not all AI use was equal. Participants who asked conceptual questions and used AI to check their understanding scored well. The ones who delegated code generation wholesale scored worst. The difference was whether you were thinking alongside the tool or letting it think for you.

Cognitive effort—and even getting painfully stuck—is likely important for fostering mastery.

Getting painfully stuck. That’s the apprenticeship. That’s the grunt work. And it’s exactly what we’re optimizing away.

Stylized hand pointing to a white sheet with three horizontal rows of black connected dots on a beige background.

How AI assistance impacts the formation of coding skills

Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems.

anthropic.com iconanthropic.com

I recently spent some time moving my entire note-taking system from Notion to Obsidian because the latter runs on Markdown files, which are plain text. Why? Because AI runs on text.

And that is also the argument from Patrick Morgan. Your notes, your documented processes, your collected examples of what “good” looks like—if those live in plain text, AI can actually work with them. If they live in your head, or scattered across tools that don’t export, they’re invisible.

There’s a difference between having a fleeting conversation and collaborating on an asset you both work on. When your thinking lives in plain text — especially Markdown — it becomes legible not just to you, but to an AI that can read across hundreds of files, notice patterns, and act at scale.

I like that he frames this as scaffolding rather than some elaborate knowledge management system. He’s honest about the PKM fatigue most of us share:

Personal knowledge management is far from a new concept. Honestly, it’s a topic I started to ignore because too many people were trying to sell me on yet another “life changing” system. Even when I tried to jump through the hoops, it was all just too much for me for too little return. But now that’s changed. With AI, the value is much greater and the barrier to entry much lower. I don’t need an elaborate system. I just need to get my thinking in text so I can share it with my AI.

This is the part that matters for designers. We externalize visual thinking all the time—moodboards, style tiles, component libraries. But we rarely externalize the reasoning behind those decisions in a format that’s portable and machine-readable. Why did we choose that pattern? What were we reacting against? What does “good” look like for this particular problem?

Morgan’s practical recommendation is dead simple: three markdown files. One for process, one for taste, one for raw thinking. That’s it.

This is how your private thinking becomes shared context.

The designers who start doing this now will have documented judgment that AI can actually use.
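Morgan doesn’t publish a template, so here’s a minimal sketch of what that three-file setup could look like in practice. The filenames and seed headings are my own guesses, not his — he prescribes the split (process, taste, raw thinking), not the names:

```python
# Hypothetical scaffold for Morgan's three-file setup. Filenames and seed
# text are my own invention; adapt them to your own practice.
from pathlib import Path

SEEDS = {
    "process.md": "# Process\nHow I approach a design problem, step by step.\n",
    "taste.md": "# Taste\nExamples of work I consider good, and why.\n",
    "thinking.md": "# Thinking\nRaw, unpolished notes. Append freely.\n",
}

notes = Path("notes")
notes.mkdir(exist_ok=True)
for name, seed in SEEDS.items():
    path = notes / name
    if not path.exists():  # never clobber existing thinking
        path.write_text(seed)

print(sorted(p.name for p in notes.iterdir()))
```

The script isn’t the point. The point is that plain Markdown files like these can be dropped straight into any AI tool’s context window, which is exactly the portability Morgan is after.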

Side profile of a woman's face merged with a vintage keyboard and monitor displaying a black-and-white mountain photo in an abstract geometric collage.

AI Runs on Text. So Should You.

Where human thinking and AI capability naturally meet

open.substack.com iconopen.substack.com

Many designers I’ve worked with want to get to screens as fast as possible. Open Figma, start laying things out, figure out the structure as they go. It works often enough that nobody questions it. But Daniel Rosenberg makes a case for why it shouldn’t be the default.

Rosenberg, writing for the Interaction Design Foundation, argues that the conceptual model—the objects users manipulate, the actions they perform, and the attributes they change—should be designed before anyone touches a screen:

Even before you sketch your first screen it is beneficial to develop a designer’s conceptual model and use it as the baseline for guiding all future interaction design decisions.

Rosenberg maps this to natural language. Objects are nouns. Actions are verbs. Attributes are adjectives. The way these elements relate to each other is the grammar of your interface. Get the grammar wrong and no amount of visual polish will save you.

His example is painfully simple. A tax e-sign system asked him to “ENTER a PIN” when he’d never used the system before. There was no PIN to enter. The action should have been “CREATE.” One wrong verb and a UX expert with 40 years of experience couldn’t complete the task. His accountant confirmed that dozens of clients had called thinking the system was broken.

Rosenberg on why this cascades:

A suboptimal decision on any lower layer will cascade through all the layers above. This is why designing the conceptual model grammar with the lowest cognitive complexity at the very start… is so powerful.

This is the part I want my team to internalize. When you jump straight to screens, you’re making grammar decisions implicitly—choosing verbs for buttons, deciding which objects to surface, grouping attributes in panels. You’re doing conceptual modeling whether you know it or not. The question is whether you’re doing it deliberately.
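To make the grammar idea concrete, here’s a tiny sketch — my own illustration, not Rosenberg’s notation — of a conceptual model that would have caught the PIN bug, since a first-time user has no PIN object to “enter”:

```python
# Objects are nouns, actions are verbs. This model is hypothetical, loosely
# based on Rosenberg's tax e-sign example.
MODEL = {
    "pin": {
        "actions_when_absent": {"create"},           # first-time user: no PIN yet
        "actions_when_present": {"enter", "reset"},
    },
}

def valid_actions(obj: str, exists: bool) -> set:
    """Return the verbs the interface may surface for an object."""
    entry = MODEL[obj]
    return entry["actions_when_present"] if exists else entry["actions_when_absent"]

# A first-time user should see CREATE, never ENTER — the exact verb
# mismatch that stranded Rosenberg.
print(valid_actions("pin", exists=False))  # {'create'}
```

Checking button labels against even a toy model like this makes the grammar decisions explicit instead of leaving them buried in a screen mockup.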

Article title "The MAGIC of Semantic Interaction Design" with small "Article" label and Interaction Design Foundation logo at bottom left.

The MAGIC of Semantic Interaction Design

Blame the user: me, a UX expert with more than 40 years of experience, who has designed more than 100 successful commercial products and evaluated the inadequate designs of nearly 1,000 more.

interaction-design.org iconinteraction-design.org

Everyone wants to talk about the AI use case. Nobody wants to talk about the work that makes the use case possible.

Erika Flowers, who led NASA’s AI readiness initiative, has a great metaphor for this on the Invisible Machines podcast. Her family builds houses, and before they could install a high-tech steel roof, they spent a week building scaffolding, setting up tarps, rigging safety harnesses, positioning dumpsters for debris. The scaffolding wasn’t the job. But without it, the job couldn’t happen.

Flowers on where most organizations are with AI right now:

We are trying to just climb up on these roofs with our most high tech pneumatic nail gun and we got all these tools and stuff and we haven’t clipped off to our belay gear. We don’t have the scaffolding set up. We don’t have the tarps and the dumpsters to catch all the debris. We just want to get up there. That is the state of AI and transformation.

The scaffolding is the boring stuff: data integration, governance, connected workflows, organizational readiness. It’s context engineering at the enterprise level. Before any AI feature can do real work, someone has to make sure it has the right data, the right permissions, and the right place in a process. Nobody wants to fund that part.

But Flowers goes further. She argues we’re not just skipping the scaffolding—we’re automating the wrong things entirely. Her example: accounting software uses AI to help you build a spreadsheet faster, then you email it to someone who extracts the one number they actually needed. Why not just ask the AI for the number? We’re using new technology to speed up old workflows instead of asking whether the workflow should exist at all.

Then she gets to the interesting question—who’s supposed to design all of this?

I don’t think it exists necessarily with the roles that we have. It’s going to be a lot closer to Hollywood… producer, director, screenwriter. And I don’t mean as metaphors, I mean literally those people and how they think and how they do it because we’re in a post software era.

She lists therapists, psychologists, wedding planners, dance choreographers. People who know how to choreograph human interactions without predetermined inputs. That’s a different skill set than designing screens, and I think she’s onto something.

Why AI Scaffolding Matters More than Use Cases ft Erika Flowers

We’re in a moment when organizations are approaching agentic AI backwards, chasing flashy use cases instead of building the scaffolding that makes AI agents actually work at scale. Erika Flowers, who led NASA’s AI Readiness Initiative and has advised Meta, Google, Netflix, and Intuit, joins Robb and Josh for a frank and funny conversation about what’s broken in enterprise AI adoption. She dismantles the myth of the “big sexy AI use case” and explains why most AI projects fail before they start. The trio makes the case that we’re entering a post-software world, whether organizations are ready or not.

youtu.be iconyoutu.be