
Acceleration Is Not Automation
I’ve been wandering the wilderness to understand where the software design profession is going. Via this blog and my newsletter, I’ve been exploring the possibilities by reading, commenting, and writing. Many other designers are in the same boat, with Erika Flowers’s Zero Vector design methodology being the most defined. Kudos to her for being one of the first—if not the first—to plant the flag.
Directionally, Flowers is right. But for me, working in a team and on B2B software, it feels too simplistic and ignores the realities of working with customers and counterparts in product management and engineering. (That’s her whole point: one person to do it all, no handoff.)
The destination is within view. But it’s hazy and distant. The path to get there is unclear, like driving through soupy fog when all you can see is your headlights reflecting off the mist.
At the core, I don’t believe the process has changed because the UX design process mirrors the scientific method:
- Observe
- Question
- Hypothesize
- Experiment
- Test
- Analyze
“Never Over” TV commercial for Eli Lilly by Wieden+Kennedy, 2026
Compare with the design thinking framework popularized by IDEO and Stanford’s d.school in the late 1990s and 2000s:
- Observe → Empathize
- Question → Define
- Hypothesize → Ideate
- Experiment → Prototype
- Test → Test
- Analyze → (Analyze)

The Design Thinking framework from Stanford’s d.school.
Even if you don’t consciously follow the official design thinking process, as a designer you do it anyway. Research → ideate → test → iterate. It’s the same thing at a high level.
The Double Diamond expands on this a bit, spelling out other aspects of what we designers do.
- Discover, or research and observe what’s happening in the problem space
- Define, or analyze your research and define the problem
- Develop, or diverge on solutions to that problem
- Deliver, or start homing in on solutions via testing

The Double Diamond design process from the Design Council.
So the question about where design is going is less about the overall process—because it stays the same, just compressed—and more about who is doing what with what. In other words, on a daily basis, what are designers doing and what tools are they using?
The Coding World Changed in Three Months
The industry has moved incredibly fast. First, coding was upended by agentic engineering. Developer and AI researcher Simon Willison said recently on Lenny’s Podcast:
…All of the software engineers who took time off over the holidays and started tinkering with this stuff got this moment of realization where it’s like, “Oh wow, this stuff actually works now. I could tell it to build code and if I describe that code well enough, it’ll follow the instructions and it’ll build the thing that I asked it to build.”
I think the reverberations to that are still shaking us [through] software engineering. A lot of people woke up in January and February and started realizing, “Oh wow, this technology which I’d been kind of paying attention to, suddenly it’s got really really good.” And what does that mean? Like what does the fact [that] I can churn out 10,000 lines of code in a day and most of it works. Is that good? Like how do we get from “most of it works” to “all of it works”?
What was a slow simmer that started with Cursor’s autocomplete and step-by-step prompting quickly turned into a rapid boil with Claude Code and Opus 4.5 in November 2025. By January 2026, developers like Geoffrey Huntley discovered the Ralph Wiggum loop, applying reinforcement learning by forcing Claude Code to continue until its task was solved without bugs; and Steve Yegge released the token-burning automated software factory Gas Town. Over the last three months, the innovations kept coming: skills, or Markdown files serving as how-tos for agents; teams of multiple agents; and a plethora of agent “harnesses,” or apparatuses to orchestrate multiple agent teams. Altogether, these new tools have effectively automated programming, with developers now commanding multiple teams of AI agents. As Willison put it, “I can fire up four agents in parallel and have them work on four different problems. And by 11 AM, I am wiped out for the day.”
With AI transforming engineering well underway, the question becomes, “What else can be accelerated in software development?” The other legs of the three-legged stool, of course.
What Else Can Be Accelerated?
Writing PRDs
Many product management activities can be accelerated with AI. Given the right inputs—CSVs of analytics data, Markdown transcripts of customer calls, deep research on market conditions—Claude Cowork can produce decent, if not great, analyses. Discuss the findings in a team meeting, feed that transcript back through Claude, and you can get a tight PRD. The core PM deliverable can be automated.
The quality of the deliverable is, at best, a C+ out of the box. It might read well and seem credible, but with a little critical thinking, you’ll realize the PRD is full of holes and gross assumptions. You need to give it a battle-tested template and build a skill that considers the right context to write the PRD. You’ll need to iterate, employing reinforcement learning to compound the AI’s experience. Keep improving the skill until the PRDs get better.
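What would such a skill look like? As a rough sketch, it’s just a Markdown file the agent reads before writing anything. The frontmatter fields follow Anthropic’s skill convention; the name, template steps, and rules here are my own illustration, not a battle-tested version:

```markdown
---
name: prd-writer
description: Drafts a PRD from analytics data, call transcripts, and market research using our team template.
---

# Writing a PRD

1. Read every input provided: analytics CSVs, customer-call transcripts,
   and market research notes.
2. Fill in the team’s PRD template section by section. Never invent metrics
   or customer quotes; cite the source file for each claim.
3. List every assumption in an “Open Questions” section so a human can
   challenge it before the PRD moves forward.
```

Each time a PRD comes back with holes, you amend the skill’s rules, which is how the iteration compounds.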
Of course, PRDs are just one deliverable out of the many a PM might be responsible for. But for building something, I’d argue it’s the most important, because it feeds everything else: it’s the beginning of the spec.
With a PRD, how would an AI come up with a solution? What could be automated on the design front?
Designing Flows and Prototypes
“Tea. Earl Grey. Hot.”
The magic of LLMs is that you can ask them for anything and they’ll make it. It’s like the Star Trek replicator, but for digital artifacts. And like replicator food, the generated simulacra aren’t necessarily good. I’ve tried a few times to generate flows from Claude. Feeding it a PRD, I asked for a user flow and got a Mermaid diagram, which could be rendered as a flowchart in FigJam or Figma via a plugin. There’s a thing that we humans do when we think about systems: we simplify and calibrate the level of granularity so a flow is easy to understand. What I’ve found with the few flows I’ve generated with Claude is that it tends to flex the altitude within the same chart. Sometimes it’s super low and detailed, and other times it’s high and hand-wavy. With work and iterating on a skill, I’m sure flows can get better.
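For readers who haven’t seen one, a Mermaid flow is plain text that a plugin renders as a flowchart. This hypothetical sign-in-to-checkout flow, which I’m using purely as illustration, shows what a consistent altitude looks like—every node sits at the same level of detail:

```mermaid
flowchart TD
    A[Open cart] --> B{Signed in?}
    B -- Yes --> C[Review order]
    B -- No --> D[Sign in or continue as guest]
    D --> C
    C --> E[Enter payment]
    E --> F{Payment accepted?}
    F -- Yes --> G[Order confirmation]
    F -- No --> E
```

The failure mode I described is when one branch stays at this level while another suddenly drops into field-validation minutiae.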
This is true of any of our deliverables, even wireframes, mocks, and prototypes. Iterate on a skill and the deliverables will improve over time.

The AI foundational model capabilities will grow exponentially and AI-enabled design tools will benefit from the algorithmic advances. Sources: AI 2027 scenario & Roger Wong
A year ago in “Prompt. Generate. Deploy.”, I forecasted this workflow arriving via tools like Tempo, which bundled PRD → flow → wireframes → code into a single pipeline. That bundled version didn’t quite materialize. But the pieces did, just unbundled across Claude, Figma plugins, and v0.
But if we’re prompting flows and prototypes into existence, what should we do with the newfound gains in productivity? Not play solitaire while Claude churns, of course. Instead, we can test more.
In Jake Knapp’s Design Sprint, paper or low-fidelity prototypes were used to validate hypotheses. And, if you’re lucky, you could get through maybe two iterations. But the core idea is to get validation signal from real customers and users about your solution via a prototype. Now, with AI, you can iterate on a working prototype quickly enough that talking to more customers and sharing more variations is possible. This multiplies the confidence in your solution, lowering the risk of spending resources to launch it.

The 5-day design sprint from “Sprint” by Jake Knapp.
Design-to-Code Handoff
Once you have a validated prototype, you can go about designing the real thing. Maybe you’ll want to continue to do it in Figma like always, or maybe you’ll use newer AI-powered tools like v0, Lovable, or Claude Code. Or maybe you’ll use your prototype as the base and make it production-ready. In any case, you have the opportunity to shape the material directly, to actually make the thing instead of a picture of the thing.
For designers, I believe it’s possible to build the front-end, at the very least. Some platforms have really complex application logic and backends, so there I trust software engineers more than myself. But if I can hand off a fully formed frontend for engineering to hook up to the backend, then design QA is cut down by 90% because I made it.
Acceleration vs Automation

Are the use cases I described above—faster PRD writing, faster flows and prototypes—really automation? Or is the artifact generation simply being accelerated?
Agentic engineering is truly automation. Engineers who are orchestrating teams of agents feed them prompts and then sit back and wait for the results. It’s closer to George Jetson sitting at a console pushing a button as a job. Yes, I know there’s more to it. Getting the spec right is the secret. And it’s the PRDs and designs that make up the spec.
We haven’t really gotten to automated PRDs from short prompts, much less automated flow design, high-fidelity mockups, and interactive prototypes. Each of those deliverables still takes extensive back-and-forth with the LLMs to get right, to get to something above mediocre.
As mentioned earlier, skills are one way to improve the output. But to truly automate, we have to break the process down further into what specialists might do.
Enter the Agent Team

Augment Code’s Intent agent orchestration GUI.
In agentic engineering, developers have figured out that they need to give agents certain roles or personas with specific instructions. For example, Intent by Augment Code uses a coordinator agent whose system prompt begins with:
You plan, delegate, and verify. You do NOT implement code yourself. You NEVER edit files directly. You have no file editing tools available. Delegation to implementor agents is the ONLY way code gets written.
And then a developer agent takes assigned tasks:
You plan and implement. You write specs first, then implement the work yourself after approval. No delegation, no sub-agents.
And finally, a verifier agent ensures the completed task is done right:
You verify the implementation against the spec’s Acceptance Criteria. You are evidence-driven: if you can’t point to concrete evidence, it’s not verified.
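The division of labor between those three prompts can be sketched in a few lines of Python. This is not Augment Code’s actual implementation—`call_llm` is a stand-in for a real model call, and the completion check is a placeholder—but it shows the shape of the loop: coordinator plans, implementor builds, verifier gates.

```python
# Illustrative sketch of a coordinator/implementor/verifier loop.
# `call_llm` is a placeholder; a real harness would call a model API.

COORDINATOR = "You plan, delegate, and verify. You do NOT implement code yourself."
IMPLEMENTOR = "You plan and implement. No delegation, no sub-agents."
VERIFIER = "You verify the implementation against the spec's Acceptance Criteria."

def call_llm(system_prompt: str, task: str) -> str:
    # Stand-in: echoes the role and the task instead of calling a model.
    return f"[{system_prompt.split('.')[0]}] handled: {task}"

def run_team(spec: str, max_rounds: int = 3) -> list[str]:
    log = []
    # The coordinator only plans; it never touches the code itself.
    log.append(call_llm(COORDINATOR, f"Break this spec into tasks: {spec}"))
    for _ in range(max_rounds):
        work = call_llm(IMPLEMENTOR, spec)    # implementor writes the code
        log.append(work)
        verdict = call_llm(VERIFIER, work)    # verifier checks it against the spec
        log.append(verdict)
        if "handled" in verdict:              # stand-in for "acceptance criteria met"
            break
    return log

log = run_team("Add CSV export to the reports page")
```

The loop only exits when the verifier signs off, which is what lets the team run unattended for those dozens of minutes.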
This team of agents works together to get something done. The planning, the breaking down of a spec into bite-sized chunks, the coordination, and the testing all happen within this team until they think it’s done. Sometimes this will go on for dozens of minutes.
A Team of PMs
We can take this same idea and apply it to product management work. I can imagine the following agents:
- Customer researcher. Gather and summarize heaps of customer call transcripts, support tickets, product usage metrics, etc.
- Market researcher. Research competitors and write detailed market analyses on the competitive landscape.
- Business analyst. Extract the requirements from the research, compare against present state, then report findings.
- Product strategist. Based on all of the above, create SWOT analysis, and then roadmap.
- Product manager. Write a PRD for the first wave of the roadmap.
To do all of the above in a single skill would not yield great results. But having these specialized agents work together could produce meaningful artifacts. The human judgment PMs bring to the table includes institutional knowledge about the business, its product, and its customer base. PMs should shape the AI’s output via iterations.
(Obviously, product managers need to conduct the customer calls IRL. We’re not talking about AI-automated user interviews.)
A Team of Design Specialists
MC Dean created an agent team of 10 design specialists. It’s packaged together in something called Designpowers, inspired by Jesse Vincent’s Superpowers. Her list of specialists includes:
- The design-strategist builds your flows, information architecture, personas, and design principles.
- The design-scout does competitive research and pattern analysis.
- The design-lead handles visual design — layout, colour, typography, components.
- The motion-designer takes care of animation, transitions, and micro-interactions.
- The content-writer writes interface copy at Grade 6 reading level.
- The design-builder converts specs into production code.
- The accessibility-reviewer runs WCAG and COGA evaluations on everything the team produces.
- The design-critic reviews the work against your brief and principles, finding the gaps nobody else caught.
- The inspiration-scout handles aesthetic references, cross-domain inspiration, mood boards.
- The heuristic evaluator evaluates a design against established usability heuristics (Nielsen’s 10) and conducts cognitive walkthroughs of key tasks.
I love this and I think this is the step towards automation I’ve been exploring in this essay. Looking at this list, I can see some agents that would be applicable in my day-to-day and some that aren’t. I would also add some others. For what my team and I do at BuildOps, an operational platform for commercial contractors, I’d have the following design specialist agents:
- UX researcher. Gather and summarize user interview transcripts, moderated and unmoderated studies, support tickets, product usage metrics, etc. Is there anything else that the customer researcher agent hasn’t already discovered?
- Design strategist. From the available research—above and from the Product agent team—brainstorm solutions that align with the product strategy.
- Design architect. Given a high-level solution, map out all the flows including edge cases.
- UX designer. From the flows, spec out all the necessary individual pages. What appears on each screen and what is the user expected to do?
- UX copywriter. Writes any UX copy like component labels and user instructions according to our copywriting style guide.
- Prototyper. Using the spec’d pages, make an interactive prototype. This can be used for testing and validation with customers and users.
- Design builder. Turn the prototype into production-ready code. Incorporate the design system properly and account for error states and edge cases.
- Accessibility reviewer. Ensures compliance with a11y guidelines and industry best practices.
- Jakob Nielsen. Evaluate the final design against Nielsen’s 10 UX heuristics.
- Verifier. Double-check that the final design satisfies all the requirements from the PRD and finds any gaps that may have been missed.
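Stringing a subset of these roles together looks something like the following sketch. Everything here is hypothetical—`run_agent` and `human_review` are stand-ins, not a real API—but it captures the workflow: each agent hands its artifact to the next, with a human checkpoint between handoffs in the spirit of Designpowers’ creative-director role.

```python
# Hypothetical sequential pipeline over a subset of the design-agent roles above.
# `run_agent` and `human_review` are placeholders, not a real framework.

PIPELINE = [
    "ux-researcher",
    "design-strategist",
    "design-architect",
    "ux-designer",
    "prototyper",
    "verifier",
]

def run_agent(name: str, artifact: str) -> str:
    # Stand-in for a model call with the agent's system prompt; it just
    # wraps the incoming artifact so the chain of handoffs is visible.
    return f"{name}({artifact})"

def human_review(name: str, artifact: str) -> str:
    # The human can correct, add, redirect, or skip at each handoff.
    # Here we approve everything unchanged.
    return artifact

def run_pipeline(prd: str) -> str:
    artifact = prd
    for name in PIPELINE:
        artifact = run_agent(name, artifact)
        artifact = human_review(name, artifact)  # checkpoint between handoffs
    return artifact

result = run_pipeline("PRD")
```

The PRD goes in at the top; what comes out the bottom has passed through every specialist and every human checkpoint in order.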
All of the above assumes we have a rock-solid design system with a full assortment of components and documented patterns and rules.
We have the team, and now what? Is it really as simple as describing what you want to build? In Dean’s Designpowers, the user acts as the creative director, intercepting “any handoff to correct, add, redirect, or skip.” If we’re to imagine a more automated workflow, the user here—you—would simply feed in the PRD as context, let the AI churn for a while, then inspect the resulting prototype and iterate from there. That is what agentic design would look like.
But then the bottleneck becomes you. Your judgment, built on years of experience, is what can shape and direct the agent team’s output. As I argued in this week’s newsletter, specialist experience is what builds the judgment an AI can’t hold: “…Judgment compounds from pattern recognition that only comes from doing grunt work in one lane long enough to know what good looks like.”
Walking Out of the Wilderness
If you’re still with me, let’s address this directly: Do we want to automate design? No. But it will happen, and it’s already happening at tech-forward companies like Silicon Valley startups. The answer is not to resist, but to adapt. To follow the skill, not cling to the role.
Adapting will look different depending on what you’re building and what happens if it breaks. For consumer apps and early-stage products, a solo operator commanding an agent team may be fine. For vertical SaaS, it isn’t.
My team of designers does a lot of discovery, understanding the problem from multiple vantage points, and creates bulletproof solutions by thinking through edge cases, application performance, and integration points. We have to because we work on mission-critical operational software. If BuildOps doesn’t work as expected, our customers’ businesses will come to a grinding halt. That can’t happen.
Which is why I don’t believe a team of one can ship a robust feature end-to-end in vertical SaaS. There’s too much complexity, but more importantly, there’s too much at stake. As a designer, I don’t understand the ins and outs of integrating with an enterprise accounting system. I’m not trained enough in engineering to be able to spot something amiss in AI-generated code that will result in catastrophe. But I do have the experience necessary in design to pick out and correct a poor user experience the AI may have built.
Here’s what I see through the fog: agentic design is the future. But the process to actually run it isn’t fully formed yet. I can see its outlines now. That’s where I’m headed next.

