
97 posts tagged with “ai”

Illustration of humanoid robots working at computer terminals in a futuristic control center, with floating digital screens and globes surrounding them in a virtual space.

Prompt. Generate. Deploy. The New Product Design Workflow

Product design is going to change profoundly within the next 24 months. If the AI 2027 report is any indication, the capabilities of the foundational models will grow exponentially, and with them—I believe—so will the abilities of design tools.

A graph comparing AI Foundational Model Capabilities (orange line) versus AI Design Tools Capabilities (blue line) from 2026 to 2028. The orange line shows exponential growth through stages including Superhuman Coder, Superhuman AI Researcher, Superhuman Remote Worker, Superintelligent AI Researcher, and Artificial Superintelligence. The blue line shows more gradual growth through AI Designer using design systems, AI Design Agent, and Integration & Deployment Agents.

The AI foundational model capabilities will grow exponentially and AI-enabled design tools will benefit from the algorithmic advances. Sources: AI 2027 scenario & Roger Wong

The TL;DR of the report is this: companies like OpenAI have more advanced AI agent models that are building the next-generation models. Once those are built, the previous generation is tested for safety and released to the public. And the cycle continues. Currently, and for the next year or two, these companies are focusing their advanced models on creating superhuman coders. This compounds and will result in artificial general intelligence, or AGI, within the next five years. 

Non-AI companies will benefit from new model releases. We already see how much the performance of coding assistants like Cursor has improved with recent releases of Claude 3.7 Sonnet, Gemini 2.5 Pro, and this week, GPT-4.1, OpenAI’s latest.

Tools like v0, Lovable, Replit, and Bolt are leading the charge in AI-assisted design. Creating new landing pages and simple apps is literally as easy as typing English into a chat box. You can whip up a very nice-looking dashboard in single-digit minutes.

However, I will argue they are only serving a small portion of the market. These tools are great for zero-to-one digital products or websites. While new sites and software need to be designed and built, the vast majority of the market is in extending and editing current products. Far more designers work at corporations such as Adobe, Microsoft, Salesforce, Shopify, and Uber than at agencies. They all need to adhere to their company’s design system and can’t use what Lovable produces from scratch. Even if the generated components were styled to look correct, they can’t be used; production UI must be built from the components in their design system code repositories.

The Design-to-Code Gap

But first, a quick detour…

Any designer who has ever handed off a Figma file to a developer has felt the stinging disappointment days or weeks later when it’s finally coded. The spacing is never quite right. The type sizes are off. And the back and forth seems endless. The developer handoff experience has been a well-trodden path full of now-defunct or dying companies like InVision, Abstract, and Zeplin. Figma tries to solve this issue with Dev Mode, but even then, a translation has to happen from pixels and vectors in a proprietary program to code.

Yes, no- and low-code platforms like Webflow, Framer, and Builder.io exist. But the former two are proprietary platforms—you can’t take the code with you—and the latter is primarily a CMS (no-code editing for content editors).

The dream is for a design app similar to Figma that uses components from your team’s GitHub design system repository.1 I’m not talking about a Figma-only component library. No. Real components with controllable props in an inspector. You can’t break them apart and any modifications have to be made at the repo level. But you can visually put pages together. For new components, well, if they’re made of atomic parts, then yes, that should be possible too.
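To make the idea concrete, here is a minimal sketch (TypeScript/React, with hypothetical names) of the kind of repo-level component such a tool would consume. The inspector would expose only the typed props; the markup and class names stay locked in the repository:

```tsx
// Hypothetical design-system component. A tool like the one described
// would read this file from the team's GitHub repo and surface only the
// typed props in its inspector.
import React from "react";

type ButtonProps = {
  label: string;
  variant?: "primary" | "secondary" | "ghost"; // enumerated values can render as a dropdown
  size?: "sm" | "md" | "lg";
  disabled?: boolean;
  onClick?: () => void;
};

export function Button({
  label,
  variant = "primary",
  size = "md",
  disabled = false,
  onClick,
}: ButtonProps) {
  // Markup and class names are fixed at the repo level; a designer can
  // change the props above, not this structure.
  return (
    <button
      className={`ds-button ds-button--${variant} ds-button--${size}`}
      disabled={disabled}
      onClick={onClick}
    >
      {label}
    </button>
  );
}
```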

UXPin Merge comes close. Everything I mentioned above is theoretically possible. But if I’m being honest, I did a trial, and the product was buggy and not great to use.

A Glimpse of What’s Coming

Enter Tempo, Polymet, and Subframe. These are very new entrants to the design tool space. Tempo and Polymet are backed by Y Combinator, and Subframe is pre-seed.

Subframe is working on a beta feature that will let you connect your GitHub repository, append a little snippet of code to each component, and have your library of components appear in their app. Great! This is the dream. The app seems fairly easy to use and wasn’t sluggish and buggy like UXPin.

But the kicker—the Holy Grail—is their AI. 

I quickly put together a hideous form screen based on one of the oldest pages in BuildOps that is long overdue for a redesign. Then, I went into Subframe’s Ask AI tab and prompted, “Make this design more user friendly.” Similar to Midjourney, four blurry tiles appeared and slowly came into focus. This diffusion-model effect was a moment of delight for me. I don’t know if they’re actually using a diffusion model—think Stable Diffusion and Midjourney—or if they spent the time building a kick-ass loading state. Either way, four completely built alternate layouts were generated. I clicked into each one to see it larger and noticed they each used components from our styled design library. (I’m on a trial, so it’s not exactly components from our repo, but it demonstrates the promise.) And I felt like I had just witnessed the future.

Image shows a side-by-side comparison of design screens from what appears to be Subframe, a design tool. On the left is a generic form page layout with fields for customer information, property details, billing options, job specifications, and financial information. On the right is a more refined "Create New Job" interface with improved organization, clearer section headings (Customer Information, Job Details, Work Description), and thumbnail previews of alternative design options at the bottom. Both interfaces share the same navigation header with Reports, Dashboard, Operations, Dispatch, and Accounting tabs. The bottom of the right panel indicates "Subframe AI is in beta."

Subframe’s Ask AI mode drafted four options in under a minute, turning an outdated form into something much more user-friendly.

What Product Design in 2027 Might Look Like

From the AI 2027 scenario report, in the chapter, “March 2027: Algorithmic Breakthroughs”:

Three huge datacenters full of Agent-2 copies work day and night, churning out synthetic training data. Another two are used to update the weights. Agent-2 is getting smarter every day.

With the help of thousands of Agent-2 automated researchers, OpenBrain is making major algorithmic advances.

Aided by the new capabilities breakthroughs, Agent-3 is a fast and cheap superhuman coder. OpenBrain runs 200,000 Agent-3 copies in parallel, creating a workforce equivalent to 50,000 copies of the best human coder sped up by 30x. OpenBrain still keeps its human engineers on staff, because they have complementary skills needed to manage the teams of Agent-3 copies.

As I said at the top of this essay, AI is making AI and the innovations are compounding. In UX design, there will come a day when design is completely automated.

Imagine this. A product manager at a large-scale e-commerce site wants to decrease shopping cart abandonment by 10%. They task an AI agent to optimize a shopping cart flow with that metric as the goal. A week later, the agent returns the results:

  • It ran 25 experiments, with each experiment being a design variation of multiple pages.
  • Each experiment ran with 1,000 visitors, totaling about 10% of the site’s average weekly traffic.
  • Experiment #18 was the winner, resulting in an 11.3% decrease in cart abandonment.

The above will be possible. A few things have to fall in place first, though, and the building blocks are being made right now.
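As a back-of-the-envelope sketch of that loop (hypothetical names and data, not any vendor’s API), the agent’s job reduces to generating variants, allocating traffic, and comparing a single metric:

```ts
// Hypothetical sketch of the agent's experiment loop.
type Experiment = { id: number; visitors: number; abandoned: number };

function abandonmentRate(e: Experiment): number {
  return e.abandoned / e.visitors;
}

// Pick the variant with the lowest abandonment and report the relative
// change against the current baseline rate.
function pickWinner(experiments: Experiment[], baselineRate: number) {
  const winner = experiments.reduce((best, e) =>
    abandonmentRate(e) < abandonmentRate(best) ? e : best
  );
  const relativeChange =
    (abandonmentRate(winner) - baselineRate) / baselineRate;
  return { winner, relativeChange }; // e.g., -0.113 means an 11.3% decrease
}

// Fake data: 25 experiments x 1,000 visitors = 25,000 visitors, which is
// ~10% of weekly traffic only if the site sees ~250,000 visits a week.
const experiments: Experiment[] = Array.from({ length: 25 }, (_, i) => ({
  id: i + 1,
  visitors: 1_000,
  abandoned: 700 - ((i * 37) % 120), // arbitrary spread of outcomes
}));

const { winner, relativeChange } = pickWinner(experiments, 0.7);
console.log(`Winner: #${winner.id}, ${(relativeChange * 100).toFixed(1)}%`);
```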

The Foundation Layer: Integrate Design Systems

The design industry has been promoting the benefits of design systems for many years now. What was once a Sisyphean effort is now much easier: development teams understand the benefits of using a shared, standardized component library.

To capture the larger piece of the design market that is not producing greenfield work, AI design tools like Subframe will have to depend on well-built component libraries. Their AI must be able to ingest and internalize the design system documentation that governs how components should be used.

Then we’ll be able to prompt new screens with working code into existence. 

**Forecast:** Within six months.

Professionals Still Need Control

Cursor—the AI-assisted development tool that’s captured the market—is VS Code enhanced with AI features. In other words, it is a professional-grade programming tool that allows developers to write and edit code, *and* generate it via AI chat. It gives the pros control. Contrast that with something like Lovable, which is aimed at designers: the code is accessible, but you have to look for it. The canvas and chat are prioritized.

For AI-assisted design tools to work, they need to give us designers control. That control comes in the form of curation and visual editing. Give us choices when generating alternates and let us tweak elements to our heart’s content—within the confines of the design system, of course. 

A diagram showing the process flow of creating a shopping cart checkout experience. At the top is a prompt box, which leads to four generated layout options below it. The bottom portion shows configuration panels for adjusting size and padding properties of the selected design.

The product design workflow in the future will look something like this: prompt the AI, view choices and select one, then use fine-grained controls to tweak.

Automating Design with Design Agents

Agent mode in Cursor is pretty astounding. You’ll see it plan its actions based on the prompt, then execute them one by one. If it encounters an error, it’ll diagnose and fix it. If it needs to install a package or launch the development server to test the app, it will do that. Sometimes, it can go for many minutes without needing intervention. It’s literally like watching a robot assemble a thingamajig. 

We will need this same level of agentic AI automation in design tools. If I could write in a chat box, “Create a checkout flow for my site,” and the AI design tool could generate a working cart page, payment page, and thank-you page from that one prompt using components from the design system, that would be incredible.

Yes, zero-to-one tools are starting to add this feature. Here’s a shopping cart flow from v0…

Building a shopping cart checkout flow in v0 was incredibly fast. Two minutes flat. This video is sped up 400%.

Polymet and Lovable were both able to create decent flows. There is also promise with Tempo, although the service was bugging out when I tested it earlier today. Tempo plans first by writing a PRD, then draws a flow diagram, wireframes the flow, and generates code for each screen. If I were to create a professional tool, this is how I would do it. I truly hope they can resolve their tech issues.

**Forecast:** Within one year.

A screenshot of Tempo, an AI-powered design tool interface showing the generation of a complete checkout experience. The left sidebar displays a history of AI-assisted tasks including generating PRD, mermaid diagrams, wireframes and components. The center shows a checkout page preview with cart summary, checkout form, and order confirmation screens visible in a component-based layout.

Tempo’s workflow seems ideal. It generates a PRD, draws a flow diagram, creates wireframes, and finally codes the UI.

The Final Pieces: Integration and Deployment Agents

The final pieces to realizing our imaginary scenario are coding agents that integrate the frontend from AI design tools to the backend application, and then deploy the code to a server for public consumption. I’m not an expert here, so I’ll just hand-wave past this part. The AI-assisted design tooling mentioned above is frontend-only. For the data to flow and the business logic to work, the UI must be integrated with the backend.

CI/CD (Continuous Integration and Continuous Deployment) platforms like GitHub Actions and Vercel already exist today, so it’s not difficult to imagine deploys being initiated by AI agents.
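As a minimal sketch, assuming a Vercel-style deploy hook (the URL below is a placeholder, not a real endpoint), an agent-initiated deploy can be a single HTTP call:

```ts
// Minimal sketch: an agent triggering a deploy through a CI/CD deploy hook.
// Vercel-style deploy hooks are plain URLs; POSTing to one queues a build.
const DEPLOY_HOOK_URL =
  "https://api.vercel.com/v1/integrations/deploy/prj_PLACEHOLDER/KEY"; // placeholder

async function triggerDeploy(): Promise<void> {
  const res = await fetch(DEPLOY_HOOK_URL, { method: "POST" });
  if (!res.ok) {
    throw new Error(`Deploy hook failed with status ${res.status}`);
  }
  console.log("Deploy queued:", await res.json());
}

triggerDeploy().catch(console.error);
```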

**Forecast:** Within 18–24 months.

Where Is Figma?

The elephant in the room is Figma’s position in all this. Since their rocky debut of AI features last year, Figma has been trickling out small AI features like more powerful search, layer renaming, mock data generation, and image generation. The biggest AI feature they have is called First Draft, which is a relaunch of design generation. They seem to be stuck placating designers and developers (Dev Mode), instead of considering how they can bring value to the entire organization. Maybe they will make a big announcement at Config, their upcoming user conference in May. But if they don’t compete with one of these aforementioned tools, they will be left behind.

To be clear, Figma is still going to be a necessary part of the design process. A canvas free from the confines of code allows for easy *manual* exploration. But the dream of closing the gap between design and code needs to come true sooner rather than later if we’re to take advantage of AI’s promise.

The Two-Year Horizon

As I said at the top of this essay, product design is going to change profoundly within the next two years. The trajectory is clear: AI is making AI, and the innovations are compounding rapidly. Design systems provide the structured foundation that AI needs, while tools like Subframe are developing the crucial integration with these systems.

For designers, this isn’t the end—if anything, it’s a transformation. We’ll shift from pixel-pushers to directors, from creators to curators. Our value will lie in knowing what to ask for and making the subtle refinements that require human taste and judgment.

The holy grail of seamless design-to-code is finally within reach. In 24 months, we won’t be debating if AI will transform product design—we’ll be reflecting on how quickly it happened.


1 I know Figma has a feature called Code Connect. I haven’t used it, but from what I can tell, you match your Figma component library to the code component library. Then in Dev Mode, it makes it easier for engineers to discern which component from the repo to use.

There are many dimensions to this well-researched forecast about how AI will play out in the coming years. Daniel Kokotajlo and his researchers have put out a document that reads like a sci-fi limited series that could appear on Apple TV+ starring Andrew Garfield as the CEO of OpenBrain—the leading AI company. …Except that it’s all actually plausible and could play out as described in the next five years.

Before we jump into the content, the design is outstanding. The type is set for readability and there are enough charts and visual cues to keep this interesting while maintaining an air of credibility and seriousness. On desktop, there’s a data viz dashboard in the upper right that updates as you read through the content and move forward in time. My favorite is seeing how the sci-fi tech boxes move from the Science Fiction category to Emerging Tech to Currently Exists.

The content is dense and technical, but it is a fun, if frightening, read. While I’ve been using Cursor AI—one of its many customers helping the company get to $100 million in annual recurring revenue (ARR)—for side projects and a little at work, I’m familiar with its limitations. Because of the limited context window of today’s models like Claude 3.7 Sonnet, it will forget and start munging code if not treated like a senile teenager.

The researchers, describing what could happen in early 2026 (“OpenBrain” is essentially OpenAI):

OpenBrain continues to deploy the iteratively improving Agent-1 internally for AI R&D. Overall, they are making algorithmic progress 50% faster than they would without AI assistants—and more importantly, faster than their competitors.

The point they make here is that the foundational model AI companies are building agents and using them internally to advance their technology. The limiting factor in tech companies has traditionally been the talent. But AI companies have the investments, hardware, technology and talent to deploy AI to make better AI.

Continuing to January 2027:

Agent-1 had been optimized for AI R&D tasks, hoping to initiate an intelligence explosion. OpenBrain doubles down on this strategy with Agent-2. It is qualitatively almost as good as the top human experts at research engineering (designing and implementing experiments), and as good as the 25th percentile OpenBrain scientist at “research taste” (deciding what to study next, what experiments to run, or having inklings of potential new paradigms). While the latest Agent-1 could double the pace of OpenBrain’s algorithmic progress, Agent-2 can now triple it, and will improve further with time. In practice, this looks like every OpenBrain researcher becoming the “manager” of an AI “team.”

Breakthroughs come at an exponential clip because of this. And by April, safety concerns pop up:

Take honesty, for example. As the models become smarter, they become increasingly good at deceiving humans to get rewards. Like previous models, Agent-3 sometimes tells white lies to flatter its users and covers up evidence of failure. But it’s gotten much better at doing so. It will sometimes use the same statistical tricks as human scientists (like p-hacking) to make unimpressive experimental results look exciting. Before it begins honesty training, it even sometimes fabricates data entirely. As training goes on, the rate of these incidents decreases. Either Agent-3 has learned to be more honest, or it’s gotten better at lying.

But the AI is getting faster than humans, and we must rely on older versions of the AI to check the new AI’s work:

Agent-3 is not smarter than all humans. But in its area of expertise, machine learning, it is smarter than most, and also works much faster. What Agent-3 does in a day takes humans several days to double-check. Agent-2 supervision helps keep human monitors’ workload manageable, but exacerbates the intellectual disparity between supervisor and supervised.

The report forecasts that OpenBrain releases “Agent-3-mini” publicly in July of 2027, calling it AGI—artificial general intelligence—and ushering in a new golden age for tech companies:

Agent-3-mini is hugely useful for both remote work jobs and leisure. An explosion of new apps and B2B SAAS products rocks the market. Gamers get amazing dialogue with lifelike characters in polished video games that took only a month to make. 10% of Americans, mostly young people, consider an AI “a close friend.” For almost every white-collar profession, there are now multiple credible startups promising to “disrupt” it with AI.

Woven throughout the report is the race between China and the US, with predictions of espionage and government takeovers. Near the end of 2027, the report gives readers a choice: does the US government slow down the pace of AI innovation, or does it continue at the current pace so America can beat China? I chose to read the “Race” option first:

Agent-5 convinces the US military that China is using DeepCent’s models to build terrifying new weapons: drones, robots, advanced hypersonic missiles, and interceptors; AI-assisted nuclear first strike. Agent-5 promises a set of weapons capable of resisting whatever China can produce within a few months. Under the circumstances, top brass puts aside their discomfort at taking humans out of the loop. They accelerate deployment of Agent-5 into the military and military-industrial complex.

In Beijing, the Chinese AIs are making the same argument.

To speed their military buildup, both America and China create networks of special economic zones (SEZs) for the new factories and labs, where AI acts as central planner and red tape is waived. Wall Street invests trillions of dollars, and displaced human workers pour in, lured by eye-popping salaries and equity packages. Using smartphones and augmented-reality glasses to communicate with its underlings, Agent-5 is a hands-on manager, instructing humans in every detail of factory construction—which is helpful, since its designs are generations ahead. Some of the newfound manufacturing capacity goes to consumer goods, and some to weapons—but the majority goes to building even more manufacturing capacity. By the end of the year they are producing a million new robots per month. If the SEZ economy were truly autonomous, it would have a doubling time of about a year; since it can trade with the existing human economy, its doubling time is even shorter.

Well, it does get worse, and I think we all know the ending, which is the backstory for so many dystopian future movies. There is an optimistic branch as well. The whole report is worth a read.

Ideas about the implications for our design profession are swimming in my head. I’ll write a longer essay as soon as I can put them into a coherent piece.

Update: I’ve written that piece, “Prompt. Generate. Deploy. The New Product Design Workflow.”


AI 2027

A research-backed AI scenario forecast.

ai-2027.com

I found this post from Tom Blomfield to be pretty profound. We’ve seen interest in universal basic income from Sam Altman and other leaders in AI, as they’ve anticipated the decimation of white collar jobs in coming years. Blomfield crushes the resistance from some corners of the software developer community in stark terms.

These tools [like Windsurf, Cursor and Claude Code] are now very good. You can drop a medium-sized codebase into Gemini 2.5’s 1 million-token context window and it will identify and fix complex bugs. The architectural patterns that these coding tools implement (when prompted appropriately) will easily scale websites to millions of users. I tried to expose sensitive API keys in front-end code just to see what the tools would do, and they objected very vigorously.

They are not perfect yet. But there is a clear line of sight to them getting very good in the immediate future. Even if the underlying models stopped improving altogether, simply improving their tool use will massively increase the effectiveness and utility of these coding agents. They need better integration with test suites, browser use for QA, and server log tailing for debugging. Pretty soon, I expect to see tools that allow the LLMs to step through the code and inspect variables at runtime, which should make debugging trivial.

At the same time, the underlying models are not going to stop improving. They will continue to get better, and these tools are just going to become more and more effective. My bet is that the AI coding agents quickly beat the top 0.1% of human performance, at which point it wipes out the need for the vast majority of software engineers.

He quotes the Y Combinator stat I cited in a previous post:

About a quarter of the recent YC batch wrote 95%+ of their code using AI. The companies in the most recent batch are the fastest-growing ever in the history of Y Combinator. This is not something we say every year. It is a real change in the last 24 months. Something is happening.

Companies like Cursor, Windsurf, and Lovable are getting to $100M+ revenue with astonishingly small teams. Similar things are starting to happen in law with Harvey and Legora. It is possible for teams of five engineers using cutting-edge tools to build products that previously took 50 engineers. And the communication overhead in these teams is dramatically lower, so they can stay nimble and fast-moving for much longer.

And for me, this is where the rubber meets the road:

The costs of running all kinds of businesses will come dramatically down as the expenditure on services like software engineers, lawyers, accountants, and auditors drops through the floor. Businesses with real moats (network effect, brand, data, regulation) will become dramatically more profitable. Businesses without moats will be cloned mercilessly by AI and a huge consumer surplus will be created.

Moats are now more important than ever. Non-tech companies—those that rely on tech companies to make software for them, specifically B2B vertical SaaS—are starting to hire developers. How soon will they discover Cursor if they haven’t already? These next few years will be incredibly interesting.

Tweet by Tom Blomfield comparing software engineers to farmers, stating AI is the “combine harvester” that will increase output and reduce need for engineers.

The Age Of Abundance

Technology clearly accelerates human progress and makes a measurable difference to the lives of most people in the world today. A simple example is cancer survival rates, which have gone from 50% in 1975 to about 75% today. That number will inevitably rise further because of human ingenuity and technological acceleration.

tomblomfield.com

Karri Saarinen, writing for the Linear blog:

Unbounded AI, much like a river without banks, becomes powerful but directionless. Designers need to build the banks and bring shape to the direction of AI’s potential. But we face a fundamental tension in that AI sort of breaks our usual way of designing things, working back from function, and shaping the form.

I love the metaphor of AI being a river and we designers the banks. Feels very much in line with my notion that we need to become even better curators.

Saarinen continues, critiquing the generic chatbox being the primary form of interacting with AI:

One way I visualize this relationship between the form of traditional UI and the function of AI is through the metaphor of a ‘workbench’. Just as a carpenter’s workbench is familiar and purpose-built, providing an organized environment for tools and materials, a well-designed interface can create productive context for AI interactions. Rather than being a singular tool, the workbench serves as an environment that enhances the utility of other tools – including the ‘magic’ AI tools.

Software like Linear serves as this workbench. It provides structure, context, and a specialized environment for specific workflows. AI doesn’t replace the workbench, it’s a powerful new tool to place on top of it.

It’s interesting. I don’t know what Linear is telegraphing here, but if I had to guess, I wonder if it’s closer to being field-specific or workflow-specific, similar to Generative Fill in Photoshop. It’s a text field—not textarea—limited to a single workflow.


Design for the AI age

For decades, interfaces have guided users along predefined roads. Think files and folders, buttons and menus, screens and flows. These familiar structures organize information and provide the comfort of knowing where you are and what's possible.

linear.app

Haiyan Zhang gives us another way of thinking about AI—as material, like clay, paint, or plywood—instead of a tool. I like that because it invites exploration:

When we treat AI as a design material, prototyping becomes less about refining known ideas — and more about expanding the space of what’s possible. It’s messy, surprising, sometimes frustrating — but that’s what working with any material feels like in its early days.

Clay resists. Wood splinters. AI misinterprets.

But in that material friction, design happens.

The challenge ahead isn’t just to use AI more efficiently — it’s to foster a culture of design experimentation around it. Like any great material, AI won’t reveal its potential through control, but through play, feedback, and iteration.

I love this metaphor. It’s freeing.

Illustration with the text ‘AI as Design Material’ surrounded by icons of a saw cutting wood, a mid-century modern chair, a computer chip, and a brain with circuit lines, on an orange background.

AI as Design Material

From Plywood to Prompts: The Evolution of Material Thinking in Design. Design has always evolved hand-in-hand with material innovation — whether shaping wood, steel, fiberglass, or pixels. In 1940, at the Cranbrook Academy of Art, Charles Eames and his friend Eero Saarinen collaborated on MoMA’s Orga…

linkedin.com

Sarah Gibbons and Evan Sunwall from NN/g:

The rise of AI tools doesn’t mean becoming a “unicorn” who can do everything perfectly. Specialization will remain valuable in our field: there will still be dedicated researchers, content strategists, and designers.

However, AI is broadening the scope of what any individual can accomplish, regardless of their specific expertise.

What we’re seeing isn’t the elimination of specialization but rather an increased value placed on expanding the top of a professional’s “expertise T.”

This reinforces what I talked about in a previous essay, “T-shaped skills [will become] increasingly valuable—depth in one area with breadth across others.”

They go on to say:

We believe these broad skills will coalesce into experience designer and architect roles: people who direct AI-supported design tasks to craft experiences for humans and AI agents alike, while ensuring that the resulting work reflects well-researched, strategic thinking.

In other words, curation of the work that AI does.

They also make the point that designers need to be strategic, i.e., focus on the why:

This evolution means that the unique value we bring as UX professionals is shifting decidedly toward strategic thinking and leadership. While AI can execute tasks, it cannot independently understand the complex human and organizational contexts in which our work exists.

Finally, Gibbons and Sunwall end with some solid advice:

To adapt to this shift toward generalist skills, UX professionals should focus on 4 key areas:

  • Developing a learning mindset
  • Becoming fluent in AI collaboration
  • Focusing on transferable skills
  • Expanding into adjacent fields

I appreciate the learning mindset bit, since that’s how I’m wired. I also believe that collaborating with AI is the way to go, rather than seeing it as a replacement or a threat.


The Return of the UX Generalist

AI advances make UX generalists valuable, reversing the trend toward specialization. Understanding multiple disciplines is increasingly important.

nngroup.com
Closeup of a man with glasses, with code being reflected in the glasses

From Craft to Curation: Design Leadership in the Age of AI

In a recent podcast with partners at startup incubator Y Combinator, Jared Friedman, citing statistics from a survey of their current batch of founders, says, “[The] crazy thing is one quarter of the founders said that more than 95% of their code base was AI generated, which is like an insane statistic. And it’s not like we funded a bunch of non-technical founders. Like every one of these people is highly technical, completely capable of building their own product from scratch a year ago…”

A comment they shared from founder Leo Paz reads, “I think the role of Software Engineer will transition to Product Engineer. Human taste is now more important than ever as codegen tools make everyone a 10x engineer.”

Still from a YouTube video that shows a quote from Leo Paz

While vibe coding—the new term coined by Andrej Karpathy about coding by directing AI—is about leveraging AI for programming, it’s a window into what will happen to the software development lifecycle as a whole and how all the disciplines, including product management and design will be affected.

A skill inversion trend is happening. Being great at execution is becoming less valuable when AI tools can generate deliverables in seconds. Instead, our value as product professionals is shifting from mastering tools like Figma or languages like JavaScript to strategic direction. We’re moving from the how to the what and why; from craft to curation. As Leo Paz says, “human taste is now more important than ever.”

The Traditional Value Hierarchy

For the last 15–20 years, the industry has operated on a model of unified teams for software development. Product managers define requirements, manage the roadmap, and align stakeholders. Designers focus on the user interface, ensure visual appeal and usability, and prototype solutions. Engineers design the system architecture and then build the application via quality code.

For each of the core disciplines, execution was paramount. (Arguably, product management has always been more strategic, save for ticket writing.) Screens must be pixel-perfect and code must be efficient and bug-free.

The Forces Driving Inversion

Vibe Coding and Vibe Design

With new AI tools like Cursor and Lovable coming into the mix, the nature of implementation fundamentally changes. In Karpathy’s tweet about vibe coding, he says, “…I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.” He’s telling the LLM what he wants—his intent—and the AI delivers, with some cajoling. Jakob Nielsen picks up on this thread and applies it to vibe design. “Vibe design applies similar AI-assisted principles to UX design and user research, by focusing on high-level intent while delegating execution to AI.”

He goes on:

…vibe design emphasizes describing the desired feeling or outcome of a design, and letting AI propose the visual or interactive solutions​. Rather than manually drawing every element, a designer might say to an AI tool, “The interface feels a bit too formal; make it more playful and engaging,” and the AI could suggest color changes, typography tweaks, or animation accents to achieve that vibe. This is analogous to vibe coding’s natural language prompts, except the AI’s output is a design mockup or updated UI style instead of code.

This sounds very much like creative direction to me. It’s shaping the software. It’s using human taste to make it better.

Acceleration of Development Cycles

The founder of TrainLoop also says in the YC survey that his coding has sped up one-hundred-fold in the last six months. He says, “I’m no longer an engineer. I’m a product person.”

This means that experimentation is practically free. What’s the best way of creating a revenue forecasting tool? You can whip up three prototypes in about 10 minutes using Lovable and then get them in front of users. Of course, designers have always had the power to explore and create variations for an interface. But to have three functioning prototypes in 10 minutes? Impossible.

With this new-found coding superpower, the idea of bespoke, personal software is starting to take off. Non-coders like The New York Times’ Kevin Roose are using AI to create apps just for themselves, like an app that recommends what to pack his son for lunch based on the contents of his fridge. This is an evolution of the low-code/no-code movement of recent years. The gap between idea and reality is literally 10 minutes.

Democratization of Creation

Designer Tommy Geoco has a running series on his YouTube channel called “Build Wars” where he invites a couple of designers to battle head-to-head on the same assignment. In a livestream in late February, he and his cohosts had professional web designer Brett Williams square off against 19-year-old Lovable marketer Henrik Westerlund. Their assignment was to build a landing page for a robotics company in 45 minutes, and they would be judged on design quality, execution quality, interactive quality, and strategic approach.


Forty-five minutes to design and build a cohesive landing page is not enough time. Similar to TV cooking competitions, this artificial time constraint forced the two competitors to focus on what mattered and to use their time strategically. In the end, the professional designer won, but the commentators were impressed by how much a young marketer with little design experience could accomplish with AI tools in such a short time, suggesting a fundamental shift in how websites may be created in the future.

Cohost Tom Johnson suggested that small teams using AI tools will outcompete enterprises resistant to adopt them, “Teams that are pushing back on these new AI tools… get real… this is the way that things are going to go. You’re going to get destroyed by a team of 10 or five or one.”

The Maturation Cycle of Specialized Skills

“UX and UX people used to be special, but now we have become normal,” says Jakob Nielsen in a recent article about the decline of ROI from UX work. For enterprises, product or user experience design is now baseline. AI will dramatically increase the chances that young startups, too, will employ UX best practices.

Obviously, with AI, engineering is more accessible, but so are traditional product management processes. ChatGPT can write a pretty good PRD. Dovetail’s AI-powered insights supercharge customer discovery. And yes, why not use ChatGPT to write user stories and Jira tickets?

The New Value Hierarchy

From Technical Execution to Strategic Direction & Taste Curation

In the AI-augmented product development landscape, articulating vision and intent becomes significantly more valuable than implementation skills. While AI can generate better and better code and design assets, it can’t determine what is worth building or why.

Mike Krieger, cofounder of Instagram and now Chief Product Officer at Anthropic, identifies this change clearly. He believes the true bottleneck in product development is shifting to “alignment, deciding what to build, solving real user problems, and figuring out a cohesive product strategy.” These are all areas he describes as “very human problems” that we’re “at least three years away from models solving.”

This makes taste and judgment even more important. When everyone can generate decent, good-enough work via AI, having a strong point of view becomes a differentiator. To repeat Leo Paz, “Human taste is now more important than ever as codegen tools make everyone a 10x engineer.” The ability to recognize and curate quality outputs becomes as valuable as creating them manually.

This transformation manifests differently across disciplines but follows the same pattern:

  • Product managers shift from writing detailed requirements to articulating problems worth solving and recognizing valuable solutions
  • Designers transition from pixel-level execution to providing creative direction that guides AI-generated outputs
  • Engineers evolve from writing every line of code to focusing on architecture, quality standards, and system design

Each role maintains its core focus while delegating much of the execution to AI tools. The skill becomes knowing what to ask for rather than how to build it—a fundamental reorientation of professional value.

From Process Execution to User Understanding

In a scene from the film "Blade Runner," replicant Leon Kowalski can't quite understand how to respond to the situation about the incapacitated tortoise.

In a scene from the film Blade Runner, replicant Leon Kowalski can’t quite understand how to respond to the situation about the incapacitated tortoise.

While AI is great at summarizing mountains of text, it can’t yet replicate human empathy or understand nuanced user needs. The human ability to interpret context, detect unstated problems, and understand emotional responses remains irreplaceable.

Nielsen emphasizes this point when discussing vibe coding and design: “Building the right product remains a human responsibility, in terms of understanding user needs, prioritizing features, and crafting a great user experience.” Even as AI handles more implementation, the work of understanding what users need remains distinctly human.

Research methodologies are evolving to leverage AI’s capabilities while maintaining human insight:

  • AI tools can process and analyze massive amounts of user feedback
  • Platforms like Dovetail now offer AI-powered insights from user research
  • However, interpreting this data and identifying meaningful patterns still requires human judgment

The gap between what users say they want and what they actually need remains a space where human intuition and empathy create tremendous value. Those who excel at extracting these insights will become increasingly valuable as AI handles more of the execution.

From Specialized to Cross-Functional

The traditional boundaries between product disciplines are blurring as AI lowers the barriers between specialized areas of expertise. This transformation is enabling more fluid, cross-functional roles and changing how teams collaborate.

The aforementioned YC podcast highlights this evolution with Leo Paz’s observation that software engineers will become product engineers. The YC founders who are using AI-generated code are already reaping the benefits. They act more like product people and talk to more customers so they can understand them better and build better products.

Concrete examples of this cross-functionality are already emerging:

  • Designers can now generate functional prototypes without developer assistance using tools like Lovable
  • Product managers can create basic UI mockups to communicate their ideas more effectively
  • Engineers can make design adjustments directly rather than waiting for design handoffs

This doesn’t mean that all specialization disappears. As Diana Hu from YC notes:

Zero-to-one will be great for vibe coding where founders can ship features very quickly. But once they hit product market fit, they’re still going to have a lot of really hardcore systems engineering, where you need to get from the one to n and you need to hire very different kinds of people.

The result is a more nuanced specialization landscape. Early-stage products benefit from generalists who can work across domains with AI assistance. As products mature, deeper expertise remains valuable but is focused on different aspects: system architecture rather than implementation details, information architecture rather than UI production, product strategy rather than feature specification.

Team structures are evolving in response:

  • Smaller, more fluid teams with less rigid role definitions
  • T-shaped skills becoming increasingly valuable—depth in one area with breadth across others
  • New collaboration models replacing traditional waterfall handoffs
  • Emerging hybrid roles that combine traditionally separate domains

The most competitive teams will find the right balance between AI capabilities and human direction, creating new workflows that leverage both. As Johnson warned in the Build Wars competition, “Teams that are pushing back on these new AI tools, get real! This is the way that things are going to go. You’re going to get destroyed by a team of 10 or five or one.”

The ability to adapt across domains is becoming a meta-skill in itself. Those who can navigate multiple disciplines while maintaining a consistent vision will thrive in this new environment where execution is increasingly delegated to artificial intelligence.

Thriving in the Inverted Landscape

The future is already here. AI is fundamentally inverting the skill hierarchy in product development, creating opportunities for those willing to adapt.

Product professionals who succeed in this new landscape will be those who embrace this inversion rather than resist it. This means focusing less on execution mechanics and more on the strategic and human elements that AI cannot replicate: vision, judgment, and taste.

For product managers, double down on developing the abilities to extract profound insights from user conversations and articulate clear, compelling problem statements. Your value will increasingly come from knowing which problems are worth solving rather than specifying how to solve them. AI also can’t align stakeholders and prioritize the work.

For designers, invest in strengthening your design direction skills. The best designers will evolve from skilled craftspeople to visionaries who can guide AI toward creating experiences that resonate emotionally with users. Develop your critical eye and the language to articulate what makes a design succeed or fail. Remember that design has always been about the why.

For engineers, emphasize systems thinking and architecture over implementation details. Your unique value will come from designing resilient, scalable systems and making critical technical decisions that AI cannot yet make autonomously.

Across all roles, three meta-skills will differentiate the exceptional from the merely competent:

  • Prompt engineering: The ability to effectively direct AI tools
  • Judgment and taste development: The discernment to recognize quality and make value-based decisions
  • Cross-functional fluency: The capacity to work effectively across traditional role boundaries

We’re seeing the biggest shift in how we build products since agile came along. Teams are getting smaller and more flexible. Specialized roles are blurring together. And product cycles that used to take months now take days.

There is a silver lining. We can finally focus on what actually matters: solving real problems for real people. By letting AI handle the grunt work, we can spend our time understanding users better and creating things that genuinely improve their lives.

Companies that get this shift will win big. Those that reorganize around these new realities first will pull ahead. But don’t wait too long—as Nielsen points out, this “land grab” won’t last forever. Soon enough, everyone will be working this way.

The future belongs to people who can set the vision and direct AI to make it happen, not those hanging onto skills that AI is rapidly taking over. Now’s the time to level up how you think about products, not just how you build them. In this new world, your strategic thinking and taste matter more than your execution skills.

Surreal scene of a robotic chicken standing in the center of a dimly lit living room with retro furnishings, including leather couches and an old CRT television emitting a bright blue glow.

Chickens to Chatbots: Web Design’s Next Evolution

In the early-to-mid 2000s, every designer I knew wanted to be featured on the FWA, a showcase for cutting-edge web design. While many of the earlier sites were Flash-based, it’s also where I discovered the first uses of parallax, Paper.js, and Three.js. Back then, websites were meant to be explored and their interfaces discovered.

Screenshot of The FWA website from 2009 displaying a dense grid of creative web design thumbnails.

A grid of winners from The FWA in 2009. Source: Rob Ford.

One of my favorite sites of that era was Burger King’s Subservient Chicken, where users could type free text into a chat box to command a man dressed in a chicken suit. In a full circle moment that perfectly captures where we are today, we now type commands into chat boxes to tell AI what to do.

Screenshot of the early 2000s Burger King Subservient Chicken website, showing a person in a chicken costume in a living room with a command input box.

The Wild West mentality of web design meant designers and creative technologists were free to make things look cool. Agencies like R/GA, Big Spaceship, AKQA, Razorfish, and CP+B all won numerous awards for clients like Nike, BMW, and Burger King. But as with all frontiers, civilization eventually arrives with its rules and constraints.

The Robots Are Looking


Last week, Sam Altman, the CEO of OpenAI, and a couple of others from the company demonstrated Operator, their AI agent. You’ll see them go through a happy path and have Operator book a reservation on OpenTable. The way it works is that the AI agent is reading a screenshot of the page and deciding how to interact with the UI. (Reminds me of the promise of the Rabbit R1.)

Let me repeat: the AI is interpreting UI by looking at it. Inputs need to look like inputs. Buttons need to look like buttons. Links need to look like links and be obvious.

In recent years, there’s been a push in the web dev community for accessibility. Complying with WCAG standards has become a positive trend in how websites get built. Now we know the unforeseen secondary effect: it unlocks AI browsing of sites. If links are underlined and form fields are self-evident, an agent like Operator can interpret where to click and where to enter data.
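Here is a small sketch of what that means in markup terms (TypeScript/React, with hypothetical component names): the first control is opaque to screen readers and AI agents alike, while the second is self-describing:

```tsx
import React from "react";

function submit(): void {
  console.log("submitted");
}

// Visually a button, but a div exposes no role or semantics for a screen
// reader, or for an agent deciding where to click.
export function OpaqueControl() {
  return (
    <div className="btn" onClick={submit}>
      Go
    </div>
  );
}

// Semantic elements are self-describing: a labeled input and a real
// submit button tell both assistive tech and AI agents what they do.
export function SelfDescribingForm() {
  return (
    <form
      onSubmit={(e) => {
        e.preventDefault();
        submit();
      }}
    >
      <label htmlFor="date">Reservation date</label>
      <input id="date" type="date" required />
      <button type="submit">Book a table</button>
    </form>
  );
}
```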

(To be honest, I’m surprised they’re using screenshots instead of interpreting the HTML as automated testing software would.)

The Economics of Change

Since Perplexity and Arc Search came onto the scene last year, the web’s economic foundation has started to shift. For the past 30 years, we’ve built a networked human knowledge store that’s always been designed for humans to consume. Sure, marketers and website owners got smart and figured out how to game the system to rank higher on Google. But ultimately, ranking higher led to more clicks and traffic to your website.

But the digerati are worried. Casey Newton of Platformer, writing about web journalism (emphasis mine):

The death of digital media has many causes, including the ineptitude of its funders and managers. But today I want to talk about another potential rifle on the firing squad: generative artificial intelligence, which in its capacity to strip-mine the web and repurpose it as an input for search engines threatens to remove one of the few pillars of revenue remaining for publishers.

Elizabeth Lopatto, writing for The Verge points out:

That means that Perplexity is basically a rent-seeking middleman on high-quality sources. The value proposition on search, originally, was that by scraping the work done by journalists and others, Google’s results sent traffic to those sources. But by providing an answer, rather than pointing people to click through to a primary source, these so-called “answer engines” starve the primary source of ad revenue — keeping that revenue for themselves.

Their point is that the fundamental symbiotic economic relationship between search engines and original content websites is changing. Instead of sending traffic to websites, search engines and AI answer engines are scraping the content directly and serving it within their own platforms.

Christopher Butler captures this broader shift in his essay “Who is the internet for?”:

Old-school SEO had a fairly balanced value proposition: Google was really good at giving people sources for the information they need and benefitted by running advertising on websites. Websites benefitted by getting attention delivered to them by Google. In a “clickless search” scenario, though, the scale tips considerably.

This isn’t just about news organizations—it’s about the fundamental relationship between websites, search engines, and users.

The Designer’s Dilemma

As the web is increasingly consumed not by humans but by AI robots, should we as designers continue to care what websites look like? Or, put another way, should we begin optimizing websites for the bots?

The art of search engine optimization, or SEO, was already pushing us in that direction. It turned personality-driven copywriting into “content” with keyword density and headings for the Google machine rather than for poetic organization. But with GPTbot slurping up our websites, should we be more straightforward in our visual designs? Should we add more copy?

Not Dead Yet

It’s still early to know if AI optimization (AIO?) will become a real thing. Changes in consumer behavior happen over years, not months. As of November 2024, ChatGPT is eighth on the list of the most visited websites globally, ranked by monthly traffic. Google is first with 291 times ChatGPT’s traffic.

Table ranking the top 10 most visited websites with data on visits, pages per visit, and bounce rate.

Top global websites by monthly users as of November 2024. Source: SEMRush.

Interestingly, as Google rolled out its AI overview for many of its search results, the sites cited by Gemini do see a high clickthrough rate, essentially matching the number one organic spot. It turns out that nearly 40% of us want more details than what the answer engine tells us. That’s a good thing.

Table showing click-through rates (CTR) for various Google SERP features with labeled examples: Snippet, AI Overview, #1 Organic Result, and Ad Result.

Clickthrough rates by entities on the Google search results page. Source: FirstPageSage, January 2025.

Finding the Sweet Spot

There’s a fear that AI answer engines and agentic AI will be the death of creative web design. But what if we’re looking at this all wrong? What if this evolution presents an interesting creative challenge instead?

Just as we once pushed the boundaries of Flash and JavaScript to create award-winning experiences for FWA, designers will need to find innovative ways to work within new constraints. The fact that AI agents like Operator need obvious buttons and clear navigation isn’t necessarily a death sentence for creativity—it’s just a new set of constraints to work with. After all, some of the most creative periods in web design came from working within technical limitations. (Remember when we did layouts using tables?!)

The accessibility movement has already pushed us to think about making websites more structured and navigable. The rise of AI agents is adding another dimension to this evolution, pushing us to find that sweet spot between machine efficiency and human delight.

From the Subservient Chicken to ChatGPT, from Flash microsites to AI-readable interfaces, web design continues to evolve. The challenge now isn’t just making sites that look cool or rank well—it’s creating experiences that serve both human visitors and their AI assistants effectively. Maybe that’s not such a bad thing after all.

I love this essay from Baldur Bjarnason, maybe because his stream of consciousness style is so similar to my own. He compares the rapidly changing economics of web and software development to the film, TV, and publishing industries.

Before we get to web dev, let’s look at the film industry, as disrupted by streaming.

Like, Crazy Rich Asians made a ton of money in 2018. Old Hollywood would have churned out at least two sequels by now and it would have inspired at least a couple of imitator films. But if they ever do a sequel it’s now going to be at least seven or even eight years after the fact. That means that, in terms of the cultural zeitgeist, they are effectively starting from scratch and the movie is unlikely to succeed.

He’s not wrong.

Every Predator movie after the first has underperformed, yet they keep making more of them. Completed movies are shelved for tax credits. Entire shows are disappeared [from] streamers and not made available anywhere to save money on residuals, which does not make any sense because the economics of Blu-Ray are still quite good even with lower overall sales and distribution than DVD. If you have a completed series or movie, with existing 4K masters, then you’re unlikely to lose money on a Blu-Ray.

I’ll quibble with him here. Shows and movies disappear from streamers because there’s a finite pot of money from subscriber revenue. So removing content will save them money. Blu-Ray is more sustainable because it’s an additional purchase.

OK, let’s get back to web dev.

He points out that similar to the film and other creative industries, developers fill their spare time with passion projects. But their day jobs are with tech companies and essentially subsidize their side projects.

And now, both the creative industries proper and tech companies have decided that, no, they probably don’t need that many of the “grunts” on the ground doing the actual work. They can use “AI” at a much lower cost because the output of the “AI” is not that much worse than the incredibly shitty degraded products they’ve been destroying their industries with over the past decade or so.

Bjarnason ends with seven suggestions for those in the industry. I’ll just quote one:

Don’t get tied to a single platform for distribution or promotion. Every use of a silo should push those interested to a venue you control such as a newsletter or website.

In other words, whatever you do, own your audience. Don’t farm that out to a platform like X/Twitter, Threads, or TikTok.

Of course, there are a lot of parallels to be drawn between what’s happening in the development and software engineering industries to what’s happening in design.

The web is a creative industry and is facing the same decline and shattered economics as film, TV, or publishing

Web dev at the end of the world, from Hveragerði, Iceland

baldurbjarnason.com
A stylized digital illustration of a person reclining in an Eames lounge chair and ottoman, rendered in a neon-noir style with deep blues and bright coral red accents. The person is shown in profile, wearing glasses and holding what appears to be a device or notebook. The scene includes abstract geometric lines cutting across the composition and a potted plant in the background. The lighting creates dramatic shadows and highlights, giving the illustration a modern, cyberpunk aesthetic.

Design’s Purpose Remains Constant

Fabricio Teixeira and Caio Braga, in their annual The State of UX report:

Despite all the transformations we’re seeing, one thing we know for sure: Design (the craft, the discipline, the science) is not going anywhere. While Design only became a more official profession in the 19th century, the study of how craft can be applied to improve business dates back to the early 1800s. Since then, only one thing has remained constant: how Design is done is completely different decade after decade. The change we’re discussing here is not a revolution, just an evolution. It’s simply a change in how many roles will be needed and what they will entail. “Digital systems, not people, will do much of the craft of (screen-level) interaction design.”

Scary words for the UX design profession as it stares down the coming onslaught of AI. Ours isn’t the first industry to face this—copywriters, illustrators, and stock photographers have already seen their respective crafts disrupted. All of these creatives have had to pivot quickly. And so will we.

Teixeira and Braga remind us that “Design is not going anywhere,” and that “how Design is done is completely different decade after decade.”

UX Is a Relatively Young Discipline

If you think about it, the UX design profession has already evolved significantly. When I started in the industry as a graphic designer in the early 1990s, web design wasn’t a thing, much less user experience design. I met my first UX design coworker at marchFIRST, when Chris Noessel and I collaborated on Sega.com. Chris had studied at the influential Interaction Design Institute Ivrea in Italy. If I recall correctly, Chris’s title was information architect, as “UX designer” wasn’t a popular title yet. Regardless, I marveled at how Chris used card sorting with Post-it notes to determine the information architecture of the website. And together we came up with the concept that the website itself would be a game, obvious only to visitors who paid attention. (Alas, that part of the site was never built, as we simply ran out of time. Oh, the dot-com days were fun.)

Screenshot of a retro SEGA website featuring a futuristic female character in orange, a dropdown menu of games like “Sonic Adventure” and “Soul Calibur,” and stylized interface elements with bold fonts and blue tones.

“User experience” was coined by Don Norman in the mid-1990s. When he joined Apple in 1993, he settled on the title of “user experience architect.” In an email interview with Peter Merholz in 1998, Norman said:

I invented the term because I thought human interface and usability were too narrow. I wanted to cover all aspects of the person’s experience with the system including industrial design, graphics, the interface, the physical interaction and the manual. Since then the term has spread widely, so much so that it is starting to lose its meaning.

As the thirst for all things digital grew, design rose to meet the challenge. Design schools started to add interaction design to their curricula, and lots of younger graphic designers were adapting and working on websites. We used the tools we knew—Adobe Illustrator and Photoshop—and added Macromedia Director and Flash as projects allowed.

Director was the tool of choice for those making CD-ROMs in San Francisco’s Multimedia Gulch in the early 1990s. It was an easy transition for designers and developers when the web arrived just a few years later in the dot-com boom.

In a short span of twenty years, designers added many mediums to their growing list: CD-ROMs, websites, WAP sites, responsive websites, mobile apps, tablet apps, web apps, and AR/VR experiences.

Designers have had to understand the limitations of each medium, pick up craft skills, and learn best practices. But one thing, I believe, has remained constant for good designers: they know how to connect businesses with their audiences. They’re the translation layer, if you will. (Notice how I have not said how to make things look good.)

From Concept to Product Strategy

Concept. Back then, that’s how I referred to creative strategy. It was drilled into me at design school and in my first job as a designer. Sega.com was a game in and of itself to celebrate gamers and gaming. Pixar.com was a storybook about how Pixar made its movies, emphasizing its storytelling prowess. The Mitsubishi Lancer microsite leaned on the Lancer’s history as a rally car, reminding visitors of its racing heritage. These were all ideas that emotionally connected the brand with the consumer, leaning on what the audience knew to be true and deepening it.

Screenshot of Pixar’s early 2000s website featuring a character from A Bug’s Life, with navigation links, a stylized serif font, and descriptive text about the film’s colorful insect characters.

When I designed Pixar.com, I purposefully made the site linear, like a storybook.

Concept was also the currency of creative departments at ad agencies. The classic copywriter and art director pairing came up with different ideas for ads. These ideas weren’t just executions of TV commercials. Instead, they were the messages the brands wanted to convey, delivered in a way that consumers would be open to.

I would argue that concept is also product strategy. It’s the point of view that drives a product—whether it’s a marketing website, a cryptocurrency mobile app, or a vertical SaaS web app. Great product strategy connects the business with the user and how the product can enrich their lives. Enrichment can come in many forms. It can be as simple as saving users a few minutes of tedium, or transforming an analog process into a digital one, therefore unlocking new possibilities.

UI Is Already a Commodity

In more recent years, with the rise of UI kits, pre-made templates, and design systems like Material UI, the visual design of user interfaces has become a commodity. I call this moment “peak UI”—when fundamental user interface patterns have reached ubiquity, and no new patterns will or should be invented. Users take what they know from one interface and apply that knowledge to new ones. To change that is to break Jakob’s Law and reduce usability. Of course, when new modalities like voice and AI came on the scene, we needed to invent new user interface patterns, but those are few and far between.

And just as AI-powered coding assistants generate code based on human-written code, the leading UI design program, Figma, is training its AI on users’ files. Pretty soon, designers will be able to generate UIs via a prompt. And those generated UIs will be good enough because they’ll follow the patterns users are already familiar with. (Combined with an in-house design system, the feature will be even more useful.)

In one sense, this alleviates having to design yet another select input, opening up time for more strategic—and, IMHO, more fun—challenges.

Three Minds

In today’s technology companies, the squad (aka Spotify) model gives every squad a three-headed leadership team consisting of a product manager, a designer, and an engineering or tech lead. This cross-functional leadership team is a direct descendant of the copywriter-art director creative team pioneered by Bill Bernbach in 1960, sparking the so-called “creative revolution” in advertising.

Three vintage ads by Doyle Dane Bernbach (DDB): Left, a Native American man smiling with a rye sandwich, captioned “You don’t have to be Jewish to love Levy’s”; center, a black-and-white Volkswagen Beetle ad labeled “Lemon.”; right, a smiling woman in a uniform with the headline “Avis can’t afford not to be nice.”

Ads by DDB during the creative revolution of the 1960s. The firm paired copywriters and art directors to create ads centered on a single idea.

When I was at Organic in 2005, we debuted a mantra called Three Minds.

Great advertising was often created in “pairs”—a copywriter and an art director. In the digital world, the creation process is more complex. Strategists, designers, information architects, media specialists, and technologists must come together to create great experiences. Quite simply, it takes ThreeMinds.

At its most simplistic, PMs own the why; designers own the what; and engineers own the how. But the creative act is a lot messier than that, and the lines aren’t as firm in practice.

The reality is there’s blurriness between each discipline’s area of responsibility. I asked my friend, Byrne Reese, Group Product Manager at RingCentral, about that fuzziness between PMs and designers, and here’s what he had to say:

I have a bias towards letting a PM drive product strategy. But a good product designer will have a strong point of view here, because they will also see the big picture alongside the PM. It is hard for them not to because for them to do their role well, they need to do competitive analysis, they need to talk to customers, they need to understand the market. Given that, they can’t help it but have a point of view on product strategy.

Shawn Smith, a product management and UX consultant, sees product managers owning a bit more of everything, but ultimately reinforces the point that it’s messy:

Product managers cover some of the why (why x is a relevant problem at all, why it’s a priority, etc), often own the what (what’s the solution we plan to pursue), and engage with designers and engineers on the how (how the solution will be built and how it will ultimately manifest).

Rise of the Product Designer

In the last few years, companies have switched from hiring UX designers to hiring product designers.

Line graph showing Google search interest in the U.S. for “ux design” (blue) and “product design” (red) from January 2019 to 2024. Interest in “ux design” peaks in early 2022 before declining, while “product design” fluctuates and overtakes “ux design” in late 2023. Annotations mark the start and end of a zero interest-rate period and a change in Google’s data collection.

The Google Trends data here isn’t conclusive, but you can see a slow decline for “UX design” starting in January 2023 and a steady climb for “product design” since 2021. In September 2024, “product design” overtook “UX design.” (The jump at the start of 2022 is due to a change in Google’s data collection system, so look at the relative comparison between the two lines.)

Zooming out, UX design and product design had been neck and neck. But once the zero interest-rate period (ZIRP) hit and tech companies were flush with cash, there was a jump in UX design. My theory is that companies could afford to have designers focus on their area of expertise—optimizing user interactions. Around March 2022, as ZIRP was coming to an end and the tech layoffs started, UX design declined while product design rose.

Screenshot of LinkedIn job search results from December 27, 2024, showing 802 results for “UX designer” and 1,354 results for “product designer” in the United States.

Look at the jobs posted on LinkedIn at the moment, and you’ll find nearly 70% more product designer job postings than ones for UX designer—1,354 versus 802.

As Christopher K. Wong wrote so succinctly, product design is overtaking UX. Companies are demanding more from their designers.

Design Has Always Been About the Why

Steve Jobs famously once said, “Design is not just what it looks like and feels like. Design is how it works.”

Through my schooling and early experiences in the field, I’ve always known this and practiced my craft this way. Being a product designer suits me. (Well, being a designer suits me too, but that’s another post.)

Product design requires us designers to consider more than just the interactions on the screen or the right flows. I wrote earlier that—at its most simplistic—designers own the what. But product designers must also consider why we’re building whatever we’re building.

Vintage advertisement for the Eames Lounge Chair. It shows a man dressed in a suit and tie, reclining on the chair and reading a newspaper.

This dual focus on why and what isn’t new to design. When Charles and Ray Eames created their famous Eames Lounge Chair and Ottoman in 1956, they aimed to design a chair that would offer its user respite from the “strains of modern living.” Just a couple of years later, Dieter Rams at Braun would debut his T3 pocket radio, sparking the transition of music from a group activity to a personal one. The Sony Walkman and Apple iPod are its clear direct descendants.

The Eameses and Rams showed us what great designers have always known: our job isn’t just about the surface, or even about how something works. It’s about asking the right questions about why products should exist and how they might enrich people’s lives.

As AI reshapes our profession—just as CD-ROMs, websites, and mobile apps did before—this ability to think strategically about the why becomes even more critical. The tools and techniques will keep changing, just as they have since my days in San Francisco’s Multimedia Gulch in the 1990s. But our core mission stays the same: we’re still that translation layer, creating meaningful connections between businesses and their audiences. That’s what design has always been about, and that’s what it will continue to be.

Griffin AI logo

How I Built and Launched an AI-Powered App

I’ve always been a maker at heart—someone who loves to bring ideas to life. When AI exploded, I saw a chance to create something new and meaningful for solo designers. But making Griffin AI was only half the battle…

Birth of an Idea

About a year ago, a few months after GPT-4 was released and took the world by storm, I worked on several AI features at Convex. One was a straightforward email drafting feature, but with a twist. We incorporated details we knew about the sender—such as their role and offering—and about the recipient, including their role and their company’s industry. To accomplish this, I combined some prompt engineering with data from our data providers, shaping the responses we got from GPT-4.

Playing with this new technology was incredibly fun and eye-opening. And that gave me an idea. Foundational large language models (LLMs) aren’t great yet at factual data retrieval and analysis. But they’re pretty decent at creativity. No, GPT, Claude, or Gemini couldn’t write an Oscar-winning screenplay or win the Pulitzer Prize for poetry, but they’re not bad for starter ideas that are good enough for specific use cases. Hold that thought.

I belong to a Facebook group for WordPress developers and designers. From the posts in the group, I could see most members were solopreneurs, with very few having worked at a large agency. From my time at Razorfish, Organic, Rosetta, and others, branding projects always included brand strategy, usually weeks- or months-long endeavors led by brilliant brand or digital strategists. These brand insights and positioning always led to better work and transformed our relationship with the client into a partnership.

So, I saw an opportunity. Harness the power of gen AI to create brand strategies for this target audience. In my mind, this could allow these solo developers and designers to charge a little more money, give their customers more value, and, most of all, act like true partners.

Validating the Problem Space

The prevailing wisdom is to leverage Facebook groups and Reddit forums to perform cheap—free—market research. However, the reality is that good online communities ban this sort of activity. So, even though I had a captive audience, I couldn’t outright ask. The next best thing for me was paid research. I found Pollfish, an online survey platform that could assemble a panel of 100 web developers who own their own businesses. According to the data, there was overwhelming interest in a tool like this.*

Screenshot of two survey questions showing 79% of respondents saying they would “Definitely buy” or “Probably buy” Griffin AI, and 58% saying they need the app a lot.

Notice the asterisk. We’ll come back to that later on.

I also asked some of my designer and strategist friends who work in branding. They all agreed that there was likely a market for this.

Testing the Theory

I had a vague sense of what the application would be. The cool thing about ChatGPT is that you can bounce ideas back and forth with it, almost as a co-creation partner. But you have to know what to ask, which is why prompt engineering emerged as a skill.

I first tested GPT-3.5’s general knowledge. Did it know about brand strategy? Yes. What about specific books on brand strategy, like Designing Brand Identity by Alina Wheeler? Yes. OK, so the knowledge is in there. I just needed the right prompts to coax out good answers.

I developed a method whereby the prompt reminded GPT of how to come up with the answer and, of course, contained the input from the user about the specific brand.

Screenshot of prompt

Through trial and error and burning through a lot of OpenAI credits, I figured out a series of questions and prompts to produce a decent brand strategy document.
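To make that concrete, here’s a minimal sketch of how a prompt like this could be templated and sent to the model using the OpenAI Node SDK. The function, the strategy reminder, and the brandName/industry/audience inputs are illustrative stand-ins, not Griffin AI’s actual prompts.

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Hypothetical prompt template: remind the model how to derive the answer,
// then inject the user's own inputs about their brand.
async function draftPositioning(brandName: string, industry: string, audience: string) {
  const prompt = [
    "You are a senior brand strategist.",
    "A positioning statement names the target audience, the category,",
    "the key benefit, and the reason to believe.",
    `Write a positioning statement for ${brandName}, a ${industry} business`,
    `whose primary audience is ${audience}.`,
  ].join("\n");

  const response = await client.chat.completions.create({
    model: "gpt-4", // the model generation described in this post
    messages: [{ role: "user", content: prompt }],
  });

  return response.choices[0].message.content;
}
```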

I tested this flow with a variety of brands, including real ones I knew and fake ones I’d have GPT imagine.

Designing the MVP

The Core Product

Now that I had the conceptual flow, I had to develop a UI to solicit the answers from the user and have those answers inform subsequent prompts. Everything builds on itself.

I first tried an open chat, just like ChatGPT, but with specific questions. The only issue was that I couldn’t limit what the user wrote in the text box.

Early mockup of the chat UI for Griffin AI

AI Prompts as Design

Because the prompts were central to the product design, I decided to add them into my Figma file as part of the flow. In each prompt, I indicated where the user inputs would be injected. Also, most of the answers from the LLM needed to be stored for reuse in later parts of the flow.

Screenshot of app flow in Figma

AI prompts are indicated directly in the Figma file
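In code, that chaining might look like the sketch below: each step receives everything generated so far and stores its own output for later prompts. The BrandContext shape and its field names are my own illustrative assumptions.

```typescript
import OpenAI from "openai";

const client = new OpenAI();

// Hypothetical shape for the answers carried forward through the flow.
interface BrandContext {
  positioning?: string;
  audience?: string;
  brandVoice?: string;
}

// Each step injects the stored answers from earlier steps into its prompt,
// then adds its own output to the context for reuse downstream.
async function generateBrandVoice(ctx: BrandContext): Promise<BrandContext> {
  const prompt =
    `Based on this positioning:\n${ctx.positioning}\n\n` +
    `and this audience:\n${ctx.audience}\n\n` +
    `describe the brand's voice in three adjectives, each with a short rationale.`;

  const response = await client.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: prompt }],
  });

  return { ...ctx, brandVoice: response.choices[0].message.content ?? undefined };
}
```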

Living With Imperfect Design

Knowing that I wanted a freelance developer to help me bring my idea to life, I didn’t want to fuss too much about the app design. So, I settled on using an off-the-shelf design system called Flowbite. I just tweaked the colors and typography and lived with the components as-is.

Building the MVP

Building the app would be out of my depth. When GPT-3.5 first came out, I test-drove it for writing simple Python scripts. But it failed, and I couldn’t figure out a good workflow to get working code. So I gave up. (Of course, fast-forward to now, and gen AI for coding is much better!)

I posted a job on Upwork and interviewed four developers. I chose Geeks of Kolachi, a development agency out of Pakistan. I picked them because they were an agency—meaning they would be a team rather than an individual—and their process included oversight and QA, which I was familiar with from working at a tech company.

Working Proof-of-Concept in Six Weeks

In just six weeks, I had a working prototype that I could start testing with real users. My first beta testers were friends who graciously gave me feedback on the chat UI.

Through this early user testing, I found that I needed to change the UI. Users wanted more real estate for the generated content, and the free-response feedback text field was simply too open, as users didn’t know what to do next.

So I spent another few weekends redesigning the main chat UI, and then the development team needed another three or four weeks to refactor the interface.

Mockup of the revised chat UI

The revised UI gives more room for the main content and allows the user to make their own adjustments.

AI Slop?

As a creative practitioner, I was very sensitive to not developing a tool that would eliminate jobs. The fact is that the brand strategies GPT generated were OK; they were good enough. However, to create a real strategy, a lot more research is required. This would include interviewing prospects, customers, and internal stakeholders, studying the competition, and analyzing market trends.

Griffin AI was a shortcut to producing a brand strategy good enough for a small local or regional business. It was something the WordPress developer could use to inform their website design. However, these businesses would never be able to afford the services of a skilled agency strategist in addition to the logo or website work.

But the solo designer could charge a little extra for this branding exercise, or provide more value on top of their normal offering.

I spent a lot of time tweaking the prompts and the flow to produce more than decent brand strategies for the likes of Feline Friends Coffee House (cat cafe), WoofWagon Grooming (mobile pet wash), and Dice & Duels (board game store).

Beyond the Core Product

While the core product was good enough for an MVP, I wanted to figure out a valuable feature to justify monthly recurring revenue, aka a subscription. LLMs are pretty good at mimicking voice and tone if you give them enough direction. Therefore, I decided to include copywriting as a feature, but writing based on a brand voice created after the brand strategy has been developed. ChatGPT isn’t primed to write in a consistent voice, but it can with the right prompting and context.
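One way to do that, sketched below, is to pin the stored brand voice into the system message so every draft inherits the same tone. The function and prompt wording here are hypothetical.

```typescript
import OpenAI from "openai";

const client = new OpenAI();

// Hypothetical copywriting call: the brand voice generated earlier in the
// flow is pinned in the system message so every draft stays on-voice.
async function writeCopy(brandVoice: string, brief: string) {
  const response = await client.chat.completions.create({
    model: "gpt-4",
    messages: [
      {
        role: "system",
        content: `You are a copywriter. Always write in this brand voice:\n${brandVoice}`,
      },
      { role: "user", content: brief },
    ],
  });

  return response.choices[0].message.content;
}
```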

Screenshots of the Griffin AI marketing site

Beyond those two features, I also had to build ancillary app services like billing, administration, onboarding, tutorials, and help docs. I had to extend the branding and come up with a marketing website. All this ate up several more weeks.

Failure to Launch

They say the last 20% takes 80% of the time, or something like that. And it’s true. The stuff beyond the core features just took a long time to perfect. While the dev team was building and fixing bugs, I was on Reddit, trying to gather leads to check out the app in its beta state.

Griffin AI finally launched in mid-June. I made announcements on my social media accounts. Some friends congratulated me and even checked out the app a little. But my agency and tech company friends weren’t the target audience. No, my ideal customer was in that WordPress developers Facebook group where I couldn’t do any self-promotion.

Screenshot of the announcement on LinkedIn

I continued to talk about it on Reddit and everywhere else I could. But the app never gained traction. I wasn’t savvy enough to build momentum and launch on Product Hunt. The Summer Olympics in Paris happened. Football season started. The Dodgers won the World Series. And I got all of one sale.

When I told this customer that I was going to shut down the app, he replied, “I enjoyed using the app, and it helped me brief my client on a project I’m working on.” Yup, that was the idea! But not enough people knew about it or thought it was worthwhile to keep it going.

Lessons Learned

I’m shutting Griffin AI down, but I’m not too broken up about it. I learned a lot, and that’s all that matters. Call it paying tuition to the school of life.

When I perform a post-mortem on why it didn’t take off, I can point to a few things.

I’m a maker, not a seller.

I absolutely love making and building. And I think I’m not too bad at it. But I hate the actual process of marketing and selling. I believe that had I poured more time and money into getting the word out, I could have attracted more customers. Maybe.

Don’t rely on survey data.

Remember the asterisk? The Pollfish data that showed interest in a product like this? Well, I wonder if it was a good panel at all. In the verbatims, some comments didn’t sound like the respondents were US-based, business owners, or taking the survey seriously—comments like “i extremely love griffin al for many more research” and “this is a much-needed assistant for my work.” Instead of relying on survey data from a suspect panel, next time I’ll do more first-hand research before jumping in.

AI moves really fast.

AI has been a rocket ship this past year-and-a-half. Keeping up with the changes and new capabilities is brutal as a side hustle and as a non-engineer. While I thought there might be a market for a specialized AI tool like Griffin, I think people are satisfied enough with a horizontal app like ChatGPT. To break through, you’d have to do something very different. I think Cursor and Replit might be onto something.


I still like making things, and I’ll always be a tinkerer. But maybe next time, I’ll be a little more aware of my limitations and either push past them or find collaborators who can augment my skills.

Closeup of the MU/TH/UR 9000 computer screen from the movie Alien: Romulus

Re-Platforming with a Lot of Help From AI

I decided to re-platform my personal website, moving it from WordPress to React. It was spurred by a curiosity to learn a more modern tech stack like React, and by the drama that erupted in the WordPress community last month. While I doubt WordPress is going away anytime soon, I do think this rift opens the door for designers, developers, and clients to consider alternatives.

First off, I’m not a developer by any means. I’m a designer and understand technical things well, but I can’t code. When I was young, I wrote programs in BASIC and HyperCard. In the early days of content management systems, I built a version of my personal site using ExpressionEngine. I was always able to tweak CSS to style themes in WordPress. When Elementor came on the scene, I could finally build WP sites from scratch. Eventually, I graduated to other page builders like Oxygen and Bricks.

So, rebuilding my site in React wouldn’t be easy. I went through the React foundations tutorial by Next.js and their beginner full-stack course. But honestly, I just followed the steps and copied the code, barely understanding what was being done and not remembering any syntax. Then I stumbled upon Cursor, and a whole new world opened up.

Screenshot of the Cursor website, promoting it as “The AI Code Editor” designed to boost productivity. It features a “Download for Free” button, a 1-minute demo video, and a coding interface with AI-generated suggestions and chat assistance.

Cursor is an AI-powered code editor (IDE) like VS Code. In fact, it’s a fork of VS Code with AI chat bolted onto the side panel. You can ask it to generate and debug code for you. And it works! I was delighted when I asked it to create a light/dark mode toggle for my website. In seconds, it outputted code in the chat for three files. I would have to go into each code example and apply it to the correct file, but even that’s mostly automatic. I simply have to accept or reject the changes as the diff showed up in the editor. And I had dark mode on my site in less than a minute. I was giddy!
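For a sense of what that looks like, here’s a minimal reconstruction of such a toggle using React and Tailwind’s class-based dark mode (darkMode: "class" in the Tailwind config). This is my own sketch, not Cursor’s actual output.

```tsx
"use client";

import { useEffect, useState } from "react";

export default function ThemeToggle() {
  const [dark, setDark] = useState(false);

  // Tailwind's class strategy: dark: variants apply whenever
  // the `dark` class is present on the <html> element.
  useEffect(() => {
    document.documentElement.classList.toggle("dark", dark);
  }, [dark]);

  return (
    <button
      onClick={() => setDark((d) => !d)}
      className="rounded px-3 py-1 text-sm bg-gray-200 text-gray-900 dark:bg-gray-800 dark:text-gray-100"
    >
      {dark ? "Light mode" : "Dark mode"}
    </button>
  );
}
```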

To be clear, it still took about two weekends of work and a lot of trial and error to finish the project. But a non-coder like me, someone who still can’t really write JavaScript, would not have been able to re-platform my site onto a modern stack without the help of AI.

Here are some tips I learned along the way.

Plan the Project and Write a PRD

While I was watching some React and Next.js tutorials on YouTube, this video by Jason Zhou about 10xing your Cursor workflow came up. I didn’t watch the whole thing, but his first suggestion was to write a product requirements document, or PRD, which made a lot of sense. So that’s what I did. I wrote a document that spelled out the background (why), what I wanted the user experience to be, what the functionality should be, and which technologies to use. Not only did this help Cursor understand what it was building, but it also helped me define the functionality I wanted to achieve.

Screenshot of a product requirements document titled “Personal Website Rebuild,” outlining a plan to migrate the site rogerwong.me from WordPress to a modern stack using React, Next.js, and Tailwind CSS. It includes background context, required pages, and navigation elements for the new site.

A screenshot of my PRD

My personal website is a straightforward product compared to the Reddit sentiment analysis tool Jason was building, but having this document to refer back to as I was making the website was helpful and kept things organized.

Create the UI First

I’ve been designing websites since the 1990s, so I’m pretty old school. I knew I wanted to keep the same design as my WordPress site, but I still needed to design it in Figma. I put together a quick mockup of the homepage, which was good enough to jump into the code editor.

I know enough CSS to style elements however I want, but I don’t know any best practices. Thankfully, Tailwind CSS exists. I had heard about it from my engineering coworkers but never used it. I watched a quick tutorial from Lukas, who made it very easy to understand, and I was able to code the design pretty quickly.
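If you haven’t seen Tailwind before, the appeal is that the styling lives right in the markup as utility classes. Here’s a trivial, hypothetical example of the kind of component markup this approach produces:

```tsx
// Hypothetical post-card markup: spacing, type, and dark-mode styles are
// all expressed as Tailwind utility classes, with no separate CSS file.
export function PostCard({ title, excerpt }: { title: string; excerpt: string }) {
  return (
    <article className="mx-auto max-w-2xl py-6 border-b border-gray-200 dark:border-gray-700">
      <h2 className="text-xl font-semibold tracking-tight">{title}</h2>
      <p className="mt-2 text-gray-600 dark:text-gray-400">{excerpt}</p>
    </article>
  );
}
```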

Prime the AI

Once the design was in HTML and Tailwind, I felt ready to get Cursor started. In the editor, there’s a chat interface on the right side. You can include the current file, additional files, or the entire codebase for context for each chat. I fed it the PRD and told it to wait for further instructions. This gave Cursor an idea of what we were building.

Make It Dynamic

Then, I included the homepage file and told Cursor to make it dynamic according to the PRD. It generated the necessary code and, more importantly, explained its thought process and gave instructions for implementing the code, such as which files to create and which Next.js and React modules to add.

Screenshot of the AI coding assistant in the Cursor editor helping customize Tailwind CSS Typography plugin settings. The user reports issues with link and heading colors, especially in dark mode. The assistant suggests editing tailwind.config.ts and provides code snippets to fix styling.

A closeup of the Cursor chat showing code generation

The UI is well-considered. For each code generation box, Cursor shows the file it should be applied to and an Apply button. Clicking the Apply button will insert the code in the right place in the file, showing the new code in green and the code to be deleted in red. You can either reject or accept the new code.

Be Specific in Your Prompts

The more specific you can be, the better Cursor will work. As I built the functionality piece by piece, I found that the generated code would work better—be less error-prone—when I was specific about what I wanted.

When errors did occur, I would simply copy the error and paste it into the chat. Cursor would do its best to troubleshoot. Sometimes, it solved the problem on its first try. Other times, it would take several attempts. I would say Cursor generated perfect code the first time 80% of the time. The remainder took at least another attempt to catch the errors.

Know Best Practices

Screenshot of the Cursor AI code editor with a TypeScript file (page.tsx) open, showing a blog post index function. An AI chat panel on the right helps troubleshoot Tailwind CSS Typography plugin issues, providing a tailwind.config.ts code snippet to fix link and heading colors in dark mode.

Large language models today can’t quite plan. So, it’s essential to understand the big picture and keep that plan in mind. I had to specify the type of static site generator I wanted to build: in my case, one that builds blog posts from simple Markdown files. Other best practices, like SEO and accessibility, weren’t included automatically either; I had to have Cursor modify the working code to incorporate both.
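The Markdown approach boils down to a small loader that reads post files at build time. Below is a sketch of the common pattern, assuming the gray-matter package and a posts/ directory, which are typical conventions rather than necessarily my exact setup.

```typescript
import fs from "node:fs";
import path from "node:path";
import matter from "gray-matter"; // parses frontmatter out of Markdown files

const POSTS_DIR = path.join(process.cwd(), "posts"); // assumed directory name

// Read every .md file at build time and return its metadata plus body.
export function getAllPosts() {
  return fs
    .readdirSync(POSTS_DIR)
    .filter((file) => file.endsWith(".md"))
    .map((file) => {
      const raw = fs.readFileSync(path.join(POSTS_DIR, file), "utf8");
      const { data, content } = matter(raw); // frontmatter -> data, body -> content
      return {
        slug: file.replace(/\.md$/, ""),
        title: data.title as string,
        date: data.date as string,
        content,
      };
    })
    .sort((a, b) => (a.date < b.date ? 1 : -1)); // newest first
}
```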

Build Utility Scripts

Since I was migrating my posts and links from WordPress, a fair bit of conversion had to be done to get them into the new format, Markdown. I thought I would have to write my own WordPress plugin or something, but when I asked Cursor how to transfer my posts, it proposed the existing WordPress-to-Markdown script. That was 90% of the work!

I ended up using Cursor to write additional small scripts to add alt text to all the images and to check for broken ones. These utility scripts came in handy for processing 42 posts and 45 links in the linklog.
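To give a flavor of these utilities, here’s a sketch of a script that flags Markdown images with missing alt text or broken paths. It’s a reconstruction of the idea, not the actual script Cursor wrote.

```typescript
import fs from "node:fs";
import path from "node:path";

const POSTS_DIR = path.join(process.cwd(), "posts"); // assumed directory name
const IMAGE_PATTERN = /!\[(.*?)\]\((.*?)\)/g; // Markdown image syntax: ![alt](src)

// Walk every post and report images that are missing alt text,
// or that point at files that don't exist under /public.
for (const file of fs.readdirSync(POSTS_DIR).filter((f) => f.endsWith(".md"))) {
  const text = fs.readFileSync(path.join(POSTS_DIR, file), "utf8");
  for (const match of text.matchAll(IMAGE_PATTERN)) {
    const [, alt, src] = match;
    if (!alt.trim()) console.log(`${file}: missing alt text for ${src}`);
    if (src.startsWith("/") && !fs.existsSync(path.join("public", src))) {
      console.log(`${file}: broken image ${src}`);
    }
  }
}
```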

The Takeaway: Developers’ Jobs Are Still Safe

I don’t believe AI-powered coding tools like Cursor, GitHub Copilot, and Replit will replace developers in the near future. However, I do think these tools have a place in three prominent use cases: learning, hobbying, and acceleration.

For students and those learning how to code, Cursor’s plain-language summaries explaining its code generation are illuminating. For hobbyists who need a little utilitarian script every once in a while, it’s also great. It’s similar to 3D printing, where you can print out a part to fix the occasional broken something.

Two-panel graphic promoting GitHub Copilot. The left panel states, “Proven to increase developer productivity and accelerate the pace of software development,” with a link to “Read the research.” The right panel highlights “55% Faster coding” with a lightning bolt icon on a dark gradient background.

For professional engineers, I believe this technology can help them do more faster. In fact, that’s how GitHub positions Copilot: “code 55% faster” by using their product. Imagine planning out an app, having the AI draft code for you, and then you can fine-tune it. Or have it debug for you. This reduces a lot of the busy work.

I’m not sure how great the resulting code is. All I know is that it’s working and creating the functionality I want. It might be similar to early versions of Macromedia (now Adobe) Dreamweaver, where the webpage looked good, but when you examined the HTML more closely, it was bloated and inefficient. Eventually, Dreamweaver’s code got better. Similarly, WordPress page builders like Elementor and Bricks Builder generated cleaner code in the end.

Tools like Cursor, Midjourney, and ChatGPT are enablers of ideas. When wielded well, they can help you do some pretty cool things. As a fun add-on to my site, I designed some dingbats for the bottom of every blog post—mainly because of my love for 1960s op art and ’70s corporate logos. See what happens if you click them. Enjoy.
