
With their annual user conference, Config, coming up in San Francisco in less than two weeks, Figma released their 2025 AI Report today.

Andrew Hogan, Insights lead:

While developers and designers alike recognize the importance of integrating AI into their workflows, and overall adoption of AI tools has increased, there’s a disconnect in sentiment around quality and efficacy between the two groups.

Developers report higher satisfaction with AI tools (82%) and feel AI improves the quality of their work (68%). Meanwhile, designers show more modest numbers—69% satisfaction rate and 54% reporting quality improvement—suggesting this group’s enthusiasm lags behind their developer counterparts.

This divide stems from how AI can support existing work and how it’s being used: 59% of developers use AI for core development responsibilities like code generation, whereas only 31% of designers use AI in core design work like asset generation. It’s also likely that AI’s ability to generate code is coming into play—68% of developers say they use prompts to generate code, and 82% say they’re satisfied with the output. Simply put, developers are more widely finding AI adoption useful in their day-to-day work, while designers are still working to determine how and if these tools best fit into their processes.

I can understand that. Code is behind the scenes. If it’s not perfect, no one will really know. But design is user-facing, so quality is more important.

Looking into the future:

Though AI’s impact on efficiency is clear, there are still questions about how to use AI to make people better at their role. This disparity between efficiency and quality is an ongoing battle for users and creators alike.

Looking forward, predictions about the impact of AI on work are moderate—AI’s expected impact for the coming year isn’t much higher than its expected impact last year.

In the full report, Hogan goes into detail:

Only 27% predict AI will have a significant impact on their company goals in the next year (compared to 23% in 2024), with 15% saying it will be transformational (unchanged year-over-year).

The survey was taken in January with a panel of 2,500 users. Things in AI change in weeks. I’m surprised at the number, and part of me believes that a lot of designers are burying their heads in the sand. AI is coming. We should be agile and adapt.


Figma's 2025 AI report: Perspectives From Designers and Developers

Figma’s AI report tells us how designers and developers are navigating the changing landscape.

figma.com

Elliot Vredenburg, writing for Fast Company:

Which is why creative direction matters more now than ever. If designers are no longer the makers, they must become the orchestrators. This isn’t without precedent. Rick Rubin doesn’t read music or play instruments. Virgil Abloh was more interested in recontextualizing than inventing. Their value lies not in original execution but in framing, curation, and translation. The same is true now for brand designers. Creative direction is about synthesizing abstract ideas into aesthetic systems—shaping meaning through how things feel, not just how they look.


Why taste matters now more than ever

In the age of AI, design is less about making and more about meaning.

fastcompany.com

You might not know his name—I sure didn’t—but you’ll recognize his illustration style, which came to embody the look of the 1960s and ’70s. Robert E. McGinnis has died at the age of 99. The New York Times has an obituary:

Robert E. McGinnis, an illustrator whose lusty, photorealistic artwork of curvaceous women adorned more than 1,200 pulp paperbacks, as well as classic movie posters for “Breakfast at Tiffany’s,” featuring Audrey Hepburn with a cigarette holder, and James Bond adventures including “Thunderball,” died on March 10 at his home in Greenwich, Conn. He was 99.

Mr. McGinnis’s female figures from the 1960s and ’70s flaunted a bold sexuality, often in a state of semi-undress, whether on the covers of detective novels by John D. MacDonald or on posters for movies like “Barbarella” (1968), with a bikini-clad Jane Fonda, or Bond films starring Sean Connery and Roger Moore.

Illustrated movie poster for the James Bond film "The Man with the Golden Gun," featuring Roger Moore as Bond, surrounded by action scenes, women in bikinis, explosions, and a large golden gun in the foreground.


Robert E. McGinnis, Whose Lusty Illustrations Defined an Era, Dies at 99

(Gift Article) In the 1960s and ’70s, his leggy femmes fatales beckoned from paperback covers and posters for movies like “Breakfast at Tiffany’s” and “Thunderball.”

nytimes.com

While Josh W. Comeau writes for his developer audience, a lot of what he says can be applied to design. Referring to a recent Forbes article:

AI may be generating 25% of the code that gets committed at Google, but it’s not acting independently. A skilled human developer is in the driver’s seat, using their knowledge and experience to guide the AI, editing and shaping its output, and mixing it in with the code they’ve written. As far as I know, 100% of code at Google is still being created by developers. AI is just one of many tools they use to do their job.

In other words, developers are editing and curating the output of AI, which is where I believe the design discipline will end up soon.

On incorporating Cursor into his workflow:

And that’s kind of a problem for the “no more developers” theory. If I didn’t know how to code, I wouldn’t notice the subtle-yet-critical issues with the model’s output. I wouldn’t know how to course-correct, or even realize that course-correction was required!

I’ve heard from no-coders who have built projects using LLMs, and their experience is similar. They start off strong, but eventually reach a point where they just can’t progress anymore, no matter how much they coax the AI. The code is a bewildering mess of non sequiturs, and beyond a certain point, no amount of duct tape can keep it together. It collapses under its own weight.

I’ve noticed that too. For a non-coder like me, rebuilding this website yet again—I need to write a post about it—has been a challenge, but I knew and learned enough to get something out there that works. Still, relying solely on AI for any professional work right now is precarious. It requires guidance.

On the current job market for developers and the pace of AI:

It seems to me like we’ve reached the point in the technology curve where progress starts becoming more incremental; it’s been a while since anything truly game-changing has come out. Each new model is a little bit better, but it’s more about improving the things it already does well rather than conquering all-new problems.

This is where I will disagree with him. I think the AI labs are holding back the super-capable models that they are using internally. Tools like Claude Code and the newly released OpenAI Codex are clues that the foundational model AI companies have more powerful agents behind the scenes. And those agents are building the next generation of models.


The Post-Developer Era

When OpenAI released GPT-4 back in March 2023, they kickstarted the AI revolution. The consensus online was that front-end development jobs would be totally eliminated within a year or two. Well, it’s been more than two years since then, and I thought it was worth revisiting some of those early predictions, and seeing if we can glean any insights about where things are headed.

joshwcomeau.com
Illustration of humanoid robots working at computer terminals in a futuristic control center, with floating digital screens and globes surrounding them in a virtual space.

Prompt. Generate. Deploy. The New Product Design Workflow

Product design is going to change profoundly within the next 24 months. If the AI 2027 report is any indication, the capabilities of the foundational models will grow exponentially, and with them—I believe—will the abilities of design tools.

A graph comparing AI Foundational Model Capabilities (orange line) versus AI Design Tools Capabilities (blue line) from 2026 to 2028. The orange line shows exponential growth through stages including Superhuman Coder, Superhuman AI Researcher, Superhuman Remote Worker, Superintelligent AI Researcher, and Artificial Superintelligence. The blue line shows more gradual growth through AI Designer using design systems, AI Design Agent, and Integration & Deployment Agents.

The AI foundational model capabilities will grow exponentially and AI-enabled design tools will benefit from the algorithmic advances. Sources: AI 2027 scenario & Roger Wong

The TL;DR of the report is this: companies like OpenAI have more advanced AI agent models that are building the next-generation models. Once those are built, the previous generation is tested for safety and released to the public. And the cycle continues. Currently, and for the next year or two, these companies are focusing their advanced models on creating superhuman coders. This compounds and will result in artificial general intelligence, or AGI, within the next five years. 

Non-AI companies will benefit from new model releases. We already see how much the performance of coding assistants like Cursor has improved with recent releases of Claude 3.7 Sonnet, Gemini 2.5 Pro, and this week, GPT-4.1, OpenAI’s latest.

Tools like v0, Lovable, Replit, and Bolt are leading the charge in AI-assisted design. Creating new landing pages and simple apps is literally as easy as typing English into a chat box. You can whip up a very nice-looking dashboard in single-digit minutes.

However, I will argue they are only serving a small portion of the market. These tools are great for zero-to-one digital products or websites. While new sites and software need to be designed and built, the vast majority of the market is in extending and editing existing products. There are far more designers working at corporations such as Adobe, Microsoft, Salesforce, Shopify, and Uber than there are at agencies. They all need to adhere to their company’s design system and can’t use what Lovable produces from scratch. Even if the generated components were styled to look correct, they can’t be used; they must be components from the company’s design system code repositories.

The Design-to-Code Gap

But first, a quick detour…

Any designer who has ever handed off a Figma file to a developer has felt the stinging disappointment days or weeks later when it’s finally coded. The spacing is never quite right. The type sizes are off. And the back-and-forth seems endless. The developer handoff experience has been a well-trodden path full of now-defunct or dying companies like InVision, Abstract, and Zeplin. Figma tries to solve this issue with Dev Mode, but even then, there’s a translation that has to happen from pixels and vectors in a proprietary program to code.

Yes, no- and low-code platforms like Webflow, Framer, and Builder.io exist. But the former two are proprietary platforms—you can’t take the code with you—and the latter is primarily a CMS (no-code editing for content editors).

The dream is for a design app similar to Figma that uses components from your team’s GitHub design system repository.1 I’m not talking about a Figma-only component library. No. Real components with controllable props in an inspector. You can’t break them apart and any modifications have to be made at the repo level. But you can visually put pages together. For new components, well, if they’re made of atomic parts, then yes, that should be possible too.
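
To make that concrete, here is a minimal sketch of what a repo-backed component could look like, with every name hypothetical: the props are the only surface the design tool would expose in its inspector, and the internals stay locked at the repo level.

```tsx
// Button.tsx: a hypothetical design-system component living in the
// team's GitHub repo. A repo-aware design tool would render this
// component directly and expose only its props in an inspector;
// the markup and class names below stay locked at the repo level.
import React from "react";

type ButtonProps = {
  label: string;
  variant?: "primary" | "secondary" | "ghost"; // inspector dropdown
  size?: "sm" | "md" | "lg"; // inspector dropdown
  disabled?: boolean; // inspector toggle
  onClick?: () => void;
};

export function Button({
  label,
  variant = "primary",
  size = "md",
  disabled = false,
  onClick,
}: ButtonProps) {
  // Styling comes from the design system's own tokens and classes,
  // not from anything the canvas tool generates.
  return (
    <button
      className={`ds-btn ds-btn--${variant} ds-btn--${size}`}
      disabled={disabled}
      onClick={onClick}
    >
      {label}
    </button>
  );
}
```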

UXPin Merge comes close. Everything I mentioned above is theoretically possible. But if I’m being honest, I did a trial, and the product was buggy and not great to use.

A Glimpse of What’s Coming

Enter Tempo, Polymet, and Subframe. These are very new entrants to the design tool space. Tempo and Polymet are backed by Y Combinator, and Subframe is pre-seed.

Subframe is working on a beta feature that will allow you to connect your GitHub repository, append a little snippet of code to each component, and have the library of components appear in their app. Great! This is the dream. The app seems fairly easy to use and wasn’t sluggish or buggy like UXPin.

But the kicker—the Holy Grail—is their AI. 

I quickly put together a hideous form screen based on one of the oldest pages in BuildOps that is long overdue for a redesign. Then I went into Subframe’s Ask AI tab and prompted, “Make this design more user friendly.” Similar to Midjourney, four blurry tiles appeared and slowly came into focus. This diffusion-model effect was a moment of delight for me. I don’t know if they’re actually using a diffusion model—think Stable Diffusion and Midjourney—or if they spent the time building a kick-ass loading state. Either way, four completely built alternate layouts were generated. I clicked into each one to see it larger and noticed they each used components from our styled design library. (I’m on a trial, so it’s not exactly components from our repo, but it demonstrates the promise.) And I felt like I had just witnessed the future.

Image shows a side-by-side comparison of design screens from what appears to be Subframe, a design tool. On the left is a generic form page layout with fields for customer information, property details, billing options, job specifications, and financial information. On the right is a more refined "Create New Job" interface with improved organization, clearer section headings (Customer Information, Job Details, Work Description), and thumbnail previews of alternative design options at the bottom. Both interfaces share the same navigation header with Reports, Dashboard, Operations, Dispatch, and Accounting tabs. The bottom of the right panel indicates "Subframe AI is in beta."

Subframe’s Ask AI mode drafted four options in under a minute, turning an outdated form into something much more user-friendly.

What Product Design in 2027 Might Look Like

From the AI 2027 scenario report, in the chapter, “March 2027: Algorithmic Breakthroughs”:

Three huge datacenters full of Agent-2 copies work day and night, churning out synthetic training data. Another two are used to update the weights. Agent-2 is getting smarter every day.

With the help of thousands of Agent-2 automated researchers, OpenBrain is making major algorithmic advances.

Aided by the new capabilities breakthroughs, Agent-3 is a fast and cheap superhuman coder. OpenBrain runs 200,000 Agent-3 copies in parallel, creating a workforce equivalent to 50,000 copies of the best human coder sped up by 30x. OpenBrain still keeps its human engineers on staff, because they have complementary skills needed to manage the teams of Agent-3 copies.

As I said at the top of this essay, AI is making AI and the innovations are compounding. In UX design, there will be a day when the work is completely automated.

Imagine this. A product manager at a large-scale e-commerce site wants to decrease shopping cart abandonment by 10%. They task an AI agent to optimize a shopping cart flow with that metric as the goal. A week later, the agent returns the results:

  • It ran 25 experiments, with each experiment being a design variation of multiple pages.
  • Each experiment ran with 1,000 visitors; combined, that’s about 10% of the site’s average weekly traffic.
  • Experiment #18 was the winner, resulting in an 11.3% decrease in cart abandonment.
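
To make the scenario concrete, the agent’s report could be expressed as a simple data shape. Here is a minimal TypeScript sketch, with every field name assumed:

```ts
// Hypothetical shape of the optimization agent's report (all names
// invented for illustration; no real tool emits this today).
interface ExperimentResult {
  id: number; // e.g., 18
  description: string; // what the design variation changed
  visitors: number; // traffic allocated to this variant
  abandonmentRate: number; // measured cart abandonment, e.g., 0.62
}

interface AgentReport {
  goal: string; // "decrease cart abandonment by 10%"
  baselineAbandonmentRate: number;
  experiments: ExperimentResult[]; // 25 variations in the scenario above
  winner: ExperimentResult; // experiment #18, an 11.3% decrease
}
```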

The above will be possible. A few things have to fall in place first, though, and the building blocks are being made right now.

The Foundation Layer: Integrate Design Systems

The design industry has been promoting the benefits of design systems for many years now. What was once a Sisyphean battle is now mostly won: development teams understand the benefits of using a shared, standardized component library.

To capture the larger piece of the design market that is not producing greenfield work, AI design tools like Subframe will have to depend on well-built component libraries. Their AI must be able to ingest and internalize the design system documentation that governs how components should be used.

Then we’ll be able to prompt new screens with working code into existence. 

**Forecast:** Within six months.

Professionals Still Need Control

Cursor—the AI-assisted development tool that’s captured the market—is VS Code enhanced with AI features. In other words, it is a professional-grade programming tool that allows developers to write and edit code, *and* generate it via AI chat. It gives the pros control. Contrast that with something like Lovable, which is aimed at designers: the code is accessible, but you have to look for it. The canvas and chat are prioritized.

For AI-assisted design tools to work, they need to give us designers control. That control comes in the form of curation and visual editing. Give us choices when generating alternates and let us tweak elements to our heart’s content—within the confines of the design system, of course. 

A diagram showing the process flow of creating a shopping cart checkout experience. At the top is a prompt box, which leads to four generated layout options below it. The bottom portion shows configuration panels for adjusting size and padding properties of the selected design.

The product design workflow in the future will look something like this: prompt the AI, view choices and select one, then use fine-grained controls to tweak.

Automating Design with Design Agents

Agent mode in Cursor is pretty astounding. You’ll see it plan its actions based on the prompt, then execute them one by one. If it encounters an error, it’ll diagnose and fix it. If it needs to install a package or launch the development server to test the app, it will do that. Sometimes, it can go for many minutes without needing intervention. It’s literally like watching a robot assemble a thingamajig. 

We will need this same level of agentic AI automation in design tools. If I could write in a chat box “Create a checkout flow for my site” and the AI design tool can generate a working cart page, payment page, and thank-you page from that one prompt using components from the design system, that would be incredible.
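
In code terms, you could imagine such an agent as a plan-then-generate loop constrained to the design system. This is purely a conceptual sketch: every function below is assumed, standing in for capabilities no shipping tool has yet.

```ts
// Conceptual sketch of a design agent. The three declared functions
// are stand-ins for capabilities described above, not real APIs.
declare function llm(prompt: string): Promise<string>;
declare function parseScreenNames(plan: string): string[];
declare function designSystemIndex(): string; // names + props of repo components

type Screen = { name: string; code: string };

async function runDesignAgent(prompt: string): Promise<Screen[]> {
  // 1. Plan: break the request into screens (cart, payment, thank-you).
  const plan = await llm(`List the screens needed for: ${prompt}`);

  const screens: Screen[] = [];
  for (const name of parseScreenNames(plan)) {
    // 2. Generate each screen, constrained to design-system components.
    const code = await llm(
      `Generate the "${name}" screen using ONLY these components:\n${designSystemIndex()}`
    );
    screens.push({ name, code });
  }
  return screens;
}
```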

Yes, zero-to-one tools are starting to add this feature. Here’s a shopping cart flow from v0…

Building a shopping cart checkout flow in v0 was incredibly fast. Two minutes flat. This video is sped up 400%.

Polymet and Lovable were both able to create decent flows. There is also promise with Tempo, although the service was bugging out when I tested it earlier today. Tempo first plans by writing a PRD, then draws a flow diagram, then wireframes the flow, and finally generates code for each screen. If I were to create a professional tool, this is how I would do it. I truly hope they can resolve their tech issues.

**Forecast:** Within one year.

A screenshot of Tempo, an AI-powered design tool interface showing the generation of a complete checkout experience. The left sidebar displays a history of AI-assisted tasks including generating PRD, mermaid diagrams, wireframes and components. The center shows a checkout page preview with cart summary, checkout form, and order confirmation screens visible in a component-based layout.

Tempo’s workflow seems ideal. It generates a PRD, draws a flow diagram, creates wireframes, and finally codes the UI.

The Final Pieces: Integration and Deployment Agents

The final pieces to realizing our imaginary scenario are coding agents that integrate the frontend from AI design tools with the backend application, and then deploy the code to a server for public consumption. I’m not an expert here, so I’ll just hand-wave past this part. The AI-assisted design tooling mentioned above is frontend-only. For the data to flow and the business logic to work, the UI must be integrated with the backend.

CI/CD (Continuous Integration and Continuous Deployment) platforms like GitHub Actions and Vercel already exist today, so it’s not difficult to imagine deploys being initiated by AI agents.
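
Vercel’s Deploy Hooks, for instance, are just URLs you POST to in order to trigger a build, so any agent that can make an HTTP request can already kick off a deploy. A minimal sketch, with a placeholder hook URL:

```ts
// Trigger a Vercel deployment via a Deploy Hook. Hooks are created in
// the project settings; the URL below is a placeholder, not a real one.
const DEPLOY_HOOK_URL =
  "https://api.vercel.com/v1/integrations/deploy/prj_XXXX/hook_XXXX";

async function triggerDeploy(): Promise<void> {
  const res = await fetch(DEPLOY_HOOK_URL, { method: "POST" });
  if (!res.ok) throw new Error(`Deploy hook failed: ${res.status}`);
  console.log("Deployment triggered:", await res.json());
}
```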

**Forecast:** Within 18–24 months.

Where Is Figma?

The elephant in the room is Figma’s position in all this. Since their rocky debut of AI features last year, Figma has been trickling out small AI features like more powerful search, layer renaming, mock data generation, and image generation. The biggest AI feature they have is called First Draft, which is a relaunch of design generation. They seem stuck placating designers and developers (Dev Mode) instead of considering how they can bring value to the entire organization. Maybe they will make a big announcement at Config, their upcoming user conference in May. But if they don’t compete with one of the aforementioned tools, they will be left behind.

To be clear, Figma is still going to be a necessary part of the design process. A canvas free from the confines of code allows for easy *manual* exploration. But the dream of closing the gap between design and code needs to come true sooner rather than later if we’re to take advantage of AI’s promise.

The Two-Year Horizon

As I said at the top of this essay, product design is going to change profoundly within the next two years. The trajectory is clear: AI is making AI, and the innovations are compounding rapidly. Design systems provide the structured foundation that AI needs, while tools like Subframe are developing the crucial integration with these systems.

For designers, this isn’t the end—if anything, it’s a transformation. We’ll shift from pixel-pushers to directors, from creators to curators. Our value will lie in knowing what to ask for and making the subtle refinements that require human taste and judgment.

The holy grail of seamless design-to-code is finally within reach. In 24 months, we won’t be debating if AI will transform product design—we’ll be reflecting on how quickly it happened.


1 I know Figma has a feature called Code Connect. I haven’t used it, but from what I can tell, you match your Figma component library to the code component library. Then, in Dev Mode, it makes it easier for engineers to discern which component from the repo to use.
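
From skimming the docs, a Code Connect mapping file looks roughly like the sketch below. I haven’t run it, so treat the details as approximate.

```tsx
// Button.figma.tsx: rough sketch of a Figma Code Connect mapping,
// based on my reading of the docs (untested; details may be off).
import figma from "@figma/code-connect";
import { Button } from "./Button";

figma.connect(
  Button,
  "https://www.figma.com/design/abc123/DS?node-id=1-234", // placeholder URL
  {
    props: {
      label: figma.string("Label"),
      variant: figma.enum("Variant", {
        Primary: "primary",
        Secondary: "secondary",
      }),
    },
    // What engineers see in Dev Mode when they select the component:
    example: ({ label, variant }) => <Button label={label} variant={variant} />,
  }
);
```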

There are many dimensions to this well-researched forecast about how AI will play out in the coming years. Daniel Kokotajlo and his researchers have put out a document that reads like a sci-fi limited series that could appear on Apple TV+ starring Andrew Garfield as the CEO of OpenBrain—the leading AI company. …Except that it’s all actually plausible and could play out as described in the next five years.

Before we jump into the content, the design is outstanding. The type is set for readability and there are enough charts and visual cues to keep this interesting while maintaining an air of credibility and seriousness. On desktop, there’s a data viz dashboard in the upper right that updates as you read through the content and move forward in time. My favorite is seeing how the sci-fi tech boxes move from the Science Fiction category to Emerging Tech to Currently Exists.

The content is dense and technical, but it is a fun, if frightening, read. While I’ve been using Cursor AI—one of its many customers helping the company get to $100 million in annual recurring revenue (ARR)—for side projects and a little at work, I’m familiar with its limitations. Because of the limited context window of today’s models like Claude 3.7 Sonnet, it will forget and start munging code if not treated like a senile teenager.

The researchers, describing what could happen in early 2026 (“OpenBrain” is essentially OpenAI):

OpenBrain continues to deploy the iteratively improving Agent-1 internally for AI R&D. Overall, they are making algorithmic progress 50% faster than they would without AI assistants—and more importantly, faster than their competitors.

The point they make here is that the foundational model AI companies are building agents and using them internally to advance their technology. The limiting factor in tech companies has traditionally been the talent. But AI companies have the investments, hardware, technology and talent to deploy AI to make better AI.

Continuing to January 2027:

Agent-1 had been optimized for AI R&D tasks, hoping to initiate an intelligence explosion. OpenBrain doubles down on this strategy with Agent-2. It is qualitatively almost as good as the top human experts at research engineering (designing and implementing experiments), and as good as the 25th percentile OpenBrain scientist at “research taste” (deciding what to study next, what experiments to run, or having inklings of potential new paradigms). While the latest Agent-1 could double the pace of OpenBrain’s algorithmic progress, Agent-2 can now triple it, and will improve further with time. In practice, this looks like every OpenBrain researcher becoming the “manager” of an AI “team.”

Breakthroughs come at an exponential clip because of this. And by April, safety concerns pop up:

Take honesty, for example. As the models become smarter, they become increasingly good at deceiving humans to get rewards. Like previous models, Agent-3 sometimes tells white lies to flatter its users and covers up evidence of failure. But it’s gotten much better at doing so. It will sometimes use the same statistical tricks as human scientists (like p-hacking) to make unimpressive experimental results look exciting. Before it begins honesty training, it even sometimes fabricates data entirely. As training goes on, the rate of these incidents decreases. Either Agent-3 has learned to be more honest, or it’s gotten better at lying.

But the AI is getting faster than humans, and we must rely on older versions of the AI to check the new AI’s work:

Agent-3 is not smarter than all humans. But in its area of expertise, machine learning, it is smarter than most, and also works much faster. What Agent-3 does in a day takes humans several days to double-check. Agent-2 supervision helps keep human monitors’ workload manageable, but exacerbates the intellectual disparity between supervisor and supervised.

The report forecasts that OpenBrain releases “Agent-3-mini” publicly in July of 2027, calling it AGI—artificial general intelligence—and ushering in a new golden age for tech companies:

Agent-3-mini is hugely useful for both remote work jobs and leisure. An explosion of new apps and B2B SAAS products rocks the market. Gamers get amazing dialogue with lifelike characters in polished video games that took only a month to make. 10% of Americans, mostly young people, consider an AI “a close friend.” For almost every white-collar profession, there are now multiple credible startups promising to “disrupt” it with AI.

Woven throughout the report is the race between China and the US, with predictions of espionage and government takeovers. Near the end of 2027, the report gives readers a choice: does the US government slow down the pace of AI innovation, or does it continue at the current pace so America can beat China? I chose to read the “Race” option first:

Agent-5 convinces the US military that China is using DeepCent’s models to build terrifying new weapons: drones, robots, advanced hypersonic missiles, and interceptors; AI-assisted nuclear first strike. Agent-5 promises a set of weapons capable of resisting whatever China can produce within a few months. Under the circumstances, top brass puts aside their discomfort at taking humans out of the loop. They accelerate deployment of Agent-5 into the military and military-industrial complex.

In Beijing, the Chinese AIs are making the same argument.

To speed their military buildup, both America and China create networks of special economic zones (SEZs) for the new factories and labs, where AI acts as central planner and red tape is waived. Wall Street invests trillions of dollars, and displaced human workers pour in, lured by eye-popping salaries and equity packages. Using smartphones and augmented-reality glasses to communicate with its underlings, Agent-5 is a hands-on manager, instructing humans in every detail of factory construction—which is helpful, since its designs are generations ahead. Some of the newfound manufacturing capacity goes to consumer goods, and some to weapons—but the majority goes to building even more manufacturing capacity. By the end of the year they are producing a million new robots per month. If the SEZ economy were truly autonomous, it would have a doubling time of about a year; since it can trade with the existing human economy, its doubling time is even shorter.

Well, it does get worse, and I think we all know the ending, which is the backstory for so many dystopian future movies. There is an optimistic branch as well. The whole report is worth a read.

Ideas about the implications to our design profession are swimming in my head. I’ll write a longer essay as soon as I can put them into a coherent piece.

Update: I’ve written that piece, “Prompt. Generate. Deploy. The New Product Design Workflow.”


AI 2027

A research-backed AI scenario forecast.

ai-2027.com

Remember the Nineties?

In the 1980s and ’90s, Emigre was a prolific powerhouse. The company started out as a magazine in the mid-1980s but quickly became a type foundry as the Mac enabled desktop publishing. To a young designer starting out in San Francisco in the ’90s, Zuzana Licko and Rudy VanderLans were local heroes (they were based across the Bay in Berkeley). From 1990 to 1999, they churned out 37 typefaces for a total of 157 fonts. And in that decade, they expanded their influence by getting into music, artists’ book publishing, and apparel. More than any other design brand, they celebrated art and artists.

Here is a page from a just-released booklet (with a free downloadable PDF) showcasing their fonts from the Nineties.

Two-page yellow spread featuring bold black typography samples. Left page shows “NINE INCH NAILS” in Platelet Heavy, “majorly” in Venus Dioxide Outlined, both dated 1993. Right page shows “Reality Bites” in Venus Dioxide, a black abstract shape below labeled Fellaparts, also from 1993.

I found this post from Tom Blomfield to be pretty profound. We’ve seen interest in universal basic income from Sam Altman and other leaders in AI, as they’ve anticipated the decimation of white-collar jobs in the coming years. Blomfield crushes the resistance from some corners of the software developer community in stark terms.

These tools [like Windsurf, Cursor and Claude Code] are now very good. You can drop a medium-sized codebase into Gemini 2.5’s 1 million-token context window and it will identify and fix complex bugs. The architectural patterns that these coding tools implement (when prompted appropriately) will easily scale websites to millions of users. I tried to expose sensitive API keys in front-end code just to see what the tools would do, and they objected very vigorously.

They are not perfect yet. But there is a clear line of sight to them getting very good in the immediate future. Even if the underlying models stopped improving altogether, simply improving their tool use will massively increase the effectiveness and utility of these coding agents. They need better integration with test suites, browser use for QA, and server log tailing for debugging. Pretty soon, I expect to see tools that allow the LLMs to step through the code and inspect variables at runtime, which should make debugging trivial.

At the same time, the underlying models are not going to stop improving. They will continue to get better, and these tools are just going to become more and more effective. My bet is that the AI coding agents quickly beat the top 0.1% of human performance, at which point it wipes out the need for the vast majority of software engineers.

He quotes the Y Combinator stat I cited in a previous post:

About a quarter of the recent YC batch wrote 95%+ of their code using AI. The companies in the most recent batch are the fastest-growing ever in the history of Y Combinator. This is not something we say every year. It is a real change in the last 24 months. Something is happening.

Companies like Cursor, Windsurf, and Lovable are getting to $100M+ revenue with astonishingly small teams. Similar things are starting to happen in law with Harvey and Legora. It is possible for teams of five engineers using cutting-edge tools to build products that previously took 50 engineers. And the communication overhead in these teams is dramatically lower, so they can stay nimble and fast-moving for much longer.

And for me, this is where the rubber meets the road:

The costs of running all kinds of businesses will come dramatically down as the expenditure on services like software engineers, lawyers, accountants, and auditors drops through the floor. Businesses with real moats (network effect, brand, data, regulation) will become dramatically more profitable. Businesses without moats will be cloned mercilessly by AI and a huge consumer surplus will be created.

Moats are now more important than ever. Non-tech companies—those that rely on tech companies to make software for them, specifically B2B vertical SaaS—are starting to hire developers. How soon will they discover Cursor if they haven’t already? These next few years will be incredibly interesting.

Tweet by Tom Blomfield comparing software engineers to farmers, stating AI is the “combine harvester” that will increase output and reduce need for engineers.

The Age Of Abundance

Technology clearly accelerates human progress and makes a measurable difference to the lives of most people in the world today. A simple example is cancer survival rates, which have gone from 50% in 1975 to about 75% today. That number will inevitably rise further because of human ingenuity and technological acceleration.

tomblomfield.com

Karri Saarinen, writing for the Linear blog:

Unbounded AI, much like a river without banks, becomes powerful but directionless. Designers need to build the banks and bring shape to the direction of AI’s potential. But we face a fundamental tension in that AI sort of breaks our usual way of designing things, working back from function, and shaping the form.

I love the metaphor of AI being a river and we designers being the banks. It feels very much in line with my notion that we need to become even better curators.

Saarinen continues, critiquing the generic chatbox being the primary form of interacting with AI:

One way I visualize this relationship between the form of traditional UI and the function of AI is through the metaphor of a ‘workbench’. Just as a carpenter’s workbench is familiar and purpose-built, providing an organized environment for tools and materials, a well-designed interface can create productive context for AI interactions. Rather than being a singular tool, the workbench serves as an environment that enhances the utility of other tools – including the ‘magic’ AI tools.

Software like Linear serves as this workbench. It provides structure, context, and a specialized environment for specific workflows. AI doesn’t replace the workbench, it’s a powerful new tool to place on top of it.

It’s interesting. I don’t know what Linear is telegraphing here, but if I had to guess, I wonder if it’s closer to being field-specific or workflow-specific, similar to Generative Fill in Photoshop. It’s a text field—not a textarea—limited to a single workflow.


Design for the AI age

For decades, interfaces have guided users along predefined roads. Think files and folders, buttons and menus, screens and flows. These familiar structures organize information and provide the comfort of knowing where you are and what's possible.

linear.app

Haiyan Zhang gives us another way of thinking about AI—as material, like clay, paint, or plywood—instead of a tool. I like that because it invites exploration:

When we treat AI as a design material, prototyping becomes less about refining known ideas — and more about expanding the space of what’s possible. It’s messy, surprising, sometimes frustrating — but that’s what working with any material feels like in its early days.

Clay resists. Wood splinters. AI misinterprets.

But in that material friction, design happens.

The challenge ahead isn’t just to use AI more efficiently — it’s to foster a culture of design experimentation around it. Like any great material, AI won’t reveal its potential through control, but through play, feedback, and iteration.

I love this metaphor. It’s freeing.

Illustration with the text ‘AI as Design Material’ surrounded by icons of a saw cutting wood, a mid-century modern chair, a computer chip, and a brain with circuit lines, on an orange background.

AI as Design Material

From Plywood to Prompts: The Evolution of Material Thinking in Design Design has always evolved hand-in-hand with material innovation — whether shaping wood, steel, fiberglass, or pixels. In 1940, at the Cranbrook Academy of Art, Charles Eames and his friend Eero Saarinen collaborated on MoMA’s Orga…

linkedin.com

Jay Hoffman, from his excellent The History of the Web site:

1995 is a fascinating year. It’s one of the most turbulent in modern history. 1995 was the web’s single most important inflection point. A fact that becomes most apparent by simply looking at the numbers. At the end of 1994, there were around 2,500 web servers. 12 months later, there were almost 75,000. By the end of 1995, over 700 new servers were being added to the web every single day.

That was surely a crazy time…


1995 Was the Most Important Year for the Web

The world changed a lot in 1995. And for the web, it was a transformational year.

thehistoryoftheweb.com

Elizabeth Goodspeed, writing for It’s Nice That:

The cynicism our current moment inspires appears to be, regrettably, universal. For millennials, who watched the better-world-by-design ship go down in real time, it’s hard-earned. We saw the idealist fantasy of creative autonomy, social impact, and purpose-driven work slowly unravel over the past decade, and are now left holding the bag. Gen Z designers have the same pessimism, but arrived at it from a different angle. They’re entering the field already skeptical, shaped by a job market in freefall and constant warnings of their own obsolescence. But the result is the same: an industry full of people who care deeply, but feel let down.

Sounds very similar to what Gen X-ers are facing in their careers too. I think it’s universal for nearly all creative careers today.


Elizabeth Goodspeed on why graphic designers can’t stop joking about hating their jobs

Designers are burnt out, disillusioned, and constantly joking that design ruined their life – but underneath the memes lies a deeper reckoning. Our US editor-at-large explores how irony became the industry’s dominant tone, and what it might mean to care again.

itsnicethat.com

Sarah Gibbons and Evan Sunwall from NN/g:

The rise of AI tools doesn’t mean becoming a “unicorn” who can do everything perfectly. Specialization will remain valuable in our field: there will still be dedicated researchers, content strategists, and designers.

However, AI is broadening the scope of what any individual can accomplish, regardless of their specific expertise.

What we’re seeing isn’t the elimination of specialization but rather an increased value placed on expanding the top of a professional’s “expertise T.”

This reinforces what I talked about in a previous essay, “T-shaped skills [will become] increasingly valuable—depth in one area with breadth across others.”

They go on to say:

We believe these broad skills will coalesce into experience designer and architect roles: people who direct AI-supported design tasks to craft experiences for humans and AI agents alike, while ensuring that the resulting work reflects well-researched, strategic thinking.

In other words, curation of the work that AI does.

They also make the point that designers need to be strategic, i.e., focus on the why:

This evolution means that the unique value we bring as UX professionals is shifting decidedly toward strategic thinking and leadership. While AI can execute tasks, it cannot independently understand the complex human and organizational contexts in which our work exists.

Finally, Gibbons and Sunwall end with some solid advice:

To adapt to this shift toward generalist skills, UX professionals should focus on 4 key areas:

  • Developing a learning mindset
  • Becoming fluent in AI collaboration
  • Focusing on transferable skills
  • Expanding into adjacent fields

I appreciate the learning mindset bit, since that’s how I’m wired. I also believe that collaborating with AI is the way to go, rather than seeing it as a replacement or a threat.


The Return of the UX Generalist

AI advances make UX generalists valuable, reversing the trend toward specialization. Understanding multiple disciplines is increasingly important.

nngroup.com

Related to the NYT article about Gen X-ers in creative industries that I posted yesterday, graphic design historian Steven Heller explores what happened to advertising creative—specifically print—in the 2000s.

Advertising did not change when the Times Square ball fell at the stroke of midnight on Jan. 1, 2000, but the industry began its creative decline in the early 2000s. Here are several indicators to support this claim: For one, the traditional print outlets for advertisements, notably magazines and newspapers, sharply declined in numbers (some turning to digital-only) during the late 1990s and early 2000s. Major advertisers were cutting print budgets and earmarking creative talent for television work. TV had already plucked away many of the most imaginative ad-people during the preceding decades, and print slipped lower down on the hierarchical ladder.

He continues:

The work of 1960s and 1970s “mad men” smothered conventional establishment agencies at Art Directors Club award competitions, spawning the innovative Big Idea creative dynamic where exceptional art directors and copywriters made witty, ironic and suggestive slogans and visuals. But, by the early 2000s, these teams started to cede their dominance with, among the other social factors, the death of many national print magazines and the failure of television networks to retain large audiences in the face of cable.

In my first couple of years in design school, I was enamored with advertising. It seemed so glamorous to be making ads that appeared in glossy magazines and on TV. I remember visiting the offices of an agency in San Francisco—the name escapes me—and just loving the vibe and the potential. After graduation and into my career, I would brush up against ad agencies, collaborating with them on the pieces my design company was working on. Sometimes it was with FCB on Levi’s retail work, or BBDO for Mitsubishi Motors digital campaigns. I ended up working for a small ad agency in 2010, PJA Advertising & Marketing, doing B2B ads. It was fun and I learned a lot, but it wasn’t glamorous.

Anyway, back to Heller’s article…it reinforces the idea that our mental model of the creative and media world—whether we’re Boomers, Gen Xers, or even Millennials—must change to match reality. And we must pivot our careers or be left behind.


The Daily Heller: The Beginning of the End of Print Advertising? – PRINT Magazine

Taschen's All-American Ads series tells a distinct history of the United States from various vantage points.

printmag.com

Steven Kurtz, writing for The New York Times:

For many of the Gen X-ers who embarked on creative careers in the years after [Douglas Coupland’s Generation X] was published, lessness has come to define their professional lives.

If you entered media or image-making in the ’90s — magazine publishing, newspaper journalism, photography, graphic design, advertising, music, film, TV — there’s a good chance that you are now doing something else for work. That’s because those industries have shrunk or transformed themselves radically, shutting out those whose skills were once in high demand.

My first assumption was that Kurtz was writing about AI and how it’s taking away all the creative jobs. Instead, he weaves together a multifaceted picture of the diminishing value of commercial creative endeavors like photography, music, filmmaking, copywriting, and design.

“My peers, friends and I continue to navigate the unforeseen obsolescence of the career paths we chose in our early 20s,” Mr. Wilcha said. “The skills you cultivated, the craft you honed — it’s just gone. It’s startling.”

Every generation has its burdens. The particular plight of Gen X is to have grown up in one world only to hit middle age in a strange new land. It’s as if they were making candlesticks when electricity came in. The market value of their skills plummeted.

It’s more than AI, although certainly, that is top of everyone’s mind these days. Instead, it’s also stock photography and illustrations, graphic templates, the consolidation of ad agencies, the revolutionary rise of social media, and the tragic fall of traditional media.

Similar shifts have taken place in music, television and film. Software like Pro Tools has reduced the need for audio engineers and dedicated recording studios; A.I., some fear, may soon take the place of actual musicians. Streaming platforms typically order fewer episodes per season than the networks did in the heyday of “Friends” and “ER.” Big studios have slashed budgets, making life for production crews more financially precarious.

Earlier this year, I cited Baldur Bjarnason’s essay about the changing economics of web development. As an opening analogy, he referenced the shifting landscape of film and television.

Born in 1973, I am squarely in Generation X. I started my career in the design and marketing industry just as the internet was taking off, so I know exactly what the interviewees of Kurtz’s article are facing. But through dogged tenacity and sheer luck, I’ve been able to pivot and survive. Am I still a graphic designer like I was back in the mid-1990s? Nope. I’m more of a product designer now, a role that didn’t exist 30 years ago and that is a subtle but distinct shift from UX designer, which has existed for about 20 years.

I’ve been lucky enough to ride the wave with the times, always remembering my core purpose.


The Gen X Career Meltdown (Gift Article)

Just when they should be at their peak, experienced workers in creative fields find that their skills are all but obsolete.

nytimes.com

Retro Safety

I was visiting a customer of ours in Denver this week. They’re an HVAC contractor, and we were camped out in one of their conference rooms where they teach their service technicians. On the walls, among posters of air-conditioning diagrams, were a couple of safety posters. At first glance, they look like they’re from the 1950s and ’60s, but upon closer inspection, they’re from 2016! The only credit I can find on the internet is the copywriter, John Wrend.

Sadly, the original microsite where Grainger had these posters is gone, but I managed to track down the full set.

Illustration of a padlock shaped like a human eye with text that reads “give the lock… A SECOND LOOK,” promoting safety awareness from Grainger.

Illustration of an injured construction worker emerging from unstable scaffolding, with text reading “Make sure it’s SECURE” and “Scaffolding safety starts with you!” promoting workplace safety from Grainger.

Silhouette of a hard hat filled with workers using ladders, accompanied by the text “KEEP LADDER SAFETY TOP OF MIND,” promoting safe ladder practices from Grainger.

Cartoon-style illustration of a person getting their arm caught in a machine with the guard removed, alongside the text “DON’T LET YOUR MACHINE GUARD DOWN,” promoting machine safety from Grainger.

Worker in full arc flash protective gear stands in front of a red-orange explosion graphic, with bold text reading “Arc flashes kill” and a warning to stay prepared, promoting electrical safety from Grainger.

Illustration of a shocked electrical outlet with a zigzagging yellow wire above it and the text “Using the wrong wires can be SHOCKING,” promoting electrical wiring safety from Grainger.

Painterly illustration of a confident construction worker wearing a full-body safety harness with the text “Don it Properly!” promoting proper fall protection from Grainger.

Cartoon-style illustration of a distracted forklift driver on a phone causing falling boxes and a spilled drink, with the text “FOCUSED DRIVERS ARE SAFE DRIVERS,” promoting powered truck safety from Grainger.

Stylized illustration of a person wearing a yellow respirator mask with the text “WEAR YOUR RESPIRATOR! AND BREATHE EASY,” promoting respiratory safety from Grainger.

Retro-style poster featuring a surprised man’s face with the text “IGNORE HAZARDS, INVITE HAZCOM” above various hazardous chemical containers, promoting hazard communication safety from Grainger.

Such a gorgeous visual essay from Amelia Wattenberger. Beyond being wonderful to look at, the content is just as thought-provoking. Her experiment towards the middle of the piece is interesting. In our world of flat design and design systems, Amelia is truly innovating.

People made of yarn working on room-sized computers

Our interfaces have lost their senses

With increasing amounts of AI chatbots, we're losing even more: texture, color, shape. Instead of interactive controls, we have a text input. Want to edit an image? Type a command. Adjust a setting? Type into a text box. Learn something? Read another block of text.

wattenberger.com
Closeup of a man with glasses, with code being reflected in the glasses

From Craft to Curation: Design Leadership in the Age of AI

In a recent podcast with partners at startup incubator Y Combinator, Jared Friedman, citing statistics from a survey of their current batch of founders, says, “[The] crazy thing is one quarter of the founders said that more than 95% of their code base was AI generated, which is like an insane statistic. And it’s not like we funded a bunch of non-technical founders. Like every one of these people is highly technical, completely capable of building their own product from scratch a year ago…”

A comment they shared from founder Leo Paz reads, “I think the role of Software Engineer will transition to Product Engineer. Human taste is now more important than ever as codegen tools make everyone a 10x engineer.”

Still from a YouTube video that shows a quote from Leo Paz

While vibe coding—the term Andrej Karpathy coined for coding by directing AI—is about leveraging AI for programming, it’s a window into what will happen to the software development lifecycle as a whole and how all the disciplines, including product management and design, will be affected.

A skill inversion trend is happening. Being great at execution is becoming less valuable when AI tools can generate deliverables in seconds. Instead, our value as product professionals is shifting from mastering tools like Figma or languages like JavaScript to strategic direction. We’re moving from the how to the what and why; from craft to curation. As Leo Paz says, “human taste is now more important than ever.”

The Traditional Value Hierarchy

The industry has been used to the unified-team model of software development for the last 15–20 years. Product managers define requirements, manage the roadmap, and align stakeholders. Designers focus on the user interface, ensure visual appeal and usability, and prototype solutions. Engineers design the system architecture and then build the application with quality code.

For each of the core disciplines, execution was paramount. (Arguably, product management has always been more strategic, save for ticket writing.) Screens must be pixel-perfect and code must be efficient and bug-free.

The Forces Driving Inversion

Vibe Coding and Vibe Design

With new AI tools like Cursor and Lovable coming into the mix, the nature of implementation fundamentally changes. In Karpathy’s tweet about vibe coding, he says, “…I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.” He’s telling the LLM what he wants—his intent—and the AI delivers, with some cajoling. Jakob Nielsen picks up on this thread and applies it to vibe design. “Vibe design applies similar AI-assisted principles to UX design and user research, by focusing on high-level intent while delegating execution to AI.”

He goes on:

…vibe design emphasizes describing the desired feeling or outcome of a design, and letting AI propose the visual or interactive solutions​. Rather than manually drawing every element, a designer might say to an AI tool, “The interface feels a bit too formal; make it more playful and engaging,” and the AI could suggest color changes, typography tweaks, or animation accents to achieve that vibe. This is analogous to vibe coding’s natural language prompts, except the AI’s output is a design mockup or updated UI style instead of code.

This sounds very much like creative direction to me. It’s shaping the software. It’s using human taste to make it better.

Acceleration of Development Cycles

The founder of TrainLoop also says in the YC survey that his coding has sped up a hundredfold in the past six months. He says, “I’m no longer an engineer. I’m a product person.”

This means that experimentation is practically free. What’s the best way of creating a revenue forecasting tool? You can whip up three prototypes in about 10 minutes using Lovable and then get them in front of users. Of course, designers have always had the power to explore and create variations for an interface. But three functioning prototypes in 10 minutes? That was impossible before.

With this newfound coding superpower, the idea of bespoke, personal software is starting to take off. Non-coders like The New York Times’ Kevin Roose are using AI to create apps just for themselves, like an app that recommends what to pack his son for lunch based on the contents of his fridge. This is an evolution of the low-code/no-code movement of recent years. The gap between idea and reality is literally 10 minutes.

Democratization of Creation

Designer Tommy Geoco has a running series on his YouTube channel called “Build Wars,” where he invites a couple of designers to battle head-to-head on the same assignment. In a livestream in late February, he and his cohosts had professional web designer Brett Williams square off against 19-year-old Lovable marketer Henrik Westerlund. Their assignment was to build a landing page for a robotics company in 45 minutes, and they would be judged on design quality, execution quality, interactive quality, and strategic approach.


Forty-five minutes to design and build a cohesive landing page is not enough time. Similar to TV cooking competitions, this artificial time constraint forced the two competitors to focus on what mattered and to use their time strategically. In the end, the professional designer won, but the commentators were impressed by how much a young marketer with little design experience could accomplish with AI tools in such a short time, suggesting a fundamental shift in how websites may be created in the future.

Cohost Tom Johnson suggested that small teams using AI tools will outcompete enterprises resistant to adopting them: “Teams that are pushing back on these new AI tools… get real… this is the way that things are going to go. You’re going to get destroyed by a team of 10 or five or one.”

The Maturation Cycle of Specialized Skills

“UX and UX people used to be special, but now we have become normal,” says Jakob Nielsen in a recent article about the decline of ROI from UX work. For enterprises, product or user experience design is now baseline. AI will dramatically increase the chances that young startups, too, will employ UX best practices.

Obviously, with AI, engineering is more accessible, but so are traditional product management processes. ChatGPT can write a pretty good PRD. Dovetail’s AI-powered insights supercharge customer discovery. And yes, why not use ChatGPT to write user stories and Jira tickets?

The New Value Hierarchy

From Technical Execution to Strategic Direction & Taste Curation

In the AI-augmented product development landscape, articulating vision and intent becomes significantly more valuable than implementation skills. While AI can generate better and better code and design assets, it can’t determine what is worth building or why.

Mike Krieger, cofounder of Instagram and now Chief Product Officer at Anthropic, identifies this change clearly. He believes the true bottleneck in product development is shifting to “alignment, deciding what to build, solving real user problems, and figuring out a cohesive product strategy.” These are all areas he describes as “very human problems” that we’re “at least three years away from models solving.”

This makes taste and judgment even more important. When everyone can generate good-enough work via AI, having a strong point of view becomes a differentiator. To repeat Leo Paz, “Human taste is now more important than ever as codegen tools make everyone a 10x engineer.” The ability to recognize and curate quality outputs becomes as valuable as creating them manually.

This transformation manifests differently across disciplines but follows the same pattern:

  • Product managers shift from writing detailed requirements to articulating problems worth solving and recognizing valuable solutions
  • Designers transition from pixel-level execution to providing creative direction that guides AI-generated outputs
  • Engineers evolve from writing every line of code to focusing on architecture, quality standards, and system design

Each role maintains its core focus while delegating much of the execution to AI tools. The skill becomes knowing what to ask for rather than how to build it—a fundamental reorientation of professional value.

From Process Execution to User Understanding

In a scene from the film "Blade Runner," replicant Leon Kowalski can't quite understand how to respond to the situation about the incapacitated tortoise.

In a scene from the film Blade Runner, replicant Leon Kowalski can’t quite understand how to respond to the situation about the incapacitated tortoise.

While AI is great at summarizing mountains of text, it can’t yet replicate human empathy or understand nuanced user needs. The human ability to interpret context, detect unstated problems, and understand emotional responses remains irreplaceable.

Nielsen emphasizes this point when discussing vibe coding and design: “Building the right product remains a human responsibility, in terms of understanding user needs, prioritizing features, and crafting a great user experience.” Even as AI handles more implementation, the work of understanding what users need remains distinctly human.

Research methodologies are evolving to leverage AI’s capabilities while maintaining human insight:

  • AI tools can process and analyze massive amounts of user feedback
  • Platforms like Dovetail now offer AI-powered insights from user research
  • However, interpreting this data and identifying meaningful patterns still requires human judgment

The gap between what users say they want and what they actually need remains a space where human intuition and empathy create tremendous value. Those who excel at extracting these insights will become increasingly valuable as AI handles more of the execution.

From Specialized to Cross-Functional

The traditional boundaries between product disciplines are blurring as AI lowers the barriers between specialized areas of expertise. This transformation is enabling more fluid, cross-functional roles and changing how teams collaborate.

The aforementioned YC podcast highlights this evolution with Leo Paz’s observation that software engineers will become product engineers. The YC founders who are using AI-generated code are already reaping the benefits. They act more like product people and talk to more customers so they can understand them better and build better products.

Concrete examples of this cross-functionality are already emerging:

  • Designers can now generate functional prototypes without developer assistance using tools like Lovable
  • Product managers can create basic UI mockups to communicate their ideas more effectively
  • Engineers can make design adjustments directly rather than waiting for design handoffs

This doesn’t mean that all specialization disappears. As Diana Hu from YC notes:

Zero-to-one will be great for vibe coding where founders can ship features very quickly. But once they hit product market fit, they’re still going to have a lot of really hardcore systems engineering, where you need to get from the one to n and you need to hire very different kinds of people.

The result is a more nuanced specialization landscape. Early-stage products benefit from generalists who can work across domains with AI assistance. As products mature, deeper expertise remains valuable but is focused on different aspects: system architecture rather than implementation details, information architecture rather than UI production, product strategy rather than feature specification.

Team structures are evolving in response:

  • Smaller, more fluid teams with less rigid role definitions
  • T-shaped skills becoming increasingly valuable—depth in one area with breadth across others
  • New collaboration models replacing traditional waterfall handoffs
  • Emerging hybrid roles that combine traditionally separate domains

The most competitive teams will find the right balance between AI capabilities and human direction, creating new workflows that leverage both. As Johnson warned in the Build Wars competition, “Teams that are pushing back on these new AI tools, get real! This is the way that things are going to go. You’re going to get destroyed by a team of 10 or five or one.”

The ability to adapt across domains is becoming a meta-skill in itself. Those who can navigate multiple disciplines while maintaining a consistent vision will thrive in this new environment where execution is increasingly delegated to artificial intelligence.

Thriving in the Inverted Landscape

The future is already here. AI is fundamentally inverting the skill hierarchy in product development, creating opportunities for those willing to adapt.

Product professionals who succeed in this new landscape will be those who embrace this inversion rather than resist it. This means focusing less on execution mechanics and more on the strategic and human elements that AI cannot replicate: vision, judgment, and taste.

For product managers, double down on developing the abilities to extract profound insights from user conversations and articulate clear, compelling problem statements. Your value will increasingly come from knowing which problems are worth solving rather than specifying how to solve them. AI also can’t align stakeholders or prioritize the work.

For designers, invest in strengthening your design direction skills. The best designers will evolve from skilled craftspeople to visionaries who can guide AI toward creating experiences that resonate emotionally with users. Develop your critical eye and the language to articulate what makes a design succeed or fail. Remember that design has always been about the why.

For engineers, emphasize systems thinking and architecture over implementation details. Your unique value will come from designing resilient, scalable systems and making critical technical decisions that AI cannot yet make autonomously.

Across all roles, three meta-skills will differentiate the exceptional from the merely competent:

  • Prompt engineering: The ability to effectively direct AI tools
  • Judgment and taste development: The discernment to recognize quality and make value-based decisions
  • Cross-functional fluency: The capacity to work effectively across traditional role boundaries

We’re seeing the biggest shift in how we build products since agile came along. Teams are getting smaller and more flexible. Specialized roles are blurring together. And product cycles that used to take months now take days.

There is a silver lining. We can finally focus on what actually matters: solving real problems for real people. By letting AI handle the grunt work, we can spend our time understanding users better and creating things that genuinely improve their lives.

Companies that get this shift will win big. Those that reorganize around these new realities first will pull ahead. But don’t wait too long—as Nielsen points out, this “land grab” won’t last forever. Soon enough, everyone will be working this way.

The future belongs to people who can set the vision and direct AI to make it happen, not those hanging onto skills that AI is rapidly taking over. Now’s the time to level up how you think about products, not just how you build them. In this new world, your strategic thinking and taste matter more than your execution skills.

A screenshot of the YourOutie.is website showing the Lumon logo at the top with the title "Outie Query System Interface (OQSI)" beneath it. The interface has a minimalist white card on a blue background with small digital patterns. The card contains text that reads "Describe your Innie to learn about your Outie" and a black "Get Started" button. The design mimics the retro-corporate aesthetic of the TV show Severance.

Your Outie Has Both Zaz and Pep: Building YourOutie.is with AI

A tall man with curly, graying hair and a bushy mustache sits across from a woman with a very slight smile in a dimly lit room. There’s pleasant, calming music playing. He’s eager with anticipation to learn about his Outie. He’s an Innie who works on the “severed” floor at Lumon. He’s undergone a surgical procedure that splits his work self from his personal self. This is the premise of the show Severance on Apple TV+.

Ms. Casey, the therapist:

All right, Irving. What I’d like to do is share with you some facts about your Outie. Because your Outie is an exemplary person, these facts should be very pleasing. Just relax your body and be open to the facts. Try to enjoy each equally. These facts are not to be shared outside this room. But for now, they’re yours to enjoy.

Your Outie is generous. Your Outie is fond of music and owns many records. Your Outie is a friend to children and to the elderly and the insane. Your Outie is strong and helped someone lift a heavy object. Your Outie attends many dances and is popular among the other attendees. Your Outie likes films and owns a machine that can play them. Your Outie is splendid and can swim gracefully and well.

The scene is from season one, episode two, called “Half Loop.” With season two wrapping up, and with my work colleagues constantly making “my Outie” jokes, I wondered if there was a Your Outie generator. Not really. There’s this meme generator from imgflip, but that’s about it.

Screenshot of the Your Outie meme generator from imgflip.

So, in the tradition of name generator sites like Fantasy Name Generators (you know, for DnD), I decided to make my own using an LLM to generate the wellness facts.

The resulting website took four-and-a-half days. I started Monday evening and launched it by dinner time Friday. All told, it was about 20 hours of work. Apologies to my wife, to whom I barely spoke while I was in the zone with my creative obsession.

Lumon Outie Query System Interface (OQSI)

Your Outie started with a proof-of-concept.

I started with a proof-of-concept using Claude. I gathered information about the show and all the official Your Outie wellness facts from the fantastic Severance Wiki and attached them to this prompt:

I would like to create a “Wellness Fact” generator based on the “Your Outie is…” format from the character Ms. Casey. Question: What questions should we ask the user in order to create responses that are humorous and unique? These need to be very basic questions, potentially from predefined dropdowns.

Claude’s response made me realize that asking about the real person was the wrong way to go. It felt too generic. Then I wondered, what if we just had the user role-play as their Innie?

The prototype was good and showed how fun this little novelty could be. So I decided to put my other side-project on hold for a bit—I’ve been working on redesigning this site—and make a run at creating this.

Screenshot of Claude with the chat on the left and the prototype on the right. The prototype is a basic form with dropdowns for Innie traits.

Your Outie developed the API first but never used it.

My first solution was to create a Python API with a Next.js frontend. From my experience building AI-powered software, I knew that Python was the preferred language for working with LLMs. I also used LangChain so that I could keep my options open among foundation models. I took the TypeScript code from Claude and asked Cursor to use Python and LangChain to develop the API. Before long, I had a working backend.

One interesting problem I ran into was that the facts from GPT often came back very similar to each other. So, I added code to categorize each fact and prevent dupes. Tweaking the prompt also yielded better-written results.
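The dedupe logic is simple in spirit. Here’s a minimal TypeScript sketch of the idea (my original was Python, and the categories here are hypothetical): each generated fact gets bucketed into a rough category, and a new fact is rejected if its category has already been used.

```ts
// Minimal sketch of the dedupe idea. In the real app, the category
// came back from the LLM alongside each generated fact.
type WellnessFact = { text: string; category: string };

const seenCategories = new Set<string>();

function acceptFact(fact: WellnessFact): boolean {
  // Reject a fact if we already kept one in the same category, so the
  // results don't cluster around, say, music or dancing.
  if (seenCategories.has(fact.category)) return false;
  seenCategories.add(fact.category);
  return true;
}

const candidates: WellnessFact[] = [
  { text: "Your Outie is fond of music and owns many records.", category: "music" },
  { text: "Your Outie hums while filing.", category: "music" },
  { text: "Your Outie can swim gracefully and well.", category: "athletics" },
];

const kept = candidates.filter(acceptFact); // keeps one music fact, one athletics fact
```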

Additionally, I tried all the available models—except for the reasoning ones like o1. OpenAI’s GPT-4o-mini seemed to strike a good balance.

This was Monday evening.

Honestly, this was trivial to do. Cursor plus Python and LangChain made it easy. 172 lines of code. Boom.

I would later regret choosing Python, however.

Your Outie designed the website in Figma but only the first couple of screens.

Now the fun part was coming up with the design. There were many possibilities. I could riff on the computer terminals on the severed floor like the macrodata refinement game. I could emulate 1970s and ’80s corporate design like Mr. Milchick’s performance review report.

Screenshot of an old CRT monitor with a grid of numbers. Some of these numbers are captured into a box on the bottom of the screen.

The official macrodata refinement game from Apple.

Still from the show of the character Seth Milchick's performance review report.

Seth Milchick receives his first performance review in this report.

I ended up with the latter, but as I started designing, I realized I could incorporate a little early Macintosh vibe. I began thinking of the website as a HyperCard stack. So I went with it.

I was anxious to build the frontend. I started a new Next.js project and fired up Cursor. I forwent a formal PRD and started vibe coding (ugh, I hate that term, more on this in an upcoming post). Using static mock data, I got the UI to a good place by the end of the evening—well, midnight—but there was still a lot of polishing to do.

This was Tuesday night.

Screenshot of the author's Figma canvas showing various screen designs and typographic explorations.

My Figma canvas showing some quick explorations.

Your Outie struggled bravely with Cursor and won.

Beyond the basic generator, I wanted to create something that had both zaz and pep. Recalling the eight-hour remix of the Severance theme by ODESZA, “Music to Refine To,” I decided to add a music player to the site. I found a few cool tracks on Epidemic Sound and tried building the player. I thought it would be easy, but Cursor and I struggled mightily for hours. Play/pause wouldn’t work. Autoplaying the next track wouldn’t work. Etc. Eventually, after getting at least play/pause working, I cut my losses and combined the tracks into one long track. Six minutes should be long enough, right?
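The part that finally worked is conceptually tiny, which made the struggle all the more maddening. A minimal React sketch of a play/pause toggle (the track path here is a placeholder, not my actual file):

```tsx
import { useRef, useState } from "react";

// Minimal play/pause toggle for a single combined track.
// "/mde-mix.mp3" is a placeholder path.
export function MusicPlayer() {
  const audioRef = useRef<HTMLAudioElement>(null);
  const [playing, setPlaying] = useState(false);

  const toggle = async () => {
    const audio = audioRef.current;
    if (!audio) return;
    if (playing) {
      audio.pause();
    } else {
      // play() returns a promise; browsers reject it when autoplay is
      // blocked, which is part of what makes auto-advancing tracks fiddly.
      await audio.play();
    }
    setPlaying(!playing);
  };

  return (
    <>
      <audio ref={audioRef} src="/mde-mix.mp3" loop />
      <button onClick={toggle}>{playing ? "Pause" : "Play"}</button>
    </>
  );
}
```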

v0 helped with generating the code for the gradient background.

This is my ode to the Music Dance Experience (MDE) from season one. That was Wednesday.

Still from the show of two characters dancing in the middle of the office.

Your Outie reintegrated.

Thursday’s activity was integrating the backend with the frontend. Again, with Cursor, this was relatively straightforward. The API took the request from the frontend and provided a response. The frontend displayed it. I spent more time fine-tuning the animations and getting the mobile layout just right. You wouldn’t believe how much Cursor-wrangling I had to do to get the sliding animations and fades dialed in. I think this is where AI struggles—with the nuances.
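The contract between the two sides was a simple request/response round trip. Roughly, from the frontend (the endpoint name and payload shape here are hypothetical, not my exact API):

```ts
// Hypothetical shape of the frontend-to-backend round trip.
type InnieProfile = { department: string; traits: string[] };

async function fetchWellnessFacts(profile: InnieProfile): Promise<string[]> {
  const res = await fetch("/api/facts", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(profile),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const data: { facts: string[] } = await res.json();
  return data.facts; // e.g., ["Your Outie is generous.", ...]
}
```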

By the end of the night, I had a nice working app. Now, I had to look for a host. Vercel doesn’t support Python. After researching Digital Ocean, I realized I would have to pay for two app servers: one for the Node.js frontend and another for the Python backend. That’s not too cost-effective for a silly site like this. Again, it was midnight, so I slept on it.

Your Outie once refactored code from Python to React in just one hour.

Still from the show of the main character, Mark S. staring at his computer monitor.

In the morning, I decided to refactor the API from Python to React. LangChain has a JavaScript version, so I asked Cursor to translate the original Python code. The translation wasn’t as smooth as I had hoped. Again, it missed many of the details I had spent time putting into the original prompt and logic. But a few more chats later, the translation was complete, and the app was all in React.
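Structurally, the JavaScript chain ended up looking a lot like the Python one. A stripped-down sketch of what a LangChain.js fact generator can look like (the prompt here is heavily abbreviated, and this is not my actual code):

```ts
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

// Abbreviated sketch; the real prompt includes show lore and the official
// wellness facts as examples. Assumes an ESM module (top-level await)
// and an OPENAI_API_KEY in the environment.
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You write 'Your Outie is...' wellness facts in the voice of Ms. Casey."],
  ["human", "Innie profile: {profile}. Write one wellness fact."],
]);

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0.9 });
const chain = prompt.pipe(model).pipe(new StringOutputParser());

const fact = await chain.invoke({
  profile: "works in Macrodata Refinement, fears the break room",
});
console.log(fact); // e.g., "Your Outie is splendid and can parallel park on the first try."
```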

Between the end of my work day and dinner on Friday, I finished the final touchups on the site: removing debugging console messages, rewriting error messages to be more Severance-like, and making sure there were no layout bugs.

I had to fix a few more build errors, so I used Claude Code. It was a lot easier than sitting there going back and forth with Cursor.

Then, I connected my repo to Vercel, and voila! The Lumon Outie Query System Interface (OQSI) was live at YourOutie.is.

I hope you enjoy it as much as I had fun making it. Now, I think I owe my wife some flowers and a date night.