[Header image: a foggy impressionist painting of a steam train crossing a bridge, with a plume of steam and a small rowboat on the river below.]

The Year AI Changed Design

At the beginning of this year, AI prompt-to-code tools were still very new to the market. Lovable had just relaunched in December, and Bolt debuted just a couple of months before that. Cursor was my first taste of using AI to code, back in November of 2024. As we sit here in December, just 12 months later, our profession and the discipline of design have materially changed. Now, of course, the core is still the same. But how we work, how we deliver, and how we achieve results are different.

When ChatGPT got good (around GPT-4), I began using it as a creative sounding board. Design is never a solitary activity, and feedback from peers and partners has always been part of the process. Being able to bounce ideas off an always-on, always-willing creative partner was great. To be sure, I didn’t share sketches or mockups; I was playing with written ideas.

Now, ChatGPT or Gemini’s deep research features are often where I start when tackling a new feature. After the chatbot has written its report, I’ll read it and ask a lot of questions as a way of learning and internalizing the material. I’ll then use that as a jumping-off point for additional research. Many designers on my team do the same.

Business leaders are obsessed with quantifying AI efficiency—McKinsey says $4.4 trillion in potential, EY says 96% of companies see gains—but the research is murkier than the headlines suggest. One study found developers actually worked slower with AI tools in familiar codebases. The honest answer for design work? It depends on the task. For one, it saves a bunch of Google searches and the manual slog of reading all the source material. For example, if I’m working on a custom fields feature for a CRM, it’s great to fire off a deep research request and have the AI summarize findings across five different CRMs. That’s much more efficient than hunting down and reading all that documentation myself. So yeah, maybe it saves some time in this part of the process.

Back in February, Andrej Karpathy coined the term “vibe coding” to describe building software by prompting an AI and accepting its output, the workflow the nascent prompt-to-code tools encourage. As the year progressed, those tools got good enough to build advanced prototypes. Whether it’s using Claude to generate interaction ideas in its canvas, using Lovable to whip up an experience, or prompting Figma Make, it’s a new method for achieving the same goal: simulate the experience and validate it with users. We’ve come a long way from InVision clickthrough prototypes. Figma’s traditional built-in prototyping mode allows for more precision and complete adherence to the look and feel of an application, but these new tools enable much more complex interactions. What we lose in design-system fidelity (most of these tools can’t yet import a design system, though that’s changing), we gain in more realistic simulations. No longer do we have to fake typing content into an input!

But some designers aren’t stopping at prototypes. With tools like Cursor and Claude Code, they’re going further—shipping working code to production. There are many, many caveats. Frontend-only changes are easy and low stakes. But shipping full features with backend implications and database queries is a whole different ballgame. And then the question is, do we want to? I think it works in some orgs and with some products. For example, in enterprise SaaS, where both functionality and interactions can be complex, I believe our time as designers is better spent upfront: redefining problems, solving the right things, and doing usability testing.

As we use AI tools more and more, we are also being tasked with adding AI features to existing products or conjuring entirely new AI-native products. In other words, we’re increasingly having to use AI as material. We’ve heard this term develop over the course of 2025, and it essentially means adding AI functionality to our toolkit of parts for solving problems. Some functionality is obvious, like an AI copilot; other features are more deeply integrated, like smart autocomplete, recommendations, or AI-enabled semantic search. Whatever the problem, we need to understand when to employ AI strategies like agents, RAG (retrieval-augmented generation), and orchestration.
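
To make one of those strategies concrete, here’s a rough sketch of the RAG pattern in Python: retrieve the documents most relevant to a user’s question, then ask the model to answer grounded only in them. The embed and generate functions below are placeholders for whatever embedding and chat APIs a team actually uses; the names are hypothetical.

```python
from typing import Callable

def cosine(a: list[float], b: list[float]) -> float:
    """Similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def answer_with_context(
    question: str,
    documents: list[str],
    embed: Callable[[str], list[float]],   # placeholder for an embeddings API
    generate: Callable[[str], str],        # placeholder for a chat/completions API
    top_k: int = 3,
) -> str:
    # Retrieve: rank documents by similarity to the question.
    q_vec = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(embed(d), q_vec), reverse=True)
    context = "\n\n".join(ranked[:top_k])
    # Augment and generate: ground the answer in the retrieved snippets.
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```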

The surface area of what UX and digital product designers work on has therefore increased. When I interviewed Elena Pacenti, the Director of the MDes Interaction Design program at California College of the Arts, in early summer, she expressed optimism about the future of our profession.

She foresees a higher demand for designers: “I do not believe that designers will be less in demand. I think there will be a tremendous need for designers.” She argues that the last ten years were spent developing the technology, and now designers are needed to act as stakeholders who “bring value for human beings” and determine “what makes sense and what doesn’t.”

Our value as systems thinkers, as the ones who translate between business and user, and ultimately as the folks who fight for the user, remains as essential to business as ever. While it’s fun to whip up an experience in Cursor or ship some frontend code, I will argue that we’re more valuable when we derisk feature development by discerning how to solve user problems. If we want to be seen as strategic leaders, we need to stop worrying about the pixels so much and leave that to the AI. If this past year has taught us anything, it’s that AI will continue to get better. We should continue to provide value by creating the future and helping users connect with it. AI can extrapolate from what exists. Humans can imagine what doesn’t yet.

P.S. The image above is J.M.W. Turner’s Rain, Steam and Speed – The Great Western Railway from 1844. Turner painted it when rail travel was remaking England—a new technology moving faster than people could fully comprehend. The train in the painting is barely visible, emerging from mist and rain, more felt than seen. That’s what this year has been like. The transformation is already underway, and we’re all still squinting to make out its shape.
