
155 posts tagged with “product design”

Great reminder from Kai Wong about getting stuck on a solution too early:

Imagine this: the Product Manager has a vision of a design solution based on some requirements and voices it to the team. They say, “I want a table that allows us to check statuses of 100 devices at once.”

You don’t say anything, so that sets the anchor of a design solution as “a table with a bunch of devices and statuses.”


Avoid premature solutions: how to respond when stakeholders ask for certain designs

How to avoid anchoring problems that result in stuck designers

dataanddesign.substack.com

I’ve been very interested in finding tools to close the design-to-code gap. Martina Sartor writing in UX Planet articulates why that is so important:

After fifteen years hopping between design systems, dev stand-ups, and last-minute launch scrambles, I’m convinced design-to-dev QA is still one of the most underestimated bottlenecks in digital product work. We pour weeks into meticulous Figma files, yet the last mile between mock-up and production code keeps tripping us up.

This is an honest autopsy of why QA hurts and how teams can start healing it — today — without buying more software (though new approaches are brewing).


Why Design-to-Dev QA Still Stings

(and Practical Ways to Ease the Pain)

uxplanet.org

In the early days of computing, it was easy for one person to author a complete program. Nowadays, because the software we create is so complex, we need teams.

Gaurav Sinha writing for UX Planet:

The faster you accept that they’re not going to change their communication style, the faster you can focus on what actually works — learning to decode what they’re really telling you. Because buried in all that technical jargon is usually something pretty useful for design decisions.

It’s a fun piece on learning how to speak engineer.


The designer’s guide to decoding engineer-speak.

When engineers sound like they’re speaking alien.

uxplanet.org

When you’re building a SaaS app, I believe it’s important to understand the building blocks, or objects, in your app. What are they? How do they relate to each other? Should those relationships be peer-to-peer or parent-child? Early in my tenure at BuildOps, I mentioned this way of thinking to one of my designers and they pointed me to Object-Oriented UX (OOUX), a methodology pioneered by Sophia Prater.

Mateusz Litarowicz writes:

Object-Oriented UX is a way of thinking about design, introduced and popularized by Sophia Prater. It assumes that instead of starting with specific screens or user flows, we begin by identifying the objects that should exist in the system, their attributes, the relationships between them, and the actions users can take on those objects. Only after this stage do we move on to designing user flows and wireframes.

To be honest, I’d long thought this way, ever since my days at Razorfish when our UX director Marisa Gallagher talked about how every website is built around a core unit, or object. At the time, she used Netflix as an example—it’s centered around the movie. CRMs, CMSes, LMSes, etc. are all object-based.

Anyway, I think Litarowicz writes a great primer for OOUX. The other—and frankly more important, IMHO—advantage to thinking this way, especially for a web app, is that your developers think this way too.
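
To make that concrete, here's a minimal sketch in TypeScript of what an OOUX-style object map can look like once developers pick it up. All the names are hypothetical (a device-monitoring app), not from Litarowicz's article:

```typescript
// A minimal sketch of an OOUX-style object map for a hypothetical
// device-monitoring app. Objects, attributes, and relationships come
// first; screens and flows come later.

interface Device {
  id: string;
  name: string;
  status: "online" | "offline" | "error"; // attributes
}

interface Site {
  id: string;
  address: string;
  devices: Device[]; // parent-child: a site owns its devices
}

interface Technician {
  id: string;
  name: string;
}

// Peer-to-peer: neither object owns the other, so the relationship
// lives in its own object.
interface Assignment {
  technicianId: Technician["id"];
  deviceId: Device["id"];
  assignedAt: Date;
}

// Once the map exists, the actions users can take on each object, and
// therefore the screens you need, tend to fall out of it.
type DeviceAction = "view" | "restart" | "reassign" | "archive";
```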


Introduction to Object-Oriented UX

How Object-Oriented UX can help you design complex systems

fundament.design

Nate Jones did a yeoman’s job of summarizing Mary Meeker’s 340-slide deck on AI trends, the 2025 “Trends – Artificial Intelligence (TAI)” report. For those of you who don’t know, Mary Meeker is a famed technology analyst and investor known for her insightful reports on tech industry trends. For the longest time, as an analyst at Kleiner Perkins, she published the Internet Trends report. And she was always prescient.

Half of Jones’ post is the summary, while the other half is how the report applies to product teams. The whole thing is worth 27 minutes of your time, especially if you work in software.


I Summarized Mary Meeker's Incredible 340 Page 2025 AI Trends Deck—Here's Mary's Take, My Response, and What You Can Learn

Yes, it's really 340 pages, and yes I really compressed it down, called out key takeaways, and shared what you can actually learn about building in the AI space based on 2025 macro trends!

natesnewsletter.substack.com
Surreal, digitally manipulated forest scene with strong color overlays in red, blue, and purple hues. A dark, blocky abstract logo is superimposed in the foreground.

Thoughts on the 2024 Design Tools Survey

Tommy Geoco and team are finally out with the results of their 2024 UX Design Tools Survey.

First, two quick observations before I move on to longer ones:

  • The respondent population of 2,200+ designers is well-balanced among company size, team structure, client vs. product focus, and leadership responsibility
  • Predictably, Figma dominates the tool stacks of most segments

As a reaction to the OpenAI + io announcement two weeks ago, Christopher Butler imagines a mesh computing device network he calls “personal ambient computing”:

…I keep thinking back to Star Trek, and how the device that probably inspired the least wonder in me as a child is the one that seems most relevant now: the Federation’s wearables. Every officer wore a communicator pin — a kind of Humane Pin light — but they also all wore smaller pins at their collars signifying rank. In hindsight, it seems like those collar pins, which were discs the size of a watch battery, could have formed some kind of wearable, personal mesh network. And that idea got me going…

He describes the device as a standardized disc that can be attached to any enclosure. I love his illustration too:

Diagram of a PAC Mesh Network connecting various devices: Pendant, Clip, Watch, Portable, Desktop, Handset, and Phone in a circular layout.

Christopher Butler: “I imagine a magnetic edge system that allows the disc to snap into various enclosures — wristwatches, handhelds, desktop displays, wearable bands, necklaces, clips, and chargers.”

Essentially, it’s an always-on, always observing personal AI.
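
For fun, here's a speculative sketch of how Butler's disc-plus-enclosure idea might be modeled. Every name here is my own invention, purely my reading of his concept, not anything from his post:

```typescript
// A speculative sketch of Butler's PAC concept: one standardized
// compute disc that snaps into many enclosures. All names hypothetical.

interface PacDisc {
  id: string;
  batteryPct: number;
  ownerProfile: string; // the disc carries identity and the personal AI
}

// Enclosures add input/output; the disc itself stays the same.
type Enclosure =
  | { kind: "pendant"; hasMic: true }
  | { kind: "watch"; hasMic: true; hasDisplay: true }
  | { kind: "desktop"; hasDisplay: true; powered: true }
  | { kind: "clip"; hasMic: true };

interface MeshNode {
  disc: PacDisc;
  enclosure: Enclosure;
}

// The mesh's abilities at any moment are the union of whatever
// enclosures happen to be in radio range.
function meshCapabilities(nodes: MeshNode[]): Set<string> {
  const caps = new Set<string>();
  for (const { enclosure } of nodes) {
    if ("hasMic" in enclosure) caps.add("voice-input");
    if ("hasDisplay" in enclosure) caps.add("display");
  }
  return caps;
}
```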


PAC – Personal Ambient Computing - Christopher Butler

Like most technologists of a certain age, many of my expectations for the future of computing were set by Star Trek production designers. It’s quite

chrbutler.com

Following up on OpenAI’s acquisition of Jony Ive’s hardware startup, io, Mark Wilson, writing for Fast Company:

As Ive told me back in 2023, there have been only three significant modalities in the history of computing. After the original command line, we got the graphical user interface (the desktop, folders, and mouse of Xerox, Mac OS, and Windows), then voice (Alexa, Siri), and, finally, with the iPhone, multitouch (not just the ability to tap a screen, but to gesture and receive haptic feedback). When I brought up some other examples, Ive quickly nodded but dismissed them, acknowledging these as “tributaries” of experimentation. Then he said that to him the promise, and excitement, of building new AI hardware was that it might introduce a new breakthrough modality to interacting with a machine. A fourth modality.

Hmm, it hasn’t taken off yet because AR hasn’t really gained mainstream popularity, but I would argue that hand gestures in AR UIs are a fourth modality. But Ive thinks different. Wilson continues:

Ive’s fourth modality, as I gleaned, was about translating AI intuition into human sensation. And it’s the exact sort of technology we need to introduce ubiquitous computing, also called quiet computing and ambient computing. These are terms coined by the late UX researcher Mark Weiser, who in the 1990s began dreaming of a world that broke us free from our desktop computers to usher in devices that were one with our environment. Weiser did much of this work at Xerox PARC, the same R&D lab that developed the mouse and GUI technology that Steve Jobs would eventually adopt for the Macintosh. (I would also be remiss to ignore that ubiquitous computing is the foundation of the sci-fi film Her, one of Altman’s self-stated goalposts.)

Ah, essentially an always-on, always watching AI that is ready to assist. But whatever form factor this device takes, it will likely depend on a smartphone:

The first io device seems to acknowledge the phone’s inertia. Instead of presenting itself as a smartphone-killer like the Ai Pin or as a fabled “second screen” like the Apple Watch, it’s been positioned as a third, er, um … thing next to your phone and laptop. Yeah, that’s confusing, and perhaps positions the io product as unessential. But it also appears to be a needed strategy: Rather than topple these screened devices, it will attempt to draft off them.

Wilson ends with the idea of a subjective computer, one that has personality and gives you opinions. He explains:

I think AI is shifting us from objective to subjective. When a Fitbit counts your steps and calories burned, that’s an objective interface. When you ask ChatGPT to gauge the tone of a conversation, or whether you should eat better, that’s a subjective interface. It offers perspective, bias, and, to some extent, personality. It’s not just serving facts; it’s offering interpretation.

The entire column is worth a read.


Can Jony Ive and Sam Altman build the fourth great interface? That's the question behind io

Where Meta, Google, and Apple zig, Ive and Altman are choosing to zag. Can they pull it off?

fastcompany.com

Nick Babich writing for UX Planet:

Because AI design and code generators quickly take an active part in the design process, it’s essential to understand how to make the most of these tools. If you’ve played with Cursor, Bolt, Lovable, or v0, you know the output is only as good as the input.

Well said, especially as prompting is the primary input for these AI tools. He goes on to enumerate the five parts of a good prompt. Worth a quick read.
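
Babich's five parts are worth reading in his article. As a generic illustration of the principle (my own anatomy, not his list), compare a vague one-liner with a prompt that spells out context, constraints, and acceptance criteria:

```typescript
// A generic, hypothetical example of a structured prompt for a
// prompt-to-code tool like v0. This structure is my own illustration,
// not Babich's five parts.
const vaguePrompt = "Make me a dashboard for devices.";

const structuredPrompt = `
Role: You are generating a React + Tailwind component.
Context: Internal dashboard that monitors ~100 IoT devices.
Task: A sortable table of devices showing name, status, and last-seen.
Constraints: Reuse our existing Badge component for status; no new deps.
Acceptance: Renders 100 rows smoothly and supports keyboard sorting.
`;
```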


How to write better prompts for AI design & code generators

Because AI design and code generators quickly take an active part in the design process, it’s essential to understand how to make the most…

uxplanet.org

Related to my earlier post today about Arc’s novelty tax, here’s an essay by DOC, a tribute to consistency.

Leveraging known, established UX patterns and sticking to them prevent users from having to learn net-new interactions and build net-new mental models every time they engage with a new product.

But, as Josh Miller wrote in the aforementioned post, “New interfaces start from familiar ones.” DOC’s essay uses jazz as a metaphor:

Consistency is about making room for differentiation. Think about a jazz session: the band starts from a known scale, rhythm. One musician breaks through, improvising on top of that pattern for a few minutes before joining the band again. The band, the audience, everyone knows what is happening, when it starts and when it ends, because the foundation of it all is a consistent melody.

Geometric pattern of stacked rectangular blocks forming a diagonal structure against a dark sky. Artwork by Maya Lin.

Consistency

On compounding patterns and the art of divergence.

doc.cc
Colorful illustration featuring the Figma logo on the left and a whimsical character operating complex, abstract machinery with gears, dials, and mechanical elements in vibrant colors against a yellow background.

Figma Make: Great Ideas, Nowhere to Go

Nearly three weeks after it was introduced at Figma Config 2025, I finally got access to Figma Make. It’s in beta, and Figma made sure we all know it. So I’ll say upfront that it’s a bit unfair to do an official review. However, many of the tools in my AI prompt-to-code shootout article are also in beta.

Since this review is fairly visual, I made a video as well that summarizes the points in this article pretty well.

Patrick Morgan writing for UX Collective:

The tactical tasks that juniors traditionally cut their teeth on are increasingly being delegated to AI tools. Tasks that once required a human junior designer with specialized training can now be handled by generative AI tools in a fraction of the time and cost to the organization.

This fundamentally changes the entry pathway. When the low-complexity work that helped juniors develop their skills is automated away, we lose the natural onramp that allowed designers to gradually progress from tactical execution to strategic direction.

Remote work has further complicated things by removing informal learning opportunities that happen naturally in an in-person work environment, like shadowing senior designers, being in the room for strategy discussions, or casual mentorship chats.

I’ve been worried about this a lot. I do wonder how the next class of junior designers—and all professionals, for that matter—will learn. (I cited Aneesh Raman, chief economic opportunity officer at LinkedIn, in my previous essay.)

Morgan does have some suggestions:

Instead of waiting for the overall market to become junior-friendly again (which I don’t see happening), focus your search on environments more structurally accepting of new talent:

1. Very early-stage startups: Pre-seed or seed companies often have tight budgets and simply need someone enthusiastic who can execute designs. It will be trial-by-fire, but you’ll gain rapid hands-on experience.

2. Stable, established businesses outside of ‘big tech’: Businesses with predictable revenue streams often provide structured environments for junior designers (my early experience at American Express is a prime example). It might not be as glamorous as a ‘big tech’ job, but as a result they’re less competitive while still offering critical experience to get started.

3. Design agencies: Since their business model focuses on selling design services, agencies naturally employ more designers and can support a mix of experience levels. The rapid exposure to multiple projects makes them solid launchpads even if your long-term goal is to work in-house in tech.


No country for Junior Designers

The structural reality behind disappearing entry-level design roles and some practical advice for finding ways in

uxdesign.cc

OpenAI is acquiring a hardware company called “io” that Jony Ive cofounded just a year ago:

Two years ago, Jony Ive and the creative collective LoveFrom, quietly began collaborating with Sam Altman and the team at OpenAI.

It became clear that our ambitions to develop, engineer and manufacture a new family of products demanded an entirely new company. And so, one year ago, Jony founded io with Scott Cannon, Evans Hankey and Tang Tan.

We gathered together the best hardware and software engineers, the best technologists, physicists, scientists, researchers and experts in product development and manufacturing. Many of us have worked closely for decades.

The io team, focused on developing products that inspire, empower and enable, will now merge with OpenAI to work more intimately with the research, engineering and product teams in San Francisco.

It has been an open rumor that Sam Altman and Ive have been working together on some hardware. I had assumed they’d already formalized their partnership, but I guess not.


There are some bold statements from Ive and Altman in the launch video, teasing a revolutionary new device that will enable quicker, better access to ChatGPT, with a lot less friction than the experience Altman describes:

If I wanted to ask ChatGPT something right now about something we had talked about earlier, think about what would happen. I would, like, reach down. I would get on my laptop, I’d open it up, I’d launch a web browser, I’d start typing, and I’d have to, like, explain that thing. And I would hit enter, and I would wait, and I would get a response. And that is at the limit of what the current tool of a laptop can do. But I think this technology deserves something much better.

There are a couple of other nuggets about what this new device might be from the statements Ive and Altman made to Bloomberg:

…Ive and Altman don’t see the iPhone disappearing anytime soon. “In the same way that the smartphone didn’t make the laptop go away, I don’t think our first thing is going to make the smartphone go away,” Altman said. “It is a totally new kind of thing.”

“We are obviously still in the terminal phase of AI interactions,” said Altman, 40. “We have not yet figured out what the equivalent of the graphical user interface is going to be, but we will.”

While we don’t know what the form factor will be, I’m sure it won’t be a wearable pin—ahem, RIP Humane. Just to put it out there—I predict it will be a voice assistant in an earbud, very much like the AI in the 2013 movie “Her.” Altman has long been obsessed with the movie, going as far as trying to get Scarlett Johansson to be one of the voices for ChatGPT.

EDIT 5/22/2025, 8:58am PT: Added prediction about the form factor.


Sam and Jony introduce io

Building a family of AI products for everyone.

openai.com
Stylized digital artwork of two humanoid figures with robotic and circuit-like faces, set against a vivid red and blue background.

The AI Hype Train Has No Brakes

I remember two years ago, when the CEO of the startup I worked for at the time said that no VC investments were being made unless they had to do with AI. I thought AI was overhyped, and that the media frenzy over it couldn’t get any crazier. I was wrong.

Looking at Google Trends data, interest in AI has doubled in the last 24 months. And I don’t think it’s hit its plateau yet.

Line chart showing Google Trends interest in “AI” from May 2020 to May 2025, rising sharply in early 2023 and peaking near 100 in early 2025.

I was recently featured on the Design of AI podcast to discuss my article that pitted eight AI prompt-to-code tools head-to-head. We talked through the list, but I also offered a point of view on where I see the gap.

Arpy Dragffy and Brittany Hobbs close out the episode this way (emphasis mine):

So it’s great that Roger did that analysis and that evaluation. I honestly am a bit shocked by those results. Again, his ranking was that Subframe was number one, Onlook was two, v0 number three, Tempo number four. But again, if you look at his matrix, only two of the tools scored over 70 out of 100 and only one of the tools he could recommend. And this really shines a dark light on AI products and their maturity right now. But I suspect that this comes down to the strategy that was used by some of these products. If you go to them, almost every single one of them is actually a coding tool, except the two that scored the highest.

Onlook, its headline is “The Cursor for Designers.” So of course it’s a no brainer that makes a lot of sense. That’s part of their use cases, but nonetheless it didn’t score that good in his matrix.

The top scoring one from his list, Subframe, is directly positioned to designers. The title is “Design meet code.” It looks like a UI editor. It looks like the sort of tool that designers wish they had. These tools are making it easier for product managers to run research programs, to turn early prototypes and ideas into code, and to make really quick design changes to existing code. When you need to make a change to a website, you can go straight into one of these tools and stand up the code.

Listen on Apple Podcasts and Spotify.


Rating AI Design to Code Products + Hacks for ChatGPT & Claude [Roger Wong]

Designers are overwhelmed with too many AI products that promise to help them simplify workflows and solve the last mile of design-to-code. With the...

designof.ai

I tried early versions of Stable Diffusion but ended up using Midjourney exclusively because of the quality. I’m excited to check out the full list. (Oh, and of course I’ve used DALL-E as well via ChatGPT. But there’s not a lot of control there.)


Stable Diffusion & Its Alternatives: Top 5 AI Image Generators

AI-generated imagery has become an essential part of the modern product designer’s toolkit — powering everything from early-stage ideation…

uxplanet.org

For as long as I can remember, I’ve been fascinated by how television shows and movies are made. I remember the specials ABC broadcast about the making of The Empire Strikes Back and other Lucasfilm movies like the Indiana Jones series. More recently—especially with the advent of podcasts—I’ve loved listening to how showrunners think about writing their shows. For example, as soon as an episode of Battlestar Galactica aired, I would rewatch it with Ronald D. Moore’s commentary. These days, I’m really enjoying the official The Last of Us podcast because it features commentary from both Craig Mazin and Neil Druckmann.

Anyway, thinking about personas as characters from TV shows and movies and using screenwriting techniques is right up my alley. Laia Tremosa for the IxDF:

Hollywood spends millions to bring characters to life. UX design teams sometimes spend weeks… only to make personas no one ever looks at again. So don’t aim for personas that look impressive in a slide deck. Aim for personas that get used—in design reviews, product decisions, and testing plans.

Be the screenwriter. Be the director. Be the casting agent.


The Hollywood Guide to UX Personas: Storytelling That Drives Better Design

Great products need great personas. Learn how to build them using the storytelling techniques Hollywood has perfected.

interaction-design.org
Illustrated background of colorful wired computer mice on a pink surface with a large semi-transparent Figma logo centered in the middle.

Figma Takes a Big Swing

Last week, Figma held their annual user conference Config in San Francisco. Since its inception in 2020, it has become a significant UX conference that covers more than just Figma’s products and community. While I’ve not yet had the privilege of attending in person, I do try to catch the livestreams or videos afterwards.

Nearly 17 months after Adobe and Figma announced the termination of their merger agreement, Figma flexed their muscle—fueled by the $1 billion breakup fee, I’m sure—by announcing four new products. They are Figma Draw, Make, Sites, and Buzz.

  • Draw: It’s a new mode within Figma Design that reveals additional vector drawing features.
  • Make: This is Figma’s answer to Lovable and the other prompt-to-code generators.
  • Sites: Finally, you can design and publish websites from Figma, hosted on their infrastructure.
  • Buzz: Pass off assets to clients and marketing teams and they can perform lightweight and controlled edits in Buzz.

A lot of chatter in the larger design and development community has been either “AI is the coolest” or “AI is shite and I want nothing to do with it.”

Tobias van Schneider puts it plainly:

AI is here to stay.

Resistance is futile. Doesn’t matter how we feel about it. AI has arrived, and it’s going to transform every industry, period. The ship has sailed, and we’re all along for the ride whether we like it or not. Not using AI in the future is the equivalent to not using the internet. You can get away with it, but it’s not going to be easy for you.

He goes on to argue that craftspeople have been affected the most, not only by AI, but by the proliferation of stock and templates:

The warning signs have been flashing for years. We’ve witnessed the democratization of design through templates, stock assets, and simplified tools that turned specialized knowledge into commodity. Remember when knowing Photoshop guaranteed employment? Those days disappeared years ago. AI isn’t starting this fire, it’s just pouring gasoline on it. The technical specialist without artistic vision is rapidly becoming as relevant as a telephone operator in the age of smartphones. It’s simply not needed anymore.

But he’s not all doom and gloom.

If the client could theoretically do everything themselves with AI, then why hire a designer?

Excellent question. I believe there are three reasons to continue hiring a designer:

1. Clients lag behind. It’ll take a few years before they fully catch up and stop hiring creatives for certain tasks, at which point creatives will have caught up on what makes them worthy (beyond just production output).

  2. Clients famously don’t know what they want. That’s the primary reason to hire a designer with a vision. Even with AI at their fingertips, they wouldn’t know what instructions to give because they don’t understand the process.

  3. Smart clients focus on their strengths and outsource the rest. If I run a company I could handle my own bookkeeping, but I’ll hire someone. Same with creative services. AI won’t change that fundamental business logic. Just because I can, doesn’t mean I should.

And finally, he echoes the same sentiment that I’ve been saying (not that I’m the originator of this thought—just great minds think alike!):

What differentiates great designers then?

The Final Filter: taste & good judgment

Everyone in design circles loves to pontificate about taste, but it’s always the people with portfolios that look like a Vegas casino who have the most to say. Taste is the emperor’s new clothes of the creative industry, claimed by all, possessed by few, recognized only by those who already have it.

In other words, as designers, we need to lean into our curation skills.


The future of the designer

Let's not bullshit ourselves. Our creative industry is in the midst of a massive transformation. MidJourney, ChatGPT, Claude and dozens of other tools have already fundamentally altered how ideation, design and creation happens.

vanschneider.com

Dan Maccarone:

If users don’t trust the systems we design, that’s not a PM problem. It’s a design failure. And if we don’t fix it, someone else will, probably with worse instincts, fewer ethics, and a much louder bullhorn.

UX is supposed to be the human layer of technology. It’s also supposed to be the place where strategy and empathy actually talk to each other. If we can’t reclaim that space, can’t build products people understand, trust, and want to return to, then what exactly are we doing here?

It is a long read but well worth it.


We built UX. We broke UX. And now we have to fix it!

We didn’t just lose our influence. We gave it away. UX professionals need to stop accepting silence, reclaim our seat at the table, and…

uxdesign.cc
A futuristic scene with a glowing, tech-inspired background showing a UI design tool interface for AI, displaying a flight booking project with options for editing and previewing details. The screen promotes the tool with a “Start for free” button.

Beyond the Prompt: Finding the AI Design Tool That Actually Works for Designers

There has been an explosion of AI-powered prompt-to-code tools within the last year. The space began with full-on integrated development environments (IDEs) like Cursor and Windsurf, which enabled developers to leverage AI assistants right inside their coding apps. Then came tools like v0, Lovable, and Replit, where users could prompt screens into existence at first, and before long, entire applications.

A couple weeks ago, I decided to test out as many of these tools as I could. My aim was to find the app that would combine AI assistance, design capabilities, and the ability to use an organization’s coded design system.

While my previous essay was about the future of product design, this article will dive deep into a head-to-head between all eight apps that I tried. I recorded my screen as I did my testing, so I’ve put together a video as well, in case you’d rather not read it.

With their annual user conference, Config, coming up in San Francisco in less than two weeks, Figma released their 2025 AI Report today.

Andrew Hogan, Insights lead:

While developers and designers alike recognize the importance of integrating AI into their workflows, and overall adoption of AI tools has increased, there’s a disconnect in sentiment around quality and efficacy between the two groups.

Developers report higher satisfaction with AI tools (82%) and feel AI improves the quality of their work (68%). Meanwhile, designers show more modest numbers—69% satisfaction rate and 54% reporting quality improvement—suggesting this group’s enthusiasm lags behind their developer counterparts.

This divide stems from how AI can support existing work and how it’s being used: 59% of developers use AI for core development responsibilities like code generation, whereas only 31% of designers use AI in core design work like asset generation. It’s also likely that AI’s ability to generate code is coming into play—68% of developers say they use prompts to generate code, and 82% say they’re satisfied with the output. Simply put, developers are more widely finding AI adoption useful in their day-to-day work, while designers are still working to determine how and if these tools best fit into their processes.

I can understand that. Code is behind the scenes. If it’s not perfect, no one will really know. But design is user-facing, so quality is more important.

Looking into the future:

Though AI’s impact on efficiency is clear, there are still questions about how to use AI to make people better at their role. This disparity between efficiency and quality is an ongoing battle for users and creators alike.

Looking forward, predictions about the impact of AI on work are moderate—AI’s expected impact for the coming year isn’t much higher than its expected impact last year.

In the full report, Hogan goes into detail:

Only 27% predict AI will have a significant impact on their company goals in the next year (compared to 23% in 2024), with 15% saying it will be transformational (unchanged year-over-year).

The survey was taken in January with a panel of 2,500 users, and things in AI change in weeks. I’m surprised at the number, and part of me believes that a lot of designers are burying their heads in the sand. AI is coming. We should be agile and adapt.


Figma's 2025 AI report: Perspectives From Designers and Developers

Figma’s AI report tells us how designers and developers are navigating the changing landscape.

figma.com

While Josh W. Comeau writes for his developer audience, a lot of what he says can be applied to design. Referring to a recent Forbes article:

AI may be generating 25% of the code that gets committed at Google, but it’s not acting independently. A skilled human developer is in the driver’s seat, using their knowledge and experience to guide the AI, editing and shaping its output, and mixing it in with the code they’ve written. As far as I know, 100% of code at Google is still being created by developers. AI is just one of many tools they use to do their job.

In other words, developers are editing and curating the output of AI, just like where I believe the design discipline will end up soon.

On incorporating Cursor into his workflow:

And that’s kind of a problem for the “no more developers” theory. If I didn’t know how to code, I wouldn’t notice the subtle-yet-critical issues with the model’s output. I wouldn’t know how to course-correct, or even realize that course-correction was required!

I’ve heard from no-coders who have built projects using LLMs, and their experience is similar. They start off strong, but eventually reach a point where they just can’t progress anymore, no matter how much they coax the AI. The code is a bewildering mess of non sequiturs, and beyond a certain point, no amount of duct tape can keep it together. It collapses under its own weight.

I’ve noticed that too. For a non-coder like me, rebuilding this website yet again—I need to write a post about it—has been a challenge, but I knew and learned enough to get something out there that works. Still, relying solely on AI for any professional work right now is precarious. It requires guidance.

On the current job market for developers and the pace of AI:

It seems to me like we’ve reached the point in the technology curve where progress starts becoming more incremental; it’s been a while since anything truly game-changing has come out. Each new model is a little bit better, but it’s more about improving the things it already does well rather than conquering all-new problems.

This is where I will disagree with him. I think the AI labs are holding back the super-capable models that they are using internally. Tools like Claude Code and the newly released OpenAI Codex are clues that the foundational-model companies have more powerful agents behind the scenes. And those agents are building the next generation of models.


The Post-Developer Era

When OpenAI released GPT-4 back in March 2023, they kickstarted the AI revolution. The consensus online was that front-end development jobs would be totally eliminated within a year or two. Well, it’s been more than two years since then, and I thought it was worth revisiting some of those early predictions, and seeing if we can glean any insights about where things are headed.

joshwcomeau.com
Illustration of humanoid robots working at computer terminals in a futuristic control center, with floating digital screens and globes surrounding them in a virtual space.

Prompt. Generate. Deploy. The New Product Design Workflow

Product design is going to change profoundly within the next 24 months. If the AI 2027 report is any indication, the capabilities of the foundational models will grow exponentially, and with them—I believe—so will the abilities of design tools.

A graph comparing AI Foundational Model Capabilities (orange line) versus AI Design Tools Capabilities (blue line) from 2026 to 2028. The orange line shows exponential growth through stages including Superhuman Coder, Superhuman AI Researcher, Superhuman Remote Worker, Superintelligent AI Researcher, and Artificial Superintelligence. The blue line shows more gradual growth through AI Designer using design systems, AI Design Agent, and Integration & Deployment Agents.

The AI foundational model capabilities will grow exponentially and AI-enabled design tools will benefit from the algorithmic advances. Sources: AI 2027 scenario & Roger Wong

The TL;DR of the report is this: companies like OpenAI have more advanced AI agent models that are building the next-generation models. Once those are built, the previous generation is tested for safety and released to the public. And the cycle continues. Currently, and for the next year or two, these companies are focusing their advanced models on creating superhuman coders. This compounds and will result in artificial general intelligence, or AGI, within the next five years.