
9 posts tagged with “figma”

Figma is adding keyboard shortcuts to improve navigation and selection for power users and keyboard-only users. It’s a win-win that improves accessibility and efficiency. Sarah Kelley, a product marketer at Figma, writes:

For millions, navigating digital tools with a keyboard isn’t just about preference for speed and ergonomics—it’s a fundamental need. …

We’re introducing a series of new features that remove barriers for keyboard-only designers across most Figma products. Users can now pan the canvas, insert objects, and make precise selections quickly and easily. And, with improved screen reader support, these actions are read aloud as users work, making it easier to stay oriented.

Nice work!


Who Says Design Needs a Mouse?

Figma's new accessibility features bring better keyboard and screen reader support to all creators.

figma.com

Since its debut at Config back in May, Figma has steadily added practical features to Figma Make for product teams. Supabase integration now allows for authentication, data storage, and file uploads. Designers can import design system libraries, which helps maintain visual consistency. Real-time collaboration has improved, giving teams the ability to edit code and prototypes together. The tool now supports backend connections for managing state and storing secrets. Prototypes can be published to custom domains. These changes move Figma Make closer to bridging the gap between design concepts and advanced prototypes.

In my opinion, there’s a stronger relationship between Sites and Make than there is between Make and Design. The Make-generated code may be slightly better than when Sites debuted, but it is still not semantic.

Anyhow, I think Make is great for prototyping and it’s convenient to have it built right into Figma. Julius Patto, writing in UX Collective:

Prompting well in Figma Make isn’t about being clever, it’s about being clear, intentional, and iterative. Think of it as a new literacy in the design toolkit: the better you get at it, the more you unlock AI’s potential without losing your creative control.


How to prompt Figma Make’s AI better for product design

Learn how to use AI in Figma Make with UX intention, from smarter prompts to inclusive flows that reflect real user needs.

uxdesign.cc

Here we go. Figma has just dropped its S-1, the registration statement for its initial public offering (IPO).

A financial metrics slide showing Figma's key performance indicators on a dark green background. The metrics displayed are: $821M LTM revenue, 46% YoY revenue growth, 18% non-GAAP operating margin, 91% gross margin, 132% net dollar retention, 78% of Forbes 2000 companies use Figma, and 76% of customers use 2 or more products.

Rollup of stats from Figma’s S-1.

While a lot of the risk factors are boilerplate—legalese to cover their bases—the one about AI is particularly interesting, “Competitive developments in AI and our inability to effectively respond to such developments could adversely affect our business, operating results, and financial condition.”

Developments in AI are already impacting the software industry significantly, and we expect this impact to be even greater in the future. AI has become more prevalent in the markets in which we operate and may result in significant changes in the demand for our platform, including, but not limited to, reducing the difficulty and cost for competitors to build and launch competitive products, altering how consumers and businesses interact with websites and apps and consume content in ways that may result in a reduction in the overall value of interface design, or by otherwise making aspects of our platform obsolete or decreasing the number of designers, developers, and other collaborators that utilize our platform. Any of these changes could, in turn, lead to a loss of revenue and adversely impact our business, operating results, and financial condition.

There’s a lot of uncertainty they’re highlighting:

  • Could competitors use AI to build competing products?
  • Could AI reduce the need for websites and apps, which would decrease the need for interfaces?
  • Could companies reduce workforces, thus reducing the number of seats they buy?

These are all questions the greater tech industry is asking.


Figma Files Registration Statement for Proposed IPO | Figma Blog

An update on Figma's path to becoming a publicly traded company: our S-1 is now public.

figma.com

Darragh Burke and Alex Kern, software engineers at Figma, writing on the Figma blog:

Building code layers in Figma required us to reconcile two different models of thinking about software: design and code. Today, Figma’s visual canvas is an open-ended, flexible environment that enables users to rapidly iterate on designs. Code unlocks further capabilities, but it’s more structured—it requires hierarchical organization and precise syntax. To reconcile these two models, we needed to create a hybrid approach that honored the rapid, exploratory nature of design while unlocking the full capabilities of code.

The solution turned out to be code layers: actual canvas primitives that can be manipulated just like a rectangle and that respect auto layout, opacity, border radius, and other properties.

The solution we arrived at was to implement code layers as a new canvas primitive. Code layers behave like any other layer, with complete spatial flexibility (including moving, resizing, and reparenting) and seamless layout integration (like placement in autolayout stacks). Most crucially, they can be duplicated and iterated on easily, mimicking the freeform and experimental nature of the visual canvas. This enables the creation and comparison of different versions of code side by side. Typically, making two copies of code for comparison requires creating separate git branches, but with code layers, it’s as easy as pressing ⌥ and dragging. This automatically creates a fork of the source code for rapid riffing.

In my experience, it works as advertised, though the code layer element will take a second to render when its spatial properties are edited. Makes sense though, since it’s rendering code.


Canvas, Meet Code: Building Figma’s Code Layers

What if you could design and build on the same canvas? Here's how we created code layers to bring design and code together.

figma.com
Colorful illustration featuring the Figma logo on the left and a whimsical character operating complex, abstract machinery with gears, dials, and mechanical elements in vibrant colors against a yellow background.

Figma Make: Great Ideas, Nowhere to Go

Nearly three weeks after it was introduced at Figma Config 2025, I finally got access to Figma Make. It is in beta, and Figma made sure we all know it. So I will say upfront that it’s a bit unfair to write an official review. However, many of the tools in my AI prompt-to-code shootout article are also in beta.

Since this review is fairly visual, I made a video as well that summarizes the points in this article pretty well.


The Promise: One-to-One With Your Design

Figma's Peter Ng presenting on stage with large text reading "0→1 but 1:1 with your designs" against a dark background with purple accent lighting.

Figma’s Peter Ng presenting Make’s promise on stage: “0→1 but 1:1 with your designs.”

“What if you could take an idea not only from zero to one, but also make it one-to-one with your designs?” said Peter Ng, product designer at Figma. Just like all the other AI prompt-to-code tools, Figma Make is supposed to enable users to prompt their way to a working application. 

The Figma spin is that there’s more control over the output. Click an element and have the prompt apply only to that element. Or click on something in the canvas and change details like the font family, size, or color.

The other Figma advantage is the ability to use pasted Figma designs for a more accurate translation to code. That’s the “one-to-one” Ng refers to.

The Reality: Falls Short

I evaluated Figma Make with my standard checkout-flow prompt (thus covering the zero-to-one use case), a second prompt, and a pasted design (one-to-one).

Let’s get the standard evaluation out of the way before moving onto a deeper dive.

Figma Make Scorecard

Figma Make scorecard showing a total score of 58 out of 100, with breakdown: User experience 18/25, Visual design 13/15, Prototype 8/10, Ease of use 9/15, Design Control 6/15, Design system integration 0/15, Speed 9/10, and Editor's Discretion -5/10.

I ran the same prompt through it as the other AI tools:

Create a complete shopping cart checkout experience for an online clothing retailer

Figma Make’s score totaled 58, which puts it squarely in the middle of the pack. This was for a variety of reasons.

The quality of the generated output was pretty good. The UI was nice and clean, if a bit unstyled. This is because Make uses Shadcn UI components. Overall, the UX was exactly what I would expect. Perhaps a progress bar would have been a nice touch.

The generation was fast, clocking in at three minutes, which puts it near the top in terms of speed.

And the fine-grained editing sort of worked as promised. However, my manual changes were sometimes overridden if I used the chat.

Where It Actually Shines

Figma Make interface showing a Revenue Forecast Calculator with a $200,000 total revenue input, "Normal" distribution type selected, monthly breakdown table showing values from January ($7,407) to December ($7,407), and an orange bar chart displaying the normal distribution curve across 12 months with peak values in summer months.

The advantage of these prompt-to-code tools is that it’s really easy to prototype complex interactions—maybe they’re even production-ready.

To test this, I used a new prompt:

Build a revenue forecast calculator. It should take the input of a total budget from the user and automatically distribute the budget to a full calendar year showing the distribution by month. The user should be able to change the distribution curve from “Even” to “Normal” where “Normal” is a normal distribution curve.

Along with the prompt, I also included a wireframe as a still image. This gave the AI some idea of the structure I was looking for, at least.

The resulting generation was great and the functionality worked as expected. I iterated the design to include a custom input method and that worked too.
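For reference, the distribution logic this prompt asks for is simple enough to sketch by hand. Here’s a minimal Python sketch of my own (not Make’s generated code), assuming the “Normal” curve is a Gaussian weighting centered mid-year with an arbitrary sigma of 2:

```python
import math

def distribute_budget(total: float, curve: str = "even") -> list[float]:
    """Split a yearly budget across 12 months.

    curve="even"   -> equal twelfths
    curve="normal" -> weights from a Gaussian centered mid-year
    """
    months = 12
    if curve == "even":
        return [total / months] * months
    # Gaussian weights centered between June and July (index 5.5);
    # sigma = 2.0 is an arbitrary choice that keeps the tails visible.
    mu, sigma = (months - 1) / 2, 2.0
    weights = [math.exp(-((m - mu) ** 2) / (2 * sigma**2)) for m in range(months)]
    # Normalize so the monthly values sum back to the total budget.
    scale = total / sum(weights)
    return [w * scale for w in weights]
```

Either way, `distribute_budget(200_000, "normal")` returns twelve monthly values that sum (within floating-point error) to the total, peaking in the middle of the year.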

The One-to-One Promise Breaks Down

I wanted to see how well Figma Make would work with a well-structured Figma Design file. So I created a homepage for a fictional fitness instructor using auto layout frames, structuring the file as I would structure divs in HTML.

Figma Design interface showing the original "Body by Reese" fitness instructor homepage design with layers panel on left, main canvas displaying the Pilates hero section and content layout, and properties panel on right. This is the reference design that was pasted into Figma Make for testing.

This is the reference design that was pasted into Figma Make for testing. Notice the well-structured layers!

Then I pasted the design into the chatbox and included a simple prompt. The result was…disappointing. The layout was correct, but the typeface and type sizes were all wrong. I gave that feedback in the chat, and then the right font finally appeared.

Then I manually updated the font sizes and got the design looking pretty close to my original. There was one problem: an image was the wrong size and not proportionally scaled. So I asked the AI to fix it.

Figma Make interface showing a fitness instructor homepage with "Body by Reese" branding, featuring a hero image of someone doing Pilates with "Sculpt. Strengthen. Shine." text overlay, navigation menu, and content section with instructor photo and "Book a Class" call-to-action button.

Figma Make’s attempt at translating my Figma design to code.

The AI did not fix it and reverted some of my manual overrides for the fonts. In many ways this is significantly worse than not giving designers fine-grained control in the first place. Overwriting my overrides made me lose trust in the product because I lost work—however minimal it was. It brought me back to the many occasions that Illustrator or Photoshop crashed while saving, thus corrupting the file. Yes, it wasn’t as bad, but it still felt that way.

Dead End by Design

The question of what to do with the results of a Figma Make chat remains. A Figma Make file is its own filetype. You can’t bring it back into Figma Design or even Figma Sites to make tweaks. You can publish it, hosted on Figma’s infrastructure just like Sites. You can download the code, but it’s kind of useless.

Code Export Is Capped at the Knees

You can download the React code as a zip file. But the code does not contain the necessary package.json that makes it installable on your local machine or on a Node.js server. The package file tells the npm installer which dependencies need to be installed for the project to run.
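For context, a minimal package.json for a React project looks something like the following. The specific build tool, package names, and versions here are my assumptions—the export doesn’t say what Make targets—but a file roughly like this, sitting next to the src directory, is what’s missing:

```json
{
  "name": "make-export",
  "private": true,
  "scripts": {
    "dev": "vite",
    "build": "vite build"
  },
  "dependencies": {
    "react": "^18.3.0",
    "react-dom": "^18.3.0"
  },
  "devDependencies": {
    "@vitejs/plugin-react": "^4.3.0",
    "vite": "^5.4.0"
  }
}
```

Depending on the tool, an index.html entry point and a build config file may be needed on top of this.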

I tried using Cursor to figure out where to move the files—they have to be in a src directory—and to help me write a package.json, but it would have taken too much time to reverse engineer it.

Nowhere to Go

Maybe using Figma Make inside Figma Sites will be a better use case. It’s not yet enabled for me, but that feature is the so-called code layers mentioned in the Make and Sites deep dive presentation at Config. In practice, it sounds very much like Code Components in Framer.

The Bottom Line

Figma had to debut Make in order to stay competitive. There’s just too much out there nipping at their heels. While a design tool like Figma is necessary to unlock the freeform exploration designers need, it is also the natural next step to be able to make it real from within the tool. The likes of Lovable, v0, and Subframe allow you to start with a design from Figma and turn that design into working code. The thesis for many of those tools is that they’re taking care of the post design-to-developer handoff: get a design, give the AI some context, and we’ll make it real. Figma has occupied the pre-designer-to-developer handoff for a while and they’re finally taking the next step.

However, in its current state, Figma Make is a dead end (see previous section). But it is beta software which should get better before official release. As a preview I think it’s cool, despite its flaws and bugs. But I wouldn’t use it for any actual work.

During this beta period, Figma needs to…

  • Add complete code export so the resulting code is portable, rather than keeping it within its closed system
  • Fix the fiendish bugs around the AI overwriting manual overrides
  • Figure out tighter integration between Make and the other products, especially Design
Illustrated background of colorful wired computer mice on a pink surface with a large semi-transparent Figma logo centered in the middle.

Figma Takes a Big Swing

Last week, Figma held their annual user conference Config in San Francisco. Since its inception in 2020, it has become a significant UX conference that covers more than just Figma’s products and community. While I’ve not yet had the privilege of attending in person, I do try to catch the livestreams or videos afterwards.

Nearly 17 months after Adobe and Figma announced the termination of their merger talks, Figma flexed their muscle—fueled by the $1 billion breakup fee, I’m sure—by announcing four new products: Figma Draw, Make, Sites, and Buzz.

  • Draw: It’s a new mode within Figma Design that reveals additional vector drawing features.
  • Make: This is Figma’s answer to Lovable and the other prompt-to-code generators.
  • Sites: Finally, you can design and publish websites from Figma, hosted on their infrastructure.
  • Buzz: Pass off assets to clients and marketing teams and they can perform lightweight and controlled edits in Buzz.

With these four new products, Figma is really growing up and becoming more than a two-and-a-half-product company; they’re building their own creative suite, if you will. Thus taking a big swing at Adobe.

On social media, Figma posted this image with the copy “New icons look iconic in new photo.”

Colorful app icons from Figma

 

A New Suite in Town


Kudos to Figma for rolling out most of these new products the day they were announced. About two hours after Dylan Field stepped off the stage—and after quitting Figma and reopening it a few times—I got access to Draw, Sites, and Buzz. I have yet to get Make access.

What follows are some hot takes. I played with Draw extensively, Sites a bit, and not much with Buzz. And I have a lot of thoughts around Make, after watching the deep dive talk from Config. 

Figma Draw


I have used Adobe Illustrator since the mid-1990s. Its bezier drawing tools have been the industry standard for a long time and Figma has never been able to come close. So they are trying to fix it with a new product called Draw. It’s actually a mode within the main Design application. By toggling into this mode, the UI switches a little and you get access to expanded features, including a layers panel with thumbnails and a different toolbar that includes a new brush tool. Additionally, any vector stroke can be turned into a brush stroke or a new “dynamic” stroke.

A brush stroke style is what you’d expect—an organic, painterly stroke—and Figma has 15 styles built in. There are no calligraphic (i.e., angled) options, as all the strokes start with a 90-degree endcap.

Editing vectors has been much improved. You can finally easily select points inside a shape by dragging a selection lasso around them. There is a shape builder tool to quickly create booleans, and a bend tool to, well, bend straight lines.

Oh, Snap!

I’m not an illustrator, but I used to design logos and icons a lot. So I decided to recreate a monogram from my wedding. (It’s my wedding anniversary coming up. Ahem.) It’s a very simple modified K and R with a plus sign between the letterforms.

The very first snag I hit was that by default, Figma’s pixel grid is turned on. The vectors in letterforms don’t always align perfectly to the pixel grid. So I had to turn both the grid lines and the grid snapping off.

I’m very precise with my vectors. I want lines snapping perfectly to other edges or vertices. In Adobe Illustrator, snapping point to point is automatic. Snapping point to edge or edge to edge is easily done once Smart Guides are turned on. In Figma, snapping to corners and edges is automatic, but only around the outer bounds of the shape. When I tried to draw a rectangle to extend the crossbar of the R, I wasn’t able to snap the corner or the edge to ensure it was precise.

Designing the monogram at 2x speed in Figma Draw. I’m having a hard time getting points and edges to snap in place for precision.

Designing the monogram at 2x speed in Adobe Illustrator. Precision is a lot easier because of Smart Guides.

Not Ready to Print

When Figma showed off Draw onstage at Config, whispers of this being an Adobe Illustrator killer ricocheted through social media. (OK, I even said as much on Threads: “@figma is taking on Illustrator…”).

Also during the Draw demo, they showed off two new effects called Texture and Noise. Texture will grunge up the shape—it can look like a bad photocopy or rippled glass. And Noise will add monochromatic, dichromatic, or colored noise to a shape.

I decided to take the K+R monogram and add some effects to it, making it look like it was embossed into sandstone. It looks cool on screen, and when I zoomed in, the noise pattern rendered smoothly. I exported this as a PDF and opened the result in Illustrator.

I expected all the little dots in the noise to be vector shapes and masked within the monogram. Much to my surprise, no. The output is simply two rectangular clipping paths with low-resolution bitmaps placed in. 🤦🏻‍♂️

Pixelated image of a corner of a letter K

Opening the PDF exported from Figma in Illustrator, I zoomed in 600% to reveal pixels rather than vector texture shapes.

I think Figma Draw is great for on-screen graphics—which, let’s face it, is likely the vast majority of stuff being made. But it is not ready for any print work. There’s no support for the CMYK color space, spot colors, high-resolution effects, etc. Adobe Illustrator is safe.

Figma Sites


Figma Sites is the company’s answer to Framer and Webflow. For years, I’ve personally thought that Figma should just include publishing in their product, and apparently so did they! At the end of the deep dive talk, one of the presenters showed a screenshot of an early concept from 2018 or ’19.

Two presenters on stage demoing a Figma interface with a code panel showing a script that dynamically adds items from a CSV file to a scene.

So it’s a new app, like FigJam and Slides, and therefore has its own UI. It shares a lot of DNA with Figma Design, so it feels familiar, but different.

Interestingly, they’ve introduced a new skinny vertical toolbar on the left, before the layers panel. The canvas is in the center, and an inspect panel is on the right. I don’t think they need the vertical toolbar; homes for its seven items could be found elsewhere.

Figma Sites app showing responsive web page designs for desktop, tablet, and mobile, with a bold headline, call-to-action buttons, and an abstract illustration.

The UI of Figma Sites.

When creating a new webpage, the app will automatically add the desktop and mobile breakpoints. It also supports the tablet breakpoint out of the box and you can add more. Just like Framer, you can see all the breakpoints at once. I prefer this approach to what all the WordPress page builders and Webflow do, which is toggling and only seeing one breakpoint at a time.

The workflow is this: 

  1. Start with a design from Figma Design, then copy and paste it into Sites.
  2. Adjust your design for the various responsive breakpoints.
  3. Add interactivity. This UI is very much like the existing prototyping UI. You can link pages together and add a plethora of effects, including hover effects, scrolling parallax and transforms, etc.

Component libraries from Figma are also available, and it’s possible to design within the Sites app as well. They have also introduced the concept of Blocks. Anyone coming from a WordPress page builder should be very familiar. They are essentially prebuilt sections that you can drop into your design and edit. There are also blocks for standard embeds like YouTube and Google Maps, plus support for custom iframes.

During the keynote, they demonstrated the CMS functionality. AI can assist with creating the schema for each collection (e.g., blog posts would be a collection containing many records). Then you assign fields to layers in your design. And finally, content editors can come in and edit the content in a focused edit panel without messing with your design.

CMS view in Figma Sites showing a blog post editor with fields for title, slug, cover photo, summary, date, and rich text content, alongside a list of existing blog entries.

A CMS is coming to Figma Sites, allowing content editors to easily edit pages and posts.

Publishing to the web is as simple as clicking the Publish button. Looks like you can assign a custom domain name and add the standard metadata like site title, favicon, and even a Google Analytics tag.

Side note: Web developers have been looking at the code quality of the output and they’re not loving what they’re seeing. In a YouTube video, CSS evangelist Kevin Powell said, “it’s beyond div soup,” referring to the many, many nested divs in the code. Near the end of his video he points out that while Figma has typography styles, they missed that you need to connect those styles with HTML markup. For example, you could have a style called “Headline,” but is it an h1, h2, or h3? It’s unclear to me if Sites is writing React JavaScript or HTML and CSS. But I’d wager it’s the former.

In the product right now, there is no code export, nor can you see the code that it’s writing. In the deep dive, they mentioned that code authoring was “coming very, very, very soon.”

While it’s not yet available in the beta—at least the one that I currently have access to—in the deep dive talk, they introduced a new concept called a “code layer.” This is a way to bring advanced interactivity into your design using AI chat that produces React code. Therefore on the canvas, Figma has married traditional design elements with code-rendered designs. You can click into these code layers at any time to review and edit the code manually or with AI chat. Conceptually, I think this is very smart, and I can’t wait to play with it.

Webflow and Framer have spent many years maturing their products and respective ecosystems. Figma Sites is the newcomer, and I’m sure it will give the other products a run for their money if Figma fixes some of the gaps.

Figma Make

Like I said earlier, I don’t yet have access to Figma Make. But I watched the deep dive twice and did my best impression of Rick Deckard saying “enhance” on the video. So here are some thoughts.

From the keynote, it looked like its own app. The product manager for Make showed off examples made by the team that included a bike trail journal, psychedelic clock, music player, 3D playground, and Minecraft clone. But it also looked like it’s embedded into Sites.

Presenter demoing Figma Make, an AI-powered tool that transforms design prompts into interactive code; the screen shows a React component for a loan calculator with sliders and real-time repayment updates.

The UI of Figma Make looks familiar: Chat, code, preview.

What is unclear to me is if we can take the output from Make and bring it into Sites or Design and perform more extensive design surgery.

Figma Buzz

Figma Buzz looks to be Figma’s answer to Canva and Adobe Express. Design static assets like Instagram posts in Design, then bring them into Buzz and give access to your marketing colleagues so they can update the copy and photos as necessary. You can create and share a library of asset templates for your organization. Very straightforward, and honestly, I’ve not spent a lot of time with this one. One thing to note: even though this is for marketers to create assets, just like Figma Design/Draw, there’s no support for the CMYK color space, and any elements using the new texture or noise effects will turn into raster images. 

Figma Is Becoming a Business

On social media I read a lot of comments from people lamenting that Figma is overstuffing its core product, losing its focus, and should just improve what they have. 

Social media post by Nick Finck expressing concern that Figma’s new features echo existing tools and contribute to product bloat, comparing the direction to Adobe’s strategy.

An example of some of the negative responses on social media to Figma’s announcements.

We don’t live in that world. Figma is a venture-backed company, having raised nearly $750 million, and is currently valued at $12.5 billion. They are not going to just focus on a single product; that’s not how it works. And they are preparing to IPO.

In a quippy post on Bluesky, as I was live-posting the keynote, I also said, “Figma is the new Adobe.”

Social media post by Roger Wong (@lunarboy.com) stating “Figma is the new Adobe” with the hashtag #config2025.

Shifting the Center of Gravity

I meant a couple of things. First, Adobe and the design industry have grown up together, tied at the hip. Adobe invented PostScript, the page description language that PDF grew out of, and, together with the Mac, enabled the whole desktop publishing industry. There are a lot of Adobe haters out there because of the subscription model, bloatware, etc., but Adobe has always been a part of our profession. They bought rival Macromedia in 2005 to add digital design tools like Dreamweaver, Director, and Flash to their offering.

Amelia Nash, writing for PRINT Magazine about her recent trip to Adobe MAX in London (similar to Figma Config, but for Adobe and running since 2003):

I had come into MAX feeling like an outsider, anxious that maybe my time with Adobe had passed, that maybe I was just a relic in a shiny new creative world. But I left with a reminder that Adobe still sees us, the seasoned professionals who built our careers with their tools, the ones who remember installing fonts manually and optimizing TIFFs for press. Their current marketing efforts may chase the next-gen cohort (with all its hyperactive branding and emoji-saturated optimism), but the tools are still evolving for us pros, too.

Adobe MAX didn’t just show me what’s new, it reminded me of what’s been true throughout my design career: Adobe is for creatives. All of us. Still.

Figma has created buzz around Config with programming that features talks titled “How top designers find their path and creative spark with Kevin Twohy” and “Designing for Climate Disaster with Megan Metzger.” It’s clear they want to occupy the same place in digital designers’ hearts that Adobe has held for graphic designers for over 40 years.

Building a Creative Suite

(I will forever call it Adobe Creative Suite, not Creative Cloud.)

By doubling the number of products they sell, they are building a creative suite and expanding their market. Same playbook as Adobe.

Do I lament that Figma is becoming like Adobe? No. I understand they’re a business. It’s a company full of talented people who are endeavoring to do the right thing and build the right tools for their audiences of designers, developers, and marketers.

Competition Is Good

The regulators were right. Adobe and Figma should not have merged. A year and a half later, riding the coattails of the goodwill Figma has engendered with the digital design community, the company introduced four new products to produce work with. They’ve taken a fresh look at brushes and effects, bringing in approaches from WebGL. They’re being thoughtful about how they enable designers to integrate code into our workflows. And they’re rolling out AI prompt-to-code features in a way that makes sense for us.

To be sure, these products are all beta and have a long way to go. And I’m excited to go play.

A futuristic scene with a glowing, tech-inspired background showing a UI design tool interface for AI, displaying a flight booking project with options for editing and previewing details. The screen promotes the tool with a “Start for free” button.

Beyond the Prompt: Finding the AI Design Tool That Actually Works for Designers

There has been an explosion of AI-powered prompt-to-code tools within the last year. The space began with full-on integrated development environments (IDEs) like Cursor and Windsurf, which enabled developers to leverage AI assistants right inside their coding apps. Then came tools like v0, Lovable, and Replit, where users could prompt screens into existence at first, and before long, entire applications.

A couple weeks ago, I decided to test out as many of these tools as I could. My aim was to find the app that would combine AI assistance, design capabilities, and the ability to use an organization’s coded design system.

While my previous essay was about the future of product design, this article will dive deep into a head-to-head between all eight apps that I tried. I recorded the screen as I did my testing, so I’ve put together a video as well, in case you didn’t want to read this.


It is a long video, but there’s a lot to go through. It’s also my first video on YouTube, so this is an experiment.

The Bottom Line: What the Testing Revealed

I won’t bury the lede here. AI tools can be frustrating because they are probabilistic. One hour they can solve an issue quickly and efficiently, while the next they can spin on a problem and make you want to pull your hair out. Part of this is the LLM—and they all use some combo of the major LLMs. The other part is the tool itself for not handling what happens when their LLMs fail. 

For example, this morning I re-evaluated Lovable and Bolt because they’ve released new features within the last week, and I thought it would only be fair to assess the latest version. But both performed worse than in my initial testing two weeks ago. In fact, I tried Bolt twice this morning with the same prompt because the first attempt netted a blank preview. Unfortunately, the second attempt also resulted in a blank screen and then I ran out of credits. 🤷‍♂️

Scorecard for Subframe, with a total of 79 points across different categories: User experience (22), Visual design (13), Prototype (6), Ease of use (13), Design control (15), Design system integration (5), Speed (5), Editor’s discretion (0).

For designers who want actual design tools to work on UI, Subframe is the clear winner. The other tools go directly from prompt to code, giving designers no control via a visual editor. We’re not developers, so manipulating the design in code is not for us. We need to be able to directly manipulate components by clicking and modifying shapes on the canvas or changing values in an inspector.

For me, the runner-up is v0, if you want to use it only for prototyping and for getting ideas. It’s quick—the UI is mostly unstyled, so it doesn’t get in the way of communicating the UX.

The Players: Code-Only vs. Design-Forward Tools

There are two main categories of contenders: code-only tools, and code plus design tools.

Code-Only

  • Bolt
  • Lovable
  • Polymet
  • Replit
  • v0

Code + Design

  • Onlook
  • Subframe
  • Tempo

My Testing Approach: Same Prompt, Different Results

As mentioned at the top, I tested these tools between April 16–27, 2025. As with most SaaS products, I’m sure things change daily, so this report captures a moment in time.

For my evaluation, since all these tools allow for generating a design from a prompt, that’s where I started. Here’s my prompt:

Create a complete shopping cart checkout experience for an online clothing retailer

I would expect the following pages to be generated:

  • Shopping cart
  • Checkout page (or pages) to capture payment and shipping information
  • Confirmation

I scored each app based on the following rubric:

  • Sample generation quality
      • User experience (25)
      • Visual design (15)
      • Prototype (10)
  • Ease of use (15)
  • Control (15)
  • Design system integration (10)
  • Speed (10)
  • Editor’s discretion (±10)
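The rubric works as a simple weighted tally. Here’s a minimal sketch (my own, not from any tool’s scoring system) using Subframe’s published category scores; the `total_score` helper and its validation are hypothetical conveniences:

```python
# Per-category maxima from the rubric; editor's discretion can add or subtract up to 10.
RUBRIC_MAX = {
    "User experience": 25,
    "Visual design": 15,
    "Prototype": 10,
    "Ease of use": 15,
    "Control": 15,
    "Design system integration": 10,
    "Speed": 10,
}

def total_score(scores: dict[str, int], discretion: int = 0) -> int:
    """Sum category scores after validating each against its rubric maximum."""
    for category, points in scores.items():
        assert 0 <= points <= RUBRIC_MAX[category], f"{category} out of range"
    assert -10 <= discretion <= 10
    return sum(scores.values()) + discretion

# Subframe's scorecard from the article.
subframe = {
    "User experience": 22,
    "Visual design": 13,
    "Prototype": 6,
    "Ease of use": 13,
    "Control": 15,
    "Design system integration": 5,
    "Speed": 5,
}
print(total_score(subframe, discretion=0))  # prints 79
```

Note that the category maxima sum to 100, so the discretion bonus is what allows totals outside the 0–100 range.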

The Scoreboard: How Each Tool Stacked Up

AI design tools for designers, with scores: Subframe 79, Onlook 71, v0 61, Tempo 59, Polymet 58, Lovable 49, Bolt 43, Replit 31. Evaluations conducted between 4/16–4/27/25.

Final summary scores for AI design tools for designers. Evaluations conducted between 4/16–4/27/25.

Here are the summary scores for all eight tools. For the detailed breakdown of scores, view the scorecards here in this Google Sheet.

The Blow-by-Blow: The Good, the Bad, and the Ugly

Bolt

Bolt screenshot: A checkout interface with a shopping cart summary, items listed, and a “Proceed to Checkout” button, displaying prices and order summary.

First up, Bolt. Classic prompt-to-code pattern here—text box, type your prompt, watch it work. 

Bolt shows you the code generation in real-time, which is fascinating if you’re a developer but mostly noise if you’re not. The resulting design was decent but plain, with typical UX patterns. It missed delivering the confirmation page I would expect. And when I tried to re-evaluate it this morning with their new features? Complete failure—blank preview screens until I ran out of credits. No rhyme or reason. And there it is—a perfect example of the maddening inconsistency these tools deliver. Working beautifully in one session, completely broken in another. Same inputs, wildly different outputs.

Score: 43

Lovable

Lovable screenshot: A shipping information form on a checkout page, including fields for personal details and a “Continue to Payment” button.

Moving on to Lovable, which I captured this morning right after they launched their 2.0 version. The experience was a mixed bag. While it generated clean (if plain) UI with some nice touches like toast notifications and a sidebar shopping cart, it got stuck at a critical juncture—the actual checkout. I had to coax it along, asking specifically for the shopping cart that was missing from the initial generation.

The tool encountered an error but at least provided a handy “Try to fix” button. Unlike Bolt, Lovable tries to hide the code, focusing instead on the browser preview—which as a designer, I appreciate. When it finally worked, I got a very vanilla but clean checkout flow and even the confirmation page I was looking for. Not groundbreaking, but functional. The approach of hiding code complexity might appeal to designers who don’t want to wade through development details.

Score: 49

Polymet

Polymet screenshot: A checkout page design for a fashion store showing payment method options (Credit Card, PayPal, Apple Pay), credit card fields, order summary with subtotal, shipping, tax, and total.

Next up is Polymet. This one has a very interesting interface and I kind of like it. You have your chat on the left and a canvas on the right. But instead of just showing the screen it’s working on, it’s actually creating individual components that later get combined into pages. It’s almost like building Figma components and then combining them at the end, except these are all coded components.

The design is pretty good—plain but very clean. I feel like it’s got a little more character than some of the others. What’s nice is you can go into focus mode and actually play with the prototype. I was able to navigate from the shopping cart through checkout (including Apple Pay) to confirmation. To export the code, you need to be on a paid plan, but the free trial gives you at least a taste of what it can do.

Score: 58

Replit

Replit screenshot: A developer interface showing progress on an online clothing store checkout project with error messages regarding the use of the useCart hook.

Replit was a test of patience—no exaggeration, it was the slowest tool of the bunch at 20 minutes to generate anything substantial. Why so slow? It kept encountering errors and falling into those weird loops that LLMs often do when they get stuck. At one point, I had to explicitly ask it to “make it work” just to progress beyond showing product pages, which wasn’t even what I’d asked for in the first place.

When it finally did generate a checkout experience, the design was nothing to write home about. Lines in the stepper weren’t aligning properly, there were random broken elements, and ultimately—it just didn’t work. I couldn’t even complete the checkout flow, which was the whole point of the exercise. I stopped recording at that point because, frankly, I just didn’t want to keep fighting with a tool that’s both slow and ineffective. 

Score: 31

v0

v0 screenshot: An online shopping cart with a multi-step checkout process, including a shipping form and order summary with prices and a “Continue to Payment” button.

Taking v0 for a spin next, which comes from Vercel. I think it was one of the earlier prompt-to-code generators I heard about—originally just for components, not full pages (though I could be wrong). The interface is similar to Bolt with a chat panel on the left and code on the right. As it works, it shows you the generated code in real-time, which I appreciate. It’s pretty mature and works really well.

The result almost looks like a wireframe, but the visual design has a bit more personality than Bolt’s version, even though it’s using the unstyled shadcn components. It includes form validation (which I checked), and handles the payment flow smoothly before showing a decent confirmation page. Speed-wise, v0 is impressively quick compared to some others I tested—definitely a plus when you’re iterating on designs and trying to quickly get ideas.

Score: 61

Onlook

Onlook screenshot: A design tool interface showing a cart with empty items and a “Continue Shopping” button on a fashion store checkout page.

Onlook stands out as a self-contained desktop app rather than a web tool like the others. The experience starts the same way—prompt in, wait, then boom—but instead of showing you immediate results, it drops you into a canvas view with multiple windows displaying localhost:3000, which is your computer running a web server locally. The design it generated was fairly typical and straightforward, properly capturing the shopping cart, shipping, payment, and confirmation screens I would expect. You can zoom out to see a canvas-style overview and manipulate layers, with a styles tab that lets you inspect and edit elements.

The dealbreaker? Everything gets generated as a single page application, making it frustratingly difficult to locate and edit specific states like shipping or payment. I couldn’t find these states visually or directly in the pages panel—they might’ve been buried somewhere in the layers, but I couldn’t make heads or tails of it. When I tried using it again today to capture the styles functionality for the video, I hit the same wall that plagued several other tools I tested—blank previews and errors. Despite going back and forth with the AI, I couldn’t get it running again.

Score: 71

Subframe

Subframe screenshot: A design tool interface with a checkout page showing a cart with items, a shipping summary, and the option to continue to payment.

My time with Subframe revealed a tool that takes a different approach to the same checkout prompt. Unlike most competitors, Subframe can’t create an entire flow at once (though I hear they’re working on multi-page capabilities). But honestly, I kind of like this limitation—it forces you as a designer to actually think through the process.

What sets Subframe apart is its Midjourney-like approach, offering four different design options that gradually come into focus. These aren’t just static mockups but fully coded, interactive pages you can preview in miniature. After selecting a shopping cart design, I simply asked it to create the next page, and it intelligently moved on to shipping and billing info.

The real magic is having actual design tools—layers panel, property inspector, direct manipulation—alongside the ability to see the working React code. For designers who want control beyond just accepting whatever the AI spits out, Subframe delivers the best combination of AI generation and familiar design tooling.

Score: 79

Tempo

Tempo screenshot: A developer tool interface generating a clothing store checkout flow, showing wireframe components and code previews.

Lastly, Tempo. This one takes a different approach than most other tools. It starts by generating a PRD from your prompt, then creates a user flow diagram before coding the actual screens—mimicking the steps real product teams would take. Within minutes, it had generated all the different pages for my shopping cart checkout experience. That’s impressive speed, but from a design standpoint, it’s just fine. The visual design ends up being fairly plain, and the prototype had some UX issues—the payment card change was hard to notice, and the “Place order” action didn’t properly lead to a confirmation screen even though it existed in the flow.

The biggest disappointment was with Tempo’s supposed differentiator. Their DOM inspector theoretically allows you to manipulate components directly on canvas like you would in Figma—exactly what designers need. But I couldn’t get it to work no matter how hard I tried. I even came back days later to try again with a different project and reached out to their support team, but after a brief exchange—crickets. Without this feature functioning, Tempo becomes just another prompt-to-code tool rather than something truly designed for visual designers who want to manipulate components directly. Not great.

Score: 59

The Verdict: Control Beats Code Every Time

Subframe screenshot: A design tool interface displaying a checkout page for a fashion store with a cart summary and a “Proceed to Checkout” button.

Subframe offers actual design tools—layers panel, property inspector, direct manipulation—along with AI chat.

I’ve spent the last couple weeks testing these prompt-to-code tools, and if there’s one thing that’s crystal clear, it’s this: for designers who want actual design control rather than just code manipulation, Subframe is the standout winner.

I will caveat that I didn’t do a deep dive into every single tool. I played with them at a cursory level, giving each a fair shot with the same prompt. What I found was a mix of promising starts and frustrating dead ends.

The reality of AI tools is their probabilistic nature. Sometimes they’ll solve problems easily, and then at other times they’ll spectacularly fail. I experienced this firsthand when retesting both Lovable and Bolt with their latest features—both performed worse than in my initial testing just two weeks ago. Blank screens. Error messages. No rhyme or reason.

For designers like me, the dealbreaker with most of these tools is being forced to manipulate designs through code rather than through familiar design interfaces. We need to be able to directly manipulate components by clicking and modifying shapes on the canvas or changing values in an inspector. That’s where Subframe delivers while others fall short—if their audience includes designers, which might not be the case.

For us designers, I believe Subframe could be the answer. But I’m also curious whether Figma will have an answer of its own. Will the company get into the AI-to-design-to-code game? Or will it be left behind?

The future belongs to applications that balance AI assistance with familiar design tooling—not just code generators with pretty previews.

With their annual user conference, Config, coming up in San Francisco in less than two weeks, Figma released their 2025 AI Report today.

Andrew Hogan, Insights lead:

While developers and designers alike recognize the importance of integrating AI into their workflows, and overall adoption of AI tools has increased, there’s a disconnect in sentiment around quality and efficacy between the two groups.

Developers report higher satisfaction with AI tools (82%) and feel AI improves the quality of their work (68%). Meanwhile, designers show more modest numbers—69% satisfaction rate and 54% reporting quality improvement—suggesting this group’s enthusiasm lags behind their developer counterparts.

This divide stems from how AI can support existing work and how it’s being used: 59% of developers use AI for core development responsibilities like code generation, whereas only 31% of designers use AI in core design work like asset generation. It’s also likely that AI’s ability to generate code is coming into play—68% of developers say they use prompts to generate code, and 82% say they’re satisfied with the output. Simply put, developers are more widely finding AI adoption useful in their day-to-day work, while designers are still working to determine how and if these tools best fit into their processes.

I can understand that. Code is behind the scenes. If it’s not perfect, no one will really know. But design is user-facing, so quality is more important.

Looking into the future:

Though AI’s impact on efficiency is clear, there are still questions about how to use AI to make people better at their role. This disparity between efficiency and quality is an ongoing battle for users and creators alike.

Looking forward, predictions about the impact of AI on work are moderate—AI’s expected impact for the coming year isn’t much higher than its expected impact last year.

In the full report, Hogan details:

Only 27% predict AI will have a significant impact on their company goals in the next year (compared to 23% in 2024), with 15% saying it will be transformational (unchanged year-over-year).

The survey was taken in January with a panel of 2,500 users, and things in AI change in weeks. I’m surprised at the numbers, and part of me believes a lot of designers are burying their heads in the sand. AI is coming. We should be agile and adapt.

preview-1745539674417.png

Figma's 2025 AI report: Perspectives From Designers and Developers

Figma’s AI report tells us how designers and developers are navigating the changing landscape.

figma.com iconfigma.com