Conceptual 3D illustration of stacked digital notebooks with a pen on top, overlaid on colorful computer code patterns.

Why We Still Need a HyperCard for the AI Era

I rewatched the 1982 film TRON for the umpteenth time the other night with my wife. I have always credited this movie as the spark that got me interested in computers. Mind you, I was nine years old when this film came out. I was so excited after watching the movie that I got my father to buy us a home computer—the mighty Atari 400 (note sarcasm). I remember an educational game that came on cassette called “States & Capitals” that taught me, well, the states and their capitals. It also introduced me to BASIC, and after watching TRON, I wanted to write programs!

Vintage advertisement for the Atari 400 home computer, featuring the system with its membrane keyboard and bold headline “Introducing Atari 400.”

The Atari 400’s membrane keyboard was easy to wipe down, but terrible for typing. It also reminded me of fast food restaurant registers of the time.

Back in the early days of computing—the 1960s and ’70s—there was no distinction between users and programmers. Computer users wrote programs to do stuff for them. Hence the close relationship between the two that’s depicted in TRON. The programs in the digital world resembled their creators because they were extensions of them. Tron, the security program that Bruce Boxleitner’s character Alan Bradley wrote, looks like its creator. Clu looks like Kevin Flynn, played by Jeff Bridges. Early in the film, a compound interest program that has been captured by the MCP’s goons says to a cellmate, “If I don’t have a User, then who wrote me?”

Scene from the 1982 movie TRON showing programs in glowing blue suits standing in a digital arena.

The programs in TRON looked like their users. Unless the user was the program, which was the case with Kevin Flynn (Jeff Bridges), third from left.

I was listening to a recent interview with Ivan Zhao, CEO and cofounder of Notion, in which he said he and his cofounder were “inspired by the early computing pioneers who in the ’60s and ’70s thought that computing should be more LEGO-like rather than like hard plastic.” Meaning computing should be malleable and configurable. He goes on to say, “That generation of thinkers and pioneers thought about computing kind of like reading and writing.” As in accessible and fundamental so all users can be programmers too.

The 1980s ushered in the personal computer era with the Apple IIe, Commodore 64, TRS-80 (maybe even the Atari 400 and 800), and then the Macintosh. Programs were increasingly mass-produced and consumed by users, not written by them. To be sure, this shift made computers much more approachable. But it also meant that users lost a bit of control. They had to wait for Microsoft to add the feature they wanted to Word.

Of course, we’re now coming full circle. In 2025, with AI-enabled vibecoding, users can spin up little custom apps that do pretty much anything they want. It’s easy to get started, but not trivial to get right. The only interface is the chatbox, so your control is only as good as your prompts and the model’s understanding. And things can go awry pretty quickly if you’re not careful.

What we’re missing is something accessible, but controllable. Something with enough power to allow users to build a lot, but not so much that it requires high technical proficiency to produce something good. In 1987, Apple released HyperCard and shipped it for free with every new Mac. HyperCard, as fans declared at the time, was “programming for the rest of us.”

HyperCard—Programming for the Rest of Us

Black-and-white screenshot of HyperCard’s welcome screen on a classic Macintosh, showing icons for Tour, Help, Practice, New Features, Art Bits, Addresses, Phone Dialer, Graph Maker, QuickTime Tools, and AppleScript utilities.

HyperCard’s welcome screen showed some useful stacks to help the user get started.

Bill Atkinson was the programmer responsible for MacPaint. After the Mac launched, and inspired, apparently, by an acid trip, Atkinson conceived of HyperCard. As he wrote on the Apple history site Folklore:

Inspired by a mind-expanding LSD journey in 1985, I designed the HyperCard authoring system that enabled non-programmers to make their own interactive media. HyperCard used a metaphor of stacks of cards containing graphics, text, buttons, and links that could take you to another card. The HyperTalk scripting language implemented by Dan Winkler was a gentle introduction to event-based programming.

There were five main concepts in HyperCard: cards, stacks, objects, HyperTalk, and hyperlinks. 

  • Cards were screens or pages. Remember that the Mac’s nine-inch monochrome screen was just 512 pixels by 342 pixels.
  • Stacks were collections of cards, essentially apps.
  • Objects were the UI and layout elements that included buttons, fields, and backgrounds.
  • HyperTalk was the scripting language that read like plain English.
  • Hyperlinks were links from one interactive element like a button to another card or stack.

When I say that HyperTalk read like plain English, I mean it really did. AppleScript and JavaScript are descendants. Here’s a sample logic script:

if the text of field "Password" is "open sesame" then
  go to card "Secret"
else
  answer "Wrong password."
end if

Armed with this kit of parts, users treated HyperCard like a programming “erector set” and built all sorts of banal and wonderful apps. From tracking vinyl records to issuing invoices to transporting gamers to massive immersive worlds, HyperCard could do it all. The first version of the classic puzzle adventure game Myst was created with HyperCard. It comprised six stacks and 1,355 cards. From Wikipedia:

The original HyperCard Macintosh version of Myst had each Age as a unique HyperCard stack. Navigation was handled by the internal button system and HyperTalk scripts, with image and QuickTime movie display passed off to various plugins; essentially, Myst functions as a series of separate multimedia slides linked together by commands.

Screenshot from the game Myst, showing a 3D-rendered island scene with a ship in a fountain and classical stone columns.

The hit game Myst was built in HyperCard.

For a while, HyperCard was everywhere. Teachers made lesson plans. Hobbyists made games. Artists made interactive stories. In the Eighties and early Nineties, there was a vibrant shareware community: small independent developers who created and shared simple programs for a postcard, a beer, or five dollars. Thousands of HyperCard stacks were distributed on aggregated floppies and CD-ROMs. Steve Sande, writing in Rocket Yard:

At one point, there was a thriving cottage industry of commercial stack authors, and I was one of them. Heizer Software ran what was called the “Stack Exchange”, a place for stack authors to sell their wares. Like Apple with the current app stores, Heizer took a cut of each sale to run the store, but authors could make a pretty good living from the sale of popular stacks. The company sent out printed catalogs with descriptions and screenshots of each stack; you’d order through snail mail, then receive floppies (CDs at a later date) with the stack(s) on them.

Black-and-white screenshot of Heizer Software’s “Stack Exchange” HyperCard catalog, advertising a marketplace for stacks.

Heizer Software’s “Stack Exchange,” a marketplace for HyperCard authors.

From Stacks to Shrink-Wrap

But even as tiny shareware programs and stacks thrived, the ground beneath this cottage industry was beginning to shift. To move computers from a niche hobby to one in every household, the industry professionalized and commoditized software development, distribution, and sales. By the 1990s, the dominant model was packaged software merchandised on store shelves in slick shrink-wrapped boxes. The packaging was always oversized for the floppy or CD it contained to maximize shelf presence.

Unlike the users/programmers from the ’60s and ’70s, you didn’t make your own word processor anymore, you bought Microsoft Word. You didn’t build your own paint and retouching program—you purchased Adobe Photoshop. These applications were powerful, polished, and designed for thousands and eventually millions of users. But that meant if you wanted a new feature, you had to wait for the next upgrade cycle—typically a couple of years. If you had an idea, you were constrained by what the developers at Microsoft or Adobe decided was on the roadmap.

The ethos of tinkering gave way to the economics of scale. Software became something you consumed rather than created.

From Shrink-Wrap to SaaS

The 2000s took that shift even further. Instead of floppy disks or CD-ROMs, software moved into the cloud. Gmail replaced the personal mail client. Google Docs replaced the need for a copy of Word on every hard drive. Salesforce, Slack, and Figma turned business software into subscription services you didn’t own, but rented month-to-month.

SaaS has been a massive leap for collaboration and accessibility. Suddenly your documents, projects, and conversations lived everywhere. No more worrying about hard drive crashes or lost phones! But it pulled users even further away from HyperCard’s spirit. The stack you made was yours; the SaaS you use lives on someone else’s servers. You can customize workflows, but you don’t own the software.

Why Modern Tools Fall Short

For what started out as a note-taking app, Notion has come a long way. With its kit of parts—pages, databases, tags, etc.—it’s highly configurable for tracking information. But you can’t make games with it. Nor can you really tell interactive stories (sure, you can link pages together). You also can’t distribute what you’ve created and share it with the rest of the world. (Yes, you can create and sell Notion templates.)

No productivity software programs are malleable in the HyperCard sense. 


Of course, there are specialized tools for creativity. Unreal Engine and Unity are great for making games. Director and Flash continued the tradition started by HyperCard—at least in the interactive media space—before they were supplanted by more complex HTML5, CSS, and JavaScript. Objectively, these authoring environments are more complex than HyperCard ever was.

The Web’s HyperCard DNA

In a fun remembrance, Constantine Frantzeskos writes:

HyperCard’s core idea was linking cards and information graphically. This was true hypertext before HTML. It’s no surprise that the first web pioneers drew direct inspiration from HyperCard – in fact, HyperCard influenced the creation of HTTP and the Web itself. The idea of clicking a link to jump to another document? HyperCard had that in 1987 (albeit linking cards, not networked documents). The pointing finger cursor you see when hovering over a web link today? That was borrowed from HyperCard’s navigation cursor.

Ted Nelson coined the terms “hypertext” and “hyperlink” in the mid-1960s, envisioning a world where digital documents could be linked together in nonlinear “trails”—making information interwoven and easily navigable. Bill Atkinson’s HyperCard was the first mass-market program that popularized this idea, even influencing Tim Berners-Lee, the father of the World Wide Web. Berners-Lee’s invention was about linking documents together on a server and linking to other documents on other servers. A web of documents.

Early ViolaWWW hypermedia browser from 1993, displaying a window with navigation buttons, URL bar, and hypertext description.

Early web browser from 1993, ViolaWWW, directly inspired by the concepts in HyperCard.

Pei-Yuan Wei, developer of one of the first web browsers called ViolaWWW, also drew direct inspiration from HyperCard. Matthew Lasar writing for Ars Technica:

“HyperCard was very compelling back then, you know graphically, this hyperlink thing,” Wei later recalled. “I got a HyperCard manual and looked at it and just basically took the concepts and implemented them in X-windows,” which is a visual component of UNIX. The resulting browser, Viola, included HyperCard-like components: bookmarks, a history feature, tables, graphics. And, like HyperCard, it could run programs.

And of course, with the built-in source code viewer, browsers brought on a new generation of tinkerers who’d look at HTML and make stuff by copying, tweaking, and experimenting.

The Missing Ingredient: Personal Software

Today, we have low-code and no-code tools like Bubble for making web apps, Framer for building websites, and Zapier for automations. These tools are still aimed at professionals, though. With the possible exception of Zapier and IFTTT, they’ve expanded the number of people who can make software (including websites), but they’re not general purpose. They’re all adjacent to what HyperCard was.

(Re)enter personal software.

In an essay titled “Personal software,” Lee Robinson wrote, “You wouldn’t search ‘best chrome extensions for note taking’. You would work with AI. In five minutes, you’d have something that works exactly how you want.”

Exploring the idea of “malleable software,” researchers at Ink & Switch wrote:

How can users tweak the existing tools they’ve installed, rather than just making new siloed applications? How can AI-generated tools compose with one another to build up larger workflows over shared data? And how can we let users take more direct, precise control over tweaking their software, without needing to resort to AI coding for even the tiniest change? None of these questions are addressed by products that generate a cloud-hosted application from a prompt.

Of course, AI prompt-to-code tools have been emerging this year, allowing anyone who can type to build web applications. However, if you study these tools more closely—Replit, Lovable, Base44, etc.—you’ll find that the audience is still technical people. Developers, product managers, and designers can understand what’s going on. But not everyday people.

These tools are still missing the ingredients HyperCard had, the ones that kept it in the general zeitgeist for a while and enabled users to be programmers again.

They are:

  • Direct manipulation
  • Technical abstraction
  • Local apps

What Today’s Tools Still Miss

Direct Manipulation

As I concluded in my exhaustive AI prompt-to-code tools roundup from April, “We need to be able to directly manipulate components by clicking and modifying shapes on the canvas or changing values in an inspector.” The latency of the roundtrip of prompting the model, waiting for it to think and then generate code, and then rebuild the app is much too long. If you don’t know how to code, every change takes minutes, so building something becomes tedious, not fun.

Tools need to be canvas-first, not chatbox-first. Imagine a kit of UI elements on the left that you can drag onto the canvas and then configure and style—not unlike WordPress page builders.

AI is there to do the work for you if you want, but you don’t need to use it.

Hand-drawn sketch of a modern HyperCard-like interface, with a canvas in the center, object palette on the left, and chat panel on the right.

My sketch of the layout of what a modern HyperCard successor could look like. A directly manipulatable canvas is in the center, object palette on the left, and AI chat panel on the right.

Technical Abstraction

For the general public, I believe these tools should hide away all the JavaScript, TypeScript, etc. The thing the user is building should just work.

Additionally, there’s an argument to be made to bring back HyperTalk or something similar. Here is the same password logic I showed earlier, but in modern-day JavaScript:

// Read the value of the password field
const password = document.getElementById("Password").value;

if (password === "open sesame") {
  // Navigate to the secret page (the equivalent of going to card "Secret")
  window.location.href = "secret.html";
} else {
  // Show an error dialog
  alert("Wrong password.");
}

No everyday user is going to understand that, much less write something like it.

One could argue that the user doesn’t need to understand that code since the AI will write it. Sure, but code is also documentation. If a user is working on an immersive puzzle game, they need to know the algorithm for the solution. 
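To make that concrete, here is a hypothetical sketch (invented for illustration, not from any real game) of code as documentation: a solution check for an imagined three-lever puzzle, written plainly enough that the creator can read the rules straight back out of it.

```javascript
// Hypothetical example: the solution check for an invented puzzle.
// The rule is readable directly from the code: the three levers
// must be set to down, up, down, in that order.
const SOLUTION = ["down", "up", "down"];

function puzzleSolved(levers) {
  return (
    levers.length === SOLUTION.length &&
    levers.every((position, i) => position === SOLUTION[i])
  );
}

console.log(puzzleSolved(["down", "up", "down"])); // true
console.log(puzzleSolved(["up", "up", "down"]));   // false
```

Even someone who has never programmed can glance at `SOLUTION` and recall how their own puzzle works, which is exactly the role HyperTalk scripts played for stack authors.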

As a side note, I think flow charts or node-based workflows are great. Unreal Engine’s Blueprints visual scripting is fantastic. Again, AI should be there to assist.

Unreal Engine Blueprints visual scripting interface, with node blocks connected by wires representing game logic.

Unreal Engine has a visual scripting interface called Blueprints, with node blocks connected by wires representing game logic.

Local Apps

HyperCard’s file format was the stack, and stacks could be compiled into applications that could be distributed without HyperCard. Today’s cloud-based AI coding tools can all publish a project to a unique URL for sharing. That’s great for prototyping and for personal use, but if you wanted to distribute your creation as shareware or donation-ware, you’d have to map it to a custom domain name, and purchasing one from a registrar and dealing with DNS records isn’t straightforward for most people.

What if these web apps could be turned into a single exchangeable file format like “.stack” or some such? Furthermore, what if they could be wrapped into executable apps via Electron?

Rip, Mix, Burn

Lovable, v0, and others already have sharing and remixing built in. This ethos is great and builds on the philosophies of the hippie computer scientists. In addition to fostering a remix culture, I imagine a centralized store for these apps. Of course, those that are published as runtime apps can go through the official Apple and Google stores if they wish. Finally, nothing stops third-party stores, similar to the collections of stacks that used to be distributed on CD-ROMs.

AI as Collaborator, Not Interface

As mentioned, AI should not be the main UI for this. Instead, it’s a collaborator. It’s there if you want it. I imagine that it can help with scaffolding a project just by describing what you want to make. And as it’s shaping your app, it’s also explaining what it’s doing and why so that the user is learning and slowly becoming a programmer too.

Democratizing Programming

When my daughter was in middle school, she used a site called Quizlet to make flash cards to help her study for history tests. There were often user-generated sets of cards for certain subjects, but there were never sets specifically for her class, her teacher, that test. With this HyperCard of the future, she would be able to build something custom in minutes.
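As a sketch of how small such a personal study app could be, here is a hypothetical core (the deck contents, function name, and case-insensitive matching are all my own assumptions for illustration):

```javascript
// Hypothetical sketch of a tiny personal flashcard app's core logic:
// a deck keyed to one specific class, and a forgiving answer check.
const cards = [
  { front: "What is the capital of California?", back: "Sacramento" },
  { front: "What is the capital of Texas?", back: "Austin" },
];

function checkAnswer(deck, front, answer) {
  // Find the card being studied, then compare answers case-insensitively
  const card = deck.find((c) => c.front === front);
  return (
    card !== undefined &&
    card.back.toLowerCase() === answer.trim().toLowerCase()
  );
}

console.log(checkAnswer(cards, "What is the capital of Texas?", "austin")); // true
console.log(checkAnswer(cards, "What is the capital of Texas?", "Dallas")); // false
```

The point isn’t the code itself; it’s that a student describing “flash cards for Ms. So-and-so’s Tuesday test” could have an AI scaffold something this simple, then tweak the deck directly.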

Likewise, a small business owner who runs an Etsy shop selling T-shirts can spin up something a little more complicated to analyze sales and compare against overall trends in the marketplace.

And that same Etsy shop owner could sell the little app they made to others wanting the same tool for their stores.

The Future Is Close

Scene from TRON showing a program with raised arms, looking upward at a floating disc in a beam of light.

Tron talks to his user, Alan Bradley, via a communication beam.

In an interview with Garry Tan of Y Combinator in June, Michael Truell, the CEO of Anysphere, which is the company behind Cursor, said his company’s mission is to “replace coding with something that’s much better.” He acknowledged that coding today is really complicated:

Coding requires editing millions of lines of esoteric formal programming languages. It requires doing lots and lots of labor to actually make things show up on the screen that are kind of simple to describe.

Truell believes that in five to ten years, making software will boil down to “defining how you want the software to work and how you want the software to look.”

In my opinion, his timeline is a bit conservative, but maybe he means for professionals. I wonder if something simpler will come along sooner that will capture the imagination of the public, like ChatGPT has. Something that will encourage playing and tinkering like HyperCard did.

A third TRON film is coming out soon: TRON: Ares. In a panel discussion in the 5,000-seat Hall H at San Diego Comic-Con earlier this summer, Steven Lisberger, the creator of the franchise, offered this warning about AI: “Let’s kick the technology around artistically before it kicks us around.” While he meant it as a warning, I think it’s an opportunity as well.

AI opens up computer “programming” to a much larger swath of people—hell, everyone. As an industry, we should encourage tinkering by building such capabilities into our products. Not UIs on the fly, but mods as necessary. We should build platforms that increase the pool of users from technical people to everyday users like students, high school teachers, and grandmothers. We should imagine a world where software is as personalizable as a notebook—something you can write in, rearrange, and make your own. And maybe users can be programmers once again.

Figma is expanding its keyboard shortcuts to improve navigation and selection for power users and for keyboard-only users. It’s a win-win that improves accessibility and efficiency. Sarah Kelley, a product marketer at Figma, writes:

For millions, navigating digital tools with a keyboard isn’t just about preference for speed and ergonomics—it’s a fundamental need. …

We’re introducing a series of new features that remove barriers for keyboard-only designers across most Figma products. Users can now pan the canvas, insert objects, and make precise selections quickly and easily. And, with improved screen reader support, these actions are read aloud as users work, making it easier to stay oriented.

Nice work!


Who Says Design Needs a Mouse?

Figma's new accessibility features bring better keyboard and screen reader support to all creators.


Kendra Albert, writing in her blog post about Heavyweight, a new tool she built to create “extremely law-firm-looking” letters:

Sometimes, you don’t need a lawyer, you just need to look like you have one.

That’s the idea behind Heavyweight, a project that democratizes the aesthetics of (in lieu of access to) legal representation. Heavyweight is a free, online, and open-source tool that lets you give any complaint you have extremely law-firm-looking formatting and letterhead. Importantly, it does so without ever using any language that would actually claim that the letter was written by a lawyer.


Heavyweight: Letters Taken Seriously - Free & Open Legal Letterhead Generator

Generate professional-looking demand letters with style and snootiness

Retro-style robot standing at a large control panel filled with buttons, switches, and monitors displaying futuristic data.

The Era of the AI Browser Is Here

For nearly three years, Arc from The Browser Company has been my daily driver. To be sure, there was a little bit of a learning curve. Tabs disappeared after a day unless you pinned them. Then they became almost like bookmarks. Tabs were on the left side of the window, not at the top. Spaces let me organize my tabs based on use cases like personal, work, or finances. I could switch between tabs using control-Tab and saw little thumbnails of the pages, similar to the app switcher on my Mac. Shift-command-C copied the current page’s URL. 

All these little interface ideas added up to a productivity machine for web jockeys like myself. And so, I was saddened to hear in May that The Browser Company stopped actively developing Arc in favor of a new AI-powered browser called Dia. (They are keeping Arc updated with maintenance releases.)

They had started beta-testing Dia with college students first and just recently opened it up to Arc members. I finally got access to Dia a few weeks ago. 

But before diving into Dia, I should mention I also got access to another AI browser, Perplexity’s Comet about a week ago. I’m on their Pro plan but somehow got an invite in my email. I had thought it was limited to those on their much more expensive Max plan only. Shhh.

So this post is about both and how the future of web browsing is obviously AI-assisted, because it feels so natural.

Chat With Your Tabs

Landing page for Dia, a browser tool by The Browser Company, showcasing the tagline “Write with your tabs” and a button for early access download, along with a UI mockup for combining tabs into a writing prompt.

To be honest, I used Dia in fits and starts. It was easy to import my profiles from Arc and have all my bookmarks transferred over. But I was missing all the pro-level UI niceties that Arc had. Tabs were back at the top and acted like tabs (though Dia just brought back sidebar tabs in the last week). There were no Spaces. I felt like it was 2021 all over again. I tried to stick with it for a week.

What Dia offers that Arc does not is, of course, a way to “chat” with your tabs. It’s a chat sidebar to the right of the web page that has the context of that page you’re on. You can also add additional tabs to the chat context by simply @mentioning them.

In a recent article about Dia in The New York Times, reporter Brian X. Chen describes using it to summarize a 22-minute YouTube video about car jump starters, instantly surfacing the top products without watching the whole thing. This is a vivid illustration of the “chat with your tabs” value prop. Saving time.

I’ve been doing the same thing: asking the chat to summarize a page for me or explain some technical documentation in plain English. Or I use it as a fuzzy search to find a quote from the page that mentions something specific. For example, if I’m reading an interview with the CEO of Perplexity and I want to know if he’s tried the Dia browser yet, I can ask, “Has he used Dia yet?” instead of reading through the whole thing.

Screenshot of the Dia browser displaying a Verge article about Perplexity’s CEO, with an AI-generated sidebar summary clarifying that Aravind Srinivas has not used Dia.

Dia’s sidebar answering a question about the article without my having to read the whole thing.

Another use case is to open a few tabs and ask for advice. For example, I can open up a few shirts from an e-commerce store and ask for a recommendation.

Screenshot of the Dia browser comparing shirts on the Bonobos website, with multiple tabs open for different shirt styles. The sidebar displays AI-generated advice recommending the Everyday Oxford Shirt for a smart casual look, highlighting its versatility, fit options, and stretch comfort.

Using Dia to compare shirts and get a smart casual recommendation from the AI.

Dia also has customizable “skills,” which are essentially pre-saved prompts. I made one to craft summary bios from LinkedIn profiles.

Screenshot of the Dia browser on Josh Miller’s LinkedIn profile, with the “skills” feature generating a summarized biography highlighting his role as CEO of The Browser Company and his career background.

Using Dia’s skills feature to generate a summarized biography from a LinkedIn profile.

It’s cool. But I found that it’s a little limited because the chat is usually just with the tabs that you feed Dia. It helps you digest and process information. In other words, it’s an incremental step up from ChatGPT.

Enter Comet.

Browsing Done for You

Landing page for Comet, an AI-powered browser by Perplexity, featuring the tagline “Browse at the speed of thought” with a prominent “Get Comet” download button.

Comet by Perplexity also allows you to chat with your tabs. Asking about that Verge interview, I received a very similar answer. (No, Aravind Srinivas has not used Dia yet.) And because Perplexity search is integrated into Comet, I find that it is much better at context-setting and answering questions than Dia. But that’s not Comet’s killer feature.

Screenshot of the Comet browser displaying a Verge article about Perplexity’s CEO, with the built-in AI assistant on the right confirming Aravind Srinivas has not used the Dia browser.

Viewing the same article in Comet, with its AI assistant answering questions about the content.

Instead, it’s doing stuff with your tabs. Comet’s onboarding experience shows a few use cases like replying to emails and setting meetings, or filling an Instacart cart with the ingredients for butter chicken.

Just like Dia, when I first launched Comet, I was able to import my profiles from Arc, which included bookmarks and cookies. I was essentially still logged into all the apps and sites I was already logged into. So I tried an assistant experiment. 

One thing I often do is cross-reference restaurants that have availability on OpenTable against their Yelp ratings. I tend to agree more with Yelpers, who are usually harsher critics than OpenTable diners. So I asked Comet to “Find me the highest rated sushi restaurants in San Diego that have availability for 2 at 7pm next Friday night on OpenTable. Pick the top 10 and then rank them by Yelp rating.” And it worked! If I had really wanted to, I could have said “Book Takaramono sushi” and it would have done so. (Actually, I did, and then quickly canceled.)

The Comet assistant helped me find a sushi restaurant reservation. Video is sped up 4x.

I tried a different experiment, something I heard Aravind Srinivas mention in his interview with The Verge. I navigated to Gmail and checked three emails I wanted to unsubscribe from. I asked the assistant to “unsubscribe from the checked emails.” The agent then essentially took over my Gmail screen, opened the first checked email, and clicked its unsubscribe link. It repeated this process for the other two emails, though it ran into a couple of snags. First, Gmail doesn’t keep the state of the checked emails when you click into an email, but the Comet assistant was smart enough to remember the subject lines of all three. Second, it had some issues filling out the right email address in one unsubscribe form, so that one didn’t work. Of the three unsubscribes, it succeeded on two.

The whole process also took about two minutes. It was wild, though, to see my Gmail being navigated by the machine. So that you know it’s in control, Comet puts a teal glow around the edges of the page, not dissimilar to the purple glow of the new Siri. And I could have stopped Comet at any time by clicking a stop button. Obviously, sitting there for two minutes and watching my computer unsubscribe from three emails is a lot longer than the 20 seconds it would have taken me to do it manually. But as with many agents, the idea is to delegate a process and come back later to check on it.

I Want My AI Browser

A couple of hours after Perplexity launched Comet, Reuters published a leak with the headline “Exclusive: OpenAI to release web browser in challenge to Google Chrome.” Perplexity’s CEO seems to suggest the timing was deliberate, meant to take a bit of wind out of Comet’s sails. Meanwhile, the Justice Department is still trying to strong-arm Google into divesting itself of Chrome. If that happens, we’re talking about breaking the most profitable feedback loop in tech history: Chrome funnels search queries directly to Google, which powers its ad empire, which funds Chrome development. Break that cycle, and suddenly you’ve got an independent Chrome that could default to any search engine, giving AI-first challengers like The Browser Company, Perplexity, and OpenAI a real shot at users.

Regardless of Chrome’s fate, I strongly believe that AI-enabled browsers are the future. Once I started chatting with my tabs, asking for summaries, seeking clarification, asking for too-technical content to be dumbed down to my level, I just can’t go back. The agentic stuff that Perplexity’s Comet is at the forefront of is just the beginning. It’s not perfect yet, but I think its utility will get there as the models get better. To quote Srinivas again:

I’m betting on the fact that in the right environment of a browser with access to all these tabs and tools, a sufficiently good reasoning model — like slightly better, maybe GPT-5, maybe like Claude 4.5, I don’t know — could get us over the edge where all these things are suddenly possible and then a recruiter’s work worth one week is just one prompt: sourcing and reach outs. And then you’ve got to do state tracking… That’s the extent to which we have an ambition to make the browser into something that feels more like an OS where these are processes that are running all the time.

It must be said that both Opera and Microsoft’s Edge also have AI built in. However, the way those features are integrated feels more like an afterthought, the same way that Arc’s own AI features felt like tiny improvements.

The AI-powered ideas in both Dia and Comet are a step change. But the basics also have to be there, and in my opinion, should be better than what Chrome offers. The interface innovations that made Arc special shouldn’t be sacrificed for AI features. Arc is/was the perfect foundation. Integrate an AI assistant that can be personalized to care about the same things you do so its summaries are relevant. The assistant can be agentic and perform tasks for you in the background while you focus on more important things. In other words, put Arc, Dia, and Comet in a blender and that could be the perfect browser of the future.

Since its debut at Config back in May, Figma has steadily added practical features to Figma Make for product teams. Supabase integration now allows for authentication, data storage, and file uploads. Designers can import design system libraries, which helps maintain visual consistency. Real-time collaboration has improved, giving teams the ability to edit code and prototypes together. The tool now supports backend connections for managing state and storing secrets. Prototypes can be published to custom domains. These changes move Figma Make closer to bridging the gap between design concepts and advanced prototypes.

In my opinion, there’s a stronger relationship between Sites and Make than between Make and Design. The Make-generated code may be slightly better than when Sites debuted, but it is still not semantic.

Anyhow, I think Make is great for prototyping and it’s convenient to have it built right into Figma. Julius Patto, writing in UX Collective:

Prompting well in Figma Make isn’t about being clever, it’s about being clear, intentional, and iterative. Think of it as a new literacy in the design toolkit: the better you get at it, the more you unlock AI’s potential without losing your creative control.


How to prompt Figma Make’s AI better for product design

Learn how to use AI in Figma Make with UX intention, from smarter prompts to inclusive flows that reflect real user needs.

uxdesign.cc

Here we go. Figma has just dropped their S-1, or their registration for an initial public offering (IPO).

A financial metrics slide showing Figma's key performance indicators on a dark green background. The metrics displayed are: $821M LTM revenue, 46% YoY revenue growth, 18% non-GAAP operating margin, 91% gross margin, 132% net dollar retention, 78% of Forbes 2000 companies use Figma, and 76% of customers use 2 or more products.

Rollup of stats from Figma’s S-1.

While a lot of the risk factors are boilerplate—legalese to cover their bases—the one about AI is particularly interesting: “Competitive developments in AI and our inability to effectively respond to such developments could adversely affect our business, operating results, and financial condition.”

Developments in AI are already impacting the software industry significantly, and we expect this impact to be even greater in the future. AI has become more prevalent in the markets in which we operate and may result in significant changes in the demand for our platform, including, but not limited to, reducing the difficulty and cost for competitors to build and launch competitive products, altering how consumers and businesses interact with websites and apps and consume content in ways that may result in a reduction in the overall value of interface design, or by otherwise making aspects of our platform obsolete or decreasing the number of designers, developers, and other collaborators that utilize our platform. Any of these changes could, in turn, lead to a loss of revenue and adversely impact our business, operating results, and financial condition.

There’s a lot of uncertainty they’re highlighting:

  • Could competitors use AI to build competing products?
  • Could AI reduce the need for websites and apps, which decreases the need for interfaces?
  • Could companies reduce workforces, thus reducing the number of seats they buy?

These are all questions the greater tech industry is asking.


Figma Files Registration Statement for Proposed IPO | Figma Blog

An update on Figma's path to becoming a publicly traded company: our S-1 is now public.

figma.com

Darragh Burke and Alex Kern, software engineers at Figma, writing on the Figma blog:

Building code layers in Figma required us to reconcile two different models of thinking about software: design and code. Today, Figma’s visual canvas is an open-ended, flexible environment that enables users to rapidly iterate on designs. Code unlocks further capabilities, but it’s more structured—it requires hierarchical organization and precise syntax. To reconcile these two models, we needed to create a hybrid approach that honored the rapid, exploratory nature of design while unlocking the full capabilities of code.

The solution turned out to be code layers: actual canvas primitives that can be manipulated just like a rectangle and that respect auto layout properties, opacity, border radius, and so on.

The solution we arrived at was to implement code layers as a new canvas primitive. Code layers behave like any other layer, with complete spatial flexibility (including moving, resizing, and reparenting) and seamless layout integration (like placement in autolayout stacks). Most crucially, they can be duplicated and iterated on easily, mimicking the freeform and experimental nature of the visual canvas. This enables the creation and comparison of different versions of code side by side. Typically, making two copies of code for comparison requires creating separate git branches, but with code layers, it’s as easy as pressing ⌥ and dragging. This automatically creates a fork of the source code for rapid riffing.

In my experience, it works as advertised, though the code layer element takes a second to re-render when its spatial properties are edited. That makes sense, since it’s rendering code.


Canvas, Meet Code: Building Figma’s Code Layers

What if you could design and build on the same canvas? Here's how we created code layers to bring design and code together.

figma.com

Peter Yang has been doing some amazing experiments with gen AI tools. There are so many models out there now, so I appreciate him going through and making this post and video.

I made a video testing Claude 4, ChatGPT O3, and Gemini 2.5 head-to-head for coding, writing, deep research, multimodal and more. What I found was that the “best” model depends on what you’re trying to do.

Here’s a handy chart to whet your appetite.

Comparison chart of popular AI tools (ChatGPT, Claude, Gemini, Grok, Perplexity) showing their capabilities across categories like writing, coding, reasoning, web search, and image/video generation, with icons indicating best performance (star), available (check), or unavailable (X). Updated June 2025.


ChatGPT vs Claude vs Gemini: The Best AI Model for Each Use Case in 2025

Comparing all 3 AI models for coding, writing, multimodal, and 6 other use cases

creatoreconomy.so

I’ve been focused a lot on AI for product design recently, but I think it’s just as important to talk about AI for web design. Though I spend my days now leading a product design team and thinking a lot about UX for creating enterprise software, web design is still a large part of the design industry, as evidenced by the big interest in Framer in the recent Design Tools Survey.

Eric Karkovack writing for The WP Minute:

Several companies have released AI-based site generators; WordPress.com is among the latest. Our own Matt Medeiros took it for a spin. He “chatted” with a friendly bot that wanted to know more about his website needs. Within minutes, he had a website powered by WordPress.

These tools aren’t producing top agency-level websites just yet. Maybe they’re a novelty for the time being. But they’ll improve. With that comes the worry of their impact on freelancers. Will our potential clients choose a bot over a seasoned expert?

Karkovack is right. Current AI tools aren’t making well-thought-out custom websites yet. So as an agency owner or freelance designer, you have to defend your position of expertise and customer service:

Those tools have a place in the market. However, freelancers and agencies must position themselves as the better alternative. We should emphasize our expertise and attention to detail, and communicate that AI is a helpful tool, not a magic wand.

But Karkovack misses an opportunity to offer sage advice, which I will do here. Take advantage of these tools in your workflow so that you can be more efficient in your delivery. If you’re in the WordPress ecosystem, use AI to generate some layout ideas, write custom JavaScript, make custom plugins, or write some copy. These AI tools are game-changing, so don’t rest on your laurels.


What Do AI Site Builders Mean for Freelancers?

Being a freelance web designer often means dealing with disruption. Sometimes, it’s a client who needs a new feature built ASAP. But it can also come from a shakeup in the technology we use. Artificial intelligence (AI) has undoubtedly been a disruptive force. It has upended our workflows and made…

thewpminute.com
Surreal, digitally manipulated forest scene with strong color overlays in red, blue, and purple hues. A dark, blocky abstract logo is superimposed in the foreground.

Thoughts on the 2024 Design Tools Survey

Tommy Geoco and team are finally out with the results of their 2024 UX Design Tools Survey.

First, two quick observations before I move on to longer ones:

  • The respondent population of 2,200+ designers is well-balanced among company size, team structure, client vs. product focus, and leadership responsibility
  • Predictably, Figma dominates the tools stacks of most segments

Surprise #1: Design Leaders Use AI More Than ICs

Bar chart comparing AI adoption rates among design leaders and ICs across different work environments. Agency leaders show the highest adoption at 88.7%, followed by startups, growth-stage, and corporate environments.

From the summary of the AI section:

Three clear patterns emerge from our data:

  1. Leadership-IC Divide. Leaders adopt AI at a higher rate (29.0%) than ICs (19.9%)
  2. Text-first adoption. 75.2% of AI usage focuses on writing, documentation, and content—not visuals
  3. Client Influence. Client-facing designers show markedly higher AI adoption than internal-facing peers

That nine-point difference is interesting. The report doesn’t go into speculating why, but here are some possible reasons:

  • Design leaders are experimenting with AI tools looking for efficiency gains
  • Leaders write more than they design, so they’re using AI more for emails, memos, reports, and presentations
  • ICs are set in their processes and don’t have time to experiment

Bar chart showing that most AI usage is for text-based tasks like copywriting, documentation, and content generation. Visual design tasks such as wireframes, assets, and components are much less common.

I believe that any company operating with resource constraints—which is all startups—needs to embrace AI. AI enables us to do more. I don’t believe—at least not yet—mid- to senior-level jobs are on the line. Engineers can use Cursor to write code, sure, but it’s probably better for them to give Cursor junior-level tasks like bug fixes. Designers should use AI to generate prototypes so that they can test and iterate on ideas more quickly. 

Bar chart showing 17.7% of advanced prototypers use code-based tools like SwiftUI, HTML/CSS/JS, React, and Flutter. Ratings indicate high satisfaction with these approaches, signaling a shift toward development-integrated prototyping.

The data here is stale, unfortunately. The survey was conducted between November 2024 and January 2025, just as the AI prompt-to-code tools were coming to market. I suspect we will see a huge jump in next year’s results.

Surprise #2: There’s Excitement for Framer

“Future of Design Award” banner featuring the Framer logo. Below, text explains the award celebrates innovations shaping design’s future, followed by “Winner: Framer.” Three key stats appear: 10.0% of respondents ranked Framer as a 2025 “tool to try,” 12.1% share in portfolio-building (largest in its category), and a 4.57 / 5 average satisfaction rating (tied for highest).

I’m surprised about Framer winning the “Future of Design” award. Maybe it’s the name of the award; does Framer really represent the “future of design”? Ten percent of respondents say they want to try this. 

I’ve not gone back to Framer since its early days, when it supported code exports. I will give them kudos that they’ve pivoted and built a solid business and platform. I’m personally wary of creating websites for clients in a closed platform; I would rather it be portable, like a Node.js app or even WordPress. But to each their own.

Not Surprised at All

In the report’s conclusion, its first two points are unsurprising:

  1. AI enters the workflow. 8.5% of designers cited AI tools as their top interest for 2025. With substantial AI tooling innovation in early 2025, we expect widespread adoption to accelerate next year.

Like I mentioned earlier, I think this will shift big time. 

  2. Design-code gap narrows. Addressing the challenge faced by 46.3% of teams reporting inconsistencies between design system specifications and code implementations.

As I said in a previous essay on the future of product design, the design-to-code gap is begging to be solved, “For any designer who has ever handed off a Figma file to a developer, they have felt the stinging disappointment days or weeks later when it’s finally coded.…The developer handoff experience has been a well-trodden path full of now-defunct or dying companies like InVision, Abstract, and Zeplin.”

Reminder: The Tools Don’t Make You a Better Designer

Inevitably, someone in the comments section will point this out: Don’t focus on the tool. To quote photographer and camera reviewer Ken Rockwell, “Cameras don’t take pictures, photographers do. Cameras are just another artist’s tool.” Tools are commodities, but our skills as craftspeople, thinkers, curators, and tastemakers are not.

Josh Miller, writing in The Browser Company’s substack:

After a couple of years of building and shipping Arc, we started running into something we called the “novelty tax” problem. A lot of people loved Arc — if you’re here you might just be one of them — and we’d benefitted from consistent, organic growth since basically Day One. But for most people, Arc was simply too different, with too many new things to learn, for too little reward.

“Novelty tax” is another way of saying using non-standard patterns that users just didn’t get. I love Arc. It’s my daily driver. But, Miller is right that it does have a steep learning curve. So there is a natural ceiling to their market.

Miller’s conclusion is where things get really interesting:

Let me be even more clear: traditional browsers, as we know them, will die. Much in the same way that search engines and IDEs are being reimagined [by AI-first products like Perplexity and Cursor]. That doesn’t mean we’ll stop searching or coding. It just means the environments we do it in will look very different, in a way that makes traditional browsers, search engines, and IDEs feel like candles — however thoughtfully crafted. We’re getting out of the candle business. You should too.

“You should too.”

And finally, to bring it back to the novelty tax:

**New interfaces start from familiar ones.** In this new world, two opposing forces are simultaneously true. How we all use computers is changing much faster (due to AI) than most people acknowledge. Yet at the same time, we’re much farther from completely abandoning our old ways than AI insiders give credit for. Cursor proved this thesis in the coding space: the breakthrough AI app of the past year was an (old) IDE — designed to be AI-native. OpenAI confirmed this theory when they bought Windsurf (another AI IDE), despite having Codex working quietly in the background. We believe AI browsers are next.

Sad to see Arc’s slow death, but excited to try Dia soon.


Letter to Arc members 2025

On Arc, its future, and the arrival of AI browsers — a moment to answer the largest questions you've asked us this past year.

browsercompany.substack.com
Colorful illustration featuring the Figma logo on the left and a whimsical character operating complex, abstract machinery with gears, dials, and mechanical elements in vibrant colors against a yellow background.

Figma Make: Great Ideas, Nowhere to Go

Nearly three weeks after it was introduced at Figma Config 2025, I finally got access to Figma Make. It’s in beta, and Figma made sure we all know it. So I’ll say upfront that it’s a bit unfair to do an official review. However, many of the tools in my AI prompt-to-code shootout article are also in beta.

Since this review is fairly visual, I made a video as well that summarizes the points in this article pretty well.


The Promise: One-to-One With Your Design

Figma's Peter Ng presenting on stage with large text reading "0→1 but 1:1 with your designs" against a dark background with purple accent lighting.

Figma’s Peter Ng presenting on stage Make’s promise: “0→1 but 1:1 with your designs.”

“What if you could take an idea not only from zero to one, but also make it one-to-one with your designs?” said Peter Ng, product designer at Figma. Just like all the other AI prompt-to-code tools, Figma Make is supposed to enable users to prompt their way to a working application. 

The Figma spin is that there’s more control over the output. Click an element and have the prompt only apply to that element. Or also click on something in the canvas and change some details like the font family, size, or color. 

The other Figma advantage is to be able to use pasted Figma designs for a more accurate translation to code. That’s the “one-to-one” Ng refers to.

The Reality: Falls Short

I evaluated Figma Make with my standard checkout-flow prompt (covering the zero-to-one use case), with a second prompt, and with a pasted design (the one-to-one case).

Let’s get the standard evaluation out of the way before moving onto a deeper dive.

Figma Make Scorecard

Figma Make scorecard showing a total score of 58 out of 100, with breakdown: User experience 18/25, Visual design 13/15, Prototype 8/10, Ease of use 9/15, Design Control 6/15, Design system integration 0/15, Speed 9/10, and Editor's Discretion -5/10.

I ran the same prompt through it as the other AI tools:

Create a complete shopping cart checkout experience for an online clothing retailer

Figma Make’s score totaled 58, which puts it squarely in the middle of the pack. This was for a variety of reasons.

The quality of the generated output was pretty good. The UI was nice and clean, if a bit unstyled. This is because Make uses Shadcn UI components. Overall, the UX was exactly what I would expect. Perhaps a progress bar would have been a nice touch.

The generation was fast, clocking in at three minutes, which puts it near the top in terms of speed.

And the fine-grained editing sort of worked as promised. However, my manual changes were sometimes overridden if I used the chat.

Where It Actually Shines

Figma Make interface showing a Revenue Forecast Calculator with a $200,000 total revenue input, "Normal" distribution type selected, monthly breakdown table showing values from January ($7,407) to December ($7,407), and an orange bar chart displaying the normal distribution curve across 12 months with peak values in summer months.

The advantage of these prompt-to-code tools is that it’s really easy to prototype—maybe it’s even production-ready?—complex interactions.

To test this, I used a new prompt:

Build a revenue forecast calculator. It should take the input of a total budget from the user and automatically distribute the budget to a full calendar year showing the distribution by month. The user should be able to change the distribution curve from “Even” to “Normal” where “Normal” is a normal distribution curve.

Along with the prompt, I also included a wireframe as a still image. This gave the AI some idea of the structure I was looking for, at least.

The resulting generation was great and the functionality worked as expected. I iterated the design to include a custom input method and that worked too.
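For readers curious what the tool actually had to produce, the core allocation logic is simple enough to sketch. Here’s a minimal Python version of what such a calculator needs to do—distribute a total evenly or by Gaussian weights, then normalize so the months sum back to the total. The function and parameter names are my own, not Figma Make’s output:

```python
import math

def distribute_budget(total: float, curve: str = "even",
                      mean: float = 6.5, stddev: float = 2.5) -> list[float]:
    """Split `total` across 12 months, either evenly or on a normal curve."""
    months = range(1, 13)
    if curve == "even":
        weights = [1.0] * 12
    else:  # "normal": Gaussian weights centered mid-year
        weights = [math.exp(-((m - mean) ** 2) / (2 * stddev ** 2)) for m in months]
    scale = total / sum(weights)  # normalize so the months sum to the total
    return [round(w * scale, 2) for w in weights]

even = distribute_budget(200_000, "even")      # ~16,666.67 per month
normal = distribute_budget(200_000, "normal")  # peaks in June/July
```

The normalization step is the part that matters: whatever the curve, the monthly values must still add back up to the user’s input.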

The One-to-One Promise Breaks Down

I wanted to see how well Figma Make would work with a well-structured Figma Design file. So I created a homepage for a fictional fitness instructor using auto layout frames, structuring the file as I would divs in HTML.

Figma Design interface showing the original "Body by Reese" fitness instructor homepage design with layers panel on left, main canvas displaying the Pilates hero section and content layout, and properties panel on right. This is the reference design that was pasted into Figma Make for testing.

This is the reference design that was pasted into Figma Make for testing. Notice the well-structured layers!

Then I pasted the design into the chatbox and included a simple prompt. The result was…disappointing. The layout was correct but the type and type sizes were all wrong. I input that feedback into the chat and then the right font finally appeared. 

Then I manually updated the font sizes and got the design looking pretty close to my original. There was one problem: an image was the wrong size and not proportionally scaled. So I asked the AI to fix it.

Figma Make interface showing a fitness instructor homepage with "Body by Reese" branding, featuring a hero image of someone doing Pilates with "Sculpt. Strengthen. Shine." text overlay, navigation menu, and content section with instructor photo and "Book a Class" call-to-action button.

Figma Make’s attempt at translating my Figma design to code.

The AI did not fix it and reverted some of my manual overrides for the fonts. In many ways this is significantly worse than not giving designers fine-grained control in the first place. Overwriting my overrides made me lose trust in the product because I lost work—however minimal it was. It brought me back to the many occasions that Illustrator or Photoshop crashed while saving, thus corrupting the file. Yes, it wasn’t as bad, but it still felt that way.

Dead End by Design

The question of what to do with the results of a Figma Make chat remains. A Figma Make file is its own filetype. You can’t bring it back into Figma Design, nor even into Figma Sites, to make tweaks. You can publish it, and it’s hosted on Figma’s infrastructure, just like Sites. You can download the code, but it’s kind of useless.

Code Export Is Cut Off at the Knees

You can download the React code as a zip file. But the download does not contain the package.json that would make the project installable on your local machine or on a Node.js server. The package file tells the npm installer which dependencies need to be installed for the project to run.
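For anyone unfamiliar with the format, here’s roughly what a minimal package.json for a React project looks like. The package names and versions below are illustrative examples, not what Figma Make actually uses:

```json
{
  "name": "figma-make-export",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "dev": "vite",
    "build": "vite build"
  },
  "dependencies": {
    "react": "^18.3.0",
    "react-dom": "^18.3.0"
  },
  "devDependencies": {
    "vite": "^5.0.0",
    "@vitejs/plugin-react": "^4.0.0"
  }
}
```

With a file like this in place, `npm install` can resolve the dependency tree and `npm run dev` can serve the app; without it, the export is just a pile of loose source files.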

I tried using Cursor to figure out where to move the files around—they have to be in a src directory—and to help me write a package.json, but it would have taken too much time to reverse engineer it.

Nowhere to Go

Maybe using Figma Make inside Figma Sites will be a better use case. It’s not yet enabled for me, but that feature is the so-called Code Layers mentioned in the Make and Sites deep-dive presentation at Config. In practice, it sounds very much like Code Components in Framer.

The Bottom Line

Figma had to debut Make in order to stay competitive. There’s just too much out there nipping at their heels. While a design tool like Figma is necessary to unlock the freeform exploration designers need, it is also the natural next step to be able to make it real from within the tool. The likes of Lovable, v0, and Subframe allow you to start with a design from Figma and turn that design into working code. The thesis for many of those tools is that they’re taking care of the post design-to-developer handoff: get a design, give the AI some context, and we’ll make it real. Figma has occupied the pre-designer-to-developer handoff for a while and they’re finally taking the next step.

However, in its current state, Figma Make is a dead end (see previous section). But it is beta software which should get better before official release. As a preview I think it’s cool, despite its flaws and bugs. But I wouldn’t use it for any actual work.

During this beta period, Figma needs to…

  • Add complete code export so the resulting code is portable, rather than keeping it within its closed system
  • Fix the fiendish bugs around the AI overwriting manual overrides
  • Figure out tighter integration between Make and the other products, especially Design

I was recently featured on the Design of AI podcast to discuss my article that pit eight AI prompt-to-code tools head to head. We talked through the list but I also offered a point of view on where I see the gap.

Arpy Dragffy and Brittany Hobbs close out the episode this way (emphasis mine):

So it’s great that Roger did that analysis and that evaluation. I honestly am a bit shocked by those results. Again, his ranking was that Subframe was number one, Onlook was two, v0 number three, Tempo number four. But again, if you look at his matrix, only two of the tools scored over 70 out of 100 and only one of the tools he could recommend. And this really shines a dark light on AI products and their maturity right now. But I suspect that this comes down to the strategy that was used by some of these products. If you go to them, almost every single one of them is actually a coding tool, except the two that scored the highest.

Onlook, its headline is “The Cursor for Designers.” So of course it’s a no brainer that makes a lot of sense. That’s part of their use cases, but nonetheless it didn’t score that good in his matrix.

The top scoring one from his list Subframe is directly positioned to designers. The title is “Design meet code.” It looks like a UI editor. It looks like the sort of tool that designers wish they had. These tools are making it easier for product managers to run research programs, to turn early prototypes and ideas into code to take code and really quick design changes. When you need to make a change to a website, you can go straight into one of these tools and stand up the code.

Listen on Apple Podcasts and Spotify.


Rating AI Design to Code Products + Hacks for ChatGPT & Claude [Roger Wong]

Designers are overwhelmed with too many AI products that promise to help them simplify workflows and solve the last mile of design-to-code. With the...

designof.ai

I tried early versions of Stable Diffusion but ended up using Midjourney exclusively because of the quality. I’m excited to check out the full list. (Oh, and of course I’ve used DALL-E as well via ChatGPT. But there’s not a lot of control there.)


Stable Diffusion & Its Alternatives: Top 5 AI Image Generators

AI-generated imagery has become an essential part of the modern product designer’s toolkit — powering everything from early-stage ideation…

uxplanet.org
A futuristic scene with a glowing, tech-inspired background showing a UI design tool interface for AI, displaying a flight booking project with options for editing and previewing details. The screen promotes the tool with a “Start for free” button.

Beyond the Prompt: Finding the AI Design Tool That Actually Works for Designers

There has been an explosion of AI-powered prompt-to-code tools within the last year. The space began with full-on integrated development environments (IDEs) like Cursor and Windsurf, which enabled developers to leverage AI assistants right inside their coding apps. Then came tools like v0, Lovable, and Replit, where users could prompt screens into existence at first, and before long, entire applications.

A couple weeks ago, I decided to test out as many of these tools as I could. My aim was to find the app that would combine AI assistance, design capabilities, and the ability to use an organization’s coded design system.

While my previous essay was about the future of product design, this article will dive deep into a head-to-head between all eight apps that I tried. I recorded the screen as I did my testing, so I’ve put together a video as well, in case you didn’t want to read this.


It is a long video, but there’s a lot to go through. It’s also my first video on YouTube, so this is an experiment.

The Bottom Line: What the Testing Revealed

I won’t bury the lede here. AI tools can be frustrating because they are probabilistic. One hour they can solve an issue quickly and efficiently; the next they can spin on a problem and make you want to pull your hair out. Part of this is the LLM—and they all use some combo of the major LLMs. The other part is the tool itself, for not gracefully handling what happens when its LLM fails.

For example, this morning I re-evaluated Lovable and Bolt because they’ve released new features within the last week, and I thought it would only be fair to assess the latest version. But both performed worse than in my initial testing two weeks ago. In fact, I tried Bolt twice this morning with the same prompt because the first attempt netted a blank preview. Unfortunately, the second attempt also resulted in a blank screen and then I ran out of credits. 🤷‍♂️

Scorecard for Subframe, with a total of 79 points across different categories: User experience (22), Visual design (13), Prototype (6), Ease of use (13), Design control (15), Design system integration (5), Speed (5), Editor’s discretion (0).

For designers who want actual design tools to work on UI, Subframe is the clear winner. The other tools go directly from prompt to code, skipping giving designers any control via a visual editor. We’re not developers, so manipulating the design in code is not for us. We need to be able to directly manipulate the components by clicking and modifying shapes on the canvas or changing values in an inspector.

For me, the runner-up is v0, if you want to use it only for prototyping and for getting ideas. It’s quick—the UI is mostly unstyled, so it doesn’t get in the way of communicating the UX.

The Players: Code-Only vs. Design-Forward Tools

There are two main categories of contenders: code-only tools, and code plus design tools.

Code-Only

  • Bolt
  • Lovable
  • Polymet
  • Replit
  • v0

Code + Design

  • Onlook
  • Subframe
  • Tempo

My Testing Approach: Same Prompt, Different Results

As mentioned at the top, I tested these tools between April 16–27, 2025. As with most SaaS products, I’m sure things change daily, so this report captures a moment in time.

For my evaluation, since all these tools allow for generating a design from a prompt, that’s where I started. Here’s my prompt:

Create a complete shopping cart checkout experience for an online clothing retailer

I would expect the following pages to be generated:

  • Shopping cart
  • Checkout page (or pages) to capture payment and shipping information
  • Confirmation

I scored each app based on the following rubric:

  • Sample generation quality
    ◦ User experience (25)
    ◦ Visual design (15)
    ◦ Prototype (10)
  • Ease of use (15)
  • Design control (15)
  • Design system integration (10)
  • Speed (10)
  • Editor’s discretion (±10)
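For clarity, a tool’s total is simply the sum of its category scores, with each category capped at its rubric maximum (and editor’s discretion allowed to swing negative). Here’s a minimal sketch of that rollup using Subframe’s scorecard from above:

```python
# Rubric maximums for each category (editor's discretion can swing ±10).
RUBRIC_MAX = {
    "User experience": 25,
    "Visual design": 15,
    "Prototype": 10,
    "Ease of use": 15,
    "Design control": 15,
    "Design system integration": 10,
    "Speed": 10,
    "Editor's discretion": 10,
}

def total_score(scores: dict) -> int:
    """Sum category scores, checking each stays within the rubric's bounds."""
    for category, points in scores.items():
        cap = RUBRIC_MAX[category]
        low = -cap if category == "Editor's discretion" else 0
        assert low <= points <= cap, f"{category} out of range"
    return sum(scores.values())

# Subframe's scorecard, as reported in this article.
subframe = {
    "User experience": 22,
    "Visual design": 13,
    "Prototype": 6,
    "Ease of use": 13,
    "Design control": 15,
    "Design system integration": 5,
    "Speed": 5,
    "Editor's discretion": 0,
}

print(total_score(subframe))  # → 79
```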

The Scoreboard: How Each Tool Stacked Up

AI design tools for designers, with scores: Subframe 79, Onlook 71, v0 61, Tempo 59, Polymet 58, Lovable 49, Bolt 43, Replit 31. Evaluations conducted between 4/16–4/27/25.

Final summary scores for AI design tools for designers. Evaluations conducted between 4/16–4/27/25.

Here are the summary scores for all eight tools. For the detailed breakdown of scores, view the scorecards here in this Google Sheet.

The Blow-by-Blow: The Good, the Bad, and the Ugly

Bolt

Bolt screenshot: A checkout interface with a shopping cart summary, items listed, and a “Proceed to Checkout” button, displaying prices and order summary.

First up, Bolt. Classic prompt-to-code pattern here—text box, type your prompt, watch it work. 

Bolt shows you the code generation in real-time, which is fascinating if you’re a developer but mostly noise if you’re not. The resulting design was decent but plain, with typical UX patterns. It missed delivering the confirmation page I would expect. And when I tried to re-evaluate it this morning with their new features? Complete failure—blank preview screens until I ran out of credits. No rhyme or reason. And there it is—a perfect example of the maddening inconsistency these tools deliver. Working beautifully in one session, completely broken in another. Same inputs, wildly different outputs.

Score: 43

Lovable

Lovable screenshot: A shipping information form on a checkout page, including fields for personal details and a “Continue to Payment” button.

Moving on to Lovable, which I captured this morning right after they launched their 2.0 version. The experience was a mixed bag. While it generated clean (if plain) UI with some nice touches like toast notifications and a sidebar shopping cart, it got stuck at a critical juncture—the actual checkout. I had to coax it along, asking specifically for the shopping cart that was missing from the initial generation.

The tool encountered an error but at least provided a handy “Try to fix” button. Unlike Bolt, Lovable tries to hide the code, focusing instead on the browser preview—which as a designer, I appreciate. When it finally worked, I got a very vanilla but clean checkout flow and even the confirmation page I was looking for. Not groundbreaking, but functional. The approach of hiding code complexity might appeal to designers who don’t want to wade through development details.

Score: 49

Polymet

Polymet screenshot: A checkout page design for a fashion store showing payment method options (Credit Card, PayPal, Apple Pay), credit card fields, order summary with subtotal, shipping, tax, and total.

Next up is Polymet. This one has a very interesting interface and I kind of like it. You have your chat on the left and a canvas on the right. But instead of just showing the screen it’s working on, it’s actually creating individual components that later get combined into pages. It’s almost like building Figma components and then combining them at the end, except these are all coded components.

The design is pretty good—plain but very clean. I feel like it’s got a little more character than some of the others. What’s nice is you can go into focus mode and actually play with the prototype. I was able to navigate from the shopping cart through checkout (including Apple Pay) to confirmation. To export the code, you need to be on a paid plan, but the free trial gives you at least a taste of what it can do.

Score: 58

Replit

Replit screenshot: A developer interface showing progress on an online clothing store checkout project with error messages regarding the use of the useCart hook.

Replit was a test of patience—no exaggeration, it was the slowest tool of the bunch at 20 minutes to generate anything substantial. Why so slow? It kept encountering errors and falling into those weird loops that LLMs often do when they get stuck. At one point, I had to explicitly ask it to “make it work” just to progress beyond showing product pages, which wasn’t even what I’d asked for in the first place.

When it finally did generate a checkout experience, the design was nothing to write home about. Lines in the stepper weren’t aligning properly, there were random broken elements, and ultimately—it just didn’t work. I couldn’t even complete the checkout flow, which was the whole point of the exercise. I stopped recording at that point because, frankly, I just didn’t want to keep fighting with a tool that’s both slow and ineffective. 

Score: 31

v0

v0 screenshot: An online shopping cart with a multi-step checkout process, including a shipping form and order summary with prices and a “Continue to Payment” button.

Taking v0 for a spin next, which comes from Vercel. I think it was one of the earlier prompt-to-code generators I heard about—originally just for components, not full pages (though I could be wrong). The interface is similar to Bolt with a chat panel on the left and code on the right. As it works, it shows you the generated code in real-time, which I appreciate. It’s pretty mature and works really well.

The result almost looks like a wireframe, but the visual design has a bit more personality than Bolt’s version, even though it’s using the unstyled shadcn components. It includes form validation (which I checked), and handles the payment flow smoothly before showing a decent confirmation page. Speed-wise, v0 is impressively quick compared to some others I tested—definitely a plus when you’re iterating on designs and trying to quickly get ideas.

Score: 61

Onlook

Onlook screenshot: A design tool interface showing a cart with empty items and a “Continue Shopping” button on a fashion store checkout page.

Onlook stands out as a self-contained desktop app rather than a web tool like the others. The experience starts the same way—prompt in, wait, then boom—but instead of showing you immediate results, it drops you into a canvas view with multiple windows displaying localhost:3000, which is your computer running a web server locally. The design it generated was fairly typical and straightforward, properly capturing the shopping cart, shipping, payment, and confirmation screens I would expect. You can zoom out to see a canvas-style overview and manipulate layers, with a styles tab that lets you inspect and edit elements.

The dealbreaker? Everything gets generated as a single page application, making it frustratingly difficult to locate and edit specific states like shipping or payment. I couldn’t find these states visually or directly in the pages panel—they might’ve been buried somewhere in the layers, but I couldn’t make heads or tails of it. When I tried using it again today to capture the styles functionality for the video, I hit the same wall that plagued several other tools I tested—blank previews and errors. Despite going back and forth with the AI, I couldn’t get it running again.

Score: 71

Subframe

Subframe screenshot: A design tool interface with a checkout page showing a cart with items, a shipping summary, and the option to continue to payment.

My time with Subframe revealed a tool that takes a different approach to the same checkout prompt. Unlike most competitors, Subframe can’t create an entire flow at once (though I hear they’re working on multi-page capabilities). But honestly, I kind of like this limitation—it forces you as a designer to actually think through the process.

What sets Subframe apart is its Midjourney-like approach, offering four different design options that gradually come into focus. These aren’t just static mockups but fully coded, interactive pages you can preview in miniature. After selecting a shopping cart design, I simply asked it to create the next page, and it intelligently moved to shipping/billing info.

The real magic is having actual design tools—layers panel, property inspector, direct manipulation—alongside the ability to see the working React code. For designers who want control beyond just accepting whatever the AI spits out, Subframe delivers the best combination of AI generation and familiar design tooling.

Score: 79

Tempo

Tempo screenshot: A developer tool interface generating a clothing store checkout flow, showing wireframe components and code previews.

Lastly, Tempo. This one takes a different approach than most other tools. It starts by generating a PRD from your prompt, then creates a user flow diagram before coding the actual screens—mimicking the steps real product teams would take. Within minutes, it had generated all the different pages for my shopping cart checkout experience. That’s impressive speed, but from a design standpoint, it’s just fine. The visual design ends up being fairly plain, and the prototype had some UX issues—the payment card change was hard to notice, and the “Place order” action didn’t properly lead to a confirmation screen even though it existed in the flow.

The biggest disappointment was with Tempo’s supposed differentiator. Their DOM inspector theoretically allows you to manipulate components directly on canvas like you would in Figma—exactly what designers need. But I couldn’t get it to work no matter how hard I tried. I even came back days later to try again with a different project and reached out to their support team, but after a brief exchange—crickets. Without this feature functioning, Tempo becomes just another prompt-to-code tool rather than something truly designed for visual designers who want to manipulate components directly. Not great.

Score: 59

The Verdict: Control Beats Code Every Time

Subframe screenshot: A design tool interface displaying a checkout page for a fashion store with a cart summary and a “Proceed to Checkout” button.

Subframe offers actual design tools—layers panel, property inspector, direct manipulation—along with AI chat.

I’ve spent the last couple weeks testing these prompt-to-code tools, and if there’s one thing that’s crystal clear, it’s this: for designers who want actual design control rather than just code manipulation, Subframe is the standout winner.

I will caveat that I didn’t do a deep dive into every single tool. I played with them at a cursory level, giving each a fair shot with the same prompt. What I found was a mix of promising starts and frustrating dead ends.

The reality of AI tools is their probabilistic nature. Sometimes they’ll solve problems easily, and then at other times they’ll spectacularly fail. I experienced this firsthand when retesting both Lovable and Bolt with their latest features—both performed worse than in my initial testing just two weeks ago. Blank screens. Error messages. No rhyme or reason.

For designers like me, the dealbreaker with most of these tools is being forced to manipulate designs through code rather than through familiar design interfaces. We need to be able to directly manipulate components by clicking and modifying shapes on the canvas or changing values in an inspector. That’s where Subframe delivers while others fall short—if their audience includes designers, which might not be the case.

For us designers, I believe Subframe could be the answer. But I’m also waiting to see whether Figma will have an answer of its own. Will the company get into the AI > design > code game? Or will it be left behind?

The future belongs to applications that balance AI assistance with familiar design tooling—not just code generators with pretty previews.

Karri Saarinen, writing for the Linear blog:

Unbounded AI, much like a river without banks, becomes powerful but directionless. Designers need to build the banks and bring shape to the direction of AI’s potential. But we face a fundamental tension in that AI sort of breaks our usual way of designing things, working back from function, and shaping the form.

I love the metaphor of AI as a river and of us designers as the banks. It feels very much in line with my notion that we need to become even better curators.

Saarinen continues, critiquing the generic chatbox being the primary form of interacting with AI:

One way I visualize this relationship between the form of traditional UI and the function of AI is through the metaphor of a ‘workbench’. Just as a carpenter’s workbench is familiar and purpose-built, providing an organized environment for tools and materials, a well-designed interface can create productive context for AI interactions. Rather than being a singular tool, the workbench serves as an environment that enhances the utility of other tools – including the ‘magic’ AI tools.

Software like Linear serves as this workbench. It provides structure, context, and a specialized environment for specific workflows. AI doesn’t replace the workbench, it’s a powerful new tool to place on top of it.

It’s interesting. I don’t know what Linear is telegraphing here, but if I had to guess, I’d say it’s closer to being field-specific or workflow-specific, similar to Generative Fill in Photoshop. It’s a text field, not a textarea, limited to a single workflow.


Design for the AI age

For decades, interfaces have guided users along predefined roads. Think files and folders, buttons and menus, screens and flows. These familiar structures organize information and provide the comfort of knowing where you are and what's possible.

linear.app