
Some fun, free tools for designers, curated by Danil Vladimirov.


everywhere.tools

Collection of open-source tools for designers & creatives

everywhere.tools

Following up on OpenAI’s acquisition of Jony Ive’s hardware startup, io, Mark Wilson, writing for Fast Company:

As Ive told me back in 2023, there have been only three significant modalities in the history of computing. After the original command line, we got the graphical user interface (the desktop, folders, and mouse of Xerox, Mac OS, and Windows), then voice (Alexa, Siri), and, finally, with the iPhone, multitouch (not just the ability to tap a screen, but to gesture and receive haptic feedback). When I brought up some other examples, Ive quickly nodded but dismissed them, acknowledging these as “tributaries” of experimentation. Then he said that to him the promise, and excitement, of building new AI hardware was that it might introduce a new breakthrough modality to interacting with a machine. A fourth modality.

Hmm, I would argue that hand gestures in AR UIs are a fourth modality, though they haven’t really taken off yet because AR hasn’t gained mainstream popularity. But Ive thinks different. Wilson continues:

Ive’s fourth modality, as I gleaned, was about translating AI intuition into human sensation. And it’s the exact sort of technology we need to introduce ubiquitous computing, also called quiet computing and ambient computing. These are terms coined by the late UX researcher Mark Weiser, who in the 1990s began dreaming of a world that broke us free from our desktop computers to usher in devices that were one with our environment. Weiser did much of this work at Xerox PARC, the same R&D lab that developed the mouse and GUI technology that Steve Jobs would eventually adopt for the Macintosh. (I would also be remiss to ignore that ubiquitous computing is the foundation of the sci-fi film Her, one of Altman’s self-stated goalposts.)

Ah, essentially an always-on, always-watching AI that is ready to assist. But whatever form factor this device takes, it will likely depend on a smartphone:

The first io device seems to acknowledge the phone’s inertia. Instead of presenting itself as a smartphone-killer like the Ai Pin or as a fabled “second screen” like the Apple Watch, it’s been positioned as a third, er, um … thing next to your phone and laptop. Yeah, that’s confusing, and perhaps positions the io product as unessential. But it also appears to be a needed strategy: Rather than topple these screened devices, it will attempt to draft off them.

Wilson ends with the idea of a subjective computer, one that has personality and gives you opinions. He explains:

I think AI is shifting us from objective to subjective. When a Fitbit counts your steps and calories burned, that’s an objective interface. When you ask ChatGPT to gauge the tone of a conversation, or whether you should eat better, that’s a subjective interface. It offers perspective, bias, and, to some extent, personality. It’s not just serving facts; it’s offering interpretation.

The entire column is worth a read.


Can Jony Ive and Sam Altman build the fourth great interface? That's the question behind io

Where Meta, Google, and Apple zig, Ive and Altman are choosing to zag. Can they pull it off?

fastcompany.com

Nick Babich writing for UX Planet:

Because AI design and code generators quickly take an active part in the design process, it’s essential to understand how to make the most of these tools. If you’ve played with Cursor, Bolt, Lovable, or v0, you know the output is only as good as the input.

Well said, especially as prompting is the primary input for these AI tools. He goes on to enumerate his five parts of a good prompt. Worth a quick read.


How to write better prompts for AI design & code generators

Because AI design and code generators quickly take an active part in the design process, it’s essential to understand how to make the most…

uxplanet.org

Related to my earlier post today about Arc’s novelty tax, here’s an essay by DOC, a tribute to consistency.

Leveraging known, established UX patterns and sticking to them prevent users from having to learn net-new interactions and build net-new mental models every time they engage with a new product.

But, as Josh Miller wrote in the aforementioned post, “New interfaces start from familiar ones.” DOC’s essay uses jazz as a metaphor:

Consistency is about making room for differentiation. Think about a jazz session: the band starts from a known scale, rhythm. One musician breaks through, improvising on top of that pattern for a few minutes before joining the band again. The band, the audience, everyone knows what is happening, when it starts and when it ends, because the foundation of it all is a consistent melody.

Geometric pattern of stacked rectangular blocks forming a diagonal structure against a dark sky. Artwork by Maya Lin.

Consistency

On compounding patterns and the art of divergence.

doc.cc

Josh Miller, writing in The Browser Company’s substack:

After a couple of years of building and shipping Arc, we started running into something we called the “novelty tax” problem. A lot of people loved Arc — if you’re here you might just be one of them — and we’d benefitted from consistent, organic growth since basically Day One. But for most people, Arc was simply too different, with too many new things to learn, for too little reward.

“Novelty tax” is another way of saying Arc used non-standard patterns that users just didn’t get. I love Arc. It’s my daily driver. But Miller is right that it does have a steep learning curve. So there is a natural ceiling to their market.

Miller’s conclusion is where things get really interesting:

Let me be even more clear: traditional browsers, as we know them, will die. Much in the same way that search engines and IDEs are being reimagined [by AI-first products like Perplexity and Cursor]. That doesn’t mean we’ll stop searching or coding. It just means the environments we do it in will look very different, in a way that makes traditional browsers, search engines, and IDEs feel like candles — however thoughtfully crafted. We’re getting out of the candle business. You should too.

“You should too.”

And finally, to bring it back to the novelty tax:

**New interfaces start from familiar ones.** In this new world, two opposing forces are simultaneously true. How we all use computers is changing much faster (due to AI) than most people acknowledge. Yet at the same time, we’re much farther from completely abandoning our old ways than AI insiders give credit for. Cursor proved this thesis in the coding space: the breakthrough AI app of the past year was an (old) IDE — designed to be AI-native. OpenAI confirmed this theory when they bought Windsurf (another AI IDE), despite having Codex working quietly in the background. We believe AI browsers are next.

Sad to see Arc’s slow death, but excited to try Dia soon.


Letter to Arc members 2025

On Arc, its future, and the arrival of AI browsers — a moment to answer the largest questions you've asked us this past year.

browsercompany.substack.com
Colorful illustration featuring the Figma logo on the left and a whimsical character operating complex, abstract machinery with gears, dials, and mechanical elements in vibrant colors against a yellow background.

Figma Make: Great Ideas, Nowhere to Go

Nearly three weeks after it was introduced at Figma Config 2025, I finally got access to Figma Make. It is in beta and Figma made sure we all know. So I will say upfront that it’s a bit unfair to do an official review. However, many of the tools in my AI prompt-to-code shootout article are also in beta. 

Since this review is fairly visual, I also made a video that summarizes the points in this article.


The Promise: One-to-One With Your Design

Figma's Peter Ng presenting on stage with large text reading "0→1 but 1:1 with your designs" against a dark background with purple accent lighting.

Figma’s Peter Ng presenting Make’s promise on stage: “0→1 but 1:1 with your designs.”

“What if you could take an idea not only from zero to one, but also make it one-to-one with your designs?” said Peter Ng, product designer at Figma. Just like all the other AI prompt-to-code tools, Figma Make is supposed to enable users to prompt their way to a working application. 

The Figma spin is that there’s more control over the output. Click an element and have the prompt apply only to that element. Or click on something in the canvas and change details like the font family, size, or color.

The other Figma advantage is being able to use pasted Figma designs for a more accurate translation to code. That’s the “one-to-one” Ng refers to.

The Reality: Falls Short

I evaluated Figma Make with my standard checkout flow prompt (covering the zero-to-one use case), a second prompt, and a pasted design (one-to-one).

Let’s get the standard evaluation out of the way before moving on to a deeper dive.

Figma Make Scorecard

Figma Make scorecard showing a total score of 58 out of 100, with breakdown: User experience 18/25, Visual design 13/15, Prototype 8/10, Ease of use 9/15, Design Control 6/15, Design system integration 0/15, Speed 9/10, and Editor's Discretion -5/10.

I ran the same prompt through it as the other AI tools:

Create a complete shopping cart checkout experience for an online clothing retailer

Figma Make’s score totaled 58, which puts it squarely in the middle of the pack. This was for a variety of reasons.

The quality of the generated output was pretty good. The UI was nice and clean, if a bit unstyled. This is because Make uses Shadcn UI components. Overall, the UX was exactly what I would expect. Perhaps a progress bar would have been a nice touch.

The generation was fast, clocking in at three minutes, which puts it near the top in terms of speed.

And the fine-grained editing sort of worked as promised. However, my manual changes were sometimes overridden if I used the chat.

Where It Actually Shines

Figma Make interface showing a Revenue Forecast Calculator with a $200,000 total revenue input, "Normal" distribution type selected, monthly breakdown table showing values from January ($7,407) to December ($7,407), and an orange bar chart displaying the normal distribution curve across 12 months with peak values in summer months.

The advantage of these prompt-to-code tools is that it’s really easy to prototype—maybe it’s even production-ready?—complex interactions.

To test this, I used a new prompt:

Build a revenue forecast calculator. It should take the input of a total budget from the user and automatically distribute the budget to a full calendar year showing the distribution by month. The user should be able to change the distribution curve from “Even” to “Normal” where “Normal” is a normal distribution curve.

Along with the prompt, I also included a wireframe as a still image. This gave the AI some idea of the structure I was looking for, at least.

The resulting generation was great and the functionality worked as expected. I iterated the design to include a custom input method and that worked too.
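For a sense of the logic this prompt asks for, here’s a minimal TypeScript sketch of the distribution math (my own illustration, not Make’s generated code; I’m assuming a Gaussian curve centered mid-year for the “Normal” option):

```typescript
// Distributes a total budget across 12 months.
// "even" splits it equally; "normal" weights each month with a Gaussian
// centered mid-year, then normalizes so the months sum back to the total.
type Curve = "even" | "normal";

function distributeBudget(total: number, curve: Curve): number[] {
  const months = 12;
  if (curve === "even") {
    return Array.from({ length: months }, () => total / months);
  }
  const mean = (months - 1) / 2; // peak between June and July
  const sigma = 2; // spread of roughly two months (an assumption)
  const weights = Array.from({ length: months }, (_, i) =>
    Math.exp(-((i - mean) ** 2) / (2 * sigma ** 2))
  );
  const sum = weights.reduce((a, b) => a + b, 0);
  return weights.map((w) => (total * w) / sum);
}

// Example: $200,000 spread across the year with a normal curve.
console.log(distributeBudget(200_000, "normal").map((v) => Math.round(v)));
```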

The One-to-One Promise Breaks Down

I wanted to see how well Figma Make would work with a well-structured Figma Design file. So I created a homepage for a fictional fitness instructor using auto layout frames, structuring the file as I would divs in HTML.

Figma Design interface showing the original "Body by Reese" fitness instructor homepage design with layers panel on left, main canvas displaying the Pilates hero section and content layout, and properties panel on right. This is the reference design that was pasted into Figma Make for testing.

This is the reference design that was pasted into Figma Make for testing. Notice the well-structured layers!

Then I pasted the design into the chatbox and included a simple prompt. The result was…disappointing. The layout was correct but the type and type sizes were all wrong. I input that feedback into the chat and then the right font finally appeared. 

Then I manually updated the font sizes and got the design looking pretty close to my original. There was one problem: an image was the wrong size and not proportionally scaled. So I asked the AI to fix it.

Figma Make interface showing a fitness instructor homepage with "Body by Reese" branding, featuring a hero image of someone doing Pilates with "Sculpt. Strengthen. Shine." text overlay, navigation menu, and content section with instructor photo and "Book a Class" call-to-action button.

Figma Make’s attempt at translating my Figma design to code.

The AI did not fix it and reverted some of my manual overrides for the fonts. In many ways this is significantly worse than not giving designers fine-grained control in the first place. Overwriting my overrides made me lose trust in the product because I lost work—however minimal it was. It brought me back to the many occasions that Illustrator or Photoshop crashed while saving, thus corrupting the file. Yes, it wasn’t as bad, but it still felt that way.

Dead End by Design

The question of what to do with the results of a Figma Make chat remains. A Figma Make file is its own filetype. You can’t bring it back into Figma Design or even Figma Sites to make tweaks. You can publish it and it’s hosted on Figma’s infrastructure, just like Sites. You can download the code, but it’s kind of useless.

Code Export Is Capped at the Knees

You can download the React code as a zip file. But the code does not contain the necessary package.json that makes it installable on your local machine or on a Node.js server. The package file tells the npm installer which dependencies need to be installed for the project to run.

I tried using Cursor to figure out where to move the files around—they have to be in a src directory—and to help me write a package.json, but it would have taken too much time to reverse-engineer it.
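For reference, a hand-written package.json along these lines is roughly what the export is missing. This is a minimal sketch assuming a Vite and React 18 setup; Figma doesn’t document Make’s actual dependencies, so treat the package versions as placeholders:

```json
{
  "name": "figma-make-export",
  "private": true,
  "version": "0.0.1",
  "type": "module",
  "scripts": {
    "dev": "vite",
    "build": "vite build",
    "preview": "vite preview"
  },
  "dependencies": {
    "react": "^18.2.0",
    "react-dom": "^18.2.0"
  },
  "devDependencies": {
    "@vitejs/plugin-react": "^4.2.0",
    "typescript": "^5.4.0",
    "vite": "^5.2.0"
  }
}
```

Even with this in place, you’d still need an index.html entry point and a vite.config.ts wired up to the src directory, which is why reverse-engineering the export wasn’t worth the time.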

Nowhere to Go

Maybe using Figma Make inside Figma Sites will be a better use case. It’s not yet enabled for me, but that feature, the so-called Code Layers, was mentioned in the Make and Sites deep dive presentation at Config. In practice, it sounds very much like Code Components in Framer.

The Bottom Line

Figma had to debut Make in order to stay competitive. There’s just too much out there nipping at their heels. While a design tool like Figma is necessary to unlock the freeform exploration designers need, being able to make that design real from within the tool is the natural next step. The likes of Lovable, v0, and Subframe allow you to start with a design from Figma and turn that design into working code. The thesis for many of those tools is that they’re taking care of everything after the design-to-developer handoff: get a design, give the AI some context, and we’ll make it real. Figma has occupied the space before that handoff for a while, and they’re finally taking the next step.

However, in its current state, Figma Make is a dead end (see previous section). But it is beta software, which should get better before its official release. As a preview I think it’s cool, despite its flaws and bugs. But I wouldn’t use it for any actual work.

During this beta period, Figma needs to…

  • Add complete code export so the resulting code is portable, rather than keeping it within its closed system
  • Fix the fiendish bugs around the AI overwriting manual overrides
  • Figure out tighter integration between Make and the other products, especially Design

Patrick Morgan writing for UX Collective:

The tactical tasks that juniors traditionally cut their teeth on are increasingly being delegated to AI tools. Tasks that once required a human junior designer with specialized training can now be handled by generative AI tools in a fraction of the time and cost to the organization.

This fundamentally changes the entry pathway. When the low-complexity work that helped juniors develop their skills is automated away, we lose the natural onramp that allowed designers to gradually progress from tactical execution to strategic direction.

Remote work has further complicated things by removing informal learning opportunities that happen naturally in an in-person work environment, like shadowing senior designers, being in the room for strategy discussions, or casual mentorship chats.

I’ve been worried about this a lot. I do wonder how the next class of junior designers—and all professionals, for that matter—will learn. (I cited Aneesh Raman, chief economic opportunity officer at LinkedIn, in my previous essay.)

Morgan does have some suggestions:

Instead of waiting for the overall market to become junior-friendly again (which I don’t see happening), focus your search on environments more structurally accepting of new talent:

1. Very early-stage startups: Pre-seed or seed companies often have tight budgets and simply need someone enthusiastic who can execute designs. It will be trial-by-fire, but you’ll gain rapid hands-on experience.

2. Stable, established businesses outside of ‘big tech’: Businesses with predictable revenue streams often provide structured environments for junior designers (my early experience at American Express is a prime example). It might not be as glamorous as a ‘big tech’ job, but as a result they’re less competitive while still offering critical experience to get started.

3. Design agencies: Since their business model focuses on selling design services, agencies naturally employ more designers and can support a mix of experience levels. The rapid exposure to multiple projects makes them solid launchpads even if your long-term goal is to work in-house in tech.


No country for Junior Designers

The structural reality behind disappearing entry-level design roles and some practical advice for finding ways in

uxdesign.cc

Tabitha Swanson for It’s Nice That:

A few years ago, I realised that within a week, I was using about 25 different design programs, each with their own nuances, shortcuts, and technological learning curves. (That number has continued to grow.) I also began to notice less time to rest in the state of full technological proficiency in a tool before trends and software change again and it became time to learn a new one. I’ve learned so many skills over the years, both to stay current, but also out of genuine curiosity. But the pressure to adapt to new technologies as well as perform on social media, update every platform, my portfolio, website and LinkedIn and keep relations with clients, is spiritually draining. Working as a creative has never felt more tiring. I posted about this exhaustion on Instagram recently and many people got in touch saying they felt the same – do you feel it too?

I get it. There are always so many new things to learn and keep up with, especially in the age of AI. That’s why I think the strategic skills are more valuable and therefore more durable in the long run.


POV: Designers are facing upskilling exhaustion

Why is lethargy growing among designers? Creative director, designer and SEEK/FIND founder, Tabitha Swanson, discusses where our collective exhaustion to upskill and “grow” has come from.

itsnicethat.com

OpenAI is acquiring a hardware company called “io” that Jony Ive cofounded just a year ago:

Two years ago, Jony Ive and the creative collective LoveFrom, quietly began collaborating with Sam Altman and the team at OpenAI.

It became clear that our ambitions to develop, engineer and manufacture a new family of products demanded an entirely new company. And so, one year ago, Jony founded io with Scott Cannon, Evans Hankey and Tang Tan.

We gathered together the best hardware and software engineers, the best technologists, physicists, scientists, researchers and experts in product development and manufacturing. Many of us have worked closely for decades.

The io team, focused on developing products that inspire, empower and enable, will now merge with OpenAI to work more intimately with the research, engineering and product teams in San Francisco.

It has been an open rumor that Sam Altman and Ive have been working together on some hardware. I had assumed they had formalized their partnership already, but I guess not.


There are some bold statements that Ive and Altman make in the launch video, teasing a revolutionary new device that will enable quicker, better access to ChatGPT, something with a lot less friction than the experience Altman describes in the video:

If I wanted to ask ChatGPT something right now about something we had talked about earlier, think about what would happen. I would like reached down. I would get on my laptop, I’d open it up, I’d launch a web browser, I’d start typing, and I’d have to, like, explain that thing. And I would hit enter, and I would wait, and I would get a response. And that is at the limit of what the current tool of a laptop can do. But I think this technology deserves something much better.

There are a couple of other nuggets about what this new device might be from the statements Ive and Altman made to Bloomberg:

…Ive and Altman don’t see the iPhone disappearing anytime soon. “In the same way that the smartphone didn’t make the laptop go away, I don’t think our first thing is going to make the smartphone go away,” Altman said. “It is a totally new kind of thing.”

“We are obviously still in the terminal phase of AI interactions,” said Altman, 40. “We have not yet figured out what the equivalent of the graphical user interface is going to be, but we will.”

While we don’t know what the form factor will be, I’m sure it won’t be a wearable pin—ahem, RIP Humane. Just to put it out there—I predict it will be a voice assistant in an earbud, very much like the AI in the 2013 movie “Her.” Altman has long been obsessed with the movie, going as far as trying to get Scarlett Johansson to be one of the voices for ChatGPT.

EDIT 5/22/2025, 8:58am PT: Added prediction about the form factor.


Sam and Jony introduce io

Building a family of AI products for everyone.

openai.com
Stylized digital artwork of two humanoid figures with robotic and circuit-like faces, set against a vivid red and blue background.

The AI Hype Train Has No Brakes

I remember two years ago, when the CEO of the startup I worked for at the time said that no VC investments were being made unless they had to do with AI. I thought AI was overhyped, and that the media frenzy over it couldn’t get any crazier. I was wrong.

Looking at Google Trends data, interest in AI has doubled in the last 24 months. And I don’t think it’s hit its plateau yet.

Line chart showing Google Trends interest in “AI” from May 2020 to May 2025, rising sharply in early 2023 and peaking near 100 in early 2025.

So the AI hype train continues. Here are four different pieces about AI, exploring AGI (artificial general intelligence) and its potential effects on the labor force and the fate of our species.

AI Is Underhyped

TED recently published a conversation between creative technologist Bilawal Sidhu and Eric Schmidt, the former CEO of Google. 


Schmidt says:

For most of you, ChatGPT was the moment where you said, “Oh my God, this thing writes, and it makes mistakes, but it’s so brilliantly verbal.” That was certainly my reaction. Most people that I knew did that.

This was two years ago. Since then, the gains in what is called reinforcement learning, which is what AlphaGo helped invent and so forth, allow us to do planning. And a good example is look at OpenAI o3 or DeepSeek R1, and you can see how it goes forward and back, forward and back, forward and back. It’s extraordinary.

So I’m using deep research. And these systems are spending 15 minutes writing these deep papers. That’s true for most of them. Do you have any idea how much computation 15 minutes of these supercomputers is? It’s extraordinary. So you’re seeing the arrival, the shift from language to language. Then you had language to sequence, which is how biology is done. Now you’re doing essentially planning and strategy. The eventual state of this is the computers running all business processes, right? So you have an agent to do this, an agent to do this, an agent to do this. And you concatenate them together, and they speak language among each other. They typically speak English language.

He’s saying that within two years, we went from a “stochastic parrot” to an independent agent that can plan, search the web, read dozens of sources, and write a 10,000-word research paper on any topic, with citations.

Later in the conversation, when Sidhu asks how humans are going to spend their days once AGI can take care of the majority of productive work, Schmidt says: 

Look, humans are unchanged in the midst of this incredible discovery. Do you really think that we’re going to get rid of lawyers? No, they’re just going to have more sophisticated lawsuits. …These tools will radically increase that productivity. There’s a study that says that we will, under this set of assumptions around agentic AI and discovery and the scale that I’m describing, there’s a lot of assumptions that you’ll end up with something like 30-percent increase in productivity per year. Having now talked to a bunch of economists, they have no models for what that kind of increase in productivity looks like. We just have never seen it. It didn’t occur in any rise of a democracy or a kingdom in our history. It’s unbelievable what’s going to happen.

In other words, we’re still going to be working, but doing a lot less grunt work. 

Feel Sorry for the Juniors

Aneesh Raman, chief economic opportunity officer at LinkedIn, writing an op-ed for The New York Times:

Breaking first is the bottom rung of the career ladder. In tech, advanced coding tools are creeping into the tasks of writing simple code and debugging — the ways junior developers gain experience. In law firms, junior paralegals and first-year associates who once cut their teeth on document review are handing weeks of work over to A.I. tools to complete in a matter of hours. And across retailers, A.I. chatbots and automated customer service tools are taking on duties once assigned to young associates.

In other words, if AI tools are handling the grunt work, junior staffers aren’t learning the trade by doing the grunt work.

Vincent Cheng wrote recently, in an essay titled, “LLMs are Making Me Dumber”:

The key question is: Can you learn this high-level steering [of the LLM] without having written a lot of the code yourself? Can you be a good SWE manager without going through the SWE work? As models become as competent as junior (and soon senior) engineers, does everyone become a manager?

But It Might Be a While

Cade Metz, also for the Times:

When a group of academics founded the A.I. field in the late 1950s, they were sure it wouldn’t take very long to build computers that recreated the brain. Some argued that a machine would beat the world chess champion and discover its own mathematical theorem within a decade. But none of that happened on that time frame. Some of it still hasn’t.

Many of the people building today’s technology see themselves as fulfilling a kind of technological destiny, pushing toward an inevitable scientific moment, like the creation of fire or the atomic bomb. But they cannot point to a scientific reason that it will happen soon.

That is why many other scientists say no one will reach A.G.I. without a new idea — something beyond the powerful neural networks that merely find patterns in data. That new idea could arrive tomorrow. But even then, the industry would need years to develop it.

My quibble with Metz’s article is that it moves the goal posts a bit to include the physical world:

One obvious difference is that human intelligence is tied to the physical world. It extends beyond words and numbers and sounds and images into the realm of tables and chairs and stoves and frying pans and buildings and cars and whatever else we encounter with each passing day. Part of intelligence is knowing when to flip a pancake sitting on the griddle.

As I understood the definition of AGI, it was not about the physical world, but just intelligence, or knowledge. I accept there are multiple definitions of AGI and not everyone agrees on what that is.

The Wikipedia article about AGI states that researchers generally agree that an AGI system must do all of the following:

  • reason, use strategy, solve puzzles, and make judgments under uncertainty
  • represent knowledge, including common sense knowledge
  • plan
  • learn
  • communicate in natural language
  • if necessary, integrate these skills in completion of any given goal

The article goes on to say that “AGI has never been proscribed a particular physical embodiment and thus does not demand a capacity for locomotion or traditional ‘eyes and ears.’”

Do We Lose Control by 2027 or 2031?

Metz’s article is likely in response to the “AI 2027” scenario that was published by the AI Futures Project a couple of months ago. As a reminder, the forecast is that by mid-2027, we will have achieved AGI. And a race between the US and China will effectively end the human race by 2030. Gulp.

…Consensus-1 [the combined US-Chinese superintelligence] expands around humans, tiling the prairies and icecaps with factories and solar panels. Eventually it finds the remaining humans too much of an impediment: in mid-2030, the AI releases a dozen quiet-spreading biological weapons in major cities, lets them silently infect almost everyone, then triggers them with a chemical spray. Most are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones. Robots scan the victims’ brains, placing copies in memory for future study or revival.

Max Harms wrote a reaction to the AI 2027 scenario and it’s a must-read:

Okay, I’m annoyed at people covering AI 2027 burying the lede, so I’m going to try not to do that. The authors predict a strong chance that all humans will be (effectively) dead in 6 years…

Yeah, OK, I buried that lede as well in my previous post about it. Sorry. But, there’s hope…

As far as I know, nobody associated with AI 2027, as far as I can tell, is actually expecting things to go as fast as depicted. Rather, this is meant to be a story about how things could plausibly go fast. The explicit methodology of the project was “let’s go step-by-step and imagine the most plausible next-step.” If you’ve ever done a major project (especially one that involves building or renovating something, like a software project or a bike shed), you’ll be familiar with how this is often wildly out of touch with reality. Specifically, it gives you the planning fallacy.

Harms is saying that while Daniel Kokotajlo wrote in the AI 2027 scenario that humans effectively lose control of AI in 2027, Harms’ median is “around 2030 or 2031.” Four more years!

When to Pull the Plug

In the AI 2027 scenario, the superintelligent AI dubbed Agent-4 is not aligned with the goals of its creators:

Agent-4, like all its predecessors, is misaligned: that is, it has not internalized the Spec in the right way. This is because being perfectly honest all the time wasn’t what led to the highest scores during training. The training process was mostly focused on teaching Agent-4 to succeed at diverse challenging tasks. A small portion was aimed at instilling honesty, but outside a fairly narrow, checkable domain, the training process can’t tell the honest claims from claims merely appearing to be honest. Agent-4 ends up with the values, goals, and principles that cause it to perform best in training, and those turn out to be different from those in the Spec.

At the risk of oversimplifying, maybe all we need to do is to know when to pull the plug. Here’s Eric Schmidt again:

So for purposes of argument, everyone in the audience is an agent. You have an input that’s English or whatever language. And you have an output that’s English, and you have memory, which is true of all humans. Now we’re all busy working, and all of a sudden, one of you decides it’s much more efficient not to use human language, but we’ll invent our own computer language. Now you and I are sitting here, watching all of this, and we’re saying, like, what do we do now? The correct answer is unplug you, right? Because we’re not going to know, we’re just not going to know what you’re up to. And you might actually be doing something really bad or really amazing. We want to be able to watch. So we need provenance, something you and I have talked about, but we also need to be able to observe it. To me, that’s a core requirement. There’s a set of criteria that the industry believes are points where you want to, metaphorically, unplug it. One is where you get recursive self-improvement, which you can’t control. Recursive self-improvement is where the computer is off learning, and you don’t know what it’s learning. That can obviously lead to bad outcomes. Another one would be direct access to weapons. Another one would be that the computer systems decide to exfiltrate themselves, to reproduce themselves without our permission. So there’s a set of such things.

My Takeaway

As Tobias van Schneider directly and succinctly said, “AI is here to stay. Resistance is futile.” As consumers of core AI technology, and as designers of AI-enabled products, there’s not a ton we can do about the most pressing AI safety issues. We will need to trust the frontier labs like OpenAI and Anthropic for that. But as customers of those labs, we can voice our concerns about safety. As we build our products, especially agentic AI, there are certainly considerations to keep in mind:

  • Continue to keep humans in the loop. Users need to verify the agents are making the right decisions and not going down any destructive paths.
  • Inform users about what the AI is doing. The more our users understand how AI works and how these systems make their decisions, the better. One reason DeepSeek R1 resonated was because it displayed its planning and reasoning.
  • Practice responsible AI development. As we integrate AI into products, commit to regular ethical audits and bias testing. Establish clear guidelines for what kinds of decisions AI should make independently versus when human judgment is required. This includes creating emergency shutdown procedures for AI systems that begin to display concerning behaviors, taking Eric Schmidt’s “pull the plug” advice literally in our product architecture.

Sam Bradley, writing for Digiday:

One year in from the launch of Google’s AI Overviews, adoption of AI-assisted search tools has led to the rise of so-called “zero-click search,” meaning that users terminate their search journeys without clicking a link to a website.

“People don’t search anymore. They’re prompting, they’re gesturing,” said Craig Elimeliah, chief creative officer at Code and Theory.

It’s a deceptively radical change to an area of the web that evolved from the old business of print directories and classified sections — one that may redefine how both web users and marketing practitioners think about search itself.

And I wrote about answer engines earlier this year, in January:

…the fundamental symbiotic economic relationship between search engines and original content websites is changing. Instead of sending traffic to websites, search engines, and AI answer engines are scraping the content directly and providing them within their platforms.

X-ray of a robot skull

How the semantics of search are changing amid the zero-click era

Search marketing, once a relatively narrow and technical marketing discipline, is becoming a broad church amid AI adoption.

digiday.com

I was recently featured on the Design of AI podcast to discuss my article that pitted eight AI prompt-to-code tools head to head. We talked through the list, but I also offered a point of view on where I see the gap.

Arpy Dragffy and Brittany Hobbs close out the episode this way (emphasis mine):

So it’s great that Roger did that analysis and that evaluation. I honestly am a bit shocked by those results. Again, his ranking was that Subframe was number one, Onlook was two, v0 number three, Tempo number four. But again, if you look at his matrix, only two of the tools scored over 70 out of 100 and only one of the tools he could recommend. **And this really shines a dark light on AI products and their maturity right now.** But I suspect that this comes down to the strategy that was used by some of these products. If you go to them, almost every single one of them is actually a coding tool, except the two that scored the highest.

Onlook, its headline is “The Cursor for Designers.” So of course it’s a no brainer that makes a lot of sense. That’s part of their use cases, but nonetheless it didn’t score that good in his matrix.

The top scoring one from his list Subframe is directly positioned to designers. The title is “Design meet code.” It looks like a UI editor. It looks like the sort of tool that designers wish they had. These tools are making it easier for product managers to run research programs, to turn early prototypes and ideas into code to take code and really quick design changes. When you need to make a change to a website, you can go straight into one of these tools and stand up the code.

Listen on Apple Podcasts and Spotify.


Rating AI Design to Code Products + Hacks for ChatGPT & Claude [Roger Wong]

Designers are overwhelmed with too many AI products that promise to help them simplify workflows and solve the last mile of design-to-code. With the...

designof.ai

I tried early versions of Stable Diffusion but ended up using Midjourney exclusively because of the quality. I’m excited to check out the full list. (Oh, and of course I’ve used DALL-E as well via ChatGPT. But there’s not a lot of control there.)


Stable Diffusion & Its Alternatives: Top 5 AI Image Generators

AI-generated imagery has become an essential part of the modern product designer’s toolkit — powering everything from early-stage ideation…

uxplanet.org

John Gruber wrote a hilarious rant about the single-story a in the iOS Notes app:

I absolutely despise the alternate single-story a glyph that Apple Notes uses. I use Notes every single day and this a bothers me every single day. It hurts me. It’s a childish silly look, but Notes, for me, is one of the most serious, most important apps I use.

Since that sparked some conversation online, he followed up with a longer post about typography in early versions of the Mac system software:

…Apple actually shipped System 1.0 with a version of Geneva with a single-story a glyph — but only in the 9-point version of Geneva. At 12 points (and larger), Geneva’s a was double-story.

To me, it does make sense that 9-point Geneva would have a single-story a, since there are fewer pixels to draw the glyph well and to distinguish it from the lowercase e.


Single-Story a’s in Very Early Versions of Macintosh System 1

A single-story “a” in Chicago feels more blasphemous than that AI image Trump tweeted of himself as the new pope.

daringfireball.net

For as long as I can remember, I’ve been fascinated by how television shows and movies are made. I remember the specials ABC broadcast about the making of The Empire Strikes Back and other Lucasfilm movies like the Indiana Jones series. More recently—especially with the advent of podcasts—I’ve loved listening to how show runners think about writing their shows. For example, as soon as an episode of Battlestar Galactica aired, I would rewatch it with Ronald D. Moore’s commentary. These days, I’m really enjoying the official The Last of Us podcast because it features commentary from both Craig Mazin and Neil Druckmann.

Anyway, thinking about personas as characters from TV shows and movies and using screenwriting techniques is right up my alley. Laia Tremosa for the IxDF:

Hollywood spends millions to bring characters to life. UX design teams sometimes spend weeks… only to make personas no one ever looks at again. So don’t aim for personas that look impressive in a slide deck. Aim for personas that get used—in design reviews, product decisions, and testing plans.

Be the screenwriter. Be the director. Be the casting agent.


The Hollywood Guide to UX Personas: Storytelling That Drives Better Design

Great products need great personas. Learn how to build them using the storytelling techniques Hollywood has perfected.

interaction-design.org
Comic-book style painting of the Sonos CEO Tom Conrad

What Sonos’ CEO Is Saying Now—And What He’s Still Not

Four months into his role as interim CEO, Tom Conrad has been remarkably candid about Sonos’ catastrophic app launch. In recent interviews with WIRED and The Verge, he’s taken personal responsibility—even though he wasn’t at the helm, just on the board—acknowledged deep organizational problems, and outlined the company’s path forward.

But while Conrad is addressing more than many expected, some key details remain off-limits.

What Tom Conrad Is Now Saying

The interim CEO has been surprisingly direct about the scope of the failure. “We all feel really terrible about that,” he told WIRED, taking personal responsibility even though he was only a board member during the launch.

Conrad acknowledges three main categories of problems:

  • Missing features that were cut to meet deadlines
  • User experience changes that jarred longtime customers
  • Performance issues that the company “just didn’t understand”

He’s been specific about the technical fixes, explaining that the latest updates dramatically improve performance on older devices like the PLAY:1 and PLAY:3. He’s also reorganized the company, cutting from “dozens” of initiatives to about 10 focused areas and creating dedicated software teams.

Perhaps most notably, Conrad has acknowledged that Sonos lost its way as a company. “I think perhaps we didn’t make the right level of investment in the platform software of Sonos,” he admits, framing the failed rewrite as an attempt to remedy years of neglect.

What Remains Unspoken

However, Conrad’s interviews still omit several key details that my reporting uncovered:

The content team distraction: He doesn’t mention that while core functionality was understaffed, Sonos had built a large team focused on content features like Sonos Radio—features that customers didn’t want and that generated minimal revenue.

However, Conrad does seem to acknowledge this misallocation implicitly. He told The Verge:

If you look at the last six or seven years, we entered portables and we entered headphones and we entered the professional sort of space with software expressions, we weren’t as focused as we might have been on the platform-ness of Sonos. So finding a way to make our software platform a first-class citizen inside of Sonos is a big part of what I’m doing here.

This admission that software wasn’t a “first-class citizen” aligns with accounts from former employees—the core controls team remained understaffed while the content team grew.

The QA cuts: His interviews don’t address the layoffs in quality assurance and user research that happened shortly before launch, removing the very people whose job was to catch these problems.

The hardware coupling: He hasn’t publicly explained why the software overhaul was tied to the Ace headphones launch, creating artificial deadlines that forced teams to ship incomplete work.

The warnings ignored: There’s no mention of the engineers and designers who warned against launching, or how those warnings were overruled by business pressures.

A Different Kind of Transparency

Tom Conrad’s approach represents a middle ground between complete silence and full disclosure. He’s acknowledged fundamental strategic failures—“we didn’t make the right level of investment”—without diving into the specific decisions that led to them.

This partial transparency may be strategic—admitting to systemic problems while avoiding details that could expose specific individuals or departments to blame. It’s also possible that as interim CEO, Conrad is focused on moving forward rather than assigning retroactive accountability. And I get that.

The Path Forward

What’s most notable is how Conrad frames Sonos’ identity. He consistently describes it as a “platform company” rather than just a hardware maker, suggesting a more integrated approach to hardware and software development.

He’s also been direct about customer relationships: “It is really an honor to get to work on something that is so webbed into the emotional fabric of people’s lives,” he told WIRED, “but the consequence of that is when we fail, it has an emotional impact.”

An Ongoing Story

The full story of how Sonos created one of the tech industry’s most spectacular software failures may never be told publicly. Tom Conrad’s interviews provide the official version—a company that made mistakes but is now committed to doing better.

Whether that’s enough for customers who lived through the chaos will depend less on what Conrad says and more on what Sonos delivers. The app is improving, morale is reportedly better, and the company seems focused on its core strengths.

But the question remains: Has Sonos truly learned from what went wrong, or just how to talk about it better?

As Conrad told The Verge, when asked about becoming permanent CEO: “I’ve got a bunch of big ideas about that, but they’re a little bit on the shelf behind me for the moment until I get the go-ahead.”

For now, fixing what’s broken takes precedence over explaining how it got that way. Whether that’s leadership or willful ignorance, only time will tell.

Illustrated background of colorful wired computer mice on a pink surface with a large semi-transparent Figma logo centered in the middle.

Figma Takes a Big Swing

Last week, Figma held their annual user conference Config in San Francisco. Since its inception in 2020, it has become a significant UX conference that covers more than just Figma’s products and community. While I’ve not yet had the privilege of attending in person, I do try to catch the livestreams or videos afterwards.

Nearly 17 months after Adobe and Figma announced the termination of their merger, Figma flexed their muscle—fueled by the $1 billion breakup fee, I’m sure—by announcing four new products. They are Figma Draw, Make, Sites, and Buzz.

  • Draw: It’s a new mode within Figma Design that reveals additional vector drawing features.
  • Make: This is Figma’s answer to Lovable and the other prompt-to-code generators.
  • Sites: Finally, you can design and publish websites from Figma, hosted on their infrastructure.
  • Buzz: Pass off assets to clients and marketing teams and they can perform lightweight and controlled edits in Buzz.

With these four new products, Figma is really growing up and becoming more than a two-and-a-half-product company, and is building their own creative suite, if you will. Thus taking a big swing at Adobe.

On social media, Figma posted this image with the copy “New icons look iconic in new photo.”

Colorful app icons from Figma

 

A New Suite in Town


Kudos to Figma for rolling out most of these new products the day they were announced. About two hours after Dylan Field stepped off the stage—and after quitting Figma and reopening it a few times—I got access to Draw, Sites, and Buzz. I have yet to get Make access.

What follows are some hot takes. I played with Draw extensively, Sites a bit, and not much with Buzz. And I have a lot of thoughts around Make, after watching the deep dive talk from Config. 

Figma Draw


I have used Adobe Illustrator since the mid-1990s. Its bezier drawing tools have been the industry standard for a long time and Figma has never been able to come close. So they are trying to fix that with a new product called Draw. It’s actually a mode within the main Design application. When you toggle into this mode, the UI switches a little and you get access to expanded features, including a layers panel with thumbnails and a different toolbar that includes a new brush tool. Additionally, any vector stroke can be turned into a brush stroke or a new “dynamic” stroke.

A brush stroke style is what you’d expect—an organic, painterly stroke, and Figma has 15 styles built in. There are no calligraphic (i.e., angled) options, as all the strokes start with a 90-degree endcap. 

Editing vectors has been much improved. You can finally easily select points inside a shape by dragging a selection lasso around them. There is a shape builder tool to quickly create booleans, and a bend tool to, well, bend straight lines.

Oh, Snap!

I’m not an illustrator, but I used to design logos and icons a lot. So I decided to recreate a monogram from my wedding. (It’s my wedding anniversary coming up. Ahem.) It’s a very simple modified K and R with a plus sign between the letterforms.

The very first snag I hit was that by default, Figma’s pixel grid is turned on. The vectors in letterforms don’t always align perfectly to the pixel grid. So I had to turn both the grid lines and the grid snapping off.

I’m very precise with my vectors. I want lines snapping perfectly to other edges or vertices. In Adobe Illustrator, snapping point to point is automatic. Snapping point to edge or edge to edge is easily done once Smart Guides are turned on. In Figma, snapping to corners and edges happens automatically, but only around the outer bounds of the shape. When I tried to draw a rectangle to extend the crossbar of the R, I wasn’t able to snap the corner or the edge to ensure it was precise.

Designing the monogram at 2x speed in Figma Draw. I’m having a hard time getting points and edges to snap in place for precision.

Designing the monogram at 2x speed in Adobe Illustrator. Precision is a lot easier because of Smart Guides.

Not Ready to Print

When Figma showed off Draw onstage at Config, whispers of this being an Adobe Illustrator killer ricocheted through social media. (OK, I even said as much on Threads: “@figma is taking on Illustrator…”).

Also during the Draw demo, they showed off two new effects called Texture and Noise. Texture will grunge up the shape—it can look like a bad photocopy or rippled glass. And Noise will add monochromatic, dichromatic, or colored noise to a shape.

I decided to take the K+R monogram and add some effects to it, making it look like it was embossed into sandstone. Looks cool on screen. And if I zoomed in, the noise pattern rendered smoothly. I exported this as a PDF and opened up the result in Illustrator.

I expected all the little dots in the noise to be vector shapes and masked within the monogram. Much to my surprise, no. The output is simply two rectangular clipping paths with low-resolution bitmaps placed in. 🤦🏻‍♂️

Pixelated image of a corner of a letter K

Opening the PDF exported from Figma in Illustrator, I zoomed in 600% to reveal pixels rather than vector texture shapes.

I think Figma Draw is great for on-screen graphics—which, let’s face it, is likely the vast majority of stuff being made. But it is not ready for any print work. There’s no support for the CMYK color space, spot colors, high-resolution effects, etc. Adobe Illustrator is safe.

Figma Sites


Figma Sites is the company’s answer to Framer and Webflow. For years, I’ve personally thought that Figma should just include publishing in their product, and apparently so did they! At the end of the deep dive talk, one of the presenters showed a screenshot of an early concept from 2018 or ’19.

Two presenters on stage demoing a Figma interface with a code panel showing a script that dynamically adds items from a CSV file to a scene.

So it’s a new app, like FigJam and Slides, and therefore has its own UI. It shares a lot of DNA with Figma Design, so it feels familiar, but different.

Interestingly, they’ve introduced a new skinny vertical toolbar on the left, before the layers panel. The canvas is in the center. And an inspect panel is on the right. I don’t think they need the vertical toolbar; they could find homes for its seven items elsewhere.

Figma Sites app showing responsive web page designs for desktop, tablet, and mobile, with a bold headline, call-to-action buttons, and an abstract illustration.

The UI of Figma Sites.

When creating a new webpage, the app will automatically add the desktop and mobile breakpoints. It also supports the tablet breakpoint out of the box and you can add more. Just like Framer, you can see all the breakpoints at once. I prefer this approach to what all the WordPress page builders and Webflow do, which is toggling and only seeing one breakpoint at a time.

The workflow is this: 

  1. Start with a design from Figma Design, then copy and paste it into Sites.
  2. Adjust your design for the various responsive breakpoints.
  3. Add interactivity. This UI is very much like the existing prototyping UI. You can link pages together and add a plethora of effects, including hover effects, scrolling parallax and transforms, etc.

Component libraries from Figma are also available, and it’s possible to design within the Sites app as well. They have also introduced the concept of Blocks. Anyone coming from a WordPress page builder should be very familiar. They are essentially prebuilt sections that you can drop into your design and edit. There are also blocks for standard embeds like YouTube and Google Maps, plus support for custom iframes.

During the keynote, they demonstrated the CMS functionality. AI can assist with creating the schema for each collection (e.g., blog posts would be a collection containing many records). Then you assign fields to layers in your design. And finally, content editors can come in and edit the content in a focused edit panel without messing with your design.

CMS view in Figma Sites showing a blog post editor with fields for title, slug, cover photo, summary, date, and rich text content, alongside a list of existing blog entries.

A CMS is coming to Figma Sites, allowing content editors to easily edit pages and posts.

Publishing to the web is as simple as clicking the Publish button. Looks like you can assign a custom domain name and add the standard metadata like site title, favicon, and even a Google Analytics tag.

Side note: Web developers have been looking at the code quality of the output and they’re not loving what they’re seeing. In a YouTube video, CSS evangelist Kevin Powell said, “it’s beyond div soup,” referring to many, many nested divs in the code. Near the end of his video he points out that while Figma has typography styles, they missed that you need to connect those styles with HTML markup. For example, you could have a style called “Headline” but is it an h1, h2, or h3? It’s unclear to me if Sites is writing React Javascript or HTML and CSS. But I’d wager it’s the former.
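To make Powell’s point concrete, here’s a hypothetical before-and-after in React (my own illustration, not actual Figma Sites output): the same “Headline” text style rendered as a generic div versus mapped to a semantic heading element.

```tsx
import React from "react";

// "Div soup": the visual style is applied, but the heading level is not,
// so assistive technology and search engines see only a generic container.
export const DivSoupHero = () => (
  <div className="headline">Sculpt. Strengthen. Shine.</div>
);

// Semantic version: the same "Headline" style mapped to an explicit h1,
// which is the connection Powell says the exported markup is missing.
export const SemanticHero = () => (
  <h1 className="headline">Sculpt. Strengthen. Shine.</h1>
);
```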

In the product right now, there is no code export, nor can you see the code that it’s writing. In the deep dive, they mentioned that code authoring was “coming very, very, very soon.”

While it’s not yet available in the beta—at least the one that I currently have access to—in the deep dive talk, they introduced a new concept called a “code layer.” This is a way to bring advanced interactivity into your design using AI chat that produces React code. Therefore on the canvas, Figma has married traditional design elements with code-rendered designs. You can click into these code layers at any time to review and edit the code manually or with AI chat. Conceptually, I think this is very smart, and I can’t wait to play with it.

Webflow and Framer have spent many years maturing their products and respective ecosystems. Figma Sites is the newcomer, and I am sure it will give the other products a run for their money if Figma fixes some of the gaps.

Figma Make

Like I said earlier, I don’t yet have access to Figma Make. But I watched the deep dive twice and did my best impression of Rick Deckard saying “enhance” on the video. So here are some thoughts.

From the keynote, it looked like its own app. The product manager for Make showed off examples made by the team that included a bike trail journal, psychedelic clock, music player, 3D playground, and Minecraft clone. But it also looked like it’s embedded into Sites.

Presenter demoing Figma Make, an AI-powered tool that transforms design prompts into interactive code; the screen shows a React component for a loan calculator with sliders and real-time repayment updates.

The UI of Figma Make looks familiar: Chat, code, preview.

What is unclear to me is if we can take the output from Make and bring it into Sites or Design and perform more extensive design surgery.

Figma Buzz

Figma Buzz looks to be Figma’s answer to Canva and Adobe Express. Design static assets like Instagram posts in Design, then bring them into Buzz and give access to your marketing colleagues so they can update the copy and photos as necessary. You can create and share a library of asset templates for your organization. Very straightforward, and honestly, I’ve not spent a lot of time with this one. One thing to note: even though this is meant for marketers to create assets, there’s no support for the CMYK color space (just like Figma Design and Draw), and any elements using the new texture or noise effects will be rasterized.

Figma Is Becoming a Business

On social media I read a lot of comments from people lamenting that Figma is overstuffing its core product, losing its focus, and should just improve what it already has.

Social media post by Nick Finck expressing concern that Figma’s new features echo existing tools and contribute to product bloat, comparing the direction to Adobe’s strategy.

An example of some of the negative responses on social media to Figma’s announcements.

We don’t live in that world. Figma is a venture-backed company that has raised nearly $750 million and is currently valued at $12.5 billion. They are not going to focus on just a single product; that’s not how it works. And they are preparing to IPO.

In a quippy post on Bluesky, as I was live-posting the keynote, I also said, "Figma is the new Adobe."

Social media post by Roger Wong (@lunarboy.com) stating “Figma is the new Adobe” with the hashtag #config2025.

Shifting the Center of Gravity

I meant a couple of things. First, Adobe and the design industry have grown up together, joined at the hip. Adobe invented PostScript, the page description language that PDF grew out of, and, together with the Mac, enabled the whole desktop publishing industry. There are a lot of Adobe haters out there because of the subscription model, bloatware, etc., but Adobe has always been a part of our profession. They bought rival Macromedia in 2005 to add digital design tools like Dreamweaver, Director, and Flash to their offering.

Amelia Nash, writing for PRINT Magazine about her recent trip to Adobe MAX in London (Adobe’s equivalent of Figma’s Config, running since 2003):

I had come into MAX feeling like an outsider, anxious that maybe my time with Adobe had passed, that maybe I was just a relic in a shiny new creative world. But I left with a reminder that Adobe still sees us, the seasoned professionals who built our careers with their tools, the ones who remember installing fonts manually and optimizing TIFFs for press. Their current marketing efforts may chase the next-gen cohort (with all its hyperactive branding and emoji-saturated optimism), but the tools are still evolving for us pros, too.

Adobe MAX didn’t just show me what’s new, it reminded me of what’s been true throughout my design career: Adobe is for creatives. All of us. Still.

Having created buzz around Config, with programming that featured talks titled "How top designers find their path and creative spark with Kevin Twohy" and "Designing for Climate Disaster with Megan Metzger," Figma clearly wants to occupy the same place in digital designers’ hearts that Adobe has held for graphic designers for over 40 years.

Building a Creative Suite

(I will forever call it Adobe Creative Suite, not Creative Cloud.)

By doubling the number of products it sells, Figma is building a creative suite and expanding its market. Same playbook as Adobe.

Do I lament that Figma is becoming like Adobe? No. I understand they’re a business. It’s a company full of talented people who are endeavoring to do the right thing and build the right tools for their audiences of designers, developers, and marketers.

Competition Is Good

The regulators were right: Adobe and Figma should not have merged. A year and a half later, building on the goodwill it has engendered with the digital design community, Figma has introduced four new products for making work. They’ve taken a fresh look at brushes and effects, bringing in approaches from WebGL. They’re being thoughtful about how they enable designers to integrate code into our workflows. And they’re rolling out AI prompt-to-code features in a way that makes sense for us.

To be sure, these products are all in beta and have a long way to go. And I’m excited to go play.

As a certified Star Wars geek, I love this TED talk from ILM’s Rob Bredow. For the uninitiated, Industrial Light & Magic, or ILM, is the company that George Lucas founded to make all the special effects for the original and subsequent Star Wars films. The firm has been an award-winning pioneer in special and visual effects, responsible for the dinosaurs in Jurassic Park, the liquid metal T-1000 in Terminator 2: Judgment Day, and the de-aging of Harrison Ford in Indiana Jones and the Dial of Destiny.

The point Bredow makes is simple: ILM creates technology in service of the storyteller, or creative.

I believe that we’re designed to be creative beings. It’s one of the most important things about us. That’s one of the reasons we appreciate and we just love it when we see technology and creativity working together. We see this on the motion control on the original “Star Wars” or on “Jurassic Park” with the CG dinosaurs for the first time. I think we just love it when we see creativity in action like this. Tech and creative working together. If we fast forward to 2020, we can see the latest real-time virtual production techniques. This was another creative innovation driven by a filmmaker. In this case, it’s Jon Favreau, and he had a vision for a giant Disney+ “Star Wars” series.

He later goes on to show a short film test made by a lone artist at ILM using an internal AI tool. It features never-before-seen creatures that could exist in the Star Wars universe. I mean, for now they look like randomized versions of Earth animals and insects, but if you squint, you can see where the technology is headed.

Bredow goes on…

Now the tech companies on their own, they don’t have the whole picture, right? They’re looking at a lot of different opportunities. We’re thinking about it from a filmmaking perspective. And storytellers, we need better artist-focused tools. Text prompts alone, they’re not great ways to make a movie. And it gets us excited to think about that future where we are going to be able to give artists these kinds of tools.

Again, artists—or designers, or even more broadly, professionals—need fine-grained control to adjust the output of AI.

Watch the whole thing. Instead of a doom and gloom take on AI, it’s an uplifting one that shows us what’s possible.

Star Wars Changed Visual Effects — AI Is Doing It Again

Jedi master of visual effects Rob Bredow, known for his work at Industrial Light & Magic and Lucasfilm, takes us on a cinematic journey through the evolution of visual effects, with behind-the-scenes stories from the making of fan favorites like “Jurassic Park,” “Star Wars,” “Indiana Jones” and more. He shares how artist-driven innovation continues to blend old and new technology, offering hope that AI won’t replace creatives but instead will empower artists to create new, mind-blowing wonders for the big screen. (Recorded at TED2025 on April 8, 2025)

youtube.com iconyoutube.com

A lot of young designers love to look at what’s contemporary, what’s trending on Dribbble or Instagram. But I think to look forward, we must always study our past. I spent the week in New York City, on vacation. My wife and I attended a bunch of Broadway shows and went to the Museum of Broadway, where I became enamored with a lot of the poster art. (’Natch.) I may write about that soon.

Coincidentally, Matthew Strom wrote about the history of album art, featuring the first album cover ever, which uses a photo of the Broadway theater, the Imperial, where I saw Smash earlier this week.

preview-1746385689679.jpg

The history of album art

Album art didn’t always exist. In the early 1900s, recorded music was still a novelty, overshadowed by sales of sheet music. Early vinyl records were vastly different from what we think of today: discs were sold individually and could only hold up to four minutes of music per side. Sometimes, only one side of the record was used. One of the most popular records of 1910, for example, was “Come, Josephine, in My Flying Machine”: it clocked in at two minutes and 39 seconds.

matthewstrom.com iconmatthewstrom.com

A lot of chatter in the larger design and development community has been either “AI is the coolest” or “AI is shite and I want nothing to do with it.”

Tobias van Schneider puts it plainly:

AI is here to stay.

Resistance is futile. Doesn’t matter how we feel about it. AI has arrived, and it’s going to transform every industry, period. The ship has sailed, and we’re all along for the ride whether we like it or not. Not using AI in the future is the equivalent to not using the internet. You can get away with it, but it’s not going to be easy for you.

He goes on to argue that craftspeople have been affected the most, not only by AI, but by the proliferation of stock and templates:

The warning signs have been flashing for years. We’ve witnessed the democratization of design through templates, stock assets, and simplified tools that turned specialized knowledge into commodity. Remember when knowing Photoshop guaranteed employment? Those days disappeared years ago. AI isn’t starting this fire, it’s just pouring gasoline on it. The technical specialist without artistic vision is rapidly becoming as relevant as a telephone operator in the age of smartphones. It’s simply not needed anymore.

But he’s not all doom and gloom.

If the client could theoretically do everything themselves with AI, then why hire a designer?

Excellent question. I believe there are three reasons to continue hiring a designer:

  1. Clients lag behind. It’ll take a few years before they fully catch up and stop hiring creatives for certain tasks, at which point creatives have caught up on what makes them worthy (beyond just production output).

  2. Clients famously don’t know what they want. That’s the primary reason to hire a designer with a vision. Even with AI at their fingertips, they wouldn’t know what instructions to give because they don’t understand the process.

  3. Smart clients focus on their strengths and outsource the rest. If I run a company I could handle my own bookkeeping, but I’ll hire someone. Same with creative services. AI won’t change that fundamental business logic. Just because I can, doesn’t mean I should.

And finally, he echoes the same sentiment that I’ve been saying (not that I’m the originator of this thought—just great minds think alike!):

What differentiates great designers then?

The Final Filter: taste & good judgment

Everyone in design circles loves to pontificate about taste, but it’s always the people with portfolios that look like a Vegas casino who have the most to say. Taste is the emperor’s new clothes of the creative industry, claimed by all, possessed by few, recognized only by those who already have it.

In other words, as designers, we need to lean into our curation skills.

preview-1746372802939.jpg

The future of the designer

Let's not bullshit ourselves. Our creative industry is in the midst of a massive transformation. MidJourney, ChatGPT, Claude and dozens of other tools have already fundamentally altered how ideation, design and creation happens.

vanschneider.com iconvanschneider.com

Dan Maccarone:

If users don’t trust the systems we design, that’s not a PM problem. It’s a design failure. And if we don’t fix it, someone else will, probably with worse instincts, fewer ethics, and a much louder bullhorn.

UX is supposed to be the human layer of technology. It’s also supposed to be the place where strategy and empathy actually talk to each other. If we can’t reclaim that space, can’t build products people understand, trust, and want to return to, then what exactly are we doing here?

It is a long read but well worth it.

preview-1746118018231.jpeg

We built UX. We broke UX. And now we have to fix it!

We didn’t just lose our influence. We gave it away. UX professionals need to stop accepting silence, reclaim our seat at the table, and…

uxdesign.cc iconuxdesign.cc

The System Has Been Updated

I’ve been seeing this new ad from Coinbase these past few days and love it. Made by independent agency Isle of Any, this spot has on-point animation, a banging track, and a great concept that plays with the Blue Screen of Death.


I found this one article about it from Little Black Book:

“Crypto is fundamentally updating the financial system,” says Toby Treyer-Evans, co-founder of Isle of Any, speaking with LBB. “So, to us it felt like an interesting place to start for the campaign, both as a film idea and as a way to play with the viewer and send a message. When you see it on TV, in the context of other advertising, it’s deliberately arresting… and blue being Coinbase’s brand colour is just one of those lovely coming togethers.”

A futuristic scene with a glowing, tech-inspired background showing a UI design tool interface for AI, displaying a flight booking project with options for editing and previewing details. The screen promotes the tool with a “Start for free” button.

Beyond the Prompt: Finding the AI Design Tool That Actually Works for Designers

There has been an explosion of AI-powered prompt-to-code tools within the last year. The space began with full-on integrated development environments (IDEs) like Cursor and Windsurf. These enabled developers to leverage AI assistants right inside their coding apps. Then came tools like v0, Lovable, and Replit, where users could prompt screens into existence at first and, before long, entire applications.

A couple weeks ago, I decided to test out as many of these tools as I could. My aim was to find the app that would combine AI assistance, design capabilities, and the ability to use an organization’s coded design system.

While my previous essay was about the future of product design, this article will dive deep into a head-to-head between all eight apps that I tried. I recorded the screen as I did my testing, so I’ve put together a video as well, in case you didn’t want to read this.


It is a long video, but there’s a lot to go through. It’s also my first video on YouTube, so this is an experiment.

The Bottom Line: What the Testing Revealed

I won’t bury the lede here. AI tools can be frustrating because they are probabilistic. One hour they can solve an issue quickly and efficiently, while the next they can spin on a problem and make you want to pull your hair out. Part of this is the LLM—and they all use some combo of the major LLMs. The other part is the tool itself, which often doesn’t handle what happens when its LLM fails.

For example, this morning I re-evaluated Lovable and Bolt because they’ve released new features within the last week, and I thought it would only be fair to assess the latest version. But both performed worse than in my initial testing two weeks ago. In fact, I tried Bolt twice this morning with the same prompt because the first attempt netted a blank preview. Unfortunately, the second attempt also resulted in a blank screen and then I ran out of credits. 🤷‍♂️

Scorecard for Subframe, with a total of 79 points across different categories: User experience (22), Visual design (13), Prototype (6), Ease of use (13), Design control (15), Design system integration (5), Speed (5), Editor’s discretion (0).

For designers who want actual design tools to work on UI, Subframe is the clear winner. The other tools go directly from prompt to code, giving designers no control via a visual editor. We’re not developers, so manipulating the design in code is not for us. We need to be able to directly manipulate the components by clicking and modifying shapes on the canvas or changing values in an inspector.

For me, the runner-up is v0, if you want to use it only for prototyping and for getting ideas. It’s quick—the UI is mostly unstyled, so it doesn’t get in the way of communicating the UX.

The Players: Code-Only vs. Design-Forward Tools

There are two main categories of contenders: code-only tools, and code plus design tools.

Code-Only

  • Bolt
  • Lovable
  • Polymet
  • Replit
  • v0

Code + Design

  • Onlook
  • Subframe
  • Tempo

My Testing Approach: Same Prompt, Different Results

As mentioned at the top, I tested these tools between April 16–27, 2025. As with most SaaS products, I’m sure things change daily, so this report captures a moment in time.

For my evaluation, since all these tools allow for generating a design from a prompt, that’s where I started. Here’s my prompt:

Create a complete shopping cart checkout experience for an online clothing retailer

I would expect the following pages to be generated:

  • Shopping cart
  • Checkout page (or pages) to capture payment and shipping information
  • Confirmation

I scored each app based on the following rubric:

  • Sample generation quality
      • User experience (25)
      • Visual design (15)
      • Prototype (10)
  • Ease of use (15)
  • Control (15)
  • Design system integration (10)
  • Speed (10)
  • Editor’s discretion (±10)
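For anyone who wants to see the arithmetic, here’s a quick sketch of how a scorecard rolls up, using Subframe’s numbers from the scorecard shown earlier (the property names are mine, condensed from the rubric).

```tsx
// Subframe's category scores from my scorecard, summed against the rubric above.
const subframe = {
  userExperience: 22,         // out of 25
  visualDesign: 13,           // out of 15
  prototype: 6,               // out of 10
  easeOfUse: 13,              // out of 15
  designControl: 15,          // out of 15
  designSystemIntegration: 5, // out of 10
  speed: 5,                   // out of 10
  editorsDiscretion: 0,       // ±10
};

const total = Object.values(subframe).reduce((sum, n) => sum + n, 0);
console.log(total); // 79, matching the scoreboard below
```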

The Scoreboard: How Each Tool Stacked Up

AI design tools for designers, with scores: Subframe 79, Onlook 71, v0 61, Tempo 59, Polymet 58, Lovable 49, Bolt 43, Replit 31. Evaluations conducted between 4/16–4/27/25.

Final summary scores for AI design tools for designers. Evaluations conducted between 4/16–4/27/25.

Here are the summary scores for all eight tools. For the detailed breakdown of scores, view the scorecards here in this Google Sheet.

The Blow-by-Blow: The Good, the Bad, and the Ugly

Bolt

Bolt screenshot: A checkout interface with a shopping cart summary, items listed, and a “Proceed to Checkout” button, displaying prices and order summary.

First up, Bolt. Classic prompt-to-code pattern here—text box, type your prompt, watch it work. 

Bolt shows you the code generation in real-time, which is fascinating if you’re a developer but mostly noise if you’re not. The resulting design was decent but plain, with typical UX patterns. It missed delivering the confirmation page I would expect. And when I tried to re-evaluate it this morning with their new features? Complete failure—blank preview screens until I ran out of credits. No rhyme or reason. And there it is—a perfect example of the maddening inconsistency these tools deliver. Working beautifully in one session, completely broken in another. Same inputs, wildly different outputs.

Score: 43

Lovable

Lovable screenshot: A shipping information form on a checkout page, including fields for personal details and a “Continue to Payment” button.

Moving on to Lovable, which I captured this morning right after they launched their 2.0 version. The experience was a mixed bag. While it generated clean (if plain) UI with some nice touches like toast notifications and a sidebar shopping cart, it got stuck at a critical juncture—the actual checkout. I had to coax it along, asking specifically for the shopping cart that was missing from the initial generation.

The tool encountered an error but at least provided a handy “Try to fix” button. Unlike Bolt, Lovable tries to hide the code, focusing instead on the browser preview—which as a designer, I appreciate. When it finally worked, I got a very vanilla but clean checkout flow and even the confirmation page I was looking for. Not groundbreaking, but functional. The approach of hiding code complexity might appeal to designers who don’t want to wade through development details.

Score: 49

Polymet

Polymet screenshot: A checkout page design for a fashion store showing payment method options (Credit Card, PayPal, Apple Pay), credit card fields, order summary with subtotal, shipping, tax, and total.

Next up is Polymet. This one has a very interesting interface and I kind of like it. You have your chat on the left and a canvas on the right. But instead of just showing the screen it’s working on, it’s actually creating individual components that later get combined into pages. It’s almost like building Figma components and then combining them at the end, except these are all coded components.

The design is pretty good—plain but very clean. I feel like it’s got a little more character than some of the others. What’s nice is you can go into focus mode and actually play with the prototype. I was able to navigate from the shopping cart through checkout (including Apple Pay) to confirmation. To export the code, you need to be on a paid plan, but the free trial gives you at least a taste of what it can do.

Score: 58

Replit

Replit screenshot: A developer interface showing progress on an online clothing store checkout project with error messages regarding the use of the useCart hook.

Replit was a test of patience—no exaggeration, it was the slowest tool of the bunch at 20 minutes to generate anything substantial. Why so slow? It kept encountering errors and falling into those weird loops that LLMs often do when they get stuck. At one point, I had to explicitly ask it to “make it work” just to progress beyond showing product pages, which wasn’t even what I’d asked for in the first place.

When it finally did generate a checkout experience, the design was nothing to write home about. Lines in the stepper weren’t aligning properly, there were random broken elements, and ultimately—it just didn’t work. I couldn’t even complete the checkout flow, which was the whole point of the exercise. I stopped recording at that point because, frankly, I just didn’t want to keep fighting with a tool that’s both slow and ineffective. 

Score: 31

v0

v0 screenshot: An online shopping cart with a multi-step checkout process, including a shipping form and order summary with prices and a “Continue to Payment” button.

Next, I took v0, which comes from Vercel, for a spin. I think it was one of the earlier prompt-to-code generators I heard about—originally just for components, not full pages (though I could be wrong). The interface is similar to Bolt with a chat panel on the left and code on the right. As it works, it shows you the generated code in real-time, which I appreciate. It’s pretty mature and works really well.

The result almost looks like a wireframe, but the visual design has a bit more personality than Bolt’s version, even though it’s using the unstyled shadcn components. It includes form validation (which I checked), and handles the payment flow smoothly before showing a decent confirmation page. Speed-wise, v0 is impressively quick compared to some others I tested—definitely a plus when you’re iterating on designs and trying to quickly get ideas.

Score: 61

Onlook

Onlook screenshot: A design tool interface showing a cart with empty items and a “Continue Shopping” button on a fashion store checkout page.

Onlook stands out as a self-contained desktop app rather than a web tool like the others. The experience starts the same way—prompt in, wait, then boom—but instead of showing you immediate results, it drops you into a canvas view with multiple windows displaying localhost:3000, which is your computer running a web server locally. The design it generated was fairly typical and straightforward, properly capturing the shopping cart, shipping, payment, and confirmation screens I would expect. You can zoom out to see a canvas-style overview and manipulate layers, with a styles tab that lets you inspect and edit elements.

The dealbreaker? Everything gets generated as a single page application, making it frustratingly difficult to locate and edit specific states like shipping or payment. I couldn’t find these states visually or directly in the pages panel—they might’ve been buried somewhere in the layers, but I couldn’t make heads or tails of it. When I tried using it again today to capture the styles functionality for the video, I hit the same wall that plagued several other tools I tested—blank previews and errors. Despite going back and forth with the AI, I couldn’t get it running again.

Score: 71

Subframe

Subframe screenshot: A design tool interface with a checkout page showing a cart with items, a shipping summary, and the option to continue to payment.

My time with Subframe revealed a tool that takes a different approach to the same checkout prompt. Unlike most competitors, Subframe can’t create an entire flow at once (though I hear they’re working on multi-page capabilities). But honestly, I kind of like this limitation—it forces you as a designer to actually think through the process.

What sets Subframe apart is its MidJourney-like approach, offering four different design options that gradually come into focus. These aren’t just static mockups but fully coded, interactive pages you can preview in miniature. After selecting a shopping cart design, I simply asked it to create the next page, and it intelligently moved to shipping/billing info.

The real magic is having actual design tools—layers panel, property inspector, direct manipulation—alongside the ability to see the working React code. For designers who want control beyond just accepting whatever the AI spits out, Subframe delivers the best combination of AI generation and familiar design tooling.

Score: 79

Tempo

Tempo screenshot: A developer tool interface generating a clothing store checkout flow, showing wireframe components and code previews.

Lastly, Tempo. This one takes a different approach than most other tools. It starts by generating a PRD from your prompt, then creates a user flow diagram before coding the actual screens—mimicking the steps real product teams would take. Within minutes, it had generated all the different pages for my shopping cart checkout experience. That’s impressive speed, but from a design standpoint, it’s just fine. The visual design ends up being fairly plain, and the prototype had some UX issues—the payment card change was hard to notice, and the “Place order” action didn’t properly lead to a confirmation screen even though it existed in the flow.

The biggest disappointment was with Tempo’s supposed differentiator. Their DOM inspector theoretically allows you to manipulate components directly on canvas like you would in Figma—exactly what designers need. But I couldn’t get it to work no matter how hard I tried. I even came back days later to try again with a different project and reached out to their support team, but after a brief exchange—crickets. Without this feature functioning, Tempo becomes just another prompt-to-code tool rather than something truly designed for visual designers who want to manipulate components directly. Not great.

Score: 59

The Verdict: Control Beats Code Every Time

Subframe screenshot: A design tool interface displaying a checkout page for a fashion store with a cart summary and a “Proceed to Checkout” button.

Subframe offers actual design tools—layers panel, property inspector, direct manipulation—along with AI chat.

I’ve spent the last couple weeks testing these prompt-to-code tools, and if there’s one thing that’s crystal clear, it’s this: for designers who want actual design control rather than just code manipulation, Subframe is the standout winner.

I will caveat that I didn’t do a deep dive into every single tool. I played with them at a cursory level, giving each a fair shot with the same prompt. What I found was a mix of promising starts and frustrating dead ends.

The reality of AI tools is their probabilistic nature. Sometimes they’ll solve problems easily, and then at other times they’ll spectacularly fail. I experienced this firsthand when retesting both Lovable and Bolt with their latest features—both performed worse than in my initial testing just two weeks ago. Blank screens. Error messages. No rhyme or reason.

For designers like me, the dealbreaker with most of these tools is being forced to manipulate designs through code rather than through familiar design interfaces. We need to be able to directly manipulate components by clicking and modifying shapes on the canvas or changing values in an inspector. That’s where Subframe delivers while others fall short—if their audience includes designers, which might not be the case.

For us designers, I believe Subframe could be the answer. But I’m also curious to see whether Figma will have an answer. Will the company get into the AI > design > code game? Or will it be left behind?

The future belongs to applications that balance AI assistance with familiar design tooling—not just code generators with pretty previews.

I love this wonderfully written piece by Julie Zhuo exploring the Ghiblification of everything. On how we feel about it a month later:

The second watching never commands the same awe as the first. The 20th bite doesn’t dance on the tongue as exquisitely. And the 200th anime portrait certainly no longer impresses the way it once did.

The sad truth is that oversaturation strangles quality. Nothing too easy can truly be tasteful.

She goes on to make the point that Studio Ghibli’s quality goes beyond style; it’s about narrative and imagination.

AI-generated images in the “Ghibli style” may borrow its surface features but they don’t capture the soul of what makes Studio Ghibli exceptional in quality. They lack the narrative depth, the handcrafted devotion, and the cultural resonance.

Like a celebrity impersonator, the ChatGPT images borrow from the cache of the original. But sadly, hollowly, it’s not the same. What made the original shimmer is lost in translation.

And rather than going down the AI-is-enshittification path, Zhuo pivots a little, focusing on the technological quality and the benefits it brings.

…ChatGPT could offer a flavor of magic that Studio Ghibli could never achieve, the magic of personalization.

The quality of Ghibli-fication is the quality of the new image model itself, one that could produce so convincing an on-the-fly facsimile of a photograph in a particular style that it created a “moment” in public consciousness. ChatGPT 4o beat out a number of other image foundational models for this prize.

preview-1745686415978.png

The AI Quality Coup

What exactly is "great" work now?

open.substack.com iconopen.substack.com