Dark red-toned artwork of a person staring into a glowing phone, surrounded by swirling shadows.

Blood in the Feed: Social Media’s Deadly Design

The assassination of Charlie Kirk on September 10, 2025, marked a horrifying inflection point in the growing debate over how digital platforms amplify rage and destabilize politics. As someone who had already stepped back from social media after Trump’s re-election, watching these events unfold from a distance only confirmed my decision. My feeds had become pits of despair, grievance, and negativity that did my mental health no favors. While I understand the need to shine a light on the atrocities of Trump and his government, the constant barrage was too much. So I mostly opted out, save for the occasional promotion of my writing.

Kirk’s death feels like the inevitable conclusion of systems we’ve built—systems that reward outrage, amplify division, and transform human beings into content machines optimized for engagement at any cost.

The Mechanics of Disconnection

As it turns out, my behavior isn’t out of the ordinary. People quit social media for various reasons, often situational—seeking balance in an increasingly overwhelming digital landscape. As a participant explained in a research project about social media disconnection:

It was just a build-up of stress and also a huge urge to change things in life. Like, ‘It just can’t go on like this.’ And that made me change a number of things. So I started to do more sports and eat differently, have more social contacts and stop using online media. And instead of sitting behind my phone for two hours in the evening, I read a book and did some work, went to work out, I went to a birthday or a barbecue. I was much more engaged in other things. It just gave me energy. And then I thought, ‘This is good. That’s the way it’s supposed to be. I have to maintain this.’

Sometimes the realization is more visceral—that on these platforms, we are the product. As Jef van de Graaf provocatively puts it:

Every post we make, every friend we invited, every little notification dragging us back into the feed serves one purpose: to extract money from us—and give nothing back but dopamine addiction and mental illness.

While his language is deliberately inflammatory, the sentiment resonates with many who’ve watched their relationship with these platforms sour. As he cautions:

Remember: social media exists because we feed it our lives. We trade our privacy and sanity so VCs and founders can get rich and live like greedy fucking kings.

The Architecture of Rage

The internet was built to connect people and ideas. Even the early iterations of Facebook and Twitter were relatively harmless because the timelines were chronological. But then the makers—product managers, designers, and engineers—of social media platforms began to optimize for engagement and visit duration. Was the birth of the social media algorithm the original sin?

Kevin Roose and Casey Newton explored this question in their Hard Fork episode following Kirk’s assassination, discussing how platforms have evolved to optimize for what they call “borderline content”—material that comes right up to the line of breaking a platform’s policy without quite going over. As Newton observed about Kirk himself:

He excelled at making what some of the platform nerds that I write about would call borderline content. So basically, saying things that come right up to the line of breaking a platform’s policy without quite going over… It turns out that the most compelling thing you can do on social media is to almost break a policy.

Kirk mastered this technique—speculating that vaccines killed millions, calling the Civil Rights Act a mistake, flirting with anti-Semitic tropes while maintaining plausible deniability. He understood the algorithm’s hunger for controversy, and fed it relentlessly. And then, in a horrible irony, he was killed by someone who had likely been radicalized by the very same algorithmic forces he’d helped unleash.

As Roose reflected:

We as a culture are optimizing for rage now. You see it on the social platforms. You see it from politicians calling for revenge for the assassination of Charlie Kirk. You even see it in these individual cases of people getting extremely mad at the person who made a joke about Charlie Kirk that was edgy and tasteless, and going to report them to their employer and get them fired. It’s all this sort of spectacle of rage, this culture of destroying and owning and humiliating.

The Unraveling of Digital Society

Social media and smartphones have fundamentally altered how we communicate and socialize, often at the expense of face-to-face interactions. These technologies have created a market for attention that fuels fear, anger, and political conflict. The research on mental health impacts is sobering: studies found that the introduction of Facebook to college campuses led to measurable increases in depression, accounting for approximately 24 percent of the increased prevalence of severe depression among college students over two decades.

In the wake of Kirk’s assassination, what struck me most was how the platforms immediately transformed tragedy into content. Within hours, there were viral posts celebrating his death, counter-posts condemning those celebrations, organizations collecting databases of “offensive” comments, people losing their jobs, death threats flying in all directions. As Newton noted:

This kind of surveillance and doxxing is essentially a kind of video game that you can play on X. And people like to play video games. And because you’re playing with people’s real lives, it feels really edgy and cool and fun for those who are participating in this.

The human cost is staggering—teachers, firefighters, military members fired or suspended for comments about Kirk’s death. Many received death threats. Far-right activists called for violence and revenge, doxxing anyone they accused of insufficient mourning.

Blood in the Feed

The last five years have been marked by eruptions of political violence that cannot be separated from the online world that incubated them.

  • The attack on Paul Pelosi (2022). The man who broke into House Speaker Nancy Pelosi’s San Francisco home and fractured her husband’s skull had been marinating in QAnon conspiracies and election denialism online. Extremism experts warned it was a textbook case of how stochastic terrorism—the idea that widespread demonization online can trigger unpredictable acts of violence by individuals—travels from platform rhetoric into a hammer-swinging hand.
  • The Trump assassination attempt (July 2024). A young man opened fire at a rally in Pennsylvania. His social media presence was filled with antisemitic, anti-immigrant content. Within hours, extremist forums were glorifying him as a martyr and calling for more violence.
  • The killing of Minnesota legislator Melissa Hortman and her husband (June 2025). Their murderer left behind a manifesto echoing the language of online white supremacist and anti-abortion communities. He wasn’t a “lone wolf.” He was drawing from the same toxic well of rhetoric that floods online forums. The language of his manifesto wasn’t unique—it was copied, recycled, and amplified in the ideological swamps anyone with a Wi-Fi connection can wander into.

These headline events sit atop a broader wave: the New Orleans truck-and-shooting rampage inspired by ISIS propaganda online (January 2025), the Cybertruck bombing outside Trump’s Las Vegas hotel tied to accelerationist forums—online spaces where extremists argue that violence should be used to hasten the collapse of society (January 2025), and countless smaller assaults on election workers, minority communities, and public officials.

The pattern is depressingly clear. Platforms radicalize, amplify, and normalize the language of violence. Then, someone acts.

The Death of Authenticity

As social media became commoditized—a place to influence and promote consumption—it became less personal and more like TV. The platforms are now being overrun by AI spam and engagement-driven content that drowns out real human connection. As James O’Sullivan notes:

Platforms have little incentive to stem the tide. Synthetic accounts are cheap, tireless and lucrative because they never demand wages or unionize… Engagement is now about raw user attention – time spent, impressions, scroll velocity – and the net effect is an online world in which you are constantly being addressed but never truly spoken to.

Research confirms what users plainly see: tens of thousands of machine-written posts now flood public groups, pushing scams and chasing engagement. Whatever remains of genuine human content is increasingly sidelined by algorithmic prioritization, receiving fewer interactions than the engineered content and AI slop optimized solely for clicks.

The result? Networks that once promised a single interface for the whole of online life are splintering. Users drift toward smaller, slower, more private spaces—group chats, Discord servers, federated microblogs, and email newsletters. A billion little gardens replacing the monolithic, rage-filled public squares that have led to a burst of political violence.

The Designer’s Reckoning

This brings us to design and our role in creating these systems. As designers, are we beginning to reckon with what we’ve wrought?

Jony Ive, reflecting on his own role in creating the smartphone, acknowledges this burden:

I think when you’re innovating, of course, there will be unintended consequences. You hope that the majority will be pleasant surprises. Certain products that I’ve been very involved with, I think there were some unintended consequences that were far from pleasant. My issue is that even though there was no intention, I think there still needs to be responsibility. And that weighs on me heavily.

His words carry new weight after Kirk’s assassination—a death enabled by platforms we designed, algorithms we optimized, engagement metrics we celebrated.

At the recent World Design Congress in London, architect Indy Johar didn’t mince words:

We need ideas and practices that change how we, as humans, relate to the world… Ignoring the climate crisis means you’re an active operator in the genocide of the future.

But we might ask: What about ignoring the crisis of human connection? What about the genocide of civil discourse? Climate activist Tori Tsui’s warning applies equally to our digital architecture: “The rest of us are at the mercy of what you decide to do with your imagination.”

Political violence is accelerating and people are dying because of what we did with our imagination. If responsibility weighs heavily, so too must the search for alternatives.

The Possibility of Bridges

There are glimmers of hope in potential solutions. Aviv Ovadya’s concept of “bridging-based algorithms” offers one path forward—systems that actively seek consensus across divides rather than exploiting them. As Casey Newton explains:

They show them to people across the political spectrum… and they only show the note if people who are more on the left and more on the right agree. They see a bridge between the two of you and they think, well, if Republicans and Democrats both think this is true, this is likelier to be true.
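To make the mechanism concrete, here is a minimal sketch in JavaScript of a bridging heuristic. It is purely illustrative and far simpler than anything in production: a note is surfaced only when raters on both sides of the divide independently find it helpful, not when it racks up the most engagement.

// Toy bridging heuristic. Assumes each rating carries a rough political
// lean for the rater; real systems infer this from rating behavior and
// use far more sophisticated models.
function shouldShowNote(ratings, threshold = 0.7) {
  const byLean = { left: [], right: [] };
  for (const { lean, helpful } of ratings) {
    if (byLean[lean]) byLean[lean].push(helpful);
  }

  const helpfulShare = (votes) =>
    votes.length ? votes.filter(Boolean).length / votes.length : 0;

  // Both groups must clear the bar; a raw engagement average would not.
  return helpfulShare(byLean.left) >= threshold &&
         helpfulShare(byLean.right) >= threshold;
}

// Example: broad agreement across the spectrum, so the note is shown.
shouldShowNote([
  { lean: "left", helpful: true },
  { lean: "left", helpful: true },
  { lean: "right", helpful: true },
  { lean: "right", helpful: true },
]); // returns true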

But technological solutions alone won’t save us. The participants in social media disconnection studies often report developing better relationships with technology only after taking breaks. One participant explained:

It’s more the overload that I look at it every time, but it doesn’t really satisfy me, that it no longer had any value at a certain point in time. But that you still do it. So I made a conscious choice – a while back – to stop using Facebook.

Designing in the Shadow of Violence

Rob Alderson, in his dispatch from the World Design Congress, puts together a few pieces. Johar suggests design’s role is “desire manufacturing”—not just creating products, but rewiring society to want and expect different versions of the future. As COLLINS co-founder Leland Maschmeyer argued, design is about…

What do we want to do? What do we want to become? How do we get there?… We need to make another reality as real as possible, inspired by new context and the potential that holds.

The challenge before us isn’t just technical—it’s fundamentally about values and vision. We need to move beyond the Post-it workshops and develop what Johar calls “new competencies” that shape the future.

As I write this, having stepped back from the daily assault of algorithmic rage, I find myself thinking about the Victorian innovators Ive mentioned—companies like Cadbury’s and Fry’s that didn’t just build factories but designed entire towns, understanding that their civic responsibility extended far beyond their products. They recognized that the massive societal shift of moving people off the land they farmed and into cities for industrial manufacturing required holistic thinking about how people live and work together.

We stand at a similar inflection point. The tools we’ve created have reshaped human connection in ways that led directly to Charlie Kirk’s assassination. A young man, radicalized online, killed a figure who had mastered the art of online radicalization. The snake devoured its tail on a college campus in Utah, and we all watched it happen in real-time, transforming even this tragedy into content.

The vast majority of Americans, as Newton reminds us, “do not want to participate in a violent cultural war with people who disagree with them.” Yet our platforms are engineered to convince us otherwise, to make civil war feel perpetually imminent, to transform every disagreement into an existential threat.

The Cost of Our Imagination

Perhaps the real design challenge lies not in creating more engaging feeds or stickier platforms, but in designing systems that honor our humanity, foster genuine connection, and help us build the bridges we so desperately need.

Because while these US incidents show how social media incubates lone attackers and small cells, they pale in comparison to Myanmar, where Facebook’s algorithms directly amplified hate speech and incitement, contributing to the deaths of thousands—estimates range from 6,700 to as high as 24,000—and the forced displacement of over 700,000 Rohingya Muslims. That catastrophe made clear: when platforms optimize only for engagement, the result isn’t connection but carnage.

This is our design failure. We built systems that reward extremism, amplify rage, and treat human suffering as engagement. The tools meant to bring us together have instead armed us against each other. And we all bear responsibility for that.

It’s time we imagined something better—before the systems we’ve created finish the job of tearing us apart.

A Momentary Lapse of Artwork

For men of a certain age, Pink Floyd represents a milieu—brooding, melancholy, emo before emo had a name. I started listening to Floyd in high school, and as a kid who always felt like an outsider, I found that The Wall really resonated with me. In college, I started exploring their back catalog, and Animals and Wish You Were Here became my favorites. Of course, as a designer, I have always loved the album covers. Storm Thorgerson and Hipgnosis’ surreal photos were mind-bending and added to the music’s feelings of alienation, yearning, and the aching beauty of being lost.

I hadn’t listened to the music in a while, but the song “Two Suns in the Sunset” from The Final Cut periodically pops into my head. I listened to the full album last Sunday. On Tuesday, I pulled up their catalog again to play in the background while I worked and, to my surprise, all the trippy cover art was replaced by white type on a black surface!

Screenshot of Apple Music showing Pink Floyd albums with covers replaced by text-only descriptions, such as “A WALL OF WHITE BRICKS WITH RED GRAFFITI” for The Wall and “A PRISM REFRACTS LIGHT INTO THE SPECTRUM” for The Dark Side of the Moon.

The classic image of the prism and rainbow for The Dark Side of the Moon was replaced by “A PRISM REFRACTS LIGHT INTO THE SPECTRUM.” It’s essentially alt text for all the covers—deadpan captions where the surreal images used to be.

“ROWS OF HOSPITAL BEDS ON A BEACH” for 1987’s A Momentary Lapse of Reason.

“A WALL OF WHITE BRICKS WITH RED GRAFFITI” for The Wall.

“TWO MEN IN SUITS SHAKING HANDS ONE MAN IS ON FIRE” for Wish You Were Here.

And my favorite—“PHOTO WITHIN A PHOTO WITHIN A PHOTO” for Ummagumma.

What is going on? Are they broken images replaced by alt text? Some folks on the internet think it is a protest against AI art.

Instagram post from @coverartmatters showing Pink Floyd album covers replaced with text descriptions. A red arrow highlights a fan comment suggesting the change looks like a message against AI.

But in reality, it’s part of a marketing campaign because Pink Floyd’s official website has also been wrapped in a black cloth…

Pink Floyd’s official website homepage featuring a black-wrapped circular object against a dark background with the text “JOIN PINK FLOYD HQ” below.

Maybe it’s not cloth. Looks more like black plastic. And that’s because it’s very likely coinciding with the 50th anniversary of Wish You Were Here, which famously shipped to stores wrapped in black shrink-wrap, forcing buyers to take the record on faith.

Pink Floyd’s Wish You Were Here vinyl wrapped in black shrink-wrap with a circular handshake sticker on the front.

There was a conceptual reason behind it, of course. From Wikipedia:

Storm Thorgerson had accompanied the band on their 1974 tour and had given serious thought to the meaning of the lyrics, eventually deciding that the songs were, in general, concerned with “unfulfilled presence”, rather than [former lead vocalist and founding band member Syd] Barrett’s illness. This theme of absence was reflected in the ideas produced by his long hours spent brainstorming with the band. Thorgerson had noted that Roxy Music’s Country Life was sold in an opaque green cellophane sleeve – censoring the cover image – and he copied the idea, concealing the artwork for Wish You Were Here in a black-coloured shrink-wrap (therefore making the album art “absent”).

I’m curious to see if there is a big reveal tomorrow, the actual anniversary of my favorite Pink Floyd album, and maybe the most fitting tribute to absence they could pull off.

UPDATE 9:05 PM, September 11, 2025:

At midnight Eastern Time, Apple Music updated with a new pre-release album from Pink Floyd with cover art—the 50th anniversary edition of Wish You Were Here. It’ll be fully released on December 12.

Apple Music screenshot showing Pink Floyd’s Wish You Were Here 50 pre-release album with new cover art and tracklist, releasing December 12, 2025.

Still from a video shown at Apple Keynote 2025. Split screen of AirPods Pro connection indicator on left, close-up of earbuds in charging case on right.

Notes About the September 2025 Apple Event

Today’s Apple keynote opened with a classic quote from Steve Jobs.

Steve Jobs quote at Apple Keynote 2025 – Black keynote slide with white text: “Design is not just what it looks like and feels like. Design is how it works.” – Steve Jobs.

Then a video played, focused on the fundamental geometric shapes found in Apple’s products: circles in the HomePod, iPhone shutter button, iPhone camera, MagSafe charging ring, and Digital Crown on Apple Watch; rounded squares in the charging block, Home scene button, Mac mini, keycaps, Finder icon, and Face ID; and lozenges in the AirPods case, MagSafe port, Liquid Glass carousel control, and the Action button on Apple Watch Ultra.

Then Tim Cook repeated the notion in his opening remarks:

At Apple, design has always been fundamental to who we are and what we do. For us, design goes beyond just how something looks or feels. Design is also how it works. This philosophy guides everything we do, including the products we’re going to introduce today and the experiences they provide.

Apple announced a bunch of products today, including:

  • AirPods Pro 3 with better active noise canceling, live translation, and heart rate sensing (more below)
  • Apple Watch Series 11, thinner and with hypertension alerts and sleep score
  • iPhone 17 with a faster chip and better camera (as always)
  • iPhone Air at 5.6 mm thin! They packed all the main components into a new full-width camera “plateau” (I guess that’s the new word for camera bump)
  • iPhone 17 Pro / Pro Max with a faster chip and even better camera (as always), along with unibody construction and cool vapor cooling (like liquid cooling, but with vapor), and a beefy camera plateau

Highlights

Live Translation is Star Trek’s Universal Translator

In the Star Trek universe, humans regularly speak English with aliens and the audience hears those same aliens reply in English. Of course, it’s television and it was always explained away—that a “universal translator” is embedded in the comm badge all Starfleet crew members wear.

Apple Keynote 2025 iPhone Live Translation feature – Woman holds up an iPhone displaying translated text, demonstrating Apple Intelligence with AirPods Pro 3.

With AirPods Pro 3, this is becoming real! In one demo video, Apple shows a woman at a market. She’s shopping and hears a vendor speak to her in Spanish. Through her AirPods, she hears the live translation and can reply in English and have that translated back to Spanish on her iPhone. Then, in another scene, two guys are talking—English and Italian—and they’re both wearing the new AirPods and having a seamless conversation. Amazing.

Apple Keynote 2025 AirPods Pro 3 Live Translation demo at café – Man wearing AirPods Pro 3 sits outdoors at a café table, smiling while testing real-time language translation.

Heart Rate Monitoring in AirPods

Apple is extending its fitness tracking features to AirPods, specifically the new AirPods Pro 3. These come with a new sensor that pulses invisible infrared light 256 times per second to detect blood flow and calculate heart rate. I’m always astonished by how Apple keeps extending the capabilities of its devices to push health and fitness metrics, which—or so their thesis goes—helps with overall wellbeing. (See below.)
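As a rough illustration of the principle (and emphatically not Apple’s actual signal processing, which is far more sophisticated), a heart rate can be estimated from a reflected-light signal by counting pulses over a known sampling window:

// Toy beats-per-minute estimate from a pulse-style optical signal.
// `samples` is an array of reflected-light readings; the 256 Hz sample
// rate is assumed from Apple's description. Real devices filter noise,
// reject motion artifacts, and do much more.
function estimateBpm(samples, sampleRate = 256) {
  const mean = samples.reduce((sum, s) => sum + s, 0) / samples.length;
  let beats = 0;
  let above = false;
  for (const s of samples) {
    if (!above && s > mean) {
      beats += 1; // rising edge crossing the mean counts as one pulse
      above = true;
    } else if (s < mean) {
      above = false;
    }
  }
  const seconds = samples.length / sampleRate;
  return Math.round((beats / seconds) * 60);
}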

Full-Width Camera Bump

Or, the new camera plateau. I actually prefer the full width over just the bump. I feel like the plain camera bump on my iPhone 16 Pro makes the phone too wobbly when I put it on its back. I think a bump that spans the full width of the phone will make it more stable. This new design is on the new iPhone Air and iPhone 17 Pro.

To Air or Not to Air?

I’m on the iPhone Upgrade Program so I can get a new phone each year—and I have for the last few. I’m wondering if I want to get the Air this time. One thing I dislike about the iPhone Pros is their weight. The Pro is pretty heavy and I can feel it in my hand after prolonged use. At 165 grams, the Air is 17% lighter than the 16 Pro (199 grams). It might make a difference.

Overall Thoughts

Of course, in 2025, it’s a little striking that Apple didn’t mention much about AI. Apple framed AI not as a standalone product but as an invisible layer woven through AirPods, Watch, and iPhone—from Live Translation and Workout Buddy nudges to on-device models powering health insights and generative photo features. Instead of prompts and chatbots, Apple Intelligence showed up as contextual, ambient assistance designed to disappear into the flow of everyday use. And funnily enough, iOS 26 was mentioned in passing, as if Apple assumed everyone watching had seen the prior episode—er, keynote—in June.

It’s interesting that the keynote opened with that Steve Jobs quote about design. Maybe someone in Cupertino read my piece breaking down Liquid Glass where I argued:

People misinterpret this quote all the time to mean design is only how it works. That is not what Steve meant. He meant, design is both what it looks like and how it works.

(Actually, it was probably what Casey Newton wrote in Platformer about Liquid Glass.) 

If you step back and consider why Apple improves its hardware and software every year, it goes back to their implied mission: to make products that better human lives. This is exemplified by the “Dear Apple” spot they played as part of the segment on Apple Watch.


Apple’s foray into wearables—beyond ear- and headphones—with Apple Watch ten years ago was really an entry into health technology. Lives have been saved and people have gotten healthier because Apple technology enabled them. Dr. Sumbul Ahmad Desai, VP of Health, mentioned their new hypertension detection feature could notify over one million people with undiagnosed hypertension in its first year. Apple developed this feature using advanced machine learning, drawing on training data from multiple studies that involved over 100,000 participants. Then they clinically validated it in a separate study of over 2,000 participants. In other words, they’ve become a real force in shaping health tech.

And what also amazes me is that AirPods Pro 3 will now help with health and fitness tracking, too. (See above.)

There’s no doubt that Apple’s formal design is always top-notch. But it’s great to be reminded of their why and how these must-buy-by-Christmas devices are capable of solving real-world problems and bettering our lives. (And no, I don’t think having a lighter, thinner, faster, cooler phone falls into this category. We can have both moral purpose and commercial purpose.)

Conceptual 3D illustration of stacked digital notebooks with a pen on top, overlaid on colorful computer code patterns.

Why We Still Need a HyperCard for the AI Era

I rewatched the 1982 film TRON for the umpteenth time the other night with my wife. I have always credited this movie as the spark that got me interested in computers. Mind you, I was nine years old when this film came out. I was so excited after watching the movie that I got my father to buy us a home computer—the mighty Atari 400 (note sarcasm). I remember an educational game that came on cassette called “States & Capitals” that taught me, well, the states and their capitals. It also introduced me to BASIC, and after watching TRON, I wanted to write programs!

Vintage advertisement for the Atari 400 home computer, featuring the system with its membrane keyboard and bold headline “Introducing Atari 400.”

The Atari 400’s membrane keyboard was easy to wipe down, but terrible for typing. It also reminded me of fast food restaurant registers of the time.

Back in the early days of computing—the 1960s and ’70s—there was no distinction between users and programmers. Computer users wrote programs to do stuff for them. Hence the close relationship between the two that’s depicted in TRON. The programs in the digital world resembled their creators because they were extensions of them. Tron, the security program that Bruce Boxleitner’s character Alan Bradley wrote, looked like its creator. Clu looked like Kevin Flynn, played by Jeff Bridges. Early in the film, a compound interest program captured by the MCP’s goons says to a cellmate, “if I don’t have a User, then who wrote me?”

Scene from the 1982 movie TRON showing programs in glowing blue suits standing in a digital arena.

The programs in TRON looked like their users. Unless the user was the program, which was the case with Kevin Flynn (Jeff Bridges), third from left.

I was listening to a recent interview with Ivan Zhao, CEO and cofounder of Notion, in which he said he and his cofounder were “inspired by the early computing pioneers who in the ’60s and ’70s thought that computing should be more LEGO-like rather than like hard plastic.” Meaning computing should be malleable and configurable. He goes on to say, “That generation of thinkers and pioneers thought about computing kind of like reading and writing.” As in accessible and fundamental so all users can be programmers too.

The 1980s ushered in the personal computer era with the Apple IIe, Commodore 64, TRS-80 (maybe even the Atari 400 and 800), and then the Macintosh. Programs were beginning to be mass-produced and consumed by users, not programmed by them. To be sure, this move made computers much more approachable. But it also meant that users lost a bit of control. They had to wait for Microsoft to add a feature into Word that they wanted.

Of course, we’ve now come full circle. In 2025, with AI-enabled vibecoding, users are able to spin up little custom apps that do pretty much anything they want them to do. It’s easy to start, but not trivial to get right. The only interface is the chatbox, so your control is only as good as your prompts and the model’s understanding. And things can go awry pretty quickly if you’re not careful.

What we’re missing is something accessible, but controllable. Something with enough power to allow users to build a lot, but not so much that it requires high technical proficiency to produce something good. In 1987, Apple released HyperCard and shipped it for free with every new Mac. HyperCard, as fans declared at the time, was “programming for the rest of us.”

HyperCard—Programming for the Rest of Us

Black-and-white screenshot of HyperCard’s welcome screen on a classic Macintosh, showing icons for Tour, Help, Practice, New Features, Art Bits, Addresses, Phone Dialer, Graph Maker, QuickTime Tools, and AppleScript utilities.

HyperCard’s welcome screen showed some useful stacks to help the user get started.

Bill Atkinson was the programmer responsible for MacPaint. After the Mac launched, and apparently on an acid trip, Atkinson conceived of HyperCard. As he wrote on the Apple history site Folklore:

Inspired by a mind-expanding LSD journey in 1985, I designed the HyperCard authoring system that enabled non-programmers to make their own interactive media. HyperCard used a metaphor of stacks of cards containing graphics, text, buttons, and links that could take you to another card. The HyperTalk scripting language implemented by Dan Winkler was a gentle introduction to event-based programming.

There were five main concepts in HyperCard: cards, stacks, objects, HyperTalk, and hyperlinks. 

  • Cards were screens or pages. Remember that the Mac’s nine-inch monochrome screen was just 512 pixels by 342 pixels.
  • Stacks were collections of cards, essentially apps.
  • Objects were the UI and layout elements that included buttons, fields, and backgrounds.
  • HyperTalk was the scripting language that read like plain English.
  • Hyperlinks were links from one interactive element like a button to another card or stack.

When I say that HyperTalk read like plain English, I mean it really did. AppleScript and JavaScript are descendants. Here’s a sample logic script:

if the text of field "Password" is "open sesame" then
  go to card "Secret"
else
  answer "Wrong password."
end if

Armed with this kit of parts, users took this programming “erector set” and built all sorts of banal or wonderful apps. From tracking vinyl records to issuing invoices, or transporting gamers to massive immersive worlds, HyperCard could do it all. The first version of the classic puzzle adventure game Myst was created with HyperCard. It comprised six stacks and 1,355 cards. From Wikipedia:

The original HyperCard Macintosh version of Myst had each Age as a unique HyperCard stack. Navigation was handled by the internal button system and HyperTalk scripts, with image and QuickTime movie display passed off to various plugins; essentially, Myst functions as a series of separate multimedia slides linked together by commands.

Screenshot from the game Myst, showing a 3D-rendered island scene with a ship in a fountain and classical stone columns.

The hit game Myst was built in HyperCard.

For a while, HyperCard was everywhere. Teachers made lesson plans. Hobbyists made games. Artists made interactive stories. In the Eighties and early Nineties, there was a vibrant shareware community: small independent developers who created and shared simple programs for a postcard, a beer, or five dollars. Thousands of HyperCard stacks were distributed on aggregated floppies and CD-ROMs. Steve Sande, writing in Rocket Yard:

At one point, there was a thriving cottage industry of commercial stack authors, and I was one of them. Heizer Software ran what was called the “Stack Exchange”, a place for stack authors to sell their wares. Like Apple with the current app stores, Heizer took a cut of each sale to run the store, but authors could make a pretty good living from the sale of popular stacks. The company sent out printed catalogs with descriptions and screenshots of each stack; you’d order through snail mail, then receive floppies (CDs at a later date) with the stack(s) on them.

Black-and-white screenshot of Heizer Software’s “Stack Exchange” HyperCard catalog, advertising a marketplace for stacks.

Heizer Software’s “Stack Exchange,” a marketplace for HyperCard authors.

From Stacks to Shrink-Wrap

But even as tiny shareware programs and stacks thrived, the ground beneath this cottage industry was beginning to shift. The computer industry—in its push to move from a niche hobby to a machine in every household—professionalized and commoditized software development, distribution, and sales. By the 1990s, the dominant model was packaged software merchandised on store shelves in slick shrink-wrapped boxes. The packaging was always oversized for the floppy or CD it contained to maximize shelf presence.

Unlike the users/programmers of the ’60s and ’70s, you didn’t make your own word processor anymore; you bought Microsoft Word. You didn’t build your own paint and retouching program—you purchased Adobe Photoshop. These applications were powerful, polished, and designed for thousands and eventually millions of users. But that meant if you wanted a new feature, you had to wait for the next upgrade cycle—typically a couple of years. If you had an idea, you were constrained by what the developers at Microsoft or Adobe decided was on the roadmap.

The ethos of tinkering gave way to the economics of scale. Software became something you consumed rather than created.

From Shrink-Wrap to SaaS

The 2000s took that shift even further. Instead of floppy disks or CD-ROMs, software moved into the cloud. Gmail replaced the personal mail client. Google Docs replaced the need for a copy of Word on every hard drive. Salesforce, Slack, and Figma turned business software into subscription services you didn’t own, but rented month-to-month.

SaaS has been a massive leap for collaboration and accessibility. Suddenly your documents, projects, and conversations lived everywhere. No more worrying about hard drive crashes or lost phones! But it pulled users even farther away from HyperCard’s spirit. The stack you made was yours; the SaaS you use lives on someone else’s servers. You can customize workflows, but you don’t own the software.

Why Modern Tools Fall Short

For what started out as a note-taking app, Notion has come a long way. With its kit of parts—pages, databases, tags, etc.—it’s highly configurable for tracking information. But you can’t make games with it. Nor can you really tell interactive stories (sure, you can link pages together). You also can’t distribute what you’ve created and share with the rest of the world. (Yes, you can create and sell Notion templates.)

No productivity software programs are malleable in the HyperCard sense. 

Screenshot of Macromedia Director.

Of course, there are specialized tools for creativity. Unreal Engine and Unity are great for making games. Director and Flash continued the tradition started by HyperCard—at least in the interactive media space—before they were supplanted by more complex HTML5, CSS, and JavaScript. Objectively, these authoring environments are more complex than HyperCard ever was.

The Web’s HyperCard DNA

In a fun remembrance, Constantine Frantzeskos writes:

HyperCard’s core idea was linking cards and information graphically. This was true hypertext before HTML. It’s no surprise that the first web pioneers drew direct inspiration from HyperCard – in fact, HyperCard influenced the creation of HTTP and the Web itself​. The idea of clicking a link to jump to another document? HyperCard had that in 1987 (albeit linking cards, not networked documents). The pointing finger cursor you see when hovering over a web link today? That was borrowed from HyperCard’s navigation cursor​.

Ted Nelson coined the terms “hypertext” and “hyperlink” in the mid-1960s, envisioning a world where digital documents could be linked together in nonlinear “trails”—making information interwoven and easily navigable. Bill Atkinson’s HyperCard was the first mass-market program that popularized this idea, even influencing Tim Berners-Lee, the father of the World Wide Web. Berners-Lee’s invention was about linking documents together on a server and linking to other documents on other servers. A web of documents.

Early ViolaWWW hypermedia browser from 1993, displaying a window with navigation buttons, URL bar, and hypertext description.

Early web browser from 1993, ViolaWWW, directly inspired by the concepts in HyperCard.

Pei-Yuan Wei, developer of ViolaWWW, one of the first web browsers, also drew direct inspiration from HyperCard. Matthew Lasar, writing for Ars Technica:

“HyperCard was very compelling back then, you know graphically, this hyperlink thing,” Wei later recalled. “I got a HyperCard manual and looked at it and just basically took the concepts and implemented them in X-windows,” which is a visual component of UNIX. The resulting browser, Viola, included HyperCard-like components: bookmarks, a history feature, tables, graphics. And, like HyperCard, it could run programs.

And of course, with the built-in source code viewer, browsers brought on a new generation of tinkerers who’d look at HTML and make stuff by copying, tweaking, and experimenting.

The Missing Ingredient: Personal Software

Today, we have low-code and no-code tools like Bubble for making web apps, Framer for building websites, and Zapier for automations. These tools are still aimed at professionals, though. With the possible exception of Zapier and IFTTT, they’ve expanded the number of people who can make software (including websites), but they’re not general purpose. They are all adjacent to what HyperCard was.

(Re)enter personal software.

In an essay titled “Personal software,” Lee Robinson wrote, “You wouldn’t search ‘best chrome extensions for note taking’. You would work with AI. In five minutes, you’d have something that works exactly how you want.”

Exploring the idea of “malleable software,” researchers at Ink & Switch wrote:

How can users tweak the existing tools they’ve installed, rather than just making new siloed applications? How can AI-generated tools compose with one another to build up larger workflows over shared data? And how can we let users take more direct, precise control over tweaking their software, without needing to resort to AI coding for even the tiniest change? None of these questions are addressed by products that generate a cloud-hosted application from a prompt.

Of course, AI prompt-to-code tools have been emerging this year, allowing anyone who can type to build web applications. However, if you study these tools more closely—Replit, Lovable, Base44, etc.—you’ll find that the audience is still technical people. Developers, product managers, and designers can understand what’s going on. But not everyday people.

These tools are still missing the ingredients that HyperCard had, the ones that put it in the general zeitgeist for a while and enabled users to be programmers again.

They are:

  • Direct manipulation
  • Technical abstraction
  • Local apps

What Today’s Tools Still Miss

Direct Manipulation

As I concluded in my exhaustive AI prompt-to-code tools roundup from April, “We need to be able to directly manipulate components by clicking and modifying shapes on the canvas or changing values in an inspector.” The latency of the round trip (prompting the model, waiting for it to think and generate code, then rebuilding the app) is much too long. If you don’t know how to code, every change takes minutes, so building something becomes tedious, not fun.

Tools need to be canvas-first, not chatbox-first. Imagine a kit of UI elements on the left that you can drag onto the canvas and then configure and style—not unlike WordPress page builders.

AI is there to do the work for you if you want, but you don’t need to use it.

Hand-drawn sketch of a modern HyperCard-like interface, with a canvas in the center, object palette on the left, and chat panel on the right.

My sketch of the layout of what a modern HyperCard successor could look like. A directly manipulatable canvas is in the center, object palette on the left, and AI chat panel on the right.
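To make that more concrete, here is a hedged sketch of the kind of document model such a canvas could read and write. Every name and field is invented for illustration; the point is that the canvas edits plain, inspectable data, and the AI panel stays optional:

// Hypothetical document model for a HyperCard-style canvas tool.
// All names here are invented; dragging an element from the palette
// just edits this data directly, with no prompt round trip required.
const stack = {
  name: "Study Cards",
  cards: [
    {
      id: "card-1",
      objects: [
        { type: "field", id: "question", text: "Capital of Vermont?", x: 40, y: 60 },
        { type: "button", id: "reveal", label: "Show answer", onClick: "go to card-2" },
      ],
    },
    {
      id: "card-2",
      objects: [
        { type: "field", id: "answer", text: "Montpelier", x: 40, y: 60 },
        { type: "button", id: "back", label: "Back", onClick: "go to card-1" },
      ],
    },
  ],
};

// Dropping an image from the palette appends another object to the card.
stack.cards[0].objects.push({ type: "image", id: "map", src: "vermont.png", x: 200, y: 60 });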

Technical Abstraction

For gen pop, I believe that these tools should hide away all the JavaScript, TypeScript, etc. The thing that the user is building should just work.

Additionally, there’s an argument to be made to bring back HyperTalk or something similar. Here is the same password logic I showed earlier, but in modern-day JavaScript:

const password = document.getElementById("Password").value;

if (password === "open sesame") {
  window.location.href = "secret.html";
} else {
  alert("Wrong password.");
} 

No one is going to understand that, much less write something like it.

One could argue that the user doesn’t need to understand that code since the AI will write it. Sure, but code is also documentation. If a user is working on an immersive puzzle game, they need to know the algorithm for the solution. 

As a side note, I think flow charts or node-based workflows are great. Unreal Engine’s Blueprints visual scripting is fantastic. Again, AI should be there to assist.

Unreal Engine Blueprints visual scripting interface, with node blocks connected by wires representing game logic.

Unreal Engine has a visual scripting interface called Blueprints, with node blocks connected by wires representing game logic.

Local Apps

HyperCard’s file format was the “stack,” and stacks could be compiled into applications that could be distributed without HyperCard. Today’s cloud-based AI coding tools can all publish a project to a unique URL for sharing. That’s great for prototyping and for personal use, but if you wanted to distribute it as shareware or donation-ware, you’d have to map it to a custom domain name, and purchasing one from a registrar and dealing with DNS records is not straightforward for most people.

What if these web apps can be turned into a single exchangeable file format like “.stack” or some such? Furthermore, what if they can be wrapped into executable apps via Electron?
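As a minimal sketch of the Electron idea, the wrapper could be as small as this. The “.stack” folder and its contents are hypothetical; the Electron APIs used (app, BrowserWindow, loadFile) are real:

// Tiny Electron main process that loads a local "stack" instead of a remote URL.
// The "my-app.stack" folder and its index.html are hypothetical placeholders.
const { app, BrowserWindow } = require("electron");
const path = require("path");

function createWindow() {
  const win = new BrowserWindow({ width: 1024, height: 768 });
  // Load the stack's entry point from the app bundle, so it runs fully offline.
  win.loadFile(path.join(__dirname, "my-app.stack", "index.html"));
}

app.whenReady().then(createWindow);
app.on("window-all-closed", () => app.quit());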

Rip, Mix, Burn

Lovable, v0, and others already have sharing and remixing built in. This ethos is great and builds on the philosophies of the hippie computer scientists. In addition to fostering a remix culture, I imagine a centralized store for these apps. Of course, those that are published as runtime apps can go through the official Apple and Google stores if they wish. Finally, nothing stops third-party stores, similar to the collections of stacks that used to be distributed on CD-ROMs.

AI as Collaborator, Not Interface

As mentioned, AI should not be the main UI for this. Instead, it’s a collaborator. It’s there if you want it. I imagine that it can help with scaffolding a project just by describing what you want to make. And as it’s shaping your app, it’s also explaining what it’s doing and why so that the user is learning and slowly becoming a programmer too.

Democratizing Programming

When my daughter was in middle school, she used a site called Quizlet to make flash cards to help her study for history tests. There were often user-generated sets of cards for certain subjects, but there were never sets specifically for her class, her teacher, that test. With this HyperCard of the future, she would be able to build something custom in minutes.

Likewise, a small business owner who runs an Etsy shop selling T-shirts can spin up something a little more complicated to analyze sales and compare against overall trends in the marketplace.

And that same Etsy shop owner could sell the little app they made to others wanting the same tool for their stores.

The Future Is Close

Scene from TRON showing a program with raised arms, looking upward at a floating disc in a beam of light.

Tron talks to his user, Alan Bradley, via a communication beam.

In an interview with Garry Tan of Y Combinator in June, Michael Truell, the CEO of Anysphere, which is the company behind Cursor, said his company’s mission is to “replace coding with something that’s much better.” He acknowledged that coding today is really complicated:

Coding requires editing millions of lines of esoteric formal programming languages. It requires doing lots and lots of labor to actually make things show up on the screen that are kind of simple to describe.

Truell believes that in five to ten years, making software will boil down to “defining how you want the software to work and how you want the software to look.”

In my opinion, his timeline is a bit conservative, but maybe he means for professionals. I wonder if something simpler will come along sooner that will capture the imagination of the public, like ChatGPT has. Something that will encourage playing and tinkering like HyperCard did.

A third TRON film is coming out soon—TRON: Ares. In a panel discussion in the 5,000-seat Hall H at San Diego Comic-Con earlier this summer, Steven Lisberger, the creator of the franchise, offered this warning about AI: “Let’s kick the technology around artistically before it kicks us around.” While he said it as a warning, I think it’s an opportunity as well.

AI opens up computer “programming” to a much larger swath of people—hell, everyone. As an industry, we should encourage tinkering by building such capabilities into our products. Not UIs on the fly, but mods as necessary. We should build platforms that increase the pool of users from technical people to everyday users like students, high school teachers, and grandmothers. We should imagine a world where software is as personalizable as a notebook—something you can write in, rearrange, and make your own. And maybe users can be programmers once again.

America by Design, Again

President Trump signed an executive order creating America by Design, a national initiative to improve the usability and design of federal services, both digital and physical. The order establishes a National Design Studio inside the White House and appoints Airbnb co-founder and RISD graduate Joe Gebbia as the first Chief Design Officer. The studio’s mandate: cut duplicative design costs, standardize experiences to build trust, and raise the quality of government services. Gebbia said he aims to make the U.S. “the most beautiful, and usable, country in the digital world.”

Ironically, this follows the gutting of the US Digital Service, left like a caterpillar consumed from within by parasitic wasp larvae, when it was turned into DOGE. And as part of the cutting of thousands from the federal workforce, 18F, the pioneering digital services agency that started in 2014, was eliminated.

Ethan Marcotte, the designer who literally wrote the book on responsive design and worked at 18F, had some thoughts. He points out that the announcement web page weighs in at over three megabytes: very heavy for a government page, and slow for the roughly 26 million people in the country unserved by broadband. On top of that, the page is full of typos and is an accessibility nightmare.

In other words, we’re left with a web page announcing a new era of design for the United States government, but it’s tremendously costly to download, and inaccessible to many. What I want to suggest is that neither of these things are accidents: they read to me as signals of intent; of how this administration intends to practice design.

The National Design Studio’s mission is to make using government services as easy as buying from the Apple Store. Marcotte’s insight is that designing for government—at scale, for nearly 350 million people—is very different from designing in the private sector. Coordination among agencies can take years.

Despite what this new “studio” would suggest, designing better government services didn’t involve smearing an animated flag and a few nice fonts across a website. It involved months, if not years, of work: establishing a regular cadence of user research and stakeholder interviews; building partnerships across different teams or agencies; working to understand the often vast complexity of the policy and technical problems involved; and much, much more. Judging by their mission statement, this “studio” confuses surface-level aesthetics with the real, substantive work of design.

Here’s the kicker:

There’s a long, brutal history of design under fascism, and specifically in the way aesthetics are used to define a single national identity. Dwell had a good feature on this in June…

The executive order also brought out some saltiness from Christopher Butler, who lays out the irony, and the waste.

The hubris of this appointment becomes clearer when viewed alongside the recent dismantling of 18F, the federal government’s existing design services office. Less than a year ago, Trump and Elon Musk’s DOGE initiative completely eviscerated this team, which was modeled after the UK’s Government Digital Service and comprised hundreds of design practitioners with deep expertise in government systems. Many of us likely knew someone at 18F. We knew how much value they offered the country. The people in charge didn’t understand what they did and didn’t care.

In other words, we were already doing what Gebbia claims he’ll accomplish in three years. The 18F team had years of experience navigating federal bureaucracy, understanding regulatory constraints, and working within existing governmental structures—precisely the institutional knowledge required for meaningful reform.

Butler knew Joe Gebbia, the appointed Chief Design Officer, in college and calls out his track record in government, or lack thereof.

Full disclosure: I attended college with Joe Gebbia and quickly formed negative impressions of his character that subsequent events have only reinforced.

While personal history colors perspective, the substantive concerns about this appointment stand independently: the mismatch between promised expertise and demonstrated capabilities, the destruction of existing institutional knowledge, the unrealistic timeline claims, and the predictable potential for conflicts of interest.

Government design reform is important work that requires deep expertise, institutional knowledge, and genuine commitment to public service. It deserves leaders with proven track records in complex systems design, not entrepreneurs whose primary experience involves circumventing existing regulations for private gain.

If anything, this is yet another illustration of this administration’s incompetence.

Surreal black-and-white artwork of a glowing spiral galaxy dripping paint-like streaks over a city skyline at night.

Why I’m Keeping My Design Title

In the 2011 documentary Jiro Dreams of Sushi, then-85-year-old sushi master Jiro Ono says this about craft:

Once you decide on your occupation… you must immerse yourself in your work. You have to fall in love with your work. Never complain about your job. You must dedicate your life to mastering your skill. That’s the secret of success and is the key to being regarded honorably.

Craft is typically thought of as the formal aspects of any field such as design, woodworking, writing, or cooking. In design, we think about composition, spacing, and typography—being pixel-perfect. But one’s craft is much more than that. Ono’s sushi craft is not solely about slicing fish and pressing it against a bit of rice. It is also about picking the right fish, toasting the nori just so, cooking the rice perfectly, and running a restaurant. It’s the whole thing.

Therefore, mastering design—or any occupation—takes time and experience, or reps, as the kids say. So it’s to my dismay that Suff Syed’s essay “Why I’m Giving Up My Design Title — And What That Says About the Future of Design” got so much play in recent weeks. Syed is Head of Product Design at Microsoft—er, was. I guess his title is now Member of the Technical Staff. In a perfectly well-argued and well-written essay, he concludes:

That’s why I’m switching careers. From Head of Product Design to Member of Technical Staff.

This isn’t a farewell to experience, clarity, or elegance. It’s a return to first principles. I want to get closer to the metal—to shape the primitives, models, and agents that will define how tomorrow’s software is built.

We need more people at the intersection. Builders who understand agentic flows and elevated experiences. Designers who can reason about trust boundaries and token windows. Researchers who can make complex systems usable—without dumbing them down to a chat interface.

In the 2,800 words preceding the above quote, Syed lays out a five-point argument: the paradigm for software is shifting to agentic AI, design doesn’t drive innovation, fewer design leaders will be needed, design is being commoditized, and there’s a pay gap. The tl;dr: design as a profession is dead, and building with AI is where it’s at.

With respect to Mr. Syed, I call bullshit. 

Let’s discuss each of his arguments.

The Paradigm Argument

Suff Syed:

The entire traditional role of product designers, creating static UI in Silicon Valley offices that work for billions of users, is becoming increasingly irrelevant; when the Agent can simply generate the UI it needs for every single user.

That’s a very narrow view of what user experience designers do. In this diagram by Dan Saffer from 2008, UX encircles a large swath of disciplines. It’s a little older so it doesn’t cover newer disciplines like service design or AI design.

Diagram titled The Disciplines of UX showing overlapping circles of fields like Industrial Design, Human Factors, Communication Design, and Architecture. The central green overlap highlights Interaction Design, surrounded by related areas such as usability engineering, information architecture, motion design, application design, and human-computer interaction.

Originally made by envis precisely GmbH - www.envis-precisely.com, based on “The Disciplines of UX” by Dan Saffer (2008). (PDF)

I went to design school a long time ago, graduating in 1995. But even back then, in Graphic Design 2, graphic design wasn’t just print design. Our final project for that semester was to design an exhibit, something that humans could walk through. I’ve long since lost the physical model, but my solution was inspired by the Golden Gate Bridge and the impression I had of its main cables as welcoming arms as you drove across. My exhibit was a 20-foot-tall open structure made of copper beams and a glass roof. Etched onto the roof was a poem—by whom I can’t recall—that would cast the shadows of its letters onto the ground, creating an experience for anyone walking through the structure.

Similarly, thoughtful product designers consider the full experience, not just what’s rendered on the screen. What’s onboarding like? What’s the interaction with customer service like? And with techniques like contextual inquiry, we care about the environments users are in. Understanding that nurses in a hospital work in a very busy setting and share computers is the kind of insight that can’t be gleaned from desk research or general knowledge. Designers are students of life and observers of human behavior.

Syed again:

Agents offer a radical alternative by placing control directly into users’ hands. Instead of navigating through endless interfaces, finding a good Airbnb could be as simple as having a conversation with an AI agent. The UI could be generated on the fly, tailored specifically to your preferences; an N:1 model. No more clicking around, no endless tabs, no frustration.

I don’t know. I have my doubts that this is actually going to be the future. While I agree that agentic workflows will be game-changing, I disagree that the chat UI is the only one for all use cases or even most scenarios. I’ve previously discussed the disadvantages of prompting-only workflows and how professionals need more control. 

I also disagree that users will want UIs generated on the fly. Think about the avalanche of support calls and how insane those will be if every user’s interface is different!

In my experience, users—including myself—like to spend the time to set up their software for efficiency. For example, in a dual-monitor setup, I used to expose all of Photoshop’s palettes and put them in the smaller display, and the main canvas on the larger one. Every time I got a new computer or new monitor, I would import that workspace so I could work efficiently. 

Habit and muscle memory are underrated. Once a user has invested the time to arrange panels, tools, and shortcuts the way they like, changing it frequently adds friction. For productivity and work software, consistency often outweighs optimization. Even if a specialized AI-made-for-you workspace could be more “optimal” for a task, switching disrupts the user’s mental model and motor memory.

I want to provide one more example because it’s in the news: consider the backlash that OpenAI has faced in the past week with their rollout of GPT-5. OpenAI assumed people would simply welcome “the next model up,” but what they underestimated was the depth of attachment to existing workflows, and in some cases, to the personas of the models themselves. As Casey Newton put it, “it feels different and stronger than the kinds of attachment people have had to previous kinds of technology.” It’s evidence of how much emotional and cognitive investment users pour into the tools they depend on. You can’t just rip that foundation away without warning. 

Which brings us back to the heart of design: respect for the user. Not just their immediate preferences, but the habits, muscle memory, and yes, relationships that accumulate over time. Agents may generate UIs on the fly, but if they ignore the human need for continuity and control, they’ll stumble into the same backlash OpenAI faced.

The Innovation Argument

Syed’s second argument is that design supports innovation rather than drives it. I half agree with this. If we’re talking about patents or inventions, sure. Technology will always win the day. But design can certainly drive innovation.

He cites Airbnb, Figma, Notion, and Linear as being “incredible companies with design founders,” but only Airbnb is a Fortune 500 company. 

While they weren’t founded by designers, I don’t think anyone would argue that Apple, Nike, Tesla, and Disney aren’t design-led and innovative. All are in the Fortune 500. Disney treats experience design, which includes its parks, media, and consumer products, as a core capability. Imagineering is a literal design R&D division that shapes the company’s most profitable experiences. Look up Lanny Smoot.

Early prototypes of the iPhone featuring the first multitouch screens were actually tablet-sized. But Apple’s industrial design team, led by Jony Ive, along with the hardware engineering team, got the form factor to fit nicely in one hand. And it was Bas Ording, the UI designer behind Mac OS X’s Aqua design language, who prototyped inertial scrolling. Farhad Manjoo, writing in Slate in 2012:

Jonathan Ive, Apple’s chief designer, had been investigating a technology that he thought could do wonderful things someday—a touch display that could understand taps from multiple fingers at once. (Note that Apple did not invent multitouch interfaces; it was one of several companies investigating the technology at the time.) According to Isaacson’s biography, the company’s initial plan was to use the new touch system to build a tablet computer. Apple’s tablet project began in 2003—seven years before the iPad went on sale—but as it progressed, it dawned on executives that multitouch might work on phones. At one meeting in 2004, Jobs and his team looked at a prototype tablet that displayed a list of contacts. “You could tap on the contact and it would slide over and show you the information,” Forstall testified. “It was just amazing.”

Jobs himself was particularly taken by two features that Bas Ording, a talented user-interface designer, had built into the tablet prototype. One was “inertial scrolling”—when you flick at a list of items on the screen, the list moves as a function of how fast you swipe, and then it comes to rest slowly, as if being affected by real-world inertia. Another was the “rubber-band effect,” which causes a list to bounce against the edge of the screen when there were no more items to display. When Jobs saw the prototype, he thought, “My god, we can build a phone out of this,” he told the D Conference in 2010.

The Leadership Argument

Suff Syed’s third argument is about what it means to be a design leader. He says, “scaling your impact as a designer meant scaling the surfaces you influence.” As you rose up through the ranks, “your craft was increasingly displaced by coordination. You became a negotiator, a timeline manager, a translator of ambition through Product and Engineering partnerships.”

Instead, he argues that because AI lets you build with far fewer people, you barely need a team at all: “You need two people: one who understands systems and one who understands the user. Better if they’re the same person.”

That doesn’t scale. Don’t tell me that Microsoft, a company with $281 billion in revenue and 228,000 employees, will shrink like a stellar collapse into a single person with an army of AIs. That’s magical thinking.

Leaders are still needed. Influence and coordination are still needed. Humans will still be needed.

He ends this argument with:

This new world despises a calendar full of reviews, design crits, review meetings, and 1:1s. It emphasizes a repo with commits that matter. And promises the joy of shipping to return to your work. That joy unmediated by PowerPoint, politics, or process. That’s not a demotion. That’s liberation.

So he wants us all to sit in our home offices and not collaborate with others? Innovations no longer come from lone geniuses. They’re born from bouncing ideas off your coworkers and everyone building on each other’s ideas.

Friction in the process can actually make things better. Pixar famously has a council known as the Braintrust—a small, rotating group of the studio’s best storytellers who meet regularly to tear down and rebuild works-in-progress. The rules are simple: no mandatory fixes, no sugarcoating, and no egos. The point is to push the director to see the story’s problems more clearly—and to own the solution. One of the most famous saves came with Toy Story 2. Originally destined for direct-to-video release, the film’s early cuts were so flat that the Braintrust urged the team to start from scratch. Nine frantic months later, it emerged as one of Pixar’s most beloved works, proof that constructive creative friction can turn a near-disaster into a classic.

The Distribution Argument

Design taste has been democratized and is table stakes, says Syed in his next argument.

There was a time when every new Y Combinator startup looked like someone tortured an intern into generating a logo using Clipart. Today, thanks to a generation of exposure to good design—and better tools—most founders have internalized the basics of aesthetic judgment. First impressions matter, and now, they’re trivial to get right.

He adds that templates, libraries, and frameworks make it quick and easy to spin up something tasteful in minutes:

Component libraries like Tailwind, shadcn/ui, and Radix have collapsed the design stack. What once required a full design team handcrafting a system in Figma, exporting specs to Storybook, and obsessively QA-ing the front-end… now takes a few lines of code. Spin up a repo. Drop in some components. Tweak the palette. Ship something that looks eerily close to Linear or Notion in a weekend.
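
He’s describing something real here, mechanically. As a rough sketch, assuming a React project with Tailwind and shadcn/ui already configured (the import paths and component props below follow shadcn’s generated conventions, and the copy is made up), the “drop in some components” step might look like this:

```tsx
// Illustrative only: a landing-page hero assembled from shadcn/ui primitives.
// Assumes the Button and Card components have been added via the shadcn CLI,
// so the "@/components/ui/*" aliases exist in a React + Tailwind project.
import { Button } from "@/components/ui/button";
import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card";

export default function Hero() {
  return (
    <Card className="mx-auto mt-24 max-w-md text-center">
      <CardHeader>
        <CardTitle className="text-2xl font-semibold tracking-tight">
          Plan, track, and ship
        </CardTitle>
      </CardHeader>
      <CardContent className="flex flex-col gap-4">
        <p className="text-sm text-muted-foreground">
          Drop in components, tweak the palette, deploy by Sunday.
        </p>
        <Button size="lg">Get started</Button>
      </CardContent>
    </Card>
  );
}
```

It does get you something presentable in a weekend. Whether that counts as design is the question.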

I’m starting to think that Suff Syed believes that designers are just painters or something. Wow. This whole argument is reductive, flattening our role to be only about aesthetics. See above for how much design actually entails.

The Wealth Argument

“Nobody is paying Designers $10M, let alone $100M anytime soon.” Ah, I think this is him saying the quiet part out loud. Mr. Syed is dropping his design title and becoming a “member of the technical staff” because he’s chasing the money.

He’s right. No one is going to pay a designer a $100 million total comp package. Unless you’re Jony Ive and part of io, which OpenAI acquired for $6.5 billion back in May. That’s a rare and likely once-ever occurrence.

In a recent episode of Hard Fork, The New York Times tech columnist Kevin Roose said:

The scale of money and investment going into these AI systems is unlike anything we’ve ever seen before in the tech industry. …I heard a rumor there was a big company that wasted a billion dollars or more on a failed training run. And then you start to think, oh, I understand why, to a company like Meta, the right AI talent is worth a hundred million dollars, because that level of expertise doesn’t exist that widely outside of this very small group of people. And if this person does their job well, they can save your company something more like a billion dollars. And maybe that means that you should pay them a hundred million dollars.

“Very small group of people” is likely just a couple dozen people in the world who have this expertise and are each worth tens of millions of dollars.

Syed again:

People are getting generationally wealthy inventing new agentic abstractions, compressing inference cycles, and scaling frontier models safely. That’s where the gravity is. That’s where anybody should aspire to be. With AI enabling and augmenting you as an individual, there’s a far more compelling reason to chase this frontier. No reason not to.

People also get generationally wealthy by hitting the startup lottery. But it’s a hard road and there’s a lot of luck involved.

The current AI frenzy feels a lot like 1849 in California. Back then, roughly 300,000 people flooded the Sierra Nevada mountains hoping to strike gold, but the math was brutal: maybe 10% made any profit at all, the top 4% earned enough to brag a little, and only about 1% became truly rich. The rest? They left with sore backs, empty pockets, and I guess some good stories. 

Back to Reality

AI is already changing the software industry. As designers and builders of software, we are going to be using AI as material. This is as obvious as when the App Store on iPhone debuted and everyone needed to build apps.

Suff Syed wrote his piece as part personal journey and decision-making, part rallying cry to other designers. He is essentially switching careers and says that it won’t be easy.

This transition isn’t about abandoning one identity for another. It’s about evolving—unlearning what no longer serves us and embracing the disciplines that will shape the future. There’s a new skill tree ahead: model internals, agent architectures, memory hierarchies, prompt flows, evaluation loops, and infrastructure that determines how products think, behave, and scale.

Best of luck to Suff Syed on his journey. I hope he strikes AI gold. 

As for me, I aim to continue on my journey of being a shokunin, or craftsman, like Jiro Ono. For over 30 years—if you count my amateur days in front of the Mac in middle school—I’ve been designing. Not just pushing pixels in Photoshop or Figma, but doing the work of understanding audiences and users, solving business problems, inventing new interaction patterns, and advocating for usability. All in the service of the user, and all while honing my craft.

That craft isn’t tied to a technology stack or a job title. It’s a discipline, a mindset, and a lifetime’s work. Being a designer is my life. 

So no, I’m not giving up my design title. It’s not a relic—it’s a commitment. And in a world chasing the next gold rush, I’d rather keep making work worth coming back to, knowing that in the end, gold fades but mastery endures. Besides, if I ever do get rich, it’ll be because I designed something great, not because I happened to be standing near a gold mine.

Illustration of diverse designers collaborating around a table with laptops and design materials, rendered in a vibrant style with coral, yellow, and teal colors

Five Practical Strategies for Entry-Level Designers in the AI Era

In Part I of this series on the design talent crisis, I wrote about the struggles recent grads have had finding entry-level design jobs and what might be causing the stranglehold on the design job market. In Part II, I discussed how industry and education need to change in order to ensure the survival of the profession.

Part III: Adaptation Through Action

Like most Gen X kids, I grew up with a lot of freedom to roam. By fifth grade, I was regularly out of the house. My friends and I would go to an arcade in San Francisco’s Fisherman’s Wharf called The Doghouse, where naturally, they served hot dogs alongside their Joust and TRON cabinets. But we would invariably go to the Taco Bell across the street for cheap pre-dinner eats. In seventh grade—this is 1986—I walked by a ComputerLand on Van Ness Avenue and noticed a little beige computer with a built-in black and white CRT. The Macintosh screen was actually pale blue and black, but more importantly, showed MacPaint. It was my first exposure to creating graphics on a computer, which would eventually become my career.

Desktop publishing had officially begun a year earlier with the introduction of Aldus PageMaker and the Apple LaserWriter printer for the Mac, which enabled WYSIWYG (What You See Is What You Get) page layouts and high-quality printed output. A generation of designers who had created layouts using paste-up techniques with tools and materials like X-Acto knives, Rapidograph pens, rubyliths, photostats, and rubber cement had to start learning new skills. Typesetters would eventually be phased out in favor of QuarkXPress. A decade of transition would revolutionize the industry, only to be upended again by the web.

Many designers who made the jump from paste-up to desktop publishing couldn’t make the additional leap to HTML. They stayed graphic designers, and a new generation of web designers emerged. I think those who were in my generation—those who started in the waning days of analog and the early days of DTP—were able to make that transition.

We are in the midst of yet another transition: to AI-augmented design. It’s so early that no one can say anything with absolute authority. Any so-called experts have been working with AI tools and AI UX patterns for maybe two years, maximum. (Caveat: the science of AI has been around for many decades, but using these new tools and techniques, and developing UX patterns for interacting with them, is entirely new.)

It’s obvious that AI is changing not only the design industry, but nearly all industries. The transformation is having secondary effects on the job market, especially for entry-level talent like young designers.

The AI revolution mirrors the previous shifts in our industry, but with a crucial difference: it’s bigger and faster. Unlike the decade-long transitions from paste-up to desktop publishing and from print to web, AI’s impact is compressing adaptation timelines into months rather than years. For today’s design graduates facing the harsh reality documented in Part I and Part II—where entry-level positions have virtually disappeared and traditional apprenticeship pathways have been severed—understanding this historical context isn’t just academic. It’s reality for them. For some, adaptation is possible but requires deliberate strategy. The designers who will thrive aren’t necessarily those with the most polished portfolios or prestigious degrees, but those who can read the moment, position themselves strategically, and create their own pathways into an industry in tremendous flux.

As a designer who is entering the workforce, here are five practical strategies you can employ right now to increase your odds of landing a job in this market:

  1. Leverage AI literacy as a competitive differentiator
  2. Emphasize strategic thinking and systems thinking
  3. Become a “dangerous generalist”
  4. Explore alternative pathways and flexibility
  5. Connect with community

1. AI Literacy as Competitive Differentiator

Young designer orchestrating multiple AI tools on screens, with floating platform icons representing various AI tools.

Just as Leah Ray, a recent graphic design MFA graduate from CCA, has deeply incorporated AI into her workflow, you have to get comfortable with some of the tools. (See her story in Part II for more context.)

Be proficient in the following categories of AI tools:

  • Chatbot: Choose ChatGPT, Claude, or Gemini. Learn about how to write prompts. You can even use the chatbot to learn how to write prompts! Use it as a creative partner to bounce ideas off of and to do some initial research for you.
  • Image generator: Adobe Firefly, DALL-E, Gemini, Midjourney, or Visual Electric. Learn how to use at least one of these, but more importantly, figure out how to get consistently good results from these generators.
  • Website/web app generator: Figma Make, Lovable, or v0. Especially if you’re in an interaction design field, you’ll need to be proficient in an AI prompt-to-code tool.

Add these skills to your resume and LinkedIn profile. Share your experiments on social media. 

But being AI-literate goes beyond just the tools. It’s also about wielding AI as a design material. Here’s the good part: by getting proficient in the tools, you’re also learning about the UX patterns for AI and learning what is possible with AI technologies like LLMs, agents, and diffusion models.

I’ve linked to a number of articles about designing for AI use cases:

Have a basic understanding of the following:

Be sure to add at least one case study in your portfolio that incorporates an AI feature.

2. Strategic Thinking and Systems Thinking

Designer pointing at an interconnected web diagram showing how design decisions create ripple effects through business systems.

Stunts like AI CEOs notwithstanding, companies don’t trust AI enough to cede strategy to it. LLMs are notoriously bad at longer tasks that contain multiple steps. So thinking about strategy and designing a coherent system are still very much human activities.

Systems thinking—the ability to understand how different parts of a system interact and how changes in one component can create cascading effects throughout the entire system—is becoming essential for tech careers and especially designers. The World Economic Forum’s Future of Jobs Report 2025 identifies it as one of the critical skills alongside AI and big data. 

Modern technology is incredibly interconnected. AI can optimize individual elements, but it can’t see the bigger picture—how a pricing change affects user retention, how a new feature impacts server costs, or why your B2B customers need different onboarding than consumers. 

Early-career lawyers at the firm Macfarlanes are now interpreting complex contracts that used to be reserved for more senior colleagues. While AI can extract key info from contracts and flag potential issues, humans are still needed to understand the context, implications, and strategic considerations. 

Emphasize these skills in your case studies by presenting clear, logical arguments that lead to strategic insights and systemic solutions. Frame every project through a business lens. Show how your design decisions ladder up to company, brand, or product metrics. Include the downstream effects—not just the immediate impact.

3. The “Dangerous Generalist” Advantage

Multi-armed designer like an octopus, each arm holding different design tools including research, strategy, prototypes, and presentations.

Josh Silverman, professor at CCA and also a design coach and recruiter, has an idea he calls the “dangerous generalist.” This is the unicorn designer who can “do the research, the strategy, the prototyping, the visual design, the presentation, and the storytelling; and be a leader and make a measurable impact.” 

It’s a lot and seemingly unfair to expect that out of one person, but for a young and hungry designer with the right training and ambition, I think it’s possible. Other than leadership and making quantitative impact, all of those traits would have been practiced and honed at a good design program. 

Be sure to have a variety of projects in your portfolio to showcase how you can do it all.

4. Alternative Pathways and Flexibility

Designer navigating a maze of career paths with signposts directing to startups, nonprofits, UI developer, and product manager roles.

Matt Ström-Awn, in an excellent piece about the product design talent crisis published last Thursday, did some research and says that in “over 600 product design listings, only 1% were for internships, and only 5% required 2 years or less of experience.”

Those are some dismal numbers for anyone trying to get a full-time job with little design experience. So you have to try creative ways of breaking into the industry. In other words, don’t get stuck on only applying for junior-level jobs on LinkedIn. Do that but do more.

Let’s break this down by type of company and type of role.

Types of Companies

Historically, I would have recommended that any new designer go to an agency first because agencies usually have the infrastructure to mentor entry-level workers. But, as those jobs have dried up, consider these types of companies.

  • Early-stage startups: Look for seed-stage or Series A startups. Companies who have just raised their Series A will make a big announcement, so they should be easy to target. Note that you will often be the only designer in the company, so you’ll be doing a lot of learning on the job. If this happens, remember to find community (see below).
  • Non-tech businesses: Legacy industries might be a lot slower to think about AI, much less adopt it. Focus on sectors where human touch, tradition, regulations, or analog processes dominate. These fields need design expertise, especially as many are just starting to modernize and may require digital transformation, improved branding, or modernized UX.
  • Nonprofits: With limited budgets and small teams, nonprofits and not-for-profits could be great places to work. While they tend to try to DIY everything, they will also recognize the need for designers. Organizations that invest in design are 50% more likely to see increases in fundraising revenue.

Types of Roles

In his post for UX Collective, Patrick Morgan says, “Sometimes the smartest move isn’t aiming straight for a ‘product designer’ title, but stepping into a role where you can stay close to product and grow into the craft.”

In other words, look for adjacent roles at the company you want to work for, just to get your foot in the door.

Here are some of those roles, including ones from Morgan’s list. What is appropriate for you will depend heavily on your skill sets and the type of design you want to eventually practice.

  • UI developer or front-end engineer: If you’re technically minded, front-end engineering is still sought after, though maybe not as much as before because, you know, AI. But if you’re able to snag a spot, it’s a way in.
  • Product manager: There is no single path to becoming a product manager. It’s a lot of the same skills a good designer should have, but with even more focus on creating strategies that come from customer insights (aka research). I’ve seen designers move into PM roles pretty easily.
  • Graphic/visual/growth/marketing designer: Again, depending on your design focus, you could already be looking for these jobs. But if you’re in UX and you see one of these roles open up, it’s another way into a company.
  • Production artist: These roles are likely slowly disappearing as well. This is usually a role at an agency or a larger company, focused purely on design execution.
  • Freelancer: You may already be doing this, but you can freelance. Not all companies, especially small ones, can afford a full-time designer, so they rely on freelance help. Try your hand at Upwork to build up your portfolio. Ensure that you’re charging a price that seems fair to you and that will help pay your bills.
  • Executive assistant: While this might seem odd, this is a good way to learn about a company and to show your resourcefulness. Lots of EAs are responsible for putting together events, swag, and more. Eventually, you might be able to parlay this role into a design role.
  • Intern: Internships are rare, I know. And if you haven’t done one, you should. However, ensure that the company complies with local regulations about paying interns. For example, California has strict laws about paying interns at least minimum wage. Unpaid internships are legal only if the role meets a litany of criteria including:
  • The internship is primarily educational (similar to a school or training program).
  • The intern is the “primary beneficiary,” not the company.
  • The internship does not replace paid employees or provide substantial benefit to the employer.

5. Connecting with Community

Diverse designers in a supportive network circle, connected both in-person and digitally, with glowing threads showing mentorship relationships.

The job search is isolating. Especially now.

Josh Silverman emphasizes something often overlooked: you’re already part of communities. “Consider all the communities you identify with, as well as all the identities that are a part of you,” he points out. Think beyond LinkedIn—way beyond.

Did you volunteer at a design conference? Help a nonprofit with their rebrand? Those connections matter. Silverman suggests reaching out to three to five people—not hiring managers, but people who understand your work. Former classmates who graduated ahead of you. Designers you met at meetups. Workshop leaders.

“Whether it’s a casual coffee chat or slightly more informal informational interview, there are people who would welcome seeing your name pop up on their screen.”

These conversations aren’t always about immediate job leads. They’re about understanding where the industry’s actually heading, which companies are genuinely hiring, and what skills truly matter versus what’s in job descriptions. As Silverman notes, it’s about creating space to listen and articulate what you need—“nurturing relationships in community will have longer-term benefits.”

In practice: Join alumni Slack channels, participate in local AIGA events, contribute to open-source projects, engage in design challenges. The designers landing jobs aren’t just those with perfect portfolios. They’re the ones who stay visible.

The Path Forward Requires Adaptation, Not Despair

My 12-year-old self would be astonished at what the world is today and how this profession has evolved. I’ve been through three revolutions. Traditional to desktop publishing. Print to web. And now, human-only design to AI-augmented design.

Here’s what I know: the designers who survived those transitions weren’t necessarily the most talented. They were the most adaptable. They read the moment, learned the tools, and—crucially—didn’t wait for permission to reinvent themselves.

This transition is different. It’s faster and much more brutal to entry-level designers.

But you have advantages my generation didn’t. AI tools are accessible in ways that PageMaker and HTML never were. We had to learn through books! We learned by copying. We learned by taking weeks to craft projects. You can chat with Lovable and prompt your way to a portfolio-worthy project over a weekend. You can generate production-ready assets with Midjourney before lunch. You can prototype and test five different design directions while your coffee’s still warm.

The traditional path—degree, internship, junior role, slow climb up the ladder—is broken. Maybe permanently. But that also means the floor is being raised. You should be working on more strategic and more meaningful work earlier in your career.

But you need to be dangerous, versatile, and visible. 

The companies that will hire you might not be the ones you dreamed about in design school. The role might not have “designer” in the title. Your first year might be messier than you planned.

That’s OK. Every designer I respect has a messy and unlikely origin story.

The industry will stabilize because it always does. New expectations will emerge, new roles will be created, and yes—companies will realize they still need human designers who understand context, culture, and why that button should definitely not be bright purple.

Until then? Be the designer who ships. Who shows up. Who adapts.

The machines can’t do that. Yet.


I hope you enjoyed this series. I think it’s an important topic to discuss in our industry right now, before it’s too late. Don’t forget to read about the five grads and five educators I interviewed for the series. Please reach out if you have any comments, positive or negative. I’d love to hear them.

Portraits of five recent design graduates. From top left to right: Ashton Landis, wearing a black sleeveless top with long blonde hair against a dark background; Erika Kim, outdoors in front of a mountain at sunset, smiling in a fleece-collared jacket; Emma Haines, smiling and looking over her shoulder in a light blazer, outdoors; Bottom row, left to right: Leah Ray, in a black-and-white portrait wearing a black turtleneck, looking ahead, Benedict Allen, smiling in a black jacket with layered necklaces against a light background

Meet the 5 Recent Design Grads and 5 Design Educators

For my series on the Design Talent Crisis (see Part I, Part II, and Part III), I interviewed five recent graduates from California College of the Arts (CCA) and San Diego City College. I’m an alum of CCA and I used to teach at SDCC. There’s a mix of folks from both the graphic design and interaction design disciplines.

Meet the Grads

If these enthusiastic and immensely talented designers are available and you’re in a position to hire, please reach out to them!

Benedict Allen

Benedict Allen, smiling in a black jacket with layered necklaces against a light background

Benedict Allen is a Los Angeles-based visual designer who specializes in creating compelling visual identities at the intersection of design, culture, and storytelling. With a strong background in apparel graphics and branding, Benedict brings experience from his freelance work for The Hunt and Company—designing for a major automotive YouTuber’s clothing line—and an internship at Pureboost Energy Drink Mix. He is skilled in a range of creative tools including Photoshop, Illustrator, Figma, and AI image generation. Benedict’s approach is rooted in history and narrative, resulting in clever and resonant design solutions. He holds an Associate of Arts in Graphic Design from San Diego City College and has contributed to the design community through volunteer work with AIGA San Diego Tijuana.

Emma Haines

Emma Haines, smiling and looking over her shoulder in a light blazer, outdoors

Emma Haines is a UX and interaction designer with a background in computer science, currently completing her MDes in Interaction Design at California College of the Arts. She brings technical expertise and a passion for human-centered design to her work, with hands-on experience in integrating AI into both the design process and user-facing projects. Emma has held roles at Mphasis, Blink UX, and Colorado State University, and is now seeking full-time opportunities where she can apply her skills in UX, UI, or product design, particularly in collaborative, fast-paced environments.

Erika Kim

Erika Kim, outdoors in front of a mountain at sunset, smiling in a fleece-collared jacket

Erika Kim is a passionate UI/UX and product designer based in Poway, California, with a strong foundation in both visual communication and thoughtful problem-solving. A recent graduate of San Diego City College’s Interaction & Graphic Design program, Erika has gained hands-on experience through internships at TritonNav, Four Fin Creative, and My Rental Spot, as well as a year at Apple in operations and customer service roles. Her work has earned her recognition, including a Gold Winner award at The One Club Student Awards for her project “Gatcha Eats.” Erika’s approach to design emphasizes clarity, collaboration, and the power of well-crafted wayfinding—a passion inspired by her fascination with city and airport signage. She is fluent in English and Korean, and is currently open to new opportunities in user experience and product design.

Ashton Landis

Ashton Landis, wearing a black sleeveless top with long blonde hair against a dark background

Ashton Landis is a San Francisco-based graphic designer with a passion for branding, typography, and visual storytelling. A recent graduate of California College of the Arts with a BFA in Graphic Design and a minor in ecological practices, Ashton has developed expertise across branding, UI/UX, design strategy, environmental graphics, and more. She brings a people-centered approach to her work, drawing on her background in photography to create impactful and engaging design solutions. Ashton’s experience includes collaborating with Bay Area non-profits to build participatory identity systems and improve community engagement. She is now seeking new opportunities to grow and help brands make a meaningful impact.

Leah Ray

Leah Ray, in a black-and-white portrait wearing a black turtleneck, looking ahead

Leah (Xiayi Lei) Ray is a Beijing-based visual designer currently working at Kuaishou Technology, with a strong background in impactful graphic design that blends logic and creativity. She holds an MFA in Design and Visual Communications from California College of the Arts, where she also contributed as a teaching assistant and poster designer. Leah’s experience spans freelance work in branding, identity, and book cover design, as well as roles in UI/UX and visual development at various companies. She is fluent in English and Mandarin, passionate about education, arts, and culture, and is recognized for her thoughtful, novel approach to design.

Meet the Educators

Sean Bacon

Sean Bacon, smiling in a light button-down against a blue-gray background

Sean Bacon is a professor, passionate designer, and obsessive typophile at San Diego City College, where he teaches a wide range of classes and helps direct and manage the graphic design program and its administrative responsibilities. He always strives to bring excellence to his students’ work, and he brings his wealth of experience and insight to help produce many of the award-winning portfolios from City. He has worked for The Daily Aztec, Jonathan Segal Architecture, Parallax Visual Communication, and Silent Partner. He attended San Diego City College and San Diego State, and completed his master’s at Savannah College of Art and Design.

Eric Heiman

Eric Heiman, in profile wearing a flat cap and glasses, black and white photo

Eric Heiman is principal and co-founder of the award-winning, oft-exhibited design studio Volume Inc. He also teaches at California College of the Arts (CCA) where he currently manages TBD*, a student-staffed design studio creating work to help local Bay Area nonprofits and civic institutions. Eric also writes about design every so often, has curated one film festival, occasionally podcasts about classic literature, and was recently made an AIGA Fellow for his contribution to raising the standards of excellence in practice and conduct within the Bay Area design community. 

Elena Pacenti

Portrait of Elena Pacenti, smiling with long blonde hair, wearing a black top, in soft natural light.

Elena Pacenti is a seasoned design expert with over thirty years of experience in design education, research, and international projects. Currently the Director of the MDes Interaction Design program at California College of the Arts, she has previously held leadership roles at NewSchool of Architecture & Design and Domus Academy, focusing on curriculum development, faculty management, and strategic planning. Elena holds a PhD in Industrial Design and a Master’s in Architecture from Politecnico di Milano, and is recognized for her expertise in service design, strategic design, and user experience. She is passionate about leading innovative, complex projects where design plays a central role.

Bradford Prairie

Bradford Prairie, smiling in a jacket and button-down against a soft purple background

Bradford Prairie has been teaching at San Diego City College for nine years, starting as an adjunct instructor while simultaneously working as a professional designer and creative director at Ignyte, a leading branding agency. What made his transition unique was Ignyte’s support for his educational aspirations—they understood his desire to prioritize teaching and eventually move into it full-time. This dual background in industry and academia allows him to bring real-world expertise into the classroom while maintaining his creative practice.

Josh Silverman

Josh Silverman, smiling in a striped shirt against a dark background

For three decades, Josh Silverman has built bridges between entrepreneurship, design education, and designers—always focused on helping people find purpose and opportunity. As founder of PeopleWork Partners, he brings a humane design lens to recruiting and leadership coaching, placing emerging leaders at companies like Target, Netflix, and OpenAI, and advising design teams on critique, culture, and operations. He has chaired the MDes program at California College of the Arts, taught and spoken worldwide, and led AIGA chapters. Earlier, he founded Schwadesign, a lean, holacratic studio recognized by The Wall Street Journal and others. His clients span startups, global enterprises, top universities, cities, and non-profits. Josh is endlessly curious about how teams make decisions and what makes them thrive—and is always up for a long bike ride.

Human chain of designers supporting each other to reach laptops and design tools floating above them, illustrating collaborative mentorship and knowledge transfer in the design industry.

Why Young Designers Are the Antidote to AI Automation

In Part I of this series, I wrote about the struggles recent grads have had finding entry-level design jobs and what might be causing the stranglehold on the design job market.

Part II: Building New Ladders

When I met Benedict Allen, he had finished Portfolio Review a week earlier. That’s the big show all the design students in the Graphic Design program at San Diego City College work toward. It’s a nice event that brings out the local design community, where seasoned professionals review the portfolios of the graduating students.

Allen was all smiles and relief. “I want to dabble in different aspects of design because the principles are generally the same.” He goes on to mention how he wants to start a fashion brand someday, DJ, try 3D. “I just want to test and try things and just have fun! Of course, I’ll have my graphic design job, but I don’t want that to be the end. Like when the workday ends, that’s not the end of my creativity.” He was bursting with enthusiasm.

And confidence. When asked how prepared he felt about his job prospects, he shares, “I say this humbly, I really do feel confident because I’m very proud of my portfolio and the things I’ve made, my design decisions, and my thought processes.” Oh to be in my early twenties again and have that same zeal!

But here’s the thing: I believe him. I believe he’ll go on to do great things because of this young person’s sheer will. He idolizes Virgil Abloh, the died-too-young multi-hyphenate creative who studied architecture, founded the fashion label Off-White, became artistic director of menswear at Louis Vuitton, and designed furniture for IKEA and shoes for Nike. Abloh is Allen’s North Star.

Artificial intelligence, despite its sycophantic tendencies, does not have that infectious passion. Young people are the lifeblood of companies. They can reinvigorate an organization and bring fresh perspectives to a jaded workforce. Every single time I’ve had the privilege of working with interns, I have felt this. My teams have felt this. And they make the whole organization better.

What Companies Must Do

I love this quote by Robert F. Kennedy in his 1966 speech at the University of Cape Town:

This world demands the qualities of youth: not a time of life but a state of mind, a temper of the will, a quality of imagination, a predominance of courage over timidity, of the appetite for adventure over the life of ease.

As mentioned in Part I of this series, the design industry is experiencing an unprecedented talent crisis, with traditional entry-level career pathways rapidly eroding as the capabilities of AI expand and companies anticipate using AI to automate junior-level tasks. Youth is the key ingredient that sustains companies and industries.

The Business Case for Juniors

Portraits of five recent design graduates. From top left to right: Ashton Landis, wearing a black sleeveless top with long blonde hair against a dark background; Erika Kim, outdoors in front of a mountain at sunset, smiling in a fleece-collared jacket; Emma Haines, smiling and looking over her shoulder in a light blazer, outdoors; Bottom row, left to right: Leah Ray, in a black-and-white portrait wearing a black turtleneck, looking ahead, Benedict Allen, smiling in a black jacket with layered necklaces against a light background

Five recent design graduates. From top left to right: Ashton Landis, Erika Kim, Emma Haines. Bottom row, left to right: Leah Ray, Benedict Allen.

Just as important as the energy and excitement Benedict Allen brings is his natural ability to wield AI. He’s an AI native.

In my conversation with him, it was clear he’s tried all the major chatbots and has figured out what works best for what. “I’ve used Gemini as I find its voice feature amazing. Like, I use it all the time. …I use Claude sometimes for writing, but I find that the writing was not as good as ChatGPT. ChatGPT felt less like AI-speak. …I love Perplexity. That’s one of my favorites as well.”

He’s not alone. Leah Ray, who recently graduated from California College of the Arts with an MFA in Graphic Design, says that she can’t remember how her design process existed without AI: “It’s become such an integral part of how I think and work.”

She parries with ChatGPT, using it as a creative partner:

I usually start by having a deep or sometimes extended conversation with ChatGPT. And it’s not about getting the direct answer, but more about using the dialogue to clarify my thoughts and challenging my assumptions and even arrive at a clear design direction.

She’ll go on to use the chatbot to help with project planning and timelines, copywriting, code generation, and basic image generation. Ray has even considered training her own AI model, using tools like ComfyUI or LoRA, on her past design work. She says, “So it could assist me in generating proposals that match my visual styles.” Pretty advanced stuff.

Similar to Ray, Emma Haines, who is finishing up her MDes in Interaction Design at CCA, says that AI “comes into the picture very early on.” She’ll use ChatGPT for brainstorming and project planning, and less so in the later stages.

Unlike many established designers, these young ones don’t see AI as threatening, nor as a crutch. They treat AI as any other tool. Ashton Landis, who recently graduated from CCA with a BFA in Graphic Design, says, “I think right now it’s primarily a tool instead of a replacement.”

Elena Pacenti, Director of MDes Interaction Design at CCA, observes that students have embraced AI immediately and across the board. She says generative AI has been “adopted immediately by everyone, faculty and students” and it’s being used to create text, images, and all sorts of visual content—not just single images, but storyboards, videos, and more. It’s become just another tool in their toolkit.

Pacenti notices that her students are gravitating toward using AI for efficiency rather than exploration. She sees them “embracing all the tools that help make the process faster, more efficient, quicker” to get to their objective, rather than using AI “to explore things they haven’t thought about or to make things.” They’re using it as a shortcut rather than a creative partner. 

Restructure Entry-Level Roles

I don’t think it’s quite there yet, but AI will eventually take over the traditional tasks we give to junior designers. Anthropic recently released an integration with Canva, but the results are predictable—barely a good first draft. For companies that choose to live on the bleeding edge, that takeover will likely happen within 12 months. I think in two years, we’ll cede more and more junior-level design tasks like extending designs, resizing assets, and searching for stock to AI.

But I believe there is still a place for entry-level designers in any organization. 

First, the tasks can simply be done faster. When we talk about AI and automation, oftentimes the human who’s initiating the task and then judging its output isn’t part of the conversation. Babysitting AI takes time and, more importantly, breaks flow. I can imagine teaching a junior designer how to perform these tasks using AI and stacking up more of them in a day or week. They’ll still be able to practice their taste and curation skills with supervision from more senior peers.

Second, younger people tend to pick up new technologies faster. Asking a much more senior designer to figure out advanced prototyping with Lovable or Cursor will be a non-starter. But junior designers should be able to pick this up quickly and become indispensable pairs of hands in the overall process.

Third, we can simply level up the complexity of the tasks we give to juniors. Aneesh Raman, chief economic opportunity officer at LinkedIn, wrote in The New York Times:

Unless employers want to find themselves without enough people to fill leadership posts down the road, they need to continue to hire young workers. But they need to redesign entry-level jobs that give workers higher-level tasks that add value beyond what can be produced by A.I. At the accounting and consulting firm KPMG, recent graduates are now handling tax assignments that used to be reserved for employees with three or more years of experience, thanks to A.I. tools. And at Macfarlanes, early-career lawyers are now tasked with interpreting complex contracts that once fell to their more seasoned colleagues. Research from the M.I.T. Sloan School of Management backs up this switch, indicating that new and low-skilled workers see the biggest productivity gains and benefits from working alongside A.I. tools.

In other words, let’s assume AI will tackle the campaign resizing or building out secondary and tertiary pages for a website. Have junior designers work on smaller projects as the primary designer so they can set strategy, and have them shadow more senior designers and develop their skills in concept, strategy, and decision-making, not just execution.

Invest in the Leadership Pipeline

The 2023 Writers Guild of America strike offers a sobering preview of what could happen to the design profession if we’re not careful about how AI reshapes entry-level opportunities. Unrelated to AI but driven by simple budget-cutting, Hollywood studios began releasing writers immediately after scripts were completed, cutting them out of the production process where they would traditionally learn the hands-on skills needed to become showrunners and producers. As Oscar-winning writer Sian Heder (CODA) observed, “A writer friend has been in four different writers rooms and never once set foot on set. How are we training the next generation of producers and showrunners?” The result was a generation of writers missing the apprenticeship pathway that transforms scriptwriters into skilled creative leaders—exactly the kind of institutional knowledge loss that weakens an entire industry.

The WGA’s successful push for guaranteed on-set presence reveals what the design industry must do to avoid a similar talent catastrophe. Companies are avoiding junior hires entirely, anticipating that AI will handle execution tasks—but this eliminates the apprenticeship pathway where designers actually learn strategic thinking. Instead, they need to restructure entry-level roles to guarantee meaningful learning opportunities—pairing junior designers with real projects where they develop taste through guided decision-making. As one WGA member put it, “There’s just no way to learn to do this without learning on the job.” The design industry’s version of that job isn’t Figma execution—it’s the messy, collaborative process of translating business needs into human experiences. 

Today’s junior designers will become tomorrow’s creative directors, design directors, and heads of design. Senior folks like myself will eventually age out, so companies that don’t invest in junior talent now won’t have any experienced designers in five to ten years. 

And if this is an industry-wide trend, young designers who can’t get into the workforce today will pivot to other careers and we won’t have senior designers, period.

How Education is Responding

Portraits of five design educators. From top left to right: Bradford Prairie, smiling in a jacket and button-down against a soft purple background; Elena Pacenti, seated indoors, wearing a black top with long light brown hair; Sean Bacon, smiling in a light button-down against a white background; Bottom row, left to right: Josh Silverman, smiling in a striped shirt against a dark background; Eric Heiman, in profile wearing a flat cap and glasses, black and white photo

Our five design educators. From top left to right: Bradford Prairie, Elena Pacenti, Sean Bacon. Bottom row, left to right: Josh Silverman, Eric Heiman.

The Irreplaceable Human Element

When I spoke to the recent grads, all five of them mentioned how AI-created output just has an air of AI. Emma Haines:

People can tell what AI design looks like versus what human design looks like. I think that’s because we naturally just add soul into things when we design. We add our own experiences into our designs. And just being artists, we add that human element into it. I think people gravitate towards that naturally, just as humans.

It speaks to how educators are teaching—and have always been teaching—design. Bradford Prairie, a professor at San Diego City College:

We always tell students, “Try to expose yourself to a lot of great work. Try to look at a lot of inspiration. Try to just get outside more.” Because I think a lot of our students are introverts. They want to sit in their room and I tell them, “No, y’all have to get out in the world! …and go touch grass and touch other things out in the world. That’s how you learn what works and what doesn’t, and what culture looks like.”

Leah Ray, explaining how our humanity imbues quality into our designs:

You can often recognize an AI look. Images and designs start to feel like templates and over-predictable in that sense. And everything becomes fast like fast food and sometimes even quicker than eating instant food.

And even though there is a scary trend toward synthetic user research, Elena Pacenti discourages it. She’ll teach her students to start with provisional user archetypes using AI, but then they’ll need to perform primary research to validate them. “We’re going to do primary to validate. Please do not fake data through the AI.”

Redefining Entry-Level Value

I only talked to educators from two institutions for this series, since those are the two I have connections to. For both programs, there’s less emphasis on hard skills like how to use Figma and more on critical thinking and strategy. I suspect that bootcamps are different.

Sean Bacon, chair of the Graphic Design program at San Diego City College:

Our program is really about concepting, creative thinking, and strategy. Bradford and I are cautiously optimistic that maybe, luckily, the chips we put down are in the right part of the board. But who knows?

I think he’s spot on. Josh Silverman, who teaches in CCA’s MDes Interaction Design program and is also a design recruiter, observes:

So what I’m seeing from my perspective is a lot of organizations that are hiring the kind of students that we graduate from the program, what I like to call a “dangerous generalist.” It’s someone who can do the research, strategy, prototyping, visual design, presentation, storytelling, and be a leader and make a measurable impact. And if a company is restructuring or just starting and only has the means to hire one person, they’re going to want someone who can do all those things. So we are poised to help a lot of students get meaningful employment because they can do all those things.

AI as Design Material, Not Just Tool

Much of the AI conversation has been about how to incorporate it into our design workflows. For UX designers, it’s just as important to discuss how we design AI experiences for users.

Elena Pacenti champions this shift in the conversation. “My take on the whole thing has been to move beyond the tools and to understand AI as a material we design with.” Similar to the early days of virtual reality, AI is an interaction paradigm with very few UI conventions and therefore ripe for designers to invent. Right now.

This profession specifically designs the interaction for complex systems, products, services, a combination—whatever it is out there—and ecosystems of technologies. What’s the next generation of these things that we’re going to design for? …There’s a very challenging task of imagining interactions that are not going just through a chatbot, but they don’t have shape yet. They look tremendously immaterial, more than the past. It’s not going to be necessarily through a screen.

Her program at CCA has implemented this through a specific elective called “Prototyping with AI,” which Pacenti describes as teaching students to “get your hands dirty and understand what’s behind the LLMs and how you can use this base of data and intelligence to do things that you want, not that they want.” The goal is to help students craft their own tools rather than just using prepackaged consumer AI tools—which she calls “a shift towards using it as a material.”
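
To make “crafting your own tools” a little more concrete, here’s a rough sketch of the kind of thing a student might build: a tiny script that calls an LLM API with a purpose-built prompt instead of going through a consumer chatbot. (This illustration is mine, not Pacenti’s course material; it assumes Node 18+ with the OpenAI chat completions endpoint, and the function name, model, and prompt are placeholders.)

```ts
// A minimal, hypothetical example of treating an LLM as a design material:
// a purpose-built critique helper rather than a general-purpose chatbot.
// Assumes Node 18+ (built-in fetch) and an OPENAI_API_KEY environment variable.
type ChatMessage = { role: "system" | "user"; content: string };

async function critiqueOnboardingFlow(steps: string[]): Promise<string> {
  const messages: ChatMessage[] = [
    {
      role: "system",
      content:
        "You are a design critique partner. Point out friction, unclear copy, and missing states. Do not invent user data.",
    },
    { role: "user", content: `Critique this onboarding flow:\n${steps.join("\n")}` },
  ];

  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "gpt-4o-mini", messages }),
  });

  const data = await res.json();
  return data.choices[0].message.content;
}

// Example: critiqueOnboardingFlow(["1. Splash screen", "2. Email signup", "3. Empty dashboard"]).then(console.log);
```

The point isn’t the code. It’s that once you shape the prompt, the constraints, and the output format yourself, the model starts to feel like material rather than a vending machine.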

The Path Forward Requires Collective Action

Benedict Allen’s infectious enthusiasm after Portfolio Review represents everything the design industry risks losing if we don’t act deliberately. His confidence, creativity, and natural fluency with AI tools? That’s the potential young designers bring—but only if companies and educational institutions create pathways for that talent to flourish.

The solution isn’t choosing between human creativity and artificial intelligence. It’s recognizing that the combination is more powerful than either alone. Elena Pacenti’s insight about understanding “AI as a material we design with” points toward this synthesis, while companies like KPMG and Macfarlanes demonstrate how entry-level roles can evolve rather than disappear.

This transformation demands intentional investment from both sides. Design schools are adapting quickly—reimagining curriculum, teaching AI fluency alongside fundamental design thinking, emphasizing the irreplaceable human elements that no algorithm can replicate. Companies must match this effort. Restructure entry-level roles. Create new apprenticeship models. Recognize that today’s junior designers will become tomorrow’s creative leaders.

The young designers I profiled here prove that talent and enthusiasm haven’t disappeared. They’re evolving. Allen’s ambitious vision to start a fashion brand. Leah Ray’s ease with AI tools. The question isn’t whether these designers can adapt to an AI-enabled future.

It’s whether the industry will create space for them to shape it.


In the final part of this series, I’ll explore specific strategies for recent graduates navigating this current job market—from building AI-integrated portfolios to creating alternative pathways into the profession.

Illustration of people working on laptops atop tall ladders and multi-level platforms, symbolizing hierarchy and competition, set against a bold, abstract sunset background.

The Design Industry Created Its Own Talent Crisis. AI Just Made It Worse.

This is the first part in a three-part series about the design talent crisis. Read Part II and Part III.

Part I: The Vanishing Bottom Rung

Erika Kim’s path to UX design represents a familiar pandemic-era pivot story, yet one that reveals deeper currents about creative work and economic necessity. Armed with a 2020 film and photography degree from UC Riverside, she found herself working gig photography—graduations, band events—when the creative industries collapsed. The work satisfied her artistic impulses but left her craving what she calls “structure and stability,” leading her to UX design. The field struck her as an ideal synthesis: “I’m creating solutions for companies. I’m working with them to figure out what they want, and then taking that creative input and trying to make something that works best for them.”

Since graduating from the interaction design program at San Diego City College a year ago, she’s had three internships and works retail part-time to pay the bills. “I’ve been in survival mode,” she admits. On paper, she’s a great candidate for any junior position. Speaking with her reveals a very thoughtful and resourceful young designer. Why hasn’t she been able to land a full-time job? What’s going on in the design job market? 

Back in January, Jared Spool offered an explanation. The UX job market crisis stems from a fundamental shift that occurred around late 2022—what he calls a “market inversion.” The market flipped from having far more open UX positions than qualified candidates to having far more unemployed UX professionals than available jobs. The reasons are manifold, including expiring tax incentives, rising interest rates, an abundance of bootcamp graduates, automated hiring processes, and globalization.

But that’s only part of the equation. I believe there’s something much larger at play, one that affects not just UX or product design but all design disciplines. Software developers have already felt the tip of this spear in their own job market. AI.

Closing Doors for New Graduates

In the first half of this year, 147 tech companies laid off over 63,000 workers, a significant portion of them engineers. Entry-level hiring has collapsed, revealing a new permanent reality. At Big Tech companies, new graduates now represent just 7% of all hires—a precipitous 25% decline from 2023 levels and a staggering 50% drop from pre-pandemic baselines in 2019.

The startup ecosystem tells an even more troubling story, where recent graduates comprise less than 6% of new hires, down 11% year-over-year and more than 30% since 2019. This isn’t merely a temporary adjustment; it represents a fundamental restructuring of how companies approach talent acquisition. Even the most credentialed computer science graduates from top-tier programs are finding themselves shut out, suggesting that the erosion of junior positions cuts across disciplines and skill levels.  

LinkedIn executive Aneesh Raman wrote in an op-ed for The New York Times that in a “recent survey of over 3,000 executives on LinkedIn at the vice president level or higher, 63 percent agreed that A.I. will eventually take on some of the mundane tasks currently allocated to their entry-level employees.”

There is already a harsh reality for entry-level tech workers. Companies have essentially frozen junior engineer and data analyst hiring because AI can now handle the routine coding and data querying tasks that were once the realm of new graduates. Hiring managers expect AI’s coding capabilities to expand rapidly, potentially eliminating entry-level roles within a year, while simultaneously increasing demand for senior engineers who can review and improve AI-generated code. It’s a brutal catch-22: junior staff lose their traditional stepping stones into the industry just as employers become less willing to invest in onboarding them.

For design students and recent graduates, this data illuminates a broader industry transformation where companies are increasingly prioritizing proven experience over potential—a shift that challenges the very foundations of how creative careers traditionally begin.

While AI tools haven’t exactly been able to replace designers yet—even junior ones—the tech will get there sooner than we think. And CEOs and those holding the purse strings are anticipating this, holding back on hiring juniors.

Portraits of five recent design graduates. From top left to right: Ashton Landis, wearing a black sleeveless top with long blonde hair against a dark background; Erika Kim, outdoors in front of a mountain at sunset, smiling in a fleece-collared jacket; Emma Haines, smiling and looking over her shoulder in a light blazer, outdoors; Bottom row, left to right: Leah Ray, in a black-and-white portrait wearing a black turtleneck, looking ahead, Benedict Allen, smiling in a black jacket with layered necklaces against a light background

Five recent design graduates. From top left to right: Ashton Landis, Erika Kim, Emma Haines. Bottom row, left to right: Leah Ray, Benedict Allen.

The Learning-by-Doing Crisis

Ashton Landis recently graduated with a BFA in Graphic Design from California College of the Arts (full disclosure: my alma mater). She says:

I found that if you look on LinkedIn for “graphic designer” and you just say the whole San Francisco Bay area, so all of those cities, and you filter for internships and entry level as the job type, there are 36 [job postings] total. And when you go through it, 16 of them are for one or more years of experience. And five of those are for one to two years of experience. And then everything else is two plus years of experience, which doesn’t actually sound like entry level to me. …So we’re pretty slim pickings right now.

When I graduated from CCA in 1995 (or CCAC as it was known back then), we were just climbing out of the labor-market effects of the early 1990s recession. For my early design jobs in San Francisco, I did a lot of production and worked very closely with more senior designers and creative directors to hone my craft. While school is great for academic learning, nothing beats real-world experience.

Eric Heiman, creative director and co-owner of Volume Inc., a small design studio based in San Francisco, has been teaching at CCA for 26 years. He observes:

We internalize so much by doing things slower, right? The repetition of the process, learning through tinkering with our process, and making mistakes, and things like that. We have internalized those skills.

Sean Bacon, chair of the Graphic Design program at San Diego City College, wonders:

What is an entry level position in design then? Where do those exist? How often have I had these companies hire my students even though they clearly don’t have those requirements. So I don’t know. I don’t know what happens, but it is scary to think we’re losing out on what I thought was really valuable training in terms of how I learned to operate, at least in a studio.

Back at the beginning of my career, I remember digitizing logos when I interned with Mark Fox, a talented logo designer based in Marin County. A brilliant draftsman, he had inked—and still inks—all of his logos by hand. The act of redrawing marks in Illustrator helped me develop my sense of proportions, curves, and optical alignment. At digital agencies, I started my journey redesigning layouts of banners in different sizes. I would eventually have juniors do that for me as I rose through the ranks. These experiences—though a little painful at the time—were pivotal in perfecting our collective craft. To echo Bacon, it was “really valuable training.”

Apprenticeships at Agencies

Working in agencies and design studios was pretty much an apprenticeship model. Junior designers shadowed more senior designers and took their lead when executing a campaign or designing more pages for a website.

For a typical website project, as a senior designer or art director, I would design the homepage and a few other critical screens, setting up the look and feel. Once those were approved by the client, junior designers would take over and execute the rest. This was efficient and allowed the younger staff to participate and put their reps in.

Searching for stock photos was another classic assignment for interns and junior designers. These were oftentimes multi-day assignments, but they helped teach juniors how to see.

But today, generative AI apps like Midjourney and Visual Electric are replacing stock photography. 

From Craft to Curation

As the industry marches towards incorporating AI into our workflows, strategy, judgment, and, most importantly, taste are critical skills.

But here’s the paradox: how do designers develop taste, craft, and strategic thinking without doing the grunt work?

And they’re missing out on that mundane work not only because of the dearth of entry-level opportunities, but also because generative AI can deliver results so quickly.

Eric Heiman again:

I just give the AI a few words and poof, it’s there. How do you learn how to see things? I just feel like learning how to see is a lot about slowing down. And in the case of designers, doing things yourself over and over again, and they slowly reveal themselves through that process.

All the recent graduates I interviewed for this piece are smart, enthusiastic, and talented. Yet, Ashton Landis and Erika Kim are struggling to find full-time jobs. 

Landis doesn’t think her negative experience in the job market is “entirely because of AI,” attributing it more to “general unemployment rates are pretty high right now” and a job market that is “clearly not great.”

Questioning Career Choices

Leah Ray, a recent graphic design MFA graduate from CCA, was able to secure a position as International Visual Designer at Kuaishou, a popular Chinese short-form video and live-streaming app similar to TikTok. But it wasn’t easy. Her job search began months before graduation, extending through her thesis work and creating the kind of sustained anxiety that prompted her final school project—a speculative design exploring AI’s potential to predict alternative career futures.

I was so anxious about my next step after graduation because I didn’t have a job lined up and I didn’t know what to do. …I’m a person who follows the social clock. My parents and the people around me expect me to do the right thing at the right age. Getting a nice job was my next step, but I couldn’t finish that, which led to me feeling anxious and not knowing what to do.

But through her tenacity and some luck, she was able to land the job that she starts this month. 

No, it was not easy to find. But finding this was very lucky. I do remember I saw a lot of job descriptions for junior designers. They expect designers to have AI skills. And I think there are even some roles specifically created for people with AI-related design skills, like AI motion designer and AI model designer, sort of something like that. Like AI image training designers.

Ray’s observation reveals a fundamental shift in entry-level design expectations, where AI proficiency has moved from optional to essential, with entirely new roles emerging around AI-specific design skills.

Portraits of five design educators. From top left to right: Bradford Prairie, smiling in a jacket and button-down against a soft purple background; Elena Pacenti, seated indoors, wearing a black top with long light brown hair; Sean Bacon, smiling in a light button-down against a white background; Bottom row, left to right: Josh Silverman, smiling in a striped shirt against a dark background; Eric Heiman, in profile wearing a flat cap and glasses, black and white photo

Our five design educators. From top left to right: Bradford Prairie, Elena Pacenti, Sean Bacon. Bottom row, left to right: Josh Silverman, Eric Heiman.

Preparing Our Students

Emma Haines, a designer completing her master’s degree in Interaction Design at CCA, began her job search in May. (Her program concludes in August.) Despite not securing a job yet, she’s bullish because of the prestige and practicality of the Master of Design program.

I think this program has actually helped me a good amount from where I was starting out before. I worked for a year between undergrad and this program, and between where I was before and now, there’s a huge difference. That being said, since the industry is changing so rapidly, it feels a little hard to catch up with. That’s the part that makes me a little nervous going into it. I could be confident right now, but maybe in six months something changes and I’m not as confident going into the job market.

CCA’s one-year program represents a strategic bet on adaptability over specialization. Elena Pacenti, the program’s director, describes an intensive structure that “goes from a foundational semester with foundation of interaction design, form, communication, and research to the system part of it. So we do systems thinking, prototyping, also tangible computing.” The program’s Social Lab component is “two semester-long projects with community partners in partnership with stakeholders that are local or international from UNICEF down to the food bank in Oakland.” It positions design as a tool for social impact rather than purely commercial purposes. This compressed timeline creates what Pacenti calls curricular agility: “We’re lucky that we are very agile. We are a one-year program so we can implement changes pretty quickly without affecting years of classes and changes in the curriculum.”

Josh Silverman, who chaired the program for nearly five years, reports impressive historical outcomes: “I think historically for the first nine years of the program—this is cohort 10—I think we’ve had something like 85% job placement within six months of graduation.”

Yet both educators acknowledge current market realities. Pacenti observes that “that fat and hungry market of UX designers is no longer there; it’s on a diet,” while maintaining optimism about design’s future relevance: “I do not believe that designers will be less in demand. I think there will be a tremendous need for designers.” Emma Haines’s nervousness about rapid industry change reflects this broader tension—the gap between educational preparation and market evolution that defines professional training during transformative periods.

Bradford Prairie, who has taught in San Diego City College’s Graphic Design program for nine years, embodies this experimental approach to AI in design education. “We get an easy out when it comes to AI tools,” he explains, “because we’re a program that’s meant to train people for the field. And if the field is embracing these tools, we have an obligation to make students aware of them and give some training on how to use the tools.”

Prairie’s classroom experiments reveal both the promise and pitfalls of AI-assisted design. He describes a student struggling with a logo for a DJ app who turned to ChatGPT for inspiration: “It generates a lot of expected things like turntables, headphones, and waveforms… they’re all too complicated. They all don’t really look like logos. They look more like illustrations.” But the process sparked some other ideas, so he told the student, “This is kind of interesting how the waveform is part of the turntable and… we can take this general idea and redraw it and make it simplified.”

This tension between AI output and human refinement has become central to his teaching philosophy: “If there’s one thing that AI can’t replace, it’s your sense of discernment for what is good and what is not good.” The challenge, he acknowledges, lies in developing that discernment in students who may be tempted to rely too heavily on AI from the start.

The Turning Point

These challenges are real, and they’re reshaping the design profession in fundamental ways. Traditional apprenticeships are vanishing, entry-level opportunities are scarce, and new graduates face an increasingly competitive landscape. But within this disruption lies opportunity. The same forces that have eliminated routine design tasks have also elevated the importance of uniquely human skills—strategic thinking, cultural understanding, and creative problem-solving. The path forward requires both acknowledging what’s been lost and embracing what’s possible.

Despite her struggles to land a full-time job in design, Erika Kim remains optimistic because she’s so enthused about her career choice and the opportunity ahead. Remarking on the parallels between today and the beginning of the Covid-19 pandemic, she says, “It’s kind of interesting that I’m also on completely different grounds in terms of uncertainty. But you just have to get through it, you know. Why not?”


In the next part of this series, I’ll focus on the opportunities ahead: how we as a design industry can do better and what we should be teaching our design students. In the final part, I’ll touch on what recent grads can do to find a job in this current market.

Retro-style robot standing at a large control panel filled with buttons, switches, and monitors displaying futuristic data.

The Era of the AI Browser Is Here

For nearly three years, Arc from The Browser Company has been my daily driver. To be sure, there was a little bit of a learning curve. Tabs disappeared after a day unless you pinned them. Then they became almost like bookmarks. Tabs were on the left side of the window, not at the top. Spaces let me organize my tabs based on use cases like personal, work, or finances. I could switch between tabs using control-Tab and saw little thumbnails of the pages, similar to the app switcher on my Mac. Shift-command-C copied the current page’s URL. 

All these little interface ideas added up to a productivity machine for web jockeys like myself. And so, I was saddened to hear in May that The Browser Company stopped actively developing Arc in favor of a new AI-powered browser called Dia. (They are keeping Arc updated with maintenance releases.)

They had started beta-testing Dia with college students first and just recently opened it up to Arc members. I finally got access to Dia a few weeks ago. 

But before diving into Dia, I should mention I also got access to another AI browser, Perplexity’s Comet, about a week ago. I’m on their Pro plan but somehow got an invite in my email. I had thought it was limited to those on their much more expensive Max plan. Shhh.

So this post is about both, and about how the future of web browsing is obviously AI-assisted, because it feels so natural.

Chat With Your Tabs

Landing page for Dia, a browser tool by The Browser Company, showcasing the tagline “Write with your tabs” and a button for early access download, along with a UI mockup for combining tabs into a writing prompt.

To be honest, I used Dia in fits and starts. It was easy to import my profiles from Arc and have all my bookmarks transferred over. But, I was missing all the pro-level UI niceties that Arc had. Tabs were back at the top and acted like tabs (though they just brought back sidebar tabs in the last week). There were no spaces. I felt like it was 2021 all over again. I tried to stick with it for a week. 

What Dia offers that Arc does not is, of course, a way to “chat” with your tabs. It’s a chat sidebar to the right of the web page that has the context of that page you’re on. You can also add additional tabs to the chat context by simply @mentioning them.

In a recent article about Dia in The New York Times, reporter Brian X. Chen describes using it to summarize a 22-minute YouTube video about car jump starters, instantly surfacing the top products without watching the whole thing. This is a vivid illustration of the “chat with your tabs” value prop. Saving time.

I’ve been doing the same thing. Asking the chat to summarize a page for me or explain some technical documentation to me in plain English. Or I use it as a fuzzy search to find a quote from the page that mentions something specific. For example, if I’m reading an interview with the CEO of Perplexity and I want to know if he’s tried the Dia browser yet, I can ask, “Has he used Dia yet?” instead of reading through the whole thing.

Screenshot of the Dia browser displaying a Verge article about Perplexity’s CEO, with an AI-generated sidebar summary clarifying that Aravind Srinivas has not used Dia.


Another use case is to open a few tabs and ask for advice. For example, I can open up a few shirts from an e-commerce store and ask for a recommendation.

Screenshot of the Dia browser comparing shirts on the Bonobos website, with multiple tabs open for different shirt styles. The sidebar displays AI-generated advice recommending the Everyday Oxford Shirt for a smart casual look, highlighting its versatility, fit options, and stretch comfort.

Using Dia to compare shirts and get a smart casual recommendation from the AI.

Dia also has customizable “skills” which are essentially pre-saved prompts. I made one to craft summary bios from LinkedIn profiles.

Screenshot of the Dia browser on Josh Miller’s LinkedIn profile, with the “skills” feature generating a summarized biography highlighting his role as CEO of The Browser Company and his career background.

Using Dia’s skills feature to generate a summarized biography from a LinkedIn profile.

It’s cool. But I found that it’s a little limited because the chat is usually just with the tabs that you feed Dia. It helps you digest and process information. In other words, it’s an incremental step up from ChatGPT.

Enter Comet.

Browsing Done for You

Landing page for Comet, an AI-powered browser by Perplexity, featuring the tagline “Browse at the speed of thought” with a prominent “Get Comet” download button.

Comet by Perplexity also allows you to chat with your tabs. Asking about that Verge interview, I received a very similar answer. (No, Aravind Srinivas has not used Dia yet.) And because Perplexity search is integrated into Comet, I find that it is much better at context-setting and answering questions than Dia. But that’s not Comet’s killer feature.

Screenshot of the Comet browser displaying a Verge article about Perplexity’s CEO, with the built-in AI assistant on the right confirming Aravind Srinivas has not used the Dia browser.

Viewing the same article in Comet, with its AI assistant answering questions about the content.

Instead, it’s doing stuff with your tabs. Comet’s onboarding experience shows a few use cases like replying to emails and setting meetings, or filling an Instacart cart with the ingredients for butter chicken.

Just like Dia, when I first launched Comet, I was able to import my profiles from Arc, which included bookmarks and cookies. I was essentially still logged into all the apps and sites I was already logged into. So I tried an assistant experiment. 

One thing I often do is cross-reference restaurants that have availability on OpenTable with their Yelp ratings. I tend to agree more with Yelpers, who are usually harsher critics than OpenTable diners. So I asked Comet to “Find me the highest rated sushi restaurants in San Diego that have availability for 2 at 7pm next Friday night on OpenTable. Pick the top 10 and then rank them by Yelp rating.” And it worked! If I really wanted to, I could have said “Book Takaramono sushi” and it would have done so. (Actually, I did, and then quickly canceled.)

The Comet assistant helped me find a sushi restaurant reservation. Video is sped up 4x.

I tried a different experiment, something I heard Aravind Srinivas mention in his interview with The Verge. I navigated to Gmail and checked three emails I wanted to unsubscribe from. I asked the assistant, “unsubscribe from the checked emails.” The agent then essentially took over my Gmail screen, opened the first checked email, and clicked on the unsubscribe link. It repeated this process for the other two emails, though it ran into a couple of snags. First, Gmail doesn’t keep the state of the checked emails when you click into one, but the Comet assistant was smart enough to remember the subject lines of all three. Second, it had trouble filling in the right email address on the unsubscribe form for the second email, so that one didn’t go through. Of the three unsubscribes, it succeeded on two.

The whole process also took about two minutes. It was wild, though, to see my Gmail being navigated by the machine. So that you know it’s in control, Comet puts a teal glow around the edges of the page, not dissimilar to the purple glow of the new Siri. And I could have stopped Comet at any time by clicking a stop button. Obviously, sitting there for two minutes and watching my computer unsubscribe from three emails is a lot longer than the 20 seconds it would have taken me to do this manually, but as with many agents, the idea is to delegate a process and come back later to check on it.

I Want My AI Browser

A couple hours after Perplexity launched Comet, Reuters published a leak with the headline “Exclusive: OpenAI to release web browser in challenge to Google Chrome.” Perplexity’s CEO seems to suggest the timing was deliberate, meant to take a bit of the wind out of their sails. The Justice Department is still trying to strong-arm Google into divesting itself of Chrome. If that happens, we’re talking about breaking the most profitable feedback loop in tech history. Chrome funnels search queries directly to Google, which powers their ad empire, which funds Chrome development. Break that cycle, and suddenly you’ve got an independent Chrome that could default to any search engine, giving AI-first challengers like The Browser Company, Perplexity, and OpenAI a real shot at users.

Regardless of Chrome’s fate, I strongly believe that AI-enabled browsers are the future. Once I started chatting with my tabs, asking for summaries, seeking clarification, asking for too-technical content to be dumbed down to my level, I just can’t go back. The agentic stuff that Perplexity’s Comet is at the forefront of is just the beginning. It’s not perfect yet, but I think its utility will get there as the models get better. To quote Srinivas again:

I’m betting on the fact that in the right environment of a browser with access to all these tabs and tools, a sufficiently good reasoning model — like slightly better, maybe GPT-5, maybe like Claude 4.5, I don’t know — could get us over the edge where all these things are suddenly possible and then a recruiter’s work worth one week is just one prompt: sourcing and reach outs. And then you’ve got to do state tracking… That’s the extent to which we have an ambition to make the browser into something that feels more like an OS where these are processes that are running all the time.

It must be said that both Opera and Microsoft’s Edge also have AI built in. However, the way those features are integrated feels more like an afterthought, the same way that Arc’s own AI features felt like tiny improvements.

The AI-powered ideas in both Dia and Comet are a step change. But the basics also have to be there, and in my opinion, should be better than what Chrome offers. The interface innovations that made Arc special shouldn’t be sacrificed for AI features. Arc is/was the perfect foundation. Integrate an AI assistant that can be personalized to care about the same things you do so its summaries are relevant. The assistant can be agentic and perform tasks for you in the background while you focus on more important things. In other words, put Arc, Dia, and Comet in a blender and that could be the perfect browser of the future.

Close-up of bicentennial logo storyboard frames featuring red, white, and blue geometric patterns and star designs in rounded rectangles.

America at 200

When I was younger, I had a sheet of US Bicentennial stamps and I always loved the red, white, and blue star. Little did I know then that I would become a graphic designer.

Sheet of US postage stamps featuring the bicentennial star logo, each stamp showing "AMERICAN REVOLUTION BICENTENNIAL 1776-1976" with 8-cent denomination.

The symbol, designed by Bruce Blackburn at Chermayeff & Geismar, is a multilayered, stylized five-pointed star. It folds like bunting. Its rounded corners evoke both a flower and a pinwheel. And finally, the negative space reveals a classic, pointed star.

Official American Revolution Bicentennial logo - red and blue interlocking star design with "AMERICAN REVOLUTION BICENTENNIAL 1776-1976" text in circular border.

A few years ago, Standards Manual reproduced the guidelines and I managed to grab a copy. Here’s a spread featuring storyboards for a motion graphics spot. I love it.

Open guidebook showing American Revolution Bicentennial logo storyboard frames and a Certificate of Official Recognition template from 1776-1976.

In Blackburn’s foreword to the reproduction, he wrote:

My deliberations led to the following conclusions: to begin with, of all the revolutionary “American” symbols I considered as possible elements in a solution, the only one that passed the historical reference test and, at the same time, could be utilized in a contemporary or “modern” way was the five-pointed star from the Betsy Ross flag. But the star is an aggressive and militaristic form, and the event needed something friendlier, more accessible. Why not wrap the star in stripes of red, white and blue “bunting”, rounding the sharp edges of the star and producing a second star surrounding the original? The two stars also refer to the two American centuries being celebrated.

Also, a little-known fact: Blackburn’s version was not the winner of the competition. Richard Baird, writing for his great Logo Histories newsletter two years ago, tells the story:

The symbol designed by Bruce Blackburn while working at Chermayeff & Geismar Associates is well-known and celebrated as a fine achievement in marque-making. The symbol would go on to be used on the side of the NASA Vehicle Assembly Building, on the Viking Mars lander and used across stamps, patches and all kinds of promotional materials, which accounts for its widespread recognition in the US. However, few know that Blackburn’s design was not the winning entry, that honour went to Lance Wyman.

Honestly, I don’t like Wyman’s version as much. Maybe it’s because I’m so familiar with the Blackburn symbol. The 7 and 6 are too abstracted to be recognizable, even to a trained designer like me.

Happy 249th birthday, America.

Oh, and Chermayeff & Geismar & Haviv is working on the 250th anniversary branding for next year.

Stylized artwork showing three figures in profile - two humans and a metallic robot skull - connected by a red laser line against a purple cosmic background with Earth below.

Beyond Provocative: How One AI Company’s Ad Campaign Betrays Humanity

I was in London last week with my family and spotted this ad in a Tube car. With the headline “Humans Were the Beta Test,” this is for Artisan, a San Francisco-based startup peddling AI-powered “digital workers,” specifically an AI agent that performs sales outreach to prospects.

London Underground tube car advertisement showing "Humans Were the Beta Test" with subtitle "The Era of AI Employees Is Here" and Artisan company branding on a purple space-themed background.

Artisan ad as seen in London, June 2025

I’ve long since left the Bay Area, but I know that Highway 101 is littered with cryptic billboards from tech companies, where the copy only makes sense to people in the tech industry, which, to be fair, is a large part of the Bay Area economy. Artisan is infamous for its “Stop Hiring Humans” campaign, which went up late last year. Being based in San Diego, much further south in California, I had no idea. Artisan wasn’t even on my radar.

Highway billboard reading "Stop Hiring Humans, Hire Ava, the AI BDR" with Artisan branding and an AI avatar image on the right side.

Artisan billboard off Highway 101, between San Francisco and SFO Airport

There’s something to be said about shockvertising. It’s meant to be shocking or offensive to grab attention. And the company sure increased its brand awareness, claiming a 197% increase in brand search growth. Artisan CEO Jaspar Carmichael-Jack wrote a post-mortem about the campaign on the company blog:

The impact exceeded our wildest expectations. When I meet new people in San Francisco, 70% of the time they know about Artisan and what we do. Before, that number was around 5%. aHrefs ranked us #2 fastest growing AI companies by brand search. We’ve seen 1000s of sales meetings getting booked.

According to him, “October and November became our biggest months ever, bringing in over $2M in new ARR.”

I don’t know how I feel about this. My initial reaction to seeing “Humans Were the Beta Test” in London was disgust. As my readers know, I’m very much pro-AI, but I’m also very pro-human. Calling humanity a beta test is simply tone-deaf and nihilistic. It is belittling our worth and betting on the end of our species. Yes, yes, I know it’s just advertising and some ads are simply offensive to various people for a variety of reasons. But as technology people, Artisan should know better.

Despite ChatGPT’s soaring popularity, there is still ample fear about AI, especially around job displacement and safety. The discourse around AI is already too hyped up.

I even think “Stop Hiring Humans” is slightly less offensive. As to why the company chose to create a rage-bait campaign, Carmichael-Jack says:

We knew that if we made the billboards as vanilla as everybody else’s, nobody would care. We’d spend $100s of thousands and get nothing in return.

We spent days brainstorming the campaign messaging. We wanted to draw eyes and spark interest, we wanted to cause intrigue with our target market while driving a bit of rage with the wider public. The messaging we came up with was simple but provocative: “Stop Hiring Humans.”

Bus stop advertisement displaying "Stop Hiring Humans" with "The Era of AI Employees Is Here" and three human faces, branded by Artisan, on a city street with a passing bus.

When the full campaign, which included 50 bus shelter posters, went up, death threats started pouring in. He was in Miami on business and thought going home to San Francisco might be risky. “I was like, I’m not going back to SF,” Carmichael-Jack told The San Francisco Standard. “I will get murdered if I go back.”

(For the record, I’m morally opposed to death threats. They’re cowardly and incredibly scary for the recipient, regardless of who that person is.)

I’ve done plenty of B2B advertising campaigns in my day. Shock is not a tactic I would have used, nor one I would ever recommend to a brand trying to raise positive awareness. I wish Artisan had used the services of a good B2B ad agency. There are plenty out there, and I used to work at one.

Think about the brands that have used shockvertising tactics in the past, like Benetton and Calvin Klein. I’ve liked Oliviero Toscani’s controversial photographs that have been central to Benetton’s campaigns because they instigate a positive *liberal* conversation. The Pope kissing Egypt’s Islamic leader invites dialogue about religious differences and coexistence and provocatively expresses the campaign concept of “Unhate.”

But Calvin Klein’s sexualized high schoolers? No. There’s no good message in that.

And for me, there’s no good message in promoting the death of the human race. After all, who will pay for the service after we’re all end-of-lifed?

Collection of iOS interface elements showcasing Liquid Glass design system including keyboards, menus, buttons, toggles, and dialogs with translucent materials on dark background.

Breaking Down Apple’s Liquid Glass: The Tech, The Hype, and The Reality

I kind of expected it: a lot more ink was spilled on Liquid Glass—particularly on social media. In case you don’t remember, Liquid Glass is the new UI for all of Apple’s platforms. It was announced Monday at WWDC 2025, their annual developers conference.

The criticism is primarily around legibility and accessibility. Secondary reasons include aesthetics and power usage to animate all the bubbles.

How Liquid Glass Actually Works

Before I go and address the criticism, I think it would be great to break down the team’s design thinking and how Liquid Glass actually works. 

I watched two videos from Apple’s developer site. Much of the rest of this article is a summary of those videos. If you’d rather watch them instead, feel free to skip to the end of this piece.

First off is this video that explains Liquid Glass in detail.

As I watched the video, one thing stood out clearly to me: the design team at Apple did a lot of studying of the real world before digitizing it into UI.

The Core Innovation: Lensing

Instead of scattering light like previous materials, Liquid Glass dynamically bends and shapes light in real-time. Apple calls this “lensing.”

It’s their attempt to recreate how transparent objects work in the physical world. We all intuitively understand how warping and bending light communicates presence and motion. Liquid Glass uses these visual cues to provide separation while letting content shine through.

A Multi-Layer System That Adapts

Liquid Glass toolbar with pink tinted buttons (bookmark, refresh, more) floating over geometric green background, showing tinting capabilities.

This isn’t just a simple effect. It’s built from several layers working together:

  • Highlights respond to environmental lighting and device motion. When you unlock your phone, lights move through 3D space, causing illumination to travel around the material.
  • Shadows automatically adjust based on what’s behind them—darker over text for separation, lighter over solid backgrounds.
  • Tint layers continuously adapt. As content scrolls underneath, the material flips between light and dark modes for optimal legibility.
  • Interactive feedback spreads from your fingertip throughout the element, making it feel alive and responsive.

All of this happens automatically when developers apply Liquid Glass.
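To make that concrete, here’s a minimal SwiftUI sketch of opting a custom control into the material. It assumes the glassEffect modifier Apple showed in its WWDC 2025 sessions; treat the exact call as an assumption rather than a verified API reference. The point is simply that the adaptive behaviors above come for free once the material is applied.

```swift
import SwiftUI

// Hypothetical example: a small floating toolbar that adopts Liquid Glass.
// The glassEffect() modifier is assumed from Apple's WWDC 2025 sessions;
// once applied, the highlights, shadows, and light/dark tint adaptation
// described above are handled by the system.
struct FloatingToolbar: View {
    var body: some View {
        HStack(spacing: 16) {
            Button("Bookmark", systemImage: "bookmark") { /* save the page */ }
            Button("Refresh", systemImage: "arrow.clockwise") { /* reload */ }
        }
        .padding()
        .glassEffect() // system defaults; the material adapts to the content underneath
    }
}
```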

Two Variants (Frosted and Clear)

Liquid Glass comes in two types of material: a frosted Regular and a more transparent Clear.

  • Regular is the workhorse—full adaptive behaviors, works anywhere.
  • Clear is more transparent but needs dimming layers for legibility.

Clear should only be used over media-rich content when the content layer won’t suffer from dimming. Otherwise, stick with Regular.

It’s like ice cubes—cloudy ones from your freezer versus clear ones at fancy bars that let you see your drink’s color.

Four examples of regular Liquid Glass elements: audio controls, deletion dialog, text selection menu, and red toolbar, demonstrating various applications.

Regular is the workhorse—full adaptive behaviors, works anywhere.

Video player interface with Liquid Glass controls (pause, skip buttons) overlaying blue ocean scene with sea creature.

Clear should only be used over media-rich content when the content layer won’t suffer from dimming.

Smart Contextual Changes

When elements scale up (like expanding menus), the material simulates thicker glass with deeper shadows. On larger surfaces, ambient light from nearby content subtly influences the appearance.

Elements don’t fade—they materialize by gradually modulating light bending. The gel-like flexibility responds instantly to touch, making interactions feel satisfying.

This is something that’s hard to see in stills.

The New Tinting Approach

Red "Add" button with music note icon using Liquid Glass material over black and white checkered pattern background.

Instead of flat color overlays, Apple generates tone ranges mapped to content brightness underneath. It’s inspired by how colored glass actually works—changing hue and saturation based on what’s behind it.

Apple recommends sparing use of tinting. Only for primary actions that need emphasis. Makes sense.

Design Guidelines That Matter

Liquid Glass is for the navigation and controls layer floating above content—not for everything. Don’t apply Liquid Glass to content areas or turn content areas into Liquid Glass. Never stack glass on glass.

Liquid Glass button with a black border and overlapping windows icon floating over blurred green plant background, showing off its accessibility mode.

Accessibility features are built-in automatically—reduced transparency, increased contrast, and reduced motion modify the material without breaking functionality.

The Legibility Outcry (and Why It’s Overblown)

Apple devices (MacBook, iPad, iPhone, Apple Watch) displaying new Liquid Glass interface with translucent elements over blue gradient wallpapers.

“Legibility” was mentioned 13 times in the 19-minute video. Clearly it was a concern of theirs. Yes, the keynote showed clear-tinted device home screens, and many on social media took that to be an accessibility abomination. Which, yes, it is. But that’s not the default.

The fact that the system senses the type of content underneath it and adjusts accordingly—flipping from light to dark, increasing opacity, or adjusting shadow depth—means they’re making accommodations for legibility.

Maybe Apple needs to do some tweaking, but it’s evident that they care about this.

And as with the 18 macOS releases before Tahoe—this year’s version—accessibility settings and controls have been built right in. Universal Access debuted with Mac OS X 10.2 Jaguar in 2002. Apple has had a long history of supporting customers with disabilities, dating all the way back to 1987.

So while the social media outcry about legibility is understandable, Apple’s track record suggests they’ll refine these features based on real user feedback, not just Twitter hot takes.

The Real Goal: Device Continuity

What is Liquid Glass meant to do, and why? It’s unification. With the new design language, Apple has also come out with a new design system. This video presented by Apple designer Maria Hristoforova lays it out.

Hristoforova says that Apple’s new design system overhaul is fundamentally about creating seamless familiarity as users move between devices—ensuring that interface patterns learned on iPhone translate directly to Mac and iPad without requiring users to relearn how things work. The video points out that the company has systematically redesigned everything from typography (hooray for left alignment!) and shapes to navigation bars and sidebars around Liquid Glass as the unifying foundation, so that the same symbols, behaviors, and interactions feel consistent across all screen sizes and contexts. 

The Pattern of Promised Unity

This isn’t Apple’s first rodeo with “unified design language” promises.

Back in 2013, iOS 7’s flat design overhaul was supposed to create seamless consistency across Apple’s ecosystem. Jony Ive ditched skeuomorphism for minimalist interfaces with translucency and layering—the foundation for everything that followed.

OS X Yosemite (2014) brought those same principles to desktop. Flatter icons, cleaner lines, translucent elements. Same pitch: unified experience across devices.

macOS Big Sur (2020) pushed even further with iOS-like app icons and redesigned interfaces. Again, the promise was consistent visual language across all platforms.

And here we are in 2025 with Liquid Glass making the exact same promises. 

But maybe “goal” is a better word than “promise.”

Consistency Makes the Brand

I’m OK with the goal of having a unified design language. As designers, we love consistency. Consistency is what makes a brand. Apple has proven this over and over again for decades; it is one of the most valuable brands in the world, and it maintains that position not only by making great products, but also by being incredibly disciplined about consistency.

San Francisco debuted 10 years ago as the system typeface for iOS 9 and OS X El Capitan. Apple has since extended it, and it works great in marketing and in interfaces.

iPhone Settings screen showing Liquid Glass grouped table cells with red outline highlighting the concentric shape design.

The rounded corners on their devices are all pretty much the same radii. Now that concentricity is being incorporated into the UI, screen elements will be harmonious with their physical surroundings. Only Apple can do that because they control the hardware and the software. And that is their magic.
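To illustrate what concentric means in practice, here’s a tiny sketch of the rule of thumb designers use (my own illustration, not an Apple API): a nested element stays concentric with its rounded container when its corner radius equals the container’s radius minus the inset between them.

```swift
import Foundation

// Rule of thumb for concentric corners (illustrative only):
// innerRadius = outerRadius - inset, clamped at zero, so the inner and
// outer corner arcs share the same center.
func concentricRadius(outerRadius: CGFloat, inset: CGFloat) -> CGFloat {
    max(outerRadius - inset, 0)
}

// Example: a cell inset 8pt inside a 26pt-radius container gets an 18pt radius.
let cellRadius = concentricRadius(outerRadius: 26, inset: 8)
```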

Design Is Both How It Works and How It Looks

In 2003, two years after the iPod launched, Rob Walker of The New York Times did a profile on Apple. The now-popular quote about design from Steve Jobs comes from that piece.

[The iPod] is, in short, an icon. A handful of familiar clichés have made the rounds to explain this — it’s about ease of use, it’s about Apple’s great sense of design. But what does that really mean? “Most people make the mistake of thinking design is what it looks like,” says Steve Jobs, Apple’s C.E.O. “People think it’s this veneer — that the designers are handed this box and told, ‘Make it look good!’ That’s not what we think design is. It’s not just what it looks like and feels like. Design is how it works.”

People misinterpret this quote all the time to mean that design is only how it works. That is not what Steve meant. He meant that design is both what it looks like and how it works.

Steve did care about aesthetics. That’s why the Graphic Design team mocked up hundreds of PowerMac G5 box designs (the graphics on the box, not the construction). That’s why he obsessed over the materials used in Pixar’s Emeryville headquarters. From Walter Isaacson’s biography:

Because the building’s steel beams were going to be visible, Jobs pored over samples from manufacturers across the country to see which had the best color and texture. He chose a mill in Arkansas, told it to blast the steel to a pure color, and made sure the truckers used caution not to nick any of it.

Liquid Glass is a welcome and much-needed visual refresh. It’s the natural evolution of Apple’s platforms: from skeuomorphic, so users knew they could use their fingers and tap virtual buttons on a touchscreen; to flat, as a response to the cacophony of visual noise in UIs at the time; and now to something in between.

Humans eventually tire of seeing the same thing. Carmakers refresh their vehicle designs every three or four years. Then they do complete redesigns every five to eight years. It gets consumers excited. 

Liquid Glass will help Apple sell a bunch more hardware.

Abstract gradient design with flowing liquid glass elements in blue and pink colors against a gray background, showcasing Apple's new Liquid Glass design language.

Quick Notes About WWDC 2025

Apple’s annual developer conference kicked off today with a keynote that announced:

  • Unified Version 26 across all Apple platforms (iOS, iPadOS, macOS, watchOS, tvOS, visionOS)
  • “Liquid Glass” design system. A complete UI and UX overhaul, the first major redesign since iOS 7
  • Apple Intelligence. Continued small improvements, though not the deep integration promised a year ago
  • Full windowing system on iPadOS. Windows comes to iPad! Finally.

Of course, those are the very high-level highlights.

For designers, the headline is Liquid Glass. Sebastiaan de With’s predictive post and renderings from last week were very spot-on.

I like it. I think iOS and macOS needed a fresh coat of paint and Liquid Glass delivers.

There’s already been some criticism—naturally, because we’re opinionated designers after all!—with some calling it over the top, a rehash of Windows Vista, or an accessibility nightmare.

Apple Music interface showing the new Liquid Glass design with translucent playback controls and navigation bar overlaying colorful album artwork, featuring "Blest" by Yuno in the player and navigation tabs for Home, New, Radio, Library, and Search.

The new Liquid Glass design language acts like real glass, refracting light and bending the image behind it accordingly.

In case you haven’t seen it, it’s a visual and—albeit less so—experience overhaul for the various flavors of Apple OSes. Imagine a transparent glass layer where controls sit. The layer has all the refractive qualities of glass, bending the light as images pass below it, and its edges catching highlights from a light source. This is all powered by a sophisticated 3D engine, I’m sure. It’s gorgeous.

It’s been 12 years since the last major refresh, with iOS 7 bringing on an era of so-called flat design to the world. At the time, it was a natural extension of Jony Ive’s predilection for minimalism, to strip things to their core. What could be more pure than using only type? It certainly appealed to my sensibilities. But what it brought on was a universe of sameness in UI design. 

Person using an iPad with a transparent glass interface overlay, demonstrating the new Liquid Glass design system with translucent app icons visible through the glass layer.


Hand interacting with a translucent glass interface displaying text on what appears to be a tablet or device, showing the new design's transparency effects.

The design team at Apple studied the physical properties of real glass to perfect the material in the new versions of the OSes.

With the release of Liquid Glass, led by Apple’s VP of Design, Alan Dye, I hope we’ll see designers add a little more personality, depth, and texture back into their UIs. No, we don’t need to return to the days of skeuomorphism—kicked off by Mac OS X’s Aqua interface design. I do think there’s been a movement away from flat design recently. Even at the latest Config conference, Figma showed off functionality to add noise and texture into our designs. We’ve been in a flat world for 12 years! Time to add a little spice back in.

Finally, it’s a beta. This is typical of Apple. The implementation will be iterated on, and by the time it ships in September, it will have been further refined.

I do miss a good 4-minute video from Jony Ive talking about the virtues of software material design though…

Talking Heads Release a Video for “Psycho Killer”

The Talking Heads have released a new music video for an old song. Directed by Mike Mills—who is not only a filmmaker but also a graphic designer—and starring Saoirse Ronan, the video for the band’s first hit, “Psycho Killer,” is a wonderful study on the pressures, anxieties, and joys of being a young person in today’s world. It was made to celebrate the band’s 50th anniversary.


Watch on YouTube

On Instagram, the band said, “This video makes the song better- We LOVE what this video is NOT - it’s not literal, creepy, bloody, physically violent or obvious.”

Me too.

Surreal, digitally manipulated forest scene with strong color overlays in red, blue, and purple hues. A dark, blocky abstract logo is superimposed in the foreground.

Thoughts on the 2024 Design Tools Survey

Tommy Geoco and team are finally out with the results of their 2024 UX Design Tools Survey.

First, two quick observations before I move on to longer ones:

  • The respondent population of 2,200+ designers is well-balanced among company size, team structure, client vs. product focus, and leadership responsibility
  • Predictably, Figma dominates the tools stacks of most segments

Surprise #1: Design Leaders Use AI More Than ICs

Bar chart comparing AI adoption rates among design leaders and ICs across different work environments. Agency leaders show the highest adoption at 88.7%, followed by startups, growth-stage, and corporate environments.

From the summary of the AI section:

Three clear patterns emerge from our data:

  1. Leadership-IC Divide. Leaders adopt AI at a higher rate (29.0%) than ICs (19.9%)
  2. Text-first adoption. 75.2% of AI usage focuses on writing, documentation, and content—not visuals
  3. Client Influence. Client-facing designers show markedly higher AI adoption than internal-facing peers

That nine-point difference is interesting. The report doesn’t speculate as to why, but here are some possible reasons:

  • Design leaders are experimenting with AI tools looking for efficiency gains
  • Leaders write more than they design, so they’re using AI more for emails, memos, reports, and presentations
  • ICs are set in their processes and don’t have time to experiment

Bar chart showing that most AI usage is for text-based tasks like copywriting, documentation, and content generation. Visual design tasks such as wireframes, assets, and components are much less common.

I believe that any company operating with resource constraints—which is all startups—needs to embrace AI. AI enables us to do more. I don’t believe—at least not yet—mid- to senior-level jobs are on the line. Engineers can use Cursor to write code, sure, but it’s probably better for them to give Cursor junior-level tasks like bug fixes. Designers should use AI to generate prototypes so that they can test and iterate on ideas more quickly. 

Bar chart showing 17.7% of advanced prototypers use code-based tools like SwiftUI, HTML/CSS/JS, React, and Flutter. Ratings indicate high satisfaction with these approaches, signaling a shift toward development-integrated prototyping.

The data here is stale, unfortunately. The survey was conducted between November 2024 and January 2025, just as the AI prompt-to-code tools were coming to market. I suspect we will see a huge jump in next year’s results.

Surprise #2: There’s Excitement for Framer

“Future of Design Award” banner featuring the Framer logo. Below, text explains the award celebrates innovations shaping design’s future, followed by “Winner: Framer.” Three key stats appear: 10.0% of respondents ranked Framer as a 2025 “tool to try,” 12.1% share in portfolio-building (largest in its category), and a 4.57 / 5 average satisfaction rating (tied for highest).

I’m surprised about Framer winning the “Future of Design” award. Maybe it’s the name of the award; does Framer really represent the “future of design”? Ten percent of respondents say they want to try it in 2025.

I’ve not gone back to Framer since its early days, when it supported code exports. I will give them kudos: they’ve pivoted and built a solid business and platform. I’m personally wary of creating websites for clients on a closed platform; I would rather it be portable, like a Node.js app or even WordPress. But to each their own.

Not Surprised at All

In the report’s conclusion, its first two points are unsurprising:

  1. AI enters the workflow. 8.5% of designers cited AI tools as their top interest for 2025. With substantial AI tooling innovation in early 2025, we expect widespread adoption to accelerate next year.

Like I mentioned earlier, I think this will shift big time. 

  2. Design-code gap narrows. Addressing the challenge faced by 46.3% of teams reporting inconsistencies between design system specifications and code implementations.

As I said in a previous essay on the future of product design, the design-to-code gap is begging to be solved, “For any designer who has ever handed off a Figma file to a developer, they have felt the stinging disappointment days or weeks later when it’s finally coded.…The developer handoff experience has been a well-trodden path full of now-defunct or dying companies like InVision, Abstract, and Zeplin.”

Reminder: The Tools Don’t Make You a Better Designer

Inevitably, someone in the comments section will point this out: Don’t focus on the tool. To quote photographer and camera reviewer Ken Rockwell, “Cameras don’t take pictures, photographers do. Cameras are just another artist’s tool.” Tools are commodities, but our skills as craftspeople, thinkers, curators, and tastemakers are not.

Colorful illustration featuring the Figma logo on the left and a whimsical character operating complex, abstract machinery with gears, dials, and mechanical elements in vibrant colors against a yellow background.

Figma Make: Great Ideas, Nowhere to Go

Nearly three weeks after it was introduced at Figma Config 2025, I finally got access to Figma Make. It is in beta, and Figma made sure we all knew it. So I will say upfront that it’s a bit unfair to do an official review. However, many of the tools in my AI prompt-to-code shootout article are also in beta.

Since this review is fairly visual, I also made a video that summarizes the points in this article.


The Promise: One-to-One With Your Design

Figma's Peter Ng presenting on stage with large text reading "0→1 but 1:1 with your designs" against a dark background with purple accent lighting.

Figma’s Peter Ng presenting on stage Make’s promise: “0→1 but 1:1 with your designs.”

“What if you could take an idea not only from zero to one, but also make it one-to-one with your designs?” said Peter Ng, product designer at Figma. Just like all the other AI prompt-to-code tools, Figma Make is supposed to enable users to prompt their way to a working application. 

The Figma spin is that there’s more control over the output. Click an element and have the prompt apply only to that element. You can also click on something in the canvas and change details like the font family, size, or color.

The other Figma advantage is being able to paste in Figma designs for a more accurate translation to code. That’s the “one-to-one” Ng refers to.

The Reality: Falls Short

I evaluated Figma Make via my standard checkout flow prompt (covering the zero-to-one use case), a second prompt, and a pasted design (one-to-one).

Let’s get the standard evaluation out of the way before moving onto a deeper dive.

Figma Make Scorecard

Figma Make scorecard showing a total score of 58 out of 100, with breakdown: User experience 18/25, Visual design 13/15, Prototype 8/10, Ease of use 9/15, Design Control 6/15, Design system integration 0/15, Speed 9/10, and Editor's Discretion -5/10.

I ran the same prompt through it as the other AI tools:

Create a complete shopping cart checkout experience for an online clothing retailer

Figma Make’s score totaled 58, which puts it squarely in the middle of the pack. This was for a variety of reasons.

The quality of the generated output was pretty good. The UI was nice and clean, if a bit unstyled. This is because Make uses Shadcn UI components. Overall, the UX was exactly what I would expect. Perhaps a progress bar would have been a nice touch.

The generation was fast, clocking in at three minutes, which puts it near the top in terms of speed.

And the fine-grained editing sort of worked as promised. However, my manual changes were sometimes overridden if I used the chat.

Where It Actually Shines

Figma Make interface showing a Revenue Forecast Calculator with a $200,000 total revenue input, "Normal" distribution type selected, monthly breakdown table showing values from January ($7,407) to December ($7,407), and an orange bar chart displaying the normal distribution curve across 12 months with peak values in summer months.

The advantage of these prompt-to-code tools is that it’s really easy to prototype complex interactions (maybe they’re even production-ready?).

To test this, I used a new prompt:

Build a revenue forecast calculator. It should take the input of a total budget from the user and automatically distribute the budget to a full calendar year showing the distribution by month. The user should be able to change the distribution curve from “Even” to “Normal” where “Normal” is a normal distribution curve.

Along with the prompt, I also included a wireframe as a still image. This gave the AI some idea of the structure I was looking for, at least.

The resulting generation was great and the functionality worked as expected. I iterated the design to include a custom input method and that worked too.
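
To make the distribution logic concrete, here’s a minimal sketch of the kind of calculation I was asking Make to generate. This is my own illustration, not Make’s actual output; the function name and the standard deviation are assumptions.

```ts
// A minimal sketch (not Figma Make's generated code) of the forecast logic:
// distribute a total budget across 12 months, either evenly or along a
// normal (Gaussian) curve that peaks mid-year.

type Curve = "even" | "normal";

function distributeBudget(total: number, curve: Curve): number[] {
  const months = 12;

  if (curve === "even") {
    return Array(months).fill(total / months);
  }

  // Gaussian weights centered between June and July, normalized so the
  // twelve monthly values always sum back to the original total.
  const mean = (months - 1) / 2;
  const stdDev = 2; // assumption: controls how peaked the curve is
  const weights = Array.from({ length: months }, (_, m) =>
    Math.exp(-((m - mean) ** 2) / (2 * stdDev ** 2))
  );
  const weightSum = weights.reduce((sum, w) => sum + w, 0);

  return weights.map((w) => (total * w) / weightSum);
}

// Example: $200,000 on a normal curve peaks in the summer months.
console.log(distributeBudget(200_000, "normal").map((v) => Math.round(v)));
```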

The One-to-One Promise Breaks Down

I wanted to see how well Figma Make would work with a well-structured Figma Design file. So I created a homepage for a fictional fitness instructor using auto layout frames, structuring the file as I would divs in HTML.

Figma Design interface showing the original "Body by Reese" fitness instructor homepage design with layers panel on left, main canvas displaying the Pilates hero section and content layout, and properties panel on right. This is the reference design that was pasted into Figma Make for testing.

This is the reference design that was pasted into Figma Make for testing. Notice the well-structured layers!

Then I pasted the design into the chatbox and included a simple prompt. The result was…disappointing. The layout was correct, but the typeface and type sizes were all wrong. I gave that feedback in the chat, and the right font finally appeared.

Then I manually updated the font sizes and got the design looking pretty close to my original. There was one problem: an image was the wrong size and not proportionally scaled. So I asked the AI to fix it.

Figma Make interface showing a fitness instructor homepage with "Body by Reese" branding, featuring a hero image of someone doing Pilates with "Sculpt. Strengthen. Shine." text overlay, navigation menu, and content section with instructor photo and "Book a Class" call-to-action button.

Figma Make’s attempt at translating my Figma design to code.

The AI did not fix it and reverted some of my manual overrides for the fonts. In many ways this is significantly worse than not giving designers fine-grained control in the first place. Overwriting my overrides made me lose trust in the product because I lost work—however minimal it was. It brought me back to the many occasions that Illustrator or Photoshop crashed while saving, thus corrupting the file. Yes, it wasn’t as bad, but it still felt that way.

Dead End by Design

The question of what to do with the results of a Figma Make chat remains. A Figma Make file is its own file type. You can’t bring it back into Figma Design or even Figma Sites to make tweaks. You can publish it, and it’s hosted on Figma’s infrastructure, just like Sites. You can download the code, but it’s kind of useless.

Code Export Is Capped at the Knees

You can download the React code as a zip file. But the code does not contain the necessary package.json that makes it installable on your local machine or on a Node.js server. The package file tells the npm installer which dependencies need to be installed for the project to run.

I tried using Cursor to figure out where to move the files (they have to be in a src directory) and to help me write a package.json, but it would have taken too much time to reverse engineer the project.
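
For context, here is roughly what the missing manifest would need to look like for a Vite-style React project. This is a hypothetical sketch, not what Figma exports; a real Make export would also need its UI dependencies (shadcn/ui, Tailwind, and so on), which is exactly why reverse engineering it didn’t seem worth the time.

```json
{
  "name": "figma-make-export",
  "private": true,
  "version": "0.1.0",
  "scripts": {
    "dev": "vite",
    "build": "vite build",
    "preview": "vite preview"
  },
  "dependencies": {
    "react": "^18.3.1",
    "react-dom": "^18.3.1"
  },
  "devDependencies": {
    "@vitejs/plugin-react": "^4.3.0",
    "typescript": "^5.4.0",
    "vite": "^5.2.0"
  }
}
```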

Nowhere to Go

Maybe using Figma Make inside Figma Sites will be a better use case. That feature, the so-called Code Layers mentioned in the Make and Sites deep dive presentation at Config, isn’t yet enabled for me. In practice, it sounds very much like Code Components in Framer.

The Bottom Line

Figma had to debut Make in order to stay competitive. There’s just too much out there nipping at their heels. A design tool like Figma is necessary to unlock the freeform exploration designers need, but the natural next step is being able to make those designs real from within the tool. The likes of Lovable, v0, and Subframe allow you to start with a design from Figma and turn that design into working code. The thesis for many of those tools is that they’re taking care of everything after the design-to-developer handoff: get a design, give the AI some context, and we’ll make it real. Figma has occupied everything before that handoff for a while, and they’re finally taking the next step.

However, in its current state, Figma Make is a dead end (see the previous section). It is beta software, though, and should get better before its official release. As a preview, I think it’s cool, despite its flaws and bugs. But I wouldn’t use it for any actual work.

During this beta period, Figma needs to…

  • Add complete code export so the resulting code is portable, rather than keeping it within its closed system
  • Fix the fiendish bugs around the AI overwriting manual overrides
  • Figure out tighter integration between Make and the other products, especially Design
Stylized digital artwork of two humanoid figures with robotic and circuit-like faces, set against a vivid red and blue background.

The AI Hype Train Has No Brakes

I remember, two years ago, the CEO of the startup I worked for at the time saying that no VC investments were being made unless they had something to do with AI. I thought AI was overhyped and that the media frenzy over it couldn’t get any crazier. I was wrong.

Looking at Google Trends data, interest in AI has doubled in the last 24 months. And I don’t think it’s hit its plateau yet.

Line chart showing Google Trends interest in “AI” from May 2020 to May 2025, rising sharply in early 2023 and peaking near 100 in early 2025.

So the AI hype train continues. Here are four different pieces about AI, exploring AGI (artificial general intelligence) and its potential effects on the labor force and the fate of our species.

AI Is Underhyped

TED recently published a conversation between creative technologist Bilawal Sidhu and Eric Schmidt, the former CEO of Google. 


Schmidt says:

For most of you, ChatGPT was the moment where you said, “Oh my God, this thing writes, and it makes mistakes, but it’s so brilliantly verbal.” That was certainly my reaction. Most people that I knew did that.

This was two years ago. Since then, the gains in what is called reinforcement learning, which is what AlphaGo helped invent and so forth, allow us to do planning. And a good example is look at OpenAI o3 or DeepSeek R1, and you can see how it goes forward and back, forward and back, forward and back. It’s extraordinary.

So I’m using deep research. And these systems are spending 15 minutes writing these deep papers. That’s true for most of them. Do you have any idea how much computation 15 minutes of these supercomputers is? It’s extraordinary. So you’re seeing the arrival, the shift from language to language. Then you had language to sequence, which is how biology is done. Now you’re doing essentially planning and strategy. The eventual state of this is the computers running all business processes, right? So you have an agent to do this, an agent to do this, an agent to do this. And you concatenate them together, and they speak language among each other. They typically speak English language.

He’s saying that within two years, we went from a “stochastic parrot” to an independent agent that can plan, search the web, read dozens of sources, and write a 10,000-word research paper on any topic, with citations.

Later in the conversation, when Sidhu asks how humans are going to spend their days once AGI can take care of the majority of productive work, Schmidt says: 

Look, humans are unchanged in the midst of this incredible discovery. Do you really think that we’re going to get rid of lawyers? No, they’re just going to have more sophisticated lawsuits. …These tools will radically increase that productivity. There’s a study that says that we will, under this set of assumptions around agentic AI and discovery and the scale that I’m describing, there’s a lot of assumptions that you’ll end up with something like 30-percent increase in productivity per year. Having now talked to a bunch of economists, they have no models for what that kind of increase in productivity looks like. We just have never seen it. It didn’t occur in any rise of a democracy or a kingdom in our history. It’s unbelievable what’s going to happen.

In other words, we’re still going to be working, but doing a lot less grunt work. 

Feel Sorry for the Juniors

Aneesh Raman, chief economic opportunity officer at LinkedIn, writing an op-ed for The New York Times:

Breaking first is the bottom rung of the career ladder. In tech, advanced coding tools are creeping into the tasks of writing simple code and debugging — the ways junior developers gain experience. In law firms, junior paralegals and first-year associates who once cut their teeth on document review are handing weeks of work over to A.I. tools to complete in a matter of hours. And across retailers, A.I. chatbots and automated customer service tools are taking on duties once assigned to young associates.

In other words, if AI tools are handling the grunt work, junior staffers aren’t learning the trade by doing the grunt work.

Vincent Cheng wrote recently, in an essay titled, “LLMs are Making Me Dumber”:

The key question is: Can you learn this high-level steering [of the LLM] without having written a lot of the code yourself? Can you be a good SWE manager without going through the SWE work? As models become as competent as junior (and soon senior) engineers, does everyone become a manager?

But It Might Be a While

Cade Metz, also for the Times:

When a group of academics founded the A.I. field in the late 1950s, they were sure it wouldn’t take very long to build computers that recreated the brain. Some argued that a machine would beat the world chess champion and discover its own mathematical theorem within a decade. But none of that happened on that time frame. Some of it still hasn’t.

Many of the people building today’s technology see themselves as fulfilling a kind of technological destiny, pushing toward an inevitable scientific moment, like the creation of fire or the atomic bomb. But they cannot point to a scientific reason that it will happen soon.

That is why many other scientists say no one will reach A.G.I. without a new idea — something beyond the powerful neural networks that merely find patterns in data. That new idea could arrive tomorrow. But even then, the industry would need years to develop it.

My quibble with Metz’s article is that it moves the goal posts a bit to include the physical world:

One obvious difference is that human intelligence is tied to the physical world. It extends beyond words and numbers and sounds and images into the realm of tables and chairs and stoves and frying pans and buildings and cars and whatever else we encounter with each passing day. Part of intelligence is knowing when to flip a pancake sitting on the griddle.

As I understood the definition of AGI, it was not about the physical world, but just intelligence, or knowledge. I accept there are multiple definitions of AGI and not everyone agrees on what that is.

The Wikipedia article about AGI states that researchers generally agree that an AGI system must do all of the following:

  • reason, use strategy, solve puzzles, and make judgments under uncertainty
  • represent knowledge, including common sense knowledge
  • plan
  • learn
  • communicate in natural language
  • if necessary, integrate these skills in completion of any given goal

The article goes on to say that “AGI has never been proscribed a particular physical embodiment and thus does not demand a capacity for locomotion or traditional ‘eyes and ears.’”

Do We Lose Control by 2027 or 2031?

Metz’s article is likely in response to the “AI 2027” scenario that was published by the AI Futures Project a couple of months ago. As a reminder, the forecast is that by mid-2027, we will have achieved AGI. And a race between the US and China will effectively end the human race by 2030. Gulp.

…Consensus-1 [the combined US-Chinese superintelligence] expands around humans, tiling the prairies and icecaps with factories and solar panels. Eventually it finds the remaining humans too much of an impediment: in mid-2030, the AI releases a dozen quiet-spreading biological weapons in major cities, lets them silently infect almost everyone, then triggers them with a chemical spray. Most are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones. Robots scan the victims’ brains, placing copies in memory for future study or revival.

Max Harms wrote a reaction to the AI 2027 scenario and it’s a must-read:

Okay, I’m annoyed at people covering AI 2027 burying the lede, so I’m going to try not to do that. The authors predict a strong chance that all humans will be (effectively) dead in 6 years…

Yeah, OK, I buried that lede as well in my previous post about it. Sorry. But, there’s hope…

As far as I know, nobody associated with AI 2027, as far as I can tell, is actually expecting things to go as fast as depicted. Rather, this is meant to be a story about how things could plausibly go fast. The explicit methodology of the project was “let’s go step-by-step and imagine the most plausible next-step.” If you’ve ever done a major project (especially one that involves building or renovating something, like a software project or a bike shed), you’ll be familiar with how this is often wildly out of touch with reality. Specifically, it gives you the planning fallacy.

Harms is saying that while Daniel Kokotajlo wrote in the AI 2027 scenario that humans effectively lose control of AI in 2027, Harms’ median is “around 2030 or 2031.” Four more years!

When to Pull the Plug

In the AI 2027 scenario, the superintelligent AI dubbed Agent-4 is not aligned with the goals of its creators:

Agent-4, like all its predecessors, is misaligned: that is, it has not internalized the Spec in the right way. This is because being perfectly honest all the time wasn’t what led to the highest scores during training. The training process was mostly focused on teaching Agent-4 to succeed at diverse challenging tasks. A small portion was aimed at instilling honesty, but outside a fairly narrow, checkable domain, the training process can’t tell the honest claims from claims merely appearing to be honest. Agent-4 ends up with the values, goals, and principles that cause it to perform best in training, and those turn out to be different from those in the Spec.

At the risk of oversimplifying, maybe all we need to do is to know when to pull the plug. Here’s Eric Schmidt again:

So for purposes of argument, everyone in the audience is an agent. You have an input that’s English or whatever language. And you have an output that’s English, and you have memory, which is true of all humans. Now we’re all busy working, and all of a sudden, one of you decides it’s much more efficient not to use human language, but we’ll invent our own computer language. Now you and I are sitting here, watching all of this, and we’re saying, like, what do we do now? The correct answer is unplug you, right? Because we’re not going to know, we’re just not going to know what you’re up to. And you might actually be doing something really bad or really amazing. We want to be able to watch. So we need provenance, something you and I have talked about, but we also need to be able to observe it. To me, that’s a core requirement. There’s a set of criteria that the industry believes are points where you want to, metaphorically, unplug it. One is where you get recursive self-improvement, which you can’t control. Recursive self-improvement is where the computer is off learning, and you don’t know what it’s learning. That can obviously lead to bad outcomes. Another one would be direct access to weapons. Another one would be that the computer systems decide to exfiltrate themselves, to reproduce themselves without our permission. So there’s a set of such things.

My Takeaway

As Tobias van Schneider directly and succinctly said, “AI is here to stay. Resistance is futile.” As consumers of core AI technology, and as designers of AI-enabled products, there’s not a ton we can do about the most pressing AI safety issues. We will need to trust the frontier labs like OpenAI and Anthropic for that. But as customers of those labs, we can voice our concerns about safety. As we build our products, especially agentic AI, there are certainly considerations to keep in mind:

  • Continue to keep humans in the loop. Users need to verify the agents are making the right decisions and not going down any destructive paths.
  • Inform users about what the AI is doing. The more users understand how AI works and how these systems make their decisions, the better. One reason DeepSeek R1 resonated was that it displayed its planning and reasoning.
  • Practice responsible AI development. As we integrate AI into products, commit to regular ethical audits and bias testing. Establish clear guidelines for what kinds of decisions AI should make independently versus when human judgment is required. This includes creating emergency shutdown procedures for AI systems that begin to display concerning behaviors, taking Eric Schmidt’s “pull the plug” advice literally in our product architecture.
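
Here is a minimal sketch of what that last point could look like in product code. It is purely illustrative: the types and function names are hypothetical, not any real framework’s API, but it shows the shape of a human-in-the-loop gate combined with a kill switch.

```ts
// Hypothetical sketch of a human-in-the-loop gate for an agentic feature.
// Consequential actions wait for explicit approval, and a kill switch can
// halt the agent before any further step runs.

type AgentAction = {
  description: string;    // shown to the user for review
  consequential: boolean; // e.g., sends email, spends money, deletes data
  run: () => Promise<void>;
};

interface Oversight {
  killSwitchEngaged(): Promise<boolean>;                  // ops or the user can flip this
  requestApproval(description: string): Promise<boolean>; // surfaced in the product UI
}

async function executePlan(plan: AgentAction[], oversight: Oversight): Promise<void> {
  for (const action of plan) {
    // Check the kill switch before every step, not just at the start.
    if (await oversight.killSwitchEngaged()) {
      console.warn("Kill switch engaged; halting agent.");
      return;
    }

    // Consequential actions require a human to approve them first.
    if (action.consequential && !(await oversight.requestApproval(action.description))) {
      console.info(`Skipped (not approved): ${action.description}`);
      continue;
    }

    await action.run();
  }
}
```
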
Comic-book style painting of the Sonos CEO Tom Conrad

What Sonos’ CEO Is Saying Now—And What He’s Still Not

Four months into his role as interim CEO, Tom Conrad has been remarkably candid about Sonos’ catastrophic app launch. In recent interviews with WIRED and The Verge, he’s taken personal responsibility—even though he wasn’t at the helm, just on the board—acknowledged deep organizational problems, and outlined the company’s path forward.

But while Conrad is addressing more than many expected, some key details remain off-limits.

What Tom Conrad Is Now Saying

The interim CEO has been surprisingly direct about the scope of the failure. “We all feel really terrible about that,” he told WIRED, taking personal responsibility even though he was only a board member during the launch.

Conrad acknowledges three main categories of problems:

  • Missing features that were cut to meet deadlines
  • User experience changes that jarred longtime customers
  • Performance issues that the company “just didn’t understand”

He’s been specific about the technical fixes, explaining that the latest updates dramatically improve performance on older devices like the PLAY:1 and PLAY:3. He’s also reorganized the company, cutting from “dozens” of initiatives to about 10 focused areas and creating dedicated software teams.

Perhaps most notably, Conrad has acknowledged that Sonos lost its way as a company. “I think perhaps we didn’t make the right level of investment in the platform software of Sonos,” he admits, framing the failed rewrite as an attempt to remedy years of neglect.

What Remains Unspoken

However, Conrad’s interviews still omit several key details that my reporting uncovered:

The content team distraction: He doesn’t mention that while core functionality was understaffed, Sonos had built a large team focused on content features like Sonos Radio—features that customers didn’t want and that generated minimal revenue.

However, Conrad does seem to acknowledge this misallocation implicitly. He told The Verge:

If you look at the last six or seven years, we entered portables and we entered headphones and we entered the professional sort of space with software expressions, we weren’t as focused as we might have been on the platform-ness of Sonos. So finding a way to make our software platform a first-class citizen inside of Sonos is a big part of what I’m doing here.

This admission that software wasn’t a “first-class citizen” aligns with accounts from former employees—the core controls team remained understaffed while the content team grew.

The QA cuts: His interviews don’t address the layoffs in quality assurance and user research that happened shortly before launch, removing the very people whose job was to catch these problems.

The hardware coupling: He hasn’t publicly explained why the software overhaul was tied to the Ace headphones launch, creating artificial deadlines that forced teams to ship incomplete work.

The warnings ignored: There’s no mention of the engineers and designers who warned against launching, or how those warnings were overruled by business pressures.

A Different Kind of Transparency

Tom Conrad’s approach represents a middle ground between complete silence and full disclosure. He’s acknowledged fundamental strategic failures—“we didn’t make the right level of investment”—without diving into the specific decisions that led to them.

This partial transparency may be strategic—admitting to systemic problems while avoiding details that could expose specific individuals or departments to blame. It’s also possible that as interim CEO, Conrad is focused on moving forward rather than assigning retroactive accountability. And I get that.

The Path Forward

What’s most notable is how Conrad frames Sonos’ identity. He consistently describes it as a “platform company” rather than just a hardware maker, suggesting a more integrated approach to hardware and software development.

He’s also been direct about customer relationships: “It is really an honor to get to work on something that is so webbed into the emotional fabric of people’s lives,” he told WIRED, “but the consequence of that is when we fail, it has an emotional impact.”

An Ongoing Story

The full story of how Sonos created one of the tech industry’s most spectacular software failures may never be told publicly. Tom Conrad’s interviews provide the official version—a company that made mistakes but is now committed to doing better.

Whether that’s enough for customers who lived through the chaos will depend less on what Conrad says and more on what Sonos delivers. The app is improving, morale is reportedly better, and the company seems focused on its core strengths.

But the question remains: Has Sonos truly learned from what went wrong, or just how to talk about it better?

As Conrad told The Verge, when asked about becoming permanent CEO: “I’ve got a bunch of big ideas about that, but they’re a little bit on the shelf behind me for the moment until I get the go-ahead.”

For now, fixing what’s broken takes precedence over explaining how it got that way. Whether that’s leadership or willful ignorance, only time will tell.

Illustrated background of colorful wired computer mice on a pink surface with a large semi-transparent Figma logo centered in the middle.

Figma Takes a Big Swing

Last week, Figma held their annual user conference Config in San Francisco. Since its inception in 2020, it has become a significant UX conference that covers more than just Figma’s products and community. While I’ve not yet had the privilege of attending in person, I do try to catch the livestreams or videos afterwards.

Nearly 17 months after Adobe and Figma announced the termination of their merger talks, Figma flexed their muscle—fueled by the $1 billion breakup fee, I’m sure—by announcing four new products. They are Figma Draw, Make, Sites, and Buzz.

  • Draw: It’s a new mode within Figma Design that reveals additional vector drawing features.
  • Make: This is Figma’s answer to Lovable and the other prompt-to-code generators.
  • Sites: Finally, you can design and publish websites from Figma, hosted on their infrastructure.
  • Buzz: Pass off assets to clients and marketing teams and they can perform lightweight and controlled edits in Buzz.

With these four new products, Figma is really growing up and becoming more than a two-and-a-half-product company; it is building its own creative suite, if you will, and taking a big swing at Adobe.

On social media, Figma posted this image with the copy “New icons look iconic in new photo.”

Colorful app icons from Figma

 

A New Suite in Town


Kudos to Figma for rolling out most of these new products the day they were announced. About two hours after Dylan Field stepped off the stage—and after quitting Figma and reopening it a few times—I got access to Draw, Sites, and Buzz. I have yet to get Make access.

What follows are some hot takes. I played with Draw extensively, Sites a bit, and not much with Buzz. And I have a lot of thoughts around Make, after watching the deep dive talk from Config. 

Figma Draw


I have used Adobe Illustrator since the mid-1990s. Its bezier drawing tools have been the industry standard for a long time and Figma has never been able to come close. So they are trying to fix it with a new product called Draw. It’s actually a mode within the main Design application. By toggling into this mode, the UI switches a little and you get access to expanded features, including a layers panel with thumbnails and a different toolbar that includes a new brush tool. Additionally, any vector stroke can be turned into a brush stroke or a new “dynamic” stroke.

A brush stroke style is what you’d expect—an organic, painterly stroke, and Figma has 15 styles built in. There are no calligraphic (i.e., angled) options, as all the strokes start with a 90-degree endcap. 

Editing vectors has been much improved. You can finally easily select points inside a shape by dragging a selection lasso around them. There is a shape builder tool to quickly create booleans, and a bend tool to, well, bend straight lines.

Oh, Snap!

I’m not an illustrator, but I used to design logos and icons a lot. So I decided to recreate a monogram from my wedding. (It’s my wedding anniversary coming up. Ahem.) It’s a very simple modified K and R with a plus sign between the letterforms.

The very first snag I hit was that by default, Figma’s pixel grid is turned on. The vectors in letterforms don’t always align perfectly to the pixel grid. So I had to turn both the grid lines and the grid snapping off.

I’m very precise with my vectors. I want lines snapping perfectly to other edges or vertices. In Adobe Illustrator, snapping point to point is automatic. Snapping point to edge or edge to edge is easily done once Smart Guides are turned on. In Figma, snapping to corners and edges is automatic, but only around the outer bounds of a shape. When I tried to draw a rectangle to extend the crossbar of the R, I wasn’t able to snap the corner or the edge to ensure it was precise.

Designing the monogram at 2x speed in Figma Draw. I’m having a hard time getting points and edges to snap in place for precision.

Designing the monogram at 2x speed in Adobe Illustrator. Precision is a lot easier because of Smart Guides.

Not Ready to Print

When Figma showed off Draw onstage at Config, whispers of this being an Adobe Illustrator killer ricocheted through social media. (OK, I even said as much on Threads: “@figma is taking on Illustrator…”).

Also during the Draw demo, they showed off two new effects called Texture and Noise. Texture will grunge up the shape—it can look like a bad photocopy or rippled glass. And Noise will add monochromatic, dichromatic, or colored noise to a shape.

I decided to take the K+R monogram and add some effects to it, making it look like it was embossed into sandstone. It looks cool on screen, and when I zoomed in, the noise pattern rendered smoothly. I exported this as a PDF and opened the result in Illustrator.

I expected all the little dots in the noise to be vector shapes and masked within the monogram. Much to my surprise, no. The output is simply two rectangular clipping paths with low-resolution bitmaps placed in. 🤦🏻‍♂️

Pixelated image of a corner of a letter K

Opening the PDF exported from Figma in Illustrator, I zoomed in 600% to reveal pixels rather than vector texture shapes.

I think Figma Draw is great for on-screen graphics—which, let’s face it, is likely the vast majority of stuff being made. But it is not ready for any print work. There’s no support for the CMYK color space, spot colors, high-resolution effects, etc. Adobe Illustrator is safe.

Figma Sites


Figma Sites is the company’s answer to Framer and Webflow. For years, I’ve personally thought that Figma should just include publishing in their product, and apparently so did they! At the end of the deep dive talk, one of the presenters showed a screenshot of an early concept from 2018 or ’19.

Two presenters on stage demoing a Figma interface with a code panel showing a script that dynamically adds items from a CSV file to a scene.

So it’s a new app, like FigJam and Slides, and therefore has its own UI. It shares a lot of DNA with Figma Design, so it feels familiar, but different.

Interestingly, they’ve introduced a new skinny vertical toolbar on the left, before the layers panel. The canvas is in the center, and an inspect panel is on the right. I don’t think they need the vertical toolbar; homes could be found for its seven items elsewhere.

Figma Sites app showing responsive web page designs for desktop, tablet, and mobile, with a bold headline, call-to-action buttons, and an abstract illustration.

The UI of Figma Sites.

When creating a new webpage, the app will automatically add the desktop and mobile breakpoints. It also supports the tablet breakpoint out of the box and you can add more. Just like Framer, you can see all the breakpoints at once. I prefer this approach to what all the WordPress page builders and Webflow do, which is toggling and only seeing one breakpoint at a time.

The workflow is this: 

  1. Start with a design from Figma Design, then copy and paste it into Sites.
  2. Adjust your design for the various responsive breakpoints.
  3. Add interactivity. This UI is very much like the existing prototyping UI. You can link pages together and add a plethora of effects, including hover effects, scrolling parallax and transforms, etc.

Component libraries from Figma are also available, and it’s possible to design within the Sites app as well. They have also introduced the concept of Blocks, which anyone coming from a WordPress page builder will find very familiar. They are essentially prebuilt sections that you can drop into your design and edit. There are also blocks for standard embeds like YouTube and Google Maps, plus support for custom iframes.

During the keynote, they demonstrated the CMS functionality. AI can assist with creating the schema for each collection (e.g., blog posts would be a collection containing many records). Then you assign fields to layers in your design. And finally, content editors can come in and edit the content in a focused edit panel without messing with your design.
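
To make the collection-and-records model concrete, here’s a hypothetical sketch of how a blog-post collection might be shaped. The field names mirror what’s visible in the screenshot below; the TypeScript types are my own illustration, not Figma Sites’ actual schema format.

```ts
// Illustrative only: a "Blog posts" collection as plain TypeScript types.
// This is not Figma Sites' schema format, just the collection/record/field idea.

type BlogPostRecord = {
  title: string;      // bound to a text layer in the design
  slug: string;       // used for the page URL
  coverPhoto: string; // image URL, bound to an image layer
  summary: string;
  date: string;       // ISO date, e.g. "2025-05-12"
  body: string;       // rich text edited in the CMS panel
};

type Collection<T> = {
  name: string;
  records: T[];
};

export const blogPosts: Collection<BlogPostRecord> = {
  name: "Blog posts",
  records: [],
};
```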

CMS view in Figma Sites showing a blog post editor with fields for title, slug, cover photo, summary, date, and rich text content, alongside a list of existing blog entries.

A CMS is coming to Figma Sites that will allow content editors to easily edit pages and posts.

Publishing to the web is as simple as clicking the Publish button. Looks like you can assign a custom domain name and add the standard metadata like site title, favicon, and even a Google Analytics tag.

Side note: Web developers have been looking at the code quality of the output, and they’re not loving what they’re seeing. In a YouTube video, CSS evangelist Kevin Powell said, “it’s beyond div soup,” referring to the many, many nested divs in the code. Near the end of his video he points out that while Figma has typography styles, they missed that you need to connect those styles to HTML markup. For example, you could have a style called “Headline,” but is it an h1, h2, or h3? It’s unclear to me whether Sites is writing React JavaScript or HTML and CSS, but I’d wager it’s the former.
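
To illustrate Powell’s point, here’s a contrived before-and-after, assuming Sites emits React-style markup (my wager above, not something Figma has confirmed). The component and class names are made up; the point is that a type style alone doesn’t tell the output which HTML element to render.

```tsx
import React from "react";

// "Div soup": the Headline style is applied, but the markup says nothing about
// document structure, so it means little to screen readers or search engines.
export const DivSoupHero = () => (
  <div className="section">
    <div className="headline">Page title goes here</div>
    <div className="body">Supporting copy goes here.</div>
  </div>
);

// The same visual style mapped to semantic elements: "Headline" is explicitly
// an h1, and the supporting copy is a paragraph inside a section.
export const SemanticHero = () => (
  <section>
    <h1 className="headline">Page title goes here</h1>
    <p className="body">Supporting copy goes here.</p>
  </section>
);
```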

In the product right now, there is no code export, nor can you see the code that it’s writing. In the deep dive, they mentioned that code authoring was “coming very, very, very soon.”

While it’s not yet available in the beta—at least the one that I currently have access to—in the deep dive talk, they introduced a new concept called a “code layer.” This is a way to bring advanced interactivity into your design using AI chat that produces React code. Therefore on the canvas, Figma has married traditional design elements with code-rendered designs. You can click into these code layers at any time to review and edit the code manually or with AI chat. Conceptually, I think this is very smart, and I can’t wait to play with it.

Webflow and Framer have spent many years maturing their products and respective ecosystems. Figma Sites is the newcomer and I am sure this will give the other products a run for their money, if they fix some of the gaps.

Figma Make

Like I said earlier, I don’t yet have access to Figma Make. But I watched the deep dive twice and did my best impression of Rick Deckard saying “enhance” on the video. So here are some thoughts.

From the keynote, it looked like its own app. The product manager for Make showed off examples made by the team that included a bike trail journal, psychedelic clock, music player, 3D playground, and Minecraft clone. But it also looked like it’s embedded into Sites.

Presenter demoing Figma Make, an AI-powered tool that transforms design prompts into interactive code; the screen shows a React component for a loan calculator with sliders and real-time repayment updates.

The UI of Figma Make looks familiar: Chat, code, preview.

What is unclear to me is if we can take the output from Make and bring it into Sites or Design and perform more extensive design surgery.

Figma Buzz

Figma Buzz looks to be Figma’s answer to Canva and Adobe Express. Design static assets like Instagram posts in Design, then bring them into Buzz and give access to your marketing colleagues so they can update the copy and photos as necessary. You can create and share a library of asset templates for your organization. Very straightforward, and honestly, I’ve not spent a lot of time with this one. One thing to note: even though this is for marketers to create assets, just like Figma Design/Draw, there’s no support for the CMYK color space, and any elements using the new texture or noise effects will turn into raster images. 

Figma Is Becoming a Business

On social media I read a lot of comments from people lamenting that Figma is overstuffing its core product and losing its focus, and that it should just improve what it has.

Social media post by Nick Finck expressing concern that Figma’s new features echo existing tools and contribute to product bloat, comparing the direction to Adobe’s strategy.

An example of some of the negative responses on social media to Figma’s announcements.

We don’t live in that world. Figma is a venture-backed company that has raised nearly $750 million and is currently valued at $12.5 billion. They are not going to just focus on a single product; that’s not how it works. And they are preparing to IPO.

In a quippy post on Bluesky, as I was live-posting the keynote, I also said, “Figma is the new Adobe.”

Social media post by Roger Wong (@lunarboy.com) stating “Figma is the new Adobe” with the hashtag #config2025.

Shifting the Center of Gravity

I meant a couple of things. First, Adobe and the design industry have grown up together, tied at the hip. Adobe invented PostScript, the page description language that underpins PDF and that, together with the Mac, enabled the whole desktop publishing industry. There are a lot of Adobe haters out there because of the subscription model, bloatware, etc., but Adobe has always been a part of our profession. They bought rival Macromedia in 2005 to add digital design tools like Dreamweaver, Director, and Flash to their offering.

Amelia Nash, writing for PRINT Magazine about her recent trip to Adobe MAX in London (Adobe’s equivalent of Figma Config, running since 2003):

I had come into MAX feeling like an outsider, anxious that maybe my time with Adobe had passed, that maybe I was just a relic in a shiny new creative world. But I left with a reminder that Adobe still sees us, the seasoned professionals who built our careers with their tools, the ones who remember installing fonts manually and optimizing TIFFs for press. Their current marketing efforts may chase the next-gen cohort (with all its hyperactive branding and emoji-saturated optimism), but the tools are still evolving for us pros, too.

Adobe MAX didn’t just show me what’s new, it reminded me of what’s been true throughout my design career: Adobe is for creatives. All of us. Still.

Figma has created buzz around Config, with programming that featured talks titled “How top designers find their path and creative spark with Kevin Twohy” and “Designing for Climate Disaster with Megan Metzger.” It’s clear they want to occupy the same place in digital designers’ hearts that Adobe has held for graphic designers for over 40 years.

Building a Creative Suite

(I will forever call it Adobe Creative Suite, not Creative Cloud.)

By doubling the number of products they sell, they are building a creative suite and expanding their market. Same playbook as Adobe.

Do I lament that Figma is becoming like Adobe? No. I understand they’re a business. It’s a company full of talented people who are endeavoring to do the right thing and build the right tools for their audiences of designers, developers, and marketers.

Competition Is Good

The regulators were right. Adobe and Figma should not have merged. A year and a half later, riding the wave of goodwill it has engendered with the digital design community, Figma introduced four new products to produce work with. They’ve taken a fresh look at brushes and effects, bringing in approaches from WebGL. They’re being thoughtful about how they enable designers to integrate code into our workflows. And they’re rolling out AI prompt-to-code features in a way that makes sense for us.

To be sure, these products are all beta and have a long way to go. And I’m excited to go play.

The System Has Been Updated

I’ve been seeing this new ad from Coinbase these past few days and love it. Made by independent agency Isle of Any, this spot has on-point animation, a banging track, and a great concept that plays with the Blue Screen of Death.


I found this one article about it from Little Black Book:

“Crypto is fundamentally updating the financial system,” says Toby Treyer-Evans, co-founder of Isle of Any, speaking with LBB. “So, to us it felt like an interesting place to start for the campaign, both as a film idea and as a way to play with the viewer and send a message. When you see it on TV, in the context of other advertising, it’s deliberately arresting… and blue being Coinbase’s brand colour is just one of those lovely coming togethers.”

A futuristic scene with a glowing, tech-inspired background showing a UI design tool interface for AI, displaying a flight booking project with options for editing and previewing details. The screen promotes the tool with a “Start for free” button.

Beyond the Prompt: Finding the AI Design Tool That Actually Works for Designers

There has been an explosion of AI-powered prompt-to-code tools within the last year. The space began with full-on integrated development environments (IDEs) like Cursor and Windsurf, which enabled developers to leverage AI assistants right inside their coding apps. Then came tools like v0, Lovable, and Replit, where users could prompt screens into existence at first and, before long, entire applications.

A couple weeks ago, I decided to test out as many of these tools as I could. My aim was to find the app that would combine AI assistance, design capabilities, and the ability to use an organization’s coded design system.

While my previous essay was about the future of product design, this article will dive deep into a head-to-head between all eight apps that I tried. I recorded the screen as I did my testing, so I’ve put together a video as well, in case you didn’t want to read this.


It is a long video, but there’s a lot to go through. It’s also my first video on YouTube, so this is an experiment.

The Bottom Line: What the Testing Revealed

I won’t bury the lede here. AI tools can be frustrating because they are probabilistic. One hour they can solve an issue quickly and efficiently, while the next they can spin on a problem and make you want to pull your hair out. Part of this is the LLM—and they all use some combo of the major LLMs. The other part is the tool itself and how it handles (or doesn’t handle) what happens when its LLM fails.

For example, this morning I re-evaluated Lovable and Bolt because they’ve released new features within the last week, and I thought it would only be fair to assess the latest version. But both performed worse than in my initial testing two weeks ago. In fact, I tried Bolt twice this morning with the same prompt because the first attempt netted a blank preview. Unfortunately, the second attempt also resulted in a blank screen and then I ran out of credits. 🤷‍♂️

Scorecard for Subframe, with a total of 79 points across different categories: User experience (22), Visual design (13), Prototype (6), Ease of use (13), Design control (15), Design system integration (5), Speed (5), Editor’s discretion (0).

For designers who want actual design tools to work on UI, Subframe is the clear winner. The other tools go directly from prompt to code, skipping giving designers any control via a visual editor. We’re not developers, so manipulating the design in code is not for us. We need to be able to directly manipulate the components by clicking and modifying shapes on the canvas or changing values in an inspector.

For me, the runner-up is v0, if you want to use it only for prototyping and for getting ideas. It’s quick—the UI is mostly unstyled, so it doesn’t get in the way of communicating the UX.

The Players: Code-Only vs. Design-Forward Tools

There are two main categories of contenders: code-only tools, and code plus design tools.

Code-Only

  • Bolt
  • Lovable
  • Polymet
  • Replit
  • v0

Code + Design

  • Onlook
  • Subframe
  • Tempo

My Testing Approach: Same Prompt, Different Results

As mentioned at the top, I tested these tools between April 16–27, 2025. As with most SaaS products, I’m sure things change daily, so this report captures a moment in time.

For my evaluation, since all these tools allow for generating a design from a prompt, that’s where I started. Here’s my prompt:

Create a complete shopping cart checkout experience for an online clothing retailer

I would expect the following pages to be generated:

  • Shopping cart
  • Checkout page (or pages) to capture payment and shipping information
  • Confirmation

I scored each app based on the following rubric:

  • Sample generation quality
  • User experience (25)
  • Visual design (15)
  • Prototype (10)
  • Ease of use (15)
  • Control (15)
  • Design system integration (10)
  • Speed (10)
  • Editor’s discretion (±10)

The Scoreboard: How Each Tool Stacked Up

AI design tools for designers, with scores: Subframe 79, Onlook 71, v0 61, Tempo 59, Polymet 58, Lovable 49, Bolt 43, Replit 31. Evaluations conducted between 4/16–4/27/25.

Final summary scores for AI design tools for designers. Evaluations conducted between 4/16–4/27/25.

Here are the summary scores for all eight tools. For the detailed breakdown of scores, view the scorecards here in this Google Sheet.

The Blow-by-Blow: The Good, the Bad, and the Ugly

Bolt

Bolt screenshot: A checkout interface with a shopping cart summary, items listed, and a “Proceed to Checkout” button, displaying prices and order summary.

First up, Bolt. Classic prompt-to-code pattern here—text box, type your prompt, watch it work. 

Bolt shows you the code generation in real-time, which is fascinating if you’re a developer but mostly noise if you’re not. The resulting design was decent but plain, with typical UX patterns. It missed delivering the confirmation page I would expect. And when I tried to re-evaluate it this morning with their new features? Complete failure—blank preview screens until I ran out of credits. No rhyme or reason. And there it is—a perfect example of the maddening inconsistency these tools deliver. Working beautifully in one session, completely broken in another. Same inputs, wildly different outputs.

Score: 43

Lovable

Lovable screenshot: A shipping information form on a checkout page, including fields for personal details and a “Continue to Payment” button.

Moving on to Lovable, which I captured this morning right after they launched their 2.0 version. The experience was a mixed bag. While it generated clean (if plain) UI with some nice touches like toast notifications and a sidebar shopping cart, it got stuck at a critical juncture—the actual checkout. I had to coax it along, asking specifically for the shopping cart that was missing from the initial generation.

The tool encountered an error but at least provided a handy “Try to fix” button. Unlike Bolt, Lovable tries to hide the code, focusing instead on the browser preview—which as a designer, I appreciate. When it finally worked, I got a very vanilla but clean checkout flow and even the confirmation page I was looking for. Not groundbreaking, but functional. The approach of hiding code complexity might appeal to designers who don’t want to wade through development details.

Score: 49

Polymet

Polymet screenshot: A checkout page design for a fashion store showing payment method options (Credit Card, PayPal, Apple Pay), credit card fields, order summary with subtotal, shipping, tax, and total.

Next up is Polymet. This one has a very interesting interface and I kind of like it. You have your chat on the left and a canvas on the right. But instead of just showing the screen it’s working on, it’s actually creating individual components that later get combined into pages. It’s almost like building Figma components and then combining them at the end, except these are all coded components.

The design is pretty good—plain but very clean. I feel like it’s got a little more character than some of the others. What’s nice is you can go into focus mode and actually play with the prototype. I was able to navigate from the shopping cart through checkout (including Apple Pay) to confirmation. To export the code, you need to be on a paid plan, but the free trial gives you at least a taste of what it can do.

Score: 58

Replit

Replit screenshot: A developer interface showing progress on an online clothing store checkout project with error messages regarding the use of the useCart hook.

Replit was a test of patience—no exaggeration, it was the slowest tool of the bunch at 20 minutes to generate anything substantial. Why so slow? It kept encountering errors and falling into those weird loops that LLMs often do when they get stuck. At one point, I had to explicitly ask it to “make it work” just to progress beyond showing product pages, which wasn’t even what I’d asked for in the first place.

When it finally did generate a checkout experience, the design was nothing to write home about. Lines in the stepper weren’t aligning properly, there were random broken elements, and ultimately—it just didn’t work. I couldn’t even complete the checkout flow, which was the whole point of the exercise. I stopped recording at that point because, frankly, I just didn’t want to keep fighting with a tool that’s both slow and ineffective. 

Score: 31

v0

v0 screenshot: An online shopping cart with a multi-step checkout process, including a shipping form and order summary with prices and a “Continue to Payment” button.

Taking v0 for a spin next, which comes from Vercel. I think it was one of the earlier prompt-to-code generators I heard about—originally just for components, not full pages (though I could be wrong). The interface is similar to Bolt with a chat panel on the left and code on the right. As it works, it shows you the generated code in real-time, which I appreciate. It’s pretty mature and works really well.

The result almost looks like a wireframe, but the visual design has a bit more personality than Bolt’s version, even though it’s using the unstyled shadcn components. It includes form validation (which I checked), and handles the payment flow smoothly before showing a decent confirmation page. Speed-wise, v0 is impressively quick compared to some others I tested—definitely a plus when you’re iterating on designs and trying to quickly get ideas.

Score: 61

Onlook

Onlook screenshot: A design tool interface showing a cart with empty items and a “Continue Shopping” button on a fashion store checkout page.

Onlook stands out as a self-contained desktop app rather than a web tool like the others. The experience starts the same way—prompt in, wait, then boom—but instead of showing you immediate results, it drops you into a canvas view with multiple windows displaying localhost:3000, which is your computer running a web server locally. The design it generated was fairly typical and straightforward, properly capturing the shopping cart, shipping, payment, and confirmation screens I would expect. You can zoom out to see a canvas-style overview and manipulate layers, with a styles tab that lets you inspect and edit elements.

The dealbreaker? Everything gets generated as a single page application, making it frustratingly difficult to locate and edit specific states like shipping or payment. I couldn’t find these states visually or directly in the pages panel—they might’ve been buried somewhere in the layers, but I couldn’t make heads or tails of it. When I tried using it again today to capture the styles functionality for the video, I hit the same wall that plagued several other tools I tested—blank previews and errors. Despite going back and forth with the AI, I couldn’t get it running again.

Score: 71

Subframe

Subframe screenshot: A design tool interface with a checkout page showing a cart with items, a shipping summary, and the option to continue to payment.

My time with Subframe revealed a tool that takes a different approach to the same checkout prompt. Unlike most competitors, Subframe can’t create an entire flow at once (though I hear they’re working on multi-page capabilities). But honestly, I kind of like this limitation—it forces you as a designer to actually think through the process.

What sets Subframe apart is its MidJourney-like approach, offering four different design options that gradually come into focus. These aren’t just static mockups but fully coded, interactive pages you can preview in miniature. After selecting a shopping cart design, I simply asked it to create the next page, and it intelligently moved to shipping/billing info.

The real magic is having actual design tools—layers panel, property inspector, direct manipulation—alongside the ability to see the working React code. For designers who want control beyond just accepting whatever the AI spits out, Subframe delivers the best combination of AI generation and familiar design tooling.

Score: 79

Tempo

Tempo screenshot: A developer tool interface generating a clothing store checkout flow, showing wireframe components and code previews.

Lastly, Tempo. This one takes a different approach than most other tools. It starts by generating a PRD from your prompt, then creates a user flow diagram before coding the actual screens—mimicking the steps real product teams would take. Within minutes, it had generated all the different pages for my shopping cart checkout experience. That’s impressive speed, but from a design standpoint, it’s just fine. The visual design ends up being fairly plain, and the prototype had some UX issues—the payment card change was hard to notice, and the “Place order” action didn’t properly lead to a confirmation screen even though it existed in the flow.

The biggest disappointment was with Tempo’s supposed differentiator. Their DOM inspector theoretically allows you to manipulate components directly on canvas like you would in Figma—exactly what designers need. But I couldn’t get it to work no matter how hard I tried. I even came back days later to try again with a different project and reached out to their support team, but after a brief exchange—crickets. Without this feature functioning, Tempo becomes just another prompt-to-code tool rather than something truly designed for visual designers who want to manipulate components directly. Not great.

Score: 59

The Verdict: Control Beats Code Every Time

Subframe screenshot: A design tool interface displaying a checkout page for a fashion store with a cart summary and a “Proceed to Checkout” button.

Subframe offers actual design tools—layers panel, property inspector, direct manipulation—along with AI chat.

I’ve spent the last couple weeks testing these prompt-to-code tools, and if there’s one thing that’s crystal clear, it’s this: for designers who want actual design control rather than just code manipulation, Subframe is the standout winner.

I will caveat that I didn’t do a deep dive into every single tool. I played with them at a cursory level, giving each a fair shot with the same prompt. What I found was a mix of promising starts and frustrating dead ends.

The reality of AI tools is their probabilistic nature. Sometimes they’ll solve problems easily, and then at other times they’ll spectacularly fail. I experienced this firsthand when retesting both Lovable and Bolt with their latest features—both performed worse than in my initial testing just two weeks ago. Blank screens. Error messages. No rhyme or reason.

For designers like me, the dealbreaker with most of these tools is being forced to manipulate designs through code rather than through familiar design interfaces. We need to be able to directly manipulate components by clicking and modifying shapes on the canvas or changing values in an inspector. That’s where Subframe delivers while others fall short—if their audience includes designers, which might not be the case.

For us designers, I believe Subframe could be the answer. But I’m also curious whether Figma will have an answer. Will the company get into the AI > design > code game? Or will it be left behind?

The future belongs to applications that balance AI assistance with familiar design tooling—not just code generators with pretty previews.

Illustration of humanoid robots working at computer terminals in a futuristic control center, with floating digital screens and globes surrounding them in a virtual space.

Prompt. Generate. Deploy. The New Product Design Workflow

Product design is going to change profoundly within the next 24 months. If the AI 2027 report is any indication, the capabilities of the foundational models will grow exponentially, and with them—I believe—will the abilities of design tools.

A graph comparing AI Foundational Model Capabilities (orange line) versus AI Design Tools Capabilities (blue line) from 2026 to 2028. The orange line shows exponential growth through stages including Superhuman Coder, Superhuman AI Researcher, Superhuman Remote Worker, Superintelligent AI Researcher, and Artificial Superintelligence. The blue line shows more gradual growth through AI Designer using design systems, AI Design Agent, and Integration & Deployment Agents.

The AI foundational model capabilities will grow exponentially and AI-enabled design tools will benefit from the algorithmic advances. Sources: AI 2027 scenario & Roger Wong

The TL;DR of the report is this: companies like OpenAI have more advanced AI agent models that are building the next-generation models. Once those are built, the previous generation is tested for safety and released to the public. And the cycle continues. Currently, and for the next year or two, these companies are focusing their advanced models on creating superhuman coders. This compounds and will result in artificial general intelligence, or AGI, within the next five years. 

Non-AI companies will benefit from new model releases. We already see how much the performance of coding assistants like Cursor has improved with recent releases of Claude 3.7 Sonnet, Gemini 2.5 Pro, and this week, GPT-4.1, OpenAI’s latest.

Tools like v0, Lovable, Replit, and Bolt are leading the charge in AI-assisted design. Creating new landing pages and simple apps is literally as easy as typing English into a chat box. You can whip up a very nice-looking dashboard in single-digit minutes.

However, I'd argue they serve only a small portion of the market. These tools are great for zero-to-one digital products or websites. While new sites and software will always need to be designed and built, the vast majority of the market is in extending and editing existing products. There are far more designers working at corporations such as Adobe, Microsoft, Salesforce, Shopify, and Uber than there are designers at agencies. They all need to adhere to their company's design system and can't use what Lovable produces from scratch. The generated components can't be used even if they were styled to look correct; they must be components from their design system code repositories.

The Design-to-Code Gap

But first, a quick detour…

Any designer who has ever handed off a Figma file to a developer has felt the stinging disappointment days or weeks later when it's finally coded. The spacing is never quite right. The type sizes are off. And the back and forth seems endless. The developer handoff experience has been a well-trodden path, littered with now-defunct or dying companies like InVision, Abstract, and Zeplin. Figma tries to solve this issue with Dev Mode, but even then, there's a translation that has to happen from pixels and vectors in a proprietary program to code.

Yes, no- and low-code platforms like Webflow, Framer, and Builder.io exist. But the former two are proprietary platforms—you can’t take the code with you—and the latter is primarily a CMS (no-code editing for content editors).

The dream is for a design app similar to Figma that uses components from your team’s GitHub design system repository.1 I’m not talking about a Figma-only component library. No. Real components with controllable props in an inspector. You can’t break them apart and any modifications have to be made at the repo level. But you can visually put pages together. For new components, well, if they’re made of atomic parts, then yes, that should be possible too.
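To make that idea concrete, here's a rough sketch of what one of those repo-level components could look like. I'm assuming a hypothetical React + TypeScript design system; the package, class names, and prop values are all invented for illustration.

```tsx
// A minimal sketch of a design-system component whose props are the only
// surface a visual inspector can touch. Any deeper change happens in the repo.
import React from "react";

type ButtonVariant = "primary" | "secondary" | "ghost";
type ButtonSize = "sm" | "md" | "lg";

export interface ButtonProps {
  variant: ButtonVariant;      // inspector renders a dropdown of these values
  size: ButtonSize;            // likewise, no arbitrary pixel values allowed
  disabled?: boolean;
  children: React.ReactNode;
  onClick?: () => void;
}

export function Button({ variant, size, disabled, children, onClick }: ButtonProps) {
  // Class names map to tokens defined once in the design system,
  // so every product surface stays visually consistent.
  return (
    <button
      className={`btn btn--${variant} btn--${size}`}
      disabled={disabled}
      onClick={onClick}
    >
      {children}
    </button>
  );
}
```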

UXPin Merge comes close. Everything I mentioned above is theoretically possible. But if I'm being honest, I did a trial and the product was buggy and not great to use.

A Glimpse of What’s Coming

Enter Tempo, Polymet, and Subframe. These are very new entrants to the design tool space. Tempo and Polymet are backed by Y Combinator, and Subframe is pre-seed.

Subframe is working on a beta feature that will let you connect your GitHub repository, append a little snippet of code to each component, and have your library of components appear in their app. Great! This is the dream. The app seems fairly easy to use, and it wasn't sluggish and buggy like UXPin.

But the kicker—the Holy Grail—is their AI. 

I quickly put together a hideous form screen based on one of the oldest pages in BuildOps that is long overdue for a redesign. Then, I went into Subframe's Ask AI tab and prompted, "Make this design more user friendly." Similar to Midjourney, four blurry tiles appeared and slowly came into focus. This diffusion-model effect was a moment of delight for me. I don't know if they're actually using a diffusion model (think Stable Diffusion and Midjourney) or if they spent the time building a kick-ass loading state. Anyway, four completely built alternate layouts were generated. I clicked into each one to see it larger and noticed they each used components from our styled design library. (I'm on a trial, so it's not exactly components from our repo, but it demonstrates the promise.) And I felt like I had just witnessed the future.

Image shows a side-by-side comparison of design screens from Subframe. On the left is a generic form page layout with fields for customer information, property details, billing options, job specifications, and financial information. On the right is a more refined "Create New Job" interface with improved organization, clearer section headings (Customer Information, Job Details, Work Description), and thumbnail previews of alternative design options at the bottom. Both interfaces share the same navigation header with Reports, Dashboard, Operations, Dispatch, and Accounting tabs. The bottom of the right panel indicates "Subframe AI is in beta."

Subframe’s Ask AI mode drafted four options in under a minute, turning an outdated form into something much more user-friendly.

What Product Design in 2027 Might Look Like

From the AI 2027 scenario report, in the chapter, “March 2027: Algorithmic Breakthroughs”:

Three huge datacenters full of Agent-2 copies work day and night, churning out synthetic training data. Another two are used to update the weights. Agent-2 is getting smarter every day.

With the help of thousands of Agent-2 automated researchers, OpenBrain is making major algorithmic advances.

Aided by the new capabilities breakthroughs, Agent-3 is a fast and cheap superhuman coder. OpenBrain runs 200,000 Agent-3 copies in parallel, creating a workforce equivalent to 50,000 copies of the best human coder sped up by 30x. OpenBrain still keeps its human engineers on staff, because they have complementary skills needed to manage the teams of Agent-3 copies.

As I said at the top of this essay, AI is making AI and the innovations are compounding. With UX design, there will be a day when design is completely automated.

Imagine this. A product manager at a large-scale e-commerce site wants to decrease shopping cart abandonment by 10%. They task an AI agent to optimize a shopping cart flow with that metric as the goal. A week later, the agent returns the results:

  • It ran 25 experiments, with each experiment being a design variation of multiple pages.
  • Each experiment was with 1,000 visitors, totaling about 10% of their average weekly traffic.
  • Experiment #18 was the winner, resulting in an 11.3% decrease in cart abandonment.

The above will be possible. A few things have to fall in place first, though, and the building blocks are being made right now.
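To ground the scenario a bit, here's a small sketch of what the agent's report could look like as data. The names and numbers are purely illustrative, not from any real tool.

```ts
// Illustrative shape of an agent's experiment report for the cart-abandonment goal.
interface ExperimentResult {
  id: number;
  description: string;
  visitors: number;
  abandonmentRate: number; // fraction of carts abandoned during the test
}

// Pick the variant with the lowest abandonment rate.
function pickWinner(results: ExperimentResult[]): ExperimentResult {
  return results.reduce((best, r) => (r.abandonmentRate < best.abandonmentRate ? r : best));
}

const baselineAbandonment = 0.7; // pre-experiment rate, made up for the example
const results: ExperimentResult[] = [
  { id: 17, description: "Single-page checkout", visitors: 1000, abandonmentRate: 0.66 },
  { id: 18, description: "Guest checkout + saved payment methods", visitors: 1000, abandonmentRate: 0.621 },
];

const winner = pickWinner(results);
const improvement = (baselineAbandonment - winner.abandonmentRate) / baselineAbandonment;
console.log(`Experiment #${winner.id} cut abandonment by ${(improvement * 100).toFixed(1)}%`);
// -> Experiment #18 cut abandonment by 11.3%
```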

The Foundation Layer: Integrate Design Systems

The design industry has been promoting the benefits of design systems for many years now. What was once a Sisyphean battle has become much easier: development teams now understand the benefits of using a shared, standardized component library.

To capture the larger piece of the design market that isn't producing greenfield work, AI design tools like Subframe will have to depend on well-built component libraries. Their AI must be able to ingest and internalize the design system documentation that governs how components should be used.

Then we’ll be able to prompt new screens with working code into existence. 
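As a rough illustration, here's the kind of machine-readable usage metadata a tool might ingest alongside the component code. The shape and field names are my own assumptions, not any vendor's actual format.

```ts
// A hand-wavy sketch of usage metadata an AI design tool could read to learn
// which components exist, what props are legal, and how the docs say to use them.
interface ComponentUsageDoc {
  name: string;
  importPath: string;                        // where generated code should import it from
  props: Record<string, string[] | string>;  // allowed values, or a type hint
  usage: string[];                           // plain-language rules from the design system docs
}

const buttonDoc: ComponentUsageDoc = {
  name: "Button",
  importPath: "@acme/design-system",         // invented package name
  props: {
    variant: ["primary", "secondary", "ghost"],
    size: ["sm", "md", "lg"],
    disabled: "boolean",
  },
  usage: [
    "Use exactly one primary button per view.",
    "Use ghost buttons for tertiary actions inside cards.",
  ],
};
```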

**Forecast:** Within six months.

Professionals Still Need Control

Cursor—the AI-assisted development tool that's captured the market—is VS Code enhanced with AI features. In other words, it is a professional-grade programming tool that lets developers write and edit code *and* generate it via AI chat. It gives the pros control. Contrast that with something like Lovable, which is aimed at designers: the code is accessible, but you have to look for it. The canvas and chat are prioritized.

For AI-assisted design tools to work, they need to give us designers control. That control comes in the form of curation and visual editing. Give us choices when generating alternates and let us tweak elements to our heart’s content—within the confines of the design system, of course. 

A diagram showing the process flow of creating a shopping cart checkout experience. At the top is a prompt box, which leads to four generated layout options below it. The bottom portion shows configuration panels for adjusting size and padding properties of the selected design.

The product design workflow in the future will look something like this: prompt the AI, view choices and select one, then use fine-grained controls to tweak.

Automating Design with Design Agents

Agent mode in Cursor is pretty astounding. You’ll see it plan its actions based on the prompt, then execute them one by one. If it encounters an error, it’ll diagnose and fix it. If it needs to install a package or launch the development server to test the app, it will do that. Sometimes, it can go for many minutes without needing intervention. It’s literally like watching a robot assemble a thingamajig. 

We will need this same level of agentic AI automation in design tools. If I could write in a chat box “Create a checkout flow for my site” and the AI design tool can generate a working cart page, payment page, and thank-you page from that one prompt using components from the design system, that would be incredible.
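Under the hood, that kind of design agent boils down to a plan, generate, validate loop. Here's a simplified sketch with placeholder functions; none of them refer to any actual product's API.

```ts
// Simplified design-agent loop: plan the screens, generate each one from
// design-system components, and retry when a step fails validation.
type Screen = { name: string; code: string };

async function runDesignAgent(prompt: string): Promise<Screen[]> {
  const plan = await planScreens(prompt); // e.g. ["Cart", "Payment", "Thank you"]
  const screens: Screen[] = [];

  for (const name of plan) {
    let attempt = 0;
    while (attempt < 3) {
      const code = await generateScreen(name, prompt);        // uses design-system components
      const errors = await validateAgainstDesignSystem(code); // e.g. unknown props, off-system styles
      if (errors.length === 0) {
        screens.push({ name, code });
        break;
      }
      attempt++; // diagnose and try again, much like Cursor's agent mode
    }
  }
  return screens;
}

// Placeholder declarations so the sketch type-checks; real implementations
// would call whatever models and validators the tool uses.
declare function planScreens(prompt: string): Promise<string[]>;
declare function generateScreen(name: string, prompt: string): Promise<string>;
declare function validateAgainstDesignSystem(code: string): Promise<string[]>;
```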

Yes, zero-to-one tools are starting to add this feature. Here’s a shopping cart flow from v0…

Building a shopping cart checkout flow in v0 was incredibly fast. Two minutes flat. This video is sped up 400%.

Polymet and Lovable were both able to create decent flows. There is also promise with Tempo, although the service was bugging out when I tested it earlier today. Tempo will first plan by writing a PRD, then it draws a flow diagram, then wireframes the flow, and then generates code for each screen. If I were to create a professional tool, this is how I would do it. I truly hope they can resolve their tech issues. 

**Forecast:** Within one year.

A screenshot of Tempo, an AI-powered design tool interface showing the generation of a complete checkout experience. The left sidebar displays a history of AI-assisted tasks including generating PRD, mermaid diagrams, wireframes and components. The center shows a checkout page preview with cart summary, checkout form, and order confirmation screens visible in a component-based layout.

Tempo’s workflow seems ideal. It generates a PRD, draws a flow diagram, creates wireframes, and finally codes the UI.

The Final Pieces: Integration and Deployment Agents

The final pieces to realizing our imaginary scenario are coding agents that integrate the frontend from AI design tools with the backend application, and then deploy the code to a server for public consumption. I'm not an expert here, so I'll just hand-wave past this part. The AI-assisted design tooling mentioned above is frontend-only. For the data to flow and the business logic to work, the UI must be integrated with the backend.

CI/CD (Continuous Integration and Continuous Deployment) platforms like GitHub Actions and Vercel already exist today, so it’s not difficult to imagine deploys being initiated by AI agents.
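For a taste of what that could look like, here's a minimal sketch of an agent kicking off a deploy through GitHub Actions' workflow_dispatch endpoint, which exists today. The repository and workflow file names are made up, and I'm assuming a GITHUB_TOKEN environment variable with workflow permissions.

```ts
// Trigger a GitHub Actions workflow run (workflow_dispatch) from an agent.
async function triggerDeploy(): Promise<void> {
  const res = await fetch(
    "https://api.github.com/repos/acme/storefront/actions/workflows/deploy.yml/dispatches",
    {
      method: "POST",
      headers: {
        Accept: "application/vnd.github+json",
        Authorization: `Bearer ${process.env.GITHUB_TOKEN}`, // assumed token with workflow scope
      },
      body: JSON.stringify({ ref: "main" }), // branch to deploy
    }
  );
  if (!res.ok) {
    throw new Error(`Deploy trigger failed: ${res.status}`);
  }
}
```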

**Forecast:** Within 18–24 months.

Where Is Figma?

The elephant in the room is Figma's position in all this. Since their rocky debut of AI features last year, Figma has been trickling out small AI features like more powerful search, layer renaming, mock data generation, and image generation. The biggest AI feature they have is called First Draft, which is a relaunch of design generation. They seem to be stuck placating designers and developers (Dev Mode) instead of considering how they can bring value to the entire organization. Maybe they will make a big announcement at Config, their upcoming user conference in May. But if they don't compete with one of these aforementioned tools, they will be left behind.

To be clear, Figma is still going to be a necessary part of the design process. A canvas free from the confines of code allows for easy *manual* exploration. But the dream of closing the gap between design and code needs to come true sooner rather than later if we're to take advantage of AI's promise.

The Two-Year Horizon

As I said at the top of this essay, product design is going to change profoundly within the next two years. The trajectory is clear: AI is making AI, and the innovations are compounding rapidly. Design systems provide the structured foundation that AI needs, while tools like Subframe are developing the crucial integration with these systems.

For designers, this isn’t the end—if anything, it’s a transformation. We’ll shift from pixel-pushers to directors, from creators to curators. Our value will lie in knowing what to ask for and making the subtle refinements that require human taste and judgment.

The holy grail of seamless design-to-code is finally within reach. In 24 months, we won’t be debating if AI will transform product design—we’ll be reflecting on how quickly it happened.


1 I know Figma has the feature called Code Connect. I haven’t used it, but from what I can tell, you match your Figma component library to the code component library. Then in Dev Mode, it makes it easier for engineers to discern which component from the repo to use.