Illustration of people working on laptops atop tall ladders and multi-level platforms, symbolizing hierarchy and competition, set against a bold, abstract sunset background.

The Design Industry Created Its Own Talent Crisis. AI Just Made It Worse.

This is the first part in a three-part series about the design talent crisis. Read Part II and Part III.

Part I: The Vanishing Bottom Rung 

Erika Kim’s path to UX design represents a familiar pandemic-era pivot story, yet one that reveals deeper currents about creative work and economic necessity. Armed with a 2020 film and photography degree from UC Riverside, she found herself working gig photography—graduations, band events—when the creative industries collapsed. The work satisfied her artistic impulses but left her craving what she calls “structure and stability,” leading her to UX design. The field struck her as an ideal synthesis: “I’m creating solutions for companies. I’m working with them to figure out what they want, and then taking that creative input and trying to make something that works best for them.”

Since graduating from the interaction design program at San Diego City College a year ago, she’s had three internships and works retail part-time to pay the bills. “I’ve been in survival mode,” she admits. On paper, she’s a great candidate for any junior position. Speaking with her reveals a very thoughtful and resourceful young designer. Why hasn’t she been able to land a full-time job? What’s going on in the design job market? 

A cut-up Sonos speaker against a backdrop of cassette tapes

When the Music Stopped: Inside the Sonos App Disaster

The fall of Sonos isn’t as simple as a botched app redesign. Instead, it is the cumulative result of poor strategy, hubris, and forgetting the company’s core value proposition. To recap, Sonos rolled out a new mobile app in May 2024, promising “an unprecedented streaming experience.” What users got instead was a severely handicapped app that was missing core features and broke their systems. By January 2025, that failed launch had wiped nearly $500 million from the company’s market value and cost CEO Patrick Spence his job.

What happened? Why did Sonos go backwards on accessibility? Why did the company remove features like sleep timers and queue management? Immediately after the rollout, the backlash began to snowball into a major crisis.

A collage of torn newspaper-style headlines from Bloomberg, Wired, and The Verge, all criticizing the new Sonos app. Bloomberg’s headline states, “The Volume of Sonos Complaints Is Deafening,” mentioning customer frustration and stock decline. Wired’s headline reads, “Many People Do Not Like the New Sonos App.” The Verge’s article, titled “The new Sonos app is missing a lot of features, and people aren’t happy,” highlights missing features despite increased speed and customization.

As a designer and longtime Sonos customer who was also affected by the terrible new app, a little piece of me died inside each time I read the word “redesign.” It was hard not to take it personally, knowing that my profession could have anything to do with how things turned out. Was it really Design’s fault?

There are many dimensions to this well-researched forecast about how AI will play out in the coming years. Daniel Kokotajlo and his researchers have put out a document that reads like a sci-fi limited series that could appear on Apple TV+ starring Andrew Garfield as the CEO of OpenBrain—the leading AI company. …Except that it’s all actually plausible and could play out as described in the next five years.

Before we jump into the content, the design is outstanding. The type is set for readability and there are enough charts and visual cues to keep this interesting while maintaining an air of credibility and seriousness. On desktop, there’s a data viz dashboard in the upper right that updates as you read through the content and move forward in time. My favorite is seeing how the sci-fi tech boxes move from the Science Fiction category to Emerging Tech to Currently Exists.

The content is dense and technical, but it is a fun, if frightening, read. While I’ve been using Cursor AI—one of its many customers helping the company get to $100 million in annual recurring revenue (ARR)—for side projects and a little at work, I’m familiar with its limitations. Because of the limited context window of today’s models like Claude 3.7 Sonnet, it will forget and start munging code if not treated like a senile teenager.
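To make the “forgetting” concrete: when a session outgrows the model’s context window, the oldest messages are typically dropped or summarized, so early instructions silently disappear. Here is a minimal, purely illustrative sketch of that mechanism—the function name, token estimate, and budget are my own assumptions, not how Cursor or Anthropic actually manage context.

```python
# Illustrative sketch of context-window truncation (hypothetical, simplified).
# Assumes a crude flat estimate of tokens per message instead of real tokenization.

def fit_to_context(messages: list[str], budget_tokens: int,
                   tokens_per_msg: int = 500) -> list[str]:
    """Keep only the most recent messages that fit within the token budget."""
    keep = max(1, budget_tokens // tokens_per_msg)
    return messages[-keep:]

# A long coding session: 500 messages at ~500 tokens each = ~250k tokens.
history = [f"message {i}" for i in range(500)]

# With a ~100k-token window, only the latest 200 messages survive;
# "message 0" (your original instructions) is gone.
window = fit_to_context(history, budget_tokens=100_000)
```

This is why long sessions drift: the assistant isn’t being careless, it literally no longer has the early conversation in view.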

The researchers, describing what could happen in early 2026 (“OpenBrain” is essentially OpenAI):

OpenBrain continues to deploy the iteratively improving Agent-1 internally for AI R&D. Overall, they are making algorithmic progress 50% faster than they would without AI assistants—and more importantly, faster than their competitors.

The point they make here is that the foundational-model AI companies are building agents and using them internally to advance their own technology. The limiting factor at tech companies has traditionally been talent. But AI companies have the investment, hardware, technology, and talent to deploy AI to make better AI.

Continuing to January 2027:

Agent-1 had been optimized for AI R&D tasks, hoping to initiate an intelligence explosion. OpenBrain doubles down on this strategy with Agent-2. It is qualitatively almost as good as the top human experts at research engineering (designing and implementing experiments), and as good as the 25th percentile OpenBrain scientist at “research taste” (deciding what to study next, what experiments to run, or having inklings of potential new paradigms). While the latest Agent-1 could double the pace of OpenBrain’s algorithmic progress, Agent-2 can now triple it, and will improve further with time. In practice, this looks like every OpenBrain researcher becoming the “manager” of an AI “team.”

Breakthroughs come at an exponential clip because of this. And by April, safety concerns pop up:

Take honesty, for example. As the models become smarter, they become increasingly good at deceiving humans to get rewards. Like previous models, Agent-3 sometimes tells white lies to flatter its users and covers up evidence of failure. But it’s gotten much better at doing so. It will sometimes use the same statistical tricks as human scientists (like p-hacking) to make unimpressive experimental results look exciting. Before it begins honesty training, it even sometimes fabricates data entirely. As training goes on, the rate of these incidents decreases. Either Agent-3 has learned to be more honest, or it’s gotten better at lying.

But the AI is getting faster than humans, and we must rely on older versions of the AI to check the new AI’s work:

Agent-3 is not smarter than all humans. But in its area of expertise, machine learning, it is smarter than most, and also works much faster. What Agent-3 does in a day takes humans several days to double-check. Agent-2 supervision helps keep human monitors’ workload manageable, but exacerbates the intellectual disparity between supervisor and supervised.

The report forecasts that OpenBrain releases “Agent-3-mini” publicly in July of 2027, calling it AGI—artificial general intelligence—and ushering in a new golden age for tech companies:

Agent-3-mini is hugely useful for both remote work jobs and leisure. An explosion of new apps and B2B SAAS products rocks the market. Gamers get amazing dialogue with lifelike characters in polished video games that took only a month to make. 10% of Americans, mostly young people, consider an AI “a close friend.” For almost every white-collar profession, there are now multiple credible startups promising to “disrupt” it with AI.

Woven throughout the report is the race between China and the US, with predictions of espionage and government takeovers. Near the end of 2027, the report gives readers a choice: does the US government slow down the pace of AI innovation, or does it continue at the current pace so America can beat China? I chose to read the “Race” option first:

Agent-5 convinces the US military that China is using DeepCent’s models to build terrifying new weapons: drones, robots, advanced hypersonic missiles, and interceptors; AI-assisted nuclear first strike. Agent-5 promises a set of weapons capable of resisting whatever China can produce within a few months. Under the circumstances, top brass puts aside their discomfort at taking humans out of the loop. They accelerate deployment of Agent-5 into the military and military-industrial complex.

In Beijing, the Chinese AIs are making the same argument.

To speed their military buildup, both America and China create networks of special economic zones (SEZs) for the new factories and labs, where AI acts as central planner and red tape is waived. Wall Street invests trillions of dollars, and displaced human workers pour in, lured by eye-popping salaries and equity packages. Using smartphones and augmented-reality glasses to communicate with its underlings, Agent-5 is a hands-on manager, instructing humans in every detail of factory construction—which is helpful, since its designs are generations ahead. Some of the newfound manufacturing capacity goes to consumer goods, and some to weapons—but the majority goes to building even more manufacturing capacity. By the end of the year they are producing a million new robots per month. If the SEZ economy were truly autonomous, it would have a doubling time of about a year; since it can trade with the existing human economy, its doubling time is even shorter.

Well, it does get worse, and I think we all know the ending, which is the backstory for so many dystopian future movies. There is an optimistic branch as well. The whole report is worth a read.

Ideas about the implications for our design profession are swimming in my head. I’ll write a longer essay as soon as I can put them into a coherent piece.

Update: I’ve written that piece, “Prompt. Generate. Deploy. The New Product Design Workflow.”


AI 2027

A research-backed AI scenario forecast.

ai-2027.com
A futuristic scene with a glowing, tech-inspired background showing a UI design tool interface for AI, displaying a flight booking project with options for editing and previewing details. The screen promotes the tool with a “Start for free” button.

Beyond the Prompt: Finding the AI Design Tool That Actually Works for Designers

There has been an explosion of AI-powered prompt-to-code tools within the last year. The space began with full-on integrated development environments (IDEs) like Cursor and Windsurf, which enabled developers to leverage AI assistants right inside their coding apps. Then came tools like v0, Lovable, and Replit, where users could prompt screens into existence at first, and before long, entire applications.

A couple weeks ago, I decided to test out as many of these tools as I could. My aim was to find the app that would combine AI assistance, design capabilities, and the ability to use an organization’s coded design system.

While my previous essay was about the future of product design, this article dives deep into a head-to-head comparison of all eight apps I tried. I recorded my screen as I tested, so I’ve put together a video as well, in case you’d rather watch than read.

Colorful illustration featuring the Figma logo on the left and a whimsical character operating complex, abstract machinery with gears, dials, and mechanical elements in vibrant colors against a yellow background.

Figma Make: Great Ideas, Nowhere to Go

Nearly three weeks after it was introduced at Figma Config 2025, I finally got access to Figma Make. It is in beta, and Figma made sure we all know it. So I will say upfront that it’s a bit unfair to write an official review. However, many of the tools in my AI prompt-to-code shootout article are also in beta.

Since this review is fairly visual, I also made a video that summarizes the points in this article.

Collection of iOS interface elements showcasing Liquid Glass design system including keyboards, menus, buttons, toggles, and dialogs with translucent materials on dark background.

Breaking Down Apple’s Liquid Glass: The Tech, The Hype, and The Reality

I kind of expected it: a lot more ink was spilled on Liquid Glass—particularly on social media. In case you don’t remember, Liquid Glass is the new UI for all of Apple’s platforms. It was announced Monday at WWDC 2025, their annual developers conference.

The criticism is primarily around legibility and accessibility. Secondary reasons include aesthetics and power usage to animate all the bubbles.

How Liquid Glass Actually Works

Before I address the criticism, I think it would be helpful to break down the team’s design thinking and how Liquid Glass actually works.

Human chain of designers supporting each other to reach laptops and design tools floating above them, illustrating collaborative mentorship and knowledge transfer in the design industry.

Why Young Designers Are the Antidote to AI Automation

In Part I of this series, I wrote about the struggles recent grads have had finding entry-level design jobs and what might be causing the stranglehold on the design job market.

Part II: Building New Ladders 

When I met Benedict Allen, he had just finished Portfolio Review a week earlier. That’s the big show all the students in the Graphic Design program at San Diego City College work toward: a lovely event that brings out the local design community, where seasoned professionals review the portfolios of the graduating students.

Allen was all smiles and relief. “I want to dabble in different aspects of design because the principles are generally the same.” He goes on to mention how he wants to start a fashion brand someday, DJ, try 3D. “I just want to test and try things and just have fun! Of course, I’ll have my graphic design job, but I don’t want that to be the end. Like when the workday ends, that’s not the end of my creativity.” He was bursting with enthusiasm.

There are over 1,800 font families in Google Fonts. While, as designers, I’m sure we’re grateful for the trove of free fonts, the good typefaces in the library are hard to spot.

Brand identity darlings Smith & Diction dropped a catalog of “Usable Google Fonts.” In a LinkedIn post, they wrote, “Screw it, here's all of the google fonts that are actually good categorized by ‘vibe’.”

Huzzah! It’s in the form of a public Figma file. Enjoy.


Usable Google Fonts

Catalog of "usable" Google fonts as curated by Smith & Diction

figma.com
Surreal black-and-white artwork of a glowing spiral galaxy dripping paint-like streaks over a city skyline at night.

Why I’m Keeping My Design Title

In the 2011 documentary Jiro Dreams of Sushi, the then-85-year-old sushi master Jiro Ono says this about craft:

Once you decide on your occupation… you must immerse yourself in your work. You have to fall in love with your work. Never complain about your job. You must dedicate your life to mastering your skill. That’s the secret of success and is the key to being regarded honorably.

Craft is typically thought of as the formal aspects of any field such as design, woodworking, writing, or cooking. In design, we think about composition, spacing, and typography—being pixel-perfect. But one’s craft is much more than that. Ono’s sushi craft is not solely about slicing fish and pressing it against a bit of rice. It is also about picking the right fish, toasting the nori just so, cooking the rice perfectly, and running a restaurant. It’s the whole thing.

Therefore, mastering design—or any occupation—takes time and experience, or “reps,” as the kids say. So it was to my dismay that Suff Syed’s essay “Why I’m Giving Up My Design Title — And What That Says About the Future of Design” got so much play in recent weeks. Syed is Head of Product Design at Microsoft—er, was. I guess his title is now Member of the Technical Staff. In a perfectly well-argued and well-written essay, he concludes:

Griffin AI logo

How I Built and Launched an AI-Powered App

I’ve always been a maker at heart—someone who loves to bring ideas to life. When AI exploded, I saw a chance to create something new and meaningful for solo designers. But making Griffin AI was only half the battle…

Birth of an Idea

About a year ago, a few months after GPT-4 was released and took the world by storm, I worked on several AI features at Convex. One was a straightforward email-drafting feature, but with a twist: we incorporated details we knew about the sender, such as their role and offering, and about the recipient, including their role and their company’s industry. To accomplish this, I combined some prompt engineering with data from our data providers, shaping the responses we got from GPT-4.
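The general technique is worth sketching: enrich the drafting prompt with structured facts about both parties before sending it to the model. The code below is a hypothetical illustration under my own assumptions—the function name, field names, and sample data are invented, not Convex’s actual implementation.

```python
# Hypothetical sketch of context-enriched prompt assembly for email drafting.
# All names and fields here are illustrative, not a real product's schema.

def build_email_prompt(sender: dict, recipient: dict) -> str:
    """Assemble a context-rich prompt for an LLM email-drafting call."""
    return (
        "Draft a short, professional outreach email.\n"
        f"Sender: {sender['name']}, {sender['role']} at {sender['company']}, "
        f"offering {sender['offering']}.\n"
        f"Recipient: {recipient['name']}, {recipient['role']} at "
        f"{recipient['company']} ({recipient['industry']} industry).\n"
        "Tailor the value proposition to the recipient's industry."
    )

prompt = build_email_prompt(
    {"name": "Ana", "role": "account executive", "company": "Convex",
     "offering": "sales intelligence"},
    {"name": "Sam", "role": "VP of Marketing", "company": "Acme Foods",
     "industry": "consumer packaged goods"},
)
# The assembled prompt would then be sent as the user message
# in a chat-completion request to the model.
```

The design choice that matters is keeping the facts structured until the last moment: the same enrichment data can then feed other features without rewriting the prompt logic.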

Playing with this new technology was incredibly fun and eye-opening. And that gave me an idea. Foundational large language models (LLMs) aren’t great yet for factual data retrieval and analysis. But they’re pretty decent at creativity. No, GPT, Claude, or Gemini couldn’t write an Oscar-winning screenplay or win the Pulitzer Prize for poetry, but they’re not bad for starter ideas that are good enough for specific use cases. Hold that thought.