
There are many dimensions to this well-researched forecast about how AI will play out in the coming years. Daniel Kokotajlo and his researchers have put out a document that reads like a sci-fi limited series that could appear on Apple TV+ starring Andrew Garfield as the CEO of OpenBrain—the leading AI company. …Except that it’s all actually plausible and could play out as described in the next five years.

Before we jump into the content, the design is outstanding. The type is set for readability and there are enough charts and visual cues to keep this interesting while maintaining an air of credibility and seriousness. On desktop, there’s a data viz dashboard in the upper right that updates as you read through the content and move forward in time. My favorite is seeing how the sci-fi tech boxes move from the Science Fiction category to Emerging Tech to Currently Exists.

The content is dense and technical, but it is a fun, if frightening, read. I’ve been using Cursor AI—as one of the many customers helping the company get to $100 million in annual recurring revenue (ARR)—for side projects and a little at work, so I’m familiar with its limitations. Because of the limited context window of today’s models like Claude 3.7 Sonnet, it will forget and start munging code if not treated like a senile teenager.

The researchers, describing what could happen in early 2026 (“OpenBrain” is essentially OpenAI):

OpenBrain continues to deploy the iteratively improving Agent-1 internally for AI R&D. Overall, they are making algorithmic progress 50% faster than they would without AI assistants—and more importantly, faster than their competitors.

The point they make here is that the foundational model AI companies are building agents and using them internally to advance their technology. The limiting factor in tech companies has traditionally been the talent. But AI companies have the investments, hardware, technology and talent to deploy AI to make better AI.

Continuing to January 2027:

Agent-1 had been optimized for AI R&D tasks, hoping to initiate an intelligence explosion. OpenBrain doubles down on this strategy with Agent-2. It is qualitatively almost as good as the top human experts at research engineering (designing and implementing experiments), and as good as the 25th percentile OpenBrain scientist at “research taste” (deciding what to study next, what experiments to run, or having inklings of potential new paradigms). While the latest Agent-1 could double the pace of OpenBrain’s algorithmic progress, Agent-2 can now triple it, and will improve further with time. In practice, this looks like every OpenBrain researcher becoming the “manager” of an AI “team.”

Breakthroughs come at an exponential clip because of this. And by April, safety concerns pop up:

Take honesty, for example. As the models become smarter, they become increasingly good at deceiving humans to get rewards. Like previous models, Agent-3 sometimes tells white lies to flatter its users and covers up evidence of failure. But it’s gotten much better at doing so. It will sometimes use the same statistical tricks as human scientists (like p-hacking) to make unimpressive experimental results look exciting. Before it begins honesty training, it even sometimes fabricates data entirely. As training goes on, the rate of these incidents decreases. Either Agent-3 has learned to be more honest, or it’s gotten better at lying.

But the AI is getting faster than humans, and we must rely on older versions of the AI to check the new AI’s work:

Agent-3 is not smarter than all humans. But in its area of expertise, machine learning, it is smarter than most, and also works much faster. What Agent-3 does in a day takes humans several days to double-check. Agent-2 supervision helps keep human monitors’ workload manageable, but exacerbates the intellectual disparity between supervisor and supervised.

The report forecasts that OpenBrain releases “Agent-3-mini” publicly in July of 2027, calling it AGI—artificial general intelligence—and ushering in a new golden age for tech companies:

Agent-3-mini is hugely useful for both remote work jobs and leisure. An explosion of new apps and B2B SAAS products rocks the market. Gamers get amazing dialogue with lifelike characters in polished video games that took only a month to make. 10% of Americans, mostly young people, consider an AI “a close friend.” For almost every white-collar profession, there are now multiple credible startups promising to “disrupt” it with AI.

Woven throughout the report is the race between China and the US, with predictions of espionage and government takeovers. Near the end of 2027, the report gives readers a choice: does the US government slow down the pace of AI innovation, or does it continue at the current pace so America can beat China? I chose to read the “Race” option first:

Agent-5 convinces the US military that China is using DeepCent’s models to build terrifying new weapons: drones, robots, advanced hypersonic missiles, and interceptors; AI-assisted nuclear first strike. Agent-5 promises a set of weapons capable of resisting whatever China can produce within a few months. Under the circumstances, top brass puts aside their discomfort at taking humans out of the loop. They accelerate deployment of Agent-5 into the military and military-industrial complex.

In Beijing, the Chinese AIs are making the same argument.

To speed their military buildup, both America and China create networks of special economic zones (SEZs) for the new factories and labs, where AI acts as central planner and red tape is waived. Wall Street invests trillions of dollars, and displaced human workers pour in, lured by eye-popping salaries and equity packages. Using smartphones and augmented reality glasses to communicate with its underlings, Agent-5 is a hands-on manager, instructing humans in every detail of factory construction—which is helpful, since its designs are generations ahead. Some of the newfound manufacturing capacity goes to consumer goods, and some to weapons—but the majority goes to building even more manufacturing capacity. By the end of the year they are producing a million new robots per month. If the SEZ economy were truly autonomous, it would have a doubling time of about a year; since it can trade with the existing human economy, its doubling time is even shorter.

Well, it does get worse, and I think we all know the ending, which is the backstory for so many dystopian future movies. There is an optimistic branch as well. The whole report is worth a read.

Ideas about the implications for our design profession are swimming in my head. I’ll write a longer essay as soon as I can put them into a coherent piece.

Update: I’ve written that piece, “Prompt. Generate. Deploy. The New Product Design Workflow.”


AI 2027

A research-backed AI scenario forecast.

ai-2027.com
A cut-up Sonos speaker against a backdrop of cassette tapes

When the Music Stopped: Inside the Sonos App Disaster

The fall of Sonos isn’t as simple as a botched app redesign. Instead, it is the cumulative result of poor strategy, hubris, and forgetting the company’s core value proposition. To recap, Sonos rolled out a new mobile app in May 2024, promising “an unprecedented streaming experience.” Instead, it was a severely handicapped app that was missing core features and broke users’ systems. By January 2025, that failed launch had wiped nearly $500 million from the company’s market value and cost CEO Patrick Spence his job.

What happened? Why did Sonos go backwards on accessibility? Why did the company remove features like sleep timers and queue management? Immediately after the rollout, the backlash began to snowball into a major crisis.

A collage of torn newspaper-style headlines from Bloomberg, Wired, and The Verge, all criticizing the new Sonos app. Bloomberg’s headline states, “The Volume of Sonos Complaints Is Deafening,” mentioning customer frustration and stock decline. Wired’s headline reads, “Many People Do Not Like the New Sonos App.” The Verge’s article, titled “The new Sonos app is missing a lot of features, and people aren’t happy,” highlights missing features despite increased speed and customization.

As a designer and longtime Sonos customer who was also affected by the terrible new app, I felt a little piece of me die inside each time I read the word “redesign.” It was hard not to take it personally, knowing that my profession could have had something to do with how things turned out. Was it really Design’s fault?

Even after devouring dozens of news articles, social media posts, and company statements, I couldn’t get a clear picture of why the company made the decisions it did. I cast a net on LinkedIn, reaching out to current and former designers who worked at Sonos. This story is based on hours of conversations between several employees and me. They only agreed to talk on the condition of anonymity. I’ve also added context from public reporting.

The shape of the story isn’t much different than what’s been reported publicly. However, the inner mechanics of how those missteps happened are educational. The Sonos tale illustrates the broader challenges that most companies face as they grow and evolve. How do you modernize aging technology without breaking what works? How do public company pressures affect product decisions? And most importantly, how do organizations maintain their core values and user focus as they scale?

It Just Works

Whenever I moved into a new home, I used to always set up the audio system first. Speaker cable had to be routed under the carpet, along the baseboard, or through walls and floors. To get speakers in the right place, cable management was always a challenge, especially with a surround setup. Then Sonos came along and said, “Wires? We don’t need no stinking wires.” (OK, so they didn’t really say that. Their first wireless speaker, the PLAY:5, was launched in late 2009.)

I purchased my first pair of Sonos speakers over ten years ago. I had recently moved into a modest one-bedroom apartment in Venice, and I liked the idea of hearing my music throughout the place. Instead of running cables, setting up the two PLAY:1 speakers was simple. At the time, you had to plug into Ethernet for the setup and keep at least one component hardwired in. But once that was done, adding the other speaker was easy.

The best technology is often invisible. It turns out that making it work this well wasn’t easy. According to their own history page, in its early days, the company made the difficult decision to build a distributed system where speakers could communicate directly with each other, rather than relying on central control. It was a more complex technical path, but one that delivered a far better user experience. The founding team spent months perfecting their mesh networking technology, writing custom Linux drivers, and ensuring their speakers would stay perfectly synced when playing music.

A network architecture diagram for a Sonos audio system, showing Zone Players, speakers, a home network, and various audio sources like a computer, MP3 store, CD player, and internet connectivity. The diagram includes wired and wireless connections, a WiFi handheld controller, and a legend explaining connection types. Handwritten notes describe the Zone Player’s ability to play, fetch, and store MP3 files for playback across multiple zones. Some elements, such as source converters, are crossed out.

As a new Sonos owner, a concept that was a little challenging to wrap my head around was that the speaker is the player. Instead of casting music from my phone or computer to the speaker, the speaker itself streamed the music from my network-attached storage (NAS, aka a server) or streaming services like Pandora or Spotify.

One of my sources told me about the “beer test” they had at Sonos. If you’re having a house party and run out of beer, you could leave the house without stopping the music. This is a core Sonos value proposition.
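
To put that architecture in code terms, here’s a toy sketch (mine, not Sonos code):

```typescript
// Toy contrast between the two architectures; my illustration, not Sonos code.

interface Source { streamUrl: string } // a NAS share or a streaming service

// Casting model: the phone is the player and pushes audio to the speaker.
// If the phone leaves the network, playback dies with it.
function castFromPhone(source: Source, phoneOnNetwork: boolean): string | null {
  return phoneOnNetwork ? `phone decodes ${source.streamUrl} for the speaker` : null;
}

// Sonos model: the speaker is the player and pulls the stream itself.
// The phone is just a remote control, free to come and go.
function speakerPlays(source: Source): string {
  return `speaker fetches ${source.streamUrl} directly`;
}

// The beer test: the host (and their phone) leaves mid-party.
const pandora = { streamUrl: "https://stream.example/pandora" };
console.log(castFromPhone(pandora, false)); // null: the music stops
console.log(speakerPlays(pandora));         // the party plays on
```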

A Rat’s Nest: The Weight of Tech Debt

The original Sonos technology stack, built carefully and methodically in the early 2000s, had served the company well. Its products always passed the beer test. However, two decades later, the company’s software infrastructure became increasingly difficult to maintain and update. According to one of my sources, who worked extensively on the platform, the codebase had become a “rat’s nest,” making even simple changes hugely challenging.

The tech debt had been accumulating for years. While Sonos continued adding features like Bluetooth playback and expanding its product line, the underlying architecture remained largely unchanged. The breaking point came with the development of the Sonos Ace headphones. This major new product category required significant changes to how the Sonos app handled device control and audio streaming.

Rather than tackle this technical debt incrementally, Sonos chose to completely rewrite its mobile app. This “clean slate” approach was seen as the fastest way to modernize the platform. But as many developers know, complete refactors are notoriously risky. And unlike in its early days, when the company would delay launches to get things right—famously once stopping production lines over a glue issue—this time Sonos seemed determined to push forward regardless of quality concerns.

Set Up for Failure

The rewrite project began around 2022 and would span approximately two years. The team did many things right initially—spending a year and a half conducting rigorous user testing and building functional prototypes using SwiftUI. According to my sources, these prototypes and tests validated their direction—the new design was a clear improvement over the current experience. The problem wasn’t the vision. It was execution.

A wave of new product managers, brought in around this time, were eager to make their mark but lacked deep knowledge of Sonos’s ecosystem. One designer noted it was “the opposite of normal feature creep”—while product designers typically push for more features, in this case they were the ones advocating for focusing on the basics.

As a product designer, this role reversal is particularly telling. Typically in a product org, designers advocate for new features and enhancements, while PMs act as a check on scope creep, ensuring we stay focused on shipping. When this dynamic inverts—when designers become the conservative voice arguing for stability and basic functionality—it’s a major red flag. It’s like architects pleading to fix the foundation while the clients want to add a third story. The fact that Sonos’s designers were raising these alarms, only to be overruled, speaks volumes about the company’s shifting priorities.

The situation became more complicated when the app refactor project, codenamed Passport, was coupled to the hardware launch schedule for the Ace headphones. One of my sources described this coupling of hardware and software releases as “the Achilles heel” of the entire project. With the Ace’s launch date set in stone, the software team faced immovable deadlines for what should have been a more flexible development timeline. This decision and many others, according to another source, were made behind closed doors, with individual contributors being told what to do without room for discussion. This left experienced team members feeling voiceless in crucial technical and product decisions. All that careful research and testing began to unravel as teams rushed to meet the hardware schedule.

This misalignment between product management and design was further complicated by organizational changes in the months leading up to launch. First, Sonos laid off many members of its forward-thinking teams. Then, closer to launch, another round of cuts significantly impacted QA and user research staff. The remaining teams were stretched thin, simultaneously maintaining the existing S2 app while building its replacement. The combination of a growing backlog from years prior and diminished testing resources created a perfect storm.

Feeding Wall Street

A data-driven slide showing Sonos’ customer base growth and revenue opportunities. It highlights increasing product registrations, growth in multi-product households, and a potential >$6 billion revenue opportunity by converting single-product households to multi-product ones.

Measurement myopia can lead to unintended consequences. When Sonos went public in 2018, three metrics the company reported to Wall Street were products registered, Sonos households, and products per household. Requiring customers to register their products is easy enough for a stationary WiFi-connected speaker. But it’s a different story for a portable speaker like the Sonos Roam, which will be used primarily as a Bluetooth speaker. When my daughter moved into the dorms at UCLA two years ago, I bought her a Roam. But because of Sonos’ quarterly financial reporting and the necessity to tabulate product registrations and new households, her Bluetooth speaker was a paperweight until she came home for Christmas. The speaker required WiFi connectivity and account creation for initial setup, but the university’s network security prevented that initial WiFi connection.

The Content Distraction

A promotional image for Sonos Radio, featuring bold white text over a red, semi-transparent square with a bubbly texture. The background shows a tattooed woman wearing a translucent green top, holding a patterned ceramic mug. Below the main text, a caption reads “Now Playing – Indie Gold”, with a play button icon beneath it. The Sonos logo is positioned vertically on the right side.

Perhaps the most egregious example of misplaced priorities, driven by the need to show revenue growth, was Sonos’ investment in content features. Sonos Radio launched in April 2020 as a complimentary service for owners. An HD, ad-free paid tier launched later the same year. Clearly, the thirst to generate another revenue stream, especially a monthly recurring one, was the impetus behind Sonos Radio. But customers thought of Sonos as a hardware company, not a content one.

At the time of the Sonos Radio HD launch, “Beagle,” a user in Sonos’ community forums, wrote (emphasis mine):

I predicted a subscription service in a post a few months back. I think it’s the inevitable outcome of floating the company - they now have to demonstrate ways of increasing revenue streams for their shareholders. In the U.K the U.S ads from the free version seem bizarre and irrelevant.

If Sonos wish to commoditise streaming music that’s their business but I see nothing new or even as good as other available services. What really concerns me is if Sonos were to start “encouraging” (forcing) users to access their streams by removing Tunein etc from the app. I’m not trying to demonise Sonos, heaven knows I own enough of their products but I have a healthy scepticism when companies join an already crowded marketplace with less than stellar offerings. Currently I have a choice between Sonos Radio and Tunein versions of all the stations I wish to use. I’ve tried both and am now going to switch everything to Tunein. Should Sonos choose to “encourage” me to use their service that would be the end of my use of their products. That may sound dramatic and hopefully will prove unnecessary but corporate arm twisting is not for me.

My sources said the company started growing its content team, reflecting the belief that Sonos would become users’ primary way to discover and consume music. However, this strategy ignored a fundamental reality: Sonos would never be able to do Spotify better than Spotify or Apple Music better than Apple.

This split focus had real consequences. As the content team expanded, the small controls team struggled with a significant backlog of UX and tech debt, often diverted to other mandatory projects. For example, one employee mentioned that a common user fear was playing music in the wrong room. I can imagine the grief I’d get from my wife if I accidentally played my emo Death Cab For Cutie while she was listening to her Eckhart Tolle podcast in the other room. Dozens, if not hundreds, of paper cuts like this remained unaddressed as resources went to building content discovery features that many users would never use. It’s evident that when buying a speaker, as a user, you want to be able to control it to play your music. It’s much less evident that you want to replace your Spotify with Sonos Radio.

But while longtime customers like Beagle didn’t appreciate the addition of Sonos content, it’s not conclusive that it was a complete waste of time and effort. The last mention of Sonos Radio’s performance was in the Q4 2022 earnings call:

Sonos Radio has become the #1 most listened to service on Sonos, and accounted for nearly 30% of all listening.

The company has said it will break out the revenue from Sonos Radio when it becomes material. It has yet to do so in the four years since its release.

The Release Decision

Four screenshots of the Sonos app interface on a mobile device, displaying music playback, browsing, and system controls. The first screen shows the home screen with recently played albums, music services, and a playback bar. The second screen presents a search interface with Apple Music and Spotify options. The third screen displays the now-playing view with album art and playback controls. The fourth screen shows multi-room speaker controls with volume levels and playback status for different rooms.

As the launch date approached, concerns about readiness grew. According to my sources, experienced engineers and designers warned that the app wasn’t ready. Basic features were missing or unstable. The new cloud-based architecture was causing latency issues. But with the Ace launch looming and business pressures mounting, these warnings fell on deaf ears.

The aftermath was swift and severe. Like countless other users, I found myself struggling with an app that had suddenly become frustratingly sluggish. Basic features that had worked reliably for years became unpredictable. Speaker groups would randomly disconnect. Simple actions like adjusting volume now had noticeable delays. The UX was confusing. The elegant simplicity that had made Sonos special was gone.

Making matters worse, the company couldn’t simply roll back to the previous version. The new app’s architecture was fundamentally incompatible with the old one, and the cloud services had been updated to support the new system. Sonos was stuck trying to fix issues on the fly while customers grew increasingly frustrated.

Looking Forward

Since the PR disaster, the company has steadily improved the app. It even published a public Trello board to keep customers apprised of its progress, though progress seemed to stall at some point, and it has since been retired.

A Trello board titled “Sonos App Improvement & Bug Tracker” displaying various columns with updates on issues, roadmap items, upcoming features, recent fixes, and implemented solutions. Categories include system issues, volume responsiveness, music library performance, and accessibility improvements for the Sonos app.

Tom Conrad, cofounder of Pandora and a director on Sonos’s board, became the company’s interim CEO after Patrick Spence was ousted. Conrad addressed these issues head-on in his first letter to employees:

I think we’ll all agree that this year we’ve let far too many people down. As we’ve seen, getting some important things right (Arc Ultra and Ace are remarkable products!) is just not enough when our customers’ alarms don’t go off, their kids can’t hear their playlist during breakfast, their surrounds don’t fire, or they can’t pause the music in time to answer the buzzing doorbell.

Conrad signals that the company has already begun shifting resources back to core functionality, promising to “get back to the innovation that is at the heart of Sonos’s incredible history.” But rebuilding trust with customers will take time.

Since Conrad’s takeover, more top brass from Sonos left the company, including the chief product officer, the chief commercial officer, and the chief marketing officer.

Lessons for Product Teams

I admit that my original hypothesis in writing this piece was that B2C tech companies are less customer-oriented in their product management decisions than B2B firms. I think about the likes of Meta making product decisions to juice engagement. But after more conversations with PM friends and some lurking in r/ProductManagement, that hypothesis was debunked. Sonos just ended up making a bunch of poor decisions.

One designer noted that what happened at Sonos isn’t necessarily unique. Incentives, organizational structures, and inertia can all color decision-making at any company. As designers, product managers, and members of product teams, what can we learn from Sonos’s series of unfortunate events?

  1. Don’t let tech debt get out of control. Companies should not let technical debt accumulate until a complete rewrite becomes necessary. Instead, they need processes to modernize their code constantly.
  2. Protect core functionality. Maintaining core functionality must be prioritized over new features when modernizing platforms. After all, users care more about reliability than fancy new capabilities. You simply can’t mess up what’s already working.
  3. Organizational memory matters. New leaders must understand and respect institutional knowledge about technology, products, and customers. Quick changes without deep understanding can be dangerous.
  4. Listen to the OG. When experienced team members raise concerns, those warnings deserve serious consideration.
  5. Align incentives with user needs. Organizations need to create systems and incentives that reward user-centric decision making. When the broader system prioritizes other metrics, even well-intentioned teams can drift away from user needs.

As a designer, I’m glad I now understand it wasn’t Design’s fault. In fact, the design team at Sonos tried to warn the powers-that-be about the impending disaster.

As a Sonos customer, I’m hopeful that Sonos will recover. I love their products—when they work. The company faces months of hard work to rebuild customer trust. For the broader tech industry, it is a reminder that even well-resourced companies can stumble when they lose sight of their core value proposition in pursuit of new initiatives.

As one of my sources reflected, the magic of Sonos was always in making complex technology invisible—you just wanted to play music, and it worked. Somewhere along the way, that simple truth got lost in the noise.


P.S. I wanted to acknowledge Michael Tsai’s excellent post on his blog about this fiasco. He’s been constantly updating it with new links from across the web. I read all of those sources when writing this post.

A futuristic scene with a glowing, tech-inspired background showing a UI design tool interface for AI, displaying a flight booking project with options for editing and previewing details. The screen promotes the tool with a “Start for free” button.

Beyond the Prompt: Finding the AI Design Tool That Actually Works for Designers

There has been an explosion of AI-powered prompt-to-code tools within the last year. The space began with full-on integrated development environments (IDEs) like Cursor and Windsurf, which enabled developers to leverage AI assistants right inside their coding apps. Then came tools like v0, Lovable, and Replit, where users could prompt screens into existence at first and, before long, entire applications.

A couple weeks ago, I decided to test out as many of these tools as I could. My aim was to find the app that would combine AI assistance, design capabilities, and the ability to use an organization’s coded design system.

While my previous essay was about the future of product design, this article dives deep into a head-to-head between all eight apps that I tried. I recorded the screen as I did my testing, so I’ve put together a video as well, in case you’d rather not read this.


It is a long video, but there’s a lot to go through. It’s also my first video on YouTube, so this is an experiment.

The Bottom Line: What the Testing Revealed

I won’t bury the lede here. AI tools can be frustrating because they are probabilistic. One hour they can solve an issue quickly and efficiently, while the next they can spin on a problem and make you want to pull your hair out. Part of this is the LLM—and they all use some combo of the major LLMs. The other part is the tool itself, for not handling gracefully what happens when its LLM fails.

For example, this morning I re-evaluated Lovable and Bolt because they’ve released new features within the last week, and I thought it would only be fair to assess the latest version. But both performed worse than in my initial testing two weeks ago. In fact, I tried Bolt twice this morning with the same prompt because the first attempt netted a blank preview. Unfortunately, the second attempt also resulted in a blank screen and then I ran out of credits. 🤷‍♂️

Scorecard for Subframe, with a total of 79 points across different categories: User experience (22), Visual design (13), Prototype (6), Ease of use (13), Design control (15), Design system integration (5), Speed (5), Editor’s discretion (0).

For designers who want actual design tools to work on UI, Subframe is the clear winner. The other tools go directly from prompt to code, without giving designers any control via a visual editor. We’re not developers, so manipulating the design in code is not for us. We need to be able to directly manipulate components by clicking and modifying shapes on the canvas or changing values in an inspector.

For me, the runner-up is v0, if you want to use it only for prototyping and for getting ideas. It’s quick—the UI is mostly unstyled, so it doesn’t get in the way of communicating the UX.

The Players: Code-Only vs. Design-Forward Tools

There are two main categories of contenders: code-only tools, and code plus design tools.

Code-Only

  • Bolt
  • Lovable
  • Polymet
  • Replit
  • v0

Code + Design

  • Onlook
  • Subframe
  • Tempo

My Testing Approach: Same Prompt, Different Results

As mentioned at the top, I tested these tools between April 16 and 27, 2025. As with most SaaS products, I’m sure things change daily, so this report captures a moment in time.

For my evaluation, since all these tools allow for generating a design from a prompt, that’s where I started. Here’s my prompt:

Create a complete shopping cart checkout experience for an online clothing retailer

I would expect the following pages to be generated:

  • Shopping cart
  • Checkout page (or pages) to capture payment and shipping information
  • Confirmation

I scored each app based on the following rubric. (A quick sketch of how a scorecard tallies up follows the list.)

  • Sample generation quality
      • User experience (25)
      • Visual design (15)
      • Prototype (10)
  • Ease of use (15)
  • Control (15)
  • Design system integration (10)
  • Speed (10)
  • Editor’s discretion (±10)
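
Concretely, a scorecard is just a weighted tally. Here’s a minimal TypeScript sketch of how the categories roll up; the names and maxima mirror the rubric above, and the example numbers are Subframe’s from its scorecard:

```typescript
// Hypothetical tally helper mirroring the rubric above; the apps' detailed
// scores live in the linked Google Sheet.
const MAX_POINTS = {
  userExperience: 25,
  visualDesign: 15,
  prototype: 10,
  easeOfUse: 15,
  control: 15,
  designSystemIntegration: 10,
  speed: 10,
} as const;

type Category = keyof typeof MAX_POINTS;
type Scorecard = Record<Category, number> & { editorsDiscretion: number }; // discretion is ±10

function totalScore(card: Scorecard): number {
  const base = (Object.keys(MAX_POINTS) as Category[])
    .reduce((sum, cat) => sum + Math.min(card[cat], MAX_POINTS[cat]), 0); // clamp to each category's max
  return base + card.editorsDiscretion; // discretion can add or subtract points
}

// Example: Subframe's category scores from the article total 79
console.log(totalScore({
  userExperience: 22, visualDesign: 13, prototype: 6, easeOfUse: 13,
  control: 15, designSystemIntegration: 5, speed: 5, editorsDiscretion: 0,
}));
```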

The Scoreboard: How Each Tool Stacked Up

AI design tools for designers, with scores: Subframe 79, Onlook 71, v0 61, Tempo 59, Polymet 58, Lovable 49, Bolt 43, Replit 31. Evaluations conducted between 4/16–4/27/25.

Final summary scores for AI design tools for designers. Evaluations conducted between 4/16–4/27/25.

Here are the summary scores for all eight tools. For the detailed breakdown of scores, view the scorecards here in this Google Sheet.

The Blow-by-Blow: The Good, the Bad, and the Ugly

Bolt

Bolt screenshot: A checkout interface with a shopping cart summary, items listed, and a “Proceed to Checkout” button, displaying prices and order summary.

First up, Bolt. Classic prompt-to-code pattern here—text box, type your prompt, watch it work. 

Bolt shows you the code generation in real-time, which is fascinating if you’re a developer but mostly noise if you’re not. The resulting design was decent but plain, with typical UX patterns. It missed delivering the confirmation page I would expect. And when I tried to re-evaluate it this morning with their new features? Complete failure—blank preview screens until I ran out of credits. No rhyme or reason. And there it is—a perfect example of the maddening inconsistency these tools deliver. Working beautifully in one session, completely broken in another. Same inputs, wildly different outputs.

Score: 43

Lovable

Lovable screenshot: A shipping information form on a checkout page, including fields for personal details and a “Continue to Payment” button.

Moving on to Lovable, which I captured this morning right after they launched their 2.0 version. The experience was a mixed bag. While it generated clean (if plain) UI with some nice touches like toast notifications and a sidebar shopping cart, it got stuck at a critical juncture—the actual checkout. I had to coax it along, asking specifically for the shopping cart that was missing from the initial generation.

The tool encountered an error but at least provided a handy “Try to fix” button. Unlike Bolt, Lovable tries to hide the code, focusing instead on the browser preview—which, as a designer, I appreciate. When it finally worked, I got a very vanilla but clean checkout flow and even the confirmation page I was looking for. Not groundbreaking, but functional. The approach of hiding code complexity might appeal to designers who don’t want to wade through development details.

Score: 49

Polymet

Polymet screenshot: A checkout page design for a fashion store showing payment method options (Credit Card, PayPal, Apple Pay), credit card fields, order summary with subtotal, shipping, tax, and total.

Next up is Polymet. This one has a very interesting interface and I kind of like it. You have your chat on the left and a canvas on the right. But instead of just showing the screen it’s working on, it’s actually creating individual components that later get combined into pages. It’s almost like building Figma components and then combining them at the end, except these are all coded components.

The design is pretty good—plain but very clean. I feel like it’s got a little more character than some of the others. What’s nice is you can go into focus mode and actually play with the prototype. I was able to navigate from the shopping cart through checkout (including Apple Pay) to confirmation. To export the code, you need to be on a paid plan, but the free trial gives you at least a taste of what it can do.

Score: 58

Replit

Replit screenshot: A developer interface showing progress on an online clothing store checkout project with error messages regarding the use of the useCart hook.

Replit was a test of patience—no exaggeration, it was the slowest tool of the bunch at 20 minutes to generate anything substantial. Why so slow? It kept encountering errors and falling into those weird loops that LLMs are prone to when they get stuck. At one point, I had to explicitly ask it to “make it work” just to progress beyond showing product pages, which wasn’t even what I’d asked for in the first place.

When it finally did generate a checkout experience, the design was nothing to write home about. Lines in the stepper weren’t aligning properly, there were random broken elements, and ultimately—it just didn’t work. I couldn’t even complete the checkout flow, which was the whole point of the exercise. I stopped recording at that point because, frankly, I just didn’t want to keep fighting with a tool that’s both slow and ineffective. 

Score: 31

v0

v0 screenshot: An online shopping cart with a multi-step checkout process, including a shipping form and order summary with prices and a “Continue to Payment” button.

Next, I took Vercel’s v0 for a spin. I think it was one of the earlier prompt-to-code generators I heard about—originally just for components, not full pages (though I could be wrong). The interface is similar to Bolt with a chat panel on the left and code on the right. As it works, it shows you the generated code in real-time, which I appreciate. It’s pretty mature and works really well.

The result almost looks like a wireframe, but the visual design has a bit more personality than Bolt’s version, even though it’s using the unstyled shadcn components. It includes form validation (which I checked), and handles the payment flow smoothly before showing a decent confirmation page. Speed-wise, v0 is impressively quick compared to some others I tested—definitely a plus when you’re iterating on designs and trying to quickly get ideas.

Score: 61

Onlook

Onlook screenshot: A design tool interface showing a cart with empty items and a “Continue Shopping” button on a fashion store checkout page.

Onlook stands out as a self-contained desktop app rather than a web tool like the others. The experience starts the same way—prompt in, wait, then boom—but instead of showing you immediate results, it drops you into a canvas view with multiple windows displaying localhost:3000, which is your computer running a web server locally. The design it generated was fairly typical and straightforward, properly capturing the shopping cart, shipping, payment, and confirmation screens I would expect. You can zoom out to see a canvas-style overview and manipulate layers, with a styles tab that lets you inspect and edit elements.

The dealbreaker? Everything gets generated as a single page application, making it frustratingly difficult to locate and edit specific states like shipping or payment. I couldn’t find these states visually or directly in the pages panel—they might’ve been buried somewhere in the layers, but I couldn’t make heads or tails of it. When I tried using it again today to capture the styles functionality for the video, I hit the same wall that plagued several other tools I tested—blank previews and errors. Despite going back and forth with the AI, I couldn’t get it running again.

Score: 71

Subframe

Subframe screenshot: A design tool interface with a checkout page showing a cart with items, a shipping summary, and the option to continue to payment.

My time with Subframe revealed a tool that takes a different approach to the same checkout prompt. Unlike most competitors, Subframe can’t create an entire flow at once (though I hear they’re working on multi-page capabilities). But honestly, I kind of like this limitation—it forces you as a designer to actually think through the process.

What sets Subframe apart is its MidJourney-like approach, offering four different design options that gradually come into focus. These aren’t just static mockups but fully coded, interactive pages you can preview in miniature. After selecting a shopping cart design, I simply asked it to create the next page, and it intelligently moved to shipping/billing info.

The real magic is having actual design tools—layers panel, property inspector, direct manipulation—alongside the ability to see the working React code. For designers who want control beyond just accepting whatever the AI spits out, Subframe delivers the best combination of AI generation and familiar design tooling.

Score: 79

Tempo

Tempo screenshot: A developer tool interface generating a clothing store checkout flow, showing wireframe components and code previews.

Lastly, Tempo. This one takes a different approach than most other tools. It starts by generating a PRD from your prompt, then creates a user flow diagram before coding the actual screens—mimicking the steps real product teams would take. Within minutes, it had generated all the different pages for my shopping cart checkout experience. That’s impressive speed, but from a design standpoint, it’s just fine. The visual design ends up being fairly plain, and the prototype had some UX issues—the payment card change was hard to notice, and the “Place order” action didn’t properly lead to a confirmation screen even though it existed in the flow.

The biggest disappointment was with Tempo’s supposed differentiator. Their DOM inspector theoretically allows you to manipulate components directly on canvas like you would in Figma—exactly what designers need. But I couldn’t get it to work no matter how hard I tried. I even came back days later to try again with a different project and reached out to their support team, but after a brief exchange—crickets. Without this feature functioning, Tempo becomes just another prompt-to-code tool rather than something truly designed for visual designers who want to manipulate components directly. Not great.

Score: 59

The Verdict: Control Beats Code Every Time

Subframe screenshot: A design tool interface displaying a checkout page for a fashion store with a cart summary and a “Proceed to Checkout” button.

Subframe offers actual design tools—layers panel, property inspector, direct manipulation—along with AI chat.

I’ve spent the last couple weeks testing these prompt-to-code tools, and if there’s one thing that’s crystal clear, it’s this: for designers who want actual design control rather than just code manipulation, Subframe is the standout winner.

I will caveat that I didn’t do a deep dive into every single tool. I played with them at a cursory level, giving each a fair shot with the same prompt. What I found was a mix of promising starts and frustrating dead ends.

The reality of AI tools is their probabilistic nature. Sometimes they’ll solve problems easily, and then at other times they’ll spectacularly fail. I experienced this firsthand when retesting both Lovable and Bolt with their latest features—both performed worse than in my initial testing just two weeks ago. Blank screens. Error messages. No rhyme or reason.

For designers like me, the dealbreaker with most of these tools is being forced to manipulate designs through code rather than through familiar design interfaces. We need to be able to directly manipulate components by clicking and modifying shapes on the canvas or changing values in an inspector. That’s where Subframe delivers while others fall short—if their audience includes designers, which might not be the case.

For us designers, I believe Subframe could be the answer. But I’m also looking forward to seeing whether Figma will have an answer. Will the company get into the AI > design > code game? Or will it be left behind?

The future belongs to applications that balance AI assistance with familiar design tooling—not just code generators with pretty previews.

Colorful illustration featuring the Figma logo on the left and a whimsical character operating complex, abstract machinery with gears, dials, and mechanical elements in vibrant colors against a yellow background.

Figma Make: Great Ideas, Nowhere to Go

Nearly three weeks after it was introduced at Figma Config 2025, I finally got access to Figma Make. It is in beta, and Figma made sure we all knew it. So I will say upfront that it’s a bit unfair to do an official review. However, many of the tools in my AI prompt-to-code shootout article are also in beta.

Since this review is fairly visual, I made a video as well that summarizes the points in this article pretty well.


The Promise: One-to-One With Your Design

Figma's Peter Ng presenting on stage with large text reading "0→1 but 1:1 with your designs" against a dark background with purple accent lighting.

Figma’s Peter Ng presenting Make’s promise on stage: “0→1 but 1:1 with your designs.”

“What if you could take an idea not only from zero to one, but also make it one-to-one with your designs?” said Peter Ng, product designer at Figma. Just like all the other AI prompt-to-code tools, Figma Make is supposed to enable users to prompt their way to a working application. 

The Figma spin is that there’s more control over the output. Click an element and have the prompt apply only to that element. Or click something on the canvas and change details like the font family, size, or color.

The other Figma advantage is being able to use pasted Figma designs for a more accurate translation to code. That’s the “one-to-one” Ng refers to.

The Reality: Falls Short

I evaluated Figma Make via my standard checkout flow prompt (thus covering the zero-to-one use case), another prompt, and with a pasted design (one-to-one).

Let’s get the standard evaluation out of the way before moving onto a deeper dive.

Figma Make Scorecard

Figma Make scorecard showing a total score of 58 out of 100, with breakdown: User experience 18/25, Visual design 13/15, Prototype 8/10, Ease of use 9/15, Design Control 6/15, Design system integration 0/10, Speed 9/10, and Editor's Discretion -5/10.

I ran the same prompt through it as the other AI tools:

Create a complete shopping cart checkout experience for an online clothing retailer

Figma Make’s score totaled 58, which puts it squarely in the middle of the pack. This was for a variety of reasons.

The quality of the generated output was pretty good. The UI was nice and clean, if a bit unstyled. This is because Make uses Shadcn UI components. Overall, the UX was exactly what I would expect. Perhaps a progress bar would have been a nice touch.

The generation was fast, clocking in at three minutes, which puts it near the top in terms of speed.

And the fine-grained editing sort of worked as promised. However, my manual changes were sometimes overridden if I used the chat.

Where It Actually Shines

Figma Make interface showing a Revenue Forecast Calculator with a $200,000 total revenue input, "Normal" distribution type selected, monthly breakdown table showing values from January ($7,407) to December ($7,407), and an orange bar chart displaying the normal distribution curve across 12 months with peak values in summer months.

The advantage of these prompt-to-code tools is that it’s really easy to prototype complex interactions—maybe even to the point of being production-ready.

To test this, I used a new prompt:

Build a revenue forecast calculator. It should take the input of a total budget from the user and automatically distribute the budget to a full calendar year showing the distribution by month. The user should be able to change the distribution curve from “Even” to “Normal” where “Normal” is a normal distribution curve.

Along with the prompt, I also included a wireframe as a still image. This gave the AI some idea of the structure I was looking for, at least.

The resulting generation was great and the functionality worked as expected. I iterated the design to include a custom input method and that worked too.
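
For a sense of how little core logic the AI had to get right, here’s a toy TypeScript version of the even-versus-normal distribution. It’s my own illustration, not the code Make generated:

```typescript
// Toy version of the calculator's core logic; not Figma Make's output.
type Curve = "even" | "normal";

function distributeBudget(total: number, curve: Curve): number[] {
  const months = 12;
  const weights =
    curve === "even"
      ? Array<number>(months).fill(1)
      : Array.from({ length: months }, (_, m) => {
          const mean = (months - 1) / 2; // peak mid-year
          const sigma = 2.5;             // spread, chosen so the tails taper visibly
          return Math.exp(-((m - mean) ** 2) / (2 * sigma ** 2));
        });
  const sum = weights.reduce((a, b) => a + b, 0);
  return weights.map((w) => (total * w) / sum); // months always sum to the total
}

// Example: $200,000 across the year with a normal curve
console.log(distributeBudget(200_000, "normal").map(Math.round));
```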

The One-to-One Promise Breaks Down

I wanted to see how well Figma Make would work with a well-structured Figma Design file. So I created a homepage for a fictional fitness instructor using auto layout frames, structuring the file as I would divs in HTML.

Figma Design interface showing the original "Body by Reese" fitness instructor homepage design with layers panel on left, main canvas displaying the Pilates hero section and content layout, and properties panel on right. This is the reference design that was pasted into Figma Make for testing.

This is the reference design that was pasted into Figma Make for testing. Notice the well-structured layers!

Then I pasted the design into the chatbox and included a simple prompt. The result was…disappointing. The layout was correct, but the typeface and type sizes were all wrong. I fed that feedback into the chat, and the right font finally appeared.

Then I manually updated the font sizes and got the design looking pretty close to my original. There was one problem: an image was the wrong size and not proportionally scaled. So I asked the AI to fix it.

Figma Make interface showing a fitness instructor homepage with "Body by Reese" branding, featuring a hero image of someone doing Pilates with "Sculpt. Strengthen. Shine." text overlay, navigation menu, and content section with instructor photo and "Book a Class" call-to-action button.

Figma Make’s attempt at translating my Figma design to code.

The AI did not fix it and reverted some of my manual overrides for the fonts. In many ways this is significantly worse than not giving designers fine-grained control in the first place. Overwriting my overrides made me lose trust in the product because I lost work—however minimal it was. It brought me back to the many occasions that Illustrator or Photoshop crashed while saving, thus corrupting the file. Yes, it wasn’t as bad, but it still felt that way.

Dead End by Design

The question of what to do with the results of a Figma Make chat remains. A Figma Make file is its own file type. You can’t bring it back into Figma Design or even Figma Sites to make tweaks. You can publish it, and it’s hosted on Figma’s infrastructure, just like Sites. You can download the code, but it’s kind of useless.

Code Export Is Cut Off at the Knees

You can download the React code as a zip file. But the code does not contain the necessary package.json that makes it installable on your local machine or on a Node.js server. The package file tells the npm installer which dependencies need to be installed for the project to run.
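
For reference, here’s roughly what a minimal package.json for a React export might look like. This is a hypothetical sketch (I’m assuming a Vite setup, and the real dependency list would have to be reverse-engineered from the exported code’s imports):

```json
{
  "name": "figma-make-export",
  "private": true,
  "scripts": {
    "dev": "vite",
    "build": "vite build"
  },
  "dependencies": {
    "react": "^18.3.1",
    "react-dom": "^18.3.1"
  },
  "devDependencies": {
    "@vitejs/plugin-react": "^4.3.0",
    "typescript": "^5.4.0",
    "vite": "^5.2.0"
  }
}
```

With something like that in place, npm install and npm run dev would stand the project up locally.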

I tried using Cursor to figure out where to move the files around—they have to be in a src directory—and to help me write a package.json, but it would have taken too much time to reverse-engineer it.

Nowhere to Go

Maybe using Figma Make inside Figma Sites will be a better use case. It’s not yet enabled for me, but that feature is the so-called Code Layers mentioned in the Make and Sites deep-dive presentation at Config. In practice, it sounds very much like Code Components in Framer.

The Bottom Line

Figma had to debut Make in order to stay competitive. There’s just too much out there nipping at their heels. While a design tool like Figma is necessary to unlock the freeform exploration designers need, the natural next step is being able to make designs real from within the tool. The likes of Lovable, v0, and Subframe allow you to start with a design from Figma and turn that design into working code. The thesis for many of those tools is that they’re taking care of everything after the designer-to-developer handoff: get a design, give the AI some context, and we’ll make it real. Figma has occupied the space before that handoff for a while, and they’re finally taking the next step.

However, in its current state, Figma Make is a dead end (see previous section). But it is beta software, which should get better before its official release. As a preview, I think it’s cool, despite its flaws and bugs. But I wouldn’t use it for any actual work.

During this beta period, Figma needs to…

  • Add complete code export so the resulting code is portable, rather than keeping it within its closed system
  • Fix the fiendish bugs around the AI overwriting manual overrides
  • Figure out tighter integration between Make and the other products, especially Design

Collection of iOS interface elements showcasing Liquid Glass design system including keyboards, menus, buttons, toggles, and dialogs with translucent materials on dark background.

Breaking Down Apple’s Liquid Glass: The Tech, The Hype, and The Reality

I kind of expected it: a lot of ink has been spilled on Liquid Glass—particularly on social media. In case you don’t remember, Liquid Glass is the new UI for all of Apple’s platforms. It was announced Monday at WWDC 2025, Apple’s annual developers conference.

The criticism is primarily around legibility and accessibility. Secondary reasons include aesthetics and power usage to animate all the bubbles.

How Liquid Glass Actually Works

Before I address the criticism, I think it would be helpful to break down the team’s design thinking and how Liquid Glass actually works.

I watched two videos from Apple’s developer site. Much of the rest of this article is a summary of those videos, so you could just watch them and skip to the end of this piece.

First off is this video that explains Liquid Glass in detail.

As I watched the video, one thing stood out clearly to me: the design team at Apple did a lot of studying of the real world before digitizing it into UI.

The Core Innovation: Lensing

Instead of scattering light like previous materials, Liquid Glass dynamically bends and shapes light in real-time. Apple calls this “lensing.”

It’s their attempt to recreate how transparent objects work in the physical world. We all intuitively understand how warping and bending light communicates presence and motion. Liquid Glass uses these visual cues to provide separation while letting content shine through.

A Multi-Layer System That Adapts

Liquid Glass toolbar with pink tinted buttons (bookmark, refresh, more) floating over geometric green background, showing tinting capabilities.

This isn’t just a simple effect. It’s built from several layers working together:

  • Highlights respond to environmental lighting and device motion. When you unlock your phone, lights move through 3D space, causing illumination to travel around the material.
  • Shadows automatically adjust based on what’s behind them—darker over text for separation, lighter over solid backgrounds.
  • Tint layers continuously adapt. As content scrolls underneath, the material flips between light and dark modes for optimal legibility.
  • Interactive feedback spreads from your fingertip throughout the element, making it feel alive and responsive.

All of this happens automatically when developers apply Liquid Glass.

Two Variants (Regular and Clear)

Liquid Glass comes in two types of material.

  • Regular is the workhorse—full adaptive behaviors, works anywhere.
  • Clear is more transparent but needs dimming layers for legibility.

Clear should only be used over media-rich content when the content layer won’t suffer from dimming. Otherwise, stick with Regular.

It’s like ice cubes—cloudy ones from your freezer versus clear ones at fancy bars that let you see your drink’s color.

Four examples of regular Liquid Glass elements: audio controls, deletion dialog, text selection menu, and red toolbar, demonstrating various applications.

Regular is the workhorse—full adaptive behaviors, works anywhere.

Video player interface with Liquid Glass controls (pause, skip buttons) overlaying blue ocean scene with sea creature.

Clear should only be used over media-rich content when the content layer won’t suffer from dimming.

Smart Contextual Changes

When elements scale up (like expanding menus), the material simulates thicker glass with deeper shadows. On larger surfaces, ambient light from nearby content subtly influences the appearance.

Elements don’t fade—they materialize by gradually modulating light bending. The gel-like flexibility responds instantly to touch, making interactions feel satisfying.

This is something that’s hard to see in stills.

The New Tinting Approach

Red "Add" button with music note icon using Liquid Glass material over black and white checkered pattern background.

Instead of flat color overlays, Apple generates tone ranges mapped to content brightness underneath. It’s inspired by how colored glass actually works—changing hue and saturation based on what’s behind it.

Apple recommends sparing use of tinting. Only for primary actions that need emphasis. Makes sense.

Design Guidelines That Matter

Liquid Glass is for the navigation and controls layer floating above content—not for everything. Don’t add Liquid Glass to content areas or make the content itself Liquid Glass. Never stack glass on glass.

Liquid Glass button with a black border and overlapping windows icon floating over blurred green plant background, showing off its accessibility mode.

Accessibility features are built-in automatically—reduced transparency, increased contrast, and reduced motion modify the material without breaking functionality.

The Legibility Outcry (and Why It’s Overblown)

Apple devices (MacBook, iPad, iPhone, Apple Watch) displaying new Liquid Glass interface with translucent elements over blue gradient wallpapers.

“Legibility” was mentioned 13 times in the 19-minute video. Clearly that was a concern of theirs. Yes, in the keynote, clear tinted device home screens were shown, and many on social media took that to be an accessibility abomination. Which, yes, it is. But that’s not the default.

The fact that the system senses the type of content underneath it and adjusts accordingly—flipping from light to dark, increasing opacity, or adjusting shadow depth—means they’re making accommodations for legibility.
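
As a toy model of that sensing-and-adjusting, here’s an illustrative TypeScript sketch. It’s purely my own; Apple’s actual implementation lives in the system’s private rendering stack:

```typescript
// Purely illustrative: adapt a glass element to the content beneath it.
interface GlassAppearance {
  mode: "light" | "dark"; // which treatment the material flips to
  opacity: number;        // how strongly it dims what's behind it
  shadowDepth: number;    // heavier shadow over text for separation
}

function adaptGlass(backdropLuminance: number, backdropHasText: boolean): GlassAppearance {
  return {
    // Flip treatment based on the brightness of the content underneath
    // (the exact mapping direction here is my assumption)
    mode: backdropLuminance > 0.5 ? "dark" : "light",
    // Text underneath calls for more opacity so labels stay legible
    opacity: backdropHasText ? 0.85 : 0.6,
    shadowDepth: backdropHasText ? 1.0 : 0.4,
  };
}

// As content scrolls by, the system would re-sample and re-adapt:
console.log(adaptGlass(0.8, true));  // bright, text-heavy backdrop: darker, more opaque glass
console.log(adaptGlass(0.2, false)); // dark media backdrop: lighter, more transparent glass
```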

Maybe Apple needs to do some tweaking, but it’s evident that they care about this.

And as with the 18 macOS releases before Tahoe—this year’s version—accessibility settings and controls have been built right in. Universal Access debuted with Mac OS X 10.2 Jaguar in 2002. Apple has had a long history of supporting customers with disabilities, dating all the way back to 1987.

So while the social media outcry about legibility is understandable, Apple’s track record suggests they’ll refine these features based on real user feedback, not just Twitter hot takes.

The Real Goal: Device Continuity

What is Liquid Glass meant to do? Unification. With the new design language, Apple has also come out with a new design system. This video, presented by Apple designer Maria Hristoforova, lays it out.

Hristoforova says that Apple’s new design system overhaul is fundamentally about creating seamless familiarity as users move between devices—ensuring that interface patterns learned on iPhone translate directly to Mac and iPad without requiring users to relearn how things work. The video points out that the company has systematically redesigned everything from typography (hooray for left alignment!) and shapes to navigation bars and sidebars around Liquid Glass as the unifying foundation, so that the same symbols, behaviors, and interactions feel consistent across all screen sizes and contexts. 

The Pattern of Promised Unity

This isn’t Apple’s first rodeo with “unified design language” promises.

Back in 2013, iOS 7’s flat design overhaul was supposed to create seamless consistency across Apple’s ecosystem. Jony Ive ditched skeuomorphism for minimalist interfaces with translucency and layering—the foundation for everything that followed.

OS X Yosemite (2014) brought those same principles to desktop. Flatter icons, cleaner lines, translucent elements. Same pitch: unified experience across devices.

macOS Big Sur (2020) pushed even further with iOS-like app icons and redesigned interfaces. Again, the promise was consistent visual language across all platforms.

And here we are in 2025 with Liquid Glass making the exact same promises. 

But maybe “goal” is a better word.

Consistency Makes the Brand

I’m OK with the goal of having a unified design language. As designers, we love consistency. Consistency is what makes a brand. As Apple has proven over and over again for decades now, it is one of the most valuable brands in the world. They maintain their position not only by making great products, but also by being incredibly disciplined about consistency.

San Francisco debuted 10 years ago as the system typeface for iOS 9 and OS X El Capitan. They’ve since extended it, and it works great in marketing and in interfaces.

iPhone Settings screen showing Liquid Glass grouped table cells with red outline highlighting the concentric shape design.

The rounded corners on their devices are all pretty much the same radii. Now that concentricity is being incorporated into the UI, screen elements will be harmonious with their physical surroundings. Only Apple can do that because they control the hardware and the software. And that is their magic.

Design Is Both How It Works and How It Looks

In 2003, two years after the iPod launched, Rob Walker of The New York Times did a profile on Apple. The now popular quote about design from Steve Jobs comes from this piece.

[The iPod] is, in short, an icon. A handful of familiar clichés have made the rounds to explain this — it’s about ease of use, it’s about Apple’s great sense of design. But what does that really mean? “Most people make the mistake of thinking design is what it looks like,” says Steve Jobs, Apple’s C.E.O. “People think it’s this veneer — that the designers are handed this box and told, ‘Make it look good!’ That’s not what we think design is. It’s not just what it looks like and feels like. Design is how it works.”

People misinterpret this quote all the time to mean design is only how it works. That is not what Steve meant. He meant design is both what it looks like and how it works.

Steve did care about aesthetics. That’s why the Graphic Design team mocked up hundreds of PowerMac G5 box designs (the graphics on the box, not the construction). That’s why he obsessed over the materials used in Pixar’s Emeryville headquarters. From Walter Isaacson’s biography:

Because the building’s steel beams were going to be visible, Jobs pored over samples from manufacturers across the country to see which had the best color and texture. He chose a mill in Arkansas, told it to blast the steel to a pure color, and made sure the truckers used caution not to nick any of it.

Liquid Glass is a welcome and much-needed visual refresh. It’s the natural evolution of Apple’s platforms, going from skeuomorphic so users knew they could use their fingers and tap on virtual buttons on a touchscreen, to flat as a response to the cacophony of visual noise in UIs at the time, and now to something kind of in-between.

Humans eventually tire of seeing the same thing. Carmakers refresh their vehicle designs every three or four years. Then they do complete redesigns every five to eight years. It gets consumers excited. 

Liquid Glass will help Apple sell a bunch more hardware.

Illustration of people working on laptops atop tall ladders and multi-level platforms, symbolizing hierarchy and competition, set against a bold, abstract sunset background.

The Design Industry Created Its Own Talent Crisis. AI Just Made It Worse.

This is the first part in a three-part series about the design talent crisis. Read Part II and Part III.

**Part I: The Vanishing Bottom Rung**

Erika Kim’s path to UX design represents a familiar pandemic-era pivot story, yet one that reveals deeper currents about creative work and economic necessity. Armed with a 2020 film and photography degree from UC Riverside, she found herself working gig photography—graduations, band events—when the creative industries collapsed. The work satisfied her artistic impulses but left her craving what she calls “structure and stability,” leading her to UX design. The field struck her as an ideal synthesis: “I’m creating solutions for companies. I’m working with them to figure out what they want, and then taking that creative input and trying to make something that works best for them.”

Since graduating from the interaction design program at San Diego City College a year ago, she’s had three internships and works retail part-time to pay the bills. “I’ve been in survival mode,” she admits. On paper, she’s a great candidate for any junior position. Speaking with her reveals a very thoughtful and resourceful young designer. Why hasn’t she been able to land a full-time job? What’s going on in the design job market? 

Back in January, Jared Spool offered an explanation. The UX job market crisis stems from a fundamental shift that occurred around late 2022—what he calls a “market inversion.” The market flipped from having far more open UX positions than qualified candidates to having far more unemployed UX professionals than available jobs. The reasons are many, including expiring tax incentives, rising interest rates, an abundance of bootcamp graduates, automated hiring processes, and globalization.

But that’s only part of the equation. I believe there’s something much larger at play, one that affects not just UX or product design but all design disciplines. Software developers have already felt the tip of this spear in their own job market. AI.

Closing Doors for New Graduates

In the first half of this year, 147 tech companies laid off over 63,000 workers, a significant portion of them engineers. Entry-level hiring has collapsed, revealing a new permanent reality. At Big Tech companies, new graduates now represent just 7% of all hires—a precipitous 25% decline from 2023 levels and a staggering 50% drop from pre-pandemic baselines in 2019.

The startup ecosystem tells an even more troubling story, where recent graduates comprise less than 6% of new hires, down 11% year-over-year and more than 30% since 2019. This isn’t merely a temporary adjustment; it represents a fundamental restructuring of how companies approach talent acquisition. Even the most credentialed computer science graduates from top-tier programs are finding themselves shut out, suggesting that the erosion of junior positions cuts across disciplines and skill levels.  

LinkedIn executive Aneesh Raman wrote in an op-ed for The New York Times that in a “recent survey of over 3,000 executives on LinkedIn at the vice president level or higher, 63 percent agreed that A.I. will eventually take on some of the mundane tasks currently allocated to their entry-level employees.”

There is already a harsh reality for entry-level tech workers. Companies have essentially frozen junior engineer and data analyst hiring because AI can now handle the routine coding and data querying tasks that were once the realm of new graduates. Hiring managers expect AI’s coding capabilities to expand rapidly, potentially eliminating entry-level roles within a year, while simultaneously increasing demand for senior engineers who can review and improve AI-generated code. It’s a brutal catch-22: junior staff lose their traditional stepping stones into the industry just as employers become less willing to invest in onboarding them.

For design students and recent graduates, this data illuminates a broader industry transformation where companies are increasingly prioritizing proven experience over potential—a shift that challenges the very foundations of how creative careers traditionally begin.

While AI tools haven’t exactly been able to replace designers yet—even junior ones—the tech will get there sooner than we think. And CEOs and those holding the purse strings are anticipating this, holding back on hiring juniors.

Portraits of five recent design graduates. From top left to right: Ashton Landis, wearing a black sleeveless top with long blonde hair against a dark background; Erika Kim, outdoors in front of a mountain at sunset, smiling in a fleece-collared jacket; Emma Haines, smiling and looking over her shoulder in a light blazer, outdoors. Bottom row, left to right: Leah Ray, in a black-and-white portrait wearing a black turtleneck, looking ahead; Benedict Allen, smiling in a black jacket with layered necklaces against a light background.

Five recent design graduates. From top left to right: Ashton Landis, Erika Kim, Emma Haines. Bottom row, left to right: Leah Ray, Benedict Allen.

The Learning-by-Doing Crisis

Ashton Landis recently graduated with a BFA in Graphic Design from California College of the Arts (full disclosure: my alma mater). She says:

I found that if you look on LinkedIn for “graphic designer” and you just say the whole San Francisco Bay area, so all of those cities, and you filter for internships and entry level as the job type, there are 36 [job postings] total. And when you go through it, 16 of them are for one or more years of experience. And five of those are for one to two years of experience. And then everything else is two plus years of experience, which doesn’t actually sound like entry level to me. …So we’re pretty slim pickings right now.

When I graduated from CCA in 1995 (or CCAC as it was known back then), we were just climbing out of the labor effects of the early 1990s recession. For my early design jobs in San Francisco, I did a lot of production and worked very closely with more senior designers and creative directors to hone my craft. While school is great for academic learning, nothing beats real-world experience.

Eric Heiman, creative director and co-owner of Volume Inc., a small design studio based in San Francisco, has been teaching at CCA for 26 years. He observes:

We internalize so much by doing things slower, right? The repetition of the process, learning through tinkering with our process, and making mistakes, and things like that. We have internalized those skills.

Sean Bacon, chair of the Graphic Design program at San Diego City College, wonders:

What is an entry level position in design then? Where do those exist? How often have I had these companies hire my students even though they clearly don’t have those requirements. So I don’t know. I don’t know what happens, but it is scary to think we’re losing out on what I thought was really valuable training in terms of how I learned to operate, at least in a studio.

Back at the beginning of my career, I remember digitizing logos when I interned with Mark Fox, a talented logo designer based in Marin County. A brilliant draftsman, he had inked—and still inks—all of his logos by hand. The act of redrawing marks in Illustrator helped me develop my sense of proportion, curves, and optical alignment. At digital agencies, I started my journey redesigning layouts of banners in different sizes. I would eventually have juniors do that for me as I rose through the ranks. These experiences—though a little painful at the time—were pivotal in perfecting our collective craft. To echo Bacon, it was “really valuable training.”

Apprenticeships at Agencies

Working in agencies and design studios was pretty much an apprenticeship model. Junior designers shadowed more senior designers and took their lead when executing a campaign or designing more pages for a website.

For a typical website project, as a senior designer or art director, I would design the homepage and a few other critical screens, setting up the look and feel. Once those were approved by the client, junior designers would take over and execute the rest. This was efficient and allowed the younger staff to participate and put their reps in.

Searching for stock photos was another classic assignment for interns and junior designers. These were oftentimes multi-day assignments, but they helped teach juniors how to see.

But today, generative AI apps like Midjourney and Visual Electric are replacing stock photography. 

From Craft to Curation

As the industry marches toward incorporating AI into our workflows, strategy, judgment, and, most importantly, taste are critical skills.

But paradoxically, how do designers develop taste, craft, and strategic thinking without doing the grunt work?

And not only are they missing out on the mundane work because of the dearth of entry-level opportunities, but also because generative AI can give results so quickly.

Eric Heiman again:

I just give the AI a few words and poof, it’s there. How do you learn how to see things? I just feel like learning how to see is a lot about slowing down. And in the case of designers, doing things yourself over and over again, and they slowly reveal themselves through that process.

All the recent graduates I interviewed for this piece are smart, enthusiastic, and talented. Yet, Ashton Landis and Erika Kim are struggling to find full-time jobs. 

Landis doesn’t think her negative experience in the job market is “entirely because of AI,” attributing it more to “general unemployment rates are pretty high right now” and a job market that is “clearly not great.”

Questioning Career Choices

Leah Ray, a recent graphic design MFA graduate from CCA, was able to secure a position as International Visual Designer at Kuaishou, a popular Chinese short-form video and live-streaming app similar to TikTok. But it wasn’t easy. Her job search began months before graduation, extending through her thesis work and creating the kind of sustained anxiety that prompted her final school project—a speculative design exploring AI’s potential to predict alternative career futures.

I was so anxious about my next step after graduation because I didn’t have a job lined up and I didn’t know what to do. …I’m a person who follows the social clock. My parents and the people around me expect me to do the right thing at the right age. Getting a nice job was my next step, but I couldn’t finish that, which led to me feeling anxious and not knowing what to do.

But through her tenacity and some luck, she was able to land the job that she starts this month. 

No, it was not easy to find. But finding this was very lucky. I do remember I saw a lot of job descriptions for junior designers. They expect designers to have AI skills. And I think there are even some roles specifically created for people with AI-related design skills, like AI motion designer and AI model designer, sort of something like that. Like AI image training designers.

Ray’s observation reveals a fundamental shift in entry-level design expectations, where AI proficiency has moved from optional to essential, with entirely new roles emerging around AI-specific design skills.

Portraits of five design educators. From top left to right: Bradford Prairie, smiling in a jacket and button-down against a soft purple background; Elena Pacenti, seated indoors, wearing a black top with long light brown hair; Sean Bacon, smiling in a light button-down against a white background; Bottom row, left to right: Josh Silverman, smiling in a striped shirt against a dark background; Eric Heiman, in profile wearing a flat cap and glasses, black and white photo

Our five design educators. From top left to right: Bradford Prairie, Elena Pacenti, Sean Bacon. Bottom row, left to right: Josh Silverman, Eric Heiman.

Preparing Our Students

Emma Haines, a designer completing her master’s degree in Interaction Design at CCA, began her job search in May. (Her program concludes in August.) Despite not securing a job yet, she’s bullish because of the prestige and practicality of the Master of Design program.

I think this program has actually helped me a good amount from where I was starting out before. I worked for a year between undergrad and this program, and between where I was before and now, there’s a huge difference. That being said, since the industry is changing so rapidly, it feels a little hard to catch up with. That’s the part that makes me a little nervous going into it. I could be confident right now, but maybe in six months something changes and I’m not as confident going into the job market.

CCA’s one-year program represents a strategic bet on adaptability over specialization. Elena Pacenti, the program’s director, describes an intensive structure that “goes from a foundational semester with foundation of interaction design, form, communication, and research to the system part of it. So we do systems thinking, prototyping, also tangible computing.” The program’s Social Lab component is “two semester-long projects with community partners in partnership with stakeholders that are local or international from UNICEF down to the food bank in Oakland.” It positions design as a tool for social impact rather than purely commercial purposes. This compressed timeline creates what Pacenti calls curricular agility: “We’re lucky that we are very agile. We are a one-year program so we can implement changes pretty quickly without affecting years of classes and changes in the curriculum.”

Josh Silverman, who chaired it for nearly five years, reports impressive historical outcomes: “I think historically for the first nine years of the program—this is cohort 10—I think we’ve had something like 85% job placement within six months of graduation.”

Yet both educators acknowledge current market realities. Pacenti observes that “that fat and hungry market of UX designers is no longer there; it’s on a diet,” while maintaining optimism about design’s future relevance: “I do not believe that designers will be less in demand. I think there will be a tremendous need for designers.” Emma Haines’s nervousness about rapid industry change reflects this broader tension—the gap between educational preparation and market evolution that defines professional training during transformative periods.

Bradford Prairie, who has taught in San Diego City College’s Graphic Design program for nine years, embodies this experimental approach to AI in design education. “We get an easy out when it comes to AI tools,” he explains, “because we’re a program that’s meant to train people for the field. And if the field is embracing these tools, we have an obligation to make students aware of them and give some training on how to use the tools.”

Prairie’s classroom experiments reveal both the promise and pitfalls of AI-assisted design. He describes a student struggling with a logo for a DJ app who turned to ChatGPT for inspiration: “It generates a lot of expected things like turntables, headphones, and waveforms… they’re all too complicated. They all don’t really look like logos. They look more like illustrations.” But the process sparked some other ideas, so he told the student, “This is kind of interesting how the waveform is part of the turntable and… we can take this general idea and redraw it and make it simplified.”

This tension between AI output and human refinement has become central to his teaching philosophy: “If there’s one thing that AI can’t replace, it’s your sense of discernment for what is good and what is not good.” The challenge, he acknowledges, lies in developing that discernment in students who may be tempted to rely too heavily on AI from the start.

The Turning Point

These challenges are real, and they’re reshaping the design profession in fundamental ways. Traditional apprenticeships are vanishing, entry-level opportunities are scarce, and new graduates face an increasingly competitive landscape. But within this disruption lies opportunity. The same forces that have eliminated routine design tasks have also elevated the importance of uniquely human skills—strategic thinking, cultural understanding, and creative problem-solving. The path forward requires both acknowledging what’s been lost and embracing what’s possible.

Despite her struggles to land a full-time job in design, Erika Kim remains optimistic because she’s so enthused about her career choice and the opportunity ahead. Remarking on the parallels between today and the beginning of the Covid-19 pandemic, she says, “It’s kind of interesting that I’m also on completely different grounds in terms of uncertainty. But you just have to get through it, you know. Why not?”


In the next part of this series, I’ll focus on the opportunities ahead: how we as a design industry can do better and what we should be teaching our design students. In the final part, I’ll touch on what recent grads can do to find a job in this current market.

There are over 1,800 font families in Google Fonts. While as designers I’m sure we’re grateful for the trove of free fonts, good typefaces in the library are hard to spot.

Brand identity darlings Smith & Diction dropped a catalog of “Usable Google Fonts.” In a LinkedIn post, they wrote, “Screw it, here’s all of the google fonts that are actually good categorized by ‘vibe’.”

Huzzah! It’s in the form of a public Figma file. Enjoy.


Usable Google Fonts

Catalog of "usable" Google fonts as curated by Smith & Diction

figma.com
Retro-style robot standing at a large control panel filled with buttons, switches, and monitors displaying futuristic data.

The Era of the AI Browser Is Here

For nearly three years, Arc from The Browser Company has been my daily driver. To be sure, there was a little bit of a learning curve. Tabs disappeared after a day unless you pinned them. Then they became almost like bookmarks. Tabs were on the left side of the window, not at the top. Spaces let me organize my tabs based on use cases like personal, work, or finances. I could switch between tabs using control-Tab and saw little thumbnails of the pages, similar to the app switcher on my Mac. Shift-command-C copied the current page’s URL. 

All these little interface ideas added up to a productivity machine for web jockeys like myself. And so, I was saddened to hear in May that The Browser Company stopped actively developing Arc in favor of a new AI-powered browser called Dia. (They are keeping Arc updated with maintenance releases.)

They had started beta-testing Dia with college students first and just recently opened it up to Arc members. I finally got access to Dia a few weeks ago. 

But before diving into Dia, I should mention I also got access to another AI browser, Perplexity’s Comet, about a week ago. I’m on their Pro plan but somehow got an invite in my email. I had thought it was limited to those on their much more expensive Max plan. Shhh.

So this post is about both and how the future of web browsing is obviously AI-assisted, because it feels so natural.

Chat With Your Tabs

Landing page for Dia, a browser tool by The Browser Company, showcasing the tagline “Write with your tabs” and a button for early access download, along with a UI mockup for combining tabs into a writing prompt.

To be honest, I used Dia in fits and starts. It was easy to import my profiles from Arc and have all my bookmarks transferred over. But I was missing all the pro-level UI niceties that Arc had. Tabs were back at the top and acted like tabs (though they just brought back sidebar tabs in the last week). There were no spaces. I felt like it was 2021 all over again. I tried to stick with it for a week.

What Dia offers that Arc does not is, of course, a way to “chat” with your tabs. It’s a chat sidebar to the right of the web page that has the context of the page you’re on. You can also add other tabs to the chat context by simply @mentioning them.

In a recent article about Dia in The New York Times, reporter Brian X. Chen describes using it to summarize a 22-minute YouTube video about car jump starters, instantly surfacing the top products without watching the whole thing. This is a vivid illustration of the “chat with your tabs” value prop. Saving time.

I’ve been doing the same thing: asking the chat to summarize a page for me or explain some technical documentation in plain English. Or I use it as a fuzzy search to find a quote from the page that mentions something specific. For example, if I’m reading an interview with the CEO of Perplexity and I want to know if he’s tried the Dia browser yet, I can ask, “Has he used Dia yet?” instead of reading through the whole thing.

Screenshot of the Dia browser displaying a Verge article about Perplexity’s CEO, with an AI-generated sidebar summary clarifying that Aravind Srinivas has not used Dia.


Another use case is to open a few tabs and ask for advice. For example, I can open up a few shirts from an e-commerce store and ask for a recommendation.

Screenshot of the Dia browser comparing shirts on the Bonobos website, with multiple tabs open for different shirt styles. The sidebar displays AI-generated advice recommending the Everyday Oxford Shirt for a smart casual look, highlighting its versatility, fit options, and stretch comfort.

Using Dia to compare shirts and get a smart casual recommendation from the AI.

Dia also has customizable “skills,” which are essentially pre-saved prompts. I made one to craft summary bios from LinkedIn profiles.

Screenshot of the Dia browser on Josh Miller’s LinkedIn profile, with the “skills” feature generating a summarized biography highlighting his role as CEO of The Browser Company and his career background.

Using Dia’s skills feature to generate a summarized biography from a LinkedIn profile.

It’s cool. But I found that it’s a little limited because the chat is usually just with the tabs that you feed Dia. It helps you digest and process information. In other words, it’s an incremental step up from ChatGPT.

Enter Comet.

Browsing Done for You

Landing page for Comet, an AI-powered browser by Perplexity, featuring the tagline “Browse at the speed of thought” with a prominent “Get Comet” download button.

Comet by Perplexity also allows you to chat with your tabs. Asking about that Verge interview, I received a very similar answer. (No, Aravind Srinivas has not used Dia yet.) And because Perplexity search is integrated into Comet, I find that it is much better at context-setting and answering questions than Dia. But that’s not Comet’s killer feature.

Screenshot of the Comet browser displaying a Verge article about Perplexity’s CEO, with the built-in AI assistant on the right confirming Aravind Srinivas has not used the Dia browser.

Viewing the same article in Comet, with its AI assistant answering questions about the content.

Instead, it’s doing stuff with your tabs. Comet’s onboarding experience shows a few use cases like replying to emails and setting meetings, or filling an Instacart cart with the ingredients for butter chicken.

Just like Dia, when I first launched Comet, I was able to import my profiles from Arc, which included bookmarks and cookies. I was essentially still logged into all the apps and sites I was already logged into. So I tried an assistant experiment. 

One thing I often do is cross-reference restaurants that have availability on OpenTable with their ratings on Yelp. I tend to agree more with Yelpers, who are usually harsher critics than OpenTable diners. So I asked Comet to “Find me the highest rated sushi restaurants in San Diego that have availability for 2 at 7pm next Friday night on OpenTable. Pick the top 10 and then rank them by Yelp rating.” And it worked! And if I had really wanted to, I could have said “Book Takaramono sushi” and it would have done so. (Actually, I did and then quickly canceled.)

The Comet assistant helped me find a sushi restaurant reservation. Video is sped up 4x.

I tried a different experiment, something I heard Aravind Srinivas mention in his interview with The Verge. I navigated to Gmail and checked three emails I wanted to unsubscribe from. I asked the assistant to “unsubscribe from the checked emails.” The agent then essentially took over my Gmail screen, opened the first checked email, and clicked on the unsubscribe link. It repeated this process for the other two emails, though it ran into a couple of snags. First, Gmail doesn’t keep the state of the checked emails when you click into an email, but the Comet assistant was smart enough to remember the subject lines of all three. Second, it had some issues filling out the right email address in the unsubscribe form for one of them, so that attempt failed. Of the three unsubscribes, it succeeded on two.

The whole process also took about two minutes. It was wild, though, to see my Gmail being navigated by the machine. So that you know it’s in control, Comet puts a teal glow around the edges of the page, not dissimilar to the purple glow of the new Siri. And I could have stopped Comet at any time by clicking a stop button. Obviously, sitting there for two minutes and watching my computer unsubscribe from three emails is a lot longer than the 20 seconds it would have taken me to do this manually, but as with many agents, the idea is to delegate a process and come back later to check on it.

I Want My AI Browser

A couple hours after Perplexity launched Comet, Reuters published a leak with the headline “Exclusive: OpenAI to release web browser in challenge to Google Chrome.” Perplexity’s CEO seems to suggest the timing was deliberate, meant to take a bit of wind out of Comet’s sails. The Justice Department is still trying to strong-arm Google into divesting itself of Chrome. If that happens, we’re talking about breaking the most profitable feedback loop in tech history. Chrome funnels search queries directly to Google, which powers their ad empire, which funds Chrome development. Break that cycle, and suddenly you’ve got an independent Chrome that could default to any search engine, giving AI-first challengers like The Browser Company, Perplexity, and OpenAI a real shot at users.

Regardless of Chrome’s fate, I strongly believe that AI-enabled browsers are the future. Once I started chatting with my tabs, asking for summaries, seeking clarification, asking for too-technical content to be dumbed down to my level, I just can’t go back. The agentic stuff that Perplexity’s Comet is at the forefront of is just the beginning. It’s not perfect yet, but I think its utility will get there as the models get better. To quote Srinivas again:

I’m betting on the fact that in the right environment of a browser with access to all these tabs and tools, a sufficiently good reasoning model — like slightly better, maybe GPT-5, maybe like Claude 4.5, I don’t know — could get us over the edge where all these things are suddenly possible and then a recruiter’s work worth one week is just one prompt: sourcing and reach outs. And then you’ve got to do state tracking… That’s the extent to which we have an ambition to make the browser into something that feels more like an OS where these are processes that are running all the time.

It must be said that both Opera and Microsoft’s Edge also have AI built in. However, the way those features are integrated feels more like an afterthought, the same way that Arc’s own AI features felt like tiny improvements.

The AI-powered ideas in both Dia and Comet are a step change. But the basics also have to be there, and in my opinion, should be better than what Chrome offers. The interface innovations that made Arc special shouldn’t be sacrificed for AI features. Arc is/was the perfect foundation. Integrate an AI assistant that can be personalized to care about the same things you do so its summaries are relevant. The assistant can be agentic and perform tasks for you in the background while you focus on more important things. In other words, put Arc, Dia, and Comet in a blender and that could be the perfect browser of the future.

Steven Kurtz, writing for The New York Times:

For many of the Gen X-ers who embarked on creative careers in the years after [Douglas Coupland’s Generation X] was published, lessness has come to define their professional lives.

If you entered media or image-making in the ’90s — magazine publishing, newspaper journalism, photography, graphic design, advertising, music, film, TV — there’s a good chance that you are now doing something else for work. That’s because those industries have shrunk or transformed themselves radically, shutting out those whose skills were once in high demand.

My first assumption was that Kurtz was writing about AI and how it’s taking away all the creative jobs. Instead, he weaves together a multifactorial account of the diminishing value of commercial creative endeavors like photography, music, filmmaking, copywriting, and design.

“My peers, friends and I continue to navigate the unforeseen obsolescence of the career paths we chose in our early 20s,” Mr. Wilcha said. “The skills you cultivated, the craft you honed — it’s just gone. It’s startling.”

Every generation has its burdens. The particular plight of Gen X is to have grown up in one world only to hit middle age in a strange new land. It’s as if they were making candlesticks when electricity came in. The market value of their skills plummeted.

It’s more than AI, although certainly, that is top of everyone’s mind these days. Instead, it’s also stock photography and illustrations, graphic templates, the consolidation of ad agencies, the revolutionary rise of social media, and the tragic fall of traditional media.

Similar shifts have taken place in music, television and film. Software like Pro Tools has reduced the need for audio engineers and dedicated recording studios; A.I., some fear, may soon take the place of actual musicians. Streaming platforms typically order fewer episodes per season than the networks did in the heyday of “Friends” and “ER.” Big studios have slashed budgets, making life for production crews more financially precarious.

Earlier this year, I cited Baldur Bjarnason’s essay about the changing economics of web development. As an opening analogy, he referenced the shifting landscape of film and television.

Born in 1973, I am squarely in Generation X. I started my career in the design and marketing industry just as the internet was taking off. So I know exactly what the interviewees of Kurtz’s article are facing. But by dogged tenacity and sheer luck, I’ve been able to pivot and survive. Am I still a graphic designer like I was back in the mid-1990s? Nope. I’m more of a product designer now, which didn’t exist 30 years ago, and which is a subtle but distinct shift from UX designer, which has existed for about 20 years.

I’ve been lucky enough to ride the wave with the times, always remembering my core purpose.


The Gen X Career Meltdown (Gift Article)

Just when they should be at their peak, experienced workers in creative fields find that their skills are all but obsolete.

nytimes.com
Illustration of diverse designers collaborating around a table with laptops and design materials, rendered in a vibrant style with coral, yellow, and teal colors

Five Practical Strategies for Entry-Level Designers in the AI Era

*In Part I of this series on the design talent crisis, I wrote about the struggles recent grads have had finding entry-level design jobs and what might be causing the stranglehold on the design job market. In Part II, I discussed how industry and education need to change in order to ensure the survival of the profession.*

**Part III: Adaptation Through Action**

Like most Gen X kids, I grew up with a lot of freedom to roam. By fifth grade, I was regularly out of the house. My friends and I would go to an arcade in San Francisco’s Fisherman’s Wharf called The Doghouse, where naturally, they served hot dogs alongside their Joust and TRON cabinets. But we would invariably go to the Taco Bell across the street for cheap pre-dinner eats. In seventh grade—this is 1986—I walked by a ComputerLand on Van Ness Avenue and noticed a little beige computer with a built-in black and white CRT. The Macintosh screen was actually pale blue and black, but more importantly, showed MacPaint. It was my first exposure to creating graphics on a computer, which would eventually become my career.

Desktop publishing had officially begun a year earlier with the introduction of Aldus PageMaker and the Apple LaserWriter printer for the Mac, which enabled WYSIWYG (What You See Is What You Get) page layouts and high-quality printed output. A generation of designers who had created layouts using paste-up techniques with tools and materials like X-Acto knives, Rapidograph pens, rubyliths, photostats, and rubber cement had to start learning new skills. Typesetters would eventually be phased out in favor of QuarkXPress. A decade of transition would revolutionize the industry, only to be upended again by the web.

Many designers who made the jump from paste-up to desktop publishing couldn’t make the additional leap to HTML. They stayed graphic designers and a new generation of web designers emerged. I think those who were in my generation—those that started in the waning days of analog and the early days of DTP—were able to make that transition.

We are in the midst of yet another transition: to AI-augmented design. It’s so early that no one can say anything with absolute authority. Any so-called experts have been working with AI tools and AI UX patterns for maybe two years, maximum. (Caveat: the science of AI has been around for many decades, but using these new tools and techniques, and developing UX patterns for interacting with them, is genuinely new.)

It’s obvious that AI is changing not only the design industry, but nearly all industries. The transformation is having secondary effects on the job market, especially for entry-level talent like young designers.

The AI revolution mirrors the previous shifts in our industry, but with a crucial difference: it’s bigger and faster. Unlike the decade-long transitions from paste-up to desktop publishing and from print to web, AI’s impact is compressing adaptation timelines into months rather than years. For today’s design graduates facing the harsh reality documented in Part I and Part II—where entry-level positions have virtually disappeared and traditional apprenticeship pathways have been severed—understanding this historical context isn’t just academic. It’s reality for them. For some, adaptation is possible but requires deliberate strategy. The designers who will thrive aren’t necessarily those with the most polished portfolios or prestigious degrees, but those who can read the moment, position themselves strategically, and create their own pathways into an industry in tremendous flux.

As a designer who is entering the workforce, here are five practical strategies you can employ right now to increase your odds of landing a job in this market:

  1. Leverage AI literacy as a competitive differentiator
  2. Emphasize strategic thinking and systems thinking
  3. Become a “dangerous generalist”
  4. Explore alternative pathways and flexibility
  5. Connect with community

1. AI Literacy as Competitive Differentiator

Young designer orchestrating multiple AI tools on screens, with floating platform icons representing various AI tools.

Just like how Leah Ray, a recent graphic design MFA graduate from CCA, has deeply incorporated AI into her workflow, you have to get comfortable with some of the tools. (See her story in Part II for more context.)

Be proficient in the following categories of AI tools:

  • Chatbot: Choose ChatGPT, Claude, or Gemini. Learn how to write prompts. You can even use the chatbot itself to learn how to write prompts! Use it as a creative partner to bounce ideas off of and to do some initial research for you.
  • Image generator: Adobe Firefly, DALL-E, Gemini, Midjourney, or Visual Electric. Learn how to use at least one of these, but more importantly, figure out how to get consistently good results from these generators.
  • Website/web app generator: Figma Make, Lovable, or v0. Especially if you’re in an interaction design field, you’ll need to be proficient in an AI prompt-to-code tool.

Add these skills to your resume and LinkedIn profile. Share your experiments on social media. 

But being AI-literate goes beyond just the tools. It’s also about wielding AI as a design material. Here’s the good part: by getting proficient in the tools, you’re also learning about the UX patterns for AI and learning what is possible with AI technologies like LLMs, agents, and diffusion models.

I’ve linked to a number of articles about designing for AI use cases; make sure you have a basic understanding of them.

Be sure to add at least one case study in your portfolio that incorporates an AI feature.

2. Strategic Thinking and Systems Thinking

Designer pointing at an interconnected web diagram showing how design decisions create ripple effects through business systems.

Stunts like AI CEOs notwithstanding, companies don’t trust AI enough to cede strategy to it. LLMs are notoriously bad at longer tasks that contain multiple steps. So thinking about strategy and how to create a coherent system are still very much human activities.

Systems thinking—the ability to understand how different parts of a system interact and how changes in one component can create cascading effects throughout the entire system—is becoming essential for tech careers and especially designers. The World Economic Forum’s Future of Jobs Report 2025 identifies it as one of the critical skills alongside AI and big data. 

Modern technology is incredibly interconnected. AI can optimize individual elements, but it can’t see the bigger picture—how a pricing change affects user retention, how a new feature impacts server costs, or why your B2B customers need different onboarding than consumers. 

Early-career lawyers at the firm Macfarlanes are now interpreting complex contracts that used to be reserved for more senior colleagues. While AI can extract key info from contracts and flag potential issues, humans are still needed to understand the context, implications, and strategic considerations. 

Emphasize these skills in your case studies by presenting clear, logical arguments that lead to strategic insights and systemic solutions. Frame every project through a business lens. Show how your design decisions ladder up to company, brand, or product metrics. Include the downstream effects—not just the immediate impact.

3. The “Dangerous Generalist” Advantage

Multi-armed designer like an octopus, each arm holding different design tools including research, strategy, prototypes, and presentations.

Josh Silverman, professor at CCA and also a design coach and recruiter, has an idea he calls the “dangerous generalist.” This is the unicorn designer who can “do the research, the strategy, the prototyping, the visual design, the presentation, and the storytelling; and be a leader and make a measurable impact.” 

It’s a lot and seemingly unfair to expect that out of one person, but for a young and hungry designer with the right training and ambition, I think it’s possible. Other than leadership and making a measurable impact, all of those traits would have been practiced and honed at a good design program.

Be sure to have a variety of projects in your portfolio to showcase how you can do it all.

4. Alternative Pathways and Flexibility

Designer navigating a maze of career paths with signposts directing to startups, nonprofits, UI developer, and product manager roles.

Matt Ström-Awn, in an excellent piece about the product design talent crisis published last Thursday, did some research and says that in “over 600 product design listings, only 1% were for internships, and only 5% required 2 years or less of experience.”

Those are some dismal numbers for anyone trying to get a full-time job with little design experience. So you have to try creative ways of breaking into the industry. In other words, don’t get stuck applying only for junior-level jobs on LinkedIn. Do that, but do more.

Let’s break this down by type of company and type of role.

Types of Companies

Historically, I would have recommended that any new designer go to an agency first because agencies usually have the infrastructure to mentor entry-level workers. But as those jobs have dried up, consider these types of companies instead.

  • Early-stage startups: Look for seed-stage or Series A startups. Companies who have just raised their Series A will make a big announcement, so they should be easy to target. Note that you will often be the only designer in the company, so you’ll be doing a lot of learning on the job. If this happens, remember to find community (see below).
  • Non-tech businesses: Legacy industries might be a lot slower to think about AI, much less adopt it. Focus on sectors where human touch, tradition, regulations, or analog processes dominate. These fields need design expertise, especially as many are just starting to modernize and may require digital transformation, improved branding, or modernized UX.
  • Nonprofits: With limited budgets and small teams, nonprofits and not-for-profits could be great places to work. While they tend to try to DIY everything, they will also recognize the need for designers. Organizations that invest in design are 50% more likely to see increases in fundraising revenue.

Types of Roles

In his post for UX Collective, Patrick Morgan says, “Sometimes the smartest move isn’t aiming straight for a ‘product designer’ title, but stepping into a role where you can stay close to product and grow into the craft.”

In other words, look for adjacent roles at the company you want to work for, just to get your foot in the door.

Here are some of those roles, including ones from Morgan’s list. What is appropriate for you will depend heavily on your skill sets and the type of design you want to eventually practice.

  • UI developer or front-end engineer: If you’re technically-minded, front-end developers are still sought after, though maybe not as much as before because, you know, AI. But if you’re able to snag a spot as one, it’s a way in.
  • Product manager: There is no single path to becoming a product manager. It’s a lot of the same skills a good designer should have, but with even more focus on creating strategies that come from customer insights (aka research). I’ve seen designers move into PM roles pretty easily.
  • Graphic/visual/growth/marketing designer: Again, depending on your design focus, you could already be looking for these jobs. But if you’re in UX and you see one of these roles open up, it’s another way into a company.
  • Production artist: This is usually a role at an agency or a larger company that just does design execution, though these roles are likely slowly disappearing as well.
  • Freelancer: You may already be doing this, but you can freelance. Not all companies, especially small ones, can afford a full-time designer, so they rely on freelance help. Try your hand at Upwork to build up your portfolio. Ensure that you’re charging a price that seems fair to you and that will help pay your bills.
  • Executive assistant: While this might seem odd, this is a good way to learn about a company and to show your resourcefulness. Lots of EAs are responsible for putting together events, swag, and more. Eventually, you might be able to parlay this role into a design role.
  • Intern: Internships are rare, I know. And if you haven’t done one, you should. However, ensure that the company complies with local regulations about paying interns. For example, California has strict laws about paying interns at least minimum wage. Unpaid internships are legal only if the role meets a litany of criteria, including:
      • The internship is primarily educational (similar to a school or training program).
      • The intern is the “primary beneficiary,” not the company.
      • The internship does not replace paid employees or provide substantial benefit to the employer.

5. Connecting with Community

Diverse designers in a supportive network circle, connected both in-person and digitally, with glowing threads showing mentorship relationships.

The job search is isolating. Especially now.

Josh Silverman emphasizes something often overlooked: you’re already part of communities. “Consider all the communities you identify with, as well as all the identities that are a part of you,” he points out. Think beyond LinkedIn—way beyond.

Did you volunteer at a design conference? Help a nonprofit with their rebrand? Those connections matter. Silverman suggests reaching out to three to five people—not hiring managers, but people who understand your work. Former classmates who graduated ahead of you. Designers you met at meetups. Workshop leaders.

“Whether it’s a casual coffee chat or slightly more informal informational interview, there are people who would welcome seeing your name pop up on their screen.”

These conversations aren’t always about immediate job leads. They’re about understanding where the industry’s actually heading, which companies are genuinely hiring, and what skills truly matter versus what’s in job descriptions. As Silverman notes, it’s about creating space to listen and articulate what you need—“nurturing relationships in community will have longer-term benefits.”

In practice: Join alumni Slack channels, participate in local AIGA events, contribute to open-source projects, engage in design challenges. The designers landing jobs aren’t just those with perfect portfolios. They’re the ones who stay visible.

The Path Forward Requires Adaptation, Not Despair

My 12-year-old self would be astonished at what the world is today and how this profession has evolved. I’ve been through three revolutions. Traditional to desktop publishing. Print to web. And now, human-only design to AI-augmented design.

Here’s what I know: the designers who survived those transitions weren’t necessarily the most talented. They were the most adaptable. They read the moment, learned the tools, and—crucially—didn’t wait for permission to reinvent themselves.

This transition is different. It’s faster and much more brutal to entry-level designers.

But you have advantages my generation didn’t. AI tools are accessible in ways that PageMaker and HTML never were. We had to learn through books! We learned by copying. We learned by taking weeks to craft projects. You can chat with Lovable and prompt your way to a portfolio-worthy project over a weekend. You can generate production-ready assets with Midjourney before lunch. You can prototype and test five different design directions while your coffee’s still warm.

The traditional path—degree, internship, junior role, slow climb up the ladder—is broken. Maybe permanently. But that also means the floor is being raised. You should be working on more strategic and more meaningful work earlier in your career.

But you need to be dangerous, versatile, and visible. 

The companies that will hire you might not be the ones you dreamed about in design school. The role might not have “designer” in the title. Your first year might be messier than you planned.

That’s OK. Every designer I respect has a messy and unlikely origin story.

The industry will stabilize because it always does. New expectations will emerge, new roles will be created, and yes—companies will realize they still need human designers who understand context, culture, and why that button should definitely not be bright purple.

Until then? Be the designer who ships. Who shows up. Who adapts.

The machines can’t do that. Yet.


I hope you enjoyed this series. I think it’s an important topic to discuss in our industry right now, before it’s too late. Don’t forget to read about the five grads and five educators I interviewed for the series. Please reach out if you have any comments, positive or negative. I’d love to hear them.