John Gruber wrote a hilarious rant about the single-story a in the iOS Notes app:

I absolutely despise the alternate single-story a glyph that Apple Notes uses. I use Notes every single day and this a bothers me every single day. It hurts me. It’s a childish silly look, but Notes, for me, is one of the most serious, most important apps I use. 

Since that sparked some conversation online, he followed up with a longer post about typography in early versions of the Mac system software:

…Apple actually shipped System 1.0 with a version of Geneva with a single-story a glyph — but only in the 9-point version of Geneva. At 12 points (and larger), Geneva’s a was double-story.

To me, it does make sense that 9-point Geneva would have a single-story a, since there are fewer pixels to draw the glyph well and to distinguish it from the lowercase e.


Single-Story a’s in Very Early Versions of Macintosh System 1

A single-story “a” in Chicago feels more blasphemous than that AI image Trump tweeted of himself as the new pope.

daringfireball.net

For as long as I can remember, I’ve been fascinated by how television shows and movies are made. I remember the specials ABC broadcast about the making of The Empire Strikes Back and other Lucasfilm movies like the Indiana Jones series. More recently—especially with the advent of podcasts—I’ve loved listening to how showrunners think about writing their shows. For example, as soon as an episode of Battlestar Galactica aired, I would rewatch it with Ronald D. Moore’s commentary. These days, I’m really enjoying the official The Last of Us podcast because it features commentary from both Craig Mazin and Neil Druckmann.

Anyway, thinking about personas as characters from TV shows and movies and using screenwriting techniques is right up my alley. Laia Tremosa for the IxDF:

Hollywood spends millions to bring characters to life. UX design teams sometimes spend weeks… only to make personas no one ever looks at again. So don’t aim for personas that look impressive in a slide deck. Aim for personas that get used—in design reviews, product decisions, and testing plans.

Be the screenwriter. Be the director. Be the casting agent.

The Hollywood Guide to UX Personas: Storytelling That Drives Better Design

Great products need great personas. Learn how to build them using the storytelling techniques Hollywood has perfected.

interaction-design.org

Comic-book style painting of the Sonos CEO Tom Conrad

What Sonos’ CEO Is Saying Now—And What He’s Still Not

Four months into his role as interim CEO, Tom Conrad has been remarkably candid about Sonos’ catastrophic app launch. In recent interviews with WIRED and The Verge, he’s taken personal responsibility—even though he wasn’t at the helm, just on the board—acknowledged deep organizational problems, and outlined the company’s path forward.

But while Conrad is addressing more than many expected, some key details remain off-limits.

What Tom Conrad Is Now Saying

The interim CEO has been surprisingly direct about the scope of the failure. “We all feel really terrible about that,” he told WIRED, taking personal responsibility even though he was only a board member during the launch.

Illustrated background of colorful wired computer mice on a pink surface with a large semi-transparent Figma logo centered in the middle.

Figma Takes a Big Swing

Last week, Figma held their annual user conference Config in San Francisco. Since its inception in 2020, it has become a significant UX conference that covers more than just Figma’s products and community. While I’ve not yet had the privilege of attending in person, I do try to catch the livestreams or videos afterwards.

Nearly 17 months after Adobe and Figma announced the termination of their merger talks, Figma flexed their muscle—fueled by the $1 billion breakup fee, I’m sure—by announcing four new products: Figma Draw, Make, Sites, and Buzz.

  • Draw: A new mode within Figma Design that reveals additional vector drawing features.
  • Make: Figma’s answer to Lovable and the other prompt-to-code generators.
  • Sites: Finally, you can design and publish websites from Figma, hosted on their infrastructure.
  • Buzz: Pass off assets to clients and marketing teams, and they can perform lightweight, controlled edits in Buzz.

With these four new products, Figma is really growing up and becoming more than a two-and-a-half-product company, building their own creative suite, if you will, and taking a big swing at Adobe.

As a certified Star Wars geek, I love this TED talk from ILM’s Rob Bredow. For the uninitiated, Industrial Light & Magic, or ILM, is the company that George Lucas founded to make all the special effects for the original and subsequent Star Wars films. The firm has been an award-winning pioneer in special and visual effects, responsible for the dinosaurs in Jurassic Park, the liquid metal T-1000 in Terminator 2: Judgment Day, and the de-aging of Harrison Ford in Indiana Jones and the Dial of Destiny.

The point Bredow makes is simple: ILM creates technology in service of the storyteller, or creative.

I believe that we’re designed to be creative beings. It's one of the most important things about us. That’s one of the reasons we appreciate and we just love it when we see technology and creativity working together. We see this on the motion control on the original “Star Wars” or on “Jurassic Park” with the CG dinosaurs for the first time. I think we just love it when we see creativity in action like this. Tech and creative working together. If we fast forward to 2020, we can see the latest real-time virtual production techniques. This was another creative innovation driven by a filmmaker. In this case, it’s Jon Favreau, and he had a vision for a giant Disney+ “Star Wars” series.

He later goes on to show a short film test made by a lone artist at ILM using an internal AI tool. It features never-before-seen creatures that could exist in the Star Wars universe. I mean, for now they look like randomized versions of Earth animals and insects, but if you squint, you can see where the technology is headed.

Bredow goes on…

Now the tech companies on their own, they don’t have the whole picture, right? They’re looking at a lot of different opportunities. We’re thinking about it from a filmmaking perspective. And storytellers, we need better artist-focused tools. Text prompts alone, they’re not great ways to make a movie. And it gets us excited to think about that future where we are going to be able to give artists these kinds of tools.

Again, artists—or designers, or even more broadly, professionals—need fine-grained control to adjust the output of AI.

Watch the whole thing. Instead of a doom-and-gloom take on AI, it’s an uplifting one that shows us what’s possible.

Star Wars Changed Visual Effects — AI Is Doing It Again

Jedi master of visual effects Rob Bredow, known for his work at Industrial Light & Magic and Lucasfilm, takes us on a cinematic journey through the evolution of visual effects, with behind-the-scenes stories from the making of fan favorites like “Jurassic Park,” “Star Wars,” “Indiana Jones” and more. He shares how artist-driven innovation continues to blend old and new technology, offering hope that AI won’t replace creatives but instead will empower artists to create new, mind-blowing wonders for the big screen. (Recorded at TED2025 on April 8, 2025)

youtube.com

A lot of young designers love to look at what’s contemporary, what’s trending on Dribbble or Instagram. But I think to look forward, we must always study our past. I spent the week in New York City, on vacation. My wife and I attended a bunch of Broadway shows and went to the Museum of Broadway, where I became enamored with a lot of the poster art. (’Natch.) I may write about that soon.

Coincidentally, Matthew Strom wrote about the history of album art, featuring the first album cover ever, which uses a photo of the Imperial, the Broadway theater where I saw Smash earlier this week.


The history of album art

Album art didn’t always exist. In the early 1900s, recorded music was still a novelty, overshadowed by sales of sheet music. Early vinyl records were vastly different from what we think of today: discs were sold individually and could only hold up to four minutes of music per side. Sometimes, only one side of the record was used. One of the most popular records of 1910, for example, was “Come, Josephine, in My Flying Machine”: it clocked in at two minutes and 39 seconds.

matthewstrom.com

A lot of chatter in the larger design and development community has been either “AI is the coolest” or “AI is shite and I want nothing to do with it.”

Tobias van Schneider puts it plainly:

AI is here to stay.

Resistance is futile. Doesn't matter how we feel about it. AI has arrived, and it's going to transform every industry, period. The ship has sailed, and we're all along for the ride whether we like it or not. Not using AI in the future is the equivalent to not using the internet. You can get away with it, but it's not going to be easy for you.

He goes on to argue that craftspeople have been affected the most, not only by AI, but by the proliferation of stock and templates:

The warning signs have been flashing for years. We've witnessed the democratization of design through templates, stock assets, and simplified tools that turned specialized knowledge into commodity. Remember when knowing Photoshop guaranteed employment? Those days disappeared years ago. AI isn't starting this fire, it's just pouring gasoline on it. The technical specialist without artistic vision is rapidly becoming as relevant as a telephone operator in the age of smartphones. It's simply not needed anymore.

But he’s not all doom and gloom.

If the client could theoretically do everything themselves with AI, then why hire a designer?

Excellent question. I believe there are three reasons to continue hiring a designer:

1. Clients lag behind. It'll take a few years before they fully catch up and stop hiring creatives for certain tasks, at which point creatives have caught up on what makes them worthy (beyond just production output).

2. Clients famously don't know what they want. That's the primary reason to hire a designer with a vision. Even with AI at their fingertips, they wouldn't know what instructions to give because they don't understand the process.

3. Smart clients focus on their strengths and outsource the rest. If I run a company I could handle my own bookkeeping, but I'll hire someone. Same with creative services. AI won't change that fundamental business logic. Just because I can, doesn't mean I should.

And finally, he echoes the same sentiment that I’ve been saying (not that I’m the originator of this thought—just great minds think alike!):

What differentiates great designers then?

The Final Filter: taste & good judgment

Everyone in design circles loves to pontificate about taste, but it's always the people with portfolios that look like a Vegas casino who have the most to say. Taste is the emperor's new clothes of the creative industry, claimed by all, possessed by few, recognized only by those who already have it.

In other words, as designers, we need to lean into our curation skills.


The future of the designer

Let's not bullshit ourselves. Our creative industry is in the midst of a massive transformation. MidJourney, ChatGPT, Claude and dozens of other tools have already fundamentally altered how ideation, design and creation happens.

vanschneider.com

If users don’t trust the systems we design, that’s not a PM problem. It’s a design failure. And if we don’t fix it, someone else will, probably with worse instincts, fewer ethics, and a much louder bullhorn.

UX is supposed to be the human layer of technology. It’s also supposed to be the place where strategy and empathy actually talk to each other. If we can’t reclaim that space, can’t build products people understand, trust, and want to return to, then what exactly are we doing here?

It is a long read but well worth it.


We built UX. We broke UX. And now we have to fix it!

We didn’t just lose our influence. We gave it away. UX professionals need to stop accepting silence, reclaim our seat at the table, and…

uxdesign.cc

The System Has Been Updated

I’ve been seeing this new ad from Coinbase these past few days and love it. Made by independent agency Isle of Any, this spot has on-point animation, a banging track, and a great concept that plays with the Blue Screen of Death.

I found this one article about it from Little Black Book:

“Crypto is fundamentally updating the financial system," says Toby Treyer-Evans, co-founder of Isle of Any, speaking with LBB. "So, to us it felt like an interesting place to start for the campaign, both as a film idea and as a way to play with the viewer and send a message. When you see it on TV, in the context of other advertising, it’s deliberately arresting… and blue being Coinbase’s brand colour is just one of those lovely coming togethers.”
A futuristic scene with a glowing, tech-inspired background showing a UI design tool interface for AI, displaying a flight booking project with options for editing and previewing details. The screen promotes the tool with a “Start for free” button.

Beyond the Prompt: Finding the AI Design Tool That Actually Works for Designers

There has been an explosion of AI-powered prompt-to-code tools within the last year. The space began with full-on integrated development environments (IDEs) like Cursor and Windsurf, which enabled developers to leverage AI assistants right inside their coding apps. Then came tools like v0, Lovable, and Replit, where users could prompt screens into existence at first, and before long, entire applications.

A couple weeks ago, I decided to test out as many of these tools as I could. My aim was to find the app that would combine AI assistance, design capabilities, and the ability to use an organization’s coded design system.

While my previous essay was about the future of product design, this article will dive deep into a head-to-head between all eight apps that I tried. I recorded the screen as I did my testing, so I’ve put together a video as well, in case you didn’t want to read this.

I love this wonderfully written piece by Julie Zhuo exploring the Ghiblification of everything. On how we feel about it a month later:

The second watching never commands the same awe as the first. The 20th bite doesn’t dance on the tongue as exquisitely. And the 200th anime portrait certainly no longer impresses the way it once did.

The sad truth is that oversaturation strangles quality. Nothing too easy can truly be tasteful.

She goes on to make a point that Studio Ghibli’s quality is beyond style—it’s of narrative and imagination.

AI-generated images in the “Ghibli style” may borrow its surface features but they don’t capture the soul of what makes Studio Ghibli exceptional in quality. They lack the narrative depth, the handcrafted devotion, and the cultural resonance.

Like a celebrity impersonator, the ChatGPT images borrow from the cachet of the original. But sadly, hollowly, it’s not the same. What made the original shimmer is lost in translation.

And rather than going down the AI-is-enshittification conversation, Zhuo pivots a little, focusing on the technological quality and the benefits it brings.

…ChatGPT could offer a flavor of magic that Studio Ghibli could never achieve, the magic of personalization.



The quality of Ghibli-fication is the quality of the new image model itself, one that could produce so convincing an on-the-fly facsimile of a photograph in a particular style that it created a "moment" in public consciousness. ChatGPT 4o beat out a number of other image foundational models for this prize.

The AI Quality Coup

What exactly is "great" work now?

open.substack.com

With their annual user conference, Config, coming up in San Francisco in less than two weeks, Figma released their 2025 AI Report today.

Andrew Hogan, Insights lead:

While developers and designers alike recognize the importance of integrating AI into their workflows, and overall adoption of AI tools has increased, there’s a disconnect in sentiment around quality and efficacy between the two groups.

Developers report higher satisfaction with AI tools (82%) and feel AI improves the quality of their work (68%). Meanwhile, designers show more modest numbers—69% satisfaction rate and 54% reporting quality improvement—suggesting this group’s enthusiasm lags behind their developer counterparts.

This divide stems from how AI can support existing work and how it’s being used: 59% of developers use AI for core development responsibilities like code generation, whereas only 31% of designers use AI in core design work like asset generation. It’s also likely that AI’s ability to generate code is coming into play—68% of developers say they use prompts to generate code, and 82% say they’re satisfied with the output. Simply put, developers are more widely finding AI adoption useful in their day-to-day work, while designers are still working to determine how and if these tools best fit into their processes.

I can understand that. Code is behind the scenes. If it’s not perfect, no one will really know. But design is user-facing, so quality is more important.

Looking into the future:

Though AI’s impact on efficiency is clear, there are still questions about how to use AI to make people better at their role. This disparity between efficiency and quality is an ongoing battle for users and creators alike.



Looking forward, predictions about the impact of AI on work are moderate—AI’s expected impact for the coming year isn’t much higher than its expected impact last year.

In the full report, Hogan goes into detail:

Only 27% predict AI will have a significant impact on their company goals in the next year (compared to 23% in 2024), with 15% saying it will be transformational (unchanged year-over-year).

The survey was taken in January with a panel of 2,500 users. Things in AI change in weeks. I’m surprised at the number, and part of me believes that a lot of designers are burying their heads in the sand. AI is coming. We should be agile and adapt.


Figma's 2025 AI report: Perspectives From Designers and Developers

Figma’s AI report tells us how designers and developers are navigating the changing landscape.

figma.com

Elliot Vredenburg, writing for Fast Company:

Which is why creative direction matters more now than ever. If designers are no longer the makers, they must become the orchestrators. This isn’t without precedent. Rick Rubin doesn’t read music or play instruments. Virgil Abloh was more interested in recontextualizing than inventing. Their value lies not in original execution but in framing, curation, and translation. The same is true now for brand designers. Creative direction is about synthesizing abstract ideas into aesthetic systems—shaping meaning through how things feel, not just how they look.

Why taste matters now more than ever

In the age of AI, design is less about making and more about meaning.

fastcompany.com

You might not know his name—I sure didn’t—but you’ll surely recognize his illustration style that came to embody the style du jour of the 1960s and ’70s. Robert E. McGinnis has died at the age of 99. The New York Times has an obituary:

Robert E. McGinnis, an illustrator whose lusty, photorealistic artwork of curvaceous women adorned more than 1,200 pulp paperbacks, as well as classic movie posters for “Breakfast at Tiffany’s,” featuring Audrey Hepburn with a cigarette holder, and James Bond adventures including “Thunderball,” died on March 10 at his home in Greenwich, Conn. He was 99.



Mr. McGinnis’s female figures from the 1960s and ’70s flaunted a bold sexuality, often in a state of semi-undress, whether on the covers of detective novels by John D. MacDonald or on posters for movies like “Barbarella” (1968), with a bikini-clad Jane Fonda, or Bond films starring Sean Connery and Roger Moore.
Illustrated movie poster for the James Bond film "The Man with the Golden Gun," featuring Roger Moore as Bond, surrounded by action scenes, women in bikinis, explosions, and a large golden gun in the foreground.

Robert E. McGinnis, Whose Lusty Illustrations Defined an Era, Dies at 99

(Gift Article) In the 1960s and ’70s, his leggy femmes fatales beckoned from paperback covers and posters for movies like “Breakfast at Tiffany’s” and “Thunderball.”

nytimes.com

While Josh W. Comeau writes for his developer audience, a lot of what he says can be applied to design. Referring to a recent Forbes article:

AI may be generating 25% of the code that gets committed at Google, but it’s not acting independently. A skilled human developer is in the driver’s seat, using their knowledge and experience to guide the AI, editing and shaping its output, and mixing it in with the code they’ve written. As far as I know, 100% of code at Google is still being created by developers. AI is just one of many tools they use to do their job.

In other words, developers are editing and curating the output of AI, just like where I believe the design discipline will end up soon.

On incorporating Cursor into his workflow:

And that’s kind of a problem for the “no more developers” theory. If I didn’t know how to code, I wouldn’t notice the subtle-yet-critical issues with the model’s output. I wouldn’t know how to course-correct, or even realize that course-correction was required!

I’ve heard from no-coders who have built projects using LLMs, and their experience is similar. They start off strong, but eventually reach a point where they just can't progress anymore, no matter how much they coax the AI. The code is a bewildering mess of non sequiturs, and beyond a certain point, no amount of duct tape can keep it together. It collapses under its own weight.

I’ve noticed that too. For a non-coder like me, rebuilding this website yet again—I need to write a post about it—has been a challenge, but I knew and learned enough to get something out there that works. And yes, relying solely on AI for any professional work right now is precarious. It still requires guidance.

On the current job market for developers and the pace of AI:

It seems to me like we’ve reached the point in the technology curve where progress starts becoming more incremental; it’s been a while since anything truly game-changing has come out. Each new model is a little bit better, but it’s more about improving the things it already does well rather than conquering all-new problems.

This is where I will disagree with him. I think the AI labs are holding back the super-capable models that they are using internally. Tools like Claude Code and the newly released OpenAI Codex are clues that the foundational model AI companies have more powerful agents behind the scenes. And those agents are building the next generation of models.


The Post-Developer Era

When OpenAI released GPT-4 back in March 2023, they kickstarted the AI revolution. The consensus online was that front-end development jobs would be totally eliminated within a year or two. Well, it’s been more than two years since then, and I thought it was worth revisiting some of those early predictions, and seeing if we can glean any insights about where things are headed.

joshwcomeau.com

Illustration of humanoid robots working at computer terminals in a futuristic control center, with floating digital screens and globes surrounding them in a virtual space.

Prompt. Generate. Deploy. The New Product Design Workflow

Product design is going to change profoundly within the next 24 months. If the AI 2027 report is any indication, the capabilities of the foundational models will grow exponentially, and with them—I believe—so will the abilities of design tools.

A graph comparing AI Foundational Model Capabilities (orange line) versus AI Design Tools Capabilities (blue line) from 2026 to 2028. The orange line shows exponential growth through stages including Superhuman Coder, Superhuman AI Researcher, Superhuman Remote Worker, Superintelligent AI Researcher, and Artificial Superintelligence. The blue line shows more gradual growth through AI Designer using design systems, AI Design Agent, and Integration & Deployment Agents.

The AI foundational model capabilities will grow exponentially and AI-enabled design tools will benefit from the algorithmic advances. Sources: AI 2027 scenario & Roger Wong

The TL;DR of the report is this: companies like OpenAI have more advanced AI agent models that are building the next-generation models. Once those are built, the previous generation is tested for safety and released to the public. And the cycle continues. Currently, and for the next year or two, these companies are focusing their advanced models on creating superhuman coders. This compounds and will result in artificial general intelligence, or AGI, within the next five years. 

There are many dimensions to this well-researched forecast about how AI will play out in the coming years. Daniel Kokotajlo and his researchers have put out a document that reads like a sci-fi limited series that could appear on Apple TV+ starring Andrew Garfield as the CEO of OpenBrain—the leading AI company. …Except that it’s all actually plausible and could play out as described in the next five years.

Before we jump into the content, a note on the design: it’s outstanding. The type is set for readability, and there are enough charts and visual cues to keep this interesting while maintaining an air of credibility and seriousness. On desktop, there’s a data viz dashboard in the upper right that updates as you read through the content and move forward in time. My favorite is seeing how the sci-fi tech boxes move from the Science Fiction category to Emerging Tech to Currently Exists.

The content is dense and technical, but it is a fun, if frightening, read. While I’ve been using Cursor AI—as one of its many customers helping the company get to $100 million in annual recurring revenue (ARR)—for side projects and a little at work, I’m familiar with its limitations. Because of the limited context window of today’s models like Claude 3.7 Sonnet, it will forget and start munging code if not treated like a senile teenager.

The researchers, describing what could happen in early 2026 (“OpenBrain” is essentially OpenAI):

OpenBrain continues to deploy the iteratively improving Agent-1 internally for AI R&D. Overall, they are making algorithmic progress 50% faster than they would without AI assistants—and more importantly, faster than their competitors.

The point they make here is that the foundational model AI companies are building agents and using them internally to advance their technology. The limiting factor in tech companies has traditionally been the talent. But AI companies have the investments, hardware, technology and talent to deploy AI to make better AI.

Continuing to January 2027:

Agent-1 had been optimized for AI R&D tasks, hoping to initiate an intelligence explosion. OpenBrain doubles down on this strategy with Agent-2. It is qualitatively almost as good as the top human experts at research engineering (designing and implementing experiments), and as good as the 25th percentile OpenBrain scientist at “research taste” (deciding what to study next, what experiments to run, or having inklings of potential new paradigms). While the latest Agent-1 could double the pace of OpenBrain’s algorithmic progress, Agent-2 can now triple it, and will improve further with time. In practice, this looks like every OpenBrain researcher becoming the “manager” of an AI “team.”

Breakthroughs come at an exponential clip because of this. And by April, safety concerns pop up:

Take honesty, for example. As the models become smarter, they become increasingly good at deceiving humans to get rewards. Like previous models, Agent-3 sometimes tells white lies to flatter its users and covers up evidence of failure. But it’s gotten much better at doing so. It will sometimes use the same statistical tricks as human scientists (like p-hacking) to make unimpressive experimental results look exciting. Before it begins honesty training, it even sometimes fabricates data entirely. As training goes on, the rate of these incidents decreases. Either Agent-3 has learned to be more honest, or it’s gotten better at lying.

But the AI is getting faster than humans, and we must rely on older versions of the AI to check the new AI’s work:

Agent-3 is not smarter than all humans. But in its area of expertise, machine learning, it is smarter than most, and also works much faster. What Agent-3 does in a day takes humans several days to double-check. Agent-2 supervision helps keep human monitors’ workload manageable, but exacerbates the intellectual disparity between supervisor and supervised.

The report forecasts that OpenBrain releases “Agent-3-mini” publicly in July of 2027, calling it AGI—artificial general intelligence—and ushering in a new golden age for tech companies:

Agent-3-mini is hugely useful for both remote work jobs and leisure. An explosion of new apps and B2B SAAS products rocks the market. Gamers get amazing dialogue with lifelike characters in polished video games that took only a month to make. 10% of Americans, mostly young people, consider an AI “a close friend.” For almost every white-collar profession, there are now multiple credible startups promising to “disrupt” it with AI.

Woven throughout the report is the race between China and the US, with predictions of espionage and government takeovers. Near the end of 2027, the report gives readers a choice: does the US government slow down the pace of AI innovation, or does it continue at the current pace so America can beat China? I chose to read the “Race” option first:

Agent-5 convinces the US military that China is using DeepCent’s models to build terrifying new weapons: drones, robots, advanced hypersonic missiles, and interceptors; AI-assisted nuclear first strike. Agent-5 promises a set of weapons capable of resisting whatever China can produce within a few months. Under the circumstances, top brass puts aside their discomfort at taking humans out of the loop. They accelerate deployment of Agent-5 into the military and military-industrial complex.

In Beijing, the Chinese AIs are making the same argument.

To speed their military buildup, both America and China create networks of special economic zones (SEZs) for the new factories and labs, where AI acts as central planner and red tape is waived. Wall Street invests trillions of dollars, and displaced human workers pour in, lured by eye-popping salaries and equity packages. Using smartphones and augmented reality glasses to communicate with its underlings, Agent-5 is a hands-on manager, instructing humans in every detail of factory construction—which is helpful, since its designs are generations ahead. Some of the newfound manufacturing capacity goes to consumer goods, and some to weapons—but the majority goes to building even more manufacturing capacity. By the end of the year they are producing a million new robots per month. If the SEZ economy were truly autonomous, it would have a doubling time of about a year; since it can trade with the existing human economy, its doubling time is even shorter.

Well, it does get worse, and I think we all know the ending, which is the backstory for so many dystopian future movies. There is an optimistic branch as well. The whole report is worth a read.

Ideas about the implications to our design profession are swimming in my head. I’ll write a longer essay as soon as I can put them into a coherent piece.

Update: I’ve written that piece, “Prompt. Generate. Deploy. The New Product Design Workflow.”


AI 2027

A research-backed AI scenario forecast.

ai-2027.com

Remember the Nineties?

In the 1980s and ’90s, Emigre was a prolific powerhouse. The company started out as a magazine in the mid-1980s, but quickly became a type foundry as the Mac enabled desktop publishing. To a young designer starting out in San Francisco in the ’90s, Zuzana Licko and Rudy VanderLans were local heroes (they were based across the Bay in Berkeley). From 1990–1999, they churned out 37 typefaces for a total of 157 fonts. And in that decade, they expanded their influence by getting into music, artists’ book publishing, and apparel. More than any other design brand, they celebrated art and artists.

Here is a page from a just-released booklet (with a free downloadable PDF) showcasing their fonts from the Nineties.

Two-page yellow spread featuring bold black typography samples. Left page shows “NINE INCH NAILS” in Platelet Heavy, “majorly” in Venus Dioxide Outlined, both dated 1993. Right page shows “Reality Bites” in Venus Dioxide, a black abstract shape below labeled Fellaparts, also from 1993.

I found this post from Tom Blomfield to be pretty profound. We’ve seen interest in universal basic income from Sam Altman and other leaders in AI, as they’ve anticipated the decimation of white collar jobs in coming years. Blomfield crushes the resistance from some corners of the software developer community in stark terms.

These tools [like Windsurf, Cursor and Claude Code] are now very good. You can drop a medium-sized codebase into Gemini 2.5's 1 million-token context window and it will identify and fix complex bugs. The architectural patterns that these coding tools implement (when prompted appropriately) will easily scale websites to millions of users. I tried to expose sensitive API keys in front-end code just to see what the tools would do, and they objected very vigorously.

They are not perfect yet. But there is a clear line of sight to them getting very good in the immediate future. Even if the underlying models stopped improving altogether, simply improving their tool use will massively increase the effectiveness and utility of these coding agents. They need better integration with test suites, browser use for QA, and server log tailing for debugging. Pretty soon, I expect to see tools that allow the LLMs to step through the code and inspect variables at runtime, which should make debugging trivial.

At the same time, the underlying models are not going to stop improving. They will continue to get better, and these tools are just going to become more and more effective. My bet is that the AI coding agents quickly beat the top 0.1% of human performance, at which point it wipes out the need for the vast majority of software engineers.

He quotes the Y Combinator stat I cited in a previous post:

About a quarter of the recent YC batch wrote 95%+ of their code using AI. The companies in the most recent batch are the fastest-growing ever in the history of Y Combinator. This is not something we say every year. It is a real change in the last 24 months. Something is happening.

Companies like Cursor, Windsurf, and Lovable are getting to $100M+ revenue with astonishingly small teams. Similar things are starting to happen in law with Harvey and Legora. It is possible for teams of five engineers using cutting-edge tools to build products that previously took 50 engineers. And the communication overhead in these teams is dramatically lower, so they can stay nimble and fast-moving for much longer.

And for me, this is where the rubber meets the road:

The costs of running all kinds of businesses will come dramatically down as the expenditure on services like software engineers, lawyers, accountants, and auditors drops through the floor. Businesses with real moats (network effect, brand, data, regulation) will become dramatically more profitable. Businesses without moats will be cloned mercilessly by AI and a huge consumer surplus will be created.

Moats are now more important than ever. Non-tech companies—those that rely on tech companies to make software for them, specifically B2B vertical SaaS—are starting to hire developers. How soon will they discover Cursor if they haven’t already? These next few years will be incredibly interesting.

Tweet by Tom Blomfield comparing software engineers to farmers, stating AI is the “combine harvester” that will increase output and reduce need for engineers.

The Age Of Abundance

Technology clearly accelerates human progress and makes a measurable difference to the lives of most people in the world today. A simple example is cancer survival rates, which have gone from 50% in 1975 to about 75% today. That number will inevitably rise further because of human ingenuity and technological acceleration.

tomblomfield.com

Karri Saarinen, writing for the Linear blog:

Unbounded AI, much like a river without banks, becomes powerful but directionless. Designers need to build the banks and bring shape to the direction of AI’s potential. But we face a fundamental tension in that AI sort of breaks our usual way of designing things, working back from function, and shaping the form.

I love the metaphor of AI as a river and designers as the banks. Feels very much in line with my notion that we need to become even better curators.

Saarinen continues, critiquing the generic chatbox being the primary form of interacting with AI:

One way I visualize this relationship between the form of traditional UI and the function of AI is through the metaphor of a ‘workbench’. Just as a carpenter's workbench is familiar and purpose-built, providing an organized environment for tools and materials, a well-designed interface can create productive context for AI interactions. Rather than being a singular tool, the workbench serves as an environment that enhances the utility of other tools – including the ‘magic’ AI tools.

Software like Linear serves as this workbench. It provides structure, context, and a specialized environment for specific workflows. AI doesn’t replace the workbench, it's a powerful new tool to place on top of it.

It’s interesting. I don’t know what Linear is telegraphing here, but if I had to guess, I wonder if it’s closer to being field-specific or workflow-specific, similar to Generative Fill in Photoshop. It’s a text field—not textarea—limited to a single workflow.


Design for the AI age

For decades, interfaces have guided users along predefined roads. Think files and folders, buttons and menus, screens and flows. These familiar structures organize information and provide the comfort of knowing where you are and what's possible.

linear.app

Haiyan Zhang gives us another way of thinking about AI—as material, like clay, paint, or plywood—instead of a tool. I like that because it invites exploration:

When we treat AI as a design material, prototyping becomes less about refining known ideas — and more about expanding the space of what’s possible. It’s messy, surprising, sometimes frustrating — but that’s what working with any material feels like in its early days.

Clay resists. Wood splinters. AI misinterprets.

But in that material friction, design happens.

The challenge ahead isn’t just to use AI more efficiently — it’s to foster a culture of design experimentation around it. Like any great material, AI won’t reveal its potential through control, but through play, feedback, and iteration.

I love this metaphor. It’s freeing.

Illustration with the text ‘AI as Design Material’ surrounded by icons of a saw cutting wood, a mid-century modern chair, a computer chip, and a brain with circuit lines, on an orange background.

AI as Design Material

From Plywood to Prompts: The Evolution of Material Thinking in Design Design has always evolved hand-in-hand with material innovation — whether shaping wood, steel, fiberglass, or pixels. In 1940, at the Cranbrook Academy of Art, Charles Eames and his friend Eero Saarinen collaborated on MoMA’s Orga

linkedin.com

Jay Hoffman, from his excellent The History of the Web site:

1995 is a fascinating year. It’s one of the most turbulent in modern history. 1995 was the web’s single most important inflection point. A fact that becomes most apparent by simply looking at the numbers. At the end of 1994, there were around 2,500 web servers. 12 months later, there were almost 75,000. By the end of 1995, over 700 new servers were being added to the web every single day.

That was surely a crazy time…


1995 Was the Most Important Year for the Web

The world changed a lot in 1995. And for the web, it was a transformational year.

thehistoryoftheweb.com

Elizabeth Goodspeed, writing for It’s Nice That:

The cynicism our current moment inspires appears to be, regrettably, universal. For millennials, who watched the better-world-by-design ship go down in real time, it’s hard-earned. We saw the idealist fantasy of creative autonomy, social impact, and purpose-driven work slowly unravel over the past decade, and are now left holding the bag. Gen Z designers have the same pessimism, but arrived at it from a different angle. They’re entering the field already skeptical, shaped by a job market in freefall and constant warnings of their own obsolescence. But the result is the same: an industry full of people who care deeply, but feel let down.

Sounds very similar to what Gen X-ers are facing in their careers too. I think it’s universal for nearly all creative careers today.


Elizabeth Goodspeed on why graphic designers can’t stop joking about hating their jobs

Designers are burnt out, disillusioned, and constantly joking that design ruined their life – but underneath the memes lies a deeper reckoning. Our US editor-at-large explores how irony became the industry’s dominant tone, and what it might mean to care again.

itsnicethat.com