
Hard to believe that the Domino’s Pizza tracker debuted in 2008. The moment was ripe for them—about a year after the debut of the iPhone. Mobile e-commerce was in its early days.

Alex Mayyasi for The Hustle:

…the tracker’s creation was spurred by the insight that online orders were more profitable – and made customers more satisfied – than phone or in-person orders. The company’s push to increase digital sales from 20% to 50% of its business led to new ways to order (via a tweet, for example) and then a new way for customers to track their order.

Mayyasi weaves together a tale of business transparency, UI, and content design, tracing—or tracking?—the tracker’s impact on business since then. “The pizza tracker is essentially a progress bar.” But progress bars do a lot for the user experience, chiefly by setting proper expectations.


How the Domino’s pizza tracker conquered the business world

One cheesy progress update at a time.

thehustle.co

America by Design, Again

President Trump signed an executive order creating America by Design, a national initiative to improve the usability and design of federal services, both digital and physical. The order establishes a National Design Studio inside the White House and appoints Airbnb co-founder and RISD graduate Joe Gebbia as the first Chief Design Officer. The studio’s mandate: cut duplicative design costs, standardize experiences to build trust, and raise the quality of government services. Gebbia said he aims to make the U.S. “the most beautiful, and usable, country in the digital world.”

Ironically, this follows the gutting of the US Digital Service, left like a caterpillar consumed from within by parasitic wasp larvae, when it was turned into DOGE. And as part of the cutting of thousands from the federal workforce, 18F, the pioneering digital services agency that started in 2014, was eliminated.

Ethan Marcotte, the designer who literally wrote the book on responsive design and worked at 18F, had some thoughts. He points out that the announcement web page weighs in at over three megabytes. That’s very heavy for a government page, and slow for the roughly 26 million people in the country without broadband. On top of that, the page is full of typos and is an accessibility nightmare.

In other words, we’re left with a web page announcing a new era of design for the United States government, but it’s tremendously costly to download, and inaccessible to many. What I want to suggest is that neither of these things are accidents: they read to me as signals of intent; of how this administration intends to practice design.

The National Design Studio’s mission is to make using government services as easy as buying from the Apple Store. Marcotte’s insight is that designing for government—at scale, for nearly 350 million people—is very different from designing in the private sector. Coordination among agencies can take years.

Despite what this new “studio” would suggest, designing better government services didn’t involve smearing an animated flag and a few nice fonts across a website. It involved months, if not years, of work: establishing a regular cadence of user research and stakeholder interviews; building partnerships across different teams or agencies; working to understand the often vast complexity of the policy and technical problems involved; and much, much more. Judging by their mission statement, this “studio” confuses surface-level aesthetics with the real, substantive work of design.

Here’s the kicker:

There’s a long, brutal history of design under fascism, and specifically in the way aesthetics are used to define a single national identity. Dwell had a good feature on this in June…

The executive order also drew some saltiness from Christopher Butler, who lays out the irony, or the waste, of it all.

The hubris of this appointment becomes clearer when viewed alongside the recent dismantling of 18F, the federal government’s existing design services office. Less than a year ago, Trump and Elon Musk’s DOGE initiative completely eviscerated this team, which was modeled after the UK’s Government Digital Service and comprised hundreds of design practitioners with deep expertise in government systems. Many of us likely knew someone at 18F. We knew how much value they offered the country. The people in charge didn’t understand what they did and didn’t care.

In other words, we were already doing what Gebbia claims he’ll accomplish in three years. The 18F team had years of experience navigating federal bureaucracy, understanding regulatory constraints, and working within existing governmental structures—precisely the institutional knowledge required for meaningful reform.

Butler knew Joe Gebbia, the appointed Chief Design Officer, in college and calls out his track record in government, or lack thereof.

Full disclosure: I attended college with Joe Gebbia and quickly formed negative impressions of his character that subsequent events have only reinforced.

While personal history colors perspective, the substantive concerns about this appointment stand independently: the mismatch between promised expertise and demonstrated capabilities, the destruction of existing institutional knowledge, the unrealistic timeline claims, and the predictable potential for conflicts of interest.

Government design reform is important work that requires deep expertise, institutional knowledge, and genuine commitment to public service. It deserves leaders with proven track records in complex systems design, not entrepreneurs whose primary experience involves circumventing existing regulations for private gain.

If anything, this is yet another illustration of this administration’s incompetence.

Here’s a fun project from Étienne Fortier-Dubois. It is both a timeline of tech innovations throughout history and a family tree. For example, the invention of the wheel led to chariots, and the ancestors of the bulletin board system were the home computer and the modem. From the about page:

The historical tech tree is a project by Étienne Fortier-Dubois to visualize the entire history of technologies, inventions, and (some) discoveries, from prehistory to today. Unlike other visualizations of the sort, the tree emphasizes the connections between technologies: prerequisites, improvements, inspirations, and so on.

These connections allow viewers to understand how technologies came about, at least to some degree, thus revealing the entire history in more detail than a simple timeline, and with more breadth than most historical narratives. The goal is not to predict future technology, except in the weak sense that knowing history can help form a better model of the world. Rather, the point of the tree is to create an easy way to explore the history of technology, discover unexpected patterns and connections, and generally make the complexity of modern tech feel less daunting.


Historical Tech Tree

Interactive visualization of technological history

historicaltechtree.com

I have always wanted to read 6,200 words about color! Sorry, that’s a lie. But I did skim it and really admired the very pretty illustrations. Dan Hollick is a saint for writing and illustrating this chapter in his living book called Making Software, a reference manual for designers and programmers who make digital products. From his newsletter:

I started writing this chapter just trying to explain what a color space is. But it turns out, you can’t really do that without explaining a lot of other stuff at the same time.

Part of the issue is color is really complicated and full of confusing terms that need a maths degree to understand. Gamuts, color models, perceptual uniformity, gamma etc. I don’t have a maths degree but I do have something better: I’m really stubborn.

And here are the opening sentences of the chapter on color:

Color is an unreasonably complex topic. Just when you think you’ve got it figured out, it reveals a whole new layer of complexity that you didn’t know existed.

This is partly because it doesn’t really exist. Sure, there are different wavelengths of light that our eyes perceive as color, but that doesn’t mean that color is actually a property of that light - it’s a phenomenon of our perception.

Digital color is about trying to map this complex interplay of light and perception into a format that computers can understand and screens can display. And it’s a miracle that any of it works at all.

I’m just waiting for him to put up a Stripe link so I can throw money at him.


Making Software: What is a color space?

In which we answer every question you've ever had about digital color, and some you haven't.

makingsoftware.com

Interesting piece from Vaughn Tan about a critical thinking framework that is disguised as a piece about building better AI UIs for critical thinking. Sorry, that sentence is kind of a tongue-twister. Tan calls out—correctly—that LLMs don’t think, or in his words, can’t make meaning:

Meaningmaking is making inherently subjective decisions about what’s valuable: what’s desirable or undesirable, what’s right or wrong. The machines behind the prompt box are remarkable tools, but they’re not meaningmaking entities.

Therefore, when users ask LLMs for their opinions on matters, as in the therapy use case, the AIs won’t come back with actual thinking. IMHO, it’s semantics, but that’s another post.

Anyhow, Tan shares a pen and paper prototype he’s been testing, which breaks down a major decision into guided steps, or put another way, a framework.

This user experience was designed to simulate a multi-stage process of structured elicitation of various aspects of strongly reasoned arguments. This design explicitly addresses both requirements for good tool use. The structured prompts helped students think critically about what they were actually trying to accomplish with their custom major proposals — the meaningmaking work of determining value, worth, and personal fit. Simultaneously, the framework made clear what kinds of thinking work the students needed to do themselves versus what kinds of information gathering and analysis could potentially be supported by tools like LLMs.

This guided or framework-driven approach was something I attempted with Griffin AI. Via a series of AI-guided prompts to the user—or a glorified form, honestly—my tool helped users build brand strategies. To be sure, a lot of the “thinking” was done by the model, but the idea that an AI can guide you to critically think about your business or your client’s business was there.


Designing AI tools that support critical thinking

Current AI interfaces lull us into thinking we’re talking to something that can make meaningful judgments about what’s valuable. We’re not — we’re using tools that are tremendously powerful but nonetheless can’t do “meaningmaking” work (the work of deciding what matters, what’s worth pursuing).

vaughntan.org

Designer Tey Bannerman writes that when he hears “human in the loop,” he’s reminded of the story of Lieutenant Colonel Stanislav Petrov, a Soviet duty officer who monitored for incoming missile strikes from the US.

12:15 AM… the unthinkable. Every alarm in the facility started screaming. The screens showed five US ballistic missiles, 28 minutes from impact. Confidence level: 100%. Petrov had minutes to decide whether to trigger a chain reaction that would start nuclear war and could very well end civilisation as we knew it.

He was the “human in the loop” in the most literal, terrifying sense.

Everything told him to follow protocol. His training. His commanders. The computers.

But something felt wrong. His intuition, built from years of intelligence work, whispered that this didn’t match what he knew about US strategic thinking.

Against every protocol, against the screaming certainty of technology, he pressed the button marked “false alarm”.

Twenty-three minutes of gripping fear passed before ground radar confirmed: no missiles. The system had mistaken a rare alignment of sunlight on high-altitude clouds for incoming warheads.

His decision to break the loop prevented nuclear war.

Then Bannerman shares an awesome framework he developed that gives humans in the loop of AI systems “genuine authority, time to think, and understanding the bigger picture well enough to question” the system’s decision. Click through to get the PDF from his site.

Framework diagram by Tey Bannerman titled Beyond ‘human in the loop’. It shows a 4×4 matrix mapping AI oversight approaches based on what is being optimized (speed/volume, quality/accuracy, compliance, innovation) and what’s at stake (irreversible consequences, high-impact failures, recoverable setbacks, low-stakes outcomes). Colored blocks represent four modes: active control, human augmentation, guided automation, and AI autonomy. Right panel gives real-world examples in e-commerce email marketing and recruitment applicant screening.

Redefining ‘human in the loop’

"Human in the loop" is overused and vague. The Petrov story shows humans must have real authority, time, and context to safely override AI. Bannerman offers a framework that asks what you optimize for and what is at stake, then maps 16 practical approaches.

teybannerman.com

Simon Sherwood, writing in The Register:

Amazon Web Services CEO Matt Garman has suggested firing junior workers because AI can do their jobs is “the dumbest thing I’ve ever heard.”

Garman made that remark in conversation with AI investor Matthew Berman, during which he talked up AWS’s Kiro AI-assisted coding tool and said he’s encountered business leaders who think AI tools “can replace all of our junior people in our company.”

That notion led to the “dumbest thing I’ve ever heard” quote, followed by a justification that junior staff are “probably the least expensive employees you have” and also the most engaged with AI tools.

“How’s that going to work when ten years in the future you have no one that has learned anything,” he asked. “My view is you absolutely want to keep hiring kids out of college and teaching them the right ways to go build software and decompose problems and think about it, just as much as you ever have.”

Yup. I agree.


AWS CEO says AI replacing junior staff is 'dumbest idea'

They're cheap and grew up with AI … so you're firing them why?

theregister.com

This post from Carly Ayres breaks down a beef between Michael Roberson (developer of an AI-enabled moodboard tool) and Elizabeth Goodspeed (writer and designer, oft-linked on this blog) and explores ragebait, putting in the reps as a junior, and designers as influencers.

Tweet by Michael Roberson defending Moodboard AI against criticism, saying if faster design research threatens your job, “you’re ngmi.” Screenshot shows a Sweetgreen brand audit board with colors, fonts, and imagery.

Tweet from Michael Roberson

The tweet earned 30,000 views, but only about 20 likes. “That ratio was pretty jarring,” [Roberson] said. Still, the strategy felt legible. “When I post things like, ‘if you don’t do X, you’re not going to make it,’ obviously, I don’t think that. These tools aren’t really capable of replacing designers just yet. It’s really easy to get views baiting and fear-mongering.”

Much like the provocative Artisan campaign, I think this is a net negative for the brand. Pretty sure I won’t be trying out Moodboard AI anytime soon, ngl.

But stepping back from the internet beef, Ayres argues that it’s a philosophical difference about the role of friction in the creative process.

Michael’s experience mirrors that of many young designers: brand audits felt like busywork during his Landor internship. “That process was super boring,” he told me. “I wasn’t learning much by copy-pasting things into a deck.” His tool promises to cut through that inefficiency, letting teams reach visual consensus faster and spend more time on execution.

Young Michael, the process is the point! If you skip this boring stuff by automating it with AI, how are you going to learn? This is but one facet of the whole discussion around expertise, wisdom, and the design talent crisis.

Goodspeed agrees with me:

Elizabeth sees it differently. “What’s interesting to me,” Elizabeth noted, “is how many people are now entering this space without a personal understanding of how the process of designing something actually works.” For her, that grunt work was formative. “The friction is the process,” she explained. “That’s how you form your point of view. You can’t just slap seven images on a board. You’re forced to think: What’s relevant? How do I organize this and communicate it clearly?”

Ultimately, the saddest point that Ayres makes—and noted by my friend Eric Heiman—is this:

When you’re young, online, and trying to get a project off the ground, caring about distribution is the difference between a hobby and a company. But there’s a cost. The more you perform expertise, the less you develop it. The more you optimize for engagement, the more you risk flattening what gave the work meaning in the first place. In a world where being known matters more than knowing, the incentives point toward performance over practice. And we all become performers in someone else’s growth strategy.

…Because when distribution matters more than craft, you don’t become a designer by designing. You become a designer by being known as one. That’s the game now.


Mooooooooooooooood

Is design discourse the new growth hack?

open.substack.com
Surreal black-and-white artwork of a glowing spiral galaxy dripping paint-like streaks over a city skyline at night.

Why I’m Keeping My Design Title

In the 2011 documentary Jiro Dreams of Sushi, then-85-year-old sushi master Jiro Ono says this about craft:

Once you decide on your occupation… you must immerse yourself in your work. You have to fall in love with your work. Never complain about your job. You must dedicate your life to mastering your skill. That’s the secret of success and is the key to being regarded honorably.

Craft is typically thought of as the formal aspects of any field such as design, woodworking, writing, or cooking. In design, we think about composition, spacing, and typography—being pixel-perfect. But one’s craft is much more than that. Ono’s sushi craft is not solely about slicing fish and pressing it against a bit of rice. It is also about picking the right fish, toasting the nori just so, cooking the rice perfectly, and running a restaurant. It’s the whole thing.

Therefore, mastering design—or any occupation—takes time, experience, or reps as the kids say. So it’s to my dismay that Suff Syed’s essay “Why I’m Giving Up My Design Title — And What That Says About the Future of Design” got so much play in recent weeks. Syed is Head of Product Design at Microsoft—er, was. I guess his title is now Member of the Technical Staff. In a perfectly well-argued and well-written essay, he concludes:

That’s why I’m switching careers. From Head of Product Design to Member of Technical Staff.

This isn’t a farewell to experience, clarity, or elegance. It’s a return to first principles. I want to get closer to the metal—to shape the primitives, models, and agents that will define how tomorrow’s software is built.

We need more people at the intersection. Builders who understand agentic flows and elevated experiences. Designers who can reason about trust boundaries and token windows. Researchers who can make complex systems usable—without dumbing them down to a chat interface.

In the 2,800 words preceding the above quote, Syed lays out a five-point argument: the paradigm for software is changing to agentic AI, design doesn’t drive innovation, fewer design leaders will be needed in the future, the commoditization of design, and the pay gap. The tl;dr being that design as a profession is dead and building with AI is where it’s at. 

With respect to Mr. Syed, I call bullshit. 

Let’s discuss each of his arguments.

The Paradigm Argument

Suff Syed:

The entire traditional role of product designers, creating static UI in Silicon Valley offices that work for billions of users, is becoming increasingly irrelevant; when the Agent can simply generate the UI it needs for every single user.

That’s a very narrow view of what user experience designers do. In this diagram by Dan Saffer from 2008, UX encircles a large swath of disciplines. The diagram is a bit dated, so it doesn’t cover newer disciplines like service design or AI design.

Diagram titled The Disciplines of UX showing overlapping circles of fields like Industrial Design, Human Factors, Communication Design, and Architecture. The central green overlap highlights Interaction Design, surrounded by related areas such as usability engineering, information architecture, motion design, application design, and human-computer interaction.

Originally made by envis precisely GmbH - www.envis-precisely.com, based on “The Disciplines of UX” by Dan Saffer (2008). (PDF)

I went to design school a long time ago, graduating in 1995. But even back then, in Graphic Design 2 class, graphic design wasn’t just print design. Our final project for that semester was to design an exhibit, something that humans could walk through. I’ve long lost the physical model, but my solution was inspired by the Golden Gate Bridge and how I had this impression of the main cables as welcoming arms as you drove across the bridge. My exhibit was a 20-foot tall open structure made of copper beams and a glass roof. Etched onto the roof was a poem—by whom I can’t recall—that would cast the shadows of its letters onto the ground, creating an experience for anyone walking through the structure.

Similarly, thoughtful product designers consider the full experience, not just what’s rendered on the screen. How is onboarding? What’s their interaction with customer service? And with techniques like contextual inquiry, we care about the environments users are in. Understanding that nurses in a hospital work in a very busy setting and share computers is the kind of insight that can’t be gleaned from desk research or general knowledge. Designers are students of life and observers of human behavior.

Syed again:

Agents offer a radical alternative by placing control directly into users’ hands. Instead of navigating through endless interfaces, finding a good Airbnb could be as simple as having a conversation with an AI agent. The UI could be generated on the fly, tailored specifically to your preferences; an N:1 model. No more clicking around, no endless tabs, no frustration.

I don’t know. I have my doubts that this is actually going to be the future. While I agree that agentic workflows will be game-changing, I disagree that the chat UI is the right one for all use cases, or even most scenarios. I’ve previously discussed the disadvantages of prompting-only workflows and how professionals need more control.

I also disagree that users will want UIs generated on the fly. Think about the avalanche of support calls and how insane those will be if every user’s interface is different!

In my experience, users—including myself—like to spend the time to set up their software for efficiency. For example, in a dual-monitor setup, I used to expose all of Photoshop’s palettes and put them on the smaller display, and the main canvas on the larger one. Every time I got a new computer or new monitor, I would import that workspace so I could work efficiently.

Habit and muscle memory are underrated. Once a user has invested the time to arrange panels, tools, and shortcuts the way they like, changing it frequently adds friction. For productivity and work software, consistency often outweighs optimization. Even if a specialized AI-made-for-you workspace could be more “optimal” for a task, switching disrupts the user’s mental model and motor memory.

I want to provide one more example because it’s in the news: consider the backlash that OpenAI has faced in the past week with their rollout of GPT-5. OpenAI assumed people would simply welcome “the next model up,” but what they underestimated was the depth of attachment to existing workflows, and in some cases, to the personas of the models themselves. As Casey Newton put it, “it feels different and stronger than the kinds of attachment people have had to previous kinds of technology.” It’s evidence of how much emotional and cognitive investment users pour into the tools they depend on. You can’t just rip that foundation away without warning. 

Which brings us back to the heart of design: respect for the user. Not just their immediate preferences, but the habits, muscle memory, and yes, relationships that accumulate over time. Agents may generate UIs on the fly, but if they ignore the human need for continuity and control, they’ll stumble into the same backlash OpenAI faced.

The Innovation Argument

Syed’s second argument is that design supports innovation rather than drives it. I half agree with this. If we’re talking about patents or inventions, sure. Technology will always win the day. But design can certainly drive innovation.

He cites Airbnb, Figma, Notion, and Linear as being “incredible companies with design founders,” but only Airbnb is a Fortune 500 company. 

While they weren’t founded by designers, I don’t think anyone would argue that Apple, Nike, Tesla, and Disney aren’t design-led and innovative. All are in the Fortune 500. Disney treats experience design, which includes its parks, media, and consumer products, as a core capability. Imagineering is a literal design R&D division that shapes the company’s most profitable experiences. Look up Lanny Smoot.

Early prototypes of the iPhone featuring the first multitouch screens were actually tablet-sized. But Apple’s industrial design team, led by Jony Ive, along with the hardware engineering team, got the form factor to fit nicely in one hand. And it was Bas Ording, the UI designer behind Mac OS X’s Aqua design language, who prototyped the inertial effects. Farhad Manjoo, writing in Slate in 2012:

Jonathan Ive, Apple’s chief designer, had been investigating a technology that he thought could do wonderful things someday—a touch display that could understand taps from multiple fingers at once. (Note that Apple did not invent multitouch interfaces; it was one of several companies investigating the technology at the time.) According to Isaacson’s biography, the company’s initial plan was to the use the new touch system to build a tablet computer. Apple’s tablet project began in 2003—seven years before the iPad went on sale—but as it progressed, it dawned on executives that multitouch might work on phones. At one meeting in 2004, Jobs and his team looked a prototype tablet that displayed a list of contacts. “You could tap on the contact and it would slide over and show you the information,” Forstall testified. “It was just amazing.”

Jobs himself was particularly taken by two features that Bas Ording, a talented user-interface designer, had built into the tablet prototype. One was “inertial scrolling”—when you flick at a list of items on the screen, the list moves as a function of how fast you swipe, and then it comes to rest slowly, as if being affected by real-world inertia. Another was the “rubber-band effect,” which causes a list to bounce against the edge of the screen when there were no more items to display. When Jobs saw the prototype, he thought, “My god, we can build a phone out of this,” he told the D Conference in 2010.

The Leadership Argument

Suff Syed’s third argument is about what it means to be a design leader. He says, “scaling your impact as a designer meant scaling the surfaces you influence.” As you rose up through the ranks, “your craft was increasingly displaced by coordination. You became a negotiator, a timeline manager, a translator of ambition through Product and Engineering partnerships.”

Instead, he argues that because AI lets you build with fewer people, you really only need one person: “You need two people: one who understands systems and one who understands the user. Better if they’re the same person.”

That doesn’t scale. Don’t tell me that Microsoft—a company with $281 billion in revenue and 228,000 employees—will shrink like a stellar collapse into a single person with an army of AIs. That’s magical thinking.

Leaders are still needed. Influence and coordination are still needed. Humans will still be needed.

He ends this argument with:

This new world despises a calendar full of reviews, design crits, review meetings, and 1:1s. It emphasizes a repo with commits that matter. And promises the joy of shipping to return to your work. That joy unmediated by PowerPoint, politics, or process. That’s not a demotion. That’s liberation.

So he wants us all to sit in our home offices and not collaborate with others? Innovation no longer comes from lone geniuses; it’s born from bouncing ideas off your coworkers and everyone building on each other’s ideas.

Friction in the process can actually make things better. Pixar famously has a council known as the Braintrust—a small, rotating group of the studio’s best storytellers who meet regularly to tear down and rebuild works-in-progress. The rules are simple: no mandatory fixes, no sugarcoating, and no egos. The point is to push the director to see the story’s problems more clearly—and to own the solution. One of the most famous saves came with Toy Story 2. Originally destined for direct-to-video release, early cuts were so flat that the Braintrust urged the team to start from scratch. Nine frantic months later, the film emerged as one of Pixar’s most beloved works, proof that constructive creative friction can turn a near-disaster into a classic.

The Distribution Argument

Design taste has been democratized and is table stakes, says Syed in his next argument.

There was a time when every new Y Combinator startup looked like someone tortured an intern into generating a logo using Clipart. Today, thanks to a generation of exposure to good design—and better tools—most founders have internalized the basics of aesthetic judgment. First impressions matter, and now, they’re trivial to get right.

And that templates, libraries, and frameworks make it super easy and quick to spin up something tasteful in minutes:

Component libraries like Tailwind, shadcn/ui, and Radix have collapsed the design stack. What once required a full design team handcrafting a system in Figma, exporting specs to Storybook, and obsessively QA-ing the front-end… now takes a few lines of code. Spin up a repo. Drop in some components. Tweak the palette. Ship something that looks eerily close to Linear or Notion in a weekend.
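
He’s not wrong about the mechanics. Here’s a rough sketch of the workflow he describes, assuming a Next.js project with Tailwind and shadcn/ui already initialized (the "@/components/ui/*" paths are shadcn’s defaults; the component name and copy are made up for illustration):

```tsx
// Minimal sketch: compose prebuilt shadcn/ui components with Tailwind utility
// classes. Assumes shadcn's default project setup; names are illustrative.
import { Button } from "@/components/ui/button";
import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card";

export default function Landing() {
  return (
    <main className="mx-auto max-w-2xl p-8">
      <Card>
        <CardHeader>
          <CardTitle className="text-2xl tracking-tight">Ship faster</CardTitle>
        </CardHeader>
        <CardContent className="space-y-4">
          <p className="text-muted-foreground">
            Pre-styled components plus a palette tweak gets you a presentable
            page in an afternoon.
          </p>
          <Button size="lg">Get started</Button>
        </CardContent>
      </Card>
    </main>
  );
}
```

But assembling pre-styled components is the easy part, and that’s exactly the problem with this argument.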

I’m starting to think that Suff Syed believes that designers are just painters or something. Wow. This whole argument is reductive, flattening our role to be only about aesthetics. See above for how much design actually entails.

The Wealth Argument

“Nobody is paying Designers $10M, let alone $100M anytime soon.” Ah, I think this is him saying the quiet part out loud. Mr. Syed is dropping his design title and becoming a “member of the technical staff” because he’s chasing the money.

He’s right. No one is going to pay a designer $100 million total comp package. Unless you’re Jony Ive and part of io, which OpenAI acquired for $6.5 billion back in May. Which is a rare and likely once-ever occurrence.

In a recent episode of Hard Fork, The New York Times tech columnist Kevin Roose said:

The scale of money and investment going into these AI systems is unlike anything we’ve ever seen before in the tech industry. …I heard a rumor there was a big company that wasted a billion dollars or more on a failed training run. And then you start to think, oh, I understand why, to a company like Meta, the right AI talent is worth a hundred million dollars, because that level of expertise doesn’t exist that widely outside of this very small group of people. And if this person does their job well, they can save your company something more like a billion dollars. And maybe that means that you should pay them a hundred million dollars.

“Very small group of people” is likely just a couple dozen people in the world who have this expertise and are worth tens of millions of dollars.

Syed again:

People are getting generationally wealthy inventing new agentic abstractions, compressing inference cycles, and scaling frontier models safely. That’s where the gravity is. That’s where anybody should aspire to be. With AI enabling and augmenting you as an individual, there’s a far more compelling reason to chase this frontier. No reason not to.

People also get generationally wealthy by hitting the startup lottery. But it’s a hard road and there’s a lot of luck involved.

The current AI frenzy feels a lot like 1849 in California. Back then, roughly 300,000 people flooded the Sierra Nevada mountains hoping to strike gold, but the math was brutal: maybe 10% made any profit at all, the top 4% earned enough to brag a little, and only about 1% became truly rich. The rest? They left with sore backs, empty pockets, and I guess some good stories. 

Back to Reality

AI is already changing the software industry. As designers and builders of software, we are going to be using AI as material. This is as obvious as when the App Store on iPhone debuted and everyone needed to build apps.

Suff Syed wrote his piece as part personal journey and decision-making, and part rallying cry to other designers. He is essentially switching careers and says that it won’t be easy.

This transition isn’t about abandoning one identity for another. It’s about evolving—unlearning what no longer serves us and embracing the disciplines that will shape the future. There’s a new skill tree ahead: model internals, agent architectures, memory hierarchies, prompt flows, evaluation loops, and infrastructure that determines how products think, behave, and scale.

Best of luck to Suff Syed on his journey. I hope he strikes AI gold. 

As for me, I aim to continue on my journey of being a shokunin, or craftsman, like Jiro Ono. For over 30 years—if you count my amateur days in front of the Mac in middle school—I’ve been designing. Not just pushing pixels in Photoshop or Figma, but doing the work of understanding audiences and users, solving business problems, inventing new interaction patterns, and advocating for usability. All in the service of the user, and all while honing my craft.

That craft isn’t tied to a technology stack or a job title. It’s a discipline, a mindset, and a lifetime’s work. Being a designer is my life. 

So no, I’m not giving up my design title. It’s not a relic—it’s a commitment. And in a world chasing the next gold rush, I’d rather keep making work worth coming back to, knowing that in the end, gold fades but mastery endures. Besides, if I ever do get rich, it’ll be because I designed something great, not because I happened to be standing near a gold mine.

As a follow-up to yesterday’s item on how Google’s AI overviews are curtailing traffic to websites by as much as 25%, here is a link to Nielsen Norman Group’s just-published study showing that generative AI is reshaping search.

Kate Moran, Maria Rosala and Josh Brown:

While AI offers compelling shortcuts around tedious research tasks, it isn’t close to completely replacing traditional search. But, even when people are using traditional search, the AI-generated overview that now tops almost all search-results pages steals a significant amount of attention and often shortcuts the need to visit the actual pages.

They write that users have developed a way to search over the years, skipping sponsored results and heading straight for the organic links. Users also haven’t completely broken free of traditional Google Search, now adding chatbots to the mix:

While generative AI does offer enough value to change user behaviors, it has not replaced traditional search entirely. Traditional search and AI chats were often used in tandem to explore the same topic and were sometimes used to fact-check each other.

All our participants engaged in traditional search (using keywords, evaluating results pages, visiting content pages, etc.) multiple times in the study. Nobody relied entirely on genAI’s responses (in chat or in an AI overview) for all their information-seeking needs.

In many ways, I think this is smart. Unless “web search” is happening, I tend to double-check ChatGPT and Claude, especially for anything historical or mission-critical. I also like Perplexity for that reason—it shows me its receipts by giving me sources.


How AI Is Changing Search Behaviors

Our study shows that generative AI is reshaping search, but long-standing habits persist. Many users still default to Google, giving Gemini a fighting chance.

nngroup.com

The designer of the iconic “007” logo from the James Bond movies has died. Joe Caroff was 103. Jeré Longman, writing for The New York Times:

For the first Bond movie, “Dr. No” (1962), Mr. Caroff was hired to create a logo for the letterhead of a publicity release. He began working with the idea that as a secret agent, James Bond had a license to kill (as designated by the numerals “00”), but Mr. Caroff did not find Bond’s compact Walther PPK pistol to be visually appealing.

As he sketched the numerals 007, he drew penciled lines above and below to guide him and noticed that the upper guideline resembled an elongated barrel of a pistol extending from the seven.

He refined his drawing and added a trigger, fashioning a mood of intrigue and espionage and crafting one of the most globally recognized symbols in cinematic history. With some modifications, the logo has been used for 25 official Bond films and endless merchandising.

John Gruber of Daring Fireball also wrote a piece about Caroff:

Caroff had a remarkably accomplished career. He created iconic posters for dozens of terrific films across a slew of genres. The fact that he created the 007 logo but only earned $300 from it is more like a curious footnote than anything.

Joe Caroff, Who Gave James Bond His Signature 007 Logo, Dies at 103

(Gift Article) A quiet giant in graphic design, he created posters for hundreds of movies, including “West Side Story” and “A Hard Day’s Night.” But his work was often unsigned.

nytimes.com

Jessica Davies reports that new publisher data suggests that some sites are getting 25% less traffic from Google than the previous year.

Writing in Digiday:

Organic search referral traffic from Google is declining broadly, with the majority of DCN member sites — spanning both news and entertainment — experiencing traffic losses from Google search between 1% and 25%. Twelve of the respondent companies were news brands, and seven were non-news.

Jason Kint, CEO of DCN, says that this is a “direct consequence of Google AI Overviews.”

I wrote previously about the changing economics of the web here, here, and here.

And related, Eric Mersch writes in a LinkedIn post that Monday.com’s stock fell 23% because co-CEO Roy Mann said, “We are seeing some softness in the market due to Google algorithm,” during their Q2 earnings call and the analysts just kept hammering him and the CFO about how the algo changes might affect customer acquisition.

Analysts continued to press the issue, which caught company management completely off guard. Matthew Bullock from Bank of America Merrill Lynch asked frankly, “And then help us understand, why call this out now? How did the influence of Google SEO disruption change this quarter versus 1Q, for example?” The CEO could only respond, “So look, I think like we said, we optimize in real-time. We just budget daily,” implying that they were not aware of the problem until they saw Q2 results.

This is the first public sign that the shift from Google to AI-powered searches is having an impact.


Google AI Overviews linked to 25% drop in publisher referral traffic, new data shows

The majority of Digital Content Next publisher members are seeing traffic losses from Google search between 1% and 25% due to AI Overviews.

digiday.com

I grew up on MTV and I’m surprised that my Gen Z kids don’t watch music videos. ¯\_(ツ)_/¯

Rob Schwartz, writing in PRINT Magazine:

…the network launched the iconic “I Want My MTV” ad campaign. Created by ad legend George Lois, the campaign featured the world’s biggest rock stars literally demanding MTV. At the time, this was unheard of. Unlike today, rock stars would never sell out to do ads. But here you had the biggest stars: Mick Jagger, David Bowie, Pete Townshend, the Police…and rising star Madonna, all shouting the same line in different executions: ‘I want my MTV!” The campaign was a stroke of genius. It mobilized viewers to call up their cable providers and shout over the phone: “I want my MTV!” In due time, MTV was on damn-near every cable box and damn-near every young person’s TV.


The MTV Effect

Rob Schwartz on the unconventional genius of music + TV.

printmag.com

In a fascinating thread about designing a typeface in Illustrator versus a font editor, renowned typographer Jonathan Hoefler lets us peek behind the curtains.

But moreover, the reason not to design typefaces in a drawing program is that there, you’re drawing letters in isolation, without regard to their neighbors. Here’s the lowercase G from first corner of the HTF Didot family, its 96pt Light Roman master, which I drew toward the end of 1991. (Be gentle; I was 21.) I remember being delighted by the results, no doubt focussing on that delicate ear, etc. But really, this is only half the picture, because it’s impossible to know if this letter works, unless you give it context. Here it is between lowercase Ns, which establish a typographic ‘control’ for an alphabet’s weight, width, proportions, contrast, fit, and rhythm. Is this still a good G? Should the upper bowl maybe move left a little? How do we feel about its weight, compared to its neighbors? Is the ear too dainty?


Threads

Jonathan Hoefler on designing fonts in a drawing program versus a font editor

threads.com

Cap Watkins, Head of Product Design at Lattice, was catching up with a former top-performing designer who was afraid other designers were mad at her for getting all the “cool” projects.

What made those projects glamorous and desirable was her and how she approached the work. There’s that old nugget about making your own luck and that is something she excelled at. She had a unique ability to take really hard or nebulous problems (both design and team-related) and morph them into something amazing that got people excited. Instead of getting discouraged, she’d respond to friction with more energy, more enthusiasm. In so many ways, she was a transformative presence on any team and project.

In other words, this designer cared and made the best of all her assignments.

Make things happen

Top designers aren’t handed “cool” projects—they transform hard, unglamorous work into exciting wins. Stop waiting. Make your work shine. Make things happen.

capwatkins.com

I enjoyed this interview with Notion CEO Ivan Zhao over at the Decoder podcast, with substitute host Casey Newton. What I didn’t quite get when I first used Notion was the “LEGO” aspect of it. Their vision is to build business software that is highly malleable and configurable to do all sorts of things. Here’s Zhao:

Well, because it didn’t quite exist with software. If you think about the last 15 years of [software-as-a-service], it’s largely people building vertical point solutions. For each buyer, for each point, that solution sort of makes sense. The way we describe it is that it’s like a hard plastic solution for your problem, but once you have 20 different hard plastic solutions, they sort of don’t fit well together. You cannot tinker with them. As an end user, you have to jump between half a dozen of them each day.

That’s not quite right, and we’re also inspired by the early computing pioneers who in the ‘60s and ‘70s thought that computing should be more LEGO-like rather than like hard plastic. That’s what got me started working on Notion a long time ago, when I was reading a computer science paper back in college.

From a user experience POV, Notion is both simple and exceedingly complicated. Taking notes is easy. Building the system for a workflow, not so much.

In the second half, Newton (gently) presses Zhao on the impact of AI on the workforce and how productivity software like Notion could replace headcount.

Newton: Do you think that AI and Notion will get to a point where executives will hire fewer people, because Notion will do it for them? Or are you more focused on just helping people do their existing jobs?

Zhao: We’re actually putting out a campaign about this, in the coming weeks or months. We want to push out a more amplifying, positive message about what Notion can do for you. So, imagine the billboard we’re putting out. It’s you in the center. Then, with a tool like Notion or other AI tools, you can have AI teammates. Imagine that you and I start a company. We’re two co-founders, we sign up for Notion, and all of a sudden, we’re supplemented by other AI teammates, some taking notes for us, some triaging, some doing research while we’re sleeping.

Zhao dodges the “hire fewer people” part of the question and instead, answers with “amplifying” people or making them more productive.


Notion CEO Ivan Zhao wants you to demand better from your tools

Notion’s Ivan Zhao on AI agents, productivity, and how software will change in the future.

theverge.com

As a child of immigrant parents, I grew up learning English from watching PBS, Sesame Street specifically. But there were other favorites like 3-2-1 Contact, The Electric Company, and of course, Mr. Rogers’ Neighborhood. The logo, with its head looking like a P, was seared into my developing brain.

So I’m incredibly saddened to hear that the Corporation for Public Broadcasting, the government-funded entity behind PBS and NPR, will cease operations on September 30, 2025, because of a recent bill passed by the Republican-controlled Congress and signed into law by President Trump.

While PBS and NPR won’t disappear, it will be harder for those networks to stay afloat, now solely dependent on donations.

Lilly Smith, writing for Fast Company:

More than 70% of CPB’s annual federal appropriation goes directly to more than 1,500 local public media stations, according to a web page of its financials. This loss in funding could force local stations, especially in rural areas, to shut down, according to the CPB. Local member stations are independent and locally owned and operated, according to NPR. As a public-private partnership, local PBS stations get about 15% of their revenue from federal funding.

She reached out to Tom Geismar, who redesigned the PBS logo in 1984—the original was by Herb Lubalin and Ernie Smith in 1971. He had this perspective:

There is an ironic tie-in between the government decision to cut off all funding to public television and public radio, and what prompted the redesign of the PBS logo back in the early 1980s.

That was also a difficult time, financially, for the Public Broadcasting Service, and especially the stations in more remote regions of the country. Much of the public equated PBS with the major television networks CBS, NBC and ABC, and presumed that, like those major institutions, PBS was the parent of and significant funder for all the local public television stations throughout the country. But, in fact, the reality is somewhat the opposite. Although PBS local affiliates received a portion of funding from the federal government, it is the individual stations that have the responsibility to do public fund raising, and PBS, in a sense, works for them.

Because of this confusion, the PBS leadership felt that their existing logo (a famous design by Herb Lubalin) needed to be more than just the classic 3-initials mark, something more evocative of a public-benefit system serving all people. Thus the “everyone” mark was born.

Geismar ends with, “And now, once again, with federal government funding stopped, it is the stations in the less populous regions who will suffer the most.”


The designer behind the iconic 'everyman' PBS logo sees the irony in its demise

Tom Geismar designed the logo to represent the everyman. Now, he says, it’s those people who will suffer the most from the loss of public broadcast services.

fastcompany.com

Ben Davies-Romano argues that the AI chat box is our new design interface:

Every interaction with a large language model starts the same way: a blinking cursor in a blank text field. That unassuming box is more than an input — it’s the interface between our human intent and the model’s vast, probabilistic brain.

This is where the translation happens. We pour in the nuance, constraints, and context of our ideas; the model converts them into an output. Whether it’s generating words, an image, a video sequence, or an interactive prototype, every request passes through this narrow bridge.

It’s the highest-stakes, lowest-fidelity design surface I’ve ever worked with: a single field that stands between human creativity and an engine capable of reshaping it into almost any form, albeit with all the necessary guidance and expertise applied.

In other words, don’t just say “Make it better,” but guide the AI instead.

That’s why a vague, lazy prompt, like “make it better”, is the design equivalent of telling a junior designer “make it intuitive” and walking away. You’ll get something generic, safe, and soulless, not because the AI “missed the brief,” but because there was no brief.

Without clear stakes, a defined brand voice, and rich context, the system will fill in the blanks with its default, most average response. And “average” is rarely what design is aiming for.

And he makes a point that designers should be leading the charge on showing others what generative AI can do:

In the age of AI, it shouldn’t be everyone designing, per se. It should be designers using AI as an extension of our craft. Bringing our empathy, our user focus, our discipline of iteration, and our instinct for when to stop generating and start refining. AI is not a replacement for that process; it’s a multiplier when guided by skilled hands.

So, let’s lead. Let’s show that the real power of AI isn’t in what it can generate, but in how we guide it — making it safer, sharper, and more human. Let’s replace the fear and the gimmicks with clarity, empathy, and intentionality.

The blank prompt is our new canvas. And friends, we need to be all over it.


Prompting is designing. And designers need to lead.

Forget “prompt hacks.” Designers have the skills to turn AI from a gimmick into a powerful, human-centred tool if we take the lead.

medium.com

There are over 1,800 font families in Google Fonts. While I’m sure we designers are grateful for the trove of free fonts, good typefaces in the library are hard to spot.

Brand identity darlings Smith & Diction dropped a catalog of “Usable Google Fonts.” In a LinkedIn post, they wrote, “Screw it, here’s all of the google fonts that are actually good categorized by ‘vibe’.”

Huzzah! It’s in the form of a public Figma file. Enjoy.


Usable Google Fonts

Catalog of "usable" Google fonts as curated by Smith & Diction

figma.com

Christopher K. Wong argues that desirability is a key part of design that helps decide which features users really want:

To give a basic definition, desirability is a strategic part of UX that revolves around a single user question: Have you defined (and solved) the right problem for users?

In other words, before drawing a single box or arrow, have you done your research and discovery to know you’re solving a pain point?

The way the post is written makes it hard to get at a succinct definition, but here’s my take. Desirability is about ensuring a product or feature is truly wanted, needed, and chosen by users—not just visual appeal—making it a core pillar for impactful design decisions and prioritization. And designers should own this.


Want to have a strategic design voice at work? Talk about desirability

Desirability isn’t just about visual appeal: it’s one of the most important user factors

dataanddesign.substack.com

Yesterday, OpenAI launched GPT-5, their latest and greatest model that replaces the confusing assortment of GPT-4o, o3, o4-mini, etc. with just two options: GPT-5 and GPT-5 pro. The reasoning is built in, and the new model is smart enough to know when to think harder and when a quick answer suffices.

Simon Willison deep dives into GPT-5, exploring its mix of speed and deep reasoning, massive context limits, and competitive pricing. He sees it as a steady, reliable default for everyday work rather than a radical leap forward:

I’ve mainly explored full GPT-5. My verdict: it’s just good at stuff. It doesn’t feel like a dramatic leap ahead from other LLMs but it exudes competence—it rarely messes up, and frequently impresses me. I’ve found it to be a very sensible default for everything that I want to do. At no point have I found myself wanting to re-run a prompt against a different model to try and get a better result.

It’s a long technical read but interesting nonetheless.


GPT-5: Key characteristics, pricing and model card

I’ve had preview access to the new GPT-5 model family for the past two weeks (see related video) and have been using GPT-5 as my daily-driver. It’s my new favorite …

simonwillison.net

Jay Hoffman, writing on his excellent website The History of the Web, reflects on Kevin Kelly’s 2005 Wired piece that celebrated the explosive growth of blogging—50 million blogs, one created every two seconds—and predicted a future powered by open participation and user-created content. Kelly was right about the power of audiences becoming creators, but he missed the crucial detail: 2005 would mark the peak of that open web participation before everyone moved into centralized platforms.

There are still a lot of blogs, 600 million by some accounts. But they have been supplanted over the years by social media networks. Commerce on the web has consolidated among fewer and fewer sites. Open source continues to be a major backbone to web technologies, but it is underfunded and powered almost entirely by the generosity of its contributors. Open API’s barely exist. Forums and comment sections are finding it harder and harder to beat back the spam. Users still participate in the web each and every day, but it increasingly feels like they do so in spite of the largest web platforms and sites, not because of them.

My blog—this website—is a direct response to the consolidation. This site and its content are owned and operated by me and not stuck behind a login or paywall to be monetized by Meta, Medium, Substack, or Elon Musk. That is the open web.

Hoffman goes on to say, “The web was created for participation, by its nature and by its design. It can’t be bottled up long.” He concludes with:

Independent journalists who create unique and authentic connections with their readers are now possible. Open social protocols that experts truly struggle to understand, is being powered by a community that talks to each other.

The web is just people. Lots of people, connected across global networks. In 2005, it was the audience that made the web. In 2025, it will be the audience again.


We Are Still the Web

Twenty years ago, Kevin Kelly wrote an absolutely seminal piece for Wired. This week is a great opportunity to look back at it.

thehistoryoftheweb.com
Illustration of diverse designers collaborating around a table with laptops and design materials, rendered in a vibrant style with coral, yellow, and teal colors

Five Practical Strategies for Entry-Level Designers in the AI Era

In Part I of this series on the design talent crisis, I wrote about the struggles recent grads have had finding entry-level design jobs and what might be causing the stranglehold on the design job market. In Part II, I discussed how industry and education need to change in order to ensure the survival of the profession.

Part III: Adaptation Through Action

Like most Gen X kids, I grew up with a lot of freedom to roam. By fifth grade, I was regularly out of the house. My friends and I would go to an arcade in San Francisco’s Fisherman’s Wharf called The Doghouse, where naturally, they served hot dogs alongside their Joust and TRON cabinets. But we would invariably go to the Taco Bell across the street for cheap pre-dinner eats. In seventh grade—this is 1986—I walked by a ComputerLand on Van Ness Avenue and noticed a little beige computer with a built-in black-and-white CRT. The Macintosh screen was actually pale blue and black but, more importantly, it showed MacPaint. It was my first exposure to creating graphics on a computer, which would eventually become my career.

Desktop publishing had officially begun a year earlier with the introduction of Aldus PageMaker and the Apple LaserWriter printer for the Mac, which enabled WYSIWYG (What You See Is What You Get) page layouts and high-quality printed output. A generation of designers who had created layouts using paste-up techniques with tools and materials like X-Acto knives, Rapidograph pens, rubyliths, photostats, and rubber cement had to start learning new skills. Typesetters would eventually be phased out in favor of QuarkXPress. A decade of transition would revolutionize the industry, only to be upended again by the web.

Many designers who made the jump from paste-up to desktop publishing couldn’t make the additional leap to HTML. They stayed graphic designers, and a new generation of web designers emerged. I think those of us in my generation—those who started in the waning days of analog and the early days of DTP—were able to make that transition.

We are in the midst of yet another transition: to AI-augmented design. It’s important to note that it’s so early that no one can say anything with absolute authority. Any so-called experts have been working with AI tools and AI UX patterns for maybe two years, maximum. (Caveat: the science of AI has been around for many decades, but using these new tools and techniques—and developing UX patterns for interacting with them—is entirely new.)

It’s obvious that AI is changing not only the design industry, but nearly all industries. The transformation is having secondary effects on the job market, especially for entry-level talent like young designers.

The AI revolution mirrors the previous shifts in our industry, but with a crucial difference: it’s bigger and faster. Unlike the decade-long transitions from paste-up to desktop publishing and from print to web, AI’s impact is compressing adaptation timelines into months rather than years. For today’s design graduates facing the harsh reality documented in Part I and Part II—where entry-level positions have virtually disappeared and traditional apprenticeship pathways have been severed—understanding this historical context isn’t just academic. It’s reality for them. For some, adaptation is possible but requires deliberate strategy. The designers who will thrive aren’t necessarily those with the most polished portfolios or prestigious degrees, but those who can read the moment, position themselves strategically, and create their own pathways into an industry in tremendous flux.

As a designer who is entering the workforce, here are five practical strategies you can employ right now to increase your odds of landing a job in this market:

  1. Leverage AI literacy as a competitive differentiator
  2. Emphasize strategic thinking and systems thinking
  3. Become a “dangerous generalist”
  4. Explore alternative pathways and flexibility
  5. Connect with community

1. AI Literacy as Competitive Differentiator

Young designer orchestrating multiple AI tools on screens, with floating platform icons representing various AI tools.

Just as Leah Ray, a recent graphic design MFA graduate from CCA, has deeply incorporated AI into her workflow, you have to get comfortable with some of these tools. (See her story in Part II for more context.)

Be proficient in the following categories of AI tools:

  • Chatbot: Choose ChatGPT, Claude, or Gemini. Learn how to write prompts. You can even use the chatbot to learn how to write prompts! Use it as a creative partner to bounce ideas off of and to do some initial research for you.
  • Image generator: Adobe Firefly, DALL-E, Gemini, Midjourney, or Visual Electric. Learn how to use at least one of these, but more importantly, figure out how to get consistently good results from these generators.
  • Website/web app generator: Figma Make, Lovable, or v0. Especially if you’re in an interaction design field, you’ll need to be proficient in an AI prompt-to-code tool.

Add these skills to your resume and LinkedIn profile. Share your experiments on social media. 

But being AI-literate goes beyond just the tools. It’s also about wielding AI as a design material. Here’s the good part: by getting proficient in the tools, you’re also learning about the UX patterns for AI and learning what is possible with AI technologies like LLMs, agents, and diffusion models.

I’ve linked to a number of articles about designing for AI use cases:

Have a basic understanding of the following:

Be sure to add at least one case study in your portfolio that incorporates an AI feature.

2. Strategic Thinking and Systems Thinking

Designer pointing at an interconnected web diagram showing how design decisions create ripple effects through business systems.

Stunts like AI CEOs notwithstanding, companies don’t trust AI enough to cede strategy to it. LLMs are notoriously bad at longer tasks that contain multiple steps. So thinking about strategy and how to create a coherent system are still very much human activities.

Systems thinking—the ability to understand how different parts of a system interact and how changes in one component can create cascading effects throughout the entire system—is becoming essential for tech careers and especially designers. The World Economic Forum’s Future of Jobs Report 2025 identifies it as one of the critical skills alongside AI and big data. 

Modern technology is incredibly interconnected. AI can optimize individual elements, but it can’t see the bigger picture—how a pricing change affects user retention, how a new feature impacts server costs, or why your B2B customers need different onboarding than consumers. 

Early-career lawyers at the firm Macfarlanes are now interpreting complex contracts that used to be reserved for more senior colleagues. While AI can extract key info from contracts and flag potential issues, humans are still needed to understand the context, implications, and strategic considerations. 

Emphasize these skills in your case studies by presenting clear, logical arguments that lead to strategic insights and systemic solutions. Frame every project through a business lens. Show how your design decisions ladder up to company, brand, or product metrics. Include the downstream effects—not just the immediate impact.

3. The “Dangerous Generalist” Advantage

Multi-armed designer like an octopus, each arm holding different design tools including research, strategy, prototypes, and presentations.

Josh Silverman, professor at CCA and also a design coach and recruiter, has an idea he calls the “dangerous generalist.” This is the unicorn designer who can “do the research, the strategy, the prototyping, the visual design, the presentation, and the storytelling; and be a leader and make a measurable impact.” 

It’s a lot, and it’s seemingly unfair to expect all of that from one person, but for a young and hungry designer with the right training and ambition, I think it’s possible. Other than leadership and making a measurable impact, all of those traits would have been practiced and honed in a good design program.

Be sure to have a variety of projects in your portfolio to showcase how you can do it all.

4. Alternative Pathways and Flexibility

Designer navigating a maze of career paths with signposts directing to startups, nonprofits, UI developer, and product manager roles.

Matt Ström-Awn, in an excellent piece about the product design talent crisis published last Thursday, did some research and says that in “over 600 product design listings, only 1% were for internships, and only 5% required 2 years or less of experience.”

Those are dismal numbers for anyone trying to get a full-time job with little design experience, so you have to find creative ways of breaking into the industry. In other words, don’t get stuck applying only for junior-level jobs on LinkedIn. Do that, but do more.

Let’s break this down by type of company and type of role.

Types of Companies

Historically, I would have recommended that any new designer go to an agency first, because agencies usually have the infrastructure to mentor entry-level workers. But as those jobs have dried up, consider these types of companies instead.

  • Early-stage startups: Look for seed-stage or Series A startups. Companies that have just raised their Series A will make a big announcement, so they should be easy to target. Note that you will often be the only designer in the company, so you’ll be doing a lot of learning on the job. If that happens, remember to find community (see below).
  • Non-tech businesses: Legacy industries might be a lot slower to think about AI, much less adopt it. Focus on sectors where human touch, tradition, regulations, or analog processes dominate. These fields need design expertise, especially as many are just starting to modernize and may require digital transformation, improved branding, or modernized UX.
  • Nonprofits: With limited budgets and small teams, nonprofits and not-for-profits can be great places to work. While they tend to try to DIY everything, they also recognize the need for designers. Organizations that invest in design are 50% more likely to see increases in fundraising revenue.

Types of Roles

In his post for UX Collective, Patrick Morgan says, “Sometimes the smartest move isn’t aiming straight for a ‘product designer’ title, but stepping into a role where you can stay close to product and grow into the craft.”

In other words, look for adjacent roles at the company you want to work for, just to get your foot in the door.

Here are some of those roles, including ones from Morgan’s list. What’s appropriate for you will depend heavily on your skill set and the type of design you eventually want to practice.

  • UI developer or front-end engineer: If you’re technically minded, consider front-end roles. These developers are still sought after, though maybe not as much as before because, you know, AI. But if you’re able to snag a spot as one, it’s a way in.
  • Product manager: There is no single path to becoming a product manager. It’s a lot of the same skills a good designer should have, but with even more focus on creating strategies that come from customer insights (aka research). I’ve seen designers move into PM roles pretty easily.
  • Graphic/visual/growth/marketing designer: Again, depending on your design focus, you could already be looking for these jobs. But if you’re in UX and you see one of these roles open up, it’s another way into a company.
  • Production artist: These roles are likely slowly disappearing as well. This is usually a role at an agency or a larger company that just does design execution.
  • Freelancer: You may already be doing this, but you can freelance. Not all companies, especially small ones, can afford a full-time designer, so they rely on freelance help. Try your hand at Upwork to build up your portfolio. Ensure that you’re charging a price that seems fair to you and that will help pay your bills.
  • Executive assistant: While this might seem odd, this is a good way to learn about a company and to show your resourcefulness. Lots of EAs are responsible for putting together events, swag, and more. Eventually, you might be able to parlay this role into a design role.
  • Intern: Internships are rare, I know. And if you haven’t done one, you should. However, ensure that the company complies with local regulations about paying interns. For example, California has strict laws about paying interns at least minimum wage. Unpaid internships are legal only if the role meets a litany of criteria, including:
    ◦ The internship is primarily educational (similar to a school or training program).
    ◦ The intern is the “primary beneficiary,” not the company.
    ◦ The internship does not replace paid employees or provide substantial benefit to the employer.

5. Connecting with Community

Diverse designers in a supportive network circle, connected both in-person and digitally, with glowing threads showing mentorship relationships.

The job search is isolating. Especially now.

Josh Silverman emphasizes something often overlooked: you’re already part of communities. “Consider all the communities you identify with, as well as all the identities that are a part of you,” he points out. Think beyond LinkedIn—way beyond.

Did you volunteer at a design conference? Help a nonprofit with their rebrand? Those connections matter. Silverman suggests reaching out to three to five people—not hiring managers, but people who understand your work. Former classmates who graduated ahead of you. Designers you met at meetups. Workshop leaders.

“Whether it’s a casual coffee chat or slightly more informal informational interview, there are people who would welcome seeing your name pop up on their screen.”

These conversations aren’t always about immediate job leads. They’re about understanding where the industry’s actually heading, which companies are genuinely hiring, and what skills truly matter versus what’s in job descriptions. As Silverman notes, it’s about creating space to listen and articulate what you need—“nurturing relationships in community will have longer-term benefits.”

In practice: Join alumni Slack channels, participate in local AIGA events, contribute to open-source projects, engage in design challenges. The designers landing jobs aren’t just those with perfect portfolios. They’re the ones who stay visible.

The Path Forward Requires Adaptation, Not Despair

My 12-year-old self would be astonished at what the world is today and how this profession has evolved. I’ve been through three revolutions. Traditional to desktop publishing. Print to web. And now, human-only design to AI-augmented design.

Here’s what I know: the designers who survived those transitions weren’t necessarily the most talented. They were the most adaptable. They read the moment, learned the tools, and—crucially—didn’t wait for permission to reinvent themselves.

This transition is different. It’s faster and much more brutal to entry-level designers.

But you have advantages my generation didn’t. AI tools are accessible in ways that PageMaker and HTML never were. We had to learn through books! We learned by copying. We learned by taking weeks to craft projects. You can chat with Lovable and prompt your way to a portfolio-worthy project over a weekend. You can generate production-ready assets with Midjourney before lunch. You can prototype and test five different design directions while your coffee’s still warm.

The traditional path—degree, internship, junior role, slow climb up the ladder—is broken. Maybe permanently. But that also means the floor is being raised. You should be working on more strategic and more meaningful work earlier in your career.

But you need to be dangerous, versatile, and visible. 

The companies that will hire you might not be the ones you dreamed about in design school. The role might not have “designer” in the title. Your first year might be messier than you planned.

That’s OK. Every designer I respect has a messy and unlikely origin story.

The industry will stabilize because it always does. New expectations will emerge, new roles will be created, and yes—companies will realize they still need human designers who understand context, culture, and why that button should definitely not be bright purple.

Until then? Be the designer who ships. Who shows up. Who adapts.

The machines can’t do that. Yet.


I hope you enjoyed this series. I think it’s an important topic to discuss in our industry right now, before it’s too late. Don’t forget to read about the five grads and five educators I interviewed for the series. Please reach out if you have any comments, positive or negative. I’d love to hear them.

Figma is adding new keyboard shortcuts to improve navigation and selection for power users and for keyboard-only users. It’s a win-win that improves accessibility and efficiency. Sarah Kelley, product marketer at Figma, writes:

For millions, navigating digital tools with a keyboard isn’t just about preference for speed and ergonomics—it’s a fundamental need. …

We’re introducing a series of new features that remove barriers for keyboard-only designers across most Figma products. Users can now pan the canvas, insert objects, and make precise selections quickly and easily. And, with improved screen reader support, these actions are read aloud as users work, making it easier to stay oriented.

Nice work!

preview-1754373987228.png

Who Says Design Needs a Mouse?

Figma's new accessibility features bring better keyboard and screen reader support to all creators.

figma.com iconfigma.com

My former colleague from Organic, Christian Haas—now ECD at YouTube—has been experimenting with AI video generation recently. He’s made a trilogy of short films called AI Jobs.


You can watch part one above 👆, but don’t sleep on parts two and three.

Haas put together a “behind the scenes” article explaining his process. It’s fascinating, and I want to play with video generation myself at some point.

I started with a rough script, but that was just the beginning of a conversation. As I started generating images, I was casting my characters and scouting locations in real time. What the model produced would inspire new ideas, and I would rewrite the script on the fly. This iterative loop continued through every stage. Decisions weren’t locked in; they were fluid. A discovery made during the edit could send me right back to “production” to scout a new location, cast a new character and generate a new shot. This flexibility is one of the most powerful aspects of creating with Gen AI.

It’s a wonderful observation Haas has made—the workflow enabled by gen AI allows for more creative freedom. In any creative endeavor where producing the final thing is involved and requires a significant amount of labor and materials—be it a film, commercial photography, or software—planning is a huge part of the process. We work hard to spec out everything before a crew of a hundred shows up on set or a team of highly paid engineers starts coding. With gen AI, as shown here with Google’s Veo 3, you have more room for exploration and expression.

UPDATE: I came across this post from Rory Flynn after I published this. He uses diagrams to direct Veo 3.

preview-1754327232920.jpg

Behind the Prompts — The Making of "AI Jobs"

Christian Haas created the first film with the simple goal of learning to use the tools. He didn’t know if it would yield anything worth watching but that was not the point.

linkedin.com iconlinkedin.com