104 posts tagged with “user experience”

Writing for UX Collective, Filipe Nzongo argues that designers should embrace behavior as a fundamental design material—not to drive metrics or addiction, but to intentionally create products that empower people and foster meaningful, lasting change in their lives.

Behavior should be treated as a design material, just as technology once became our material. If we use behavior thoughtfully, we can create better products. More than that, I believe there is a broader and more meaningful opportunity before us: to design for behavior. Not to make people addicted to products, but to help them grow as human beings, better parents, citizens, students, and professionals. Because if behavior is our medium, then design is our tool for empowerment.

Behavior is our medium

The focus should remain on human

uxdesign.cc

A former colleague of mine, designer Evan Sornstein, wrote a wonderful piece on LinkedIn applying Buddhist principles to design.

Buddhism begins with the recognition that life is marked by impermanence, suffering, and non-self. These aren’t abstract doctrines — they are observations about how the world actually works. Over centuries, these ideas contributed to Japanese aesthetics: wabi-sabi (imperfection), ma (meaningful emptiness), yo no bi (beauty in usefulness), the humility of the shokunin, and the care of omotenashi. What emerges is not a set of rules, but an extraordinary perspective: beauty is inseparable from impermanence; usefulness is inseparable from dignity; care is inseparable from design. In an age when our digital products too often prioritize stickiness and metrics over humanity, these ideas offer a different path. They remind us that design is not about control or cleverness — it’s about connection, trust, and care.

The following eight principles aren’t new “methods” or “laws,” but reflections of this lineage, reframed for product design — though they apply to nearly any creative practice. They are invitations to design with the same attention, humility, and compassion that Buddhism and Japanese aesthetics have carried for centuries.

Designing Emptiness

What Buddhism and Japanese aesthetics teach us about space, meaning, and care in UX It’s been about two years since I first realized I wanted to write this. Looking back, I’ve been on a quiet path for nearly a decade — unknowingly becoming a Buddhist.

linkedin.com

I think these guidelines from Vercel are great. It’s a one-pager, very clearly written for both humans and AI. It reminds me of the old-school MailChimp brand voice guidelines and Apple’s Human Interface Guidelines, which have become reference standards.

Web Interface Guidelines

Guidelines for building great interfaces on the web. Covers interactions, animations, layout, content, forms, performance & design.

vercel.com

There’s a famous quote often attributed to Henry Ford:

If I had asked people what they wanted, they would have said faster horses.

Anton Sten argues that a lot of people use this quote to justify not doing any user (or market) research:

This quote gets thrown around constantly—usually by someone who wants to justify ignoring user research entirely. The logic goes: users don’t know what they want, so why bother asking them?

I think he’s right. The question to ask users isn’t “What should we build?” but “What are your biggest pain points?”

Good research uncovers problems. It reveals pain points. It helps you understand what people are actually struggling with in their daily lives. What they’re working around. What they’ve given up on entirely.

Users aren’t supposed to design your product. That’s your job. But they’re the only ones who can tell you what’s actually broken in their world.

When you focus on understanding problems instead of collecting feature requests, you stop getting “faster horses” and start hearing real needs.

Henry Ford’s horse problem wasn’t about imagination

The famous “faster horses” quote isn’t wrong because users can’t imagine solutions—it’s wrong because it defends lazy research.

antonsten.com

Nielsen Norman Group weighs in on iOS 26 Liquid Glass. Predictably, they don’t like it. Raluca Budiu:

With iOS 26, Apple seems to be leaning harder into visual design and decorative UI effects — but at what cost to usability? At first glance, the system looks fluid and modern. But try to use it, and soon those shimmering surfaces and animated controls start to get in the way.

I get it. Flat—or mostly flat—and static UI conforms to the heuristics. But honestly, it can get boring and homogeneous quickly. Put the NN/g microscope on any video game UI and it’ll be torn to shreds, despite gamers learning to adapt quickly.

I’ve had iOS 26 on my phone for just a couple of weeks. I continue to be delighted by the animations and effects. So far, nothing has hindered usability for me. We’ll see what happens as more and more apps adopt the new design.

Liquid Glass Is Cracked, and Usability Suffers in iOS 26

iOS 26’s visual language obscures content instead of letting it take the spotlight. New (but not always better) design patterns replace established conventions.

nngroup.com

In my most recent post, I called out our design profession for our part in developing these addictive products. Jeffrey Inscho widens the lens to the tech industry at large and observes that these companies are actually publishers:

The executives at these companies will tell you they’re neutral platforms, that they don’t choose what content gets seen. This is a lie. Every algorithmic recommendation is an editorial decision. When YouTube’s algorithm suggests increasingly extreme political content to keep someone watching, that’s editorial. When Facebook’s algorithm amplifies posts that generate angry reactions, that’s editorial. When Twitter’s trending algorithms surface conspiracy theories, that’s editorial.

They are publishers. They have always been publishers. They just don’t want the responsibility that comes with being publishers.

His point is that if these social media platforms are sorting and promoting posts, it’s an editorial approach and they should be treated like newspapers. “It’s like a newspaper publisher claiming they’re not responsible for what appears on their front page because they didn’t write the articles themselves.”

The answer, Inscho argues, is regulation of the algorithms.

Turn Off the Internet

Big tech has built machines designed for one thing: to hold …

staticmade.com

The headline rings true to me because that’s what I look for in designers and how I run my team. The software that we build is too complex and too mission-critical for designers to vibe-code—at least given today’s tooling. But each one of the designers on my team can fill in for a PM when they’re on vacation.

Kai Wong, writing in UX Collective:

One thing I’ve learned, talking with 15 design leaders (and one CEO), is that a ‘designer who codes’ may look appealing, but a ‘designer who understands business’ is far more valuable and more challenging to replace.

You already possess the core skill that makes this transition possible: the ability to understand users with systematic observation and thoughtful questioning.

The only difference, now, is learning to apply that same methodology to understand your business.

Strategic thinking doesn’t require fancy degrees (although it may sometimes help).

Ask strategic questions about business goals. Understand how to balance user and business needs. Frame your design decisions in terms of measurable business impact.

Why many employers want Designers to think like PMs, not Devs

How asking questions, which used to annoy teams, is now critical to UX’s future

uxdesign.cc

As much as I defended the preview, and as much as Apple wants to make Liquid Glass a thing, the new UI is continuing to draw criticism. Dan Moren for Six Colors:

“Glass” is the overall look of these updates, and it’s everywhere. Transparent, frosted, distorting. In some places it looks quite cool, such as in the edge distortion when you’re swiping up on the lock screen. But elsewhere, it seems to me that glass may not be quite the right material for the job. The Glass House might be architecturally impressive, but it’s not particularly practical.

It’s also a definite philosophical choice, and one that’s going to engender some criticism—much of it well-deserved. Apple has argued that it’s about getting controls out of the way, but is that really what’s happening here? It’s hard to argue that having a transparent button sitting right on top of your email is helping that email be more prominent. To take this argument to its logical conclusion, why is the keyboard not fully transparent glass over our content?

I’ve yet to upgrade myself. I will say that everyone dislikes change. Lest we forget, the now-ubiquitous flat design introduced by iOS 7 was also criticized at first.

iOS 26 Review: Through a glass, liquidly

iOS 26! It feels like just last year we were here discussing iOS 18. How time flies. After a year that saw the debut of Apple Intelligence and the subsequent controversy over the features that it d…

sixcolors.com

Jason Spielman put up a case study on his site for his work on Google’s NotebookLM:

The mental model of NotebookLM was built around the creation journey: starting with inputs, moving through conversation, and ending with outputs. Users bring in their sources (documents, notes, references), then interact with them through chat by asking questions, clarifying, and synthesizing before transforming those insights into structured outputs like notes, study guides, and Audio Overviews.

And yes, he includes a sketch he did on the back of a napkin.

I’ve always wondered about the UX of NotebookLM. It’s not typical and, if I’m being honest, not exactly super intuitive. But after a while, it does make sense. Maybe I’m the outlier though, because Spielman’s grandmother found it easy. In an interview last year on Sequoia Capital’s Training Data, he recalls:

I actually do think part of the explosion of audio overviews was the fact it was a simple one click experience. I was on the phone with my grandma trying to explain her how to use it and it actually didn’t take any explanation. I’m like, “Drop in a source.” And she’s like, “Oh! I see. I click this button to generate it.” And I think that the ease of creation is really actually what catalyzed so much explosion. So I think when we think about adding these knobs [for customization] I think we want to do it in a way that’s very intentional.

Designing NotebookLM

Designer, builder, and visual storyteller. Now building Huxe. Previously led design on NotebookLM and contributed to Google AI projects like Gemini and Search. Also shoot photo/video for brands like Coachella, GoPro, and Rivian.

jasonspielman.com

Chatboxes have become the uber-box for all things AI. The chief criticism of this blank box is the cold-start problem: new users don’t know what to type. Designers shipping these products mostly got around it by offering suggested prompts to teach users about the possibilities.

The issue on the other end is that expert users end up creating their own library of prompts to copy and paste into the chatbox for repetitive tasks.

Sharang Sharma, writing in UX Collective, illustrates how these UIs can be smarter by predicting intent:

Contrary, Predictive UX points to an alternate approach. Instead of waiting for users to articulate every step, systems can anticipate intent based on behavior or common patterns as the user types. Apple Reminders suggests likely tasks as you type. Grammarly predicts errors and offers corrections inline. Gmail’s Smart Compose even predicts full phrases, reducing the friction of drafting entirely.

Sharma says that the goal of predictive UX is to “reduce time-to-value and reframe AI as an adaptive partner that anticipates user’s intent as you type.”

Imagine a little widget that appears within the chatbox as you type. Kind of a cool idea.
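
To make the idea concrete, here’s a rough sketch of what could power such a widget: as the user types, match the draft against known intent patterns and surface completions. Everything here (the intent list, the function names) is hypothetical, and a real product would rank candidates with a model and the user’s context rather than simple prefix matching.

```javascript
// Hypothetical sketch of a predictive-intent widget's core loop.
// A real system would rank candidates with a model plus user context;
// simple prefix matching just illustrates the interaction shape.
const INTENTS = [
  "summarize this document",
  "summarize my unread email",
  "draft a reply to this message",
  "translate this into Spanish",
];

function suggestIntents(typed, intents = INTENTS, limit = 3) {
  const query = typed.trim().toLowerCase();
  if (query.length < 3) return []; // too little signal; stay quiet
  return intents.filter((intent) => intent.startsWith(query)).slice(0, limit);
}
```

Typing “summ” would surface both summarize intents; one more word narrows it to one. The point is the feedback loop, not the matching algorithm.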

How can AI UI capture intent?

Exploring contextual prompt patterns that capture user intent as it is typed

uxdesign.cc

Thinking about this morning’s link about web forms, if you abstract why the idea is so powerful, you get to the point of human-computer interaction: the computer should do what the user intends, not just what the buttons they push literally say.

Matt Webb reminds us of DWIM, or Do What I Mean, a philosophy in computing coined by Warren Teitelman in 1966. Webb quotes computer scientist Larry Masinter:

DWIM is an embodiment of the idea that the user is interacting with an agent who attempts to interpret the user’s request from contextual information. Since we want the user to feel that he is conversing with the system, he should not be stopped and forced to correct himself or give additional information in situations where the correction or information is obvious.

Webb goes on to say:

Squint and you can see ChatGPT as a DWIM UI: it never, never, never says “syntax error.”

Now, arguably it should come back and ask for clarifications more often, and in particular DWIM (and AI) interfaces are more successful the more they have access to the user’s context (current situation, history, environment, etc).

But it’s a starting point. The algo is: design for capturing intent and then DWIM; iterate until that works. AI unlocks that.

The destination for AI interfaces is Do What I Mean

Posted on Friday 29 Aug 2025. 840 words, 10 links. By Matt Webb.

interconnected.org

Filling out forms is one of the fundamental things we make users do in software. Whether it’s a login screen, a billing address form, or a mortgage application, forms are the main method for getting data from users into computer-accessible databases. The human decides which piece of information goes into which column of the database. With AI, form filling should be much simpler.

Luke Wroblewski makes the argument:

With Web forms, the burden is on people to adapt to databases. Today’s AI models, however, can flip this requirement. That is, they allow people to provide information in whatever form they like and use AI do the work necessary to put that information into the right structure for a database.

How can it work?

With AgentDB connected to an AI model (via an MCP server), a person can simply say “add this” and provide an image, PDF, audio, video, you name it. The model will use AgentDB’s template to decide what information to extract from this unstructured input and how to format it for the database. In the case where something is missing or incomplete, the model can ask for clarification or use tools (like search) to find possible answers.
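
The clarification step in that flow is easy to picture without any model at all. Here’s a hedged sketch (the schema and field names are made up, and the extraction itself is assumed to have already been done by a model) of checking an extracted record against a database schema to find what to ask about:

```javascript
// Hypothetical sketch: a model has extracted fields from unstructured input
// (say, a photographed receipt); before writing to the database, check the
// record against the schema and surface any gaps as clarifying questions.
const expenseSchema = { required: ["date", "amount", "vendor"] };

function missingFields(record, schema) {
  return schema.required.filter(
    (field) => record[field] === undefined || record[field] === ""
  );
}

// Pretend the model extracted this from a receipt photo:
const extracted = { amount: 42.5, vendor: "Blue Bottle" };
const gaps = missingFields(extracted, expenseSchema);
// gaps contains "date", so the assistant can ask: "What date was this expense?"
```

The structured part of the problem (what’s required, what’s missing) stays deterministic; only the messy extraction is delegated to the model.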

Unstructured Input in AI Apps Instead of Web Forms

Web forms exist to put information from people into databases. The input fields and formatting rules in online forms are there to make sure the information fits...

lukew.com

DOC is a publication from Fabricio Teixeira and Caio Braga that I’ve linked to before. Their latest reflection is on interfaces.

A good user interface is a good conversation.

Interfaces thrive on clarity, responsiveness, and mutual understanding. In a productive dialogue, each party clearly articulates their intentions and receives timely, understandable responses. Just as a good conversationalist anticipates the next question or need, a good interface guides you smoothly through your task. At their core, interfaces translate intent into action. They’re a bridge between what’s in your head and what the product can do.

Reflection is the best word I’ve found to describe these pieces. They’re hype-free, urging us to take a step back, and—at least for me—a reminder about our why.

In the end, interfaces are also a space for self-expression.

The ideal of “no interface” promises ultimate efficiency and direct access—but what do we lose in that pursuit? Perhaps the interface is not just a barrier to be minimized, but a space for human expression. It’s a canvas; a place to imbue a product with personality, visual expression, and a unique form of art.

When we strip that away, or make everything look the same, we lose something important. We trade the unique and the delightful for the purely functional. We sacrifice a vital part of what makes technology human: the thoughtful, and sometimes imperfect, ways we present ourselves to the world.

DOC • Interface

On connection, multi-modality, and self-expression.

doc.cc
Conceptual 3D illustration of stacked digital notebooks with a pen on top, overlaid on colorful computer code patterns.

Why We Still Need a HyperCard for the AI Era

I rewatched the 1982 film TRON for the umpteenth time the other night with my wife. I have always credited this movie as the spark that got me interested in computers. Mind you, I was nine years old when this film came out. I was so excited after watching the movie that I got my father to buy us a home computer—the mighty Atari 400 (note sarcasm). I remember an educational game that came on cassette called “States & Capitals” that taught me, well, the states and their capitals. It also introduced me to BASIC, and after watching TRON, I wanted to write programs!

Vintage advertisement for the Atari 400 home computer, featuring the system with its membrane keyboard and bold headline “Introducing Atari 400.”

The Atari 400’s membrane keyboard was easy to wipe down, but terrible for typing. It also reminded me of fast food restaurant registers of the time.

Back in the early days of computing—the 1960s and ’70s—there was no distinction between users and programmers. Computer users wrote programs to do stuff for them. Hence the close relationship between the two that’s depicted in TRON. The programs in the digital world resembled their creators because they were extensions of them. Tron, the security program that Bruce Boxleitner’s character Alan Bradley wrote, looks like its creator. Clu looks like Kevin Flynn, played by Jeff Bridges. Early in the film, a compound interest program that was captured by the MCP’s goons says to a cellmate, “if I don’t have a User, then who wrote me?”

Scene from the 1982 movie TRON showing programs in glowing blue suits standing in a digital arena.

The programs in TRON looked like their users. Unless the user was the program, which was the case with Kevin Flynn (Jeff Bridges), third from left.

I was listening to a recent interview with Ivan Zhao, CEO and cofounder of Notion, in which he said he and his cofounder were “inspired by the early computing pioneers who in the ’60s and ’70s thought that computing should be more LEGO-like rather than like hard plastic.” Meaning computing should be malleable and configurable. He goes on to say, “That generation of thinkers and pioneers thought about computing kind of like reading and writing.” As in accessible and fundamental so all users can be programmers too.

The 1980s ushered in the personal computer era with the Apple IIe, Commodore 64, TRS-80 (maybe even the Atari 400 and 800), and then the Macintosh. Programs were beginning to be mass-produced and consumed by users, not programmed by them. To be sure, this move made computers much more approachable. But it also meant that users lost a bit of control. They had to wait for Microsoft to add the feature to Word that they wanted.

Of course, we’re coming back to a full circle moment. In 2025, with AI-enabled vibecoding, users are able to spin up little custom apps that do pretty much anything they want them to do. It’s easy, but not trivial. The only interface is the chatbox, so your control is only as good as your prompts and the model’s understanding. And things can go awry pretty quickly if you’re not careful.

What we’re missing is something accessible, but controllable. Something with enough power to allow users to build a lot, but not so much that it requires high technical proficiency to produce something good. In 1987, Apple released HyperCard and shipped it for free with every new Mac. HyperCard, as fans declared at the time, was “programming for the rest of us.”

HyperCard—Programming for the Rest of Us

Black-and-white screenshot of HyperCard’s welcome screen on a classic Macintosh, showing icons for Tour, Help, Practice, New Features, Art Bits, Addresses, Phone Dialer, Graph Maker, QuickTime Tools, and AppleScript utilities.

HyperCard’s welcome screen showed some useful stacks to help the user get started.

Bill Atkinson was the programmer responsible for MacPaint. After the Mac launched, and apparently on an acid trip, Atkinson conceived of HyperCard. As he wrote on the Apple history site Folklore:

Inspired by a mind-expanding LSD journey in 1985, I designed the HyperCard authoring system that enabled non-programmers to make their own interactive media. HyperCard used a metaphor of stacks of cards containing graphics, text, buttons, and links that could take you to another card. The HyperTalk scripting language implemented by Dan Winkler was a gentle introduction to event-based programming.

There were five main concepts in HyperCard: cards, stacks, objects, HyperTalk, and hyperlinks. 

  • Cards were screens or pages. Remember that the Mac’s nine-inch monochrome screen was just 512 pixels by 342 pixels.
  • Stacks were collections of cards, essentially apps.
  • Objects were the UI and layout elements that included buttons, fields, and backgrounds.
  • HyperTalk was the scripting language that read like plain English.
  • Hyperlinks were links from one interactive element like a button to another card or stack.

When I say that HyperTalk read like plain English, I mean it really did. AppleScript and JavaScript are descendants. Here’s a sample logic script:

if the text of field "Password" is "open sesame" then
  go to card "Secret"
else
  answer "Wrong password."
end if

Armed with this kit of parts, users could take this programming “erector set” and build all sorts of banal or wonderful apps. From tracking vinyl records to issuing invoices, or transporting gamers to massive immersive worlds, HyperCard could do it all. The first version of the classic puzzle adventure game Myst was created with HyperCard. It comprised six stacks and 1,355 cards. From Wikipedia:

The original HyperCard Macintosh version of Myst had each Age as a unique HyperCard stack. Navigation was handled by the internal button system and HyperTalk scripts, with image and QuickTime movie display passed off to various plugins; essentially, Myst functions as a series of separate multimedia slides linked together by commands.

Screenshot from the game Myst, showing a 3D-rendered island scene with a ship in a fountain and classical stone columns.

The hit game Myst was built in HyperCard.

For a while, HyperCard was everywhere. Teachers made lesson plans. Hobbyists made games. Artists made interactive stories. In the Eighties and early Nineties, there was a vibrant shareware community: small independent developers who created and shared simple programs for a postcard, a beer, or five dollars. Thousands of HyperCard stacks were distributed on aggregated floppies and CD-ROMs. Steve Sande, writing in Rocket Yard:

At one point, there was a thriving cottage industry of commercial stack authors, and I was one of them. Heizer Software ran what was called the “Stack Exchange”, a place for stack authors to sell their wares. Like Apple with the current app stores, Heizer took a cut of each sale to run the store, but authors could make a pretty good living from the sale of popular stacks. The company sent out printed catalogs with descriptions and screenshots of each stack; you’d order through snail mail, then receive floppies (CDs at a later date) with the stack(s) on them.

Black-and-white screenshot of Heizer Software’s “Stack Exchange” HyperCard catalog, advertising a marketplace for stacks.

Heizer Software’s “Stack Exchange,” a marketplace for HyperCard authors.

From Stacks to Shrink-Wrap

But even as shareware and tiny stacks thrived, the ground beneath this cottage industry was beginning to shift. To move from a niche to one in every household, the computer industry professionalized and commoditized software development, distribution, and sales. By the 1990s, the dominant model was packaged software merchandised on store shelves in slick shrink-wrapped boxes. The packaging was always oversized for the floppy or CD it contained, to maximize shelf presence.

Unlike the users/programmers from the ’60s and ’70s, you didn’t make your own word processor anymore, you bought Microsoft Word. You didn’t build your own paint and retouching program—you purchased Adobe Photoshop. These applications were powerful, polished, and designed for thousands and eventually millions of users. But that meant if you wanted a new feature, you had to wait for the next upgrade cycle—typically a couple of years. If you had an idea, you were constrained by what the developers at Microsoft or Adobe decided was on the roadmap.

The ethos of tinkering gave way to the economics of scale. Software became something you consumed rather than created.

From Shrink-Wrap to SaaS

The 2000s took that shift even further. Instead of floppy disks or CD-ROMs, software moved into the cloud. Gmail replaced the personal mail client. Google Docs replaced the need for a copy of Word on every hard drive. Salesforce, Slack, and Figma turned business software into subscription services you didn’t own, but rented month-to-month.

SaaS has been a massive leap for collaboration and accessibility. Suddenly your documents, projects, and conversations lived everywhere. No more worrying about hard drive crashes or lost phones! But it pulled users even farther away from HyperCard’s spirit. The stack you made was yours; the SaaS you use belongs to someone else’s servers. You can customize workflows, but you don’t own the software.

Why Modern Tools Fall Short

For what started out as a note-taking app, Notion has come a long way. With its kit of parts—pages, databases, tags, etc.—it’s highly configurable for tracking information. But you can’t make games with it. Nor can you really tell interactive stories (sure, you can link pages together). You also can’t distribute what you’ve created and share with the rest of the world. (Yes, you can create and sell Notion templates.)

No mainstream productivity software is malleable in the HyperCard sense.

Animation editor workspace with green apple, selected peach, and peeled banana on stage; timeline left, properties right, asset thumbnails below.

Director let anyone build interactive stories and games without needing to code.

Of course, there are specialized tools for creativity. Unreal Engine and Unity are great for making games. Director and Flash continued the tradition started by HyperCard—at least in the interactive media space—before they were supplanted by more complex HTML5, CSS, and JavaScript. Objectively, these authoring environments are more complex than HyperCard ever was.

The Web’s HyperCard DNA

In a fun remembrance, Constantine Frantzeskos writes:

HyperCard’s core idea was linking cards and information graphically. This was true hypertext before HTML. It’s no surprise that the first web pioneers drew direct inspiration from HyperCard – in fact, HyperCard influenced the creation of HTTP and the Web itself​. The idea of clicking a link to jump to another document? HyperCard had that in 1987 (albeit linking cards, not networked documents). The pointing finger cursor you see when hovering over a web link today? That was borrowed from HyperCard’s navigation cursor​.

Ted Nelson coined the terms “hypertext” and “hyperlink” in the mid-1960s, envisioning a world where digital documents could be linked together in nonlinear “trails”—making information interwoven and easily navigable. Bill Atkinson’s HyperCard was the first mass-market program that popularized this idea, even influencing Tim Berners-Lee, the father of the World Wide Web. Berners-Lee’s invention was about linking documents together on a server and linking to other documents on other servers. A web of documents.

Early ViolaWWW hypermedia browser from 1993, displaying a window with navigation buttons, URL bar, and hypertext description.

Early web browser from 1993, ViolaWWW, directly inspired by the concepts in HyperCard.

Pei-Yuan Wei, developer of one of the first web browsers, ViolaWWW, also drew direct inspiration from HyperCard. Matthew Lasar, writing for Ars Technica:

“HyperCard was very compelling back then, you know graphically, this hyperlink thing,” Wei later recalled. “I got a HyperCard manual and looked at it and just basically took the concepts and implemented them in X-windows,” which is a visual component of UNIX. The resulting browser, Viola, included HyperCard-like components: bookmarks, a history feature, tables, graphics. And, like HyperCard, it could run programs.

And of course, with the built-in source code viewer, browsers brought on a new generation of tinkerers who’d look at HTML and make stuff by copying, tweaking, and experimenting.

The Missing Ingredient: Personal Software

Today, we have low-code and no-code tools like Bubble for making web apps, Framer for building websites, and Zapier for automations. These tools are still aimed at professionals, though. Maybe with the exception of Zapier and IFTTT, they’ve expanded the number of people who can make software (including websites), but they’re not general purpose. These are all adjacent to what HyperCard was.

(Re)enter personal software.

In an essay titled “Personal software,” Lee Robinson wrote, “You wouldn’t search ‘best chrome extensions for note taking’. You would work with AI. In five minutes, you’d have something that works exactly how you want.”

Exploring the idea of “malleable software,” researchers at Ink & Switch wrote:

How can users tweak the existing tools they’ve installed, rather than just making new siloed applications? How can AI-generated tools compose with one another to build up larger workflows over shared data? And how can we let users take more direct, precise control over tweaking their software, without needing to resort to AI coding for even the tiniest change? None of these questions are addressed by products that generate a cloud-hosted application from a prompt.

Of course, AI prompt-to-code tools have been emerging this year, allowing anyone who can type to build web applications. However, if you study these tools more closely—Replit, Lovable, Base44, etc.—you’ll find that the audience is still technical people. Developers, product managers, and designers can understand what’s going on. But not everyday people.

These tools are still missing the ingredients HyperCard had, the ones that put it in the general zeitgeist for a while and enabled users to be programmers again.

They are:

  • Direct manipulation
  • Technical abstraction
  • Local apps

What Today’s Tools Still Miss

Direct Manipulation

As I concluded in my exhaustive AI prompt-to-code tools roundup from April, “We need to be able to directly manipulate components by clicking and modifying shapes on the canvas or changing values in an inspector.” The roundtrip of prompting the model, waiting for it to think and generate code, and then rebuilding the app takes much too long. If you don’t know how to code, every change takes minutes, so building something becomes tedious, not fun.

Tools need to be canvas-first, not chatbox-first. Imagine a kit of UI elements on the left that you can drag onto the canvas and then configure and style, not unlike WordPress page builders.

AI is there to do the work for you if you want, but you don’t need to use it.

Hand-drawn sketch of a modern HyperCard-like interface, with a canvas in the center, object palette on the left, and chat panel on the right.

My sketch of the layout of what a modern HyperCard successor could look like. A directly manipulatable canvas is in the center, object palette on the left, and AI chat panel on the right.

Technical Abstraction

For gen pop, I believe that these tools should hide away all the JavaScript, TypeScript, etc. The thing that the user is building should just work.

Additionally, there’s an argument to be made to bring back HyperTalk or something similar. Here is the same password logic I showed earlier, but in modern-day JavaScript:

const password = document.getElementById("Password").value;

if (password === "open sesame") {
  window.location.href = "secret.html";
} else {
  alert("Wrong password.");
} 

No one is going to understand that, much less write something like it.
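For contrast, here's roughly how the same logic reads in HyperTalk. This is a from-memory sketch, not a verified listing—the card name and exact prompts are assumptions—but it shows how close the language sat to plain English:

```
on mouseUp
  ask "What is the password?"
  if it is "open sesame" then
    go to card "Secret"
  else
    answer "Wrong password."
  end if
end mouseUp
```

Someone who has never programmed can read that aloud and make a good guess at what it does.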

One could argue that the user doesn’t need to understand that code since the AI will write it. Sure, but code is also documentation. If a user is working on an immersive puzzle game, they need to know the algorithm for the solution. 

As a side note, I think flow charts or node-based workflows are great. Unreal Engine’s Blueprints visual scripting is fantastic. Again, AI should be there to assist.

Unreal Engine Blueprints visual scripting interface, with node blocks connected by wires representing game logic.

Unreal Engine has a visual scripting interface called Blueprints, with node blocks connected by wires representing game logic.

Local Apps

HyperCard’s file format was “stacks.” And stacks could be compiled into applications that could be distributed without HyperCard. Today’s cloud-based AI coding tools can all publish a project to a unique URL for sharing. That’s great for prototyping and for personal use, but if you wanted to distribute it as shareware or donation-ware, you’d have to map it to a custom domain name. For everyday users, purchasing a domain from a registrar and dealing with DNS records isn’t straightforward.

What if these web apps could be turned into a single exchangeable file format like “.stack” or some such? Furthermore, what if they could be wrapped into executable apps via Electron?
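As a sketch of what that could look like—entirely hypothetical, since no such format exists—a “.stack” file might simply be an archive with a small manifest that a runtime or an Electron wrapper knows how to open. Every field name below is invented for illustration:

```json
{
  "name": "Study Cards",
  "format": "stack/1.0",
  "entry": "cards/home.html",
  "assets": ["cards/", "scripts/", "media/"],
  "storage": "local",
  "wrapper": {
    "target": "electron",
    "window": { "width": 800, "height": 600 }
  }
}
```

Double-clicking such a file could open it for editing in the HyperCard-like tool, or launch it as a standalone app, much the way a compiled stack once did.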

Rip, Mix, Burn

Lovable, v0, and others already have sharing and remixing built in. This ethos is great and builds on the philosophies of the hippie computer scientists. In addition to fostering a remix culture, I imagine a centralized store for these apps. Of course, those that are published as runtime apps can go through the official Apple and Google stores if they wish. Finally, nothing stops third-party stores, similar to the collections of stacks that used to be distributed on CD-ROMs.

AI as Collaborator, Not Interface

As mentioned, AI should not be the main UI for this. Instead, it’s a collaborator. It’s there if you want it. I imagine that it can help with scaffolding a project just by describing what you want to make. And as it’s shaping your app, it’s also explaining what it’s doing and why so that the user is learning and slowly becoming a programmer too.

Democratizing Programming

When my daughter was in middle school, she used a site called Quizlet to make flash cards to help her study for history tests. There were often user-generated sets of cards for certain subjects, but there were never sets specifically for her class, her teacher, that test. With this HyperCard of the future, she would be able to build something custom in minutes.

Likewise, a small business owner who runs an Etsy shop selling T-shirts can spin up something a little more complicated to analyze sales and compare against overall trends in the marketplace.

And that same Etsy shop owner could sell the little app they made to others wanting the same tool for their stores.

The Future Is Close

Scene from TRON showing a program with raised arms, looking upward at a floating disc in a beam of light.

Tron talks to his user, Alan Bradley, via a communication beam.

In an interview with Garry Tan of Y Combinator in June, Michael Truell, the CEO of Anysphere, which is the company behind Cursor, said his company’s mission is to “replace coding with something that’s much better.” He acknowledged that coding today is really complicated:

Coding requires editing millions of lines of esoteric formal programming languages. It requires doing lots and lots of labor to actually make things show up on the screen that are kind of simple to describe.

Truell believes that in five to ten years, making software will boil down to “defining how you want the software to work and how you want the software to look.”

In my opinion, his timeline is a bit conservative, but maybe he means for professionals. I wonder if something simpler will come along sooner that will capture the imagination of the public, like ChatGPT has. Something that will encourage playing and tinkering like HyperCard did.

There’s a third TRON film coming out soon—TRON: Ares. In a panel discussion in the 5,000-seat Hall H at San Diego Comic-Con earlier this summer, Steven Lisberger, the creator of the franchise, provided this warning about AI: “Let’s kick the technology around artistically before it kicks us around.” While he said it as a warning, I think it’s an opportunity as well.

AI opens up computer “programming” to a much larger swath of people—hell, everyone. As an industry, we should encourage tinkering by building such capabilities into our products. Not UIs on the fly, but mods as necessary. We should build platforms that increase the pool of users from technical people to everyday users like students, high school teachers, and grandmothers. We should imagine a world where software is as personalizable as a notebook—something you can write in, rearrange, and make your own. And maybe users can be programmers once again.

Hard to believe that the Domino’s Pizza tracker debuted in 2008. The moment was ripe for them—about a year after the debut of the iPhone. Mobile e-commerce was in its early days.

Alex Mayyasi for The Hustle:

…the tracker’s creation was spurred by the insight that online orders were more profitable – and made customers more satisfied – than phone or in-person orders. The company’s push to increase digital sales from 20% to 50% of its business led to new ways to order (via a tweet, for example) and then a new way for customers to track their order.

Mayyasi weaves together a tale of business transparency, UI, and content design, tracing—or tracking?—the tracker’s impact on business since then. “The pizza tracker is essentially a progress bar.” But progress bars do so much for the user experience, most of which is setting proper expectations.

How the Domino’s pizza tracker conquered the business world

One cheesy progress update at a time.

thehustle.co

America by Design, Again

President Trump signed an executive order creating America by Design, a national initiative to improve the usability and design of federal services, both digital and physical. The order establishes a National Design Studio inside the White House and appoints Airbnb co-founder and RISD graduate Joe Gebbia as the first Chief Design Officer. The studio’s mandate: cut duplicative design costs, standardize experiences to build trust, and raise the quality of government services. Gebbia said he aims to make the U.S. “the most beautiful, and usable, country in the digital world.”

Ironically, this follows the gutting of the US Digital Service, left like a caterpillar consumed from within by parasitic wasp larvae, when it was turned into DOGE. And as part of the cutting of thousands from the federal workforce, 18F, the pioneering digital services agency that started in 2014, was eliminated.

Ethan Marcotte, the designer who literally wrote the book on responsive design and worked at 18F, had some thoughts. He points out the announcement web page weighs in at over three megabytes. Very heavy for a government page and slow for those in the country unserved by broadband—about 26 million. On top of that, the page is full of typos and is an accessibility nightmare.

In other words, we’re left with a web page announcing a new era of design for the United States government, but it’s tremendously costly to download, and inaccessible to many. What I want to suggest is that neither of these things are accidents: they read to me as signals of intent; of how this administration intends to practice design.

The National Design Studio’s mission is to make using government services as easy as buying from the Apple Store. Marcotte’s insight is that designing for government—at scale for nearly 350 million people—is very different from designing in the private sector. Coordination among agencies can take years.

Despite what this new “studio” would suggest, designing better government services didn’t involve smearing an animated flag and a few nice fonts across a website. It involved months, if not years, of work: establishing a regular cadence of user research and stakeholder interviews; building partnerships across different teams or agencies; working to understand the often vast complexity of the policy and technical problems involved; and much, much more. Judging by their mission statement, this “studio” confuses surface-level aesthetics with the real, substantive work of design.

Here’s the kicker:

There’s a long, brutal history of design under fascism, and specifically in the way aesthetics are used to define a single national identity. Dwell had a good feature on this in June…

The executive order also brought on some saltiness from Christopher Butler, who lays out the irony, or the waste, of it all.

The hubris of this appointment becomes clearer when viewed alongside the recent dismantling of 18F, the federal government’s existing design services office. Less than a year ago, Trump and Elon Musk’s DOGE initiative completely eviscerated this team, which was modeled after the UK’s Government Digital Service and comprised hundreds of design practitioners with deep expertise in government systems. Many of us likely knew someone at 18F. We knew how much value they offered the country. The people in charge didn’t understand what they did and didn’t care.

In other words, we were already doing what Gebbia claims he’ll accomplish in three years. The 18F team had years of experience navigating federal bureaucracy, understanding regulatory constraints, and working within existing governmental structures—precisely the institutional knowledge required for meaningful reform.

Butler knew Joe Gebbia, the appointed Chief Design Officer, in college and calls out his track record in government, or lack thereof.

Full disclosure: I attended college with Joe Gebbia and quickly formed negative impressions of his character that subsequent events have only reinforced.

While personal history colors perspective, the substantive concerns about this appointment stand independently: the mismatch between promised expertise and demonstrated capabilities, the destruction of existing institutional knowledge, the unrealistic timeline claims, and the predictable potential for conflicts of interest.

Government design reform is important work that requires deep expertise, institutional knowledge, and genuine commitment to public service. It deserves leaders with proven track records in complex systems design, not entrepreneurs whose primary experience involves circumventing existing regulations for private gain.

If anything, this is yet another illustration of this administration’s incompetence.

Interesting piece from Vaughn Tan about a critical thinking framework that is disguised as a piece about building better AI UIs for critical thinking. Sorry, that sentence is kind of a tongue-twister. Tan calls out—correctly—that LLMs don’t think, or in his words, can’t make meaning:

Meaningmaking is making inherently subjective decisions about what’s valuable: what’s desirable or undesirable, what’s right or wrong. The machines behind the prompt box are remarkable tools, but they’re not meaningmaking entities.

Therefore when users ask LLMs for their opinions on matters, e.g., as in the therapy use case, the AIs won’t come back with actual thinking. IMHO, it’s semantics, but that’s another post.

Anyhow, Tan shares a pen and paper prototype he’s been testing, which breaks down a major decision into guided steps, or put another way, a framework.

This user experience was designed to simulate a multi-stage process of structured elicitation of various aspects of strongly reasoned arguments. This design explicitly addresses both requirements for good tool use. The structured prompts helped students think critically about what they were actually trying to accomplish with their custom major proposals — the meaningmaking work of determining value, worth, and personal fit. Simultaneously, the framework made clear what kinds of thinking work the students needed to do themselves versus what kinds of information gathering and analysis could potentially be supported by tools like LLMs.

This guided or framework-driven approach was something I attempted with Griffin AI. Via a series of AI-guided prompts to the user—or a glorified form, honestly—my tool helped users build brand strategies. To be sure, a lot of the “thinking” was done by the model, but the idea that an AI can guide you to critically think about your business or your client’s business was there.

Designing AI tools that support critical thinking

Current AI interfaces lull us into thinking we’re talking to something that can make meaningful judgments about what’s valuable. We’re not — we’re using tools that are tremendously powerful but nonetheless can’t do “meaningmaking” work (the work of deciding what matters, what’s worth pursuing).

vaughntan.org

Designer Tey Bannerman writes that when he hears “human in the loop,” he’s reminded of the story of Lieutenant Colonel Stanislav Petrov, a Soviet duty officer who monitored for incoming missile strikes from the US.

12:15 AM… the unthinkable. Every alarm in the facility started screaming. The screens showed five US ballistic missiles, 28 minutes from impact. Confidence level: 100%. Petrov had minutes to decide whether to trigger a chain reaction that would start nuclear war and could very well end civilisation as we knew it.

He was the “human in the loop” in the most literal, terrifying sense.

Everything told him to follow protocol. His training. His commanders. The computers.

But something felt wrong. His intuition, built from years of intelligence work, whispered that this didn’t match what he knew about US strategic thinking.

Against every protocol, against the screaming certainty of technology, he pressed the button marked “false alarm”.

Twenty-three minutes of gripping fear passed before ground radar confirmed: no missiles. The system had mistaken a rare alignment of sunlight on high-altitude clouds for incoming warheads.

His decision to break the loop prevented nuclear war.

Then Bannerman shares an awesome framework he developed that gives humans in the loop of AI systems “genuine authority, time to think, and understanding the bigger picture well enough to question” the system’s decisions. Click through to get the PDF from his site.

Framework diagram by Tey Bannerman titled Beyond ‘human in the loop’. It shows a 4×4 matrix mapping AI oversight approaches based on what is being optimized (speed/volume, quality/accuracy, compliance, innovation) and what’s at stake (irreversible consequences, high-impact failures, recoverable setbacks, low-stakes outcomes). Colored blocks represent four modes: active control, human augmentation, guided automation, and AI autonomy. Right panel gives real-world examples in e-commerce email marketing and recruitment applicant screening.

Redefining ‘human in the loop’

"Human in the loop" is overused and vague. The Petrov story shows humans must have real authority, time, and context to safely override AI. Bannerman offers a framework that asks what you optimize for and what is at stake, then maps 16 practical approaches.

teybannerman.com
Surreal black-and-white artwork of a glowing spiral galaxy dripping paint-like streaks over a city skyline at night.

Why I’m Keeping My Design Title

In the 2011 documentary Jiro Dreams of Sushi, then-85-year-old sushi master Jiro Ono says this about craft:

Once you decide on your occupation… you must immerse yourself in your work. You have to fall in love with your work. Never complain about your job. You must dedicate your life to mastering your skill. That’s the secret of success and is the key to being regarded honorably.

Craft is typically thought of as the formal aspects of any field such as design, woodworking, writing, or cooking. In design, we think about composition, spacing, and typography—being pixel-perfect. But one’s craft is much more than that. Ono’s sushi craft is not solely about slicing fish and pressing it against a bit of rice. It is also about picking the right fish, toasting the nori just so, cooking the rice perfectly, and running a restaurant. It’s the whole thing.

Therefore, mastering design—or any occupation—takes time, experience, or reps as the kids say. So it’s to my dismay that Suff Syed’s essay “Why I’m Giving Up My Design Title — And What That Says About the Future of Design” got so much play in recent weeks. Syed is Head of Product Design at Microsoft—er, was. I guess his title is now Member of the Technical Staff. In a perfectly well-argued and well-written essay, he concludes:

That’s why I’m switching careers. From Head of Product Design to Member of Technical Staff.

This isn’t a farewell to experience, clarity, or elegance. It’s a return to first principles. I want to get closer to the metal—to shape the primitives, models, and agents that will define how tomorrow’s software is built.

We need more people at the intersection. Builders who understand agentic flows and elevated experiences. Designers who can reason about trust boundaries and token windows. Researchers who can make complex systems usable—without dumbing them down to a chat interface.

In the 2,800 words preceding the above quote, Syed lays out a five-point argument: the paradigm for software is changing to agentic AI, design doesn’t drive innovation, fewer design leaders will be needed in the future, the commoditization of design, and the pay gap. The tl;dr being that design as a profession is dead and building with AI is where it’s at. 

With respect to Mr. Syed, I call bullshit. 

Let’s discuss each of his arguments.

The Paradigm Argument

Suff Syed:

The entire traditional role of product designers, creating static UI in Silicon Valley offices that work for billions of users, is becoming increasingly irrelevant; when the Agent can simply generate the UI it needs for every single user.

That’s a very narrow view of what user experience designers do. In this diagram by Dan Saffer from 2008, UX encircles a large swath of disciplines. It’s a little older so it doesn’t cover newer disciplines like service design or AI design.

Diagram titled The Disciplines of UX showing overlapping circles of fields like Industrial Design, Human Factors, Communication Design, and Architecture. The central green overlap highlights Interaction Design, surrounded by related areas such as usability engineering, information architecture, motion design, application design, and human-computer interaction.

Originally made by envis precisely GmbH - www.envis-precisely.com, based on “The Disciplines of UX” by Dan Saffer (2008). (PDF)

I went to design school a long time ago, graduating 1995. But even back then, in Graphic Design 2 class, graphic design wasn’t just print design. Our final project for that semester was to design an exhibit, something that humans could walk through. I’ve long lost the physical model, but my solution was inspired by the Golden Gate Bridge and how I had this impression of the main cables as welcome arms as you drove across the bridge. My exhibit was a 20-foot tall open structure made of copper beams and a glass roof. Etched onto the roof was a poem—by whom I can’t recall—that would cast the shadows of its letters onto the ground, creating an experience for anyone walking through the structure.

Similarly, thoughtful product designers consider the full experience, not just what’s rendered on the screen. How is onboarding? What’s their interaction with customer service? And with techniques like contextual inquiry, we care about the environments users are in. Understanding that nurses in a hospital work in a very busy setting and share computers is an insight that can’t be gleaned from desk research or general knowledge. Designers are students of life and observers of human behavior.

Syed again:

Agents offer a radical alternative by placing control directly into users’ hands. Instead of navigating through endless interfaces, finding a good Airbnb could be as simple as having a conversation with an AI agent. The UI could be generated on the fly, tailored specifically to your preferences; an N:1 model. No more clicking around, no endless tabs, no frustration.

I don’t know. I have my doubts that this is actually going to be the future. While I agree that agentic workflows will be game-changing, I disagree that the chat UI is the only one for all use cases or even most scenarios. I’ve previously discussed the disadvantages of prompting-only workflows and how professionals need more control. 

I also disagree that users will want UIs generated on the fly. Think about the avalanche of support calls and how insane those will be if every user’s interface is different!

In my experience, users—including myself—like to spend the time to set up their software for efficiency. For example, in a dual-monitor setup, I used to expose all of Photoshop’s palettes and put them in the smaller display, and the main canvas on the larger one. Every time I got a new computer or new monitor, I would import that workspace so I could work efficiently. 

Habit and muscle memory are underrated. Once a user has invested the time to arrange panels, tools, and shortcuts the way they like, changing it frequently adds friction. For productivity and work software, consistency often outweighs optimization. Even if a specialized AI-made-for-you workspace could be more “optimal” for a task, switching disrupts the user’s mental model and motor memory.

I want to provide one more example because it’s in the news: consider the backlash that OpenAI has faced in the past week with their rollout of GPT-5. OpenAI assumed people would simply welcome “the next model up,” but what they underestimated was the depth of attachment to existing workflows, and in some cases, to the personas of the models themselves. As Casey Newton put it, “it feels different and stronger than the kinds of attachment people have had to previous kinds of technology.” It’s evidence of how much emotional and cognitive investment users pour into the tools they depend on. You can’t just rip that foundation away without warning. 

Which brings us back to the heart of design: respect for the user. Not just their immediate preferences, but the habits, muscle memory, and yes, relationships that accumulate over time. Agents may generate UIs on the fly, but if they ignore the human need for continuity and control, they’ll stumble into the same backlash OpenAI faced.

The Innovation Argument

Syed’s second argument is that design supports innovation rather than drive it. I half agree with this. If we’re talking about patents or inventions, sure. Technology will always win the day. But design can certainly drive innovation.

He cites Airbnb, Figma, Notion, and Linear as being “incredible companies with design founders,” but only Airbnb is a Fortune 500 company. 

While they weren’t founded by designers, I don’t think anyone would argue that Apple, Nike, Tesla, and Disney aren’t design-led and innovative. All are in the Fortune 500. Disney treats experience design, which includes its parks, media, and consumer products, as a core capability. Imagineering is a literal design R&D division that shapes the company’s most profitable experiences. Look up Lanny Smoot.

Early prototypes of the iPhone featuring the first multitouch screens were actually tablet-sized. But Apple’s industrial design team, led by Jony Ive, along with the hardware engineering team, got the form factor to fit nicely in one hand. And it was Bas Ording, the UI designer behind Mac OS X’s Aqua design language, who prototyped the inertial effects. Farhad Manjoo, writing in Slate in 2012:

Jonathan Ive, Apple’s chief designer, had been investigating a technology that he thought could do wonderful things someday—a touch display that could understand taps from multiple fingers at once. (Note that Apple did not invent multitouch interfaces; it was one of several companies investigating the technology at the time.) According to Isaacson’s biography, the company’s initial plan was to the use the new touch system to build a tablet computer. Apple’s tablet project began in 2003—seven years before the iPad went on sale—but as it progressed, it dawned on executives that multitouch might work on phones. At one meeting in 2004, Jobs and his team looked a prototype tablet that displayed a list of contacts. “You could tap on the contact and it would slide over and show you the information,” Forstall testified. “It was just amazing.”

Jobs himself was particularly taken by two features that Bas Ording, a talented user-interface designer, had built into the tablet prototype. One was “inertial scrolling”—when you flick at a list of items on the screen, the list moves as a function of how fast you swipe, and then it comes to rest slowly, as if being affected by real-world inertia. Another was the “rubber-band effect,” which causes a list to bounce against the edge of the screen when there were no more items to display. When Jobs saw the prototype, he thought, “My god, we can build a phone out of this,” he told the D Conference in 2010.

The Leadership Argument

Suff Syed’s third argument is about what it means to be a design leader. He says, “scaling your impact as a designer meant scaling the surfaces you influence.” As you rose up through the ranks, “your craft was increasingly displaced by coordination. You became a negotiator, a timeline manager, a translator of ambition through Product and Engineering partnerships.”

Instead, he argues, because AI can build with fewer people—well, you only need one person: “You need two people: one who understands systems and one who understands the user. Better if they’re the same person.”

That doesn’t scale. Don’t tell me that Microsoft, a company with $281 billion in revenue and 228,000 employees, will shrink like a stellar collapse into a single person with an army of AIs. That’s magical thinking.

Leaders are still needed. Influence and coordination are still needed. Humans will still be needed.

He ends this argument with:

This new world despises a calendar full of reviews, design crits, review meetings, and 1:1s. It emphasizes a repo with commits that matter. And promises the joy of shipping to return to your work. That joy unmediated by PowerPoint, politics, or process. That’s not a demotion. That’s liberation.

So he wants us all to sit in our home offices and not collaborate with others? Innovation no longer comes from lone geniuses. It’s born from bouncing ideas off your coworkers and everyone building on each other’s ideas.

Friction in the process can actually make things better. Pixar famously has a council known as the Braintrust—a small, rotating group of the studio’s best storytellers who meet regularly to tear down and rebuild works-in-progress. The rules are simple: no mandatory fixes, no sugarcoating, and no egos. The point is to push the director to see the story’s problems more clearly—and to own the solution. One of the most famous saves came with Toy Story 2. Originally destined for direct-to-video release, early cuts were so flat that the Braintrust urged the team to start from scratch. Nine frantic months later, the film emerged as one of Pixar’s most beloved works, proof that constructive creative friction can turn a near-disaster into a classic.

The Distribution Argument

Design taste has been democratized and is table stakes, says Syed in his next argument.

There was a time when every new Y Combinator startup looked like someone tortured an intern into generating a logo using Clipart. Today, thanks to a generation of exposure to good design—and better tools—most founders have internalized the basics of aesthetic judgment. First impressions matter, and now, they’re trivial to get right.

And that templates, libraries, and frameworks make it super easy and quick to spin up something tasteful in minutes:

Component libraries like Tailwind, shadcn/ui, and Radix have collapsed the design stack. What once required a full design team handcrafting a system in Figma, exporting specs to Storybook, and obsessively QA-ing the front-end… now takes a few lines of code. Spin up a repo. Drop in some components. Tweak the palette. Ship something that looks eerily close to Linear or Notion in a weekend.

I’m starting to think that Suff Syed believes that designers are just painters or something. Wow. This whole argument is reductive, flattening our role to be only about aesthetics. See above for how much design actually entails.

The Wealth Argument

“Nobody is paying Designers $10M, let alone $100M anytime soon.” Ah, I think this is him saying the quiet part out loud. Mr. Syed is dropping his design title and becoming a “member of the technical staff” because he’s chasing the money.

He’s right. No one is going to pay a designer $100 million total comp package. Unless you’re Jony Ive and part of io, which OpenAI acquired for $6.5 billion back in May. Which is a rare and likely once-ever occurrence.

In a recent episode of Hard Fork, The New York Times tech columnist Kevin Roose said:

The scale of money and investment going into these AI systems is unlike anything we’ve ever seen before in the tech industry. …I heard a rumor there was a big company that wasted a billion dollars or more on a failed training run. And then you start to think, oh, I understand why, to a company like Meta, the right AI talent is worth a hundred million dollars, because that level of expertise doesn’t exist that widely outside of this very small group of people. And if this person does their job well, they can save your company something more like a billion dollars. And maybe that means that you should pay them a hundred million dollars.

“Very small group of people” likely means just a couple dozen people in the world who have this expertise and are worth tens of millions of dollars.

Syed again:

People are getting generationally wealthy inventing new agentic abstractions, compressing inference cycles, and scaling frontier models safely. That’s where the gravity is. That’s where anybody should aspire to be. With AI enabling and augmenting you as an individual, there’s a far more compelling reason to chase this frontier. No reason not to.

People also get generationally wealthy by hitting the startup lottery. But it’s a hard road and there’s a lot of luck involved.

The current AI frenzy feels a lot like 1849 in California. Back then, roughly 300,000 people flooded the Sierra Nevada mountains hoping to strike gold, but the math was brutal: maybe 10% made any profit at all, the top 4% earned enough to brag a little, and only about 1% became truly rich. The rest? They left with sore backs, empty pockets, and I guess some good stories. 

Back to Reality

AI is already changing the software industry. As designers and builders of software, we are going to be using AI as material. This is as obvious as when the App Store on iPhone debuted and everyone needed to build apps.

Suff Syed wrote his piece as part personal decision-making journey and part rallying cry to other designers. He is essentially switching careers and says that it won’t be easy.

This transition isn’t about abandoning one identity for another. It’s about evolving—unlearning what no longer serves us and embracing the disciplines that will shape the future. There’s a new skill tree ahead: model internals, agent architectures, memory hierarchies, prompt flows, evaluation loops, and infrastructure that determines how products think, behave, and scale.

Best of luck to Suff Syed on his journey. I hope he strikes AI gold. 

As for me, I aim to continue on my journey of being a shokunin, or craftsman, like Jiro Ono. For over 30 years—if you count my amateur days in front of the Mac in middle school—I’ve been designing. Not just pushing pixels in Photoshop or Figma, but doing the work of understanding audiences and users, solving business problems, inventing new interaction patterns, and advocating for usability. All in the service of the user, and all while honing my craft.

That craft isn’t tied to a technology stack or a job title. It’s a discipline, a mindset, and a lifetime’s work. Being a designer is my life. 

So no, I’m not giving up my design title. It’s not a relic—it’s a commitment. And in a world chasing the next gold rush, I’d rather keep making work worth coming back to, knowing that in the end, gold fades but mastery endures. Besides, if I ever do get rich, it’ll be because I designed something great, not because I happened to be standing near a gold mine.

As a follow-up to yesterday’s item on how Google’s AI overviews are curtailing traffic to websites by as much as 25%, here is a link to Nielsen Norman Group’s just-published study showing that generative AI is reshaping search.

Kate Moran, Maria Rosala and Josh Brown:

While AI offers compelling shortcuts around tedious research tasks, it isn’t close to completely replacing traditional search. But, even when people are using traditional search, the AI-generated overview that now tops almost all search-results pages steals a significant amount of attention and often shortcuts the need to visit the actual pages.

They write that users have developed a way to search over the years, skipping sponsored results and heading straight for the organic links. Users also haven’t completely broken free of traditional Google Search, now adding chatbots to the mix:

While generative AI does offer enough value to change user behaviors, it has not replaced traditional search entirely. Traditional search and AI chats were often used in tandem to explore the same topic and were sometimes used to fact-check each other.

All our participants engaged in traditional search (using keywords, evaluating results pages, visiting content pages, etc.) multiple times in the study. Nobody relied entirely on genAI’s responses (in chat or in an AI overview) for all their information-seeking needs.

In many ways, I think this is smart. Unless “web search” is happening, I tend to double-check ChatGPT and Claude, especially for anything historical or mission-critical. I also like Perplexity for that reason: it shows me its receipts by giving me sources.


How AI Is Changing Search Behaviors

Our study shows that generative AI is reshaping search, but long-standing habits persist. Many users still default to Google, giving Gemini a fighting chance.

nngroup.com

I enjoyed this interview with Notion’s CEO, Ivan Zhao over at the Decoder podcast, with substitute host, Casey Newton. What I didn’t quite get when I first used Notion was the “LEGO” aspect of it. Their vision is to build business software that is highly malleable and configurable to do all sorts of things. Here’s Zhao:

Well, because it didn’t quite exist with software. If you think about the last 15 years of [software-as-a-service], it’s largely people building vertical point solutions. For each buyer, for each point, that solution sort of makes sense. The way we describe it is that it’s like a hard plastic solution for your problem, but once you have 20 different hard plastic solutions, they sort of don’t fit well together. You cannot tinker with them. As an end user, you have to jump between half a dozen of them each day.

That’s not quite right, and we’re also inspired by the early computing pioneers who in the ‘60s and ‘70s thought that computing should be more LEGO-like rather than like hard plastic. That’s what got me started working on Notion a long time ago, when I was reading a computer science paper back in college.

From a user experience POV, Notion is both simple and exceedingly complicated. Taking notes is easy. Building the system for a workflow, not so much.

In the second half, Newton (gently) presses Zhao on the impact of AI on the workforce and how productivity software like Notion could replace headcount.

Newton: Do you think that AI and Notion will get to a point where executives will hire fewer people, because Notion will do it for them? Or are you more focused on just helping people do their existing jobs?

Zhao: We’re actually putting out a campaign about this, in the coming weeks or months. We want to push out a more amplifying, positive message about what Notion can do for you. So, imagine the billboard we’re putting out. It’s you in the center. Then, with a tool like Notion or other AI tools, you can have AI teammates. Imagine that you and I start a company. We’re two co-founders, we sign up for Notion, and all of a sudden, we’re supplemented by other AI teammates, some taking notes for us, some triaging, some doing research while we’re sleeping.

Zhao dodges the “hire fewer people” part of the question and instead, answers with “amplifying” people or making them more productive.


Notion CEO Ivan Zhao wants you to demand better from your tools

Notion’s Ivan Zhao on AI agents, productivity, and how software will change in the future.

theverge.com

Ben Davies-Romano argues that the AI chat box is our new design interface:

Every interaction with a large language model starts the same way: a blinking cursor in a blank text field. That unassuming box is more than an input — it’s the interface between our human intent and the model’s vast, probabilistic brain.

This is where the translation happens. We pour in the nuance, constraints, and context of our ideas; the model converts them into an output. Whether it’s generating words, an image, a video sequence, or an interactive prototype, every request passes through this narrow bridge.

It’s the highest-stakes, lowest-fidelity design surface I’ve ever worked with: a single field that stands between human creativity and an engine capable of reshaping it into almost any form, albeit with all the necessary guidance and expertise applied.

In other words, don’t just say “Make it better,” but guide the AI instead.

That’s why a vague, lazy prompt, like “make it better”, is the design equivalent of telling a junior designer “make it intuitive” and walking away. You’ll get something generic, safe, and soulless, not because the AI “missed the brief,” but because there was no brief.

Without clear stakes, a defined brand voice, and rich context, the system will fill in the blanks with its default, most average response. And “average” is rarely what design is aiming for.

And he makes a point that designers should be leading the charge on showing others what generative AI can do:

In the age of AI, it shouldn’t be everyone designing, per se. It should be designers using AI as an extension of our craft. Bringing our empathy, our user focus, our discipline of iteration, and our instinct for when to stop generating and start refining. AI is not a replacement for that process; it’s a multiplier when guided by skilled hands.

So, let’s lead. Let’s show that the real power of AI isn’t in what it can generate, but in how we guide it — making it safer, sharper, and more human. Let’s replace the fear and the gimmicks with clarity, empathy, and intentionality.

The blank prompt is our new canvas. And friends, we need to be all over it.


Prompting is designing. And designers need to lead.

Forget “prompt hacks.” Designers have the skills to turn AI from a gimmick into a powerful, human-centred tool if we take the lead.

medium.com

Christopher K. Wong argues that desirability is a key part of design that helps decide which features users really want:

To give a basic definition, desirability is a strategic part of UX that revolves around a single user question: Have you defined (and solved) the right problem for users?

In other words, before drawing a single box or arrow, have you done your research and discovery to know you’re solving a pain point?

The way the post is written makes it hard to get at a succinct definition, but here’s my take. Desirability is about ensuring a product or feature is truly wanted, needed, and chosen by users—not just visual appeal—making it a core pillar for impactful design decisions and prioritization. And designers should own this.


Want to have a strategic design voice at work? Talk about desirability

Desirability isn’t just about visual appeal: it’s one of the most important user factors

dataanddesign.substack.com
Illustration of diverse designers collaborating around a table with laptops and design materials, rendered in a vibrant style with coral, yellow, and teal colors

Five Practical Strategies for Entry-Level Designers in the AI Era

In Part I of this series on the design talent crisis, I wrote about the struggles recent grads have had finding entry-level design jobs and what might be causing the stranglehold on the design job market. In Part II, I discussed how industry and education need to change in order to ensure the survival of the profession.

Part III: Adaptation Through Action

Like most Gen X kids, I grew up with a lot of freedom to roam. By fifth grade, I was regularly out of the house. My friends and I would go to an arcade in San Francisco’s Fisherman’s Wharf called The Doghouse, where naturally, they served hot dogs alongside their Joust and TRON cabinets. But we would invariably go to the Taco Bell across the street for cheap pre-dinner eats. In seventh grade—this is 1986—I walked by a ComputerLand on Van Ness Avenue and noticed a little beige computer with a built-in black and white CRT. The Macintosh screen was actually pale blue and black, but more importantly, showed MacPaint. It was my first exposure to creating graphics on a computer, which would eventually become my career.

Desktop publishing had officially begun a year earlier with the introduction of Aldus PageMaker and the Apple LaserWriter printer for the Mac, which enabled WYSIWYG (What You See Is What You Get) page layouts and high-quality printed output. A generation of designers who had created layouts using paste-up techniques with tools and materials like X-Acto knives, Rapidograph pens, rubyliths, photostats, and rubber cement had to start learning new skills. Typesetters would eventually be phased out in favor of QuarkXPress. A decade of transition would revolutionize the industry, only to be upended again by the web.

Many designers who made the jump from paste-up to desktop publishing couldn’t make the additional leap to HTML. They stayed graphic designers and a new generation of web designers emerged. I think those who were in my generation—those who started in the waning days of analog and the early days of DTP—were able to make that transition.

We are in the midst of yet another transition: to AI-augmented design. It’s important to note that it’s so early that no one can say anything with absolute authority. Any so-called experts have been working with AI tools and AI UX patterns for maybe two years, maximum. (Caveat: the science of AI has been around for many decades, but using these new tools and techniques, and developing UX patterns for interacting with them, is still new.)

It’s obvious that AI is changing not only the design industry, but nearly all industries. The transformation is having secondary effects on the job market, especially for entry-level talent like young designers.

The AI revolution mirrors the previous shifts in our industry, but with a crucial difference: it’s bigger and faster. Unlike the decade-long transitions from paste-up to desktop publishing and from print to web, AI’s impact is compressing adaptation timelines into months rather than years. For today’s design graduates facing the harsh reality documented in Part I and Part II—where entry-level positions have virtually disappeared and traditional apprenticeship pathways have been severed—understanding this historical context isn’t just academic. It’s reality for them. For some, adaptation is possible but requires deliberate strategy. The designers who will thrive aren’t necessarily those with the most polished portfolios or prestigious degrees, but those who can read the moment, position themselves strategically, and create their own pathways into an industry in tremendous flux.

As a designer who is entering the workforce, here are five practical strategies you can employ right now to increase your odds of landing a job in this market:

  1. Leverage AI literacy as a competitive differentiator
  2. Emphasize strategic thinking and systems thinking
  3. Become a “dangerous generalist”
  4. Explore alternative pathways and flexibility
  5. Connect with community

1. AI Literacy as Competitive Differentiator

Young designer orchestrating multiple AI tools on screens, with floating platform icons representing various AI tools.

Just like how Leah Ray, a recent graphic design MFA graduate from CCA, has deeply incorporated AI into her workflow, you have to get comfortable with some of the tools. (See her story in Part II for more context.)

Be proficient in the following categories of AI tools:

  • Chatbot: Choose ChatGPT, Claude, or Gemini. Learn about how to write prompts. You can even use the chatbot to learn how to write prompts! Use it as a creative partner to bounce ideas off of and to do some initial research for you.
  • Image generator: Adobe Firefly, DALL-E, Gemini, Midjourney, or Visual Electric. Learn how to use at least one of these, but more importantly, figure out how to get consistently good results from these generators.
  • Website/web app generator: Figma Make, Lovable, or v0. Especially if you’re in an interaction design field, you’ll need to be proficient in an AI prompt-to-code tool.

Add these skills to your resume and LinkedIn profile. Share your experiments on social media. 

But being AI-literate goes beyond just the tools. It’s also about wielding AI as a design material. Here’s the good part: by getting proficient in the tools, you’re also learning about the UX patterns for AI and learning what is possible with AI technologies like LLMs, agents, and diffusion models.

I’ve linked to a number of articles about designing for AI use cases:

Have a basic understanding of the following:

Be sure to add at least one case study in your portfolio that incorporates an AI feature.

2. Strategic Thinking and Systems Thinking

Designer pointing at an interconnected web diagram showing how design decisions create ripple effects through business systems.

Stunts like AI CEOs notwithstanding, companies don’t trust AI enough to cede strategy to it. LLMs are notoriously bad at longer tasks that contain multiple steps. So thinking about strategy and how to create a coherent system are still very much human activities.

Systems thinking—the ability to understand how different parts of a system interact and how changes in one component can create cascading effects throughout the entire system—is becoming essential for tech careers and especially designers. The World Economic Forum’s Future of Jobs Report 2025 identifies it as one of the critical skills alongside AI and big data. 

Modern technology is incredibly interconnected. AI can optimize individual elements, but it can’t see the bigger picture—how a pricing change affects user retention, how a new feature impacts server costs, or why your B2B customers need different onboarding than consumers. 

Early-career lawyers at the firm Macfarlanes are now interpreting complex contracts that used to be reserved for more senior colleagues. While AI can extract key info from contracts and flag potential issues, humans are still needed to understand the context, implications, and strategic considerations. 

Emphasize these skills in your case studies by presenting clear, logical arguments that lead to strategic insights and systemic solutions. Frame every project through a business lens. Show how your design decisions ladder up to company, brand, or product metrics. Include the downstream effects—not just the immediate impact.

3. The “Dangerous Generalist” Advantage

Multi-armed designer like an octopus, each arm holding different design tools including research, strategy, prototypes, and presentations.

Josh Silverman, professor at CCA and also a design coach and recruiter, has an idea he calls the “dangerous generalist.” This is the unicorn designer who can “do the research, the strategy, the prototyping, the visual design, the presentation, and the storytelling; and be a leader and make a measurable impact.” 

It’s a lot and seemingly unfair to expect that out of one person, but for a young and hungry designer with the right training and ambition, I think it’s possible. Other than leadership and making quantitative impact, all of those traits would have been practiced and honed at a good design program. 

Be sure to have a variety of projects in your portfolio to showcase how you can do it all.

4. Alternative Pathways and Flexibility

Designer navigating a maze of career paths with signposts directing to startups, nonprofits, UI developer, and product manager roles.

Matt Ström-Awn, in an excellent piece about the product design talent crisis published last Thursday, did some research and says that in “over 600 product design listings, only 1% were for internships, and only 5% required 2 years or less of experience.”

Those are some dismal numbers for anyone trying to get a full-time job with little design experience. So you have to try creative ways of breaking into the industry. In other words, don’t get stuck on only applying for junior-level jobs on LinkedIn. Do that, but do more.

Let’s break this down by type of company and type of role.

Types of Companies

Historically, I would have recommended that any new designer go to an agency first because agencies usually have the infrastructure to mentor entry-level workers. But, as those jobs have dried up, consider these types of companies.

  • Early-stage startups: Look for seed-stage or Series A startups. Companies who have just raised their Series A will make a big announcement, so they should be easy to target. Note that you will often be the only designer in the company, so you’ll be doing a lot of learning on the job. If this happens, remember to find community (see below).
  • Non-tech businesses: Legacy industries might be a lot slower to think about AI, much less adopt it. Focus on sectors where human touch, tradition, regulations, or analog processes dominate. These fields need design expertise, especially as many are just starting to modernize and may require digital transformation, improved branding, or modernized UX.
  • Nonprofits: With limited budgets and small teams, nonprofits and not-for-profits could be great places to work. While they tend to try to DIY everything, they will also recognize the need for designers. Organizations that invest in design are 50% more likely to see increases in fundraising revenue.

Types of Roles

In his post for UX Collective, Patrick Morgan says, “Sometimes the smartest move isn’t aiming straight for a ‘product designer’ title, but stepping into a role where you can stay close to product and grow into the craft.”

In other words, look for adjacent roles at the company you want to work for, just to get your foot in the door.

Here are some of those roles, including ones from Morgan’s list. What is appropriate for you will depend heavily on your skill sets and the type of design you want to eventually practice.

  • UI developer or front-end engineer: If you’re technically-minded, front-end developers are still sought after, though maybe not as much as before because, you know, AI. But if you’re able to snag a spot as one, it’s a way in.
  • Product manager: There is no single path to becoming a product manager. It’s a lot of the same skills a good designer should have, but with even more focus on creating strategies that come from customer insights (aka research). I’ve seen designers move into PM roles pretty easily.
  • Graphic/visual/growth/marketing designer: Again, depending on your design focus, you could already be looking for these jobs. But if you’re in UX and you see one of these roles open up, it’s another way into a company.
  • Production artist: These roles are likely slowly disappearing as well. This is usually a role at an agency or a larger company that just does design execution.
  • Freelancer: You may already be doing this, but you can freelance. Not all companies, especially small ones, can afford a full-time designer. So they rely on freelance help. Try your hand at Upwork to build up your portfolio. Ensure that you’re charging a price that seems fair to you and that will help pay your bills.
  • Executive assistant: While this might seem odd, this is a good way to learn about a company and to show your resourcefulness. Lots of EAs are responsible for putting together events, swag, and more. Eventually, you might be able to parlay this role into a design role.
  • Intern: Internships are rare, I know. And if you haven’t done one, you should. However, ensure that the company complies with local regulations about paying interns. For example, California has strict laws about paying interns at least minimum wage. Unpaid internships are legal only if the role meets a litany of criteria including:
      • The internship is primarily educational (similar to a school or training program).
      • The intern is the “primary beneficiary,” not the company.
      • The internship does not replace paid employees or provide substantial benefit to the employer.

5. Connecting with Community

Diverse designers in a supportive network circle, connected both in-person and digitally, with glowing threads showing mentorship relationships.

The job search is isolating. Especially now.

Josh Silverman emphasizes something often overlooked: you’re already part of communities. “Consider all the communities you identify with, as well as all the identities that are a part of you,” he points out. Think beyond LinkedIn—way beyond.

Did you volunteer at a design conference? Help a nonprofit with their rebrand? Those connections matter. Silverman suggests reaching out to three to five people—not hiring managers, but people who understand your work. Former classmates who graduated ahead of you. Designers you met at meetups. Workshop leaders.

“Whether it’s a casual coffee chat or slightly more informal informational interview, there are people who would welcome seeing your name pop up on their screen.”

These conversations aren’t always about immediate job leads. They’re about understanding where the industry’s actually heading, which companies are genuinely hiring, and what skills truly matter versus what’s in job descriptions. As Silverman notes, it’s about creating space to listen and articulate what you need—“nurturing relationships in community will have longer-term benefits.”

In practice: Join alumni Slack channels, participate in local AIGA events, contribute to open-source projects, engage in design challenges. The designers landing jobs aren’t just those with perfect portfolios. They’re the ones who stay visible.

The Path Forward Requires Adaptation, Not Despair

My 12-year-old self would be astonished at what the world is today and how this profession has evolved. I’ve been through three revolutions. Traditional to desktop publishing. Print to web. And now, human-only design to AI-augmented design.

Here’s what I know: the designers who survived those transitions weren’t necessarily the most talented. They were the most adaptable. They read the moment, learned the tools, and—crucially—didn’t wait for permission to reinvent themselves.

This transition is different. It’s faster and much more brutal to entry-level designers.

But you have advantages my generation didn’t. AI tools are accessible in ways that PageMaker and HTML never were. We had to learn through books! We learned by copying. We learned by taking weeks to craft projects. You can chat with Lovable and prompt your way to a portfolio-worthy project over a weekend. You can generate production-ready assets with Midjourney before lunch. You can prototype and test five different design directions while your coffee’s still warm.

The traditional path—degree, internship, junior role, slow climb up the ladder—is broken. Maybe permanently. But that also means the floor is being raised. You should be working on more strategic and more meaningful work earlier in your career.

But you need to be dangerous, versatile, and visible. 

The companies that will hire you might not be the ones you dreamed about in design school. The role might not have “designer” in the title. Your first year might be messier than you planned.

That’s OK. Every designer I respect has a messy and unlikely origin story.

The industry will stabilize because it always does. New expectations will emerge, new roles will be created, and yes—companies will realize they still need human designers who understand context, culture, and why that button should definitely not be bright purple.

Until then? Be the designer who ships. Who shows up. Who adapts.

The machines can’t do that. Yet.


I hope you enjoyed this series. I think it’s an important topic to discuss in our industry right now, before it’s too late. Don’t forget to read about the five grads and five educators I interviewed for the series. Please reach out if you have any comments, positive or negative. I’d love to hear them.

Portraits of five recent design graduates. From top left to right: Ashton Landis, wearing a black sleeveless top with long blonde hair against a dark background; Erika Kim, outdoors in front of a mountain at sunset, smiling in a fleece-collared jacket; Emma Haines, smiling and looking over her shoulder in a light blazer, outdoors; Bottom row, left to right: Leah Ray, in a black-and-white portrait wearing a black turtleneck, looking ahead, Benedict Allen, smiling in a black jacket with layered necklaces against a light background

Meet the 5 Recent Design Grads and 5 Design Educators

For my series on the Design Talent Crisis (see Part I, Part II, and Part III) I interviewed five recent graduates from California College of the Arts (CCA) and San Diego City College. I’m an alum of CCA and I used to teach at SDCC. There’s a mix of folks from both the graphic design and interaction design disciplines.

Meet the Grads

If these enthusiastic and immensely talented designers are available and you’re in a position to hire, please reach out to them!

Benedict Allen

Benedict Allen, smiling in a black jacket with layered necklaces against a light background

Benedict Allen is a Los Angeles-based visual designer who specializes in creating compelling visual identities at the intersection of design, culture, and storytelling. With a strong background in apparel graphics and branding, Benedict brings experience from his freelance work for The Hunt and Company—designing for a major automotive YouTuber’s clothing line—and an internship at Pureboost Energy Drink Mix. He is skilled in a range of creative tools including Photoshop, Illustrator, Figma, and AI image generation. Benedict’s approach is rooted in history and narrative, resulting in clever and resonant design solutions. He holds an Associate of Arts in Graphic Design from San Diego City College and has contributed to the design community through volunteer work with AIGA San Diego Tijuana.

Emma Haines

Emma Haines, smiling and looking over her shoulder in a light blazer, outdoors

Emma Haines is a UX and interaction designer with a background in computer science, currently completing her MDes in Interaction Design at California College of the Arts. She brings technical expertise and a passion for human-centered design to her work, with hands-on experience in integrating AI into both the design process and user-facing projects. Emma has held roles at Mphasis, Blink UX, and Colorado State University, and is now seeking full-time opportunities where she can apply her skills in UX, UI, or product design, particularly in collaborative, fast-paced environments.

Erika Kim

Erika Kim, outdoors in front of a mountain at sunset, smiling in a fleece-collared jacket

Erika Kim is a passionate UI/UX and product designer based in Poway, California, with a strong foundation in both visual communication and thoughtful problem-solving. A recent graduate of San Diego City College’s Interaction & Graphic Design program, Erika has gained hands-on experience through internships at TritonNav, Four Fin Creative, and My Rental Spot, as well as a year at Apple in operations and customer service roles. Her work has earned her recognition, including a Gold Winner award at The One Club Student Awards for her project “Gatcha Eats.” Erika’s approach to design emphasizes clarity, collaboration, and the power of well-crafted wayfinding—a passion inspired by her fascination with city and airport signage. She is fluent in English and Korean, and is currently open to new opportunities in user experience and product design.

Ashton Landis

Ashton Landis, wearing a black sleeveless top with long blonde hair against a dark background

Ashton Landis is a San Francisco-based graphic designer with a passion for branding, typography, and visual storytelling. A recent graduate of California College of the Arts with a BFA in Graphic Design and a minor in ecological practices, Ashton has developed expertise across branding, UI/UX, design strategy, environmental graphics, and more. She brings a people-centered approach to her work, drawing on her background in photography to create impactful and engaging design solutions. Ashton’s experience includes collaborating with Bay Area non-profits to build participatory identity systems and improve community engagement. She is now seeking new opportunities to grow and help brands make a meaningful impact.

Leah Ray

Leah Ray, in a black-and-white portrait wearing a black turtleneck, looking ahead

Leah (Xiayi Lei) Ray is a Beijing-based visual designer currently working at Kuaishou Technology, with a strong background in impactful graphic design that blends logic and creativity. She holds an MFA in Design and Visual Communications from California College of the Arts, where she also contributed as a teaching assistant and poster designer. Leah’s experience spans freelance work in branding, identity, and book cover design, as well as roles in UI/UX and visual development at various companies. She is fluent in English and Mandarin, passionate about education, arts, and culture, and is recognized for her thoughtful, novel approach to design.

Meet the Educators

Sean Bacon

Sean Bacon, smiling in a light button-down against a blue-gray background

Sean Bacon is a professor, passionate designer and obsessive typophile who teaches a wide range of classes at San Diego City College, where he also helps direct and manage the graphic design program and its administrative responsibilities. He always strives to bring excellence to his students’ work, and his wealth of experience and insight helps produce many of the award-winning portfolios from City. He has worked for The Daily Aztec, Jonathan Segal Architecture, Parallax Visual Communication and Silent Partner. He attended San Diego City College and San Diego State, and completed his master’s at Savannah College of Art and Design.

Eric Heiman

Eric Heiman, in profile wearing a flat cap and glasses, black and white photo

Eric Heiman is principal and co-founder of the award-winning, oft-exhibited design studio Volume Inc. He also teaches at California College of the Arts (CCA) where he currently manages TBD*, a student-staffed design studio creating work to help local Bay Area nonprofits and civic institutions. Eric also writes about design every so often, has curated one film festival, occasionally podcasts about classic literature, and was recently made an AIGA Fellow for his contribution to raising the standards of excellence in practice and conduct within the Bay Area design community. 

Elena Pacenti

Portrait of Elena Pacenti, smiling with long blonde hair, wearing a black top, in soft natural light.

Elena Pacenti is a seasoned design expert with over thirty years of experience in design education, research, and international projects. Currently the Director of the MDes Interaction Design program at California College of the Arts, she has previously held leadership roles at NewSchool of Architecture & Design and Domus Academy, focusing on curriculum development, faculty management, and strategic planning. Elena holds a PhD in Industrial Design and a Master’s in Architecture from Politecnico di Milano, and is recognized for her expertise in service design, strategic design, and user experience. She is passionate about leading innovative, complex projects where design plays a central role.

Bradford Prairie

Bradford Prairie, smiling in a jacket and button-down against a soft purple background

Bradford Prairie has been teaching at San Diego City College for nine years, starting as an adjunct instructor while simultaneously working as a professional designer and creative director at Ignyte, a leading branding agency. What made his transition unique was Ignyte’s support for his educational aspirations—they understood his desire to prioritize teaching and eventually move into it full-time. This dual background in industry and academia allows him to bring real-world expertise into the classroom while maintaining his creative practice.

Josh Silverman

Josh Silverman, smiling in a striped shirt against a dark background

For three decades, Josh Silverman has built bridges between entrepreneurship, design education, and designers—always focused on helping people find purpose and opportunity. As founder of PeopleWork Partners, he brings a humane design lens to recruiting and leadership coaching, placing emerging leaders at companies like Target, Netflix, and OpenAI, and advising design teams on critique, culture, and operations. He has chaired the MDes program at California College of the Arts, taught and spoken worldwide, and led AIGA chapters. Earlier, he founded Schwadesign, a lean, holacratic studio recognized by The Wall Street Journal and others. His clients span startups, global enterprises, top universities, cities, and non-profits. Josh is endlessly curious about how teams make decisions and what makes them thrive—and is always up for a long bike ride.
