
Our profession is changing rapidly. I’ve been covering that here for nearly a year now. Lots of posts come across my desk that say similar things. Tom Scott repeats a lot of what’s been said, but I’ll pull out a couple nuggets that caught my eye.

He declares that “Hands-on is the new default.” Quoting Vitor Amaral, a designer at Intercom:

Being craft-focused means staying hands-on, regardless of specialty or seniority. This won’t be a niche role, it will be an expectation for everyone, from individual contributors to VPs. The value lies in deeply understanding how things actually work, and that comes from direct involvement in the work.

As AI speeds up execution, the craft itself will become easier, but what will matter most is the critical judgment to craft the right thing, move fast, and push the boundaries of quality.

For those looking for work, Scott says, “You NEED to change how you find a job.” Quoting Felix Haas, investor and designer at Lovable:

Start building a real product and get a feeling for what it means to push something out into the market

Learn to use AI to prototype interactively → even at a basic level

Get comfortable with AI tools early → they’ll be your co-designer / sparring partner

Focus on solving real problems, not just making things look good (which has been a problem in the design space for a very long time)

Scott also says that “Design roles are merging,” and Ridd from Dive Club illustrates the point:

We are seeing a collapse of design’s monopoly on ideation where designers no longer “own” the early idea stage. PMs, engineers, and others are now prototyping directly with new tools.

If designers move too slow, others will fill the gap. The line between PM, engineer, and designer is thinner than ever. Anyone tool-savvy can spin up prototypes — which raises the bar for designers.

Impact comes from working prototypes, not just facilitation. Leading brainstorms or “owning process” isn’t enough. Real influence comes from putting tangible prototypes in front of the team and aligning everyone around them.

Design is still best positioned — but not guaranteed

Designers could lead this shift, but only if they step up. Ownership of ideation is earned, not assumed.

The future of product design

The future belongs to AI-native designers

verifiedinsider.substack.com

Is the AI bubble about to burst? Apparently, AI prompt-to-code tools like Lovable and v0 have peaked and are on their way down.

Alistair Barr writing for Business Insider:

The drop-off raises tough questions for startups that flaunted exponential annual recurring revenue growth just months ago. Analysts wrote that much of that revenue comes from month-to-month subscribers who may churn as quickly as they signed up, putting the durability of those flashy numbers in doubt.

Barr interviewed Eric Simons, CEO of Bolt who said:

“This is the problem across all these companies right now. The churn rate for everyone is really high,” Simons said. “You have to build a retentive business.”

AI vibe coding tools were supposed to change everything. Now traffic is crashing.

Vibe coding tools have seen traffic drop, with Vercel’s v0 and Lovable seeing significant declines, raising sustainability questions, Barclays warns.

businessinsider.com

I love this framing by Patrizia Bertini:

Let me offer a different provocation: AI is not coming for your job. It is coming for your tasks. And if you cannot distinguish between the two, then yes — you should be worried.

Going further, she distinguishes between output and outcome:

Output is what a process produces. Code. Copy. Designs. Legal briefs. Medical recommendations. Outputs are the tangible results of a system executing its programmed or prescribed function — the direct product of following steps, rules, or algorithms. The term emerged in the industrial era, literally describing the quantity of coal or iron a mine could extract in a given period. Output depends entirely on the efficiency and capability of the process that generates it.

Outcome is what happens when that output meets reality. An outcome requires context, interpretation, application, and crucially — intentionality. Outcomes demand understanding not just what was produced, but why it matters, who it affects, and what consequences ripple from it. Where outputs measure productivity, outcomes measure impact. They are the ultimate change or consequence that results from applying an output with purpose and judgment.

She argues that, “AI can generate outputs. It cannot, however, create outcomes.”

This reminds me of a recent thread by engineer Marc Love:

It’s insane just how much how I work has changed in the last 18 months.

I almost never hand write code anymore except when giving examples during planning conversations with LLMs.

I build multiple full features per day, each of which would’ve taken me a week or more to hand write. Building full drafts and discarding them is basically free.

Well over half of my day is spent ideating, doing systems design, and deciding what and what not to build.

It’s still conceptually the same job, but if i list out the specific things i do in a day versus 18 months ago, it’s almost completely different.

Care about the outcome, not the output.

When machines make outputs, humans must own outcomes

The future of work in the age of AI and deepware.

uxdesign.cc

In an announcement to users this morning, Visual Electric said they were being acquired by Perplexity—or more accurately, the team that makes Visual Electric will be hired by Perplexity. The service will shut down in the next 90 days.

Today we’re sharing the next step in Visual Electric’s journey: we’ve been acquired by Perplexity. This is a milestone that marks both an exciting opportunity for our team and some big changes for our product.

Over the next 90 days we’ll be sunsetting Visual Electric, and our team will be forming a new Agent Experiences group at Perplexity.

While we’ve seen acquihires and shutdowns in the AI infrastructure space (e.g., Scale AI) and the coding space (e.g., Windsurf), I don’t believe we’ve seen an exit event like this in the image or video gen AI space yet. And of course, The Browser Company announced its acquisition by Atlassian last month.

Building gen AI tools at this moment is incredibly competitive. I think it takes an even stronger-stomached entrepreneur than in the pre-ChatGPT era. So kudos to the folks at Visual Electric for having a good outcome and getting to continue their work at Perplexity. But I don’t think this is the last consolidation we’ll see in this space.

Visual Electric is Joining Perplexity

visualelectric.com

The headline rings true to me because that’s what I look for in designers and how I run my team. The software that we build is too complex and too mission-critical for designers to vibe-code—at least given today’s tooling. But each one of the designers on my team can fill in for a PM when they’re on vacation.

Kai Wong, writing in UX Collective:

One thing I’ve learned, talking with 15 design leaders (and one CEO), is that a ‘designer who codes’ may look appealing, but a ‘designer who understands business’ is far more valuable and more challenging to replace.

You already possess the core skill that makes this transition possible: the ability to understand users with systematic observation and thoughtful questioning.

The only difference, now, is learning to apply that same methodology to understand your business.

Strategic thinking doesn’t require fancy degrees (although it may sometimes help).

Ask strategic questions about business goals. Understand how to balance user and business needs. Frame your design decisions in terms of measurable business impact.

Why many employers want Designers to think like PMs, not Devs

How asking questions, which used to annoy teams, is now critical to UX’s future

uxdesign.cc

I’m happy that the conversation around the design talent crisis continues. Carly Ayres, writing for It’s Nice That, picks up the torch and speaks to designers and educators about this topic. What struck me—and I think what adds to the dialogue—is the notion of the belief gap. Ayres spoke with Naheel Jawaid, founder of Silicon Valley School of Design, about it:

“A big part of what I do is just being a coach, helping someone see their potential when they don’t see it yet,” Naheel says. “I’ve had people tell me later that a single conversation changed how they saw themselves.”

In the past, belief capital came from senior designers taking juniors under their wing. Today, those same seniors are managing instability of their own. “It’s a bit of a ‘dog eat dog world’-type vibe,” Naheel says. “It’s really hard to get mentorship right now.”

The whole piece is great. Tighter than my sprawling three-parter. I do think there’s a piece missing, though. While Ayres highlights the issue and offers suggestions from design leaders, businesses need to step up and do something about it—i.e., hire more juniors. Recognizing the problem is only the first step.

Welcome to the entry-level void: what happens when junior design jobs disappear?

Entry-level jobs are disappearing. In their place: unpaid gigs, cold DMs and self-starters scrambling for a foothold. The ladder’s gone – what’s replacing it, and who’s being left behind?

itsnicethat.com

Jason Spielman put up a case study on his site for his work on Google’s NotebookLM:

The mental model of NotebookLM was built around the creation journey: starting with inputs, moving through conversation, and ending with outputs. Users bring in their sources (documents, notes, references), then interact with them through chat by asking questions, clarifying, and synthesizing before transforming those insights into structured outputs like notes, study guides, and Audio Overviews.

And yes, he includes a sketch he did on the back of a napkin.

I’ve always wondered about the UX of NotebookLM. It’s not typical and, if I’m being honest, not exactly super intuitive. But after a while, it does make sense. Maybe I’m the outlier though, because Spielman’s grandmother found it easy. In an interview last year on Sequoia Capital’s Training Data, he recalls:

I actually do think part of the explosion of audio overviews was the fact it was a simple one click experience. I was on the phone with my grandma trying to explain her how to use it and it actually didn’t take any explanation. I’m like, “Drop in a source.” And she’s like, “Oh! I see. I click this button to generate it.” And I think that the ease of creation is really actually what catalyzed so much explosion. So I think when we think about adding these knobs [for customization] I think we want to do it in a way that’s very intentional.

Designing NotebookLM

Designer, builder, and visual storyteller. Now building Huxe. Previously led design on NotebookLM and contributed to Google AI projects like Gemini and Search. Also shoot photo/video for brands like Coachella, GoPro, and Rivian.

jasonspielman.com

Chatboxes have become the uber box for all things AI. The criticism of this blank box has been the cold-start issue: new users don’t know what to type. Designers shipping these products mostly got around the problem by offering suggested prompts to teach users about the possibilities.

The issue on the other end is that expert users end up creating their own library of prompts to copy and paste into the chatbox for repetitive tasks.

Sharang Sharma, writing in UX Collective, illustrates how these UIs can be smarter by predicting intent:

Contrary, Predictive UX points to an alternate approach. Instead of waiting for users to articulate every step, systems can anticipate intent based on behavior or common patterns as the user types. Apple Reminders suggests likely tasks as you type. Grammarly predicts errors and offers corrections inline. Gmail’s Smart Compose even predicts full phrases, reducing the friction of drafting entirely.

Sharma says that the goal of predictive UX is to “reduce time-to-value and reframe AI as an adaptive partner that anticipates user’s intent as you type.”

Imagine a little widget that appears within the chatbox as you type. Kind of a cool idea.
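To make the idea concrete, here’s a minimal sketch of prefix-based intent suggestion in JavaScript. The `commonIntents` list and `suggestIntents` function are hypothetical; a real product like the ones Sharma cites would rank candidates using behavior, history, and context, not just string prefixes.

```javascript
// Hypothetical library of common prompts a product might learn over time.
const commonIntents = [
  "summarize this document",
  "summarize this meeting into action items",
  "draft a reply to this email",
  "translate this text to Spanish",
];

// Return up to `limit` intents that extend what the user has typed so far.
function suggestIntents(typed, intents, limit = 3) {
  const query = typed.trim().toLowerCase();
  if (query.length === 0) return [];
  return intents
    .filter((intent) => intent.startsWith(query) && intent !== query)
    .slice(0, limit);
}

console.log(suggestIntents("summ", commonIntents));
// Matches both "summarize…" intents as the user types
```

The widget idea from above maps onto this directly: as the user types, the shortlist re-renders inside the chatbox.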

How can AI UI capture intent?

Exploring contextual prompt patterns that capture user intent as it is typed

uxdesign.cc

Thinking about this morning’s link about web forms, if you abstract away why it’s so powerful, you get to the point of human-computer interaction: the computer should do what the user intends, not just what the buttons they push say.

Matt Webb reminds us about the DWIM, or Do What I Mean philosophy in computing that was coined by Warren Teitelman in 1966. Webb quotes computer scientist Larry Masinter:

DWIM is an embodiment of the idea that the user is interacting with an agent who attempts to interpret the user’s request from contextual information. Since we want the user to feel that he is conversing with the system, he should not be stopped and forced to correct himself or give additional information in situations where the correction or information is obvious.

Webb goes on to say:

Squint and you can see ChatGPT as a DWIM UI: it never, never, never says “syntax error.”

Now, arguably it should come back and ask for clarifications more often, and in particular DWIM (and AI) interfaces are more successful the more they have access to the user’s context (current situation, history, environment, etc).

But it’s a starting point. The algo is: design for capturing intent and then DWIM; iterate until that works. AI unlocks that.
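A toy version of that algo can be sketched in JavaScript: instead of rejecting unrecognized input with a syntax error, match it against the nearest known command and only fall back to asking for clarification when nothing is close. The command list and distance threshold here are invented for illustration; real DWIM systems also lean heavily on the user’s context and history.

```javascript
// Standard Levenshtein edit distance between two strings.
function editDistance(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,        // deletion
        dp[i][j - 1] + 1,        // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Pick the closest command; return null (ask for clarification) when
// nothing is within the tolerance.
function dwim(input, commands, maxDistance = 3) {
  let best = null;
  let bestDist = Infinity;
  for (const cmd of commands) {
    const d = editDistance(input.toLowerCase(), cmd);
    if (d < bestDist) {
      best = cmd;
      bestDist = d;
    }
  }
  return bestDist <= maxDistance ? best : null;
}

const commands = ["open file", "save file", "close window"];
console.log(dwim("opne file", commands)); // → "open file", never "syntax error"
```

The point isn’t the string matching itself — an LLM replaces this crude distance function — but the control flow: interpret first, clarify second, reject never.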

The destination for AI interfaces is Do What I Mean

Posted on Friday 29 Aug 2025. 840 words, 10 links. By Matt Webb.

interconnected.org

Filling out forms is one of the fundamental things we make users do in software. Whether it’s a login screen, a billing address form, or a mortgage application, forms are the main method for getting data from users into computer-accessible databases. The human decides which piece of information goes into which column of the database. With AI, form filling should be much simpler.

Luke Wroblewski makes the argument:

With Web forms, the burden is on people to adapt to databases. Today’s AI models, however, can flip this requirement. That is, they allow people to provide information in whatever form they like and use AI to do the work necessary to put that information into the right structure for a database.

How can it work?

With AgentDB connected to an AI model (via an MCP server), a person can simply say “add this” and provide an image, PDF, audio, video, you name it. The model will use AgentDB’s template to decide what information to extract from this unstructured input and how to format it for the database. In the case where something is missing or incomplete, the model can ask for clarification or use tools (like search) to find possible answers.
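The general pattern — schema-guided extraction from unstructured input — can be sketched without AgentDB. The schema shape and prompt below are my own illustration, not AgentDB’s actual API; in practice the model’s JSON reply would be validated against the schema before being inserted into the database.

```javascript
// Illustrative table schema; real systems would derive this from the database.
const contactsSchema = {
  table: "contacts",
  columns: ["name", "email", "phone"],
};

// Build the instruction an LLM would receive alongside the user's
// unstructured input (text, or a transcription of an image/PDF/audio).
function buildExtractionPrompt(schema, unstructuredInput) {
  return [
    `Extract rows for the "${schema.table}" table.`,
    `Columns: ${schema.columns.join(", ")}.`,
    `Return JSON only; use null for anything missing.`,
    `Input: ${unstructuredInput}`,
  ].join("\n");
}

const prompt = buildExtractionPrompt(
  contactsSchema,
  "Met Dana Reyes at the conference, dana@example.com"
);
console.log(prompt);
```

The "ask for clarification" branch Wroblewski mentions kicks in when the model returns null for a required column.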

Unstructured Input in AI Apps Instead of Web Forms

Web forms exist to put information from people into databases. The input fields and formatting rules in online forms are there to make sure the information fits...

lukew.com

I believe purity tests of any sort are problematic. And it’s much too easy to throw around the “This is AI slop!” claim. AI was used in the main title sequence for the Marvel TV show Secret Invasion. But it was on purpose and aligned with the show’s themes of shapeshifters.

Anyway, Daniel John, writing for Creative Bloq:

[Lady] Gaga just dropped the music video for The Dead Dance, a song debuted in Season 2 of Netflix’s Wednesday. Directed by Tim Burton, it’s a suitably nightmarish black-and-white cacophony of monsters and dolls. But some are already claiming that parts of it were made using AI.

John shows a tweet from @graveyardquy as an example:

i didn’t think we’d ever be in a timeline where a tim burton x lady gaga collab would turn out to be AI slop… but here we are

We need to separate quality critiques from tool usage. If it looks good and is appropriate, I’m fine with CG, AI, and whatever comes next that helps tell the story. Same goes for what we do as designers, ’natch.

Gaga’s song is great. It’s a bop, as the kids say, with a neat music video to boot.

The Lady Gaga backlash proves AI paranoia has gone too far

Just because it looks odd, doesn't mean it's AI.

creativebloq.com

Josh Miller, CEO, and Hursh Agrawal, CTO, of The Browser Company:

Today, The Browser Company of New York is entering into an agreement to be acquired by Atlassian in an all-cash transaction. We will operate independently, with Dia as our focus. Our objective is to bring Dia to the masses.

Super interesting acquisition here. There is zero overlap as far as I can tell. Atlassian’s move comes out of left field. Dia’s early users were college students. The Browser Company more recently opened it up to former Arc users. Is this bet for Atlassian—the company that makes tech-company-focused products like Jira and Confluence—about the future of work and collaboration? Is this their first move against Salesforce? 🤔

Your Tuesday in 2030

Or why The Browser Company is being acquired to bring Dia to the masses.

open.substack.com
Conceptual 3D illustration of stacked digital notebooks with a pen on top, overlaid on colorful computer code patterns.

Why We Still Need a HyperCard for the AI Era

I rewatched the 1982 film TRON for the umpteenth time the other night with my wife. I have always credited this movie as the spark that got me interested in computers. Mind you, I was nine years old when this film came out. I was so excited after watching the movie that I got my father to buy us a home computer—the mighty Atari 400 (note sarcasm). I remember an educational game that came on cassette called “States & Capitals” that taught me, well, the states and their capitals. It also introduced me to BASIC, and after watching TRON, I wanted to write programs!

Vintage advertisement for the Atari 400 home computer, featuring the system with its membrane keyboard and bold headline “Introducing Atari 400.”

The Atari 400’s membrane keyboard was easy to wipe down, but terrible for typing. It also reminded me of fast food restaurant registers of the time.

Back in the early days of computing—the 1960s and ’70s—there was no distinction between users and programmers. Computer users wrote programs to do stuff for them. Hence the close relationship between the two that’s depicted in TRON. The programs in the digital world resembled their creators because they were extensions of them. Tron, the security program that Bruce Boxleitner’s character Alan Bradley wrote, looks like its creator. Clu looked like Kevin Flynn, played by Jeff Bridges. Early in the film, a compound interest program captured by the MCP’s goons says to a cellmate, “If I don’t have a User, then who wrote me?”

Scene from the 1982 movie TRON showing programs in glowing blue suits standing in a digital arena.

The programs in TRON looked like their users. Unless the user was the program, which was the case with Kevin Flynn (Jeff Bridges), third from left.

I was listening to a recent interview with Ivan Zhao, CEO and cofounder of Notion, in which he said he and his cofounder were “inspired by the early computing pioneers who in the ’60s and ’70s thought that computing should be more LEGO-like rather than like hard plastic.” Meaning computing should be malleable and configurable. He goes on to say, “That generation of thinkers and pioneers thought about computing kind of like reading and writing.” As in accessible and fundamental so all users can be programmers too.

The 1980s ushered in the personal computer era with the Apple IIe, Commodore 64, TRS-80, (maybe even the Atari 400 and 800), and then the Macintosh, etc. Programs were beginning to be mass-produced and consumed by users, not programmed by them. To be sure, this move made computers much more approachable. But it also meant that users lost a bit of control. They had to wait for Microsoft to add a feature into Word that they wanted.

Of course, we’re coming back to a full circle moment. In 2025, with AI-enabled vibecoding, users are able to spin up little custom apps that do pretty much anything they want them to do. It’s easy, but not trivial. The only interface is the chatbox, so your control is only as good as your prompts and the model’s understanding. And things can go awry pretty quickly if you’re not careful.

What we’re missing is something accessible, but controllable. Something with enough power to allow users to build a lot, but not so much that it requires high technical proficiency to produce something good. In 1987, Apple released HyperCard and shipped it for free with every new Mac. HyperCard, as fans declared at the time, was “programming for the rest of us.”

HyperCard—Programming for the Rest of Us

Black-and-white screenshot of HyperCard’s welcome screen on a classic Macintosh, showing icons for Tour, Help, Practice, New Features, Art Bits, Addresses, Phone Dialer, Graph Maker, QuickTime Tools, and AppleScript utilities.

HyperCard’s welcome screen showed some useful stacks to help the user get started.

Bill Atkinson was the programmer responsible for MacPaint. After the Mac launched, and apparently on an acid trip, Atkinson conceived of HyperCard. As he wrote on the Apple history site Folklore:

Inspired by a mind-expanding LSD journey in 1985, I designed the HyperCard authoring system that enabled non-programmers to make their own interactive media. HyperCard used a metaphor of stacks of cards containing graphics, text, buttons, and links that could take you to another card. The HyperTalk scripting language implemented by Dan Winkler was a gentle introduction to event-based programming.

There were five main concepts in HyperCard: cards, stacks, objects, HyperTalk, and hyperlinks. 

  • Cards were screens or pages. Remember that the Mac’s nine-inch monochrome screen was just 512 pixels by 342 pixels.
  • Stacks were collections of cards, essentially apps.
  • Objects were the UI and layout elements that included buttons, fields, and backgrounds.
  • HyperTalk was the scripting language that read like plain English.
  • Hyperlinks were links from one interactive element like a button to another card or stack.

When I say that HyperTalk read like plain English, I mean it really did. AppleScript and JavaScript are descendants. Here’s a sample logic script:

if the text of field "Password" is "open sesame" then
  go to card "Secret"
else
  answer "Wrong password."
end if

Armed with this programming “erector set,” users built all sorts of banal or wonderful apps. From tracking vinyl records to issuing invoices, or transporting gamers to massive immersive worlds, HyperCard could do it all. The first version of the classic puzzle adventure game Myst was created with HyperCard. It comprised six stacks and 1,355 cards. From Wikipedia:

The original HyperCard Macintosh version of Myst had each Age as a unique HyperCard stack. Navigation was handled by the internal button system and HyperTalk scripts, with image and QuickTime movie display passed off to various plugins; essentially, Myst functions as a series of separate multimedia slides linked together by commands.

Screenshot from the game Myst, showing a 3D-rendered island scene with a ship in a fountain and classical stone columns.

The hit game Myst was built in HyperCard.

For a while, HyperCard was everywhere. Teachers made lesson plans. Hobbyists made games. Artists made interactive stories. In the Eighties and early Nineties, there was a vibrant shareware community: small independent developers created and shared simple programs in exchange for a postcard, a beer, or five dollars. Thousands of HyperCard stacks were distributed on aggregated floppies and CD-ROMs. Steve Sande, writing for Rocket Yard:

At one point, there was a thriving cottage industry of commercial stack authors, and I was one of them. Heizer Software ran what was called the “Stack Exchange”, a place for stack authors to sell their wares. Like Apple with the current app stores, Heizer took a cut of each sale to run the store, but authors could make a pretty good living from the sale of popular stacks. The company sent out printed catalogs with descriptions and screenshots of each stack; you’d order through snail mail, then receive floppies (CDs at a later date) with the stack(s) on them.

Black-and-white screenshot of Heizer Software’s “Stack Exchange” HyperCard catalog, advertising a marketplace for stacks.

Heizer Software’s “Stack Exchange,” a marketplace for HyperCard authors.

From Stacks to Shrink-Wrap

But even as shareware programs and stacks thrived, the ground beneath this cottage industry was beginning to shift. To move from a niche to a machine in every household, the computer industry professionalized and commoditized software development, distribution, and sales. By the 1990s, the dominant model was packaged software, merchandised on store shelves in slick shrink-wrapped boxes. The packaging was always oversized relative to the floppy or CD it contained, to maximize visual space on the shelf.

Unlike the users/programmers of the ’60s and ’70s, you didn’t make your own word processor anymore; you bought Microsoft Word. You didn’t build your own paint and retouching program; you purchased Adobe Photoshop. These applications were powerful, polished, and designed for thousands and eventually millions of users. But that meant if you wanted a new feature, you had to wait for the next upgrade cycle—typically a couple of years. If you had an idea, you were constrained by what the developers at Microsoft or Adobe decided was on the roadmap.

The ethos of tinkering gave way to the economics of scale. Software became something you consumed rather than created.

From Shrink-Wrap to SaaS

The 2000s took that shift even further. Instead of floppy disks or CD-ROMs, software moved into the cloud. Gmail replaced the personal mail client. Google Docs replaced the need for a copy of Word on every hard drive. Salesforce, Slack, and Figma turned business software into subscription services you didn’t own, but rented month-to-month.

SaaS has been a massive leap for collaboration and accessibility. Suddenly your documents, projects, and conversations lived everywhere. No more worrying about hard drive crashes or lost phones! But it pulled users even farther away from HyperCard’s spirit. The stack you made was yours; the SaaS you use belongs to someone else’s servers. You can customize workflows, but you don’t own the software.

Why Modern Tools Fall Short

For what started out as a note-taking app, Notion has come a long way. With its kit of parts—pages, databases, tags, etc.—it’s highly configurable for tracking information. But you can’t make games with it. Nor can you really tell interactive stories (sure, you can link pages together). You also can’t distribute what you’ve created and share with the rest of the world. (Yes, you can create and sell Notion templates.)

No productivity software programs are malleable in the HyperCard sense. 

[IMAGE: Macromedia Director]

Of course, there are specialized tools for creativity. Unreal Engine and Unity are great for making games. Director and Flash continued the tradition started by HyperCard—at least in the interactive media space—before they were supplanted by more complex HTML5, CSS, and JavaScript. Objectively, these authoring environments are more complex than HyperCard ever was.

The Web’s HyperCard DNA

In a fun remembrance, Constantine Frantzeskos writes:

HyperCard’s core idea was linking cards and information graphically. This was true hypertext before HTML. It’s no surprise that the first web pioneers drew direct inspiration from HyperCard – in fact, HyperCard influenced the creation of HTTP and the Web itself​. The idea of clicking a link to jump to another document? HyperCard had that in 1987 (albeit linking cards, not networked documents). The pointing finger cursor you see when hovering over a web link today? That was borrowed from HyperCard’s navigation cursor​.

Ted Nelson coined the terms “hypertext” and “hyperlink” in the mid-1960s, envisioning a world where digital documents could be linked together in nonlinear “trails”—making information interwoven and easily navigable. Bill Atkinson’s HyperCard was the first mass-market program that popularized this idea, even influencing Tim Berners-Lee, the father of the World Wide Web. Berners-Lee’s invention was about linking documents together on a server and linking to other documents on other servers. A web of documents.

Early ViolaWWW hypermedia browser from 1993, displaying a window with navigation buttons, URL bar, and hypertext description.

Early web browser from 1993, ViolaWWW, directly inspired by the concepts in HyperCard.

Pei-Yuan Wei, developer of one of the first web browsers called ViolaWWW, also drew direct inspiration from HyperCard. Matthew Lasar writing for Ars Technica:

“HyperCard was very compelling back then, you know graphically, this hyperlink thing,” Wei later recalled. “I got a HyperCard manual and looked at it and just basically took the concepts and implemented them in X-windows,” which is a visual component of UNIX. The resulting browser, Viola, included HyperCard-like components: bookmarks, a history feature, tables, graphics. And, like HyperCard, it could run programs.

And of course, with the built-in source code viewer, browsers brought on a new generation of tinkerers who’d look at HTML and make stuff by copying, tweaking, and experimenting.

The Missing Ingredient: Personal Software

Today, we have low-code and no-code tools like Bubble for making web apps, Framer for building websites, and Zapier for automations. The tools are still aimed at professionals, though. Maybe with the exception of Zapier and IFTTT, they’ve expanded the number of people who can make software (including websites), but they’re not general-purpose. These are all adjacent to what HyperCard was.

(Re)enter personal software.

In an essay titled “Personal software,” Lee Robinson wrote, “You wouldn’t search ‘best chrome extensions for note taking’. You would work with AI. In five minutes, you’d have something that works exactly how you want.”

Exploring the idea of “malleable software,” researchers at Ink & Switch wrote:

How can users tweak the existing tools they’ve installed, rather than just making new siloed applications? How can AI-generated tools compose with one another to build up larger workflows over shared data? And how can we let users take more direct, precise control over tweaking their software, without needing to resort to AI coding for even the tiniest change? None of these questions are addressed by products that generate a cloud-hosted application from a prompt.

Of course, AI prompt-to-code tools have been emerging this year, allowing anyone who can type to build web applications. However, if you study these tools more closely—Replit, Lovable, Base44, etc.—you’ll find that the audience is still technical people. Developers, product managers, and designers can understand what’s going on. But not everyday people.

These tools are still missing the ingredients HyperCard had, the ones that put it in the general zeitgeist for a while and let users be programmers again.

They are:

  • Direct manipulation
  • Technical abstraction
  • Local apps

What Today’s Tools Still Miss

Direct Manipulation

As I concluded in my exhaustive AI prompt-to-code tools roundup from April, “We need to be able to directly manipulate components by clicking and modifying shapes on the canvas or changing values in an inspector.” The latency of the roundtrip of prompting the model, waiting for it to think and then generate code, and then rebuild the app is much too long. If you don’t know how to code, every change takes minutes, so building something becomes tedious, not fun.

Tools need to be canvas-first, not chatbox-first. Imagine a kit of UI elements on the left that you can drag onto the canvas and then configure and style—not unlike WordPress page builders.

AI is there to do the work for you if you want, but you don’t need to use it.

Hand-drawn sketch of a modern HyperCard-like interface, with a canvas in the center, object palette on the left, and chat panel on the right.

My sketch of the layout of what a modern HyperCard successor could look like. A directly manipulatable canvas is in the center, object palette on the left, and AI chat panel on the right.

Technical Abstraction

For gen pop, I believe that these tools should hide away all the JavaScript, TypeScript, etc. The thing that the user is building should just work.

Additionally, there’s an argument to be made to bring back HyperTalk or something similar. Here is the same password logic I showed earlier, but in modern-day JavaScript:

const password = document.getElementById("Password").value;

if (password === "open sesame") {
  window.location.href = "secret.html";
} else {
  alert("Wrong password.");
} 

No one is going to understand that, much less write something like it.
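For comparison, a HyperTalk version of the same idea reads almost like English. This is a rough sketch of HyperTalk syntax from memory, not a verbatim stack script—`ask` puts the user’s reply into the special variable `it`:

```
on mouseUp
  ask "What is the password?"
  if it is "open sesame" then
    go to card "Secret"
  else
    answer "Wrong password."
  end if
end mouseUp
```

A middle schooler could read that aloud and basically understand it.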

One could argue that the user doesn’t need to understand that code since the AI will write it. Sure, but code is also documentation. If a user is working on an immersive puzzle game, they need to know the algorithm for the solution. 

As a side note, I think flow charts or node-based workflows are great. Unreal Engine’s Blueprints visual scripting is fantastic. Again, AI should be there to assist.

Unreal Engine Blueprints visual scripting interface, with node blocks connected by wires representing game logic.

Unreal Engine has a visual scripting interface called Blueprints, with node blocks connected by wires representing game logic.

Local Apps

HyperCard’s file format was the “stack.” And stacks could be compiled into standalone applications that could be distributed without HyperCard. Today’s cloud-based AI coding tools can all publish a project to a unique URL for sharing. That’s great for prototyping and for personal use, but if you wanted to distribute your app as shareware or donation-ware, you’d have to map it to a custom domain name. Purchasing a domain from a registrar and dealing with DNS records isn’t straightforward for most people.

What if these web apps could be turned into a single exchangeable file format like “.stack” or some such? Furthermore, what if they could be wrapped into executable apps via Electron?

Rip, Mix, Burn

Lovable, v0, and others already have sharing and remixing built in. This ethos is great and builds on the philosophies of the hippie computer scientists. In addition to fostering a remix culture, I imagine a centralized store for these apps. Of course, those that are published as runtime apps can go through the official Apple and Google stores if they wish. Finally, nothing stops third-party stores, similar to the collections of stacks that used to be distributed on CD-ROMs.

AI as Collaborator, Not Interface

As mentioned, AI should not be the main UI for this. Instead, it’s a collaborator. It’s there if you want it. I imagine that it can help with scaffolding a project just by describing what you want to make. And as it’s shaping your app, it’s also explaining what it’s doing and why so that the user is learning and slowly becoming a programmer too.

Democratizing Programming

When my daughter was in middle school, she used a site called Quizlet to make flash cards to help her study for history tests. There were often user-generated sets of cards for certain subjects, but there were never sets specifically for her class, her teacher, that test. With this HyperCard of the future, she would be able to build something custom in minutes.

Likewise, a small business owner who runs an Etsy shop selling T-shirts can spin up something a little more complicated to analyze sales and compare against overall trends in the marketplace.

And that same Etsy shop owner could sell the little app they made to others wanting the same tool for their stores.

The Future Is Close

Scene from TRON showing a program with raised arms, looking upward at a floating disc in a beam of light.

Tron talks to his user, Alan Bradley, via a communication beam.

In an interview with Garry Tan of Y Combinator in June, Michael Truell, the CEO of Anysphere, which is the company behind Cursor, said his company’s mission is to “replace coding with something that’s much better.” He acknowledged that coding today is really complicated:

Coding requires editing millions of lines of esoteric formal programming languages. It requires doing lots and lots of labor to actually make things show up on the screen that are kind of simple to describe.

Truell believes that in five to ten years, making software will boil down to “defining how you want the software to work and how you want the software to look.”

In my opinion, his timeline is a bit conservative, but maybe he means for professionals. I wonder if something simpler will come along sooner that will capture the imagination of the public, like ChatGPT has. Something that will encourage playing and tinkering like HyperCard did.

There’s a third TRON film coming out soon—TRON: Ares. In a panel discussion in the 5,000-seat Hall H at San Diego Comic-Con earlier this summer, Steven Lisberger, the creator of the franchise, offered this warning about AI: “Let’s kick the technology around artistically before it kicks us around.” While he said it as a warning, I think it’s an opportunity as well.

AI opens up computer “programming” to a much larger swath of people—hell, everyone. As an industry, we should encourage tinkering by building such capabilities into our products. Not UIs on the fly, but mods as necessary. We should build platforms that increase the pool of users from technical people to everyday users like students, high school teachers, and grandmothers. We should imagine a world where software is as personalizable as a notebook—something you can write in, rearrange, and make your own. And maybe users can be programmers once again.

Interesting piece from Vaughn Tan about a critical thinking framework that is disguised as a piece about building better AI UIs for critical thinking. Sorry, that sentence is kind of a tongue-twister. Tan calls out—correctly—that LLMs don’t think, or in his words, can’t make meaning:

Meaningmaking is making inherently subjective decisions about what’s valuable: what’s desirable or undesirable, what’s right or wrong. The machines behind the prompt box are remarkable tools, but they’re not meaningmaking entities.

Therefore when users ask LLMs for their opinions on matters, e.g., as in the therapy use case, the AIs won’t come back with actual thinking. IMHO, it’s semantics, but that’s another post.

Anyhow, Tan shares a pen and paper prototype he’s been testing, which breaks down a major decision into guided steps, or put another way, a framework.

This user experience was designed to simulate a multi-stage process of structured elicitation of various aspects of strongly reasoned arguments. This design explicitly addresses both requirements for good tool use. The structured prompts helped students think critically about what they were actually trying to accomplish with their custom major proposals — the meaningmaking work of determining value, worth, and personal fit. Simultaneously, the framework made clear what kinds of thinking work the students needed to do themselves versus what kinds of information gathering and analysis could potentially be supported by tools like LLMs.

This guided or framework-driven approach was something I attempted with Griffin AI. Via a series of AI-guided prompts to the user—or a glorified form, honestly—my tool helped users build brand strategies. To be sure, a lot of the “thinking” was done by the model, but the idea that an AI can guide you to think critically about your business or your client’s business was there.


Designing AI tools that support critical thinking

Current AI interfaces lull us into thinking we’re talking to something that can make meaningful judgments about what’s valuable. We’re not — we’re using tools that are tremendously powerful but nonetheless can’t do “meaningmaking” work (the work of deciding what matters, what’s worth pursuing).

vaughntan.org

Designer Tey Bannerman writes that when he hears “human in the loop,” he’s reminded of the story of Lieutenant Colonel Stanislav Petrov, a Soviet duty officer who monitored for incoming missile strikes from the US.

12:15 AM… the unthinkable. Every alarm in the facility started screaming. The screens showed five US ballistic missiles, 28 minutes from impact. Confidence level: 100%. Petrov had minutes to decide whether to trigger a chain reaction that would start nuclear war and could very well end civilisation as we knew it.

He was the “human in the loop” in the most literal, terrifying sense.

Everything told him to follow protocol. His training. His commanders. The computers.

But something felt wrong. His intuition, built from years of intelligence work, whispered that this didn’t match what he knew about US strategic thinking.

Against every protocol, against the screaming certainty of technology, he pressed the button marked “false alarm”.

Twenty-three minutes of gripping fear passed before ground radar confirmed: no missiles. The system had mistaken a rare alignment of sunlight on high-altitude clouds for incoming warheads.

His decision to break the loop prevented nuclear war.

Then Bannerman shares an awesome framework he developed that gives humans in the loop of AI systems “genuine authority, time to think, and understanding the bigger picture well enough to question” the system’s decisions. Click through to get the PDF from his site.

Framework diagram by Tey Bannerman titled Beyond ‘human in the loop’. It shows a 4×4 matrix mapping AI oversight approaches based on what is being optimized (speed/volume, quality/accuracy, compliance, innovation) and what’s at stake (irreversible consequences, high-impact failures, recoverable setbacks, low-stakes outcomes). Colored blocks represent four modes: active control, human augmentation, guided automation, and AI autonomy. Right panel gives real-world examples in e-commerce email marketing and recruitment applicant screening.

Redefining ‘human in the loop’

"Human in the loop" is overused and vague. The Petrov story shows humans must have real authority, time, and context to safely override AI. Bannerman offers a framework that asks what you optimize for and what is at stake, then maps 16 practical approaches.

teybannerman.com iconteybannerman.com

Simon Sherwood, writing in The Register:

Amazon Web Services CEO Matt Garman has suggested that firing junior workers because AI can do their jobs is “the dumbest thing I’ve ever heard.”

Garman made that remark in conversation with AI investor Matthew Berman, during which he talked up AWS’s Kiro AI-assisted coding tool and said he’s encountered business leaders who think AI tools “can replace all of our junior people in our company.”

That notion led to the “dumbest thing I’ve ever heard” quote, followed by a justification that junior staff are “probably the least expensive employees you have” and also the most engaged with AI tools.

“How’s that going to work when ten years in the future you have no one that has learned anything,” he asked. “My view is you absolutely want to keep hiring kids out of college and teaching them the right ways to go build software and decompose problems and think about it, just as much as you ever have.”

Yup. I agree.


AWS CEO says AI replacing junior staff is 'dumbest idea'

They're cheap and grew up with AI … so you're firing them why?

theregister.com

This post from Carly Ayres breaks down a beef between Michael Roberson (developer of an AI-enabled moodboard tool) and Elizabeth Goodspeed (writer and designer, oft-linked on this blog) and explores ragebait, putting in the reps as a junior, and designers as influencers.

Tweet by Michael Roberson defending Moodboard AI against criticism, saying if faster design research threatens your job, “you’re ngmi.” Screenshot shows a Sweetgreen brand audit board with colors, fonts, and imagery.

Tweet from Michael Roberson

The tweet earned 30,000 views, but only about 20 likes. “That ratio was pretty jarring,” [Roberson] said. Still, the strategy felt legible. “When I post things like, ‘if you don’t do X, you’re not going to make it,’ obviously, I don’t think that. These tools aren’t really capable of replacing designers just yet. It’s really easy to get views baiting and fear-mongering.”

Much like the provocative Artisan campaign, I think this is a net negative for the brand. Pretty sure I won’t be trying out Moodboard AI anytime soon, ngl.

But stepping back from the internet beef, Ayres argues that it’s a philosophical difference about the role of friction in the creative process.

Michael’s experience mirrors that of many young designers: brand audits felt like busywork during his Landor internship. “That process was super boring,” he told me. “I wasn’t learning much by copy-pasting things into a deck.” His tool promises to cut through that inefficiency, letting teams reach visual consensus faster and spend more time on execution.

Young Michael, the process is the point! Without doing this boring stuff, by automating it with AI, how are you going to learn? This is but one facet of the whole discussion around expertise, wisdom, and the design talent crisis.

Goodspeed agrees with me:

Elizabeth sees it differently. “What’s interesting to me,” Elizabeth noted, “is how many people are now entering this space without a personal understanding of how the process of designing something actually works.” For her, that grunt work was formative. “The friction is the process,” she explained. “That’s how you form your point of view. You can’t just slap seven images on a board. You’re forced to think: What’s relevant? How do I organize this and communicate it clearly?”

Ultimately, the saddest point that Ayres makes—and noted by my friend Eric Heiman—is this:

When you’re young, online, and trying to get a project off the ground, caring about distribution is the difference between a hobby and a company. But there’s a cost. The more you perform expertise, the less you develop it. The more you optimize for engagement, the more you risk flattening what gave the work meaning in the first place. In a world where being known matters more than knowing, the incentives point toward performance over practice. And we all become performers in someone else’s growth strategy.

…Because when distribution matters more than craft, you don’t become a designer by designing. You become a designer by being known as one. That’s the game now.


Mooooooooooooooood

Is design discourse the new growth hack?

open.substack.com
Surreal black-and-white artwork of a glowing spiral galaxy dripping paint-like streaks over a city skyline at night.

Why I’m Keeping My Design Title

In the 2011 documentary Jiro Dreams of Sushi, then-85-year-old sushi master Jiro Ono says this about craft:

Once you decide on your occupation… you must immerse yourself in your work. You have to fall in love with your work. Never complain about your job. You must dedicate your life to mastering your skill. That’s the secret of success and is the key to being regarded honorably.

Craft is typically thought of as the formal aspects of any field such as design, woodworking, writing, or cooking. In design, we think about composition, spacing, and typography—being pixel-perfect. But one’s craft is much more than that. Ono’s sushi craft is not solely about slicing fish and pressing it against a bit of rice. It is also about picking the right fish, toasting the nori just so, cooking the rice perfectly, and running a restaurant. It’s the whole thing.

Therefore, mastering design—or any occupation—takes time, experience, or reps as the kids say. So it’s to my dismay that Suff Syed’s essay “Why I’m Giving Up My Design Title — And What That Says About the Future of Design” got so much play in recent weeks. Syed is Head of Product Design at Microsoft—er, was. I guess his title is now Member of the Technical Staff. In a perfectly well-argued and well-written essay, he concludes:

That’s why I’m switching careers. From Head of Product Design to Member of Technical Staff.

This isn’t a farewell to experience, clarity, or elegance. It’s a return to first principles. I want to get closer to the metal—to shape the primitives, models, and agents that will define how tomorrow’s software is built.

We need more people at the intersection. Builders who understand agentic flows and elevated experiences. Designers who can reason about trust boundaries and token windows. Researchers who can make complex systems usable—without dumbing them down to a chat interface.

In the 2,800 words preceding the above quote, Syed lays out a five-point argument: the paradigm for software is changing to agentic AI, design doesn’t drive innovation, fewer design leaders will be needed in the future, the commoditization of design, and the pay gap. The tl;dr being that design as a profession is dead and building with AI is where it’s at. 

With respect to Mr. Syed, I call bullshit. 

Let’s discuss each of his arguments.

The Paradigm Argument

Suff Syed:

The entire traditional role of product designers, creating static UI in Silicon Valley offices that work for billions of users, is becoming increasingly irrelevant; when the Agent can simply generate the UI it needs for every single user.

That’s a very narrow view of what user experience designers do. In this diagram by Dan Saffer from 2008, UX encircles a large swath of disciplines. It’s a little older so it doesn’t cover newer disciplines like service design or AI design.

Diagram titled The Disciplines of UX showing overlapping circles of fields like Industrial Design, Human Factors, Communication Design, and Architecture. The central green overlap highlights Interaction Design, surrounded by related areas such as usability engineering, information architecture, motion design, application design, and human-computer interaction.

Originally made by envis precisely GmbH - www.envis-precisely.com, based on “The Disciplines of UX” by Dan Saffer (2008). (PDF)

I went to design school a long time ago, graduating in 1995. But even back then, in Graphic Design 2 class, graphic design wasn’t just print design. Our final project for that semester was to design an exhibit, something that humans could walk through. I’ve long lost the physical model, but my solution was inspired by the Golden Gate Bridge and how I had this impression of the main cables as welcoming arms as you drove across the bridge. My exhibit was a 20-foot-tall open structure made of copper beams and a glass roof. Etched onto the roof was a poem—by whom I can’t recall—that would cast the shadows of its letters onto the ground, creating an experience for anyone walking through the structure.

Similarly, thoughtful product designers consider the full experience, not just what’s rendered on the screen. How is onboarding? What’s the user’s interaction with customer service? And with techniques like contextual inquiry, we care about the environments users are in. Understanding that nurses in a hospital work in a very busy setting and share computers is the kind of insight that can’t be gleaned from desk research or general knowledge. Designers are students of life and observers of human behavior.

Syed again:

Agents offer a radical alternative by placing control directly into users’ hands. Instead of navigating through endless interfaces, finding a good Airbnb could be as simple as having a conversation with an AI agent. The UI could be generated on the fly, tailored specifically to your preferences; an N:1 model. No more clicking around, no endless tabs, no frustration.

I don’t know. I have my doubts that this is actually going to be the future. While I agree that agentic workflows will be game-changing, I disagree that the chat UI is the only one for all use cases or even most scenarios. I’ve previously discussed the disadvantages of prompting-only workflows and how professionals need more control. 

I also disagree that users will want UIs generated on the fly. Think about the avalanche of support calls and how insane those will be if every user’s interface is different!

In my experience, users—including myself—like to spend the time to set up their software for efficiency. For example, in a dual-monitor setup, I used to expose all of Photoshop’s palettes and put them in the smaller display, and the main canvas on the larger one. Every time I got a new computer or new monitor, I would import that workspace so I could work efficiently. 

Habit and muscle memory are underrated. Once a user has invested the time to arrange panels, tools, and shortcuts the way they like, changing it frequently adds friction. For productivity and work software, consistency often outweighs optimization. Even if a specialized AI-made-for-you workspace could be more “optimal” for a task, switching disrupts the user’s mental model and motor memory.

I want to provide one more example because it’s in the news: consider the backlash that OpenAI has faced in the past week with their rollout of GPT-5. OpenAI assumed people would simply welcome “the next model up,” but what they underestimated was the depth of attachment to existing workflows, and in some cases, to the personas of the models themselves. As Casey Newton put it, “it feels different and stronger than the kinds of attachment people have had to previous kinds of technology.” It’s evidence of how much emotional and cognitive investment users pour into the tools they depend on. You can’t just rip that foundation away without warning. 

Which brings us back to the heart of design: respect for the user. Not just their immediate preferences, but the habits, muscle memory, and yes, relationships that accumulate over time. Agents may generate UIs on the fly, but if they ignore the human need for continuity and control, they’ll stumble into the same backlash OpenAI faced.

The Innovation Argument

Syed’s second argument is that design supports innovation rather than drives it. I half agree with this. If we’re talking about patents or inventions, sure. Technology will always win the day. But design can certainly drive innovation.

He cites Airbnb, Figma, Notion, and Linear as being “incredible companies with design founders,” but only Airbnb is a Fortune 500 company. 

While they weren’t founded by designers, I don’t think anyone would argue that Apple, Nike, Tesla, and Disney aren’t design-led and innovative. All are in the Fortune 500. Disney treats experience design, which includes its parks, media, and consumer products, as a core capability. Imagineering is a literal design R&D division that shapes the company’s most profitable experiences. Look up Lanny Smoot.

Early prototypes of the iPhone featuring the first multitouch screens were actually tablet-sized. But Apple’s industrial design team, led by Jony Ive, along with the hardware engineering team, got the form factor to fit nicely in one hand. And it was Bas Ording, the UI designer behind Mac OS X’s Aqua design language, who prototyped inertial effects. Farhad Manjoo, writing in Slate in 2012:

Jonathan Ive, Apple’s chief designer, had been investigating a technology that he thought could do wonderful things someday—a touch display that could understand taps from multiple fingers at once. (Note that Apple did not invent multitouch interfaces; it was one of several companies investigating the technology at the time.) According to Isaacson’s biography, the company’s initial plan was to use the new touch system to build a tablet computer. Apple’s tablet project began in 2003—seven years before the iPad went on sale—but as it progressed, it dawned on executives that multitouch might work on phones. At one meeting in 2004, Jobs and his team looked at a prototype tablet that displayed a list of contacts. “You could tap on the contact and it would slide over and show you the information,” Forstall testified. “It was just amazing.”

Jobs himself was particularly taken by two features that Bas Ording, a talented user-interface designer, had built into the tablet prototype. One was “inertial scrolling”—when you flick at a list of items on the screen, the list moves as a function of how fast you swipe, and then it comes to rest slowly, as if being affected by real-world inertia. Another was the “rubber-band effect,” which causes a list to bounce against the edge of the screen when there were no more items to display. When Jobs saw the prototype, he thought, “My god, we can build a phone out of this,” he told the D Conference in 2010.

The Leadership Argument

Suff Syed’s third argument is about what it means to be a design leader. He says, “scaling your impact as a designer meant scaling the surfaces you influence.” As you rose up through the ranks, “your craft was increasingly displaced by coordination. You became a negotiator, a timeline manager, a translator of ambition through Product and Engineering partnerships.”

Instead, he argues, because AI can build with fewer people—well, you only need one person: “You need two people: one who understands systems and one who understands the user. Better if they’re the same person.”

That doesn’t scale. Don’t tell me that Microsoft, a company with $281 billion in revenue and 228,000 employees, will shrink like a stellar collapse into a single person with an army of AIs. That’s magical thinking.

Leaders are still needed. Influence and coordination are still needed. Humans will still be needed.

He ends this argument with:

This new world despises a calendar full of reviews, design crits, review meetings, and 1:1s. It emphasizes a repo with commits that matter. And promises the joy of shipping to return to your work. That joy unmediated by PowerPoint, politics, or process. That’s not a demotion. That’s liberation.

So he wants us all to sit in our home offices and not collaborate with others? Innovation no longer comes from lone geniuses. It’s born from bouncing ideas off your coworkers and everyone building on each other’s ideas.

Friction in the process can actually make things better. Pixar famously has a council known as the Braintrust—a small, rotating group of the studio’s best storytellers who meet regularly to tear down and rebuild works-in-progress. The rules are simple: no mandatory fixes, no sugarcoating, and no egos. The point is to push the director to see the story’s problems more clearly—and to own the solution. One of the most famous saves came with Toy Story 2. Originally destined for direct-to-video release, early cuts were so flat that the Braintrust urged the team to start from scratch. Nine frantic months later, the film emerged as one of Pixar’s most beloved works, proof that constructive creative friction can turn a near-disaster into a classic.

The Distribution Argument

Design taste has been democratized and is table stakes, says Syed in his next argument.

There was a time when every new Y Combinator startup looked like someone tortured an intern into generating a logo using Clipart. Today, thanks to a generation of exposure to good design—and better tools—most founders have internalized the basics of aesthetic judgment. First impressions matter, and now, they’re trivial to get right.

And that templates, libraries, and frameworks make it super easy and quick to spin up something tasteful in minutes:

Component libraries like Tailwind, shadcn/ui, and Radix have collapsed the design stack. What once required a full design team handcrafting a system in Figma, exporting specs to Storybook, and obsessively QA-ing the front-end… now takes a few lines of code. Spin up a repo. Drop in some components. Tweak the palette. Ship something that looks eerily close to Linear or Notion in a weekend.

I’m starting to think that Suff Syed believes that designers are just painters or something. Wow. This whole argument is reductive, flattening our role to be only about aesthetics. See above for how much design actually entails.

The Wealth Argument

“Nobody is paying Designers $10M, let alone $100M anytime soon.” Ah, I think this is him saying the quiet part out loud. Mr. Syed is dropping his design title and becoming a “member of the technical staff” because he’s chasing the money.

He’s right. No one is going to pay a designer $100 million total comp package. Unless you’re Jony Ive and part of io, which OpenAI acquired for $6.5 billion back in May. Which is a rare and likely once-ever occurrence.

In a recent episode of Hard Fork, The New York Times tech columnist Kevin Roose said:

The scale of money and investment going into these AI systems is unlike anything we’ve ever seen before in the tech industry. …I heard a rumor there was a big company that wasted a billion dollars or more on a failed training run. And then you start to think, oh, I understand why, to a company like Meta, the right AI talent is worth a hundred million dollars, because that level of expertise doesn’t exist that widely outside of this very small group of people. And if this person does their job well, they can save your company something more like a billion dollars. And maybe that means that you should pay them a hundred million dollars.

That “very small group of people” is likely just a couple dozen people in the world who have this expertise and are worth tens of millions of dollars each.

Syed again:

People are getting generationally wealthy inventing new agentic abstractions, compressing inference cycles, and scaling frontier models safely. That’s where the gravity is. That’s where anybody should aspire to be. With AI enabling and augmenting you as an individual, there’s a far more compelling reason to chase this frontier. No reason not to.

People also get generationally wealthy by hitting the startup lottery. But it’s a hard road and there’s a lot of luck involved.

The current AI frenzy feels a lot like 1849 in California. Back then, roughly 300,000 people flooded the Sierra Nevada mountains hoping to strike gold, but the math was brutal: maybe 10% made any profit at all, the top 4% earned enough to brag a little, and only about 1% became truly rich. The rest? They left with sore backs, empty pockets, and I guess some good stories. 

Back to Reality

AI is already changing the software industry. As designers and builders of software, we are going to be using AI as material. This is as obvious as when the App Store on iPhone debuted and everyone needed to build apps.

Suff Syed wrote his piece as part personal journey and decision-making and part rallying cry to other designers. He is essentially switching careers and says that it won’t be easy.

This transition isn’t about abandoning one identity for another. It’s about evolving—unlearning what no longer serves us and embracing the disciplines that will shape the future. There’s a new skill tree ahead: model internals, agent architectures, memory hierarchies, prompt flows, evaluation loops, and infrastructure that determines how products think, behave, and scale.

Best of luck to Suff Syed on his journey. I hope he strikes AI gold. 

As for me, I aim to continue on my journey of being a shokunin, or craftsman, like Jiro Ono. For over 30 years—if you count my amateur days in front of the Mac in middle school—I’ve been designing. Not just pushing pixels in Photoshop or Figma, but doing the work of understanding audiences and users, solving business problems, inventing new interaction patterns, and advocating for usability. All in the service of the user, and all while honing my craft.

That craft isn’t tied to a technology stack or a job title. It’s a discipline, a mindset, and a lifetime’s work. Being a designer is my life. 

So no, I’m not giving up my design title. It’s not a relic—it’s a commitment. And in a world chasing the next gold rush, I’d rather keep making work worth coming back to, knowing that in the end, gold fades but mastery endures. Besides, if I ever do get rich, it’ll be because I designed something great, not because I happened to be standing near a gold mine.

As a follow-up to yesterday’s item on how Google’s AI overviews are curtailing traffic to websites by as much as 25%, here is a link to Nielsen Norman Group’s just-published study showing that generative AI is reshaping search.

Kate Moran, Maria Rosala and Josh Brown:

While AI offers compelling shortcuts around tedious research tasks, it isn’t close to completely replacing traditional search. But, even when people are using traditional search, the AI-generated overview that now tops almost all search-results pages steals a significant amount of attention and often shortcuts the need to visit the actual pages.

They write that users have developed a way to search over the years, skipping sponsored results and heading straight for the organic links. Users also haven’t completely broken free of traditional Google Search, now adding chatbots to the mix:

While generative AI does offer enough value to change user behaviors, it has not replaced traditional search entirely. Traditional search and AI chats were often used in tandem to explore the same topic and were sometimes used to fact-check each other.

All our participants engaged in traditional search (using keywords, evaluating results pages, visiting content pages, etc.) multiple times in the study. Nobody relied entirely on genAI’s responses (in chat or in an AI overview) for all their information-seeking needs.

In many ways, I think this is smart. Unless “web search” is happening, I tend to double-check ChatGPT and Claude, especially for anything historical or mission-critical. I also like Perplexity for that reason—it shows me its receipts by giving me sources.


How AI Is Changing Search Behaviors

Our study shows that generative AI is reshaping search, but long-standing habits persist. Many users still default to Google, giving Gemini a fighting chance.

nngroup.com

Jessica Davies reports that new publisher data suggests that some sites are getting 25% less traffic from Google than the previous year.

Writing in Digiday:

Organic search referral traffic from Google is declining broadly, with the majority of DCN member sites — spanning both news and entertainment — experiencing traffic losses from Google search between 1% and 25%. Twelve of the respondent companies were news brands, and seven were non-news.

Jason Kint, CEO of DCN, says that this is a “direct consequence of Google AI Overviews.”

I wrote previously about the changing economics of the web here, here, and here.

And related, Eric Mersch writes in a LinkedIn post that Monday.com’s stock fell 23% after co-CEO Roy Mann said, “We are seeing some softness in the market due to Google algorithm,” during their Q2 earnings call, and the analysts kept hammering him and the CFO about how the algorithm changes might affect customer acquisition.

Analysts continued to press the issue, which caught company management completely off guard. Matthew Bullock from Bank of America Merrill Lynch asked frankly, “And then help us understand, why call this out now? How did the influence of Google SEO disruption change this quarter versus 1Q, for example?” The CEO could only respond, “So look, I think like we said, we optimize in real-time. We just budget daily,” implying that they were not aware of the problem until they saw Q2 results.

This is the first public sign that the shift from Google to AI-powered searches is having an impact.


Google AI Overviews linked to 25% drop in publisher referral traffic, new data shows

The majority of Digital Content Next publisher members are seeing traffic losses from Google search between 1% and 25% due to AI Overviews.

digiday.com

Ben Davies-Romano argues that the AI chat box is our new design interface:

Every interaction with a large language model starts the same way: a blinking cursor in a blank text field. That unassuming box is more than an input — it’s the interface between our human intent and the model’s vast, probabilistic brain.

This is where the translation happens. We pour in the nuance, constraints, and context of our ideas; the model converts them into an output. Whether it’s generating words, an image, a video sequence, or an interactive prototype, every request passes through this narrow bridge.

It’s the highest-stakes, lowest-fidelity design surface I’ve ever worked with: a single field that stands between human creativity and an engine capable of reshaping it into almost any form, albeit with all the necessary guidance and expertise applied.

In other words, don’t just say “Make it better,” but guide the AI instead.

That’s why a vague, lazy prompt, like “make it better”, is the design equivalent of telling a junior designer “make it intuitive” and walking away. You’ll get something generic, safe, and soulless, not because the AI “missed the brief,” but because there was no brief.

Without clear stakes, a defined brand voice, and rich context, the system will fill in the blanks with its default, most average response. And “average” is rarely what design is aiming for.

And he makes a point that designers should be leading the charge on showing others what generative AI can do:

In the age of AI, it shouldn’t be everyone designing, per se. It should be designers using AI as an extension of our craft. Bringing our empathy, our user focus, our discipline of iteration, and our instinct for when to stop generating and start refining. AI is not a replacement for that process; it’s a multiplier when guided by skilled hands.

So, let’s lead. Let’s show that the real power of AI isn’t in what it can generate, but in how we guide it — making it safer, sharper, and more human. Let’s replace the fear and the gimmicks with clarity, empathy, and intentionality.

The blank prompt is our new canvas. And friends, we need to be all over it.


Prompting is designing. And designers need to lead.

Forget “prompt hacks.” Designers have the skills to turn AI from a gimmick into a powerful, human-centred tool if we take the lead.

medium.com

Yesterday, OpenAI launched GPT-5, their latest and greatest model, replacing the confusing assortment of GPT-4o, o3, o4-mini, etc. with just two options: GPT-5 and GPT-5 pro. Reasoning is built in, and the new model is smart enough to know when to think harder and when a quick answer suffices.

Simon Willison deep dives into GPT-5, exploring its mix of speed and deep reasoning, massive context limits, and competitive pricing. He sees it as a steady, reliable default for everyday work rather than a radical leap forward:

I’ve mainly explored full GPT-5. My verdict: it’s just good at stuff. It doesn’t feel like a dramatic leap ahead from other LLMs but it exudes competence—it rarely messes up, and frequently impresses me. I’ve found it to be a very sensible default for everything that I want to do. At no point have I found myself wanting to re-run a prompt against a different model to try and get a better result.

It’s a long technical read but interesting nonetheless.


GPT-5: Key characteristics, pricing and model card

I’ve had preview access to the new GPT-5 model family for the past two weeks (see related video) and have been using GPT-5 as my daily-driver. It’s my new favorite …

simonwillison.net
Illustration of diverse designers collaborating around a table with laptops and design materials, rendered in a vibrant style with coral, yellow, and teal colors

Five Practical Strategies for Entry-Level Designers in the AI Era

*In Part I of this series on the design talent crisis, I wrote about the struggles recent grads have had finding entry-level design jobs and what might be causing the stranglehold on the design job market. In Part II, I discussed how industry and education need to change in order to ensure the survival of the profession.*

**Part III: Adaptation Through Action**

Like most Gen X kids, I grew up with a lot of freedom to roam. By fifth grade, I was regularly out of the house. My friends and I would go to an arcade in San Francisco’s Fisherman’s Wharf called The Doghouse, where naturally, they served hot dogs alongside their Joust and TRON cabinets. But we would invariably go to the Taco Bell across the street for cheap pre-dinner eats. In seventh grade—this is 1986—I walked by a ComputerLand on Van Ness Avenue and noticed a little beige computer with a built-in black and white CRT. The Macintosh screen was actually pale blue and black, but more importantly, showed MacPaint. It was my first exposure to creating graphics on a computer, which would eventually become my career.

Desktop publishing had officially begun a year earlier with the introduction of Aldus PageMaker and the Apple LaserWriter printer for the Mac, which enabled WYSIWYG (What You See Is What You Get) page layouts and high-quality printed output. A generation of designers who had created layouts using paste-up techniques with tools and materials like X-Acto knives, Rapidograph pens, rubyliths, photostats, and rubber cement had to start learning new skills. Typesetters would eventually be phased out in favor of QuarkXPress. A decade of transition would revolutionize the industry, only to be upended again by the web.

Many designers who made the jump from paste-up to desktop publishing couldn’t make the additional leap to HTML. They stayed graphic designers and a new generation of web designers emerged. I think those who were in my generation—those that started in the waning days of analog and the early days of DTP—were able to make that transition.

We are in the midst of yet another transition: to AI-augmented design. It’s important to note that it’s so early that no one can say anything with absolute authority. Any so-called experts have been working with AI tools and AI UX patterns for maybe two years, maximum. (Caveat: the science of AI has been around for many decades, but using these new tools and techniques, and developing UX patterns for interacting with them, is wholly new.)

It’s obvious that AI is changing not only the design industry, but nearly all industries. The transformation is having secondary effects on the job market, especially for entry-level talent like young designers.

The AI revolution mirrors the previous shifts in our industry, but with a crucial difference: it’s bigger and faster. Unlike the decade-long transitions from paste-up to desktop publishing and from print to web, AI’s impact is compressing adaptation timelines into months rather than years. For today’s design graduates facing the harsh reality documented in Part I and Part II—where entry-level positions have virtually disappeared and traditional apprenticeship pathways have been severed—understanding this historical context isn’t just academic. It’s reality for them. For some, adaptation is possible but requires deliberate strategy. The designers who will thrive aren’t necessarily those with the most polished portfolios or prestigious degrees, but those who can read the moment, position themselves strategically, and create their own pathways into an industry in tremendous flux.

As a designer who is entering the workforce, here are five practical strategies you can employ right now to increase your odds of landing a job in this market:

  1. Leverage AI literacy as a competitive differentiator
  2. Emphasize strategic thinking and systems thinking
  3. Become a “dangerous generalist”
  4. Explore alternative pathways and flexibility
  5. Connect with community

1. AI Literacy as a Competitive Differentiator

Young designer orchestrating multiple AI tools on screens, with floating platform icons representing various AI tools.

Just like how Leah Ray, a recent graphic design MFA graduate from CCA, has deeply incorporated AI into her workflow, you have to get comfortable with some of the tools. (See her story in Part II for more context.)

Be proficient in the following categories of AI tools:

  • Chatbot: Choose ChatGPT, Claude, or Gemini. Learn about how to write prompts. You can even use the chatbot to learn how to write prompts! Use it as a creative partner to bounce ideas off of and to do some initial research for you.
  • Image generator: Adobe Firefly, DALL-E, Gemini, Midjourney, or Visual Electric. Learn how to use at least one of these, but more importantly, figure out how to get consistently good results from these generators.
  • Website/web app generator: Figma Make, Lovable, or v0. Especially if you’re in an interaction design field, you’ll need to be proficient in an AI prompt-to-code tool.

Add these skills to your resume and LinkedIn profile. Share your experiments on social media. 

But being AI-literate goes beyond just the tools. It’s also about wielding AI as a design material. Here’s the good part: by getting proficient in the tools, you’re also learning about the UX patterns for AI and learning what is possible with AI technologies like LLMs, agents, and diffusion models.

I’ve linked to a number of articles about designing for AI use cases:

Have a basic understanding of the following:

Be sure to add at least one case study in your portfolio that incorporates an AI feature.

2. Strategic Thinking and Systems Thinking

Designer pointing at an interconnected web diagram showing how design decisions create ripple effects through business systems.

Stunts like AI CEOs notwithstanding, companies don’t trust AI enough to cede strategy to it. LLMs are notoriously bad at longer tasks that contain multiple steps. So thinking about strategy and how to create a coherent system are still very much human activities.

Systems thinking—the ability to understand how different parts of a system interact and how changes in one component can create cascading effects throughout the entire system—is becoming essential for tech careers and especially designers. The World Economic Forum’s Future of Jobs Report 2025 identifies it as one of the critical skills alongside AI and big data. 

Modern technology is incredibly interconnected. AI can optimize individual elements, but it can’t see the bigger picture—how a pricing change affects user retention, how a new feature impacts server costs, or why your B2B customers need different onboarding than consumers. 

Early-career lawyers at the firm Macfarlanes are now interpreting complex contracts that used to be reserved for more senior colleagues. While AI can extract key info from contracts and flag potential issues, humans are still needed to understand the context, implications, and strategic considerations. 

Emphasize these skills in your case studies by presenting clear, logical arguments that lead to strategic insights and systemic solutions. Frame every project through a business lens. Show how your design decisions ladder up to company, brand, or product metrics. Include the downstream effects—not just the immediate impact.

3. The “Dangerous Generalist” Advantage

Multi-armed designer like an octopus, each arm holding different design tools including research, strategy, prototypes, and presentations.

Josh Silverman, professor at CCA and also a design coach and recruiter, has an idea he calls the “dangerous generalist.” This is the unicorn designer who can “do the research, the strategy, the prototyping, the visual design, the presentation, and the storytelling; and be a leader and make a measurable impact.” 

It’s a lot and seemingly unfair to expect that out of one person, but for a young and hungry designer with the right training and ambition, I think it’s possible. Other than leadership and making a measurable impact, all of those traits would have been practiced and honed at a good design program.

Be sure to have a variety of projects in your portfolio to showcase how you can do it all.

4. Alternative Pathways and Flexibility

Designer navigating a maze of career paths with signposts directing to startups, nonprofits, UI developer, and product manager roles.

Matt Ström-Awn, in an excellent piece about the product design talent crisis published last Thursday, did some research and says that in “over 600 product design listings, only 1% were for internships, and only 5% required 2 years or less of experience.”

Those are some dismal numbers for anyone trying to get a full-time job with little design experience. So you have to try creative ways of breaking into the industry. In other words, don’t get stuck on only applying for junior-level jobs on LinkedIn. Do that but do more.

Let’s break this down to type of company and type of role.

Types of Companies

Historically, I would have always recommended any new designer to go to an agency first because they usually have the infrastructure to mentor entry-level workers. But, as those jobs have dried up, consider these types of companies.

  • Early-stage startups: Look for seed-stage or Series A startups. Companies who have just raised their Series A will make a big announcement, so they should be easy to target. Note that you will often be the only designer in the company, so you’ll be doing a lot of learning on the job. If this happens, remember to find community (see below).
  • Non-tech businesses: Legacy industries might be a lot slower to think about AI, much less adopt it. Focus on sectors where human touch, tradition, regulations, or analog processes dominate. These fields need design expertise, especially as many are just starting to modernize and may require digital transformation, improved branding, or modernized UX.
  • Nonprofits: With limited budgets and small teams, nonprofits and not-for-profits could be great places to work. While they tend to try to DIY everything, they will also recognize the need for designers. Organizations that invest in design are 50% more likely to see increases in fundraising revenue.

Type of Roles

In his post for UX Collective, Patrick Morgan says, “Sometimes the smartest move isn’t aiming straight for a ‘product designer’ title, but stepping into a role where you can stay close to product and grow into the craft.”

In other words, look for adjacent roles at the company you want to work for, just to get your foot in the door.

Here are some of those roles—includes ones from Morgan’s list. What is appropriate for you will depend heavily on your skill sets and the type of design you want to eventually practice.

  • UI developer or front-end engineer: If you’re technically-minded, front-end developers are still sought after, though maybe not as much as before because, you know, AI. But if you’re able to snag a spot as one, it’s a way in.
  • Product manager: There is no single path to becoming a product manager. It’s a lot of the same skills a good designer should have, but with even more focus on creating strategies that come from customer insights (aka research). I’ve seen designers move into PM roles pretty easily.
  • Graphic/visual/growth/marketing designer: Again, depending on your design focus, you could already be looking for these jobs. But if you’re in UX and you see one of these roles open up, it’s another way into a company.
  • Production artist: These roles are likely slowly disappearing as well. This is usually a role at an agency or a larger company that just does design execution.
  • Freelancer: You may already be doing this, but you can freelance. Not all companies, especially small ones, can afford a full-time designer, so they rely on freelance help. Try your hand at Upwork to build up your portfolio. Ensure that you’re charging a price that seems fair to you and that will help pay your bills.
  • Executive assistant: While this might seem odd, this is a good way to learn about a company and to show your resourcefulness. Lots of EAs are responsible for putting together events, swag, and more. Eventually, you might be able to parlay this role into a design role.
  • Intern: Internships are rare, I know. And if you haven’t done one, you should. However, ensure that the company complies with local regulations about paying interns. For example, California has strict laws about paying interns at least minimum wage. Unpaid internships are legal only if the role meets a litany of criteria, including:
    • The internship is primarily educational (similar to a school or training program).
    • The intern is the “primary beneficiary,” not the company.
    • The internship does not replace paid employees or provide substantial benefit to the employer.

5. Connecting with Community

Diverse designers in a supportive network circle, connected both in-person and digitally, with glowing threads showing mentorship relationships.

The job search is isolating. Especially now.

Josh Silverman emphasizes something often overlooked: you’re already part of communities. “Consider all the communities you identify with, as well as all the identities that are a part of you,” he points out. Think beyond LinkedIn—way beyond.

Did you volunteer at a design conference? Help a nonprofit with their rebrand? Those connections matter. Silverman suggests reaching out to three to five people—not hiring managers, but people who understand your work. Former classmates who graduated ahead of you. Designers you met at meetups. Workshop leaders.

“Whether it’s a casual coffee chat or slightly more informal informational interview, there are people who would welcome seeing your name pop up on their screen.”

These conversations aren’t always about immediate job leads. They’re about understanding where the industry’s actually heading, which companies are genuinely hiring, and what skills truly matter versus what’s in job descriptions. As Silverman notes, it’s about creating space to listen and articulate what you need—“nurturing relationships in community will have longer-term benefits.”

In practice: Join alumni Slack channels, participate in local AIGA events, contribute to open-source projects, engage in design challenges. The designers landing jobs aren’t just those with perfect portfolios. They’re the ones who stay visible.

The Path Forward Requires Adaptation, Not Despair

My 12-year-old self would be astonished at what the world is today and how this profession has evolved. I’ve been through three revolutions. Traditional to desktop publishing. Print to web. And now, human-only design to AI-augmented design.

Here’s what I know: the designers who survived those transitions weren’t necessarily the most talented. They were the most adaptable. They read the moment, learned the tools, and—crucially—didn’t wait for permission to reinvent themselves.

This transition is different. It’s faster and much more brutal to entry-level designers.

But you have advantages my generation didn’t. AI tools are accessible in ways that PageMaker and HTML never were. We had to learn through books! We learned by copying. We learned by taking weeks to craft projects. You can chat with Lovable and prompt your way to a portfolio-worthy project over a weekend. You can generate production-ready assets with Midjourney before lunch. You can prototype and test five different design directions while your coffee’s still warm.

The traditional path—degree, internship, junior role, slow climb up the ladder—is broken. Maybe permanently. But that also means the floor is being raised. You should be doing more strategic, more meaningful work earlier in your career.

But you need to be dangerous, versatile, and visible. 

The companies that will hire you might not be the ones you dreamed about in design school. The role might not have “designer” in the title. Your first year might be messier than you planned.

That’s OK. Every designer I respect has a messy and unlikely origin story.

The industry will stabilize because it always does. New expectations will emerge, new roles will be created, and yes—companies will realize they still need human designers who understand context, culture, and why that button should definitely not be bright purple.

Until then? Be the designer who ships. Who shows up. Who adapts.

The machines can’t do that. Yet.


I hope you enjoyed this series. I think it’s an important topic to discuss in our industry right now, before it’s too late. Don’t forget to read about the five grads and five educators I interviewed for the series. Please reach out if you have any comments, positive or negative. I’d love to hear them.

My former colleague from Organic, Christian Haas—now ECD at YouTube—has been experimenting with AI video generation recently. He’s made a trilogy of short films called AI Jobs.


You can watch part one above 👆, but don’t sleep on parts two and three.

Haas put together a “behind the scenes” article explaining his process. It’s fascinating, and I want to play with video generation myself at some point.

I started with a rough script, but that was just the beginning of a conversation. As I started generating images, I was casting my characters and scouting locations in real time. What the model produced would inspire new ideas, and I would rewrite the script on the fly. This iterative loop continued through every stage. Decisions weren’t locked in; they were fluid. A discovery made during the edit could send me right back to “production” to scout a new location, cast a new character and generate a new shot. This flexibility is one of the most powerful aspects of creating with Gen AI.

It’s a wonderful observation Haas has made—the workflow enabled by gen AI allows for more creative freedom. In any creative endeavor where producing the final thing is involved and requires significant labor and materials, be it a film, commercial photography, or software, planning is a huge part of the work. We work hard to spec out everything before a crew of a hundred shows up on set or a team of highly paid engineers starts coding. With gen AI, as shown here with Google’s Veo 3, you have more room for exploration and expression.

UPDATE: I came across this post from Rory Flynn after I published this. He uses diagrams to direct Veo 3.


Behind the Prompts — The Making of "AI Jobs"

Christian Haas created the first film with the simple goal of learning to use the tools. He didn’t know if it would yield anything worth watching but that was not the point.

linkedin.com
Portraits of five recent design graduates. From top left to right: Ashton Landis, wearing a black sleeveless top with long blonde hair against a dark background; Erika Kim, outdoors in front of a mountain at sunset, smiling in a fleece-collared jacket; Emma Haines, smiling and looking over her shoulder in a light blazer, outdoors; Bottom row, left to right: Leah Ray, in a black-and-white portrait wearing a black turtleneck, looking ahead, Benedict Allen, smiling in a black jacket with layered necklaces against a light background

Meet the 5 Recent Design Grads and 5 Design Educators

For my series on the Design Talent Crisis (see Part I, Part II, and Part III) I interviewed five recent graduates from California College of the Arts (CCA) and San Diego City College. I’m an alum of CCA and I used to teach at SDCC. There’s a mix of folks from both the graphic design and interaction design disciplines.

Meet the Grads

If these enthusiastic and immensely talented designers are available and you’re in a position to hire, please reach out to them!

Benedict Allen

Benedict Allen, smiling in a black jacket with layered necklaces against a light background

Benedict Allen is a Los Angeles-based visual designer who specializes in creating compelling visual identities at the intersection of design, culture, and storytelling. With a strong background in apparel graphics and branding, Benedict brings experience from his freelance work for The Hunt and Company—designing for a major automotive YouTuber’s clothing line—and an internship at Pureboost Energy Drink Mix. He is skilled in a range of creative tools including Photoshop, Illustrator, Figma, and AI image generation. Benedict’s approach is rooted in history and narrative, resulting in clever and resonant design solutions. He holds an Associate of Arts in Graphic Design from San Diego City College and has contributed to the design community through volunteer work with AIGA San Diego Tijuana.

Emma Haines

Emma Haines, smiling and looking over her shoulder in a light blazer, outdoors

Emma Haines is a UX and interaction designer with a background in computer science, currently completing her MDes in Interaction Design at California College of the Arts. She brings technical expertise and a passion for human-centered design to her work, with hands-on experience in integrating AI into both the design process and user-facing projects. Emma has held roles at Mphasis, Blink UX, and Colorado State University, and is now seeking full-time opportunities where she can apply her skills in UX, UI, or product design, particularly in collaborative, fast-paced environments.

Erika Kim

Erika Kim, outdoors in front of a mountain at sunset, smiling in a fleece-collared jacket

Erika Kim is a passionate UI/UX and product designer based in Poway, California, with a strong foundation in both visual communication and thoughtful problem-solving. A recent graduate of San Diego City College’s Interaction & Graphic Design program, Erika has gained hands-on experience through internships at TritonNav, Four Fin Creative, and My Rental Spot, as well as a year at Apple in operations and customer service roles. Her work has earned her recognition, including a Gold Winner award at The One Club Student Awards for her project “Gatcha Eats.” Erika’s approach to design emphasizes clarity, collaboration, and the power of well-crafted wayfinding—a passion inspired by her fascination with city and airport signage. She is fluent in English and Korean, and is currently open to new opportunities in user experience and product design.

Ashton Landis

Ashton Landis, wearing a black sleeveless top with long blonde hair against a dark background

Ashton Landis is a San Francisco-based graphic designer with a passion for branding, typography, and visual storytelling. A recent graduate of California College of the Arts with a BFA in Graphic Design and a minor in ecological practices, Ashton has developed expertise across branding, UI/UX, design strategy, environmental graphics, and more. She brings a people-centered approach to her work, drawing on her background in photography to create impactful and engaging design solutions. Ashton’s experience includes collaborating with Bay Area non-profits to build participatory identity systems and improve community engagement. She is now seeking new opportunities to grow and help brands make a meaningful impact.

Leah Ray

Leah Ray, in a black-and-white portrait wearing a black turtleneck, looking ahead

Leah (Xiayi Lei) Ray is a Beijing-based visual designer currently working at Kuaishou Technology, with a strong background in impactful graphic design that blends logic and creativity. She holds an MFA in Design and Visual Communications from California College of the Arts, where she also contributed as a teaching assistant and poster designer. Leah’s experience spans freelance work in branding, identity, and book cover design, as well as roles in UI/UX and visual development at various companies. She is fluent in English and Mandarin, passionate about education, arts, and culture, and is recognized for her thoughtful, novel approach to design.

Meet the Educators

Sean Bacon

Sean Bacon, smiling in a light button-down against a blue-gray background

Sean Bacon is a professor, passionate designer, and obsessive typophile at San Diego City College, where he teaches a wide range of classes and helps direct and manage the graphic design program and its administrative responsibilities. He always strives to bring excellence to his students’ work, and his wealth of experience and insight has helped produce many of the award-winning portfolios to come out of City. He has worked for The Daily Aztec, Jonathan Segal Architecture, Parallax Visual Communication, and Silent Partner. He attended San Diego City College and San Diego State, and completed his master’s at Savannah College of Art and Design.

Eric Heiman

Eric Heiman, in profile wearing a flat cap and glasses, black and white photo

Eric Heiman is principal and co-founder of the award-winning, oft-exhibited design studio Volume Inc. He also teaches at California College of the Arts (CCA) where he currently manages TBD*, a student-staffed design studio creating work to help local Bay Area nonprofits and civic institutions. Eric also writes about design every so often, has curated one film festival, occasionally podcasts about classic literature, and was recently made an AIGA Fellow for his contribution to raising the standards of excellence in practice and conduct within the Bay Area design community. 

Elena Pacenti

Portrait of Elena Pacenti, smiling with long blonde hair, wearing a black top, in soft natural light.

Elena Pacenti is a seasoned design expert with over thirty years of experience in design education, research, and international projects. Currently the Director of the MDes Interaction Design program at California College of the Arts, she has previously held leadership roles at NewSchool of Architecture & Design and Domus Academy, focusing on curriculum development, faculty management, and strategic planning. Elena holds a PhD in Industrial Design and a Master’s in Architecture from Politecnico di Milano, and is recognized for her expertise in service design, strategic design, and user experience. She is passionate about leading innovative, complex projects where design plays a central role.

Bradford Prairie

Bradford Prairie, smiling in a jacket and button-down against a soft purple background

Bradford Prairie has been teaching at San Diego City College for nine years, starting as an adjunct instructor while working as a professional designer and creative director at Ignyte, a leading branding agency. What made his transition unique was Ignyte’s support for his educational aspirations: the agency understood his desire to prioritize teaching and eventually move into it full-time. This dual background in industry and academia allows him to bring real-world expertise into the classroom while maintaining his creative practice.

Josh Silverman

Josh Silverman, smiling in a striped shirt against a dark background

For three decades, Josh Silverman has built bridges between entrepreneurship, design education, and designers—always focused on helping people find purpose and opportunity. As founder of PeopleWork Partners, he brings a humane design lens to recruiting and leadership coaching, placing emerging leaders at companies like Target, Netflix, and OpenAI, and advising design teams on critique, culture, and operations. He has chaired the MDes program at California College of the Arts, taught and spoken worldwide, and led AIGA chapters. Earlier, he founded Schwadesign, a lean, holacratic studio recognized by The Wall Street Journal and others. His clients span startups, global enterprises, top universities, cities, and non-profits. Josh is endlessly curious about how teams make decisions and what makes them thrive—and is always up for a long bike ride.
