89 posts tagged with “ai”

Chatboxes have become the uber box for all things AI. The persistent criticism of this blank box is the cold-start problem: new users don't know what to type. Designers shipping these products have mostly gotten around it by offering suggested prompts that teach users about the possibilities.

The issue on the other end is that expert users end up creating their own library of prompts to copy and paste into the chatbox for repetitive tasks.

Sharang Sharma, writing in UX Collective, illustrates how these UIs can be smarter by predicting intent:

Contrary, Predictive UX points to an alternate approach. Instead of waiting for users to articulate every step, systems can anticipate intent based on behavior or common patterns as the user types. Apple Reminders suggests likely tasks as you type. Grammarly predicts errors and offers corrections inline. Gmail’s Smart Compose even predicts full phrases, reducing the friction of drafting entirely.

Sharma says that the goal of predictive UX is to “reduce time-to-value and reframe AI as an adaptive partner that anticipates user’s intent as you type.”

Imagine a little widget that appears within the chatbox as you type. Kind of a cool idea.
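To make the idea concrete, here's a minimal sketch of how such a widget might rank suggestions as the user types. The prompt library and the ranking heuristic are my own invention for illustration; a real predictive system, as Sharma describes, would infer patterns from behavior rather than a hard-coded list.

```python
# Hypothetical library of common prompt patterns. In a real system this would
# be learned from usage data, not hard-coded.
COMMON_PROMPTS = [
    "summarize this document",
    "summarize this email thread",
    "draft a reply to this email",
    "translate this into Spanish",
    "explain this code",
]

def suggest(partial: str, limit: int = 3) -> list[str]:
    """Return up to `limit` likely completions for the text typed so far."""
    needle = partial.lower().strip()
    if not needle:
        return []
    # Rank prefix matches ahead of mid-string matches.
    prefix = [p for p in COMMON_PROMPTS if p.startswith(needle)]
    inside = [p for p in COMMON_PROMPTS if needle in p and not p.startswith(needle)]
    return (prefix + inside)[:limit]
```

Even this toy version shows the shape of the interaction: every keystroke narrows the candidate intents, and the widget surfaces them before the user finishes articulating the request.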


How can AI UI capture intent?

Exploring contextual prompt patterns that capture user intent as it is typed

uxdesign.cc

Thinking about this morning’s link about web forms: if you abstract away why that idea is so powerful, you arrive at the point of human-computer interaction. The computer should do what the user intends, not what the buttons they push literally say.

Matt Webb reminds us of DWIM, the “Do What I Mean” philosophy in computing coined by Warren Teitelman in 1966. Webb quotes computer scientist Larry Masinter:

DWIM is an embodiment of the idea that the user is interacting with an agent who attempts to interpret the user’s request from contextual information. Since we want the user to feel that he is conversing with the system, he should not be stopped and forced to correct himself or give additional information in situations where the correction or information is obvious.

Webb goes on to say:

Squint and you can see ChatGPT as a DWIM UI: it never, never, never says “syntax error.”

Now, arguably it should come back and ask for clarifications more often, and in particular DWIM (and AI) interfaces are more successful the more they have access to the user’s context (current situation, history, environment, etc).

But it’s a starting point. The algo is: design for capturing intent and then DWIM; iterate until that works. AI unlocks that.

The destination for AI interfaces is Do What I Mean

Posted on Friday 29 Aug 2025. 840 words, 10 links. By Matt Webb.

interconnected.org

Forms are one of the fundamental things we make users do in software. Whether it’s a login screen, a billing address form, or a mortgage application, forms are the main method for getting data from users into computer-accessible databases. The human decides which piece of information goes into which column in the database. With AI, form filling should be much simpler.

Luke Wroblewski makes the argument:

With Web forms, the burden is on people to adapt to databases. Today's AI models, however, can flip this requirement. That is, they allow people to provide information in whatever form they like and use AI [to] do the work necessary to put that information into the right structure for a database.

How can it work?

With AgentDB connected to an AI model (via an MCP server), a person can simply say "add this" and provide an image, PDF, audio, video, you name it. The model will use AgentDB's template to decide what information to extract from this unstructured input and how to format it for the database. In the case where something is missing or incomplete, the model can ask for clarification or use tools (like search) to find possible answers.
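As a rough illustration of that loop, here's a sketch in Python. The `extract` function below is a toy keyword matcher standing in for the model, and the schema is invented; AgentDB's actual template-driven extraction over MCP works differently. What the sketch captures is the flow Wroblewski describes: structure the input against a schema, then turn anything missing into a clarifying question.

```python
# Hypothetical schema; in the real system, AgentDB's template would define this.
SCHEMA = ["name", "email", "city"]

def extract(text: str) -> dict:
    """Toy stand-in for model extraction: pull 'field: value' pairs from free text."""
    record = {}
    for line in text.replace(",", "\n").splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            key = key.strip().lower()
            if key in SCHEMA:
                record[key] = value.strip()
    return record

def ingest(text: str) -> tuple[dict, list[str]]:
    """Return the structured record plus clarifying questions for missing fields."""
    record = extract(text)
    questions = [f"What is the {field}?" for field in SCHEMA if field not in record]
    return record, questions
```

The point is the division of labor: the user supplies information in whatever shape is convenient, and the system, not the person, is responsible for fitting it to the database and noticing the gaps.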

Unstructured Input in AI Apps Instead of Web Forms

Web forms exist to put information from people into databases. The input fields and formatting rules in online forms are there to make sure the information fits...

lukew.com

I believe purity tests of any sort are problematic. And it’s much too easy to throw around the “This is AI slop!” claim. AI was used in the main title sequence for the Marvel TV show Secret Invasion. But it was on purpose and aligned with the show’s themes of shapeshifters.

Anyway, Daniel John, writing in Creative Bloq:

[Lady] Gaga just dropped the music video for The Dead Dance, a song debuted in Season 2 of Netflix's Wednesday. Directed by Tim Burton, it's a suitably nightmarish black-and-white cacophony of monsters and dolls. But some are already claiming that parts of it were made using AI.

John shows a tweet from @graveyardquy as an example:

i didn’t think we’d ever be in a timeline where a tim burton x lady gaga collab would turn out to be AI slop… but here we are

We need to separate quality critiques from tool usage. If it looks good and is appropriate, I’m fine with CG, AI, and whatever comes next that helps tell the story. Same goes for what we do as designers, ’natch.

Gaga’s song is great. It’s a bop, as the kids say, with a neat music video to boot.


The Lady Gaga backlash proves AI paranoia has gone too far

Just because it looks odd, doesn't mean it's AI.

creativebloq.com

Josh Miller, CEO, and Hursh Agrawal, CTO, of The Browser Company:

Today, The Browser Company of New York is entering into an agreement to be acquired by Atlassian in an all-cash transaction. We will operate independently, with Dia as our focus. Our objective is to bring Dia to the masses.

Super interesting acquisition here. There is zero overlap as far as I can tell; Atlassian’s move is out of left field. Dia’s early users were college students, and The Browser Company more recently opened it up to former Arc users. Is this a bet by Atlassian, the company behind tech-company-focused products like Jira and Confluence, on the future of work and collaboration? Is this their first move against Salesforce? 🤔


Your Tuesday in 2030

Or why The Browser Company is being acquired to bring Dia to the masses.

open.substack.com
Conceptual 3D illustration of stacked digital notebooks with a pen on top, overlaid on colorful computer code patterns.

Why We Still Need a HyperCard for the AI Era

I rewatched the 1982 film TRON for the umpteenth time the other night with my wife. I have always credited this movie as the spark that got me interested in computers. Mind you, I was nine years old when this film came out. I was so excited after watching the movie that I got my father to buy us a home computer—the mighty Atari 400 (note sarcasm). I remember an educational game that came on cassette called “States & Capitals” that taught me, well, the states and their capitals. It also introduced me to BASIC, and after watching TRON, I wanted to write programs!

Vintage advertisement for the Atari 400 home computer, featuring the system with its membrane keyboard and bold headline “Introducing Atari 400.”

The Atari 400’s membrane keyboard was easy to wipe down, but terrible for typing. It also reminded me of fast food restaurant registers of the time.

Back in the early days of computing—the 1960s and ’70s—there was no distinction between users and programmers. Computer users wrote programs to do stuff for them. Hence the close relationship between the two that’s depicted in TRON. The programs in the digital world resembled their creators because they were extensions of them. Tron, the security program that Bruce Boxleitner’s character Alan Bradley wrote, looks like its creator. Clu looks like Kevin Flynn, played by Jeff Bridges. Early in the film, a compound interest program that has been captured by the MCP’s goons says to a cellmate, “if I don’t have a User, then who wrote me?”

Scene from the 1982 movie TRON showing programs in glowing blue suits standing in a digital arena.

The programs in TRON looked like their users. Unless the user was the program, which was the case with Kevin Flynn (Jeff Bridges), third from left.

Interesting piece from Vaughn Tan about a critical thinking framework that is disguised as a piece about building better AI UIs for critical thinking. Sorry, that sentence is kind of a tongue-twister. Tan calls out—correctly—that LLMs don’t think, or in his words, can’t make meaning:

Meaningmaking is making inherently subjective decisions about what’s valuable: what’s desirable or undesirable, what’s right or wrong. The machines behind the prompt box are remarkable tools, but they’re not meaningmaking entities.

Therefore, when users ask LLMs for their opinions (as in the therapy use case, for example), the AIs won’t come back with actual thinking. IMHO, it’s semantics, but that’s another post.

Anyhow, Tan shares a pen-and-paper prototype he’s been testing, which breaks a major decision down into guided steps, or, put another way, a framework.

This user experience was designed to simulate a multi-stage process of structured elicitation of various aspects of strongly reasoned arguments. This design explicitly addresses both requirements for good tool use. The structured prompts helped students think critically about what they were actually trying to accomplish with their custom major proposals — the meaningmaking work of determining value, worth, and personal fit. Simultaneously, the framework made clear what kinds of thinking work the students needed to do themselves versus what kinds of information gathering and analysis could potentially be supported by tools like LLMs.

This guided, framework-driven approach was something I attempted with Griffin AI. Via a series of AI-guided prompts to the user—or a glorified form, honestly—my tool helped users build brand strategies. To be sure, a lot of the “thinking” was done by the model, but the idea that an AI can guide you to think critically about your business or your client’s business was there.


Designing AI tools that support critical thinking

Current AI interfaces lull us into thinking we’re talking to something that can make meaningful judgments about what’s valuable. We’re not — we’re using tools that are tremendously powerful but nonetheless can’t do “meaningmaking” work (the work of deciding what matters, what’s worth pursuing).

vaughntan.org

Designer Tey Bannerman writes that when he hears “human in the loop,” he’s reminded of the story of Lieutenant Colonel Stanislav Petrov, a Soviet duty officer who monitored for incoming missile strikes from the US.

12:15 AM… the unthinkable. Every alarm in the facility started screaming. The screens showed five US ballistic missiles, 28 minutes from impact. Confidence level: 100%. Petrov had minutes to decide whether to trigger a chain reaction that would start nuclear war and could very well end civilisation as we knew it.

He was the “human in the loop” in the most literal, terrifying sense.

Everything told him to follow protocol. His training. His commanders. The computers.

But something felt wrong. His intuition, built from years of intelligence work, whispered that this didn’t match what he knew about US strategic thinking.

Against every protocol, against the screaming certainty of technology, he pressed the button marked “false alarm”.

Twenty-three minutes of gripping fear passed before ground radar confirmed: no missiles. The system had mistaken a rare alignment of sunlight on high-altitude clouds for incoming warheads.

His decision to break the loop prevented nuclear war.

Then Bannerman shares an awesome framework he developed that gives humans in the loop of AI systems “genuine authority, time to think, and understanding the bigger picture well enough to question” the system’s decisions. Click through to get the PDF from his site.
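The core of the framework is a matrix: cross what you're optimizing for with what's at stake, and each cell lands on an oversight mode. Here's a sketch of that idea in Python. Note that the specific cell-to-mode assignments below are my own guesses for illustration only; the real mapping is in Bannerman's PDF.

```python
# Bannerman's two axes, per his diagram. Ordered most to least severe.
STAKES = [
    "irreversible consequences",
    "high-impact failures",
    "recoverable setbacks",
    "low-stakes outcomes",
]

def oversight_mode(optimizing_for: str, at_stake: str) -> str:
    """Pick one of Bannerman's four modes. Assignments here are illustrative guesses."""
    severity = STAKES.index(at_stake)  # 0 = most severe
    if severity == 0:
        return "active control"      # humans decide; AI advises
    if severity == 1:
        return "human augmentation"  # AI drafts; humans approve
    if severity == 2 and optimizing_for in ("speed/volume", "compliance"):
        return "guided automation"   # AI acts within guardrails
    return "AI autonomy"             # AI acts; humans audit
```

Even as a guess, the shape is instructive: the human's role is not a checkbox but a function of consequences, which is exactly what Petrov's story illustrates.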

Framework diagram by Tey Bannerman titled Beyond ‘human in the loop’. It shows a 4×4 matrix mapping AI oversight approaches based on what is being optimized (speed/volume, quality/accuracy, compliance, innovation) and what’s at stake (irreversible consequences, high-impact failures, recoverable setbacks, low-stakes outcomes). Colored blocks represent four modes: active control, human augmentation, guided automation, and AI autonomy. Right panel gives real-world examples in e-commerce email marketing and recruitment applicant screening.

Redefining ‘human in the loop’

"Human in the loop" is overused and vague. The Petrov story shows humans must have real authority, time, and context to safely override AI. Bannerman offers a framework that asks what you optimize for and what is at stake, then maps 16 practical approaches.

teybannerman.com

Simon Sherwood, writing in The Register:

Amazon Web Services CEO Matt Garman has suggested [that] firing junior workers because AI can do their jobs is "the dumbest thing I've ever heard."

Garman made that remark in conversation with AI investor Matthew Berman, during which he talked up AWS’s Kiro AI-assisted coding tool and said he's encountered business leaders who think AI tools "can replace all of our junior people in our company."

That notion led to the “dumbest thing I've ever heard” quote, followed by a justification that junior staff are “probably the least expensive employees you have” and also the most engaged with AI tools.

“How's that going to work when ten years in the future you have no one that has learned anything,” he asked. “My view is you absolutely want to keep hiring kids out of college and teaching them the right ways to go build software and decompose problems and think about it, just as much as you ever have.”

Yup. I agree.


AWS CEO says AI replacing junior staff is 'dumbest idea'

They're cheap and grew up with AI … so you're firing them why?

theregister.com

This post from Carly Ayres breaks down a beef between Michael Roberson (developer of an AI-enabled moodboard tool) and Elizabeth Goodspeed (writer and designer, oft-linked on this blog) and explores ragebait, putting in the reps as a junior, and designers as influencers.

The tweet earned 30,000 views, but only about 20 likes. “That ratio was pretty jarring,” [Roberson] said. Still, the strategy felt legible. “When I post things like, ‘if you don’t do X, you’re not going to make it,’ obviously, I don’t think that. These tools aren’t really capable of replacing designers just yet. It’s really easy to get views baiting and fear-mongering.”

Much like the provocative Artisan campaign, I think this is a net negative for the brand. Pretty sure I won’t be trying out Moodboard AI anytime soon, ngl.

But stepping back from the internet beef, Ayres argues that it’s a philosophical difference about the role of friction in the creative process.

Michael’s experience mirrors that of many young designers: brand audits felt like busywork during his Landor internship. “That process was super boring,” he told me. “I wasn’t learning much by copy-pasting things into a deck.” His tool promises to cut through that inefficiency, letting teams reach visual consensus faster and spend more time on execution.

Young Michael, the process is the point! Without doing this boring stuff, by automating it with AI, how are you going to learn? This is but one facet of the whole discussion around expertise, wisdom, and the design talent crisis.

Goodspeed agrees with me:

Elizabeth sees it differently. “What’s interesting to me,” Elizabeth noted, “is how many people are now entering this space without a personal understanding of how the process of designing something actually works.” For her, that grunt work was formative. “The friction is the process,” she explained. “That’s how you form your point of view. You can’t just slap seven images on a board. You’re forced to think: What’s relevant? How do I organize this and communicate it clearly?”

Ultimately, the saddest point that Ayres makes—and noted by my friend Eric Heiman—is this:

When you’re young, online, and trying to get a project off the ground, caring about distribution is the difference between a hobby and a company. But there’s a cost. The more you perform expertise, the less you develop it. The more you optimize for engagement, the more you risk flattening what gave the work meaning in the first place. In a world where being known matters more than knowing, the incentives point toward performance over practice. And we all become performers in someone else’s growth strategy.

…Because when distribution matters more than craft, you don’t become a designer by designing. You become a designer by being known as one. That’s the game now.

Mooooooooooooooood

Is design discourse the new growth hack?

open.substack.com
Surreal black-and-white artwork of a glowing spiral galaxy dripping paint-like streaks over a city skyline at night.

Why I’m Keeping My Design Title

In the 2011 documentary Jiro Dreams of Sushi, then-85-year-old sushi master Jiro Ono says this about craft:

Once you decide on your occupation… you must immerse yourself in your work. You have to fall in love with your work. Never complain about your job. You must dedicate your life to mastering your skill. That’s the secret of success and is the key to being regarded honorably.

Craft is typically thought of as the formal aspects of any field such as design, woodworking, writing, or cooking. In design, we think about composition, spacing, and typography—being pixel-perfect. But one’s craft is much more than that. Ono’s sushi craft is not solely about slicing fish and pressing it against a bit of rice. It is also about picking the right fish, toasting the nori just so, cooking the rice perfectly, and running a restaurant. It’s the whole thing.

Therefore, mastering design—or any occupation—takes time and experience, or “reps,” as the kids say. So it’s to my dismay that Suff Syed’s essay “Why I’m Giving Up My Design Title — And What That Says About the Future of Design” got so much play in recent weeks. Syed is Head of Product Design at Microsoft—er, was. I guess his title is now Member of the Technical Staff. In a perfectly well-argued and well-written essay, he concludes:

As a follow-up to yesterday’s item on how Google’s AI overviews are curtailing traffic to websites by as much as 25%, here is a link to Nielsen Norman Group’s just-published study showing that generative AI is reshaping search.

While AI offers compelling shortcuts around tedious research tasks, it isn’t close to completely replacing traditional search. But, even when people are using traditional search, the AI-generated overview that now tops almost all search-results pages steals a significant amount of attention and often shortcuts the need to visit the actual pages.

They write that users have developed a way to search over the years, skipping sponsored results and heading straight for the organic links. Users also haven’t completely broken free of traditional Google Search, now adding chatbots to the mix:

While generative AI does offer enough value to change user behaviors, it has not replaced traditional search entirely. Traditional search and AI chats were often used in tandem to explore the same topic and were sometimes used to fact-check each other.

All our participants engaged in traditional search (using keywords, evaluating results pages, visiting content pages, etc.) multiple times in the study. Nobody relied entirely on genAI’s responses (in chat or in an AI overview) for all their information-seeking needs.

In many ways, I think this is smart. Unless “web search” is happening, I tend to double-check ChatGPT and Claude, especially for anything historical or mission-critical. I also like Perplexity for that reason: it shows me its receipts by giving me sources.


How AI Is Changing Search Behaviors

Our study shows that generative AI is reshaping search, but long-standing habits persist. Many users still default to Google, giving Gemini a fighting chance.

nngroup.com

Jessica Davies reports that new publisher data suggests some sites are getting as much as 25% less traffic from Google than the previous year.

Writing in Digiday:

Organic search referral traffic from Google is declining broadly, with the majority of DCN member sites — spanning both news and entertainment — experiencing traffic losses from Google search between 1% and 25%. Twelve of the respondent companies were news brands, and seven were non-news.

Jason Kint, CEO of DCN, says that this is a “direct consequence of Google AI Overviews.”

I wrote previously about the changing economics of the web here, here, and here.

And related, Eric Mersch writes in a LinkedIn post that Monday.com’s stock fell 23% because co-CEO Roy Mann said, “We are seeing some softness in the market due to Google algorithm,” during their Q2 earnings call and the analysts just kept hammering him and the CFO about how the algo changes might affect customer acquisition.

Analysts continued to press the issue, which caught company management completely off guard. Matthew Bullock from Bank of America Merrill Lynch asked frankly, “And then help us understand, why call this out now? How did the influence of Google SEO disruption change this quarter versus 1Q, for example?” The CEO could only respond, “So look, I think like we said, we optimize in real-time. We just budget daily,” implying that they were not aware of the problem until they saw Q2 results.

This is the first public sign that the shift from Google to AI-powered searches is having an impact.

Google AI Overviews linked to 25% drop in publisher referral traffic, new data shows

The majority of Digital Content Next publisher members are seeing traffic losses from Google search between 1% and 25% due to AI Overviews.

digiday.com

Ben Davies-Romano argues that the AI chat box is our new design interface:

Every interaction with a large language model starts the same way: a blinking cursor in a blank text field. That unassuming box is more than an input — it’s the interface between our human intent and the model’s vast, probabilistic brain.

This is where the translation happens. We pour in the nuance, constraints, and context of our ideas; the model converts them into an output. Whether it’s generating words, an image, a video sequence, or an interactive prototype, every request passes through this narrow bridge.

It’s the highest-stakes, lowest-fidelity design surface I’ve ever worked with: a single field that stands between human creativity and an engine capable of reshaping it into almost any form, albeit with all the necessary guidance and expertise applied.

In other words, don't just say "Make it better," but guide the AI instead.

That’s why a vague, lazy prompt, like “make it better”, is the design equivalent of telling a junior designer “make it intuitive” and walking away. You’ll get something generic, safe, and soulless, not because the AI “missed the brief,” but because there was no brief.

Without clear stakes, a defined brand voice, and rich context, the system will fill in the blanks with its default, most average response. And “average” is rarely what design is aiming for.

And he makes a point that designers should be leading the charge on showing others what generative AI can do:

In the age of AI, it shouldn’t be everyone designing, per say. It should be designers using AI as an extension of our craft. Bringing our empathy, our user focus, our discipline of iteration, and our instinct for when to stop generating and start refining. AI is not a replacement for that process; it’s a multiplier when guided by skilled hands.

So, let’s lead. Let’s show that the real power of AI isn’t in what it can generate, but in how we guide it — making it safer, sharper, and more human. Let’s replace the fear and the gimmicks with clarity, empathy, and intentionality.

The blank prompt is our new canvas. And friends, we need to be all over it.

Prompting is designing. And designers need to lead.

Forget “prompt hacks.” Designers have the skills to turn AI from a gimmick into a powerful, human-centred tool if we take the lead.

medium.com

Yesterday, OpenAI launched GPT-5, their latest and greatest model, which replaces the confusing assortment of GPT-4o, o3, o4-mini, etc. with just two options: GPT-5 and GPT-5 Pro. Reasoning is built in, and the new model is smart enough to know when to think harder and when a quick answer suffices.

Simon Willison deep dives into GPT-5, exploring its mix of speed and deep reasoning, massive context limits, and competitive pricing. He sees it as a steady, reliable default for everyday work rather than a radical leap forward:

I’ve mainly explored full GPT-5. My verdict: it’s just good at stuff. It doesn’t feel like a dramatic leap ahead from other LLMs but it exudes competence—it rarely messes up, and frequently impresses me. I’ve found it to be a very sensible default for everything that I want to do. At no point have I found myself wanting to re-run a prompt against a different model to try and get a better result.

It's a long technical read but interesting nonetheless.


GPT-5: Key characteristics, pricing and model card

I’ve had preview access to the new GPT-5 model family for the past two weeks (see related video) and have been using GPT-5 as my daily-driver. It’s my new favorite …

simonwillison.net
Illustration of diverse designers collaborating around a table with laptops and design materials, rendered in a vibrant style with coral, yellow, and teal colors

Five Practical Strategies for Entry-Level Designers in the AI Era

In Part I of this series on the design talent crisis, I wrote about the struggles recent grads have had finding entry-level design jobs and what might be causing the stranglehold on the design job market. In Part II, I discussed how industry and education need to change in order to ensure the survival of the profession.

Part III: Adaptation Through Action 

Like most Gen X kids, I grew up with a lot of freedom to roam. By fifth grade, I was regularly out of the house. My friends and I would go to an arcade in San Francisco’s Fisherman’s Wharf called The Doghouse, where naturally, they served hot dogs alongside their Joust and TRON cabinets. But we would invariably go to the Taco Bell across the street for cheap pre-dinner eats. In seventh grade—this is 1986—I walked by a ComputerLand on Van Ness Avenue and noticed a little beige computer with a built-in black and white CRT. The Macintosh screen was actually pale blue and black, but more importantly, showed MacPaint. It was my first exposure to creating graphics on a computer, which would eventually become my career.

Desktop publishing had officially begun a year earlier with the introduction of Aldus PageMaker and the Apple LaserWriter printer for the Mac, which enabled WYSIWYG (What You See Is What You Get) page layouts and high-quality printed output. A generation of designers who had created layouts using paste-up techniques with tools and materials like X-Acto knives, Rapidograph pens, rubyliths, photostats, and rubber cement had to start learning new skills. Typesetters would eventually be phased out in favor of QuarkXPress. A decade of transition would revolutionize the industry, only to be upended again by the web.

My former colleague from Organic, Christian Haas—now ECD at YouTube—has been experimenting with AI video generation recently. He’s made a trilogy of short films called AI Jobs.

You can watch part one above 👆, but don’t sleep on parts two and three.

Haas put together a “behind the scenes” article explaining his process. It’s fascinating, and I want to play with video generation myself at some point.

I started with a rough script, but that was just the beginning of a conversation. As I started generating images, I was casting my characters and scouting locations in real time. What the model produced would inspire new ideas, and I would rewrite the script on the fly. This iterative loop continued through every stage. Decisions weren't locked in; they were fluid. A discovery made during the edit could send me right back to "production" to scout a new location, cast a new character and generate a new shot. This flexibility is one of the most powerful aspects of creating with Gen AI.

It’s a wonderful observation Haas has made—the workflow enabled by gen AI allows for more creative freedom. In any creative endeavor where the production of the final thing is really involved and utilizes a significant amount of labor and materials, be it a film, commercial photography, or software, planning is a huge part. We work hard to spec out everything before a crew of a hundred shows up on set or a team of highly-paid engineers start coding. With gen AI, as shown here with Google’s Veo 3, you have more room for exploration and expression.

UPDATE: I came across this post from Rory Flynn after I published this. He uses diagrams to direct Veo 3.


Behind the Prompts — The Making of "AI Jobs"

Christian Haas created the first film with the simple goal of learning to use the tools. He didn’t know if it would yield anything worth watching but that was not the point.

linkedin.com
Portraits of five recent design graduates. From top left to right: Ashton Landis, wearing a black sleeveless top with long blonde hair against a dark background; Erika Kim, outdoors in front of a mountain at sunset, smiling in a fleece-collared jacket; Emma Haines, smiling and looking over her shoulder in a light blazer, outdoors; Bottom row, left to right: Leah Ray, in a black-and-white portrait wearing a black turtleneck, looking ahead, Benedict Allen, smiling in a black jacket with layered necklaces against a light background

Meet the 5 Recent Design Grads and 5 Design Educators

For my series on the Design Talent Crisis (see Part I, Part II, and Part III), I interviewed five recent graduates from California College of the Arts (CCA) and San Diego City College. I’m an alum of CCA and I used to teach at SDCC. There’s a mix of folks from both the graphic design and interaction design disciplines.

Meet the Grads

If these enthusiastic and immensely talented designers are available and you’re in a position to hire, please reach out to them!

Benedict Allen

For the past year, CPG behemoth Unilever has been “working with marketing services group Brandtech to build up its Beauty AI Studio: a bespoke, in-house system inside its beauty and wellbeing business. Now in place across 18 different markets (the U.S. and U.K. among them), the studio is being used to make assets for paid social, programmatic display inventory and e-commerce usage across brands including Dove Intensive Repair, TRESemme Lamellar Shine and Vaseline Gluta Hya.”

Sam Bradley, writing in Digiday:

The system relies on Pencil Pro, a generative AI application developed by Brandtech Group. The tool draws on several large language models (LLMs), as well as API access to Meta and TikTok for effectiveness measurement. It’s already used by hearing-care brand Amplifon to rapidly produce text and image assets for digital ad channels.

In Unilever’s process, marketers use prompts and their own insights about target audiences to generate images and video based on 3D renders of each product, a practice sometimes referred to as “digital twinning.” Each brand in a given market is assigned a “BrandDNAi” — an AI tool that can retrieve information about brand guidelines and relevant regulations and that provides further limitations to the generative process.

So far, they haven’t used this system to generate AI humans. Yet.

Inside Unilever’s AI beauty marketing assembly line — and its implications for agencies

The CPG giant has created an AI-augmented in-house production system. Could it be a template for others looking to bring AI in house?

digiday.com
Human chain of designers supporting each other to reach laptops and design tools floating above them, illustrating collaborative mentorship and knowledge transfer in the design industry.

Why Young Designers Are the Antidote to AI Automation

In Part I of this series, I wrote about the struggles recent grads have had finding entry-level design jobs and what might be causing the stranglehold on the design job market.

Part II: Building New Ladders 

When I met Benedict Allen, he had just finished Portfolio Review a week earlier. That’s the big show all the design students in the Graphic Design program at San Diego City College work toward. It’s an event that brings out the local design community, where seasoned professionals review the portfolios of the graduating students.

Allen was all smiles and relief. “I want to dabble in different aspects of design because the principles are generally the same.” He went on to mention how he wants to start a fashion brand someday, DJ, try 3D. “I just want to test and try things and just have fun! Of course, I’ll have my graphic design job, but I don’t want that to be the end. Like when the workday ends, that’s not the end of my creativity.” He was bursting with enthusiasm.

Luke Wroblewski, writing in his blog:

Across several of our companies, software development teams are now "out ahead" of design. To be more specific, collaborating with AI agents (like Augment Code) allows software developers to move from concept to working code 10x faster. This means new features become code at a fast and furious pace.

When software is coded this way, however, it (currently at least) lacks UX refinement and thoughtful integration into the structure and purpose of a product. This is the work that designers used to do upfront but now need to "clean up" afterward. It's like the development process got flipped around. Designers used to draw up features with mockups and prototypes, then engineers would have to clean them up to ship them. Now engineers can code features so fast that designers are the ones going back and cleaning them up.

This is what I’ve been secretly afraid of. That we would go back to the times when designers were called in to do cleanup. Wroblewski says:

Instead of waiting for months, you can start playing with working features and ideas within hours. This allows everyone, whether designer or engineer, an opportunity to learn what works and what doesn’t. At its core rapid iteration improves software and the build, use/test, learn, repeat loop just flipped, it didn't go away.

Yeah, or the feature will get shipped this way and be stuck this way because startups move fast and move on.

My take is that as designers, we need to meet the moment and figure out how to build design systems and best practices into the agentic workflows our developer counterparts are using.


AI Has Flipped Software Development

For years, it's been faster to create mockups and prototypes of software than to ship it to production. As a result, software design teams could stay "ahead" of...

lukew.com

In many ways, this excellent article by Kaustubh Saini for Final Round AI’s blog is a cousin to my essay on the design talent crisis. But it’s about what happens when people “become” developers and only know vibe coding.

The appeal is obvious, especially for newcomers facing a brutal job market. Why spend years learning complex programming languages when you can just describe what you want in plain English? The promise sounds amazing: no technical knowledge required, just explain your vision and watch the AI build it.

In other words, these folks don’t understand the code and, well, bad things can happen.

The most documented failure involves an indie developer who built a SaaS product entirely through vibe coding. Initially celebrating on social media that his "saas was built with Cursor, zero hand written code," the story quickly turned dark.

Within weeks, disaster struck. The developer reported that "random things are happening, maxed out usage on api keys, people bypassing the subscription, creating random shit on db." Being non-technical, he couldn't debug the security breaches or understand what was going wrong. The application was eventually shut down permanently after he admitted "Cursor keeps breaking other parts of the code."

This failure illustrates the core problem with vibe coding: it produces developers who can generate code but can't understand, debug, or maintain it. When AI-generated code breaks, these developers are helpless.

I don’t foresee something this disastrous with design. I mean, a newbie designer wielding an AI-enabled Canva or Figma can’t tank a business alone because the client will have eyes on it and won’t let through something that doesn’t work. It could be a design atrocity, but it’ll likely be fine.

This can happen to a designer using vibe coding tools, however. Full disclosure: I’m one of them. This site is partially vibe-coded. My Severance fan project is entirely vibe-coded.

But back to the idea of a talent crisis. In the developer world, it’s already happening:

The fundamental problem is that vibe coding creates what experts call "pseudo-developers." These are people who can generate code but can't understand, debug, or maintain it. When AI-generated code breaks, these developers are helpless.

In other words, they don’t have the skills necessary to be developers because they can’t do the basics. They can’t debug, don’t understand architecture, have no code review skills, and basically have no fundamental knowledge of what it means to be a programmer. “They miss the foundation that allows developers to adapt to new technologies, understand trade-offs, and make architectural decisions.”

The same risk applies to design: even if our junior designers have the requisite fundamental skills, not having spent time developing their craft and strategic judgment through experience will be detrimental to them and to any org that hires them.


How AI Vibe Coding Is Destroying Junior Developers' Careers

New research shows developers think AI makes them 20% faster but are actually 19% slower. Vibe coding is creating unemployable pseudo-developers who can't debug or maintain code.

finalroundai.com
Illustration of people working on laptops atop tall ladders and multi-level platforms, symbolizing hierarchy and competition, set against a bold, abstract sunset background.

The Design Industry Created Its Own Talent Crisis. AI Just Made It Worse.

This is the first part in a three-part series about the design talent crisis. Read Part II and Part III.

Part I: The Vanishing Bottom Rung 

Erika Kim’s path to UX design represents a familiar pandemic-era pivot story, yet one that reveals deeper currents about creative work and economic necessity. Armed with a 2020 film and photography degree from UC Riverside, she found herself working gig photography—graduations, band events—when the creative industries collapsed. The work satisfied her artistic impulses but left her craving what she calls “structure and stability,” leading her to UX design. The field struck her as an ideal synthesis: “I’m creating solutions for companies. I’m working with them to figure out what they want, and then taking that creative input and trying to make something that works best for them.”

Since graduating from the interaction design program at San Diego City College a year ago, she’s had three internships and works retail part-time to pay the bills. “I’ve been in survival mode,” she admits. On paper, she’s a great candidate for any junior position. Speaking with her reveals a very thoughtful and resourceful young designer. Why hasn’t she been able to land a full-time job? What’s going on in the design job market? 

Retro-style robot standing at a large control panel filled with buttons, switches, and monitors displaying futuristic data.

The Era of the AI Browser Is Here

For nearly three years, Arc from The Browser Company has been my daily driver. To be sure, there was a little bit of a learning curve. Tabs disappeared after a day unless you pinned them. Then they became almost like bookmarks. Tabs were on the left side of the window, not at the top. Spaces let me organize my tabs based on use cases like personal, work, or finances. I could switch between tabs using Control-Tab and saw little thumbnails of the pages, similar to the app switcher on my Mac. Shift-Command-C copied the current page’s URL.

All these little interface ideas added up to a productivity machine for web jockeys like myself. And so, I was saddened to hear in May that The Browser Company stopped actively developing Arc in favor of a new AI-powered browser called Dia. (They are keeping Arc updated with maintenance releases.)

They had started beta-testing Dia with college students first and just recently opened it up to Arc members. I finally got access to Dia a few weeks ago. 

But before diving into Dia, I should mention I also got access to another AI browser, Perplexity’s Comet, about a week ago. I’m on their Pro plan but somehow got an invite in my email. I had thought it was limited to those on their much more expensive Max plan. Shhh.

This is a really well-written piece that pulls the AI + design concepts neatly together. Sharang Sharma, writing in UX Collective:

As AI reshapes how we work, I’ve been asking myself, it’s not just how to stay relevant, but how to keep growing and finding joy in my craft.

In my learning, the new shift requires leveraging three areas:
1. AI tools: Assembling an evolving AI design stack to ship fast
2. AI fluency: Learning how to design for probabilistic systems
3. Human-advantage: Strengthening moats like craft, agency and judgment to stay ahead of automation

Together with strategic thinking and human-centric skills, these pillars shape our path toward becoming an AI-native designer.

Sharma connects all the crumbs I’ve been dropping this week:


AI tools + AI fluency + human advantage = AI-native designer

From tools to agency, is this what it would take to thrive as a product designer in the AI era?

uxdesign.cc