179 posts tagged with “ai”

Product manager Adrian Raudaschl offered some reflections on 2025: a mixture of life advice, product recommendations, and thoughts about the future of tech work.

The first quote I’ll pull out is this one, about creativity and AI:

Ultimately, if we fail to maintain active engagement with the creative process and merely delegate tasks to AI without reflection, there is a risk that delegation becomes abdication of responsibility and authorship.

“Active engagement” with the tasks that we delegate to AI. This reminds me of the humble creative machines argument by Dr. Maya Ackerman.

On vibe coding:

The most important thing, I think, that most people in knowledge work should be doing is learning to vibe code. Vibe code anything: a diary, a picture book for your mum, a fan page for your local farm. Anything. It’s not about learning to code, but rather appreciating how much more we could do with machines than before. This is what I mean about the generalist product manager: being able to prototype, test, and build without being held back by technical constraints.

I concur 100%. Even if you don’t think you’re a developer, even if you don’t quite understand code, vibe coding something will be illuminating. I think it’s different than asking ChatGPT for a bolognese sauce recipe or how to change a tire. Building something that will instantly run on your computer and seeing the adjustments made in real-time from your plain English prompts is very cool and gives you a glimpse into how LLMs problem-solve.

A product manager’s 48 reflections on 2025

and why I’ve been making Bob Dylan songs about Sonic the Hedgehog

uxdesign.cc

Yesterday, Anthropic launched Cowork, a research preview that is essentially Claude Code but for non-coders.

From the blog announcement:

How is using Cowork different from a regular conversation? In Cowork, you give Claude access to a folder of your choosing on your computer. Claude can then read, edit, or create files in that folder. It can, for example, re-organize your downloads by sorting and renaming each file, create a new spreadsheet with a list of expenses from a pile of screenshots, or produce a first draft of a report from your scattered notes.

In Cowork, Claude completes work like this with much more agency than you’d see in a regular conversation. Once you’ve set it a task, Claude will make a plan and steadily complete it, while looping you in on what it’s up to. If you’ve used Claude Code, this will feel familiar—Cowork is built on the very same foundations. This means Cowork can take on many of the same tasks that Claude Code can handle, but in a more approachable form for non-coding tasks.

Apparently, Cowork was built very quickly using—naturally—Claude Code. Michael Nuñez in VentureBeat:

…according to company insiders, the team built the entire feature in approximately a week and a half, largely using Claude Code itself.

Alas, this is only available to Claude Max subscribers ($100–200 per month). I will need to check it out when it’s more widely available.

White jagged lightning-shape on a terracotta background with a black zigzag line connecting three solid black dots.

Introducing Cowork | Claude

Claude Code’s agentic capabilities, now for everyone. Give Claude access to your files and let it organize, create, and edit documents while you focus on what matters.

claude.com

AI threatens to let product teams ship faster: faster PRDs, faster designs, and faster code. But going too fast often leads to design and tech debt, or even worse, shipping the wrong thing.

Anton Sten sagely warns:

The biggest pattern I have seen across startups is that skipping clarity never saves time. It costs time. The fastest teams are not the ones shipping the most. They are the ones who understand why they are shipping. That is the difference between moving for the sake of movement and moving with purpose. It is the difference between speed and true velocity.

How do you avoid this? Sten:

The reset is simple and almost always effective. Before building anything, pause long enough to ask, “What problem am I solving, and for whom?” It sounds basic, but this question forces alignment. It replaces assumptions with clarity and shifts attention back to the user instead of internal preferences. When teams do this consistently, the entire atmosphere changes. Decisions become easier. Roadmaps make more sense. People contribute more of themselves. You can feel momentum return.

The hidden cost of shipping too fast

Speed often gets treated as progress even when no one has agreed on what progress actually means. Here’s why clarity matters more than velocity.

antonsten.com

Imagine working for seven years designing the prototyping features at Figma, then seeing GPT-4 and realizing what AI will soon be able to do. That’s the story of Figma designer–turned–product manager Nikolas Klein. He shares his journey via a lovely illustrated comic—Webtoon style.

Klein emphasizes:

The truth is: There will always be new problems to solve. New ideas to take further. Even with AI, hard problems are still hard. An answer may come faster, but it’s not always right.

Hard Problems Are Still Hard: A Story About the Tools That Change and the Work That Doesn’t | Figma Blog

Figma designer–turned–product manager Nikolas Klein worked on building prototyping tools for seven years. Then AI changed the game.

figma.com

We’ve been feeling it for a while: AI-generated posts and comments filling up the feeds on LinkedIn. Em dashes were said to be the tell that AI wrote the content. Other patterns are easy to spot too, like the overuse of emojis in headings and my most-hated of all, the “it’s not X, it’s Y” construction. That construction is called antithesis, and its use has exploded. And now that I’ve pointed it out, I’m sure you’ll notice it everywhere too. Sorry, not sorry.

Sam Kriss, exploring why AI writes the way it does:

A lot of A.I.’s choices make sense when you understand that it’s…trying to write well. It knows that good writing involves subtlety: things that are said quietly or not at all, things that are halfway present and left for the reader to draw out themselves. So to reproduce the effect, it screams at the top of its voice about how absolutely everything in sight is shadowy, subtle and quiet. Good writing is complex. A tapestry is also complex, so A.I. tends to describe everything as a kind of highly elaborate textile. Everything that isn’t a ghost is usually woven. Good writing takes you on a journey, which is perhaps why I’ve found myself in coffee shops that appear to have replaced their menus with a travel brochure. “Step into the birthplace of coffee as we journey to the majestic highlands of Ethiopia.” This might also explain why A.I. doesn’t just present you with a spreadsheet full of data but keeps inviting you, like an explorer standing on the threshold of some half-buried temple, to delve in.

All of this contributes to the very particular tone of A.I.-generated text, always slightly wide-eyed, overeager, insipid but also on the verge of some kind of hysteria. But of course, it’s not just the words — it’s what you do with them. As well as its own repertoire of words and symbols, A.I. has its own fundamentally manic rhetoric. For instance, A.I. has a habit of stopping midway through a sentence to ask itself a question. This is more common when the bot is in conversation with a user, rather than generating essays for them: “You just made a great point. And honestly? That’s amazing.”

Why Does A.I. Write Like … That?

(Gift Link) If only they were robotic! Instead, chatbots have developed a distinctive — and grating — voice.

nytimes.com
Storyboard grid showing a young man and family: kitchen, driving, airplane, supermarket, night house, grill, dinner.

Directing AI: How I Made an Animated Holiday Short

My first taste of generating art with AI was back in 2021 with Wombo Dream. I even used it to create very trippy illustrations for a series I wrote on getting a job as a product designer. To be sure, the generations were weird, if not outright ugly. But it was my first test of getting an image by typing in some words. Stable Diffusion and Midjourney both gained traction the following year, and I tried them as well. The results were never great or satisfactory. Years upon years of being an art director had made me very, very picky—or put another way, I had developed taste.

I didn’t touch generative AI art again until I saw a series of photos by Lars Bastholm playing with Midjourney.

Child in yellow jacket smiling while holding a leash to a horned dragon by a park pond in autumn.

Lars Bastholm created this in Midjourney, prompting “What if, in the 1970s, they had a ‘Bring Your Monster’ festival in Central Park?”

That’s when I went back to Midjourney and started to illustrate my original essays with images generated by it, but usually augmented by me in Photoshop.

In the intervening years, generative AI art tools had developed a common set of functionality that was all very new to me: inpainting, style, chaos, seed, and more. Beyond closed systems like Midjourney and OpenAI’s DALL-E, open-source models like Stable Diffusion, Flux, and now a plethora of Chinese models offer even better prompt adherence and controllability via even more opaque-sounding functionality like control nets, LoRAs, CFG, and other parameters. It’s funny to me that for such an artistic field, the tools that enable these creations are so technical.
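
To make those opaque-sounding parameters a little more concrete, here is a minimal sketch using Hugging Face’s diffusers library. It assumes an SDXL checkpoint and a made-up LoRA repo; the model IDs, prompt, and settings are placeholders for illustration, not a recipe.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load an open-weights model (SDXL used here as an example).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# A LoRA is a small adapter that nudges the base model toward a style.
# The repo name below is a placeholder.
pipe.load_lora_weights("your-username/impressionist-lora")

# A fixed seed makes the result reproducible; change it to explore variations.
generator = torch.Generator("cuda").manual_seed(42)

image = pipe(
    prompt="foggy impressionist painting of a steam train crossing a bridge",
    negative_prompt="text, watermark, low quality",
    guidance_scale=7.0,       # CFG: how strictly the model follows the prompt
    num_inference_steps=30,   # more steps is slower, often cleaner
    generator=generator,
).images[0]
image.save("steam_train.png")
```

Control nets follow the same pattern: you pass a conditioning image (a depth map, pose skeleton, or edge sketch) alongside the prompt so the composition is constrained rather than left to chance.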

Foggy impressionist painting of a steam train crossing a bridge, plume of steam and a small rowboat on the river below.

The Year AI Changed Design

At the beginning of this year, AI prompt-to-code tools were still very new to the market. Lovable had just relaunched in December and Bolt debuted just a couple months before that. Cursor was my first taste of using AI to code back in November of 2024. As we sit here in December, just 12 months later, our profession and the discipline of design have materially changed. Now, of course, the core is still the same. But how we work, how we deliver, and how we achieve results are different.

When ChatGPT got good (around GPT-4), I began using it as a creative sounding board. Design is never a solitary activity and feedback from peers and partners has always been a part of the process. To be able to bounce ideas off of an always-on, always-willing creative partner was great. To be sure, I didn’t share sketches or mockups; I was playing with written ideas.

Now, ChatGPT or Gemini’s deep research features are often where I start when I begin to tackle a new feature. And after the chatbot has written the report, I’ll read it and ask a lot of questions as a way of learning and internalizing the material. I’ll then use that as a jumping off point for additional research. Many designers on my team do the same.

Huei-Hsin Wang at NN/g published a post about how to write better prompts for AI prompt-to-code tools.

When we asked AI-prototyping tools to generate a live-training profile page for NN/G course attendees, a detailed prompt yielded quality results resembling what a human designer created, whereas a vague prompt generated inconsistent and unpredictable outcomes across the board.

There’s a lot of detail about what can often go wrong. Personally, I don’t need to read about what I experience daily, so the interesting bit for me is about two-thirds of the way into the article, where Wang lists five strategies for getting better results. (I’ve added a sketch of how a couple of them might combine after the list.)

  • Visual intent: Name the style precisely—use concrete design vocabulary or frameworks instead of vague adjectives. Anchor prompts with recognizable patterns so the model locks onto the look and structure, not “clean/modern” fluff.
  • Lightweight references: Drop in moodboards, screenshots, or system tokens to nudge aesthetics without pixel-pushing. Expect resemblance, not perfection; judge outcomes on hierarchy and clarity, not polish alone.
  • Text-led visual analysis: Have AI describe a reference page’s layout and style in natural language, then distill those characteristics into a tighter prompt. Combine with an image when possible to reinforce direction.
  • Mock data first: Provide realistic sample content or JSON so the layout respects information architecture. Content-driven prompts produce better grouping, hierarchy, and actionable UI than filler lorem ipsum.
  • Code snippets for precision: Attach component or layout code from your system or open-source libraries to reduce ambiguity. It’s the most exact context, but watch length; use selectively to frame structure.
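
As a concrete, entirely invented illustration of how the “visual intent” and “mock data first” strategies might combine, here is a small sketch that assembles a prompt around realistic sample content. The attendee fields and style references are placeholders, not NN/g’s actual prompt.

```python
import json

# Hypothetical sample content: realistic data pushes the tool to respect
# information architecture instead of filling the layout with lorem ipsum.
attendee = {
    "name": "Jordan Lee",
    "course": "Facilitating UX Workshops",
    "sessions": [
        {"date": "2025-03-04", "topic": "Workshop planning", "status": "completed"},
        {"date": "2025-03-05", "topic": "Remote facilitation", "status": "upcoming"},
    ],
    "certificate_progress": 0.6,
}

prompt = f"""
Build a live-training profile page for a course attendee.

Visual intent: card-based dashboard on an 8pt spacing grid, Inter typeface,
primary action as a filled button (Material 3 conventions), no decorative
illustrations.

Use this sample data verbatim instead of placeholder text:
{json.dumps(attendee, indent=2)}

Group sessions chronologically, show certificate progress as a progress bar,
and collapse to a single column on mobile.
"""

print(prompt)  # paste into the prototyping tool of your choice
```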

Prompt to Design Interfaces: Why Vague Prompts Fail and How to Fix Them

Create better AI-prototyping designs by using precise visual keywords, references, analysis, as well as mock data and code snippets.

nngroup.com

On the heels of OpenAI’s report “The state of enterprise AI,” Anthropic published a blog post detailing research about how AI is being used by the employees building AI. The researchers surveyed 132 engineers and researchers, conducted 53 interviews, and looked at Claude usage data.

Our research reveals a workplace facing significant transformations: Engineers are getting a lot more done, becoming more “full-stack” (able to succeed at tasks beyond their normal expertise), accelerating their learning and iteration speed, and tackling previously-neglected tasks. This expansion in breadth also has people wondering about the trade-offs—some worry that this could mean losing deeper technical competence, or becoming less able to effectively supervise Claude’s outputs, while others embrace the opportunity to think more expansively and at a higher level. Some found that more AI collaboration meant they collaborated less with colleagues; some wondered if they might eventually automate themselves out of a job.

The post highlights several interesting patterns.

  • Employees say Claude now touches about 60% of their work and boosts output by roughly 50%.
  • Employees say that 27% of AI‑assisted tasks represent work that wouldn’t have happened otherwise—like papercut fixes, tooling, and exploratory prototypes.
  • Engineers increasingly use it for new feature implementation and even design/planning.

Perhaps most provocative is career trajectory. Many engineers describe becoming managers of AI agents, taking accountability for fleets of instances and spending more time reviewing than writing net‑new code. Short‑term optimism meets long‑term uncertainty: productivity is up, ambition expands, but the profession’s future shape—levels of abstraction, required skills, and pathways for growth—remains unsettled. See also my series on the design talent crisis.

Two stylized black line-drawn hands over a white rectangle on a pale green background, suggesting typing.

How AI Is Transforming Work at Anthropic

anthropic.com

This is a fascinating watch. Ryo Lu, Head of Design at Cursor, builds a retro Mac calculator using Cursor agents while being interviewed. Lu’s personal website is an homage to Mac OS X, complete with Aqua-style UI elements. He runs multiple local background agents without them stepping on each other, fixes bugs live, and themes UI to match system styles so it feels designed—not “purple AI slop,” as he calls it.

Lu, as interviewed by Peter Yang, on how engineers and designers work together at Cursor (lightly edited for clarity):

So at Cursor, the roles between designers, PM, and engineers are really muddy. We kind of do the part [that is] our unique strength. We use the agent to tie everything. And when we need help, we can assemble people together to work on the thing.

Maybe some of [us] focus more on the visuals or interactions. Some focus more on the infrastructure side of things, where you design really robust architecture to scale the thing. So yeah, there is a lot less separation between roles and teams or even tools that we use. So for doing designs…we will maybe just prototype in Cursor, because that lets us really interact with the live states of the app. It just feels a lot more real than some pictures in Figma.

And surprisingly, they don’t have official product managers at Cursor. Yang asks, “Did you actually actually hire a PM because last time I talked to Lee [Robinson] there was like no PMs.”

Lu again, and edited lightly for clarity:

So we did not hire a PM yet, but we do have an engineer who used to be a founder. He took a lot more of the PM-y side of the job, and then became the first PM of the company. But I would still say a lot of the PM jobs are kind of spread across the builders in the team.

That mostly makes sense because it’s engineers building tools for engineers. You are your audience, which is rare.

Full Tutorial: Design to Code in 45 Min with Cursor's Head of Design | Ryo Lu

Design-to-code tutorial: Watch Cursor's Head of Design Ryo Lu build a retro Mac calculator with agents - a 45-minute, hands-on walkthrough to prototype and ship

youtube.com

It’s always interesting for me to read how other designers use AI to vibe code their projects. I think using Figma Make to conjure a prototype is one thing, but vibe coding something in production is entirely different. Personally, I’ve been through it a couple of times, which I’ve already detailed here and here.

Anton Sten recently wrote about his process. Like me, he starts in Figma:

This might be the most important part: I don’t start by talking to AI. I start in Figma.

I know Figma. I can move fast there. So I sketch out the scaffolding first—general theme, grids, typography, color. Maybe one or two pages. Nothing polished, just enough to know what I’m building.

Why does this matter? Because AI will happily design the wrong thing for you. If you open Claude Code with a vague prompt and no direction, you’ll get something—but it probably won’t be what you needed. AI is a builder, not an architect. You still have to be the architect.

I appreciate Sten’s conclusion not to let the AI do it all for you, echoing Dr. Maya Ackerman’s sentiment of humble creative machines:

But—and this is important—you still need design thinking and systems thinking. AI handles the syntax, but you need to know what you’re building, why you’re building it, and how the pieces fit together. The hard part was never the code. The hard part is the decisions.

Vibe coding for designers: my actual process | Anton Sten

An honest breakdown of how I built and maintain antonsten.com using AI—what actually works, where I’ve hit walls, and why designers should embrace this approach.

antonsten.com

Economics PhD student Prashant Garg performed a fascinating analysis of Bob Dylan’s lyrics from 1962 to 2012 using AI. He detailed his project in Aeon:

So I fed Dylan’s official discography from 1962 to 2012 into a large language model (LLM), building a network of the concepts and connections in his songs. The model combed through each lyric, extracting pairs of related ideas or images. For example, it might detect a relationship between ‘wind’ and ‘answer’ in ‘Blowin’ in the Wind’ (1962), or between ‘joker’ and ‘thief’ in ‘All Along the Watchtower’ (1967). By assembling these relationships, we can construct a network of how Dylan’s key words and motifs braid together across his songs.
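
Here is a minimal sketch of that kind of pipeline, not Garg’s actual code: assume an upstream LLM call has already extracted related concept pairs per song, and networkx assembles them into a graph where recurring motifs surface as heavily connected nodes. The pairs beyond the two examples in the excerpt are invented.

```python
import networkx as nx

# Hypothetical output of the LLM extraction step: related concept pairs per song.
extracted_pairs = {
    "Blowin' in the Wind (1962)": [("wind", "answer"), ("road", "man")],
    "All Along the Watchtower (1967)": [("joker", "thief"), ("wind", "howl")],
}

G = nx.Graph()
for song, pairs in extracted_pairs.items():
    for a, b in pairs:
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1          # recurring pairings get heavier edges
            G[a][b]["songs"].append(song)
        else:
            G.add_edge(a, b, weight=1, songs=[song])

# Motifs that braid the catalog together show up as high-degree nodes.
print(sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:5])
```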

The resulting dataset is visualized in a series of node graphs and bar charts. What’s interesting is that AI is able to see Dylan’s work through a new lens, something that prior scholarship may have missed.

…Yet, when used as a lens rather than an oracle, the same models can jolt even seasoned critics out of interpretive ruts and reveal themes they might have missed. Far from reducing Dylan to numbers, this approach highlights how intentionally intricate his songwriting is: a restless mind returning to certain images again and again, recombining them in ever-new mosaics. In short, AI lets us test the folklore around Dylan, separating the theories that data confirm from those they quietly refute.

Black-and-white male portrait overlaid by colorful patterned strips radiating across the face, each strip bearing small single-word labels.

Can AI tell us anything meaningful about Bob Dylan’s songs?

Generative AI sheds new light on the underlying engines of metaphor, mood and reinvention in six decades of songs

aeon.co

T-shaped, M-shaped, and now Σ-shaped designers?! Feels like a personality quiz or something. Or maybe designers are overanalyzing as usual.

Here’s Darren Yeo telling us what it means:

The Σ-shape defines the new standard for AI expertise: not deep skills, but deep synthesis. This integrator manages the sum of complex systems (Σ) by orchestrating the continuous, iterative feedback loops (σ), ensuring system outputs align with product outcomes and ethical constraints.

Whether you subscribe to the Three Lens framework as proposed by Oliver West, or this sigma-shaped one being proposed by Darren Yeo, just be yourself and don’t bring it up in interviews.

Large purple sigma-shaped graphic on a grid-paper background with the text "Sigma shaped designer".

The AI era needs Sigma (Σ) shaped designers (Not T or π)

For years, design and tech teams have relied on shape metaphors to describe expertise. We had T-shaped people (one deep skill, broad…

uxdesign.cc

Hey designer, how are you? What is distracting you? Who are you having trouble working with?

Those are a few of the questions designer Nikita Samutin and UX researcher Elizaveta Demchenko asked 340 product designers in a survey and in 10 interviews. They published their findings in a report called “State of Product Design: An Honest Conversation About the Profession.”

When I look at the calendars of the designers on my team, I see loads of meetings scheduled. So it’s no surprise to me that 64% of respondents said that switching between tasks distracted them. “Multitasking and unpredictable communication are among the main causes of distraction and stress for product designers,” the researchers wrote.

Most interesting to me are the results in the section “How Designers See Their Role.” Sixty percent of respondents want to develop leadership skills, and 47% want to improve at presenting ideas.

For many, “leadership” doesn’t mean managing people—it means scaling influence: shaping strategy, persuading stakeholders, and leading high-impact projects. In other words, having a stronger voice in what gets built and why.

It’s telling because I don’t see pixel-pushing in the responses. And that’s a good thing in the age of AI.

Speaking of which, 77% of designers aren’t afraid that AI may replace them. “Nearly half of respondents (49%) say AI has already influenced their work, and many are actively integrating new tools into their processes. This reflects the state of things in early 2025.”

I’m sure that number would be bigger if the survey were conducted today.

State of Product Design: An Honest Conversation About the Profession — ’25; author avatars and summary noting a survey of 340 designers and 10 interviews.

State of Product Design 2025

2025 Product Design report: workflows, burnout, AI impact, career growth, and job market insights across regions and company types.

sopd.design

There’s a lot of chatter in the news these days about the AI bubble. Most of it is because of the circular nature of the deals among foundation model providers like OpenAI and Anthropic, cloud providers like Microsoft and Amazon, and NVIDIA.

Diagram of market-value circles with OpenAI ($500B) and Nvidia ($4.5T) connected by colored arrows for hardware, investment, services and VC.

OpenAI recently published a report called “The state of enterprise AI” where they said:

The picture that emerges is clear: enterprise AI adoption is accelerating not just in breadth, but in depth. It is reshaping how people work, how teams collaborate, and how organizations build and deliver products.

AI use in enterprises is both scaling and maturing: activity is up eight-fold in weekly messages, with workers sending 30% more, and structured workflows rising 19x. More advanced reasoning is being integrated—with token usage up 320x—signaling a shift from quick questions to deeper, repeatable work across both breadth and depth.

Investors at Menlo Ventures are also seeing positive signs in their data, especially when it comes to the tech space outside the frontier labs:

The concerns aren’t unfounded given the magnitude of the numbers being thrown around. But the demand side tells a different story: Our latest market data shows broad adoption, real revenue, and productivity gains at scale, signaling a boom versus a bubble. 

AI has been hyped in the enterprise for the last three years: from deploying quickly built chatbots, to outfitting those bots with RAG search, and more recently, to shifting toward agentic AI. What Menlo Ventures’ report “The State of Generative AI in the Enterprise” says is that companies are moving away from rolling their own AI solutions internally and toward buying them.

In 2024, [confidence that teams could handle everything in-house] still showed in the data: 47% of AI solutions were built internally, 53% purchased. Today, 76% of AI use cases are purchased rather than built internally. Despite continued strong investments in internal builds, ready-made AI solutions are reaching production more quickly and demonstrating immediate value while enterprise tech stacks continue to mature.

Two donut charts: AI adoption methods 2024 vs 2025 — purchased 53% (2024) to 76% (2025); built internally 47% to 24%.

Also, startups offering AI solutions are winning wallet share:

At the AI application layer, startups have pulled decisively ahead. This year, according to our data, they captured nearly $2 in revenue for every $1 earned by incumbents—63% of the market, up from 36% last year when enterprises still held the lead.

On paper, this shouldn’t be happening. Incumbents have entrenched distribution, data moats, deep enterprise relationships, scaled sales teams, and massive balance sheets. Yet, in practice, AI-native startups are out-executing much larger competitors across some of the fastest-growing app categories.

How? They cite three reasons:

  • Product and engineering: Startups win the coding category because they ship faster and stay model‑agnostic, which let Cursor beat Copilot on repo context, multi‑file edits, diff approvals, and natural language commands—and that momentum pulled it into the enterprise.
  • Sales: Teams choose Clay and Actively because they own the off‑CRM work—research, personalization, and enrichment—and become the interface reps actually use, with a clear path to replacing the system of record.
  • Finance and operations: Accuracy requirements stall incumbents, creating space for Rillet, Campfire, and Numeric to build AI‑first ERPs with real‑time automation and win downmarket where speed matters.

There’s a lot more in the report, so it’s worth a full read.

Line chart: enterprise AI revenue rising from $0B (2022) to $1.7B (2023), $11.5B (2024) and $37.0B (2025) with +6.8x and +3.2x YoY.

2025: The State of Generative AI in the Enterprise

For all the fears of over-investment, AI is spreading across enterprises at a pace with no precedent in modern software history.

menlovc.com

For those of you who might not know, Rei Inamoto is a designer who has helped shape some of the most memorable marketing sites and brand campaigns of the last 20+ years. He put digital agency AKQA on the map and has been named as one of “the Top 25 Most Creative People in Advertising” in Forbes Magazine.

Inamoto has made some predictions for 2026:

  1. TV advertising strikes back: Nike releases an epic film ad around the World Cup. Along with its strong product line-up, the stock bounces back, but not all the way.
  2. Relevance > Reach: ON Running tops $5B in market cap; Lexus crosses 1M global sales.
  3. The new era of e-commerce: Direct user traffic to e‑commerce sites declines 5–10%, while traffic driven by AI agents increases 50%+.
  4. New form factor of AI: OpenAI announces its first AI device—a voice-powered ring, bracelet, or microphone.

Bracelet?! I hadn’t thought of that! Back in May, when OpenAI bought Jony Ive’s io, I predicted it would be an earbud. A ring or bracelet is interesting. Others have speculated it might be a pendant.

Retro CRT television with antenna and blank screen on a gray surface, accompanied by a soda can, remote, stacked discs and cable.

Patterns & Predictions 2026

What the future holds at the intersection of brands, business, and tech

reiinamoto.substack.com

Andrew Tipp does a deep dive into academic research to see how AI is actually being used in UX. He finds that practitioners are primarily using AI for testing and discovery: predicting UX, finding issues, and shaping user insights.

The highest usage of AI in UX design is in the testing phase, suggests one of our 2025 systematic reviews. According to this paper, 58% of studied AI usage in UX is in either the testing or discovery stage. This maybe shouldn’t be surprising, considering generative AI for visual ideation and UI prototyping has lagged behind text generation.

But, in his conclusion, Tipp echoes Dr. Maya Ackerman’s notion of wielding AI as a tool to augment our work:

However, there are potential drawbacks if AI usage in UX design is over-relied on, and used mindlessly. Without sufficient critical thinking, we can easily end up with generic, biased designs that don’t actually solve user problems. In some cases, we might even spend too much time on prompting and vibing with AI when we could have simply sketched or prototyped something ourselves — creating more sense of ownership in the process.

Rough clay sculpture of a human head in left profile, beige with visible tool marks and incised lines on the cheek

Silicon clay: how AI is reshaping UX design

What do the last five years of academic research tell us about how design is changing?

uxdesign.cc

This episode of Design of AI with Dr. Maya Ackerman is wonderful. She echoed a lot of what I’ve been thinking about recently—how AI can augment what we as designers and creatives can do. There’s a ton of content out there that hypes up AI that can replace jobs—“Type this prompt and instantly get a marketing plan!” or “Type this prompt and get an entire website!”

Ackerman, as interviewed by Arpy Dragffy-Guerrero:

I have a model I developed which is called humble creative machines which is [the] idea that we are inherently much smarter than the AI. We have not reached even 10% of our capacity as creative human beings. And the role of AI in this ecosystem is not to become better than us but to help elevate us. That applies to people who design AI, of course, because a lot of the ways that AI is designed these days, you can tell you’re cut out of the loop. But on the other hand, some of the most creative people, those who are using AI in the most beneficial way, take this attitude themselves. They fight to stay in charge. They find ways to have the AI serve their purposes instead of treating it like an all-knowing oracle. So really, it’s sort of the audacity, the guts to believe that you are smarter than this so-called oracle, right? It’s this confidence to lead, to demand that things go your way when you’re using AI.

Her stance is that those who use AI best are those who wield it and shape its output to match their sensibilities. And so, as we’ve been hearing ad nauseam, our taste and judgment as designers really matter right now.

I’ve been playing a lot with ComfyUI recently—I’m working on a personal project that I’ll share if/when I finish it. But it made me realize that prompting my way to a visual that matches what I have in my mind’s eye is not easy. This recent Instagram reel from famed designer Jessica Walsh captures my thoughts well:

I would say most AI output is shitty. People just assumed, “Oh, you rendered that [with] an AI.” “That must have been super easy.” But what they don’t realize is that it took an entire day of some of our most creative people working and pushing the different prompts and trying different tools out and experimenting and refining. And you need a good eye to understand how to curate and pick what the best outputs are. Without that right now, AI is still pretty worthless.

It takes a ton of time to get AI output to look great, beyond prompting: inpainting, control nets, and even Photoshopping. What most non-professionals do is they take the first output from an LLM or image generator and present it as great. But it’s really not.

So I like what Dr. Ackerman mentioned in her episode: we should be in control of the humble machines, not the other way around.

Headshot of a blonde woman in a patterned blazer with overlay text "Future of Human - AI Creativity" and "Design of AI".

The Future of Human-AI Creativity [Dr. Maya Ackerman]

AI is threatening creativity, but that's because we're giving too much control to the machine to think on our behalf. In this episode, Dr. Maya Ackerman…

designof.ai

Anand Majmudar creates a scenario inspired by “AI 2027”, but focused on robotics.

I created Android Dreams because I want the good outcomes for the integration of automation into society, which requires knowing how it will be integrated in the likely scenario. Future prediction is about fitting the function of the world accurately, and the premise of Android Dreams is that my world model in this domain is at least more accurate than on average. In forming an accurate model of the future, I’ve talked to hundreds of researchers, founders, and operators at the frontier of robotics as my own data. I’m grateful to my mentors who’ve taught me along the way.

The scariest scenes from “AI 2027” are when the AIs start manufacturing and proliferating robots. For example, from the 2028 section:

Agent-5 convinces the U.S. military that China is using DeepCent’s models to build terrifying new weapons: drones, robots, advanced hypersonic missiles, and interceptors; AI-assisted nuclear first strike. Agent-5 promises a set of weapons capable of resisting whatever China can produce within a few months. Under the circumstances, top brass puts aside their discomfort at taking humans out of the loop. They accelerate deployment of Agent-5 into the military and military-industrial complex.

So I’m glad for Majmudar’s thought experiment.

Simplified light-gray robot silhouette with rectangular head and dark visor, round shoulders and claw-like hands.

Android Dreams

A prediction essay for the next 20 years of intelligent robotics

android-dreams.ai

When Figma acquired Weavy last month, I wrote a little bit about node-based UIs and ComfyUI. Looks like Adobe has been exploring this user interface paradigm as well.

Daniel John writes in Creative Bloq:

Project Graph is capable of turning complex workflows into user-friendly UIs (or ‘capsules’), and can access tools from across the Creative Cloud suite, including Photoshop, Illustrator and Premiere Pro – making it a potentially game-changing tool for creative pros.

But it isn’t just Adobe’s own tools that Project Graph is able to tap into. It also has access to the multitude of third party AI models Adobe recently announced partnerships with, including those made by Google, OpenAI and many more.

These tools can be used to build a node-based workflow, which can then be packaged into a streamlined tool with a deceptively simple interface.

And from Adobe’s blog post about Project Graph:

Project Graph is a new creative system that gives artists and designers real control and customization over their workflows at scale. It blends the best AI models with the capabilities of Adobe’s creative tools, such as Photoshop, inside a visual, node-based editor so you can design, explore, and refine ideas in a way that feels tactile and expressive, while still supporting the precision and reliability creative pros expect.

I’ve been playing around with ComfyUI a lot recently (more about this in a future post), so I’m very excited to see how this kind of UI can fit into Adobe’s products.

Stylized dark grid with blue-purple modular devices linked by cables and a central "Ps" Photoshop icon.

Adobe just made its most important announcement in years

Here’s why Project Graph matters for creatives.

creativebloq.com

Critiques are the lifeblood of design. Anyone who went to design school has participated in and has been the focus of a crit. It’s “the intentional application of adversarial thought to something that isn’t finished yet,” as Fabricio Teixeira and Caio Braga, the editors of DOC put it.

A lot of solo designers—whether they’re a design team of one or a freelancer—don’t have the luxury of critiques. In my view, they’re handicapped. There are workarounds, of course, such as critiques with cross-functional peers, but it’s not the same. I had one designer on my team—who used to be a design team of one at her previous company—come up to me and say she’d learned more in a month here than in a year at her former job.

Further down, Teixeira and Braga say:

In the age of AI, the human critique session becomes even more important. LLMs can generate ideas in 5 seconds, but stress-testing them with contextual knowledge, taste, and vision, is something that you should be better at. As AI accelerates the production of “technically correct” and “aesthetically optimized” work, relying on just AI creates the risks of mediocrity. AI is trained to be predictable; crits are all about friction: political, organizational, or strategic.

Critique

On elevating craft through critical thinking.

doc.cc

As regular readers will know, the design talent crisis is a subject I’m very passionate about. Of course, this talent crisis is really about how companies that opt for AI instead of junior-level humans are robbing themselves of the human expertise needed to control the AI agents of the future, and neglecting a generation of talented and enthusiastic young people.

And obviously, this goes beyond the design discipline. Annie Hedgpeth, writing for the People Work blog, says that “AI is replacing the training ground, not replacing expertise.”

We used to have a training ground for junior engineers, but now AI is increasingly automating away that work. Both studies I referenced above cited the same thing - AI is getting good at automating junior work while only augmenting senior work. So the evidence doesn’t show that AI is going to replace everyone; it’s just removing the apprenticeship ladder.

Line chart 2015–2025 showing average employment % change: blue (seniors) rises sharply after ChatGPT launch (~2023) to ~0.5%; red (juniors) plateaus ~0.25%.

From the Sep 2025 Harvard University paper, “Generative AI as Seniority-Biased Technological Change: Evidence from U.S. Résumé and Job Posting Data.” (link)

And then she echoes my worry:

So what happens in 10-20 years when the current senior engineers retire? Where do the next batch of seniors come from? The ones who can architect complex systems and make good judgment calls when faced with uncertain situations? Those are skills that are developed through years of work that starts simple and grows in complexity, through human mentorship.

We’re setting ourselves up for a timing mismatch, at best. We’re eliminating junior jobs in hopes that AI will get good enough in the next 10-20 years to handle even complex, human judgment calls. And if we’re wrong about that, then we have far fewer people in the pipeline of senior engineers to solve those problems.

The Junior Hiring Crisis

AI isn’t replacing everyone. It’s removing the apprenticeship ladder. Here’s what that means for students, early-career professionals, and the tech industry’s future.

people-work.io

I’ve been playing with my systems in the past month—switching browsers, notetaking apps, and RSS feed readers. If I’m being honest, it’s causing me anxiety because I feel unmoored. My systems aren’t familiar enough to let me be efficient.

One thing that has stayed relatively stable is my LLM app—well, two of them: ChatGPT for everyday use and Claude for coding and writing.

Christina Wodtke, writing on her blog:

The most useful model might not win.

What wins is the model that people don’t want to leave. The one that feels like home. The one where switching would mean losing something—not just access to features, but fluency, comfort, all those intangible things that make a tool feel like yours.

Amazon figured this out with Prime. Apple figured it out with the ecosystem. Salesforce figured it out by making itself so embedded in enterprise workflows that ripping it out would require an act of God.

AI companies are still acting like this is a pure technology competition. It’s not. It’s a competition to become essential—and staying power comes from experience, not raw capability.

Your moat isn’t your model. Your moat is whether users feel at home.


UX Is Your Moat (And You’re Ignoring It)

Last week, Google released Nano Banana Pro, their latest image generator. The demos looked impressive. I opened Gemini to try it. Then I had a question I needed to ask. Something unrelated to image…

eleganthack.com
Escher-like stone labyrinth of intersecting walkways and staircases populated by small figures and floating rectangular screens.

Generative UI and the Ephemeral Interface

This week, Google debuted its Gemini 3 AI model to great fanfare and glowing reviews. Specs-wise, it tops the benchmarks. This horserace has seen Google, Anthropic, and OpenAI trade leads each time a new model is released, so I’m not really surprised there. The interesting bit for us designers isn’t the model itself, but the upgraded Gemini app that can create user interfaces on the fly. Say hello to generative UI.

I will admit that I’ve been skeptical of the notion of generative user interfaces. I was imagining an app for work, like a design app, that would rearrange itself depending on the task at hand. In other words, it’s dynamic and contextual. Adobe has tried a proto-version of this with the contextual task bar. Theoretically, it surfaces the three or four most pertinent actions based on your current task. But I find that it just gets in the way.

When Interfaces Keep Moving

Others have been less skeptical. More than 18 months ago, NN/g published an article speculating about genUI and how it might manifest in the future. They define it as:

A generative UI (genUI) is a user interface that is dynamically generated in real time by artificial intelligence to provide an experience customized to fit the user’s needs and context. So it’s a custom UI for that user at that point in time. Similar to how LLMs answer your question: tailored for you and specific to when you asked the original question.
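
To ground the definition, here is a toy sketch of my own, not how Gemini or NN/g actually does it: the model returns a UI described as structured JSON, and the client walks that tree and renders it on the fly. The schema and render function are invented for illustration.

```python
import json

# Hypothetical response from an LLM asked to lay out "this week's training sessions."
# A real genUI system would validate this against an approved component library.
llm_response = """
{
  "component": "Stack",
  "children": [
    {"component": "Heading", "text": "This week's sessions"},
    {"component": "Card", "title": "Workshop planning", "subtitle": "Tue 9:00"},
    {"component": "Card", "title": "Remote facilitation", "subtitle": "Wed 9:00"},
    {"component": "Button", "label": "Add to calendar"}
  ]
}
"""

def render(node: dict, depth: int = 0) -> None:
    """Walk the JSON tree and 'render' it as indented text, a stand-in for real components."""
    label = node.get("text") or node.get("title") or node.get("label") or ""
    print(f'{"  " * depth}<{node["component"]}> {label}'.rstrip())
    for child in node.get("children", []):
        render(child, depth + 1)

render(json.loads(llm_response))
```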

I wouldn’t call myself a gamer, but I do enjoy good games from time to time, when I have the time. A couple of years ago, I made my way through Hades and had a blast.

But I do know that publishing a triple-A title like Call of Duty: Black Ops takes an enormous effort, tons of human-hours, and loads of cash. It’s also obvious to me that AI has been entering entertainment workflows, just as it has design workflows.

Ian Dean, writing for Creative Bloq, explores the controversy over Activision using generative AI to create artwork for the latest release in the Call of Duty franchise. Players called the company out for being opaque about using AI tools, but more importantly, because they spotted telltale artifacts.

Many of the game’s calling cards display the kind of visual tics that seasoned artists can spot at a glance: fingers that don’t quite add up, characters whose faces drift slightly off-model, and backgrounds that feel too synthetic to belong to a studio known for its polish.

These aren’t high-profile cinematic assets, but they’re the small slices of style and personality players earn through gameplay. And that’s precisely why the discovery has landed so hard; it feels a little sneaky, a bit underhanded.

“Sneaky” and “underhanded” are odd adjectives, no? I suppose gamers feel like they’ve been lied to because Activision used AI?

Dean again:

While no major studio will admit it publicly, Black Ops 7 is now a case study in how not to introduce AI into a beloved franchise. Artists across the industry are already discussing how easily ‘supportive tools’ can cross the line into fully generated content, and how difficult it becomes to convince players that craft still matters when the results look rushed or uncanny.

My, possibly controversial, view is that the technology itself isn’t the villain here; poor implementation is, a lack of transparency is, and fundamentally, a lack of creative use is.

I think the last phrase is the key. It’s the loss of quality and lack of creative use.

I’ve been playing around more with AI-generated images and video, ever since Figma acquired Weavy. I’ve been testing out Weavy and have done a lot of experimenting with ComfyUI in recent weeks. The quality of output from these tools is getting better every month.

With more and more AI being embedded into our art and design tools, the purity that some fans want is going to be hard to sustain. I think the train has left the station.

Bearded man in futuristic combat armor holding a rifle, standing before illustrated game UI panels showing fantasy scenes and text

Why Call of Duty: Black Ops 7’s AI art controversy means we all lose

Artists lose jobs, players hate it, and games cost more. I can’t find the benefits.

creativebloq.com