132 posts tagged with “user experience”

Alrighty, here’s one more “lens” thing to throw at you today.

In UX Collective, Daleen Rabe says that a “designer’s true value lies not in the polish of their pixels, but in the clarity of their lens.” She means our point-of-view, how we process the world:

  1. The method for creating truth
  2. The discipline of asking questions
  3. The mindset for enacting change
  4. The compass for navigating our ethics

The spec, as she calls it, is the designer’s method for creating truth. Others might call it a mockup or wireframe. Either way, it’s a visual representation of what we intend to build:

The spec is a democratic tool, while a text-based document can be ambiguous. It relies on a shared interpretation of language that often doesn’t exist. A visual, however, is a common language. It allows people with vastly different perspectives to align on something we can all agree exists in this reality. It’s a two-dimensional representation that is close enough to the truth to allow us to debate realistic scenarios and identify issues before they become code.

As designers, our role is to find the balance between the theoretical concept of what the business needs and what is tangibly feasible. The design spec is the tool we use to achieve this.

3D hexagonal prism sketched in black outline on a white background

The product designer’s Lens

Four tools that product designers use that have nothing to do with Figma

uxdesign.cc

T-shaped, M-shaped, and now Σ-shaped designers?! Feels like a personality quiz or something. Or maybe designers are overanalyzing as usual.

Here’s Darren Yeo telling us what it means:

The Σ-shape defines the new standard for AI expertise: not deep skills, but deep synthesis. This integrator manages the sum of complex systems (Σ) by orchestrating the continuous, iterative feedback loops (σ), ensuring system outputs align with product outcomes and ethical constraints.

Whether you subscribe to the Three Lenses framework proposed by Oliver West or the sigma-shaped one proposed by Darren Yeo, just be yourself and don’t bring it up in interviews.

Large purple sigma-shaped graphic on a grid-paper background with the text "Sigma shaped designer".

The AI era needs Sigma (Σ) shaped designers (Not T or π)

For years, design and tech teams have relied on shape metaphors to describe expertise. We had T-shaped people (one deep skill, broad…

uxdesign.cc

Oliver West argues in UX Magazine that UX designers aren’t monolithic—meaning we’re not all the same, nor do we all see the world the same way.

West:

UX is often described as a mix of art and science, but that definition is too simple. The truth is, UX is a spectrum made up of three distinct but interlinked lenses:

  • Creativity: Bringing clarity, emotion, and imagination to how we solve problems.
  • Science: Applying evidence, psychology, and rigor to understand behavior.
  • Business: Focusing on relevance, outcomes, and measurable value.

Every UX professional looks through these lenses differently. And that’s exactly how it should be.

He then outlines how those who are more focused on certain parts of the spectrum may be better suited to specialized roles. For example, if you’re more focused on creativity, you might be more of a UI designer:

UI Designers lead with the creative lens. Their strength lies in turning complex ideas into interfaces that feel intuitive, elegant, and emotionally engaging. But the best UI Designers also understand the science of usability and the business context behind what they’re designing.

I think product designers working in the startup world actually do need all three lenses, as it were, but with a bias towards Science and Business.

Glass triangular prism with red and blue reflections on a blue surface; overlay text about UX being more than one skill and using three lenses.

The Three Lenses of UX: Because Not All UX Is the Same

Great designers don’t do everything; they see the world through different lenses: creative, scientific, and strategic. This article explains why those differences aren’t flaws, but rather the core reason UX works, and how identifying your own lens can transform careers, hiring, and collaboration. If you’ve ever wondered why “unicorn” designers don’t exist, this perspective explains why.

uxmag.com

Hey designer, how are you? What is distracting you? Who are you having trouble working with?

Those are a few of the questions designer Nikita Samutin and UX researcher Elizaveta Demchenko asked 340 product designers in a survey and in 10 interviews. They published their findings in a report called “State of Product Design: An Honest Conversation About the Profession.”

When I look at the calendars of the designers on my team, I see loads of meetings scheduled. So it’s no surprise to me that 64% of respondents said that switching between tasks distracted them. “Multitasking and unpredictable communication are among the main causes of distraction and stress for product designers,” the researchers wrote.

Most interesting to me are the results in the section “How Designers See Their Role.” Sixty percent of respondents want to develop leadership skills, and 47% want to improve at presenting ideas.

For many, “leadership” doesn’t mean managing people—it means scaling influence: shaping strategy, persuading stakeholders, and leading high-impact projects. In other words, having a stronger voice in what gets built and why.

It’s telling because I don’t see pixel-pushing in the responses. And that’s a good thing in the age of AI.

Speaking of which, 77% of designers aren’t afraid that AI may replace them. “Nearly half of respondents (49%) say AI has already influenced their work, and many are actively integrating new tools into their processes. This reflects the state of things in early 2025.”

I’m sure that number would be bigger if the survey were conducted today.

State of Product Design: An Honest Conversation About the Profession — ’25; author avatars and summary noting a survey of 340 designers and 10 interviews.

State of Product Design 2025

2025 Product Design report: workflows, burnout, AI impact, career growth, and job market insights across regions and company types.

sopd.design

Andrew Tipp does a deep dive into academic research to see how AI is actually being used in UX. He finds that practitioners are primarily using AI for testing and discovery: predicting UX, finding issues, and shaping user insights.

The highest usage of AI in UX design is in the testing phase, suggests one of our 2025 systematic reviews. According to this paper, 58% of studied AI usage in UX is in either the testing or discovery stage. This maybe shouldn’t be surprising, considering generative AI for visual ideation and UI prototyping has lagged behind text generation.

But, in his conclusion, Tipp echoes Dr. Maya Ackerman’s notion of wielding AI as a tool to augment our work:

However, there are potential drawbacks if AI usage in UX design is over-relied on, and used mindlessly. Without sufficient critical thinking, we can easily end up with generic, biased designs that don’t actually solve user problems. In some cases, we might even spend too much time on prompting and vibing with AI when we could have simply sketched or prototyped something ourselves — creating more sense of ownership in the process.

Rough clay sculpture of a human head in left profile, beige with visible tool marks and incised lines on the cheek

Silicon clay: how AI is reshaping UX design

What do the last five years of academic research tell us about how design is changing?

uxdesign.cc

I spend a lot of time not talking about design or hanging out with other designers. I suppose I do a lot of reading about design to write this blog, and I am talking with the designers on my team, but I see Design as the output of a lot of input that comes from the rest of life.

Hardik Pandya agrees and puts it much more elegantly:

Design is synthesizing the world of your users into your solutions. Solutions need to work within the user’s context. But most designers rarely take time to expose themselves to the realities of that context.

You are creative when you see things others don’t. Not necessarily new visuals, but new correlations. Connections between concepts. Problems that aren’t obvious until someone points them out. And you can’t see what you’re not exposed to.

Improving as a designer is really about increasing your exposure. Getting different experiences and widening your input of information from different sources. That exposure can take many forms. Conversations with fellow builders like PMs, engineers, customer support, sales. Or doing your own digging through research reports, industry blogs, GPTs, checking out other products, YouTube.

Male avatar and text "EXPOSURE AS A DESIGNER" with hvpandya.com/notes on left; stippled doorway and rock illustration on right.

Exposure

For equal amount of design skills, your exposure to the world determines how effective of a designer you can be.

hvpandya.com

Designer and front-end dev Ondřej Konečný has a lovely presentation of his book collection.

My favorites that I’ve read include:

  • Creative Selection by Ken Kocienda (my review)
  • Grid Systems in Graphic Design by Josef Müller-Brockmann
  • Steve Jobs by Walter Isaacson
  • Don’t Make Me Think by Steve Krug
  • Responsive Web Design by Ethan Marcotte

(h/t Jeffrey Zeldman)

Books page showing a grid of colorful book covers with titles, authors, and years on a light background.

Ondřej Konečný | Books

Ondřej Konečný’s personal website.

ondrejkonecny.com

Critiques are the lifeblood of design. Anyone who went to design school has participated in and has been the focus of a crit. It’s “the intentional application of adversarial thought to something that isn’t finished yet,” as Fabricio Teixeira and Caio Braga, the editors of DOC, put it.

A lot of solo designers—whether they’re a design team of one or a freelancer—don’t have the luxury of critiques. In my view, they’re handicapped. There are workarounds, of course, such as critiques with cross-functional peers, but it’s not the same. I had one designer on my team—who used to be a design team of one in her previous company—come up to me and say she’s learned more in a month than in a year at her former job.

Further down, Teixeira and Braga say:

In the age of AI, the human critique session becomes even more important. LLMs can generate ideas in 5 seconds, but stress-testing them with contextual knowledge, taste, and vision, is something that you should be better at. As AI accelerates the production of “technically correct” and “aesthetically optimized” work, relying on just AI creates the risks of mediocrity. AI is trained to be predictable; crits are all about friction: political, organizational, or strategic.

Critique

On elevating craft through critical thinking.

doc.cc
Close-up of a Frankenstein-like monster face with stitched scars and neck bolts, overlaid by horizontal digital glitch bars

Architects and Monsters

According to recently unsealed court documents, Meta discontinued its internal studies on Facebook’s impact after discovering direct evidence that its platforms were detrimental to users’ mental health.

Jeff Horwitz reporting for Reuters:

In a 2020 research project code-named “Project Mercury,” Meta scientists worked with survey firm Nielsen to gauge the effect of “deactivating” Facebook, according to Meta documents obtained via discovery. To the company’s disappointment, “people who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness and social comparison,” internal documents said.

Rather than publishing those findings or pursuing additional research, the filing states, Meta called off further work and internally declared that the negative study findings were tainted by the “existing media narrative” around the company.

Privately, however, a staffer insisted that the conclusions of the research were valid, according to the filing.

As more and more evidence comes to light about Mark Zuckerberg and Meta’s failings and possibly criminal behavior, we as tech workers, and specifically as designers making technology that billions of people use, have to do better. While my previous essay, written after the assassination of Charlie Kirk, was an indictment of the algorithm, I’ve come across a couple of pieces recently that bring the responsibility closer to UX’s doorstep.

David Kelley is an icon in design. A restless tinkerer turned educator, he co-founded the renowned industrial design firm IDEO, helped shape human-centered design at Stanford’s d.school, and collaborated with Apple on seminal projects like the early mouse.

Here’s his take on creativity in a brief segment for PBS News Hour:

And as I started teaching, I realized that my purpose in life was figuring out how to help people gain confidence in their creative ability. Many people assume they’re not creative. Time and time again, they say, a teacher told me I wasn’t creative or that’s not a very good drawing of a horse or whatever it is. We don’t have to teach creativity. Once we remove the blocks, they can then feel themselves as being a creative person. Witnessing somebody realizing they’re creative for the first time is just a complete joy. You can just see them come out of the shop and beaming that I can weld. Like, what’s next?

Older man with glasses and a mustache seated at a workshop workbench, shelves of blue parts bins and tools behind him.

David Kelley's Brief But Spectacular take on creativity and design

For decades, David Kelley has helped people unlock their creativity. A pioneer of design, he founded the Stanford d.school as a place for creative, cross-disciplinary problem solving. He reflects on the journey that shaped his belief that everyone has the capacity to be creative and his Brief But Spectacular take on creativity and design.

pbs.org

I’ve been playing with my systems in the past month—switching browsers, notetaking apps, and RSS feed readers. If I’m being honest, it’s causing me anxiety because I feel unmoored. My systems aren’t familiar enough to let me be efficient.

One thing that has stayed relatively stable is my LLM app—well, two of them. ChatGPT for everyday and Claude for coding and writing.

Christina Wodtke, writing on her blog:

The most useful model might not win.

What wins is the model that people don’t want to leave. The one that feels like home. The one where switching would mean losing something—not just access to features, but fluency, comfort, all those intangible things that make a tool feel like yours.

Amazon figured this out with Prime. Apple figured it out with the ecosystem. Salesforce figured it out by making itself so embedded in enterprise workflows that ripping it out would require an act of God.

AI companies are still acting like this is a pure technology competition. It’s not. It’s a competition to become essential—and staying power comes from experience, not raw capability.

Your moat isn’t your model. Your moat is whether users feel at home.

Solid black square filling the frame

UX Is Your Moat (And You’re Ignoring It)

Last week, Google released Nano Banana Pro, their latest image generator. The demos looked impressive. I opened Gemini to try it. Then I had a question I needed to ask. Something unrelated to image…

eleganthack.com
Escher-like stone labyrinth of intersecting walkways and staircases populated by small figures and floating rectangular screens.

Generative UI and the Ephemeral Interface

This week, Google debuted their Gemini 3 AI model to great fanfare and positive reviews. Specs-wise, it tops the benchmarks. This horserace has seen Google, Anthropic, and OpenAI trade leads each time a new model is released, so I’m not really surprised there. The interesting bit for us designers isn’t the model itself, but the upgraded Gemini app that can create user interfaces on the fly. Say hello to generative UI.

I will admit that I’ve been skeptical of the notion of generative user interfaces. I was imagining an app for work, like a design app, that would rearrange itself depending on the task at hand. In other words, it’s dynamic and contextual. Adobe has tried a proto-version of this with the contextual task bar. Theoretically, it surfaces the most pertinent three or four actions based on your current task. But I find that it just gets in the way.

When Interfaces Keep Moving

Others have been less skeptical. More than 18 months ago, NN/g published an article speculating about genUI and how it might manifest in the future. They define it as:

A generative UI (genUI) is a user interface that is dynamically generated in real time by artificial intelligence to provide an experience customized to fit the user’s needs and context. So it’s a custom UI for that user at that point in time. Similar to how LLMs answer your question: tailored for you and specific to when you asked the original question.

There are dark patterns in UX, and there are also dark patterns specific to games. Dark Pattern Games is a website that catalogs such patterns and the offending mobile games.

The site’s definition of a dark pattern is:

A gaming dark pattern is something that is deliberately added to a game to cause an unwanted negative experience for the player with a positive outcome for the game developer.

The “Social Pyramid Scheme” is one of my most loathed:

Some games will give you a bonus when you invite your friends to play and link them to your account. This bonus may be a one-time benefit, or it may be an ongoing benefit that improves the gaming experience for each friend that you add. This gives players a strong incentive to convince their friends to play. Those friends then have to sign up more friends and so on, leading to a pyramid scheme and viral growth for the game.

Starry background with red pixelated text "Dark Pattern Games", a D-pad icon with red arrows, and URL www.darkpattern.games

DarkPattern.games » Healthy Gaming « Avoid Addictive Dark Patterns

Game reviews to help you find good games that don’t trick you into addictive gaming patterns.

darkpattern.games

Geoffrey Litt is a design engineer at Notion. He is one of the authors at Ink & Switch of “Malleable software,” which I linked to back in July. I think it’s pretty fitting that he popped up at Notion, with its CEO, Ivan Zhao, likening the app to LEGO bricks.

In a recent interview with Rid on Dive Club, Litt explains the concept further:

So, when I say malleable software, I do not mean only disposable software. The main thing I think about with malleable software is actually much closer to … designing my interior space in my house. Let’s say when I come home I don’t want everything to be rearranged, right? I want it to be the way it was. And if I want to move the furniture or put things on the wall, I want to have the right to do that. And so I think of it much more as kind of crafting an environment over time that’s actually more stable and predictable, not only for myself, but also for my team. Having shared environments that we all work in together that are predictable is also really important, right? Ironically, actually, in some ways, I think sometimes malleable software results in more stable software because I have more control.

For building with AI, Litt advocates “coding like a surgeon”: stay in the loop and use agents for prep and grunt work.

How do we think of AI as a way to leverage our time better? [So we can] stay connected to the work and [do] it ourselves by having prep work done for us. Having tools in the moment helping us do it so that we can really focus on the stuff we love to do, and do less of everything else. And that’s how I’m trying to use coding agents for my core work that I care about today. Which is when I show up, sit down at my desk in the morning and work on a feature, I want to be prepped with a brief on all the code I’m going to be touching today, how it works, what the traps are. Maybe I’ll see a draft that the AI did for me overnight, sketching out how the coding could go. Maybe some ideas for me.

In other words, like an assistant who works overnight. And yeah, this could apply to design as well.

Geoffrey Litt - The Future of Malleable Software

AI is fundamentally shifting the way we think about digital products and the core deliverables that we’re bringing to the table as designers. So I asked Geoff…

youtube.com

Something that I think a lot about as a design leader is how to promote the benefits of design in the organization. Paul Boag created this practical guide to guerrilla internal marketing that builds a network of ambassadors across departments and keeps user-centered thinking top of mind.

Boag, writing in his newsletter:

You cannot be everywhere at once. You cannot attend every meeting, influence every decision, or educate every colleague personally. But you can identify and equip people across different departments who care about users and give them the tools to spread UX thinking in their teams.

This is how culture change actually happens. Not through presentations from the UX team, but through conversations between colleagues who trust each other.

Marketing UX Within Your Organization header, man in red beanie with glasses holding papers; author photo, 6‑min read.

Marketing UX Within Your Organization

Learn guerrilla marketing tactics to raise UX awareness and shift your organization's culture without a big budget.

boagworld.com

Design Thinking has gotten a bad rap in recent years. It was supposed to change everything in the corporate world but ended up changing very little. While Design Thinking may not be the darling anymore, designers still need time to think, which is, for the sake of argument, time away from Figma and pushing pixels.

Chris Becker argues in UX Collective:

However, the canary in the coalmine is that Designers are not being used for their “thinking” but rather their “repetition”. Much of the consternation we feel in the UX industry is catapulted on us from this point of friction.

He says that agile software development and time for designers to think aren’t incompatible:

But allowing Designers to implement their thinking into the process is about trust. When good software teams collaborate effectively, there are high levels of trust and autonomy (a key requirement of agile teams). Designers must earn that trust, of course, and when we demonstrate that we have “done the thinking,” it builds confidence and garners more thinking time. Thinking begets thinking. So, Designers, let’s continue to work to maximise our “thinking” faculties.

Hand-drawn diagram titled THINKING: sensory icons and eyeballs feed a brain, plus a phone labeled "Illusory Truth Effect," leading to outputs labeled "Habits."

Let designers think

How “Thinking” + “Designing” need to be practiced outside AI.

uxdesign.cc

Game design is fascinating to me. For us designers, “gamification” was all the rage a few years back, inspired by apps like Duolingo that made it fun to progress in a product. Raph Koster outlines a twelve-step, systems-first framework for game design, complete with illustrations. Notice how he uses UX terms like “affordance” because, ultimately, game design is UX.

In step five, “Feedback,” Koster provides an example:

[The player] can’t learn and get better unless [they] get a whole host of information.

  • You need to know what actions – we usually call them verbs — are even available to you. There’s a gas pedal.
  • You need to be able to tell you used a verb. You hear the engine growl as you press the pedal.
  • You need to see that the use of the verb affected the state of the problem, and how it changed. The speedometer moved!
  • You need to be told if the state of the problem is better for your goal, or worse. Did you mean to go this fast?

Sound familiar? It’s Jakob Nielsen’s “Visibility of System Status.”
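
As a toy illustration only (my own sketch, not from Koster’s article, and every name in it is hypothetical), here’s how those four feedback signals might look in code:

```typescript
// Hypothetical sketch: Koster's four feedback signals for a driving "verb,"
// which double as Nielsen's visibility of system status.

type Verb = "accelerate" | "brake";

interface GameState {
  speed: number;       // current speed in mph
  targetSpeed: number; // the player's goal
}

// 1. Tell the player which verbs are even available ("there's a gas pedal").
function availableVerbs(state: GameState): Verb[] {
  return state.speed > 0 ? ["accelerate", "brake"] : ["accelerate"];
}

function applyVerb(state: GameState, verb: Verb): GameState {
  // 2. Acknowledge that the verb was used ("the engine growls").
  console.log(`You pressed: ${verb}`);

  // 3. Show that the action changed the state of the problem ("the speedometer moved").
  const next: GameState = {
    ...state,
    speed: verb === "accelerate" ? state.speed + 5 : Math.max(0, state.speed - 5),
  };
  console.log(`Speed: ${state.speed} -> ${next.speed} mph`);

  // 4. Say whether the change helped or hurt the goal ("did you mean to go this fast?").
  const before = Math.abs(state.speed - state.targetSpeed);
  const after = Math.abs(next.speed - next.targetSpeed);
  console.log(after < before ? "Closer to your target speed." : "Drifting from your target speed.");

  return next;
}

// Usage: a couple of turns of the loop.
let state: GameState = { speed: 0, targetSpeed: 30 };
for (const verb of availableVerbs(state)) console.log(`Available: ${verb}`);
state = applyVerb(state, "accelerate");
state = applyVerb(state, "accelerate");
```

Strip away the game framing and each of those log lines is just system status made visible.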

White-bordered hex grid with red, blue, yellow and black hex tiles marked by dot patterns, clustered on a dark tabletop

Game design is simple, actually

So, let’s just walk through the whole thing, end to end. Here’s a twelve-step program for understanding game design. One: Fun There are a lot of things people call “fun.” But most of them are not u…

raphkoster.com

Apologies for sharing back-to-back articles from NN/g, but this is a good comprehensive index of all the AI-related guides the firm has published. Start here if you’re just getting into it.

Highlights from my POV:

  • Your AI UX Intern: Meet Ari. AI tools in UX act like junior interns whose output serves as a starting draft needing review, specific instructions, and added context. Their work should be checked and not used for final products or decisions without supervision.
  • The Future-Proof Designer. AI speeds up product development and automates design tasks, but creates risks like design marginalization and information overload. Designers must focus on strategic thinking, outcomes, and critical judgment to ensure decisions benefit users and business value.
  • Design Taste vs. Technical Skills in the Era of AI. Generative AI has equalized access to design output, but quality depends on creative discernment and taste, which remain essential for impactful results.

Using AI for UX Work: Study Guide — profile head with magnifying glass, robot face, papers, speech bubble and vector-cursor icons; NN/G logo

Using AI for UX Work: Study Guide

Unsure where to start? Use this collection of links to our articles and videos to learn about the best ways to use artificial intelligence for UX work.

nngroup.com

Leave it to NN/g to evaluate the AI prompt-to-code tool landscape with some rigor. Huei-Hsin Wang and Megan Brown cover over a dozen tools, including ChatGPT, Claude, UX Pilot, Uizard, Relume, Stitch, Bolt, Lovable, v0, Replit, Figma Make, Magic Patterns, and Subframe. They use a human designer as the control.

Among their conclusions:

AI’s limited grasp of design nuances and inconsistent output make it best suited for ideation, concept exploration, and early-phase prototype testing, rather than later stages. While you likely won’t take an AI-generated prototype straight to production, these tools can help you break through creative blocks and explore new directions quickly.

I think the best part is they shared screenshots of outputs in a FigJam board.

Header "Good from Afar, But Far from Good: AI Prototyping in Real Design Contexts" with teal robot icon and dotted wireframe UI.

Good from Afar, But Far from Good: AI Prototyping in Real Design Contexts

AI prototyping tools follow general directions but lack the judgment and nuance of an experienced designer.

nngroup.com

In thinking about the three current AI-native web browsers, Fanny on Medium examines what lessons product designers can take from their different approaches.

On Perplexity Comet:

Design Insight: Comet succeeds by making AI feel like a natural extension of browsing, not an interruption. The sidecar model is brilliant because it respects the user’s primary task (reading, researching, shopping) while offering help exactly when context is fresh. But there’s a trade-off — Comet’s background assistant, which can handle multiple tasks simultaneously while you work, requires extensive permissions and introduces real security concerns.

On ChatGPT Atlas:

Design Insight: Atlas is making a larger philosophical statement — that the future of computing isn’t about better search, it’s about conversation as an interface. The key product decision here is making ChatGPT’s memory and context awareness central. Atlas remembers what sites you’ve visited, what you were working on, and uses that history to personalize responses. Ask “What was that doc I had my presentation plan in?” and it finds it.

On The Browser Company Dia:

Design Insight: Dia is asking the most interesting question — what happens when AI isn’t a sidebar or a search replacement, but a fundamental rethinking of input methods? The insertion cursor, the mouse, the address bar — these are the primitives of computing. Dia is making them intelligent.

She concludes that they “can’t all be right. But they’re probably all pointing at pieces of what comes next.”

I do think it’s a combo and Atlas is likely headed in the right direction. For AI to be truly assistive, it has to have relevant context. Since a lot of our lives are increasingly on the internet via web apps—and nearly everything is a web app these days—ChatGPT’s profile of you will have the most context, including your chats with the chatbot.

I began using Perplexity because I appreciated its accuracy compared with ChatGPT; this was pre-web search. But even with web search built into ChatGPT 5, I still find Perplexity’s (and therefore Comet’s) approach to be more trustworthy.

My conclusion stands though: I’m still waiting on the Arc-Dia-Comet browser smoothie.

Three app icons on dock: blue flower with paper plane, rounded square with sunrise gradient, and dark circle with white arches.

The AI Browser Wars: What Comet, Atlas, and Dia Reveal About Designing for AI-First Experiences

Last week, I watched OpenAI’s Sam Altman announce Atlas with the kind of confidence usually reserved for iPhone launches. “Tabs were…

uxplanet.org

To close us out on Halloween, here’s one more archive full of spooky UX called the Dark Patterns Hall of Shame. It’s managed by a team of designers and researchers, who have dedicated themselves to identifying and exposing dark patterns and unethical design examples on the internet. More than anything, I just love the names some of these dark patterns have, like Confirmshaming, Privacy Zuckering, and Roach Motel.

Small gold trophy above bold dark text "Hall of shame. design" on a pale beige background.

Collection of Dark Patterns and Unethical Design

Discover a variety of dark pattern examples, sorted by category, to better understand deceptive design practices.

hallofshame.design

Celine Nguyen wrote a piece that connects directly to what Ethan Mollick calls “working with wizards” and what SAP’s Ellie Kemery describes as the “calibration of trust” problem. It’s about how the interfaces we design shape the relationships we have with technology.

The through-line is metaphor. For LLMs, that metaphor is conversation. And it’s working—maybe too well:

Our intense longing to be understood can make even a rudimentary program seem human. This desire predates today’s technologies—and it’s also what makes conversational AI so promising and problematic.

When the metaphor is this good, we forget it’s a metaphor at all:

When we interact with an LLM, we instinctively apply the same expectations that we have for humans: If an LLM offers us incorrect information, or makes something up because the correct information is unavailable, it is lying to us. …The problem, of course, is that it’s a little incoherent to accuse an LLM of lying. It’s not a person.

We’re so trapped inside the conversational metaphor that we accuse statistical models of having intent, of choosing to deceive. The interface has completely obscured the underlying technology.

Nguyen points to research showing frequent chatbot users “showed consistently worse outcomes” around loneliness and emotional dependence:

Participants who are more likely to feel hurt when accommodating others…showed more problematic AI use, suggesting a potential pathway where individuals turn to AI interactions to avoid the emotional labor required in human relationships.

However, replacing human interaction with AI may only exacerbate their anxiety and vulnerability when facing people.

This isn’t just about individual users making bad choices. It’s about an interface design that encourages those choices by making AI feel like a relationship rather than a tool.

The kicker is that we’ve been here before. In 1964, Joseph Weizenbaum created ELIZA, a simple chatbot that parodied a therapist:

I was startled to see how quickly and how very deeply people conversing with [ELIZA] became emotionally involved with the computer and how unequivocally they anthropomorphized it…What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.

Sixty years later, we’ve built vastly more sophisticated systems. But the fundamental problem remains unchanged.

The reality is we’re designing interfaces that make powerful tools feel like people. Susan Kare’s icons for the Macintosh helped millions understand computers. But they didn’t trick people into thinking their computers cared about them.

That’s the difference. And it matters.

Old instant-message window showing "MeowwwitsMadix3: heyyy" and "are you mad at me?" with typed reply "no i think im just kinda embarassed" and buttons Warn, Block, Expressions, Games, Send.

how to speak to a computer

against chat interfaces ✦ a brief history of artificial intelligence ✦ and the (worthwhile) problem of other minds

personalcanon.com

Speaking of trusting AI, in a recent episode of Design Observer’s Design As, Lee Moreau speaks with four industry leaders about trust and doubt in the age of AI.

We’ve linked to a story about Waymo before, so here’s Ryan Powell, head of UX at Waymo:

Safety is at the heart of everything that we do. We’ve been at this for a long time, over a decade, and we’ve taken a very cautious approach to how we scale up our technology. As designers, what we have really focused on is that idea that more people will use us as a serious transportation option if they trust us. We peel that back a little bit. Okay, well, how do we design for trust? What does it actually mean?

Ellie Kemery, principal research lead, advancing responsible AI at SAP, on maintaining critical thinking and transparency in AI-driven products:

We need to think about ethics as a part of this because the unintended consequences, especially at the scale that we operate, are just too big, right?

So we focus a lot of our energy on value, delivering the right value, but we also focus a lot of our energy on making sure that people are aware of how the technology came to that output,…making sure that people are in control of what’s happening at all times, because at the end of the day, they need to be the ones making the call.

Everybody’s aware that without trust, there is no adoption. But there is something that people aren’t talking about as much, which is that people should also not blindly trust a system, right? And there’s a huge risk there because, humans we tend to, you know, we’ll try something a couple of times and if it works it works. And then we lose that critical thinking. We stop checking those things and we simply aren’t in a space where we can do that yet. And so making sure that we’re focusing on the calibration of trust, like what is the right amount of trust that people should have to be able to benefit from the technology while at the same time making sure that they’re aware of the limitations.

Bold white letters in a 3x3 grid reading D E S / I G N / A S on a black background, with a right hand giving a thumbs-up over the right column.

Design as Trust | Design as Doubt

Explore how designers build trust, confront doubt, and center equity and empathy in the age of AI with leaders from Adobe, Waymo, RUSH, and SAP

designobserver.com

In this era of AI, we’ve been taught that LLMs are probabilistic, not deterministic, and that they will sometimes hallucinate. There’s a saying in AI circles that humans are right about 80% of the time, and so are AIs. That’s fine, except in domains where anything less than 100% accuracy is unacceptable. Accountants need to be 100% accurate, lest they lose track of money for their clients or businesses.

And that’s the problem Intuit had to solve to roll out their AI agent. Sean Michael Kerner, writing in VentureBeat:

Even when its accounting agent improved transaction categorization accuracy by 20 percentage points on average, they still received complaints about errors.

“The use cases that we’re trying to solve for customers include tax and finance; if you make a mistake in this world, you lose trust with customers in buckets and we only get it back in spoonfuls,” Joe Preston, Intuit’s VP of product and design, told VentureBeat.

So they built an agent that queries data from a multitude of sources and returns those exact results. But do users trust those results? It comes down to a design decision to be transparent:

Intuit has made explainability a core user experience across its AI agents. This goes beyond simply providing correct answers: It means showing users the reasoning behind automated decisions.

When Intuit’s accounting agent categorizes a transaction, it doesn’t just display the result; it shows the reasoning. This isn’t marketing copy about explainable AI, it’s actual UI displaying data points and logic.

“It’s about closing that trust loop and making sure customers understand the why,” Alastair Simpson, Intuit’s VP of design, told VentureBeat.
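
As a minimal sketch of that idea (my own illustration, not Intuit’s actual schema; every name below is hypothetical), a categorization result that “shows the reasoning” might carry its evidence alongside the answer:

```typescript
// Hypothetical shape for an explainable categorization result:
// the answer travels with the data points and logic behind it.

interface EvidencePoint {
  source: string; // e.g., "bank feed", "vendor history"
  detail: string; // the specific data point surfaced to the user
}

interface CategorizationResult {
  transactionId: string;
  category: string;
  confidence: number;        // 0..1, so the UI can hedge when unsure
  evidence: EvidencePoint[]; // the "why" shown next to the "what"
  reasoning: string;         // plain-language explanation rendered in the UI
}

// Example payload a UI could render next to a categorized transaction.
const example: CategorizationResult = {
  transactionId: "txn_0421",
  category: "Software subscriptions",
  confidence: 0.93,
  evidence: [
    { source: "bank feed", detail: "Recurring $29 charge from the same vendor" },
    { source: "vendor history", detail: "Past charges from this vendor were categorized the same way" },
  ],
  reasoning:
    "Categorized as a software subscription because the charge recurs monthly and matches prior categorizations for this vendor.",
};

console.log(`${example.category} (${Math.round(example.confidence * 100)}% confident)`);
for (const e of example.evidence) {
  console.log(`- ${e.source}: ${e.detail}`);
}
console.log(example.reasoning);
```

The design point is that the explanation is part of the payload, not an afterthought bolted onto the UI.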

Rusty metal bucket tipped over pouring a glowing stream of blue binary digits (ones and zeros) onto a dark surface.

Intuit learned to build AI agents for finance the hard way: Trust lost in buckets, earned back in spoonfuls

The QuickBooks maker's approach to embedding AI agents reveals a critical lesson for enterprise AI adoption: in high-stakes domains like finance and tax, one mistake can erase months of user confidence.

venturebeat.com

Circling back to Monday’s item on how caring is good design, Felix Haas has a subtly different take: build kindness into your products.

Kindness in design isn’t about adding smiley faces or writing cheerful copy. It’s deeper than tone. It’s about intent embedded in every interaction.

Kindness shows up in the patience of an empty state that doesn’t rush you. In the warmth of micro-interactions that acknowledge your actions without demanding attention. In error messages that guide rather than scold. In defaults that assume good intent rather than user incompetence.

These moments seem subtle, even trivial, in isolation. But they accumulate. They shape how we feel about a product over weeks and months. They turn interfaces into relationships. They build trust.

Kind Products Win

Why do so many products feel soulless?

designplusai.com