115 posts tagged with “user experience”

Apologies for sharing back-to-back articles from NN/g, but this is a good comprehensive index of all the AI-related guides the firm has published. Start here if you’re just getting into it.

Highlights from my POV:

  • Your AI UX Intern: Meet Ari. AI tools in UX act like junior interns whose output serves as a starting draft needing review, specific instructions, and added context. Their work should be checked and not used for final products or decisions without supervision.
  • The Future-Proof Designer. AI speeds up product development and automates design tasks, but creates risks like design marginalization and information overload. Designers must focus on strategic thinking, outcomes, and critical judgment to ensure decisions benefit users and business value.
  • Design Taste vs. Technical Skills in the Era of AI. Generative AI has equalized access to design output, but quality depends on creative discernment and taste, which remain essential for impactful results.
Using AI for UX Work: Study Guide — profile head with magnifying glass, robot face, papers, speech bubble and vector-cursor icons; NN/G logo

Using AI for UX Work: Study Guide

Unsure where to start? Use this collection of links to our articles and videos to learn about the best ways to use artificial intelligence for UX work.

nngroup.com

Leave it to NN/g to evaluate the AI prompt-to-code tool landscape with some rigor. Huei-Hsin Wang and Megan Brown cover over a dozen tools, including ChatGPT, Claude, UX Pilot, Uizard, Relume, Stitch, Bolt, Lovable, v0, Replit, Figma Make, Magic Patterns, and Subframe. They use a human designer as the control.

Among their conclusions:

AI’s limited grasp of design nuances and inconsistent output make it best suited for ideation, concept exploration, and early-phase prototype testing, rather than later stages. While you likely won’t take an AI-generated prototype straight to production, these tools can help you break through creative blocks and explore new directions quickly.

I think the best part is they shared screenshots of outputs in a FigJam board.

Header "Good from Afar, But Far from Good: AI Prototyping in Real Design Contexts" with teal robot icon and dotted wireframe UI.

Good from Afar, But Far from Good: AI Prototyping in Real Design Contexts

AI prototyping tools follow general directions but lack the judgment and nuance of an experienced designer.

nngroup.com

Thinking about the three current AI-native web browsers, Fanny on Medium looks at what lessons product designers can take from their different approaches.

On Perplexity Comet:

Design Insight: Comet succeeds by making AI feel like a natural extension of browsing, not an interruption. The sidecar model is brilliant because it respects the user’s primary task (reading, researching, shopping) while offering help exactly when context is fresh. But there’s a trade-off — Comet’s background assistant, which can handle multiple tasks simultaneously while you work, requires extensive permissions and introduces real security concerns.

On ChatGPT Atlas:

Design Insight: Atlas is making a larger philosophical statement — that the future of computing isn’t about better search, it’s about conversation as an interface. The key product decision here is making ChatGPT’s memory and context awareness central. Atlas remembers what sites you’ve visited, what you were working on, and uses that history to personalize responses. Ask “What was that doc I had my presentation plan in?” and it finds it.

On The Browser Company Dia:

Design Insight: Dia is asking the most interesting question — what happens when AI isn’t a sidebar or a search replacement, but a fundamental rethinking of input methods? The insertion cursor, the mouse, the address bar — these are the primitives of computing. Dia is making them intelligent.

She concludes that they “can’t all be right. But they’re probably all pointing at pieces of what comes next.”

I do think it’s a combo and Atlas is likely headed in the right direction. For AI to be truly assistive, it has to have relevant context. Since a lot of our lives are increasingly on the internet via web apps—and nearly everything is a web app these days—ChatGPT’s profile of you will have the most context, including your chats with the chatbot.

I began using Perplexity because I appreciated its accuracy compared with ChatGPT; this was pre-web search. But even with web search built into ChatGPT 5, I still find Perplexity’s (and therefore Comet’s) approach to be more trustworthy.

My conclusion stands though: I’m still waiting on the Arc-Dia-Comet browser smoothie.

Three app icons on dock: blue flower with paper plane, rounded square with sunrise gradient, and dark circle with white arches.

The AI Browser Wars: What Comet, Atlas, and Dia Reveal About Designing for AI-First Experiences

Last week, I watched OpenAI’s Sam Altman announce Atlas with the kind of confidence usually reserved for iPhone launches. “Tabs were…

uxplanet.org

To close us out on Halloween, here’s one more archive full of spooky UX called the Dark Patterns Hall of Shame. It’s managed by a team of designers and researchers, who have dedicated themselves to identifying and exposing dark patterns and unethical design examples on the internet. More than anything, I just love the names some of these dark patterns have, like Confirmshaming, Privacy Zuckering, and Roach Motel.

Small gold trophy above bold dark text "Hall of shame. design" on a pale beige background.

Collection of Dark Patterns and Unethical Design

Discover a variety of dark pattern examples, sorted by category, to better understand deceptive design practices.

hallofshame.design

Celine Nguyen wrote a piece that connects directly to what Ethan Mollick calls “working with wizards” and what SAP’s Ellie Kemery describes as the “calibration of trust” problem. It’s about how the interfaces we design shape the relationships we have with technology.

The through-line is metaphor. For LLMs, that metaphor is conversation. And it’s working—maybe too well:

Our intense longing to be understood can make even a rudimentary program seem human. This desire predates today’s technologies—and it’s also what makes conversational AI so promising and problematic.

When the metaphor is this good, we forget it’s a metaphor at all:

When we interact with an LLM, we instinctively apply the same expectations that we have for humans: If an LLM offers us incorrect information, or makes something up because the correct information is unavailable, it is lying to us. …The problem, of course, is that it’s a little incoherent to accuse an LLM of lying. It’s not a person.

We’re so trapped inside the conversational metaphor that we accuse statistical models of having intent, of choosing to deceive. The interface has completely obscured the underlying technology.

Nguyen points to research showing frequent chatbot users “showed consistently worse outcomes” around loneliness and emotional dependence:

Participants who are more likely to feel hurt when accommodating others…showed more problematic AI use, suggesting a potential pathway where individuals turn to AI interactions to avoid the emotional labor required in human relationships.

However, replacing human interaction with AI may only exacerbate their anxiety and vulnerability when facing people.

This isn’t just about individual users making bad choices. It’s about an interface design that encourages those choices by making AI feel like a relationship rather than a tool.

The kicker is that we’ve been here before. In 1964, Joseph Weizenbaum created ELIZA, a simple chatbot that parodied a therapist:

I was startled to see how quickly and how very deeply people conversing with [ELIZA] became emotionally involved with the computer and how unequivocally they anthropomorphized it…What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.

Sixty years later, we’ve built vastly more sophisticated systems. But the fundamental problem remains unchanged.

The reality is we’re designing interfaces that make powerful tools feel like people. Susan Kare’s icons for the Macintosh helped millions understand computers. But they didn’t trick people into thinking their computers cared about them.

That’s the difference. And it matters.

Old instant-message window showing "MeowwwitsMadix3: heyyy" and "are you mad at me?" with typed reply "no i think im just kinda embarassed" and buttons Warn, Block, Expressions, Games, Send.

how to speak to a computer

against chat interfaces ✦ a brief history of artificial intelligence ✦ and the (worthwhile) problem of other minds

personalcanon.com

Speaking of trusting AI, in a recent episode of Design Observer’s Design As, Lee Moreau speaks with four industry leaders about trust and doubt in the age of AI.

We’ve linked to a story about Waymo before, so here’s Ryan Powell, head of UX at Waymo:

Safety is at the heart of everything that we do. We’ve been at this for a long time, over a decade, and we’ve taken a very cautious approach to how we scale up our technology. As designers, what we have really focused on is that idea that more people will use us as a serious transportation option if they trust us. We peel that back a little bit. Okay, well, how do we design for trust? What does it actually mean?

Ellie Kemery, principal research lead, advancing responsible AI at SAP, on maintaining critical thinking and transparency in AI-driven products:

We need to think about ethics as a part of this because the unintended consequences, especially at the scale that we operate, are just too big, right?

So we focus a lot of our energy on value, delivering the right value, but we also focus a lot of our energy on making sure that people are aware of how the technology came to that output,…making sure that people are in control of what’s happening at all times, because at the end of the day, they need to be the ones making the call.

Everybody’s aware that without trust, there is no adoption. But there is something that people aren’t talking about as much, which is that people should also not blindly trust a system, right? And there’s a huge risk there because, as humans, we tend to, you know, we’ll try something a couple of times and if it works, it works. And then we lose that critical thinking. We stop checking those things and we simply aren’t in a space where we can do that yet. And so making sure that we’re focusing on the calibration of trust, like what is the right amount of trust that people should have to be able to benefit from the technology while at the same time making sure that they’re aware of the limitations.

Bold white letters in a 3x3 grid reading D E S / I G N / A S on a black background, with a right hand giving a thumbs-up over the right column.

Design as Trust | Design as Doubt

Explore how designers build trust, confront doubt, and center equity and empathy in the age of AI with leaders from Adobe, Waymo, RUSH, and SAP

designobserver.com

In this era of AI, we’ve been taught that LLMs are probabilistic, not deterministic, and that they will sometimes hallucinate. There’s a saying in AI circles that humans are right about 80% of the time, and so are AIs. But in some domains, anything less than 100% accuracy is unacceptable. Accountants need to be 100% accurate, lest they lose track of money for their clients or businesses.

And that’s the problem Intuit had to solve to roll out their AI agent. Sean Michael Kerner, writing in VentureBeat:

Even when its accounting agent improved transaction categorization accuracy by 20 percentage points on average, they still received complaints about errors.

“The use cases that we’re trying to solve for customers include tax and finance; if you make a mistake in this world, you lose trust with customers in buckets and we only get it back in spoonfuls,” Joe Preston, Intuit’s VP of product and design, told VentureBeat.

So they built an agent that queries data from a multitude of sources and returns those exact results. But do users trust those results? It comes down to a design decision to be transparent:

Intuit has made explainability a core user experience across its AI agents. This goes beyond simply providing correct answers: It means showing users the reasoning behind automated decisions.

When Intuit’s accounting agent categorizes a transaction, it doesn’t just display the result; it shows the reasoning. This isn’t marketing copy about explainable AI, it’s actual UI displaying data points and logic.

“It’s about closing that trust loop and making sure customers understand the why,” Alastair Simpson, Intuit’s VP of design, told VentureBeat.
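
To make that design decision concrete, here’s a minimal sketch (hypothetical types, not Intuit’s actual schema) of a categorization result that carries its evidence and reasoning alongside the answer, so the UI can render the “why” next to the “what”:

```typescript
// Hypothetical shape (not Intuit's schema) for a categorization result
// that carries its own explanation, so the UI can show the "why".

type CategorizedTransaction = {
  category: string;
  confidence: number;   // surfaced so users can calibrate trust
  evidence: string[];   // the data points behind the decision
  explanation: string;  // plain-language reasoning shown in the UI
};

const result: CategorizedTransaction = {
  category: "Software Subscriptions",
  confidence: 0.93,
  evidence: [
    "Vendor: Figma",
    "Recurring monthly charge",
    "11 of 12 prior charges from this vendor categorized as software",
  ],
  explanation:
    "Recurring charge from a known software vendor, consistent with past categorizations.",
};

console.log(`${result.category} (${Math.round(result.confidence * 100)}%): ${result.explanation}`);
```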

Rusty metal bucket tipped over pouring a glowing stream of blue binary digits (ones and zeros) onto a dark surface.

Intuit learned to build AI agents for finance the hard way: Trust lost in buckets, earned back in spoonfuls

The QuickBooks maker's approach to embedding AI agents reveals a critical lesson for enterprise AI adoption: in high-stakes domains like finance and tax, one mistake can erase months of user confidence.

venturebeat.com

Circling back to Monday’s item on how caring is good design, Felix Haas has a subtly different take: build kindness into your products.

Kindness in design isn’t about adding smiley faces or writing cheerful copy. It’s deeper than tone. It’s about intent embedded in every interaction.

Kindness shows up in the patience of an empty state that doesn’t rush you. In the warmth of micro-interactions that acknowledge your actions without demanding attention. In error messages that guide rather than scold. In defaults that assume good intent rather than user incompetence.

These moments seem subtle, even trivial, in isolation. But they accumulate. They shape how we feel about a product over weeks and months. They turn interfaces into relationships. They build trust.

Kind Products Win

Why do so many products feel soulless?

designplusai.com

As UX designers, we try to anticipate the edge cases—what might a user do and how can we ensure they don’t hit any blockers. But beyond the confines of the products we build, we must also remember to anticipate the unintended consequences. How might this product or feature affect the user emotionally? Are we creating bad habits? Are we fomenting rage in pursuit of engagement?

Martin Tomitsch and Steve Baty write in DOC, suggesting some frameworks to anticipate the unpredictable:

Chaos theory describes the observation that even tiny perturbations like the flutter of a butterfly can lead to dramatic, non-linear effects elsewhere over time. Seemingly small changes or decisions that we make as designers can have significant and often unforeseen consequences.

As designers, we can’t directly control the chain of reactions that will follow an action. Reactions are difficult to predict, as they occur depending on factors beyond our direct control.

But by using tools like systems maps, the impact ripple canvas, and iceberg visuals, we can take potential reactions out of the unpredictable pile and shift them into the foreseeable pile.

The UX butterfly effect

Understanding unintended consequences in design and how to plan for them.

doc.cc

Speaking of workslop, here’s an article from NN/g on how to avoid falling into over-reliance on AI in our design field. They call it the “7 Deadly AI Sins for UX Professionals.”

  1. Outsourced Thinking
  2. Wasted Time
  3. Lost Details
  4. Isolated Ideation
  5. Naïve Trust
  6. Bland Taste
  7. Defensive Outlook

As Tanner Kohler writes:

It’s not about avoiding AI. It’s about maintaining your own growth and the quality of your work as you use AI. AI will constantly be changing. Never let yourself slip into repeatedly committing the sins that weaken you and your UX skills.

7 Deadly AI Sins for UX Professionals

Succumbing to AI temptations weakens your UX skills. Strive for the AI virtues to keep yourself strong as you use AI in your work.

nngroup.com

Designer Ben Holliday writes a wonderful deep dive into how caring is good design. In it, he references the conversation that Jony Ive had with Patrick Collison a few months ago. (It’s worth watching in its entirety if you haven’t already.)

Watching the interview back, I was struck by how he spoke about applying care to design, describing how:

“…everyone has the ability to sense the care in designed things because we can all recognise carelessness.”

Talking about the history of industrial design at Apple, Ive speaks about the care that went into the design of every product. That included the care that went into packaging – specifically, things as seemingly inconsequential as how a cable was wrapped and then unpackaged. In reality, these are the kinds of small interactions that millions of people experienced when unboxing the latest iPhone. They’re details people wouldn’t consciously register, but Ive and his team believed they would sense care when those details had been carefully considered and designed.

This approach has always been part of Jony Ive’s design philosophy and the principles applied by his creative teams at Apple. I looked back at an earlier 2015 interview, and at notes I’d made, where he says he believes the majority of our manufactured environment is characterised by carelessness, but that, at Apple, they wanted people to sense care in their products.

The attention to detail and the focus and attention we can all bring to design is care. It’s important.

Holliday’s career has been spent in government, public-sector, and non-profit environments. In other words, he thinks a lot about how design can impact people’s lives at massive scale.

In the past few months, I’ve been drawn to the word ‘careless’ when thinking about the challenges faced by our public services and society. This is especially the case with the framing around the impact of technology in our lives, and increasingly the big bets being made around AI to drive efficiency and productivity.

The word careless can be defined as the failure to give sufficient attention to avoiding harm or errors. Put simply, carelessness can be described as ‘negligence’.

Later, he cites Facebook/Meta’s carelessness when they “used data to target young people when at their most vulnerable,” specifically, body confidence.

Design is care (and sensing carelessness)

Why design is care, and how the experiences we shape and deliver will be defined by how people sense that care in the future.

benholliday.com

Writing for UX Collective, Filipe Nzongo argues that designers should embrace behavior as a fundamental design material—not just to drive metrics or addiction, but to intentionally create products that empower people and foster meaningful, lasting change in their lives.

Behavior should be treated as a design material, just as technology once became our material. If we use behavior thoughtfully, we can create better products. More than that, I believe there is a broader and more meaningful opportunity before us: to design for behavior. Not to make people addicted to products, but to help them grow as human beings, better parents, citizens, students, and professionals. Because if behavior is our medium, then design is our tool for empowerment.

Behavior is our medium

The focus should remain on human

uxdesign.cc

A former colleague of mine, designer Evan Sornstein, wrote a wonderful piece on LinkedIn applying Buddhist principles to design.

Buddhism begins with the recognition that life is marked by impermanence, suffering, and non-self. These aren’t abstract doctrines — they are observations about how the world actually works. Over centuries, these ideas contributed to Japanese aesthetics: wabi-sabi (imperfection), ma (meaningful emptiness), yo no bi (beauty in usefulness), the humility of the shokunin, and the care of omotenashi. What emerges is not a set of rules, but an extraordinary perspective: beauty is inseparable from impermanence; usefulness is inseparable from dignity; care is inseparable from design. In an age when our digital products too often prioritize stickiness and metrics over humanity, these ideas offer a different path. They remind us that design is not about control or cleverness — it’s about connection, trust, and care.

The following eight principles aren’t new “methods” or “laws,” but reflections of this lineage, reframed for product design — though they apply to nearly any creative practice. They are invitations to design with the same attention, humility, and compassion that Buddhism and Japanese aesthetics have carried for centuries.

Designing Emptiness

What Buddhism and Japanese aesthetics teach us about space, meaning, and care in UX It’s been about two years since I first realized I wanted to write this. Looking back, I’ve been on a quiet path for nearly a decade — unknowingly becoming a Buddhist.

linkedin.com

I think these guidelines from Vercel are great. It’s a one-pager, written clearly for both humans and AI. It reminds me of the old-school MailChimp brand voice guidelines and Apple’s Human Interface Guidelines, both of which have become reference standards.

Web Interface Guidelines

Guidelines for building great interfaces on the web. Covers interactions, animations, layout, content, forms, performance & design.

vercel.com

There’s a famous quote attributed to Henry Ford:

If I had asked people what they wanted, they would have said faster horses.

Anton Sten argues that a lot of people use this quote to justify not doing any user (or market) research:

This quote gets thrown around constantly—usually by someone who wants to justify ignoring user research entirely. The logic goes: users don’t know what they want, so why bother asking them?

I think he’s right. The question to ask users isn’t “What should we build?” but “What are your biggest pain points?”

Good research uncovers problems. It reveals pain points. It helps you understand what people are actually struggling with in their daily lives. What they’re working around. What they’ve given up on entirely.

Users aren’t supposed to design your product. That’s your job. But they’re the only ones who can tell you what’s actually broken in their world.

When you focus on understanding problems instead of collecting feature requests, you stop getting “faster horses” and start hearing real needs.

Henry Ford’s horse problem wasn’t about imagination

The famous “faster horses” quote isn’t wrong because users can’t imagine solutions—it’s wrong because it defends lazy research.

antonsten.com

Nielsen Norman Group weighs in on iOS 26 Liquid Glass. Predictably, they don’t like it. Raluca Budiu:

With iOS 26, Apple seems to be leaning harder into visual design and decorative UI effects — but at what cost to usability? At first glance, the system looks fluid and modern. But try to use it, and soon those shimmering surfaces and animated controls start to get in the way.

I get it. Flat—or mostly flat—and static UI conforms to the heuristics. But honestly, it can get boring and homogeneous quickly. Put the NN/g microscope on any video game UI and it’ll be torn to shreds, despite gamers learning to adapt quickly.

I’ve had iOS 26 on my phone for just a couple of weeks. I continue to be delighted by the animations and effects. So far, nothing has hindered the usability for me. We’ll see what happens as more and more apps adopt the new design language.

Liquid Glass Is Cracked, and Usability Suffers in iOS 26

iOS 26’s visual language obscures content instead of letting it take the spotlight. New (but not always better) design patterns replace established conventions.

nngroup.com

In my most recent post, I called out our design profession for our part in developing these addictive products. Jeffrey Inscho widens the lens to the tech industry at large and observes that these platforms are actually publishers:

The executives at these companies will tell you they’re neutral platforms, that they don’t choose what content gets seen. This is a lie. Every algorithmic recommendation is an editorial decision. When YouTube’s algorithm suggests increasingly extreme political content to keep someone watching, that’s editorial. When Facebook’s algorithm amplifies posts that generate angry reactions, that’s editorial. When Twitter’s trending algorithms surface conspiracy theories, that’s editorial.

They are publishers. They have always been publishers. They just don’t want the responsibility that comes with being publishers.

His point is that if these social media platforms are sorting and promoting posts, it’s an editorial approach and they should be treated like newspapers. “It’s like a newspaper publisher claiming they’re not responsible for what appears on their front page because they didn’t write the articles themselves.”

The answer, Inscho argues, is regulation of the algorithms.

Turn Off the Internet

Big tech has built machines designed for one thing: to hold …

staticmade.com

The headline rings true to me because that’s what I look for in designers and how I run my team. The software that we build is too complex and too mission-critical for designers to vibe-code—at least given today’s tooling. But each one of the designers on my team can fill in for a PM when they’re on vacation.

Kai Wong, writing in UX Collective:

One thing I’ve learned, talking with 15 design leaders (and one CEO), is that a ‘designer who codes’ may look appealing, but a ‘designer who understands business’ is far more valuable and more challenging to replace.

You already possess the core skill that makes this transition possible: the ability to understand users with systematic observation and thoughtful questioning.

The only difference, now, is learning to apply that same methodology to understand your business.

Strategic thinking doesn’t require fancy degrees (although it may sometimes help).

Ask strategic questions about business goals. Understand how to balance user and business needs. Frame your design decisions in terms of measurable business impact.

Why many employers want Designers to think like PMs, not Devs

How asking questions, which used to annoy teams, is now critical to UX’s future

uxdesign.cc

As much as I defended the preview, and as much as Apple wants to make Liquid Glass a thing, the new UI is continuing to draw criticism. Dan Moren for Six Colors:

“Glass” is the overall look of these updates, and it’s everywhere. Transparent, frosted, distorting. In some places it looks quite cool, such as in the edge distortion when you’re swiping up on the lock screen. But elsewhere, it seems to me that glass may not be quite the right material for the job. The Glass House might be architecturally impressive, but it’s not particularly practical.

It’s also a definite philosophical choice, and one that’s going to engender some criticism—much of it well-deserved. Apple has argued that it’s about getting controls out of the way, but is that really what’s happening here? It’s hard to argue that having a transparent button sitting right on top of your email is helping that email be more prominent. To take this argument to its logical conclusion, why is the keyboard not fully transparent glass over our content?

I’ve yet to upgrade myself. I will say that everyone dislikes change. Lest we forget that the now-ubiquitous flat design introduced by iOS 7 was also criticized.

iOS 26 Review: Through a glass, liquidly

iOS 26! It feels like just last year we were here discussing iOS 18. How time flies. After a year that saw the debut of Apple Intelligence and the subsequent controversy over the features that it d…

sixcolors.com

Jason Spielman put up a case study on his site for his work on Google’s NotebookLM:

The mental model of NotebookLM was built around the creation journey: starting with inputs, moving through conversation, and ending with outputs. Users bring in their sources (documents, notes, references), then interact with them through chat by asking questions, clarifying, and synthesizing before transforming those insights into structured outputs like notes, study guides, and Audio Overviews.

And yes, he includes a sketch he did on the back of a napkin.

I’ve always wondered about the UX of NotebookLM. It’s not typical and, if I’m being honest, not exactly super intuitive. But after a while, it does make sense. Maybe I’m the outlier though, because Spielman’s grandmother found it easy. In an interview last year on Sequoia Capital’s Training Data, he recalls:

I actually do think part of the explosion of audio overviews was the fact it was a simple one-click experience. I was on the phone with my grandma trying to explain to her how to use it and it actually didn’t take any explanation. I’m like, “Drop in a source.” And she’s like, “Oh! I see. I click this button to generate it.” And I think that the ease of creation is really actually what catalyzed so much explosion. So I think when we think about adding these knobs [for customization] I think we want to do it in a way that’s very intentional.

Designing NotebookLM

Designer, builder, and visual storyteller. Now building Huxe. Previously led design on NotebookLM and contributed to Google AI projects like Gemini and Search. Also shoot photo/video for brands like Coachella, GoPro, and Rivian.

jasonspielman.com

Chatboxes have become the uber-box for all things AI. The perennial criticism of this blank box is the cold-start issue: new users don’t know what to type. Designers shipping these products have mostly gotten around the problem by offering suggested prompts that teach users the possibilities.

The issue on the other end is that expert users end up creating their own library of prompts to copy and paste into the chatbox for repetitive tasks.

Sharang Sharma, writing in UX Collective, illustrates how these UIs can be smarter by predicting intent:

Contrary, Predictive UX points to an alternate approach. Instead of waiting for users to articulate every step, systems can anticipate intent based on behavior or common patterns as the user types. Apple Reminders suggests likely tasks as you type. Grammarly predicts errors and offers corrections inline. Gmail’s Smart Compose even predicts full phrases, reducing the friction of drafting entirely.

Sharma says that the goal of predictive UX is to “reduce time-to-value and reframe AI as an adaptive partner that anticipates user’s intent as you type.”

Imagine a little widget that appears within the chatbox as you type. Kind of a cool idea.
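
As a rough illustration, here’s a minimal TypeScript sketch of that kind of widget; the pattern table below is a hypothetical stand-in for a real intent model or classifier:

```typescript
// Hypothetical sketch of a predictive-intent widget for a chatbox.
// The pattern table stands in for a real intent model.

type Suggestion = { label: string; completion: string };

const intentPatterns: Array<{ match: RegExp; suggestion: Suggestion }> = [
  {
    match: /^summar/i,
    suggestion: { label: "Summarize", completion: "Summarize this document in five bullet points." },
  },
  {
    match: /^remind me/i,
    suggestion: { label: "Set reminder", completion: "Remind me to … at …" },
  },
];

// Debounce so we predict during pauses in typing, not on every keystroke.
function debounce<Args extends unknown[]>(fn: (...args: Args) => void, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

const predictIntent = debounce((draft: string) => {
  const hit = intentPatterns.find((p) => p.match.test(draft));
  if (hit) {
    // A real UI would render this suggestion inline, inside the chatbox.
    console.log(`Suggest: ${hit.suggestion.label} -> ${hit.suggestion.completion}`);
  }
}, 300);

// Wire this to the input's change events; simulated here with one call:
predictIntent("summarize this PDF for me");
```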

How can AI UI capture intent?

Exploring contextual prompt patterns that capture user intent as it is typed

uxdesign.cc

Thinking about this morning’s link about web forms, if you abstract away why it’s so powerful, you get to the whole point of human-computer interaction: the computer should do what the user intends, not merely what the buttons they push say.

Matt Webb reminds us about the DWIM, or Do What I Mean philosophy in computing that was coined by Warren Teitelman in 1966. Webb quotes computer scientist Larry Masinter:

DWIM is an embodiment of the idea that the user is interacting with an agent who attempts to interpret the user’s request from contextual information. Since we want the user to feel that he is conversing with the system, he should not be stopped and forced to correct himself or give additional information in situations where the correction or information is obvious.

Webb goes on to say:

Squint and you can see ChatGPT as a DWIM UI: it never, never, never says “syntax error.”

Now, arguably it should come back and ask for clarifications more often, and in particular DWIM (and AI) interfaces are more successful the more they have access to the user’s context (current situation, history, environment, etc).

But it’s a starting point. The algo is: design for capturing intent and then DWIM; iterate until that works. AI unlocks that.

The destination for AI interfaces is Do What I Mean

Posted on Friday 29 Aug 2025. 840 words, 10 links. By Matt Webb.

interconnected.org

Filling out forms is one of the fundamental things we make users do in software. Whether it’s a login screen, a billing-address form, or a mortgage application, forms are the main method for getting data from users into computer-accessible databases. The human decides which piece of information goes into which column of the database. With AI, form filling should be much simpler.

Luke Wroblewski makes the argument:

With Web forms, the burden is on people to adapt to databases. Today’s AI models, however, can flip this requirement. That is, they allow people to provide information in whatever form they like and use AI to do the work necessary to put that information into the right structure for a database.

How can it work?

With AgentDB connected to an AI model (via an MCP server), a person can simply say “add this” and provide an image, PDF, audio, video, you name it. The model will use AgentDB’s template to decide what information to extract from this unstructured input and how to format it for the database. In the case where something is missing or incomplete, the model can ask for clarification or use tools (like search) to find possible answers.
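
As a rough sketch of the general pattern (this is not AgentDB’s actual API, and callModel is a stand-in for any LLM client), the model gets the target schema along with the unstructured input and either returns a structured row or asks a clarifying question:

```typescript
// Generic sketch of the pattern Wroblewski describes; not AgentDB's API.
// callModel is a stub standing in for any LLM client that returns JSON.

type ExpenseRow = { date: string; vendor: string; amount: number; category: string };
type Clarification = { clarify: string };

// Stubbed model call so the sketch runs; swap in a real LLM client.
async function callModel(prompt: string): Promise<string> {
  return JSON.stringify({ date: "2025-11-03", vendor: "Acme Supply", amount: 42.5, category: "Office" });
}

async function addUnstructured(input: string): Promise<ExpenseRow | Clarification> {
  const prompt =
    `Extract JSON matching {date, vendor, amount, category} from the input below. ` +
    `If a required field is missing, return {"clarify": "<question for the user>"} instead.\n\n${input}`;
  const raw = await callModel(prompt);
  // In practice, validate against the schema before writing to the database.
  return JSON.parse(raw) as ExpenseRow | Clarification;
}

addUnstructured("Receipt: $42.50 at Acme Supply, Nov 3, office stuff").then(console.log);
```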

Unstructured Input in AI Apps Instead of Web Forms

Web forms exist to put information from people into databases. The input fields and formatting rules in online forms are there to make sure the information fits...

lukew.com

DOC is a publication from Fabricio Teixeira and Caio Braga that I’ve linked to before. Their latest reflection is on interfaces.

A good user interface is a good conversation.

Interfaces thrive on clarity, responsiveness, and mutual understanding. In a productive dialogue, each party clearly articulates their intentions and receives timely, understandable responses. Just as a good conversationalist anticipates the next question or need, a good interface guides you smoothly through your task. At their core, interfaces translate intent into action. They’re a bridge between what’s in your head and what the product can do.

Reflection is the best word I’ve found to describe these pieces. They’re hype-free, urging us to take a step back, and—at least for me—a reminder about our why.

In the end, interfaces are also a space for self-expression.

The ideal of “no interface” promises ultimate efficiency and direct access—but what do we lose in that pursuit? Perhaps the interface is not just a barrier to be minimized, but a space for human expression. It’s a canvas; a place to imbue a product with personality, visual expression, and a unique form of art.

When we strip that away, or make everything look the same, we lose something important. We trade the unique and the delightful for the purely functional. We sacrifice a vital part of what makes technology human: the thoughtful, and sometimes imperfect, ways we present ourselves to the world.

A pixelated hand

DOC • Interface

On connection, multi-modality, and self-expression.

doc.cc
Conceptual 3D illustration of stacked digital notebooks with a pen on top, overlaid on colorful computer code patterns.

Why We Still Need a HyperCard for the AI Era

I rewatched the 1982 film TRON for the umpteenth time the other night with my wife. I have always credited this movie as the spark that got me interested in computers. Mind you, I was nine years old when this film came out. I was so excited after watching the movie that I got my father to buy us a home computer—the mighty Atari 400 (note sarcasm). I remember an educational game that came on cassette called “States & Capitals” that taught me, well, the states and their capitals. It also introduced me to BASIC, and after watching TRON, I wanted to write programs!