66 posts tagged with “user interface”

I’ve linked to a footer gallery, a navbar gallery, and now to round us out, here is a full-on Component Gallery. Web developer Iain Bean has been maintaining this library since 2019.

Bean writes in the about page:

The original idea for this site came from A Pattern Language, a 1977 book focused on architecture, building and planning, which describes over 250 ‘patterns’: forms which fit specific contexts, or to put it another way, solutions to design problems. Examples include: ‘Beer hall’, ‘Positive outdoor space’ and ‘Light on two sides of every room’.

Whereas the book focuses on the physical world, my original aim with this site was to focus on those patterns that appear on the web; these often borrow the word ‘pattern’ (see Patterns on the GOV.UK design system), but are more commonly called components, hence ‘the component gallery’ — unlike a component library, most of these components aren’t ready to use off-the-shelf, but they’ll hopefully inspire you to design your own solution to the problem you’re working to solve.

So if you ever need a reference for how different design systems handle certain components (e.g., combobox, segmented control, or toast), this is your site.

The Component Gallery

An up-to-date repository of interface components based on examples from the world of design systems, designed to be a reference for anyone building user interfaces.

component.gallery

Huei-Hsin Wang at NN/g published a post about how to write better prompts for AI prompt-to-code tools.

When we asked AI-prototyping tools to generate a live-training profile page for NN/G course attendees, a detailed prompt yielded quality results resembling what a human designer created, whereas a vague prompt generated inconsistent and unpredictable outcomes across the board.

There’s a lot of detail about what can often go wrong. Personally, I don’t need to read about what I experience daily, so the interesting bit for me is about two-thirds of the way into the article, where Wang lists five strategies for getting better results.

  • Visual intent: Name the style precisely—use concrete design vocabulary or frameworks instead of vague adjectives. Anchor prompts with recognizable patterns so the model locks onto the look and structure, not “clean/modern” fluff.
  • Lightweight references: Drop in moodboards, screenshots, or system tokens to nudge aesthetics without pixel-pushing. Expect resemblance, not perfection; judge outcomes on hierarchy and clarity, not polish alone.
  • Text-led visual analysis: Have AI describe a reference page’s layout and style in natural language, then distill those characteristics into a tighter prompt. Combine with an image when possible to reinforce direction.
  • Mock data first: Provide realistic sample content or JSON so the layout respects information architecture. Content-driven prompts produce better grouping, hierarchy, and actionable UI than filler lorem ipsum.
  • Code snippets for precision: Attach component or layout code from your system or open-source libraries to reduce ambiguity. It’s the most exact context, but watch length; use selectively to frame structure.
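To make the mock-data and visual-intent strategies a little more concrete, here’s a rough, hypothetical sketch of what such a prompt could look like; the attendee fields, style anchors, and wording are all made up, not taken from Wang’s article.

```typescript
// Hypothetical prompt assembly: bundle realistic sample data and concrete style
// anchors so the tool designs around real information architecture, not lorem ipsum.
const mockAttendee = {
  name: "Dana Whitaker",
  role: "Senior UX Researcher",
  course: "Measuring UX and ROI",
  sessionsCompleted: 3,
  sessionsTotal: 5,
  nextSession: { title: "Analytics and UX", date: "2025-03-14" },
};

const prompt = `
Build a live-training profile page for a course attendee.
Visual intent: card-based layout, 8px spacing grid, Material Design 3 styling.
Use this sample data verbatim as the content model:
${JSON.stringify(mockAttendee, null, 2)}
Show sessionsCompleted / sessionsTotal as a labeled progress bar,
and make the next session the primary call to action.
`;
```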
Prompt to Design Interfaces: Why Vague Prompts Fail and How to Fix Them

Create better AI-prototyping designs by using precise visual keywords, references, analysis, as well as mock data and code snippets.

nngroup.com

This is a fascinating watch. Ryo Lu, Head of Design at Cursor, builds a retro Mac calculator using Cursor agents while being interviewed. Lu’s personal website is an homage to Mac OS X, complete with Aqua-style UI elements. He runs multiple local background agents without them stepping on each other, fixes bugs live, and themes the UI to match system styles so it feels designed—not “purple AI slop,” as he calls it.

Lu, as interviewed by Peter Yang, on how engineers and designers work together at Cursor (lightly edited for clarity):

So at Cursor, the roles between designers, PM, and engineers are really muddy. We kind of do the part [that is] our unique strength. We use the agent to tie everything. And when we need help, we can assemble people together to work on the thing.

Maybe some of [us] focus more on the visuals or interactions. Some focus more on the infrastructure side of things, where you design really robust architecture to scale the thing. So yeah, there is a lot less separation between roles and teams or even tools that we use. So for doing designs…we will maybe just prototype in Cursor, because that lets us really interact with the live states of the app. It just feels a lot more real than some pictures in Figma.

And surprisingly, they don’t have official product managers at Cursor. Yang asks, “Did you actually hire a PM? Because last time I talked to Lee [Robinson] there was like no PMs.”

Lu again, and edited lightly for clarity:

So we did not hire a PM yet, but we do have an engineer who used to be a founder. He took a lot more of the PM-y side of the job, and then became the first PM of the company. But I would still say a lot of the PM jobs are kind of spread across the builders in the team.

That mostly makes sense because it’s engineers building tools for engineers. You are your audience, which is rare.

Full Tutorial: Design to Code in 45 Min with Cursor's Head of Design | Ryo Lu

Design-to-code tutorial: Watch Cursor's Head of Design Ryo Lu build a retro Mac calculator with agents - a 45-minute, hands-on walkthrough to prototype and ship

youtube.com

Oliver West argues in UX Magazine that UX designers aren’t monolithic—meaning we’re not all the same and don’t all see the world the same way.

West:

UX is often described as a mix of art and science, but that definition is too simple. The truth is, UX is a spectrum made up of three distinct but interlinked lenses:

  • Creativity: Bringing clarity, emotion, and imagination to how we solve problems.
  • Science: Applying evidence, psychology, and rigor to understand behavior.
  • Business: Focusing on relevance, outcomes, and measurable value.

Every UX professional looks through these lenses differently. And that’s exactly how it should be.

He then outlines how those who are more focused on certain parts of the spectrum may be better suited to more specialized roles. For example, if you’re more focused on creativity, you might be more of a UI designer:

UI Designers lead with the creative lens. Their strength lies in turning complex ideas into interfaces that feel intuitive, elegant, and emotionally engaging. But the best UI Designers also understand the science of usability and the business context behind what they’re designing.

I think for product designers working in the startup world, you actually do need all three lenses, as it were. But with a bias towards Science and Business.

Glass triangular prism with red and blue reflections on a blue surface; overlay text about UX being more than one skill and using three lenses.

The Three Lenses of UX: Because Not All UX Is the Same

Great designers don’t do everything; they see the world through different lenses: creative, scientific, and strategic. This article explains why those differences aren’t flaws, but rather the core reason UX works, and how identifying your own lens can transform careers, hiring, and collaboration. If you’ve ever wondered why “unicorn” designers don’t exist, this perspective explains why.

uxmag.com

When Figma acquired Weavy last month, I wrote a little bit about node-based UIs and ComfyUI. Looks like Adobe has been exploring this user interface paradigm as well.

Daniel John writes in Creative Bloq:

Project Graph is capable of turning complex workflows into user-friendly UIs (or ‘capsules’), and can access tools from across the Creative Cloud suite, including Photoshop, Illustrator and Premiere Pro – making it a potentially game-changing tool for creative pros.

But it isn’t just Adobe’s own tools that Project Graph is able to tap into. It also has access to the multitude of third party AI models Adobe recently announced partnerships with, including those made by Google, OpenAI and many more.

These tools can be used to build a node-based workflow, which can then be packaged into a streamlined tool with a deceptively simple interface.

And from Adobe’s blog post about Project Graph:

Project Graph is a new creative system that gives artists and designers real control and customization over their workflows at scale. It blends the best AI models with the capabilities of Adobe’s creative tools, such as Photoshop, inside a visual, node-based editor so you can design, explore, and refine ideas in a way that feels tactile and expressive, while still supporting the precision and reliability creative pros expect.

I’ve been playing around with ComfyUI a lot recently (more about this in a future post), so I’m very excited to see how this kind of UI can fit into Adobe’s products.

Stylized dark grid with blue-purple modular devices linked by cables, central "Ps" Photoshop

Adobe just made its most important announcement in years

Here’s why Project Graph matters for creatives.

creativebloq.com

On Corporate Maneuvers Punditry

Mark Gurman, writing for Bloomberg:

Meta Platforms Inc. has poached Apple Inc.’s most prominent design executive in a major coup that underscores a push by the social networking giant into AI-equipped consumer devices.

The company is hiring Alan Dye, who has served as the head of Apple’s user interface design team since 2015, according to people with knowledge of the matter. Apple is replacing Dye with longtime designer Stephen Lemay, according to the people, who asked not to be identified because the personnel changes haven’t been announced.

I don’t regularly cover personnel moves here, but Alan Dye jumping over to Meta has been a big deal in the Apple news ecosystem. John Gruber, in a piece titled “Bad Dye Job” on his Daring Fireball blog, wrote a scathing takedown of Dye, excoriating his tenure at Apple and flogging him for going over to Meta, which is arguably Apple’s arch nemesis.

Putting Alan Dye in charge of user interface design was the one big mistake Jony Ive made as Apple’s Chief Design Officer. Dye had no background in user interface design — he came from a brand and print advertising background. Before joining Apple, he was design director for the fashion brand Kate Spade, and before that worked on branding for the ad agency Ogilvy. His promotion to lead Apple’s software interface design team under Ive happened in 2015, when Apple was launching Apple Watch, their closest foray into the world of fashion. It might have made some sense to bring someone from the fashion/brand world to lead software design for Apple Watch, but it sure didn’t seem to make sense for the rest of Apple’s platforms. And the decade of Dye’s HI leadership has proven it.

I usually appreciate Gruber’s writing and take on things. He’s unafraid to tell it like it is and to be incredibly direct. Which makes people love him and fear him. But in paragraph after paragraph, Gruber just lays into Dye.

It’s rather extraordinary in today’s hyper-partisan world that there’s nearly universal agreement amongst actual practitioners of user-interface design that Alan Dye is a fraud who led the company deeply astray. It was a big problem inside the company too. I’m aware of dozens of designers who’ve left Apple, out of frustration over the company’s direction, to work at places like LoveFrom, OpenAI, and their secretive joint venture io. I’m not sure there are any interaction designers at io who aren’t ex-Apple, and if there are, it’s only a handful. From the stories I’m aware of, the theme is identical: these are designers driven to do great work, and under Alan Dye, “doing great work” was no longer the guiding principle at Apple. If reaching the most users is your goal, go work on design at Google, or Microsoft, or Meta. (Design, of course, isn’t even a thing at Amazon.) Designers choose to work at Apple to do the best work in the industry. That has stopped being true under Alan Dye. The most talented designers I know are the harshest critics of Dye’s body of work, and the direction in which it’s been heading.

Designers can be great at more than one thing and they can evolve. Being in design leadership does not mean that you need to be the best practitioner of all the disciplines, but you do need to have the taste, sensibilities, and judgement of a good designer, no matter how you started. I’m a case in point. I studied traditional graphic design in art school. But I’ve been in digital design for most of my career now, and product design for the last 10 years.

Has UI at Apple gotten worse over the last 10 years? Maybe. I will need to analyze things a lot more carefully. But I vividly remember having debates with my fellow designers about Mac OS X UI choices like the pinstriping, brushed metal, and many, many inconsistencies when I was working in the Graphic Design Group in 2004. UI design has never been perfect in Cupertino.

Alan Dye isn’t a CEO and wasn’t even at the same exposure level as Jony Ive when he was still at Apple. I don’t know Dye, though we’re certainly in the same design circles—we have 20 shared connections on LinkedIn. But as far as I’m concerned, he’s a civilian because he kept a low profile, like all Apple employees.

The parasocial relationships we have with tech executives are weird. I guess it’s one thing if they have a large online presence like Instagram’s Adam Mosseri or 37signals’ David Heinemeier Hansson (aka DHH), but Alan Dye made only a couple of appearances in Apple keynotes and talked about Liquid Glass. In other words, why is Gruber writing 2,500 words in this particular post? And it’s just one of five posts covering this story.

Anyway, I’m not a big fan of Meta, but maybe Dye can bring some ethics to the design team over there. Who knows. Regardless, I am wishing him well rather than taking him down.

Escher-like stone labyrinth of intersecting walkways and staircases populated by small figures and floating rectangular screens.

Generative UI and the Ephemeral Interface

This week, Google debuted their Gemini 3 AI model to great fanfare and reviews. Specs-wise, it tops the benchmarks. This horserace has seen Google, Anthropic, and OpenAI trade leads each time a new model is released, so I’m not really surprised there. The interesting bit for us designers isn’t the model itself, but the upgraded Gemini app that can create user interfaces on the fly. Say hello to generative UI.

I will admit that I’ve been skeptical of the notion of generative user interfaces. I was imagining an app for work, like a design app, that would rearrange itself depending on the task at hand. In other words, it’s dynamic and contextual. Adobe has tried a proto-version of this with the contextual task bar. Theoretically, it surfaces the three or four most pertinent actions based on your current task. But I find that it just gets in the way.

When Interfaces Keep Moving

Others have been less skeptical. More than 18 months ago, NN/g published an article speculating about genUI and how it might manifest in the future. They define it as:

A generative UI (genUI) is a user interface that is dynamically generated in real time by artificial intelligence to provide an experience customized to fit the user’s needs and context. So it’s a custom UI for that user at that point in time. Similar to how LLMs answer your question: tailored for you and specific to when you asked the original question.

Leave it to NN/g to evaluate the AI prompt-to-code tool landscape with some rigor. Huei-Hsin Wang and Megan Brown cover over a dozen tools, including ChatGPT, Claude, UX Pilot, Uizard, Relume, Stitch, Bolt, Lovable, v0, Replit, Figma Make, Magic Patterns, and Subframe. They use a human designer as the control.

Among their conclusions:

AI’s limited grasp of design nuances and inconsistent output make it best suited for ideation, concept exploration, and early-phase prototype testing, rather than later stages. While you likely won’t take an AI-generated prototype straight to production, these tools can help you break through creative blocks and explore new directions quickly.

I think the best part is they shared screenshots of outputs in a FigJam board.

Header "Good from Afar, But Far from Good: AI Prototyping in Real Design Contexts" with teal robot icon and dotted wireframe UI.

Good from Afar, But Far from Good: AI Prototyping in Real Design Contexts

AI prototyping tools follow general directions but lack the judgment and nuance of an experienced designer.

nngroup.com

I’ve been a big fan of node-based UIs since I first experimented with Shake in the early 2000s. It’s kind of weird to wrap your head around, especially if you’re used to layers in Photoshop or Figma. The easiest way to think about nodes is to rotate the layer stack 90 degrees. Each node takes inputs on the left, applies a distinct process to them, and outputs the result on the right. You connect multiple nodes together to process assets into your final composition. Popular apps with node-based workflows today include Unreal Engine (Blueprints), DaVinci Resolve (Fusion and Color), and n8n.
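As a rough illustration of that mental model (my sketch, not any app’s actual API), here’s what a tiny node chain might look like in code: each node takes inputs, applies one operation, and its output becomes the next node’s input.

```typescript
// A minimal, hypothetical sketch of the node mental model: each node transforms
// its inputs and hands the result downstream, like a layer stack rotated 90 degrees.
type Image = { width: number; height: number; pixels: Float32Array };

type NodeFn = (...inputs: Image[]) => Image;

const blur = (radius: number): NodeFn => (src) =>
  ({ ...src }); // placeholder: a real node would convolve the pixels by `radius`

const over = (): NodeFn => (fg, bg) =>
  ({ ...bg }); // placeholder: a real node would composite fg on top of bg

// "Connecting" nodes just means feeding one node's output into the next node's input.
function finalComposite(foreground: Image, background: Image): Image {
  const softened = blur(4)(foreground); // node 1: Blur
  return over()(softened, background);  // node 2: Over (composite)
}
```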

ComfyUI is another open source tool that uses the same node-graph architecture. Created in 2023 to put a UI on the visual generative AI models like Stable Diffusion that were appearing around that time, it’s become popular among artists for wielding the plethora of image and video gen AI models.

Fast-forward to last week, when Figma announced they had acquired Weavy, a much friendlier and cloud-based version of ComfyUI.

Weavy brings the world’s leading AI models together with professional editing tools on a single, browser-based canvas. With Weavy, you can choose the model you want for a task (e.g. Seedance, Sora, and Veo for cinematic video; Flux and Ideogram for realism; and Nano-Banana or Seedream for precision) and compose powerful primitives using generative AI outputs and hands-on edits (e.g. adjusting lighting, masking an object, color grading a shot). The end result is an inspiring environment for creative exploration and a flexible media pipeline where every output feeds the next.

This node-based approach brings a new level of craft and control to AI generation. Outputs can be branched, remixed, and refined, combining creative exploration with precision and craft. The Weavy team has inspired us with the balance they’ve struck between simplicity, approachability, and power. They’ve also created a tool that’s just a joy to use.

I must admit I had not heard about Weavy before the announcement. I had high hopes for Visual Electric, but it never quite lived up to its ambitions. I proceeded to watch all the official tutorial videos on YouTube, and I love it. It seems so much easier to use than ComfyUI. Let’s see what Figma does with the product.

Node-based image editor with connected panels showing a man in a rowboat on water then composited floating over a deep canyon.

Introducing Figma Weave: the next generation of AI-native creation at Figma

Figma has acquired Weavy, a platform that brings generative AI and professional editing tools into the open canvas.

figma.com

I’ve been on the receiving end of Layer 1226 before and it’s not fun. While I’m pretty good with my layer naming hygiene, I’m not perfect. So I welcome anything that can help rename my layers. Apparently, when Adobe showed off this new AI feature at their Adobe MAX user conference last week, it drew a big round of applause. (Figma’s had this feature since June 2024.)

There’s more than just renaming layers though. Adobe is leaning into conversational UI for editing too. For new users coming to editing tools, this makes a lot of sense because the learning curve for Photoshop is very steep. But as I’ve always said, professionals will also need fine-grained controls.

Writing for CNET, Katelyn Chedraoui:

Renaming layers is just one of many things Adobe’s new AI assistants will be able to do. These chatbot-like tools will be added to Photoshop and Express. They have an emphasis on “conversational, agentic” experiences — meaning you can ask the chatbot to make edits, and it can independently handle them.

Express’s AI assistant is similar to using a chatbot. Once you toggle on the tool in the upper left corner, a conversation window pops up. You can ask the AI to change the color of an object or remove an obtrusive element. While pro users might be comfortable making those edits manually, the AI assistant might be more appealing to its less experienced users and folks working under a time crunch.

A peek into Adobe’s future reveals more agentic experiences:

Also announced on Tuesday is Project Moonlight, a new platform in beta on Adobe’s AI hub, Firefly. It’s a new tool that hopes to act as a creative partner. With your permission, it uses your data from Adobe platforms and social media accounts to help you create content. For example, you can ask it to come up with 20 ideas for what to do with your newest Lightroom photos based on your most successful Instagram posts in the past. 

These AI efforts represent a range of what conversational editing can look like, Mike Polner, Adobe Firefly’s vice president of product marketing for creators, said in an interview.

“One end of the spectrum is [to] type in a prompt and say, ‘Make my hat blue.’ That’s very simplistic,” said Polner. “With Project Moonlight, it can understand your context, explore and help you come up with new ideas and then help you analyze the content that you already have,” Polner said.

Photoshop AI Assistant UI over stone church landscape with large 'haven' text and command bubbles like 'Increase saturation'.

Photoshop’s New AI Assistant Can Rename All Your Layers So You Don’t Have To

The chatbot-like AI assistant isn’t out yet, but there is at least one practical way to use it.

cnet.com

To close us out on Halloween, here’s one more archive full of spooky UX called the Dark Patterns Hall of Shame. It’s managed by a team of designers and researchers, who have dedicated themselves to identifying and exposing dark patterns and unethical design examples on the internet. More than anything, I just love the names some of these dark patterns have, like Confirmshaming, Privacy Zuckering, and Roach Motel.

Small gold trophy above bold dark text "Hall of shame. design" on a pale beige background.

Collection of Dark Patterns and Unethical Design

Discover a variety of dark pattern examples, sorted by category, to better understand deceptive design practices.

hallofshame.design

Celine Nguyen wrote a piece that connects directly to what Ethan Mollick calls “working with wizards” and what SAP’s Ellie Kemery describes as the “calibration of trust” problem. It’s about how the interfaces we design shape the relationships we have with technology.

The through-line is metaphor. For LLMs, that metaphor is conversation. And it’s working—maybe too well:

Our intense longing to be understood can make even a rudimentary program seem human. This desire predates today’s technologies—and it’s also what makes conversational AI so promising and problematic.

When the metaphor is this good, we forget it’s a metaphor at all:

When we interact with an LLM, we instinctively apply the same expectations that we have for humans: If an LLM offers us incorrect information, or makes something up because the correct information is unavailable, it is lying to us. …The problem, of course, is that it’s a little incoherent to accuse an LLM of lying. It’s not a person.

We’re so trapped inside the conversational metaphor that we accuse statistical models of having intent, of choosing to deceive. The interface has completely obscured the underlying technology.

Nguyen points to research showing frequent chatbot users “showed consistently worse outcomes” around loneliness and emotional dependence:

Participants who are more likely to feel hurt when accommodating others…showed more problematic AI use, suggesting a potential pathway where individuals turn to AI interactions to avoid the emotional labor required in human relationships.

However, replacing human interaction with AI may only exacerbate their anxiety and vulnerability when facing people.

This isn’t just about individual users making bad choices. It’s about an interface design that encourages those choices by making AI feel like a relationship rather than a tool.

The kicker is that we’ve been here before. In 1964, Joseph Weizenbaum created ELIZA, a simple chatbot that parodied a therapist:

I was startled to see how quickly and how very deeply people conversing with [ELIZA] became emotionally involved with the computer and how unequivocally they anthropomorphized it…What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.

Sixty years later, we’ve built vastly more sophisticated systems. But the fundamental problem remains unchanged.

The reality is we’re designing interfaces that make powerful tools feel like people. Susan Kare’s icons for the Macintosh helped millions understand computers. But they didn’t trick people into thinking their computers cared about them.

That’s the difference. And it matters.

Old instant-message window showing "MeowwwitsMadix3: heyyy" and "are you mad at me?" with typed reply "no i think im just kinda embarassed" and buttons Warn, Block, Expressions, Games, Send.

how to speak to a computer

against chat interfaces ✦ a brief history of artificial intelligence ✦ and the (worthwhile) problem of other minds

personalcanon.com

Circling back to Monday’s item on how caring is good design, Felix Haas has a subtly different take: build kindness into your products.

Kindness in design isn’t about adding smiley faces or writing cheerful copy. It’s deeper than tone. It’s about intent embedded in every interaction.

Kindness shows up in the patience of an empty state that doesn’t rush you. In the warmth of micro-interactions that acknowledge your actions without demanding attention. In error messages that guide rather than scold. In defaults that assume good intent rather than user incompetence.

These moments seem subtle, even trivial, in isolation. But they accumulate. They shape how we feel about a product over weeks and months. They turn interfaces into relationships. They build trust.

Kind Products Win

Why do so many products feel soulless?

designplusai.com

I think these guidelines from Vercel are great. It’s a one-pager and very clearly written for both humans and AI. It reminds me of the old-school MailChimp brand voice guidelines and Apple’s Human Interface Guidelines, which have become reference standards.

Web Interface Guidelines

Guidelines for building great interfaces on the web. Covers interactions, animations, layout, content, forms, performance & design.

vercel.com

Nielsen Norman Group weighs in on iOS 26 Liquid Glass. Predictably, they don’t like it. Raluca Budiu:

With iOS 26, Apple seems to be leaning harder into visual design and decorative UI effects — but at what cost to usability? At first glance, the system looks fluid and modern. But try to use it, and soon those shimmering surfaces and animated controls start to get in the way.

I get it. Flat—or mostly flat—and static UI conforms to the heuristics. But honestly, it can get boring and homogenous quickly. Put the NNg microscope on any video game UI and it’ll be torn to shreds, despite gamers learning to adapt quickly.

I’ve had iOS 26 on my phone for just a couple of weeks. I continue to be delighted by the animations and effects. So far, nothing has hindered the usability for me. We’ll see what happens as more and more apps get translated.

Liquid Glass Is Cracked, and Usability Suffers in iOS 26

iOS 26’s visual language obscures content instead of letting it take the spotlight. New (but not always better) design patterns replace established conventions.

nngroup.com

As much as I defended the preview, and as much as Apple wants to make Liquid Glass a thing, the new UI is continuing to draw criticism. Dan Moren for Six Colors:

“Glass” is the overall look of these updates, and it’s everywhere. Transparent, frosted, distorting. In some places it looks quite cool, such as in the edge distortion when you’re swiping up on the lock screen. But elsewhere, it seems to me that glass may not be quite the right material for the job. The Glass House might be architecturally impressive, but it’s not particularly practical.

It’s also a definite philosophical choice, and one that’s going to engender some criticism—much of it well-deserved. Apple has argued that it’s about getting controls out of the way, but is that really what’s happening here? It’s hard to argue that having a transparent button sitting right on top of your email is helping that email be more prominent. To take this argument to its logical conclusion, why is the keyboard not fully transparent glass over our content?

I’ve yet to upgrade myself. I will say that everyone dislikes change. Lest we forget that the now-ubiquitous flat design introduced by iOS 7 was also criticized.

iOS 26 Review: Through a glass, liquidly

iOS 26! It feels like just last year we were here discussing iOS 18. How time flies. After a year that saw the debut of Apple Intelligence and the subsequent controversy over the features that it d…

sixcolors.com

Jason Spielman put up a case study on his site for his work on Google’s NotebookLM:

The mental model of NotebookLM was built around the creation journey: starting with inputs, moving through conversation, and ending with outputs. Users bring in their sources (documents, notes, references), then interact with them through chat by asking questions, clarifying, and synthesizing before transforming those insights into structured outputs like notes, study guides, and Audio Overviews.

And yes, he includes a sketch he did on the back of a napkin.

I’ve always wondered about the UX of NotebookLM. It’s not typical and, if I’m being honest, not exactly super intuitive. But after a while, it does make sense. Maybe I’m the outlier though, because Spielman’s grandmother found it easy. In an interview last year on Sequoia Capital’s Training Data, he recalls:

I actually do think part of the explosion of audio overviews was the fact it was a simple one click experience. I was on the phone with my grandma trying to explain her how to use it and it actually didn’t take any explanation. I’m like, “Drop in a source.” And she’s like, “Oh! I see. I click this button to generate it.” And I think that the ease of creation is really actually what catalyzed so much explosion. So I think when we think about adding these knobs [for customization] I think we want to do it in a way that’s very intentional.

Designing NotebookLM

Designer, builder, and visual storyteller. Now building Huxe. Previously led design on NotebookLM and contributed to Google AI projects like Gemini and Search. Also shoot photo/video for brands like Coachella, GoPro, and Rivian.

jasonspielman.com

Chatboxes have become the uber box for all things AI. The criticism of this blank box has been the cold-start issue: new users don’t know what to type. Designers shipping these products mostly got around this problem by offering suggested prompts to teach users about the possibilities.

The issue on the other end is that expert users end up creating their own library of prompts to copy and paste into the chatbox for repetitive tasks.

Sharang Sharma, writing in UX Collective, illustrates how these UIs can be smarter by being predictive of intent:

Contrary, Predictive UX points to an alternate approach. Instead of waiting for users to articulate every step, systems can anticipate intent based on behavior or common patterns as the user types. Apple Reminders suggests likely tasks as you type. Grammarly predicts errors and offers corrections inline. Gmail’s Smart Compose even predicts full phrases, reducing the friction of drafting entirely.

Sharma says that the goal of predictive UX is to “reduce time-to-value and reframe AI as an adaptive partner that anticipates user’s intent as you type.”

Imagine a little widget that appears within the chatbox as you type. Kind of a cool idea.
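Here’s a minimal, hypothetical sketch of what such a widget could do under the hood: match the draft against a handful of common task patterns, debounced so suggestions appear as the user pauses. The intent list and matching logic are made up for illustration, not from Sharma’s article.

```typescript
// Hypothetical predictive chatbox widget: as the user types, match the draft
// against common task patterns and surface likely intents.
type Intent = { label: string; completion: string };

const commonIntents: Intent[] = [
  { label: "Summarize", completion: "Summarize this document in five bullet points" },
  { label: "Translate", completion: "Translate the following text into English" },
  { label: "Draft email", completion: "Draft a polite follow-up email about" },
];

function suggestIntents(draft: string, max = 3): Intent[] {
  const text = draft.trim().toLowerCase();
  if (text.length < 3) return []; // stay quiet until there's enough signal to work with
  return commonIntents
    .filter(
      (i) =>
        i.completion.toLowerCase().startsWith(text) ||
        i.label.toLowerCase().includes(text)
    )
    .slice(0, max);
}

// Debounce so the widget updates when the user pauses, not on every keystroke.
function debounce<T extends unknown[]>(fn: (...args: T) => void, delayMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

const onDraftChange = debounce((draft: string) => {
  const suggestions = suggestIntents(draft);
  // a real UI would render these inside the chatbox; logging stands in for that
  console.log(suggestions.map((s) => s.label));
}, 200);
```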

How can AI UI capture intent?

Exploring contextual prompt patterns that capture user intent as it is typed

uxdesign.cc

Thinking about this morning’s link about web forms, if you abstract why it’s so powerful, you get to the point of human-computer interaction: the computer should do what the user intends, not the buttons they push.

Matt Webb reminds us about the DWIM, or Do What I Mean philosophy in computing that was coined by Warren Teitelman in 1966. Webb quotes computer scientist Larry Masinter:

DWIM is an embodiment of the idea that the user is interacting with an agent who attempts to interpret the user’s request from contextual information. Since we want the user to feel that he is conversing with the system, he should not be stopped and forced to correct himself or give additional information in situations where the correction or information is obvious.

Webb goes on to say:

Squint and you can see ChatGPT as a DWIM UI: it never, never, never says “syntax error.”

Now, arguably it should come back and ask for clarifications more often, and in particular DWIM (and AI) interfaces are more successful the more they have access to the user’s context (current situation, history, environment, etc).

But it’s a starting point. The algo is: design for capturing intent and then DWIM; iterate until that works. AI unlocks that.

The destination for AI interfaces is Do What I Mean

Posted on Friday 29 Aug 2025. 840 words, 10 links. By Matt Webb.

interconnected.org

Brad Frost, of atomic design fame, wrote a history of themeable UIs as part of a deep dive into design tokens. He writes, “Design tokens may be the latest incarnation, but software creators have been creating themeable user interfaces for quite a long time!”

About Mario and Luigi from Super Mario Bros.:

It’s wild that two of the most iconic characters in the history of pop culture — red-clad Mario and green-clad Luigi — are themeable UI elements born from pragmatic ingenuity to overcome technological challenges. Freaking amazing.
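Design tokens make that same palette-swap trick explicit in code. A minimal sketch, with hypothetical token names and approximate colors (not from any real design system), might look like this:

```typescript
// Hypothetical design-token sketch: the same component structure re-skinned by
// swapping one set of tokens for another (the Mario/Luigi trick, in web terms).
type Tokens = {
  "color-accent": string;
  "color-surface": string;
  "radius-control": string;
};

const marioTheme: Tokens = {
  "color-accent": "#e4000f", // approximate red, purely illustrative
  "color-surface": "#1a1a2e",
  "radius-control": "4px",
};

const luigiTheme: Tokens = {
  "color-accent": "#43b047", // approximate green, purely illustrative
  "color-surface": "#1a1a2e",
  "radius-control": "4px",
};

// Apply a theme by writing tokens as CSS custom properties on the root element
// (assumes a browser environment). Components reference var(--color-accent) etc.
// and never hard-code raw values.
function applyTheme(tokens: Tokens, root: HTMLElement = document.documentElement) {
  for (const [name, value] of Object.entries(tokens)) {
    root.style.setProperty(`--${name}`, value);
  }
}

applyTheme(luigiTheme); // one call swaps the whole palette
```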

The History of Themeable User Interfaces

A full-ish history of user interfaces that can be themed to meet the opportunities and constraints of the time

bradfrost.com

DOC is a publication from Fabricio Teixeira and Caio Braga that I’ve linked to before. Their latest reflection is on interfaces.

A good user interface is a good conversation.

Interfaces thrive on clarity, responsiveness, and mutual understanding. In a productive dialogue, each party clearly articulates their intentions and receives timely, understandable responses. Just as a good conversationalist anticipates the next question or need, a good interface guides you smoothly through your task. At their core, interfaces translate intent into action. They’re a bridge between what’s in your head and what the product can do.

Reflection is the best word I’ve found to describe these pieces. They’re hype-free, urging us to take a step back, and—at least for me—a reminder about our why.

In the end, interfaces are also a space for self-expression.

The ideal of “no interface” promises ultimate efficiency and direct access—but what do we lose in that pursuit? Perhaps the interface is not just a barrier to be minimized, but a space for human expression. It’s a canvas; a place to imbue a product with personality, visual expression, and a unique form of art.

When we strip that away, or make everything look the same, we lose something important. We trade the unique and the delightful for the purely functional. We sacrifice a vital part of what makes technology human: the thoughtful, and sometimes imperfect, ways we present ourselves to the world.

A pixelated hand

DOC • Interface

On connection, multi-modality, and self-expression.

doc.cc

Hard to believe that the Domino’s Pizza tracker debuted in 2008. The moment was ripe for them—about a year after the debut of the iPhone. Mobile e-commerce was in its early days.

Alex Mayyasi for The Hustle:

…the tracker’s creation was spurred by the insight that online orders were more profitable – and made customers more satisfied – than phone or in-person orders. The company’s push to increase digital sales from 20% to 50% of its business led to new ways to order (via a tweet, for example) and then a new way for customers to track their order.

Mayyasi weaves together a tale of business transparency, UI, and content design, tracing—or tracking?—the tracker’s impact on business since then. “The pizza tracker is essentially a progress bar.” But progress bars do so much for the user experience, most of which is setting proper expectations.
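The pattern behind the tracker is small enough to sketch. Here’s a hypothetical version (the stage names approximate Domino’s steps; the copy is invented) showing how a fixed, ordered set of stages maps to both a progress percentage and an expectation-setting message:

```typescript
// Hypothetical sketch of the tracker pattern: an ordered set of stages,
// each paired with copy that sets expectations about what happens next.
const stages = [
  { key: "placed", label: "Order placed", expectation: "We've got your order." },
  { key: "prep", label: "Prep", expectation: "Your pizza is being made." },
  { key: "bake", label: "Bake", expectation: "It's in the oven." },
  { key: "check", label: "Quality check", expectation: "Boxing it up." },
  { key: "delivery", label: "Out for delivery", expectation: "On its way to you." },
] as const;

type StageKey = (typeof stages)[number]["key"];

function trackerState(current: StageKey) {
  const index = stages.findIndex((s) => s.key === current);
  return {
    percent: Math.round(((index + 1) / stages.length) * 100),
    message: stages[index].expectation,
  };
}

trackerState("bake"); // { percent: 60, message: "It's in the oven." }
```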

How the Domino’s pizza tracker conquered the business world

One cheesy progress update at a time.

thehustle.co

I have always wanted to read 6,200 words about color! Sorry, that’s a lie. But I did skim it and really admired the very pretty illustrations. Dan Hollick is a saint for writing and illustrating this chapter in his living book called Making Software, a reference manual for designers and programmers that make digital products. From his newsletter:

I started writing this chapter just trying to explain what a color space is. But it turns out, you can’t really do that without explaining a lot of other stuff at the same time.

Part of the issue is color is really complicated and full of confusing terms that need a maths degree to understand. Gamuts, color models, perceptual uniformity, gamma etc. I don’t have a maths degree but I do have something better: I’m really stubborn.

And here are the opening sentences of the chapter on color:

Color is an unreasonably complex topic. Just when you think you’ve got it figured out, it reveals a whole new layer of complexity that you didn’t know existed.

This is partly because it doesn’t really exist. Sure, there are different wavelengths of light that our eyes perceive as color, but that doesn’t mean that color is actually a property of that light - it’s a phenomenon of our perception.

Digital color is about trying to map this complex interplay of light and perception into a format that computers can understand and screens can display. And it’s a miracle that any of it works at all.
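To give one concrete taste of the machinery the chapter unpacks (my sketch, not Hollick’s code): sRGB values are gamma-encoded, so math on them, like averaging two grays, only behaves sensibly after converting to linear light. The standard sRGB transfer function looks like this:

```typescript
// Standard sRGB transfer functions: decode gamma-encoded values to linear light
// and back. Channel values are in the 0..1 range.
function srgbToLinear(c: number): number {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function linearToSrgb(c: number): number {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}

// Averaging in gamma space vs. linear space gives different answers, which is
// one reason color math that "looks right" on paper can look wrong on screen.
const naiveAverage = (0.2 + 0.8) / 2; // 0.5, averaged in gamma space
const linearAverage = linearToSrgb((srgbToLinear(0.2) + srgbToLinear(0.8)) / 2); // ≈ 0.60
```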

I’m just waiting for him to put up a Stripe link so I can throw money at him.

Making Software: What is a color space?

In which we answer every question you've ever had about digital color, and some you haven't.

makingsoftware.com

Vitaly Friedman writes a good primer on the design possibilities for users to interact with AI features. As AI capabilities become more and more embedded in the products designers make, we have to become facile in manipulating AI as material.

Many products are obsessed with being AI-first. But you might be way better off by being AI-second instead. The difference is that we focus on user needs and sprinkle a bit of AI across customer journeys where it actually adds value.

Design Patterns For AI Interfaces

Designing a new AI feature? Where do you even begin? From first steps to design flows and interactions, here’s a simple, systematic approach to building AI experiences that stick.

smashingmagazine.com