Really cool interactive playground by designer Braz De Pina to create and remix futuristic interfaces—you know, that stuff you see on sci-fi shows and movies.

KODO-7 — UI Odyssey
Create sci-fi terminal animations. Pure type. No ornaments.

Nice mini-site from Figma showcasing the “iconic interactions” of the last 20 years. It explores how software has become inseparable from how we think and connect—and how AI is accelerating that shift toward adaptive, conversational interfaces. Made with Figma Make, of course.

Yesterday’s software has shaped today’s generation. To understand what’s next as software grows more intelligent, we look back on 20 years of interaction design.
Previously, I linked to Doug O’Laughlin’s piece arguing that UIs are becoming worthless—that AI agents, not humans, will be the primary consumers of software. It’s a provocative claim, and as a designer, I’ve been chewing on it.
Jeff Veen offers the counterpoint. Veen—a design veteran who cofounded Typekit and led products at Adobe—argues that an agentic future doesn’t diminish design. It clarifies it:
An agentic future elevates design into pure strategy, which is what the best designers have wanted all along. Crafting a great user experience is impossible if the way in which the business expresses its capabilities is muddied, vague or deceptive.
This is a more optimistic take than O’Laughlin’s, but it’s rooted in the same observation: when agents strip applications down to their primitives—APIs, CLI commands, raw capabilities (plus data structures, I’d argue)—what’s left is the truth of what a business actually does.
Veen’s framing through responsive design is useful. Remember “mobile first”? The constraint of the small screen forced organizations to figure out what actually mattered. Everything else was cruft. Veen again:
We came to realize that responsive design wasn’t just about layouts, it was about forcing organizations to confront what actually mattered.
Agentic workflows do the same thing, but more radically. If your product can only be expressed through its API, there’s no hiding behind a slick dashboard or clever microcopy.
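To make that concrete, here’s a minimal, hypothetical sketch of what “expressed only through its API” might look like to an agent: just capabilities and data shapes, no dashboard, no microcopy. All of these names are mine for illustration, not anything from Veen or O’Laughlin.

```typescript
// Hypothetical: everything an agent can see of an invoicing product.
// The business, reduced to its primitives.
interface Invoice {
  id: string;
  customerId: string;
  amountCents: number;
  status: "draft" | "sent" | "paid" | "overdue";
}

interface InvoicingAPI {
  createInvoice(customerId: string, amountCents: number): Promise<Invoice>;
  sendInvoice(id: string): Promise<void>;
  listOverdue(): Promise<Invoice[]>;
}
// If these capabilities are muddied, vague, or deceptive, there is no
// slick UI left to paper over it. The agent sees the truth directly.
```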
His closing question is great:
If an agent used your product tomorrow, what truths would it uncover about your organization?
For designers, this is the strategic challenge. The interface layer may become ephemeral—generated on the fly, tailored to the user, disposable. But someone still has to define what the product is. That’s design work. It’s just not pixel work.

How Claude Code is showing us what apps may become
The rise of micro apps describes what’s happening from the bottom up—regular people building their own tools instead of buying software. But there’s a top-down story too: the structural obsolescence of traditional software companies.
Doug O’Laughlin makes the case using a hardware analogy—the memory hierarchy. AI agents are fast, ephemeral memory (like DRAM), while traditional software companies need to become persistent storage (like NAND, or ROM if you’re old school like me). The implication:
Human-oriented consumption software will likely become obsolete. All horizontal software companies oriented at human-based consumption are obsolete.
That’s a bold claim. O’Laughlin goes further:
Faster workflows, better UIs, and smoother integrations will all become worthless, while persistent information, a la an API, will become extremely valuable.
As a designer, this is where I start paying close attention. The argument is that if AI agents become the primary consumers of software—not humans—then the entire discipline of UI design is in question. O’Laughlin names names:
Figma could be significantly disrupted if UIs, as a concept humans create for other humans, were to disappear.
I’m not ready to declare UIs dead. People still want direct manipulation, visual feedback, and the ability to see what they’re doing. But the shift O’Laughlin describes is real: software’s value is migrating from presentation to data. The interface becomes ephemeral—generated on the fly, tailored to the task—while the source of truth persists.
This is what I was getting at in my HyperCard essay: the tools we build tomorrow won’t look like the apps we buy today. They’ll be temporary, personal, and assembled by AI from underlying APIs and data. The SaaS companies that survive will be the ones who make their data accessible to agents, not the ones with the prettiest dashboards.

The age of PDF is over. The time of markdown has begun. Why Memory Hierarchies are the best analogy for how software must change. And why software is unlikely to command the most value.
Last December, Cursor announced their visual editor—a way to edit UI directly in the browser. Karri Saarinen, the designer who co-founded Linear, saw it and called it a trap. Ryo Lu, the head of design at Cursor, pushed back. The Twitter back-and-forth went on for a couple days until they conceded they mostly agreed. Tommy Geoco digs into what the debate actually surfaced.
The traditional way we talk about design tools is floor versus ceiling—does the tool make good design more accessible, or does it push what’s possible? Geoco argues the Saarinen/Lu exchange revealed a second axis: unconstrained exploration versus material exploration. Sketching on napkins versus building in code.
Saarinen’s concern:
Whenever a designer becomes more of a builder, some idealism and creativity dies. It’s not because building is bad, but because you start introducing constraints earlier in the process than you should.
Lu’s counter:
The truth only reveals itself once you start to build. Not when you think about building, not when you sketch possibilities in a protected space, but when you actually make the thing real and let reality talk back.
Both are right, and Geoco’s reframing is useful:
The question is not should designers code. It’s are you using the new speed to explore more territory or just arriving at the same destination faster?
That’s the question I keep asking myself. When I use AI tools, am I discovering ideas I wouldn’t have found otherwise, or am I just getting to obvious ideas faster? The tools make iteration cheap, but cheap iteration on the same territory isn’t progress.
I think about it this way—back when I was starting out, sketching thumbnails was the technique I used. It was very quick and easy to sketch out dozens of ideas in a sketchbook, especially when they were logo or poster ideas. When sketching interaction ideas, the technique is closer to a storyboard—connected thumbnails. But for me, once I get into a high-fidelity design or prototype, there is tremendous pull to just keep tweaking the design rather than coming up with multiple options. In other words, convergence is happening rather than continued divergence.
Two designers: One built Linear. One leads design at Cursor. They got into it on Twitter for 48 hours about the use of AI coding tools in design work. This debate perfectly captures both sides of what’s happening in software design right now. I’ve spent the year exploring how designers are experimenting on both sides of this argument. This is what I’ve found.
I’ve linked to a footer gallery, a navbar gallery, and now to round us out, here is a full-on Component Gallery. Web developer Iain Bean has been maintaining this library since 2019.
Bean writes on the about page:
The original idea for this site came from A Pattern Language, a 1977 book focused on architecture, building and planning, which describes over 250 ‘patterns’: forms which fit specific contexts, or to put it another way, solutions to design problems. Examples include: ‘Beer hall’, ‘Positive outdoor space’ and ‘Light on two sides of every room’.
Whereas the book focuses on the physical world, my original aim with this site was to focus on those patterns that appear on the web; these often borrow the word ‘pattern’ (see Patterns on the GOV.UK design system), but are more commonly called components, hence ‘the component gallery’ — unlike a component library, most of these components aren’t ready to use off-the-shelf, but they’ll hopefully inspire you to design your own solution to the problem you’re working to solve.
So if you ever need a reference for how different design systems handle certain components (e.g., combobox, segmented control, or toast), this is your site.

An up-to-date repository of interface components based on examples from the world of design systems, designed to be a reference for anyone building user interfaces.
Huei-Hsin Wang at NN/g published a post about how to write better prompts for AI prompt-to-code tools.
When we asked AI-prototyping tools to generate a live-training profile page for NN/G course attendees, a detailed prompt yielded quality results resembling what a human designer created, whereas a vague prompt generated inconsistent and unpredictable outcomes across the board.
There’s a lot of detail about what can often go wrong. Personally, I don’t need to read about what I experience daily, so the interesting bit for me is about two-thirds of the way into the article, where Wang lists five strategies for getting better results.

Create better AI-prototyping designs by using precise visual keywords, references, analysis, as well as mock data and code snippets.
This is a fascinating watch. Ryo Lu, Head of Design at Cursor, builds a retro Mac calculator using Cursor agents while being interviewed. Lu’s personal website is an homage to Mac OS X, complete with Aqua-style UI elements. He runs multiple local background agents without them stepping on each other, fixes bugs live, and themes UI to match system styles so it feels designed—not “purple AI slop,” as he calls it.
Lu, as interviewed by Peter Yang, on how engineers and designers work together at Cursor (lightly edited for clarity):
So at Cursor, the roles between designers, PM, and engineers are really muddy. We kind of do the part [that is] our unique strength. We use the agent to tie everything. And when we need help, we can assemble people together to work on the thing.
Maybe some of [us] focus more on the visuals or interactions. Some focus more on the infrastructure side of things, where you design really robust architecture to scale the thing. So yeah, there is a lot less separation between roles and teams or even tools that we use. So for doing designs…we will maybe just prototype in Cursor, because that lets us really interact with the live states of the app. It just feels a lot more real than some pictures in Figma.
And surprisingly, they don’t have official product managers at Cursor. Yang asks, “Did you actually actually hire a PM because last time I talked to Lee [Robinson] there was like no PMs.”
Lu again, and edited lightly for clarity:
So we did not hire a PM yet, but we do have an engineer who used to be a founder. He took a lot more of the PM-y side of the job, and then became the first PM of the company. But I would still say a lot of the PM jobs are kind of spread across the builders in the team.
That mostly makes sense because it’s engineers building tools for engineers. You are your audience, which is rare.
Design-to-code tutorial: Watch Cursor’s Head of Design Ryo Lu build a retro Mac calculator with agents - a 45-minute, hands-on walkthrough to prototype and ship
Oliver West argues in UX Magazine that UX designers aren’t monolithic—meaning we’re not all the same, and we don’t all see the world in the same way.
West:
UX is often described as a mix of art and science, but that definition is too simple. The truth is, UX is a spectrum made up of three distinct but interlinked lenses:
- Creativity: Bringing clarity, emotion, and imagination to how we solve problems.
- Science: Applying evidence, psychology, and rigor to understand behavior.
- Business: Focusing on relevance, outcomes, and measurable value.
Every UX professional looks through these lenses differently. And that’s exactly how it should be.
He then outlines how those who focus more on certain parts of the spectrum may be better suited to specialized roles. For example, if you’re more focused on creativity, you might be more of a UI designer:
UI Designers lead with the creative lens. Their strength lies in turning complex ideas into interfaces that feel intuitive, elegant, and emotionally engaging. But the best UI Designers also understand the science of usability and the business context behind what they’re designing.
I think product designers working in the startup world actually do need all three lenses, as it were, but with a bias towards Science and Business.

Great designers don’t do everything; they see the world through different lenses: creative, scientific, and strategic. This article explains why those differences aren’t flaws, but rather the core reason UX works, and how identifying your own lens can transform careers, hiring, and collaboration. If you’ve ever wondered why “unicorn” designers don’t exist, this perspective explains why.
When Figma acquired Weavy last month, I wrote a little bit about node-based UIs and ComfyUI. Looks like Adobe has been exploring this user interface paradigm as well.
Daniel John writes in Creative Bloq:
Project Graph is capable of turning complex workflows into user-friendly UIs (or ‘capsules’), and can access tools from across the Creative Cloud suite, including Photoshop, Illustrator and Premiere Pro – making it a potentially game-changing tool for creative pros.
But it isn’t just Adobe’s own tools that Project Graph is able to tap into. It also has access to the multitude of third party AI models Adobe recently announced partnerships with, including those made by Google, OpenAI and many more.
These tools can be used to build a node-based workflow, which can then be packaged into a streamlined tool with a deceptively simple interface.
And from Adobe’s blog post about Project Graph:
Project Graph is a new creative system that gives artists and designers real control and customization over their workflows at scale. It blends the best AI models with the capabilities of Adobe’s creative tools, such as Photoshop, inside a visual, node-based editor so you can design, explore, and refine ideas in a way that feels tactile and expressive, while still supporting the precision and reliability creative pros expect.
I’ve been playing around with ComfyUI a lot recently (more about this in a future post), so I’m very excited to see how this kind of UI can fit into Adobe’s products.

Here’s why Project Graph matters for creatives.
Mark Gurman, writing for Bloomberg:
Meta Platforms Inc. has poached Apple Inc.’s most prominent design executive in a major coup that underscores a push by the social networking giant into AI-equipped consumer devices.
The company is hiring Alan Dye, who has served as the head of Apple’s user interface design team since 2015, according to people with knowledge of the matter. Apple is replacing Dye with longtime designer Stephen Lemay, according to the people, who asked not to be identified because the personnel changes haven’t been announced.
I don’t regularly cover personnel moves here, but Alan Dye jumping over to Meta has been a big deal in the Apple news ecosystem. John Gruber, in a piece titled “Bad Dye Job” on his Daring Fireball blog, wrote a scathing takedown of Dye, excoriating his tenure at Apple and flogging him for going over to Meta, which is arguably Apple’s arch nemesis.
Putting Alan Dye in charge of user interface design was the one big mistake Jony Ive made as Apple’s Chief Design Officer. Dye had no background in user interface design — he came from a brand and print advertising background. Before joining Apple, he was design director for the fashion brand Kate Spade, and before that worked on branding for the ad agency Ogilvy. His promotion to lead Apple’s software interface design team under Ive happened in 2015, when Apple was launching Apple Watch, their closest foray into the world of fashion. It might have made some sense to bring someone from the fashion/brand world to lead software design for Apple Watch, but it sure didn’t seem to make sense for the rest of Apple’s platforms. And the decade of Dye’s HI leadership has proven it.
I usually appreciate Gruber’s writing and take on things. He’s unafraid to tell it like it is and to be incredibly direct, which makes people both love and fear him. But in paragraph after paragraph, Gruber just lays into Dye.
It’s rather extraordinary in today’s hyper-partisan world that there’s nearly universal agreement amongst actual practitioners of user-interface design that Alan Dye is a fraud who led the company deeply astray. It was a big problem inside the company too. I’m aware of dozens of designers who’ve left Apple, out of frustration over the company’s direction, to work at places like LoveFrom, OpenAI, and their secretive joint venture io. I’m not sure there are any interaction designers at io who aren’t ex-Apple, and if there are, it’s only a handful. From the stories I’m aware of, the theme is identical: these are designers driven to do great work, and under Alan Dye, “doing great work” was no longer the guiding principle at Apple. If reaching the most users is your goal, go work on design at Google, or Microsoft, or Meta. (Design, of course, isn’t even a thing at Amazon.) Designers choose to work at Apple to do the best work in the industry. That has stopped being true under Alan Dye. The most talented designers I know are the harshest critics of Dye’s body of work, and the direction in which it’s been heading.
Designers can be great at more than one thing and they can evolve. Being in design leadership does not mean that you need to be the best practitioner of all the disciplines, but you do need to have the taste, sensibilities, and judgement of a good designer, no matter how you started. I’m a case in point. I studied traditional graphic design in art school. But I’ve been in digital design for most of my career now, and product design for the last 10 years.
Has UI design at Apple gotten worse over the last 10 years? Maybe. I’d need to analyze things a lot more carefully. But I vividly remember having debates with my fellow designers about Mac OS X UI choices like the pinstriping, brushed metal, and many, many inconsistencies when I was working in the Graphic Design Group in 2004. UI design has never been perfect in Cupertino.
Alan Dye isn’t a CEO, and he wasn’t even at the same exposure level as Jony Ive when Ive was still at Apple. I don’t know Dye, though we’re certainly in the same design circles—we have 20 shared connections on LinkedIn. But as far as I’m concerned, he’s a civilian because he kept a low profile, like all Apple employees.
The parasocial relationships we have with tech executives are weird. I guess it’s one thing if they have a large online presence like Instagram’s Adam Mosseri or 37signals’ David Heinemeier Hansson (aka DHH), but Alan Dye made only a couple of appearances in Apple keynotes and talked about Liquid Glass. In other words, why is Gruber writing 2,500 words in this particular post? And it’s just one of five posts covering this story!
Anyway, I’m not a big fan of Meta, but maybe Dye can bring some ethics to the design team over there. Who knows. Regardless, I am wishing him well rather than taking him down.

This week, Google debuted their Gemini 3 AI model to great fanfare and positive reviews. Specs-wise, it tops the benchmarks. This horserace has seen Google, Anthropic, and OpenAI trade leads each time a new model is released, so I’m not really surprised there. The interesting bit for us designers isn’t the model itself, but the upgraded Gemini app that can create user interfaces on the fly. Say hello to generative UI.
I will admit that I’ve been skeptical of the notion of generative user interfaces. I was imagining an app for work, like a design app, that would rearrange itself depending on the task at hand. In other words, it’s dynamic and contextual. Adobe has tried a proto-version of this with the contextual task bar. Theoretically, it surfaces the three or four most pertinent actions based on your current task. But I find that it just gets in the way.
Others have been less skeptical. More than 18 months ago, NN/g published an article speculating about genUI and how it might manifest in the future. They define it as:
A generative UI (genUI) is a user interface that is dynamically generated in real time by artificial intelligence to provide an experience customized to fit the user’s needs and context. So it’s a custom UI for that user at that point in time. Similar to how LLMs answer your question: tailored for you and specific to when you asked the original question.
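One way to picture that definition in practice: the model returns a structured UI spec for this user at this moment, and the client renders whatever arrives. Here’s a hedged sketch in TypeScript; the spec shape and function names are made up for illustration, not how Gemini actually works.

```typescript
// Hypothetical generative-UI flow: the model emits a UI description as
// data, and the client renders it on the fly. No real API names here.
type UISpec =
  | { kind: "text"; value: string }
  | { kind: "button"; label: string; action: string }
  | { kind: "list"; items: UISpec[] };

// Stand-in for a model call that returns JSON matching UISpec,
// conditioned on the user's prompt and context.
async function generateUI(prompt: string): Promise<UISpec> {
  return {
    kind: "list",
    items: [
      { kind: "text", value: `Results for: ${prompt}` },
      { kind: "button", label: "Refine", action: "refine" },
    ],
  };
}

// The renderer knows the spec vocabulary but nothing about the task;
// the "design" lives in the generated data, not in fixed screens.
function render(spec: UISpec): string {
  switch (spec.kind) {
    case "text":
      return spec.value;
    case "button":
      return `[ ${spec.label} ]`;
    case "list":
      return spec.items.map(render).join("\n");
  }
}

generateUI("plan a trip to Kyoto").then((spec) => console.log(render(spec)));
```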
Leave it to NN/g to evaluate the AI prompt-to-code tool landscape with some rigor. Huei-Hsin Wang and Megan Brown cover over a dozen tools, including ChatGPT, Claude, UX Pilot, Uizard, Relume, Stitch, Bolt, Lovable, v0, Replit, Figma Make, Magic Patterns, and Subframe. They use a human designer as the control.
Among their conclusions:
AI’s limited grasp of design nuances and inconsistent output make it best suited for ideation, concept exploration, and early-phase prototype testing, rather than later stages. While you likely won’t take an AI-generated prototype straight to production, these tools can help you break through creative blocks and explore new directions quickly.
I think the best part is they shared screenshots of outputs in a FigJam board.

AI prototyping tools follow general directions but lack the judgment and nuance of an experienced designer.
I’ve been a big fan of node-based UIs since I first experimented with Shake in the early 2000s. It’s kind of weird to wrap your head around, especially if you’re used to layers in Photoshop or Figma. The easiest way to think about nodes is to rotate the layer stack 90 degrees. Each node takes inputs on the left, performs one distinct operation on them, and emits output on the right. You connect multiple nodes together to process assets and form your final composition. Popular apps with node-based workflows today include Unreal Engine (Blueprints), DaVinci Resolve (Fusion and Color), and n8n.
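If it helps to see that rotation in code, here’s a toy sketch of the idea in TypeScript. The names are mine, not any of these apps’ actual APIs: each node applies one operation to its inputs, and evaluating the final node pulls the whole composition through the graph.

```typescript
// A toy node graph: each node performs one operation on its inputs
// and exposes the result as its output.
type GraphNode<T> = { inputs: GraphNode<T>[]; op: (values: T[]) => T };

// Pull-based evaluation: resolving a node resolves its inputs first.
function evaluate<T>(node: GraphNode<T>): T {
  return node.op(node.inputs.map(evaluate));
}

// A tiny compositing pipeline, with strings standing in for images.
const source: GraphNode<string> = { inputs: [], op: () => "photo" };
const blur: GraphNode<string> = {
  inputs: [source],
  op: ([img]) => `blur(${img})`,
};
const grade: GraphNode<string> = {
  inputs: [blur],
  op: ([img]) => `grade(${img})`,
};

console.log(evaluate(grade)); // "grade(blur(photo))"
```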
ComfyUI is another open-source tool that uses the same node-graph architecture. Created in 2023 to put a UI on the visual generative AI models like Stable Diffusion that were appearing around that time, it’s become popular among artists for wielding the plethora of image and video gen AI models.
Fast-forward to last week, when Figma announced they had acquired Weavy, a much friendlier, cloud-based take on ComfyUI.
Weavy brings the world’s leading AI models together with professional editing tools on a single, browser-based canvas. With Weavy, you can choose the model you want for a task (e.g. Seedance, Sora, and Veo for cinematic video; Flux and Ideogram for realism; and Nano-Banana or Seedream for precision) and compose powerful primitives using generative AI outputs and hands-on edits (e.g. adjusting lighting, masking an object, color grading a shot). The end result is an inspiring environment for creative exploration and a flexible media pipeline where every output feeds the next.
This node-based approach brings a new level of craft and control to AI generation. Outputs can be branched, remixed, and refined, combining creative exploration with precision and craft. The Weavy team has inspired us with the balance they’ve struck between simplicity, approachability, and power. They’ve also created a tool that’s just a joy to use.
I must admit I had not heard about Weavy before the announcement. I had high hopes for Visual Electric, but it never quite lived up to its ambitions. I proceeded to watch all the official tutorial videos on YouTube, and I love it. It seems so much easier to use than ComfyUI. Let’s see what Figma does with the product.

Figma has acquired Weavy, a platform that brings generative AI and professional editing tools into the open canvas.
I’ve been on the receiving end of Layer 1226 before and it’s not fun. While I’m pretty good with my layer naming hygiene, I’m not perfect. So I welcome anything that can help rename my layers. Apparently, when Adobe showed off this new AI feature at their Adobe MAX user conference last week, it drew a big round of applause. (Figma’s had this feature since June 2024.)
There’s more than just renaming layers though. Adobe is leaning into conversational UI for editing too. For new users coming to editing tools, this makes a lot of sense because the learning curve for Photoshop is very steep. But as I’ve always said, professionals will also need fine-grained controls.
Writing for CNET, Katelyn Chedraoui:
Renaming layers is just one of many things Adobe’s new AI assistants will be able to do. These chatbot-like tools will be added to Photoshop and Express. They have an emphasis on “conversational, agentic” experiences — meaning you can ask the chatbot to make edits, and it can independently handle them.
…
Express’s AI assistant is similar to using a chatbot. Once you toggle on the tool in the upper left corner, a conversation window pops up. You can ask the AI to change the color of an object or remove an obtrusive element. While pro users might be comfortable making those edits manually, the AI assistant might be more appealing to its less experienced users and folks working under a time crunch.
A peek into Adobe’s future reveals more agentic experiences:
Also announced on Tuesday is Project Moonlight, a new platform in beta on Adobe’s AI hub, Firefly. It’s a new tool that hopes to act as a creative partner. With your permission, it uses your data from Adobe platforms and social media accounts to help you create content. For example, you can ask it to come up with 20 ideas for what to do with your newest Lightroom photos based on your most successful Instagram posts in the past.
These AI efforts represent a range of what conversational editing can look like, Mike Polner, Adobe Firefly’s vice president of product marketing for creators, said in an interview.
“One end of the spectrum is [to] type in a prompt and say, ‘Make my hat blue.’ That’s very simplistic,” said Polner. “With Project Moonlight, it can understand your context, explore and help you come up with new ideas and then help you analyze the content that you already have,” Polner said.

The chatbot-like AI assistant isn’t out yet, but there is at least one practical way to use it.
To close us out on Halloween, here’s one more archive full of spooky UX called the Dark Patterns Hall of Shame. It’s managed by a team of designers and researchers, who have dedicated themselves to identifying and exposing dark patterns and unethical design examples on the internet. More than anything, I just love the names some of these dark patterns have, like Confirmshaming, Privacy Zuckering, and Roach Motel.

Discover a variety of dark pattern examples, sorted by category, to better understand deceptive design practices.
It’s like Footer, the footer gallery, but for navigation bars! Inspired by the former and incubated on Design Twitter, Christina Liubynska has made a curated space to celebrate navbars.

Navbar Gallery is a collection of the best website navbar inspiration designs on the web. Find the ideal navigation example for your design from our collection.
Celine Nguyen wrote a piece that connects directly to what Ethan Mollick calls “working with wizards” and what SAP’s Ellie Kemery describes as the “calibration of trust” problem. It’s about how the interfaces we design shape the relationships we have with technology.
The through-line is metaphor. For LLMs, that metaphor is conversation. And it’s working—maybe too well:
Our intense longing to be understood can make even a rudimentary program seem human. This desire predates today’s technologies—and it’s also what makes conversational AI so promising and problematic.
When the metaphor is this good, we forget it’s a metaphor at all:
When we interact with an LLM, we instinctively apply the same expectations that we have for humans: If an LLM offers us incorrect information, or makes something up because the correct information is unavailable, it is lying to us. …The problem, of course, is that it’s a little incoherent to accuse an LLM of lying. It’s not a person.
We’re so trapped inside the conversational metaphor that we accuse statistical models of having intent, of choosing to deceive. The interface has completely obscured the underlying technology.
Nguyen points to research showing frequent chatbot users “showed consistently worse outcomes” around loneliness and emotional dependence:
Participants who are more likely to feel hurt when accommodating others…showed more problematic AI use, suggesting a potential pathway where individuals turn to AI interactions to avoid the emotional labor required in human relationships.
…
However, replacing human interaction with AI may only exacerbate their anxiety and vulnerability when facing people.
This isn’t just about individual users making bad choices. It’s about an interface design that encourages those choices by making AI feel like a relationship rather than a tool.
The kicker is that we’ve been here before. In 1964, Joseph Weizenbaum created ELIZA, a simple chatbot that parodied a therapist:
I was startled to see how quickly and how very deeply people conversing with [ELIZA] became emotionally involved with the computer and how unequivocally they anthropomorphized it…What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.
Sixty years later, we’ve built vastly more sophisticated systems. But the fundamental problem remains unchanged.
The reality is we’re designing interfaces that make powerful tools feel like people. Susan Kare’s icons for the Macintosh helped millions understand computers. But they didn’t trick people into thinking their computers cared about them.
That’s the difference. And it matters.

against chat interfaces ✦ a brief history of artificial intelligence ✦ and the (worthwhile) problem of other minds
Circling back to Monday’s item on how caring is good design, Felix Haas has a subtly different take: build kindness into your products.
Kindness in design isn’t about adding smiley faces or writing cheerful copy. It’s deeper than tone. It’s about intent embedded in every interaction.
…
Kindness shows up in the patience of an empty state that doesn’t rush you. In the warmth of micro-interactions that acknowledge your actions without demanding attention. In error messages that guide rather than scold. In defaults that assume good intent rather than user incompetence.
These moments seem subtle, even trivial, in isolation. But they accumulate. They shape how we feel about a product over weeks and months. They turn interfaces into relationships. They build trust.

Why do so many products feel soulless?
I think these guidelines from Vercel are great. It’s a one-pager, very clearly written for both humans and AI. It reminds me of the old-school MailChimp brand voice guidelines and Apple’s Human Interface Guidelines, which have become reference standards.

Guidelines for building great interfaces on the web. Covers interactions, animations, layout, content, forms, performance & design.
Nielsen Norman Group weighs in on iOS 26 Liquid Glass. Predictably, they don’t like it. Raluca Budiu:
With iOS 26, Apple seems to be leaning harder into visual design and decorative UI effects — but at what cost to usability? At first glance, the system looks fluid and modern. But try to use it, and soon those shimmering surfaces and animated controls start to get in the way.
I get it. Flat—or mostly flat—and static UI conforms to the heuristics. But honestly, it can get boring and homogeneous quickly. Put the NN/g microscope on any video game UI and it’ll be torn to shreds, despite gamers learning to adapt quickly.
I’ve had iOS 26 on my phone for just a couple of weeks. I continue to be delighted by the animations and effects. So far, nothing has hindered usability for me. We’ll see what happens as more and more apps adopt the new design language.

iOS 26’s visual language obscures content instead of letting it take the spotlight. New (but not always better) design patterns replace established conventions.
As much as I defended the preview, and as much as Apple wants to make Liquid Glass a thing, the new UI is continuing to draw criticism. Dan Moren for Six Colors:
“Glass” is the overall look of these updates, and it’s everywhere. Transparent, frosted, distorting. In some places it looks quite cool, such as in the edge distortion when you’re swiping up on the lock screen. But elsewhere, it seems to me that glass may not be quite the right material for the job. The Glass House might be architecturally impressive, but it’s not particularly practical.
It’s also a definite philosophical choice, and one that’s going to engender some criticism—much of it well-deserved. Apple has argued that it’s about getting controls out of the way, but is that really what’s happening here? It’s hard to argue that having a transparent button sitting right on top of your email is helping that email be more prominent. To take this argument to its logical conclusion, why is the keyboard not fully transparent glass over our content?
I’ve yet to upgrade myself. I will say that everyone dislikes change. Lest we forget that the now-ubiquitous flat design introduced by iOS 7 was also criticized.

iOS 26! It feels like just last year we were here discussing iOS 18. How time flies. After a year that saw the debut of Apple Intelligence and the subsequent controversy over the features that it d…
Jason Spielman put up a case study on his site for his work on Google’s NotebookLM:
The mental model of NotebookLM was built around the creation journey: starting with inputs, moving through conversation, and ending with outputs. Users bring in their sources (documents, notes, references), then interact with them through chat by asking questions, clarifying, and synthesizing before transforming those insights into structured outputs like notes, study guides, and Audio Overviews.
And yes, he includes a sketch he did on the back of a napkin.
I’ve always wondered about the UX of NotebookLM. It’s not typical and, if I’m being honest, not exactly super intuitive. But after a while, it does make sense. Maybe I’m the outlier though, because Spielman’s grandmother found it easy. In an interview last year on Sequoia Capital’s Training Data, he recalls:
I actually do think part of the explosion of audio overviews was the fact it was a simple one click experience. I was on the phone with my grandma trying to explain her how to use it and it actually didn’t take any explanation. I’m like, “Drop in a source.” And she’s like, “Oh! I see. I click this button to generate it.” And I think that the ease of creation is really actually what catalyzed so much explosion. So I think when we think about adding these knobs [for customization] I think we want to do it in a way that’s very intentional.

Designer, builder, and visual storyteller. Now building Huxe. Previously led design on NotebookLM and contributed to Google AI projects like Gemini and Search. Also shoot photo/video for brands like Coachella, GoPro, and Rivian.
Chatboxes have become the uber box for all things AI. The criticism of this blank box has been the cold-start issue: new users don’t know what to type. Designers shipping these products mostly got around this problem by offering suggested prompts to teach users about the possibilities.
The issue on the other end is that expert users end up creating their own library of prompts to copy and paste into the chatbox for repetitive tasks.
Sharang Sharma, writing in UX Collective, illustrates how these UIs can be smarter by predicting intent:
Contrary, Predictive UX points to an alternate approach. Instead of waiting for users to articulate every step, systems can anticipate intent based on behavior or common patterns as the user types. Apple Reminders suggests likely tasks as you type. Grammarly predicts errors and offers corrections inline. Gmail’s Smart Compose even predicts full phrases, reducing the friction of drafting entirely.
Sharma says that the goal of predictive UX is to “reduce time-to-value and reframe AI as an adaptive partner that anticipates user’s intent as you type.”
Imagine a little widget that appears within the chatbox as you type. Kind of a cool idea.
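For what it’s worth, the plumbing for such a widget could be fairly simple. Here’s a hedged sketch, where predictIntent is a hypothetical stand-in for a model or heuristic, not any real library call:

```typescript
// Hypothetical predictive-intent widget: as the user types, debounce
// the draft and ask a predictor for likely tasks to suggest inline.
function debounce<A extends unknown[]>(fn: (...args: A) => void, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

// Stand-in for a model or heuristic that maps a partial draft to intents.
async function predictIntent(draft: string): Promise<string[]> {
  return [`Summarize: ${draft}`, `Turn into a checklist: ${draft}`];
}

const suggest = debounce(async (draft: string) => {
  if (draft.trim().length < 3) return; // too little signal to predict on
  const intents = await predictIntent(draft);
  console.log("suggestions:", intents); // render these inside the chatbox
}, 250);

// Wire-up: call suggest(value) from the chatbox's input event handler.
suggest("meeting notes from Tuesday");
```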

Exploring contextual prompt patterns that capture user intent as it is typed
Thinking about this morning’s link about web forms, if you abstract why that approach is so powerful, you get to the whole point of human-computer interaction: the computer should do what the user intends, not merely execute the buttons they push.
Matt Webb reminds us of DWIM, or the Do What I Mean philosophy in computing, coined by Warren Teitelman in 1966. Webb quotes computer scientist Larry Masinter:
DWIM is an embodiment of the idea that the user is interacting with an agent who attempts to interpret the user’s request from contextual information. Since we want the user to feel that he is conversing with the system, he should not be stopped and forced to correct himself or give additional information in situations where the correction or information is obvious.
Webb goes on to say:
Squint and you can see ChatGPT as a DWIM UI: it never, never, never says “syntax error.”
Now, arguably it should come back and ask for clarifications more often, and in particular DWIM (and AI) interfaces are more successful the more they have access to the user’s context (current situation, history, environment, etc).
But it’s a starting point. The algo is: design for capturing intent and then DWIM; iterate until that works. AI unlocks that.
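In code, that algo might look something like the following sketch: interpret intent from input plus context, act when confident, and ask one clarifying question when not. All names here are hypothetical stand-ins, not Webb’s or anyone’s actual implementation.

```typescript
// Hypothetical DWIM loop: interpret intent from input plus context,
// act when confident, otherwise clarify and try again.
type Interpretation = { action: string; confidence: number };

// Stand-in for an LLM or heuristic; more context yields a better guess.
function interpret(input: string, context: string[]): Interpretation {
  const confidence = Math.min(0.5 + context.length * 0.3, 1);
  return { action: `handle: ${input}`, confidence };
}

// Stand-in for a real clarifying exchange with the user.
async function askUser(question: string): Promise<string> {
  console.log(question);
  return "yes, that is what I meant";
}

async function dwim(input: string, context: string[]): Promise<void> {
  const guess = interpret(input, context);
  if (guess.confidence >= 0.8) {
    console.log(`doing: ${guess.action}`); // never "syntax error"
    return;
  }
  context.push(await askUser(`Did you mean "${guess.action}"?`));
  return dwim(input, context); // iterate with richer context
}

dwim("file my expenses", []);
```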

Posted on Friday 29 Aug 2025. 840 words, 10 links. By Matt Webb.