
76 posts tagged with “user interface”

Fitts’s Law is one of those design principles everyone learns in school and then quietly stops thinking about. Target size, target distance, movement time. It’s a mouse-and-cursor concept, and once you’ve internalized the basics—make buttons big, put them close—it fades into the background. But with AI and voice becoming primary interaction models, the principle matters again. The friction just moved.
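
For reference, the commonly used Shannon formulation predicts movement time from target distance D and width W: MT = a + b · log2(D/W + 1). Here's a minimal sketch of that math (the constants are made up; a and b have to be measured for a real device and user):

```typescript
// Fitts's Law, Shannon formulation: movement time grows with the
// "index of difficulty" log2(D/W + 1). The constants a and b are
// device- and user-specific and must be fitted empirically; the
// defaults below are placeholders, not measurements.
function movementTimeMs(
  distancePx: number, // D: distance from cursor to target center
  widthPx: number,    // W: target size along the axis of motion
  a = 100,            // intercept (ms), placeholder constant
  b = 150             // slope (ms per bit), placeholder constant
): number {
  const indexOfDifficulty = Math.log2(distancePx / widthPx + 1);
  return a + b * indexOfDifficulty;
}

// A small, far-away button costs more time than a large, nearby one:
movementTimeMs(800, 20); // ≈ 100 + 150 * 5.36 ≈ 904 ms
movementTimeMs(100, 60); // ≈ 100 + 150 * 1.42 ≈ 313 ms
```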

Julian Scaff, writing for Bootcamp, traces Fitts’s Law from desktop GUIs through touch, spatial computing, voice, and neural interfaces. His argument is that the law didn’t become obsolete—it became metaphorical:

With voice interfaces, the notion of physical distance disappears altogether, yet the underlying cognitive pattern persists. When a user says, “Turn off the lights,” there’s no target to touch or point at, but there is still a form of interaction distance, the mental and temporal gap between intention and response. Misrecognition, latency, or unclear feedback increase this gap, introducing friction analogous to a small or distant button.

“Friction analogous to a small or distant button” is a useful way to think about what’s happening with AI interfaces right now. When a user stares at a blank text field and doesn’t know what to type, that’s distance. When an agent misinterprets a prompt and the user has to rephrase three times, that’s a tiny target. The physics changed but the math didn’t.

Scaff extends this into AI and neural interfaces, where the friction gets even harder to see:

Every layer of mediation, from neural decoding errors to AI misinterpretations, adds new forms of interaction friction. The task for designers will be to minimize these invisible distances, not spatial or manual, but semantic and affective, so that the path from intention to effect feels seamless, trustworthy, and humane.

He then describes what he calls a “semantic interface,” one that interprets intent rather than waiting for explicit commands:

A semantic interface understands the why behind a user’s action, interpreting intent through context, language, and behavior rather than waiting for explicit commands. It bridges gaps in understanding by aligning system logic with human mental models, anticipating needs, and communicating in ways that feel natural and legible.

This connects to the current conversation about AI UX. The teams building chatbot-first products are, in Fitts’s terms, forcing users to cross enormous distances with tiny targets. Every blank prompt field with no guidance is a violation of the same principle that tells you to make a button bigger. We’ve known this for seventy years. We’re just ignoring it because the interface looks new.

Collage of UIs: vintage monochrome OS, classic Windows, modern Windows tiles and macOS dock, plus smartphone gesture demos

The shortest path from thought to action

Reassessing Fitts’ Law in the age of multimodal interfaces

medium.com

Google’s design team is working on a hard problem: how do you create a visual identity for AI? It’s not a button or a menu. It doesn’t have a fixed set of functions. It’s a conversation partner that can do… well, a lot of things. That ambiguity is difficult to represent.

Daniel John, writing for Creative Bloq, reports on Google’s recent blog post about Gemini’s visual design:

“Consider designer Susan Kare, who pioneered the original Macintosh interface. Her icons weren’t just pixels; they were bridges between human understanding and machine logic. Gemini faces a similar challenge around accessibility, visibility, and alleviating potential concerns. What is Gemini’s equivalent of Kare’s smiling computer face?”

That’s a great question. Kare’s work on the original Mac made the computer feel approachable at a moment when most people had never touched one. She gave the machine a personality through icons that communicated function and friendliness at the same time. AI needs something similar: a visual language that builds trust while honestly representing what the technology can do.

Google’s answer? Gradients. They offer “an amorphous, adaptable approach,” one that “inspires a sense of discoverability.”

They think they’ve nailed it. I don’t think they did.

To their credit, Google seems to sense the comparison is a stretch. John quotes the Google blog again:

“Gradients might be much more about energy than ‘objectness,’ like Kare’s illustrations (a trash can is a thing, a gradient is a vibe), but they infuse a spirit and directionality into Gemini.”

Kare’s icons worked because they mapped to concrete actions and mental models people already had. A trash can means delete. A folder means storage. A smiling Mac means this thing is friendly and working. Gradients don’t map to anything. They just look nice. They’re aesthetic, not communicative. John’s word to describe them, “vibe,” is right. Will a user pick up on the subtleties of a concentrated gradient versus a diffuse one?

The design challenge Google identified is real. But gradients aren’t the Kare equivalent. They’re neither ownable nor iconic (pun intended). They’re a placeholder until someone figures out what is.

Rounded four-point rainbow-gradient star on left and black pixel-art vintage Macintosh-style computer with smiling face on right.

Did Google really just compare its design to Apple?

For rival tech brands, Google and Apple have seemed awfully cosy lately. Earlier this month it was announced that, in a huge blow to OpenAI, Google's Gemini will be powering the much awaited (and much delayed) enhanced Siri assistant on every iPhone. And now, Google has compared its UI design with that of Apple. Apple of 40 years ago, that is.

creativebloq.com

There’s a design principle I return to often: if everything is emphasized, nothing is. Bold every word in a paragraph and you’ve just made regular text harder to read. Highlight every line in a document and you’ve defeated the purpose of highlighting.

Nikita Prokopov applies this to Apple’s macOS Tahoe, which adds icons to nearly every menu item:

Perhaps counter-intuitively, adding an icon to everything is exactly the wrong thing to do. To stand out, things need to be different. But if everything has an icon, nothing stands out.

The whole article is a detailed teardown of the icon choices—inconsistent metaphors, icons reused for unrelated actions, details too small to parse at 12×12 effective pixels. But the core problem isn’t execution. It’s the premise.

Prokopov again:

It’s delusional to think that there’s a good icon for every action if you think hard enough. There isn’t. It’s a lost battle from the start.

What makes this such a burn is that Apple knew better. Prokopov pulls from the 1992 Macintosh Human Interface Guidelines, which warned that poorly used icons become “unpleasant, distracting, illegible, messy, cluttered, confusing, frustrating.” Thirty-two years later, Apple built exactly that.

This applies beyond icons. Any time you’re tempted to apply something universally—color, motion, badges, labels—ask whether you’re helping users find what matters or just adding visual noise. Emphasis only works through contrast.

Yellow banner with scattered black UI icons, retro Mac window at left, text: It's hard to justify Tahoe icons — tonsky.me

It’s hard to justify Tahoe icons

Looking at the first principles of icon design—and how Apple failed to apply all of them in macOS Tahoe

tonsky.me

When I worked at LEVEL Studios (which became Rosetta) in the early 2010s, we had a whole group dedicated to icon design. It was small but incredibly talented and led by Jon Delman, a master of this craft. And yes, Jon and team designed icons for Apple.

Those glory days are long gone and the icons coming out of Cupertino these days are pedestrian, to put it gently. The best observation about Apple’s icon decline comes from Héliographe, via John Gruber:

If you put the Apple icons in reverse it looks like the portfolio of someone getting really really good at icon design.

Row of seven pen-and-paper app icons showing design evolution from orange stylized pen to ink bottle with fountain pen.

Posted by @heliographe.studio on Threads

Seven Pages icons from newest to oldest, each one more artistically interesting than the last. The original is exquisite. The current one is a squircle with a pen on it.

This is even more cringe-inducing when you keep in mind something Gruber recalls from a product briefing with Jony Ive years ago:

Apple didn’t change things just for the sake of changing them. That Apple was insistent on only changing things if the change made things better. And that this was difficult, at times, because the urge to do something that looks new and different is strong, especially in tech.

Apple’s hardware team still operates this way. An M5 MacBook Pro looks like an M1 MacBook Pro. An Apple Watch Series 11 is hard to distinguish from a Series 0. These designs don’t change because they’re already excellent.

The software team lost that discipline somewhere. Gruber again:

I know a lot of talented UI designers and a lot of insightful UI critics. All of them agree that MacOS’s UI has gotten drastically worse over the last 10 years, in ways that seem so obviously worse that it boggles the mind how it happened.

The icons are just the most visible symptom. The confidence to not change something—to trust that the current design is still the best design—requires knowing the difference between familiarity and complacency. Somewhere along the way, Apple’s software designers stopped being able to tell.

Centered pale gray circle with a dark five-pointed star against a muted blue-gray background

Thoughts and Observations Regarding Apple Creator Studio

Starting with a few words on the new app icons.

daringfireball.net

I started my career in print. I remember specifying designs in fractional inches and points, and expecting the printed piece to match the comp exactly. When I moved to the web in the late ’90s, I brought that same expectation with me because that’s how we worked back then. Our Photoshop files were precise. But if we’re being honest about what the web is—an interactive, quickly malleable medium—that expectation is misplaced. I’ve long since changed my mind, of course.

Web developer Amit Sheen, writing for Smashing Magazine, articulates the problem with “pixel perfect” better than I’ve seen anyone do it:

When a designer asks for a “pixel-perfect” implementation, what are they actually asking for? Is it the colors, the spacing, the typography, the borders, the alignment, the shadows, the interactions? Take a moment to think about it. If your answer is “everything”, then you’ve just identified the core issue… When we say “make it pixel perfect,” we aren’t giving a directive; we’re expressing a feeling.

According to Sheen, “pixel perfect” sounds like a specification but functions as a vibe. It tells the developer nothing actionable.

He traces the problem back to print’s influence on early web design:

In the print industry, perfection was absolute. Once a design was sent to the press, every dot of ink had a fixed, unchangeable position on a physical page. When designers transitioned to the early web, they brought this “printed page” mentality with them. The goal was simple: The website must be an exact, pixel-for-pixel replica of the static mockup created in design applications like Photoshop and QuarkXPress.

Sheen doesn’t just tear down the old model. He offers replacement language. Instead of demanding “pixel perfect,” teams should ask for things like “visually consistent with the design system” or “preserves proportions and alignment logic.” These phrases describe actual requirements rather than feelings.

Sheen again, addressing designers directly:

When you hand over a design, don’t give us a fixed width, but a set of rules. Tell us what should stretch, what should stay fixed, and what should happen when the content inevitably overflows. Your “perfection” lies in the logic you define, not the pixels you draw.

I’m certain advanced designers and design teams know all of the above already. I just appreciated Sheen’s historical take. A Figma file is a hypothesis, a picture of what to build. The browser is the truth.
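
To make “a set of rules” concrete, here’s a hypothetical sketch of my own (none of these names come from Sheen’s article). The point is only that each rule answers his three questions: what stretches, what stays fixed, and what happens when content overflows.

```typescript
// Hypothetical handoff spec: behavior, not pixels. Every name here is
// illustrative; each property answers one of Sheen's questions.
interface LayoutRule {
  stretches: boolean;                        // may this region grow with the viewport?
  min?: string;                              // lower bound before the layout reflows
  max?: string;                              // upper bound (e.g., to preserve line length)
  overflow: "wrap" | "truncate" | "scroll";  // what happens when content wins
}

const productCardSpec: Record<string, LayoutRule> = {
  thumbnail: { stretches: false, min: "96px", max: "96px", overflow: "truncate" },
  title:     { stretches: true,  max: "40ch",              overflow: "wrap" },
  price:     { stretches: false,                           overflow: "truncate" },
  actions:   { stretches: true,  min: "200px",             overflow: "scroll" },
};
```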

Smashing Magazine article header: "Rethinking 'Pixel Perfect' Web Design" with tags, author Amit Sheen and a red cat-and-bird illustration.

Rethinking “Pixel Perfect” Web Design — Smashing Magazine

Amit Sheen takes a hard look at the “Pixel Perfect” legacy concept, explaining why it’s failing us and redefining what “perfection” actually looks like in a multi-device, fluid world.

smashingmagazine.com

Nice mini-site from Figma showcasing the “iconic interactions” of the last 20 years. It explores how software has become inseparable from how we think and connect—and how AI is accelerating that shift toward adaptive, conversational interfaces. Made with Figma Make, of course.

Centered bold white text "Software is culture" on a soft pastel abstract gradient background (pink, purple, green, blue).

Software Is Culture

Yesterday's software has shaped today's generation. To understand what's next as software grows more intelligent, we look back on 20 years of interaction design.

figma.com

Previously, I linked to Doug O’Laughlin’s piece arguing that UIs are becoming worthless—that AI agents, not humans, will be the primary consumers of software. It’s a provocative claim, and as a designer, I’ve been chewing on it.

Jeff Veen offers the counterpoint. Veen—a design veteran who cofounded Typekit and led products at Adobe—argues that an agentic future doesn’t diminish design. It clarifies it:

An agentic future elevates design into pure strategy, which is what the best designers have wanted all along. Crafting a great user experience is impossible if the way in which the business expresses its capabilities is muddied, vague or deceptive.

This is a more optimistic take than O’Laughlin’s, but it’s rooted in the same observation: when agents strip applications down to their primitives—APIs, CLI commands, raw capabilities (plus data structures, I’d argue)—what’s left is the truth of what a business actually does.

Veen’s framing through responsive design is useful. Remember “mobile first”? The constraint of the small screen forced organizations to figure out what actually mattered. Everything else was cruft. Veen again:

We came to realize that responsive design wasn’t just about layouts, it was about forcing organizations to confront what actually mattered.

Agentic workflows do the same thing, but more radically. If your product can only be expressed through its API, there’s no hiding behind a slick dashboard or clever microcopy.

His closing question is great:

If an agent used your product tomorrow, what truths would it uncover about your organization?

For designers, this is the strategic challenge. The interface layer may become ephemeral—generated on the fly, tailored to the user, disposable. But someone still has to define what the product is. That’s design work. It’s just not pixel work.

Three smartphone screens showing search-result lists of app shortcuts: Wells Fargo actions, Contacts actions, and KAYAK trip/flight actions.

On Coding Agents and the Future of Design

How Claude Code is showing us what apps may become

veen.com

The rise of micro apps describes what’s happening from the bottom up—regular people building their own tools instead of buying software. But there’s a top-down story too: the structural obsolescence of traditional software companies.

Doug O’Laughlin makes the case using a hardware analogy—the memory hierarchy. AI agents are fast, ephemeral memory (like DRAM), while traditional software companies need to become persistent storage (like NAND, or ROM if you’re old school like me). The implication:

Human-oriented consumption software will likely become obsolete. All horizontal software companies oriented at human-based consumption are obsolete.

That’s a bold claim. O’Laughlin goes further:

Faster workflows, better UIs, and smoother integrations will all become worthless, while persistent information, a la an API, will become extremely valuable.

As a designer, this is where I start paying close attention. The argument is that if AI agents become the primary consumers of software—not humans—then the entire discipline of UI design is in question. O’Laughlin names names:

Figma could be significantly disrupted if UIs, as a concept humans create for other humans, were to disappear.

I’m not ready to declare UIs dead. People still want direct manipulation, visual feedback, and the ability to see what they’re doing. But the shift O’Laughlin describes is real: software’s value is migrating from presentation to data. The interface becomes ephemeral—generated on the fly, tailored to the task—while the source of truth persists.

This is what I was getting at in my HyperCard essay: the tools we build tomorrow won’t look like the apps we buy today. They’ll be temporary, personal, and assembled by AI from underlying APIs and data. The SaaS companies that survive will be the ones who make their data accessible to agents, not the ones with the prettiest dashboards.

Memory hierarchy pyramid: CPU registers and cache (L1–L3) top; RAM; SSD flash; file-based virtual memory bottom; speed/cost/capacity notes.

The Death of Software 2.0 (A Better Analogy!)

The age of PDF is over. The time of markdown has begun. Why Memory Hierarchies are the best analogy for how software must change. And why software is unlikely to command the most value.

fabricatedknowledge.com

Last December, Cursor announced their visual editor—a way to edit UI directly in the browser. Karri Saarinen, the designer who co-founded Linear, saw it and called it a trap. Ryo Lu, the head of design at Cursor, pushed back. The Twitter back-and-forth went on for a couple days until they conceded they mostly agreed. Tommy Geoco digs into what the debate actually surfaced.

The traditional way we talk about design tools is floor versus ceiling—does the tool make good design more accessible, or does it push what’s possible? Geoco argues the Saarinen/Lu exchange revealed a second axis: unconstrained exploration versus material exploration. Sketching on napkins versus building in code.

Saarinen’s concern:

Whenever a designer becomes more of a builder, some idealism and creativity dies. It’s not because building is bad, but because you start introducing constraints earlier in the process than you should.

Lu’s counter:

The truth only reveals itself once you start to build. Not when you think about building, not when you sketch possibilities in a protected space, but when you actually make the thing real and let reality talk back.

Both are right, and Geoco’s reframing is useful:

The question is not should designers code. It’s are you using the new speed to explore more territory or just arriving at the same destination faster?

That’s the question I keep asking myself. When I use AI tools, am I discovering ideas I wouldn’t have found otherwise, or am I just getting to obvious ideas faster? The tools make iteration cheap, but cheap iteration on the same territory isn’t progress.

I think about it this way—back when I was starting out, sketching thumbnails was the technique I used. It was very quick and easy to sketch out dozens of ideas in a sketchbook, especially when they were logo or poster ideas. When sketching interaction ideas, the technique is closer to a storyboard—connected thumbnails. But for me, once I get into a high-fidelity design or prototype, there is tremendous pull to just keep tweaking the design rather than coming up with multiple options. In other words, convergence is happening rather than continued divergence.

This was the biggest debate in design [last] year

Two designers: One built Linear. One leads design at Cursor. They got into it on Twitter for 48 hours about the use of AI coding tools in the design work. This debate perfectly captures both sides of what's happening in software design right now. I've spent the year exploring how designers are experimenting on both sides of this argument. This is what I've found.

youtube.com

I’ve linked to a footer gallery, a navbar gallery, and now, to round us out, here is a full-on Component Gallery. Web developer Iain Bean has been maintaining this library since 2019.

Bean writes in the about page:

The original idea for this site came from A Pattern Language, a 1977 book focused on architecture, building and planning, which describes over 250 ‘patterns’: forms which fit specific contexts, or to put it another way, solutions to design problems. Examples include: ‘Beer hall’, ‘Positive outdoor space’ and ‘Light on two sides of every room’.

Whereas the book focuses on the physical world, my original aim with this site was to focus on those patterns that appear on the web; these often borrow the word ‘pattern’ (see Patterns on the GOV.UK design system), but are more commonly called components, hence ‘the component gallery’ — unlike a component library, most of these components aren’t ready to use off-the-shelf, but they’ll hopefully inspire you to design your own solution to the problem you’re working to solve.

So if you ever need a reference for how different design systems handle certain components (e.g., combobox, segmented control, or toast), this is your site.


The Component Gallery

An up-to-date repository of interface components based on examples from the world of design systems, designed to be a reference for anyone building user interfaces.

component.gallery

Huei-Hsin Wang at NN/g published a post about how to write better prompts for AI prompt-to-code tools.

When we asked AI-prototyping tools to generate a live-training profile page for NN/G course attendees, a detailed prompt yielded quality results resembling what a human designer created, whereas a vague prompt generated inconsistent and unpredictable outcomes across the board.

There’s a lot of detail about what can go wrong. Personally, I don’t need to read about what I experience daily, so the interesting bit for me comes about two-thirds of the way into the article, where Wang lists five strategies for getting better results.

  • Visual intent: Name the style precisely—use concrete design vocabulary or frameworks instead of vague adjectives. Anchor prompts with recognizable patterns so the model locks onto the look and structure, not “clean/modern” fluff.
  • Lightweight references: Drop in moodboards, screenshots, or system tokens to nudge aesthetics without pixel-pushing. Expect resemblance, not perfection; judge outcomes on hierarchy and clarity, not polish alone.
  • Text-led visual analysis: Have AI describe a reference page’s layout and style in natural language, then distill those characteristics into a tighter prompt. Combine with an image when possible to reinforce direction.
  • Mock data first: Provide realistic sample content or JSON so the layout respects information architecture. Content-driven prompts produce better grouping, hierarchy, and actionable UI than filler lorem ipsum. (See the sketch after this list.)
  • Code snippets for precision: Attach component or layout code from your system or open-source libraries to reduce ambiguity. It’s the most exact context, but watch length; use selectively to frame structure.
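
To illustrate the mock-data strategy, here’s a hypothetical example of my own (not one from the NN/g article): a small, realistic record attached to the prompt gives the model an information architecture to design around instead of lorem ipsum.

```typescript
// Hypothetical mock data to attach to a prototyping prompt. Realistic
// field names and values give the model a hierarchy to group and
// prioritize; "Lorem ipsum" gives it nothing.
const attendeeProfile = {
  name: "Dana Okafor",
  role: "Senior Product Designer",
  company: "Northwind Health",
  course: {
    title: "Measuring UX and ROI",
    date: "2025-03-14",
    format: "live online",
  },
  certifications: ["UX Certification", "Interaction Design Specialty"],
  notes: "Prefers afternoon sessions; needs captioning.",
};

const prompt = `Design a live-training profile page for a course attendee.
Use this sample record for structure and hierarchy (do not invent fields):
${JSON.stringify(attendeeProfile, null, 2)}`;
```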

Prompt to Design Interfaces: Why Vague Prompts Fail and How to Fix Them

Create better AI-prototyping designs by using precise visual keywords, references, analysis, as well as mock data and code snippets.

nngroup.com

This is a fascinating watch. Ryo Lu, Head of Design at Cursor, builds a retro Mac calculator using Cursor agents while being interviewed. Lu’s personal website is an homage to Mac OS X, complete with Aqua-style UI elements. He runs multiple local background agents without them stepping on each other, fixes bugs live, and themes the UI to match system styles so it feels designed—not “purple AI slop,” as he calls it.

Lu, as interviewed by Peter Yang, on how engineers and designers work together at Cursor (lightly edited for clarity):

So at Cursor, the roles between designers, PM, and engineers are really muddy. We kind of do the part [that is] our unique strength. We use the agent to tie everything. And when we need help, we can assemble people together to work on the thing.

Maybe some of [us] focus more on the visuals or interactions. Some focus more on the infrastructure side of things, where you design really robust architecture to scale the thing. So yeah, there is a lot less separation between roles and teams or even tools that we use. So for doing designs…we will maybe just prototype in Cursor, because that lets us really interact with the live states of the app. It just feels a lot more real than some pictures in Figma.

And surprisingly, they don’t have official product managers at Cursor. Yang asks, “Did you actually actually hire a PM because last time I talked to Lee [Robinson] there was like no PMs.”

Lu again, and edited lightly for clarity:

So we did not hire a PM yet, but we do have an engineer who used to be a founder. He took a lot more of the PM-y side of the job, and then became the first PM of the company. But I would still say a lot of the PM jobs are kind of spread across the builders in the team.

That mostly makes sense because it’s engineers building tools for engineers. You are your audience, which is rare.

Full Tutorial: Design to Code in 45 Min with Cursor's Head of Design | Ryo Lu

Design-to-code tutorial: Watch Cursor's Head of Design Ryo Lu build a retro Mac calculator with agents - a 45-minute, hands-on walkthrough to prototype and ship

youtube.com

Oliver West argues in UX Magazine that UX designers aren’t monolithic—meaning we’re not all the same, and we don’t all see the world in the same way.

West:

UX is often described as a mix of art and science, but that definition is too simple. The truth is, UX is a spectrum made up of three distinct but interlinked lenses:

  • Creativity: Bringing clarity, emotion, and imagination to how we solve problems.
  • Science: Applying evidence, psychology, and rigor to understand behavior.
  • Business: Focusing on relevance, outcomes, and measurable value.

Every UX professional looks through these lenses differently. And that’s exactly how it should be.

He then outlines how those who lean toward certain parts of the spectrum may be better suited to particular specialized roles. For example, if you’re more focused on creativity, you might be more of a UI designer:

UI Designers lead with the creative lens. Their strength lies in turning complex ideas into interfaces that feel intuitive, elegant, and emotionally engaging. But the best UI Designers also understand the science of usability and the business context behind what they’re designing.

I think for product designers working in the startup world, you actually do need all three lenses, as it were. But with a bias towards Science and Business.

Glass triangular prism with red and blue reflections on a blue surface; overlay text about UX being more than one skill and using three lenses.

The Three Lenses of UX: Because Not All UX Is the Same

Great designers don’t do everything; they see the world through different lenses: creative, scientific, and strategic. This article explains why those differences aren’t flaws, but rather the core reason UX works, and how identifying your own lens can transform careers, hiring, and collaboration. If you’ve ever wondered why “unicorn” designers don’t exist, this perspective explains why.

uxmag.com

When Figma acquired Weavy last month, I wrote a little bit about node-based UIs and ComfyUI. Looks like Adobe has been exploring this user interface paradigm as well.

Daniel John writes in Creative Bloq:

Project Graph is capable of turning complex workflows into user-friendly UIs (or ‘capsules’), and can access tools from across the Creative Cloud suite, including Photoshop, Illustrator and Premiere Pro – making it a potentially game-changing tool for creative pros.

But it isn’t just Adobe’s own tools that Project Graph is able to tap into. It also has access to the multitude of third party AI models Adobe recently announced partnerships with, including those made by Google, OpenAI and many more.

These tools can be used to build a node-based workflow, which can then be packaged into a streamlined tool with a deceptively simple interface.

And from Adobe’s blog post about Project Graph:

Project Graph is a new creative system that gives artists and designers real control and customization over their workflows at scale. It blends the best AI models with the capabilities of Adobe’s creative tools, such as Photoshop, inside a visual, node-based editor so you can design, explore, and refine ideas in a way that feels tactile and expressive, while still supporting the precision and reliability creative pros expect.

I’ve been playing around with ComfyUI a lot recently (more about this in a future post), so I’m very excited to see how this kind of UI can fit into Adobe’s products.

Stylized dark grid with blue-purple modular devices linked by cables, central "Ps" Photoshop

Adobe just made its most important announcement in years

Here’s why Project Graph matters for creatives.

creativebloq.com

On Corporate Maneuvers Punditry

Mark Gurman, writing for Bloomberg:

Meta Platforms Inc. has poached Apple Inc.’s most prominent design executive in a major coup that underscores a push by the social networking giant into AI-equipped consumer devices.

The company is hiring Alan Dye, who has served as the head of Apple’s user interface design team since 2015, according to people with knowledge of the matter. Apple is replacing Dye with longtime designer Stephen Lemay, according to the people, who asked not to be identified because the personnel changes haven’t been announced.

I don’t regularly cover personnel moves here, but Alan Dye jumping over to Meta has been a big deal in the Apple news ecosystem. John Gruber, in a piece titled “Bad Dye Job” on his Daring Fireball blog, wrote a scathing takedown of Dye, excoriating his tenure at Apple and flogging him for going over to Meta, which is arguably Apple’s arch nemesis.

Putting Alan Dye in charge of user interface design was the one big mistake Jony Ive made as Apple’s Chief Design Officer. Dye had no background in user interface design — he came from a brand and print advertising background. Before joining Apple, he was design director for the fashion brand Kate Spade, and before that worked on branding for the ad agency Ogilvy. His promotion to lead Apple’s software interface design team under Ive happened in 2015, when Apple was launching Apple Watch, their closest foray into the world of fashion. It might have made some sense to bring someone from the fashion/brand world to lead software design for Apple Watch, but it sure didn’t seem to make sense for the rest of Apple’s platforms. And the decade of Dye’s HI leadership has proven it.

I usually appreciate Gruber’s writing and take on things. He’s unafraid to tell it like it is and to be incredibly direct. Which makes people love him and fear him. But in paragraph after paragraph, Gruber just lays into Dye.

It’s rather extraordinary in today’s hyper-partisan world that there’s nearly universal agreement amongst actual practitioners of user-interface design that Alan Dye is a fraud who led the company deeply astray. It was a big problem inside the company too. I’m aware of dozens of designers who’ve left Apple, out of frustration over the company’s direction, to work at places like LoveFrom, OpenAI, and their secretive joint venture io. I’m not sure there are any interaction designers at io who aren’t ex-Apple, and if there are, it’s only a handful. From the stories I’m aware of, the theme is identical: these are designers driven to do great work, and under Alan Dye, “doing great work” was no longer the guiding principle at Apple. If reaching the most users is your goal, go work on design at Google, or Microsoft, or Meta. (Design, of course, isn’t even a thing at Amazon.) Designers choose to work at Apple to do the best work in the industry. That has stopped being true under Alan Dye. The most talented designers I know are the harshest critics of Dye’s body of work, and the direction in which it’s been heading.

Designers can be great at more than one thing and they can evolve. Being in design leadership does not mean that you need to be the best practitioner of all the disciplines, but you do need to have the taste, sensibilities, and judgement of a good designer, no matter how you started. I’m a case in point. I studied traditional graphic design in art school. But I’ve been in digital design for most of my career now, and product design for the last 10 years.

Has Apple’s UI gotten worse over the last 10 years? Maybe. I’d need to analyze things a lot more carefully. But I vividly remember having debates with my fellow designers about Mac OS X UI choices like the pinstriping, brushed metal, and many, many inconsistencies when I was working in the Graphic Design Group in 2004. UI design has never been perfect in Cupertino.

Alan Dye isn’t a CEO and wasn’t even at the same exposure level as Jony Ive when he was still at Apple. I don’t know Dye, though we’re certainly in the same design circles—we have 20 shared connections on LinkedIn. But as far as I’m concerned, he’s a civilian because he kept a low profile, like all Apple employees.

The parasocial relationships we have with tech executives are weird. I guess it’s one thing if they have a large online presence like Instagram’s Adam Mosseri or 37signals’ David Heinemeier Hansson (aka DHH), but Alan Dye made only a couple of appearances in Apple keynotes and talked about Liquid Glass. In other words, why is Gruber writing 2,500 words in this particular post, which is just one of five posts covering this story?

Anyway, I’m not a big fan of Meta, but maybe Dye can bring some ethics to the design team over there. Who knows. Regardless, I am wishing him well rather than taking him down.

Escher-like stone labyrinth of intersecting walkways and staircases populated by small figures and floating rectangular screens.

Generative UI and the Ephemeral Interface

This week, Google debuted their Gemini 3 AI model to great fanfare and reviews. Specs-wise, it tops the benchmarks. This horserace has seen Google, Anthropic, and OpenAI trade leads each time a new model is released, so I’m not really surprised there. The interesting bit for us designers isn’t the model itself, but the upgraded Gemini app that can create user interfaces on the fly. Say hello to generative UI.

I will admit that I’ve been skeptical of the notion of generative user interfaces. I was imagining an app for work, like a design app, that would rearrange itself depending on the task at hand. In other words, it’s dynamic and contextual. Adobe has tried a proto-version of this with the contextual task bar. Theoretically, it surfaces the most pertinent three or four actions based on your current task. But I find that it just gets in the way.

When Interfaces Keep Moving

Others have been less skeptical. More than 18 months ago, NN/g published an article speculating about genUI and how it might manifest in the future. They define it as:

A generative UI (genUI) is a user interface that is dynamically generated in real time by artificial intelligence to provide an experience customized to fit the user’s needs and context. So it’s a custom UI for that user at that point in time. Similar to how LLMs answer your question: tailored for you and specific to when you asked the original question.

Leave it to NN/g to evaluate the AI prompt-to-code tool landscape with some rigor. Huei-Hsin Wang and Megan Brown cover over a dozen tools, including ChatGPT, Claude, UX Pilot, Uizard, Relume, Stitch, Bolt, Lovable, v0, Replit, Figma Make, Magic Patterns, and Subframe. They use a human designer as the control.

Among their conclusions:

AI’s limited grasp of design nuances and inconsistent output make it best suited for ideation, concept exploration, and early-phase prototype testing, rather than later stages. While you likely won’t take an AI-generated prototype straight to production, these tools can help you break through creative blocks and explore new directions quickly.

I think the best part is they shared screenshots of outputs in a FigJam board.

Header "Good from Afar, But Far from Good: AI Prototyping in Real Design Contexts" with teal robot icon and dotted wireframe UI.

Good from Afar, But Far from Good: AI Prototyping in Real Design Contexts

AI prototyping tools follow general directions but lack the judgment and nuance of an experienced designer.

nngroup.com

I’ve been a big fan of node-based UIs since I first experimented with Shake in the early 2000s. It’s kind of weird to wrap your head around, especially if you’re used to layers in Photoshop or Figma. The easiest way to think about nodes is to rotate the layer stack 90 degrees. Each node takes inputs on the left, applies one distinct process to them, and sends the result out the right. You connect multiple nodes together to process assets into your final composition. Popular apps with node-based workflows today include Unreal Engine (Blueprints), DaVinci Resolve (Fusion and Color), and n8n.
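
If rotating the layer stack is still hard to picture, here’s a rough sketch of my own (made-up names, not any particular tool’s API): each node applies one operation to its inputs, and the composition is just the wiring.

```typescript
// Minimal node-graph sketch (hypothetical names). Each node does one
// thing to its inputs; the final image emerges from how nodes are wired,
// not from a top-to-bottom layer stack.
type Image = { pixels: Uint8ClampedArray; width: number; height: number };

interface GraphNode {
  id: string;
  inputs: string[];                     // ids of upstream nodes
  process: (inputs: Image[]) => Image;  // the one operation this node performs
}

// Evaluate a node by recursively evaluating everything upstream of it.
function evaluate(graph: Map<string, GraphNode>, id: string): Image {
  const node = graph.get(id);
  if (!node) throw new Error(`Unknown node: ${id}`);
  const upstream = node.inputs.map((inputId) => evaluate(graph, inputId));
  return node.process(upstream);
}

// loadPlate -> colorGrade ─┐
//                          ├─> composite -> output
// loadMatte ───────────────┘
```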

ComfyUI is another open source tool that uses the same node-graph architecture. Created in 2023 to put a UI on the visual generative AI models like Stable Diffusion that were appearing around that time, it’s become popular among artists for wielding the plethora of image and video gen AI models.

Fast-forward to last week, when Figma announced they had acquired Weavy, a much friendlier and cloud-based version of ComfyUI.

Weavy brings the world’s leading AI models together with professional editing tools on a single, browser-based canvas. With Weavy, you can choose the model you want for a task (e.g. Seedance, Sora, and Veo for cinematic video; Flux and Ideogram for realism; and Nano-Banana or Seedream for precision) and compose powerful primitives using generative AI outputs and hands-on edits (e.g. adjusting lighting, masking an object, color grading a shot). The end result is an inspiring environment for creative exploration and a flexible media pipeline where every output feeds the next.

This node-based approach brings a new level of craft and control to AI generation. Outputs can be branched, remixed, and refined, combining creative exploration with precision and craft. The Weavy team has inspired us with the balance they’ve struck between simplicity, approachability, and power. They’ve also created a tool that’s just a joy to use.

I must admit I had not heard about Weavy before the announcement. I had high hopes for Visual Electric, but it never quite lived up to its ambitions. I proceeded to watch all the official tutorial videos on YouTube, and I love it. It seems so much easier to use than ComfyUI. Let’s see what Figma does with the product.

Node-based image editor with connected panels showing a man in a rowboat on water then composited floating over a deep canyon.

Introducing Figma Weave: the next generation of AI-native creation at Figma

Figma has acquired Weavy, a platform that brings generative AI and professional editing tools into the open canvas.

figma.com

I’ve been on the receiving end of Layer 1226 before and it’s not fun. While I’m pretty good with my layer naming hygiene, I’m not perfect. So I welcome anything that can help rename my layers. Apparently, when Adobe showed off this new AI feature at their Adobe MAX user conference last week, it drew a big round of applause. (Figma’s had this feature since June 2024.)

There’s more than just renaming layers though. Adobe is leaning into conversational UI for editing too. For new users coming to editing tools, this makes a lot of sense because the learning curve for Photoshop is very steep. But as I’ve always said, professionals will also need fine-grained controls.

Writing for CNET, Katelyn Chedraoui:

Renaming layers is just one of many things Adobe’s new AI assistants will be able to do. These chatbot-like tools will be added to Photoshop and Express. They have an emphasis on “conversational, agentic” experiences — meaning you can ask the chatbot to make edits, and it can independently handle them.

Express’s AI assistant is similar to using a chatbot. Once you toggle on the tool in the upper left corner, a conversation window pops up. You can ask the AI to change the color of an object or remove an obtrusive element. While pro users might be comfortable making those edits manually, the AI assistant might be more appealing to its less experienced users and folks working under a time crunch.

A peek into Adobe’s future reveals more agentic experiences:

Also announced on Tuesday is Project Moonlight, a new platform in beta on Adobe’s AI hub, Firefly. It’s a new tool that hopes to act as a creative partner. With your permission, it uses your data from Adobe platforms and social media accounts to help you create content. For example, you can ask it to come up with 20 ideas for what to do with your newest Lightroom photos based on your most successful Instagram posts in the past. 

These AI efforts represent a range of what conversational editing can look like, Mike Polner, Adobe Firefly’s vice president of product marketing for creators said in an interview. 

“One end of the spectrum is [to] type in a prompt and say, ‘Make my hat blue.’ That’s very simplistic,” said Polner. “With Project Moonlight, it can understand your context, explore and help you come up with new ideas and then help you analyze the content that you already have,” Polner said.

Photoshop AI Assistant UI over stone church landscape with large 'haven' text and command bubbles like 'Increase saturation'.

Photoshop’s New AI Assistant Can Rename All Your Layers So You Don’t Have To

The chatbot-like AI assistant isn’t out yet, but there is at least one practical way to use it.

cnet.com

To close us out on Halloween, here’s one more archive full of spooky UX called the Dark Patterns Hall of Shame. It’s managed by a team of designers and researchers, who have dedicated themselves to identifying and exposing dark patterns and unethical design examples on the internet. More than anything, I just love the names some of these dark patterns have, like Confirmshaming, Privacy Zuckering, and Roach Motel.

Small gold trophy above bold dark text "Hall of shame. design" on a pale beige background.

Collection of Dark Patterns and Unethical Design

Discover a variety of dark pattern examples, sorted by category, to better understand deceptive design practices.

hallofshame.design

Celine Nguyen wrote a piece that connects directly to what Ethan Mollick calls “working with wizards” and what SAP’s Ellie Kemery describes as the “calibration of trust” problem. It’s about how the interfaces we design shape the relationships we have with technology.

The through-line is metaphor. For LLMs, that metaphor is conversation. And it’s working—maybe too well:

Our intense longing to be understood can make even a rudimentary program seem human. This desire predates today’s technologies—and it’s also what makes conversational AI so promising and problematic.

When the metaphor is this good, we forget it’s a metaphor at all:

When we interact with an LLM, we instinctively apply the same expectations that we have for humans: If an LLM offers us incorrect information, or makes something up because the correct information is unavailable, it is lying to us. …The problem, of course, is that it’s a little incoherent to accuse an LLM of lying. It’s not a person.

We’re so trapped inside the conversational metaphor that we accuse statistical models of having intent, of choosing to deceive. The interface has completely obscured the underlying technology.

Nguyen points to research showing frequent chatbot users “showed consistently worse outcomes” around loneliness and emotional dependence:

Participants who are more likely to feel hurt when accommodating others…showed more problematic AI use, suggesting a potential pathway where individuals turn to AI interactions to avoid the emotional labor required in human relationships.

However, replacing human interaction with AI may only exacerbate their anxiety and vulnerability when facing people.

This isn’t just about individual users making bad choices. It’s about an interface design that encourages those choices by making AI feel like a relationship rather than a tool.

The kicker is that we’ve been here before. In 1964, Joseph Weizenbaum created ELIZA, a simple chatbot that parodied a therapist:

I was startled to see how quickly and how very deeply people conversing with [ELIZA] became emotionally involved with the computer and how unequivocally they anthropomorphized it…What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.

Sixty years later, we’ve built vastly more sophisticated systems. But the fundamental problem remains unchanged.

The reality is we’re designing interfaces that make powerful tools feel like people. Susan Kare’s icons for the Macintosh helped millions understand computers. But they didn’t trick people into thinking their computers cared about them.

That’s the difference. And it matters.

Old instant-message window showing "MeowwwitsMadix3: heyyy" and "are you mad at me?" with typed reply "no i think im just kinda embarassed" and buttons Warn, Block, Expressions, Games, Send.

how to speak to a computer

against chat interfaces ✦ a brief history of artificial intelligence ✦ and the (worthwhile) problem of other minds

personalcanon.com

Circling back to Monday’s item on how caring is good design, Felix Haas has a subtly different take: build kindness into your products.

Kindness in design isn’t about adding smiley faces or writing cheerful copy. It’s deeper than tone. It’s about intent embedded in every interaction.

Kindness shows up in the patience of an empty state that doesn’t rush you. In the warmth of micro-interactions that acknowledge your actions without demanding attention. In error messages that guide rather than scold. In defaults that assume good intent rather than user incompetence.

These moments seem subtle, even trivial, in isolation. But they accumulate. They shape how we feel about a product over weeks and months. They turn interfaces into relationships. They build trust.


Kind Products Win

Why do so many products feel soulless?

designplusai.com

I think these guidelines from Vercel are great. It’s a one-pager and very clearly written for both humans and AI. It reminds me of the old-school MailChimp brand voice guidelines and Apple’s Human Interface Guidelines, which have become reference standards.


Web Interface Guidelines

Guidelines for building great interfaces on the web. Covers interactions, animations, layout, content, forms, performance & design.

vercel.com