
52 posts tagged with “user interface”

I think these guidelines from Vercel are great. It’s a one-pager, clearly written for both humans and AI. It reminds me of the old-school MailChimp brand voice guidelines and Apple’s Human Interface Guidelines, both of which have become reference standards.

Web Interface Guidelines

Guidelines for building great interfaces on the web. Covers interactions, animations, layout, content, forms, performance & design.

vercel.com

Nielsen Norman Group weighs in on iOS 26 Liquid Glass. Predictably, they don’t like it. Raluca Budiu:

With iOS 26, Apple seems to be leaning harder into visual design and decorative UI effects — but at what cost to usability? At first glance, the system looks fluid and modern. But try to use it, and soon those shimmering surfaces and animated controls start to get in the way.

I get it. Flat—or mostly flat—and static UI conforms to the heuristics. But honestly, it can get boring and homogeneous quickly. Put the NN/g microscope on any video game UI and it’ll be torn to shreds, despite gamers learning to adapt quickly.

I’ve had iOS 26 on my phone for just a couple of weeks. I continue to be delighted by the animations and effects. So far, nothing has hindered the usability for me. We’ll see what happens as more and more apps adopt the new design.

Liquid Glass Is Cracked, and Usability Suffers in iOS 26

iOS 26’s visual language obscures content instead of letting it take the spotlight. New (but not always better) design patterns replace established conventions.

nngroup.com

As much as I defended the preview, and as much as Apple wants to make Liquid Glass a thing, the new UI is continuing to draw criticism. Dan Moren for Six Colors:

“Glass” is the overall look of these updates, and it’s everywhere. Transparent, frosted, distorting. In some places it looks quite cool, such as in the edge distortion when you’re swiping up on the lock screen. But elsewhere, it seems to me that glass may not be quite the right material for the job. The Glass House might be architecturally impressive, but it’s not particularly practical.

It’s also a definite philosophical choice, and one that’s going to engender some criticism—much of it well-deserved. Apple has argued that it’s about getting controls out of the way, but is that really what’s happening here? It’s hard to argue that having a transparent button sitting right on top of your email is helping that email be more prominent. To take this argument to its logical conclusion, why is the keyboard not fully transparent glass over our content?

I’ve yet to upgrade myself. I will say that everyone dislikes change; lest we forget, the now-ubiquitous flat design introduced by iOS 7 was also criticized.


iOS 26 Review: Through a glass, liquidly

iOS 26! It feels like just last year we were here discussing iOS 18. How time flies. After a year that saw the debut of Apple Intelligence and the subsequent controversy over the features that it d…

sixcolors.com

Jason Spielman put up a case study on his site for his work on Google’s NotebookLM:

The mental model of NotebookLM was built around the creation journey: starting with inputs, moving through conversation, and ending with outputs. Users bring in their sources (documents, notes, references), then interact with them through chat by asking questions, clarifying, and synthesizing before transforming those insights into structured outputs like notes, study guides, and Audio Overviews.

And yes, he includes a sketch he did on the back of a napkin.
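That inputs-to-outputs journey is concrete enough to model as data. Here’s a minimal TypeScript sketch of the mental model; the type names are mine, not Google’s:

```typescript
// Illustrative only: a typed model of the creation journey, not Google's API.
type Source = { id: string; kind: "document" | "note" | "reference"; content: string };

type ChatTurn = { role: "user" | "model"; text: string; citedSourceIds: string[] };

type Output =
  | { kind: "note"; text: string }
  | { kind: "study-guide"; sections: string[] }
  | { kind: "audio-overview"; durationSec: number };

interface Notebook {
  sources: Source[];        // inputs: what the user brings in
  conversation: ChatTurn[]; // the middle: grounded Q&A over those sources
  outputs: Output[];        // the end: structured artifacts
}
```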

I’ve always wondered about the UX of NotebookLM. It’s not typical and, if I’m being honest, not exactly super intuitive. But after a while, it does make sense. Maybe I’m the outlier though, because Spielman’s grandmother found it easy. In an interview last year on Sequoia Capital’s Training Data, he recalls:

I actually do think part of the explosion of audio overviews was the fact it was a simple one click experience. I was on the phone with my grandma trying to explain her how to use it and it actually didn’t take any explanation. I’m like, “Drop in a source.” And she’s like, “Oh! I see. I click this button to generate it.” And I think that the ease of creation is really actually what catalyzed so much explosion. So I think when we think about adding these knobs [for customization] I think we want to do it in a way that’s very intentional.


Designing NotebookLM

Designer, builder, and visual storyteller. Now building Huxe. Previously led design on NotebookLM and contributed to Google AI projects like Gemini and Search. Also shoot photo/video for brands like Coachella, GoPro, and Rivian.

jasonspielman.com

Chatboxes have become the uber-box for all things AI. The criticism of this blank box has been the cold-start problem: new users don’t know what to type. Designers shipping these products mostly got around it by offering suggested prompts to teach users about the possibilities.

The issue on the other end is that expert users end up creating their own library of prompts to copy and paste into the chatbox for repetitive tasks.

Sharang Sharma, writing in UX Collective, illustrates how these UIs can be smarter by predicting intent:

Contrary, Predictive UX points to an alternate approach. Instead of waiting for users to articulate every step, systems can anticipate intent based on behavior or common patterns as the user types. Apple Reminders suggests likely tasks as you type. Grammarly predicts errors and offers corrections inline. Gmail’s Smart Compose even predicts full phrases, reducing the friction of drafting entirely.

Sharma says that the goal of predictive UX is to “reduce time-to-value and reframe AI as an adaptive partner that anticipates user’s intent as you type.”

Imagine a little widget that appears within the chatbox as you type. Kind of a cool idea.
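Here’s a rough sketch of how such a widget might work, with a debounce so predictions fire on pauses rather than on every keystroke. `predictIntent` is a stand-in for whatever model or heuristics would actually power the suggestions:

```typescript
type Suggestion = { label: string; completedPrompt: string };

// Stand-in predictor: a real product would call a model or match the draft
// against common task patterns.
async function predictIntent(draft: string): Promise<Suggestion[]> {
  if (draft.trim().length < 4) return [];
  return [
    { label: "Summarize", completedPrompt: `Summarize this: ${draft}` },
    { label: "Fix grammar", completedPrompt: `Fix the grammar in: ${draft}` },
  ];
}

function attachPredictiveWidget(
  input: HTMLInputElement,
  render: (suggestions: Suggestion[]) => void,
) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  input.addEventListener("input", () => {
    clearTimeout(timer); // debounce: predict on pauses, not every keystroke
    timer = setTimeout(async () => render(await predictIntent(input.value)), 300);
  });
}
```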


How can AI UI capture intent?

Exploring contextual prompt patterns that capture user intent as it is typed

uxdesign.cc

Thinking about this morning’s link about web forms: if you abstract why it’s so powerful, you get to the point of human-computer interaction. The computer should do what the user intends, not just execute the buttons they push.

Matt Webb reminds us about DWIM, or Do What I Mean, a philosophy in computing coined by Warren Teitelman in 1966. Webb quotes computer scientist Larry Masinter:

DWIM is an embodiment of the idea that the user is interacting with an agent who attempts to interpret the user’s request from contextual information. Since we want the user to feel that he is conversing with the system, he should not be stopped and forced to correct himself or give additional information in situations where the correction or information is obvious.

Webb goes on to say:

Squint and you can see ChatGPT as a DWIM UI: it never, never, never says “syntax error.”

Now, arguably it should come back and ask for clarifications more often, and in particular DWIM (and AI) interfaces are more successful the more they have access to the user’s context (current situation, history, environment, etc).

But it’s a starting point. The algo is: design for capturing intent and then DWIM; iterate until that works. AI unlocks that.
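A minimal sketch of that algo, with `interpret` standing in for a real model call:

```typescript
type Interpretation = { action: string; confidence: number };

// Stand-in interpreter: pretend more context yields a more confident reading.
function interpret(request: string, context: string[]): Interpretation {
  return { action: `do(${request.trim()})`, confidence: 0.5 + 0.2 * context.length };
}

function dwim(request: string, context: string[], ask: (q: string) => string): string {
  let reading = interpret(request, context);
  while (reading.confidence < 0.9) {
    // Never "syntax error": ask a concrete question and fold the answer back in.
    const answer = ask(`Did you mean ${reading.action}?`);
    context = [...context, answer];
    reading = interpret(request, context);
  }
  return reading.action; // hand off to whatever executes the action
}
```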


The destination for AI interfaces is Do What I Mean

Posted on Friday 29 Aug 2025. 840 words, 10 links. By Matt Webb.

interconnected.org

Brad Frost, of atomic design fame, wrote a history of themeable UIs as part of a deep dive into design tokens. He writes, “Design tokens may be the latest incarnation, but software creators have been creating themeable user interfaces for quite a long time!”

About Mario and Luigi from Super Mario Bros.:

It’s wild that two of the most iconic characters in the history of pop culture — red-clad Mario and green-clad Luigi — are themeable UI elements born from pragmatic ingenuity to overcome technological challenges. Freaking amazing.

The History of Themeable User Interfaces

A full-ish history of user interfaces that can be themed to meet the opportunities and constraints of the time

bradfrost.com

DOC is a publication from Fabricio Teixeira and Caio Braga that I’ve linked to before. Their latest reflection is on interfaces.

A good user interface is a good conversation.

Interfaces thrive on clarity, responsiveness, and mutual understanding. In a productive dialogue, each party clearly articulates their intentions and receives timely, understandable responses. Just as a good conversationalist anticipates the next question or need, a good interface guides you smoothly through your task. At their core, interfaces translate intent into action. They’re a bridge between what’s in your head and what the product can do.

Reflection is the best word I’ve found to describe these pieces. They’re hype-free, urging us to take a step back and—at least for me—reminding us of our why.

In the end, interfaces are also a space for self-expression.

The ideal of “no interface” promises ultimate efficiency and direct access—but what do we lose in that pursuit? Perhaps the interface is not just a barrier to be minimized, but a space for human expression. It’s a canvas; a place to imbue a product with personality, visual expression, and a unique form of art.

When we strip that away, or make everything look the same, we lose something important. We trade the unique and the delightful for the purely functional. We sacrifice a vital part of what makes technology human: the thoughtful, and sometimes imperfect, ways we present ourselves to the world.

A pixelated hand

DOC • Interface

On connection, multi-modality, and self-expression.

doc.cc

Hard to believe that the Domino’s Pizza tracker debuted in 2008. The moment was ripe for them—about a year after the debut of the iPhone. Mobile e-commerce was in its early days.

Alex Mayyasi for The Hustle:

…the tracker’s creation was spurred by the insight that online orders were more profitable – and made customers more satisfied – than phone or in-person orders. The company’s push to increase digital sales from 20% to 50% of its business led to new ways to order (via a tweet, for example) and then a new way for customers to track their order.

Mayyasi weaves together a tale of business transparency, UI, and content design, tracing—or tracking?—the tracker’s impact on business since then. “The pizza tracker is essentially a progress bar.” But progress bars do so much for the user experience, most of which is setting proper expectations.
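As a sketch, the whole thing reduces to a handful of named stages and a percentage. The stage names below approximate the public tracker; the code shape is my own:

```typescript
const STAGES = ["Order Placed", "Prep", "Bake", "Quality Check", "Out for Delivery"] as const;
type Stage = (typeof STAGES)[number];

function progress(current: Stage): { percent: number; label: string } {
  const i = STAGES.indexOf(current);
  return {
    percent: Math.round(((i + 1) / STAGES.length) * 100),
    // The label does the expectation-setting work: where you are, what's left.
    label: `${current} (step ${i + 1} of ${STAGES.length})`,
  };
}

console.log(progress("Bake")); // { percent: 60, label: "Bake (step 3 of 5)" }
```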


How the Domino’s pizza tracker conquered the business world

One cheesy progress update at a time.

thehustle.co

I have always wanted to read 6,200 words about color! Sorry, that’s a lie. But I did skim it and really admired the very pretty illustrations. Dan Hollick is a saint for writing and illustrating this chapter in his living book, Making Software, a reference manual for designers and programmers who make digital products. From his newsletter:

I started writing this chapter just trying to explain what a color space is. But it turns out, you can’t really do that without explaining a lot of other stuff at the same time.

Part of the issue is color is really complicated and full of confusing terms that need a maths degree to understand. Gamuts, color models, perceptual uniformity, gamma etc. I don’t have a maths degree but I do have something better: I’m really stubborn.

And here are the opening sentences of the chapter on color:

Color is an unreasonably complex topic. Just when you think you’ve got it figured out, it reveals a whole new layer of complexity that you didn’t know existed.

This is partly because it doesn’t really exist. Sure, there are different wavelengths of light that our eyes perceive as color, but that doesn’t mean that color is actually a property of that light - it’s a phenomenon of our perception.

Digital color is about trying to map this complex interplay of light and perception into a format that computers can understand and screens can display. And it’s a miracle that any of it works at all.
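For a taste of the “gamma” part of that story: sRGB values are stored nonlinearly, so you have to linearize them before doing math like computing brightness. A small sketch, using the standard constants from the sRGB spec (IEC 61966-2-1):

```typescript
// sRGB transfer function: undo the nonlinear encoding to get linear light.
function srgbToLinear(channel: number): number {
  // channel is in [0, 1]
  return channel <= 0.04045 ? channel / 12.92 : ((channel + 0.055) / 1.055) ** 2.4;
}

// Relative luminance: roughly, how bright a color appears to the eye.
function relativeLuminance(r: number, g: number, b: number): number {
  const [R, G, B] = [r, g, b].map(srgbToLinear);
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

console.log(relativeLuminance(1, 0.8, 0.2).toFixed(3)); // a warm yellow, ≈ 0.647
```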

I’m just waiting for him to put up a Stripe link so I can throw money at him.


Making Software: What is a color space?

In which we answer every question you've ever had about digital color, and some you haven't.

makingsoftware.com

Vitaly Friedman writes a good primer on the design possibilities for letting users interact with AI features. As AI capabilities become more and more embedded in the products designers make, we have to become facile at manipulating AI as a material.

Many products are obsessed with being AI-first. But you might be way better off by being AI-second instead. The difference is that we focus on user needs and sprinkle a bit of AI across customer journeys where it actually adds value.


Design Patterns For AI Interfaces

Designing a new AI feature? Where do you even begin? From first steps to design flows and interactions, here’s a simple, systematic approach to building AI experiences that stick.

smashingmagazine.com

John Calhoun joined Apple 30 years ago as a programmer to work on the Color Picker.

Having never written anything in assembly, you can imagine how overjoyed I was. It’s not actually a very accurate analogy, but imagine someone handing you a book in Chinese and asking you to translate it into English (I’m assuming here that you don’t know Chinese of course). Okay, it wasn’t that hard, but maybe you get a sense that this was quite a hurdle that I would have to overcome.

Calhoun was given an old piece of code and tasked with updating it. Instead, he translated it into a programming language he knew—C—and then decided to add to the feature. He explains:

I disliked HSL as a color space, I preferred HSV (Hue, Saturation, Value) because when I did artwork I was more comfortable thinking about color in those terms. So writing an HSV color picker was on my short list.

When I had my own color picker working I think I found that it was kind of fun. Perhaps for that reason, I struck out again and wrote another color picker. The World Wide Web (www) was a rather new thing that seemed to be catching on, so I naturally thought that an HTML color picker made sense. So I tackled that one as well. It was more or less the RGB color picker but the values were in hexadecimal and a combined RGB string value like “#FFCC33” was made easy to copy for the web designer.
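Those two pickers boil down to a couple of small transforms: the standard HSV-to-RGB mapping, and formatting the result as a hex string. A sketch (function names are mine):

```typescript
function hsvToRgb(h: number, s: number, v: number): [number, number, number] {
  // h in [0, 360), s and v in [0, 1] — the standard HSV-to-RGB mapping
  const c = v * s;
  const x = c * (1 - Math.abs(((h / 60) % 2) - 1));
  const m = v - c;
  const [r, g, b] =
    h < 60 ? [c, x, 0] :
    h < 120 ? [x, c, 0] :
    h < 180 ? [0, c, x] :
    h < 240 ? [0, x, c] :
    h < 300 ? [x, 0, c] : [c, 0, x];
  return [r + m, g + m, b + m];
}

function toHexString(rgb: [number, number, number]): string {
  // Format like the HTML picker's copyable value, e.g. "#FFCC33"
  return "#" + rgb
    .map((ch) => Math.round(ch * 255).toString(16).padStart(2, "0").toUpperCase())
    .join("");
}

console.log(toHexString(hsvToRgb(45, 0.8, 1))); // "#FFCC33"
```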

So an engineer decided, all on his own, that he’d add a couple of extra features. Including the fun crayon picker:

On a roll, I decided to also knock out a “crayon picker”. At this point, to be clear, the color picker was working and I felt I understood it well enough. As I say, I was kind of just having some fun now.

Screenshot of a classic Mac OS color picker showing the “Crayon Picker” tab. A green color named “Watercress” is selected, replacing the original orange color. Options include CMYK, HLS, and HSV pickers on the left.

And Calhoun makes this point:

It was frankly a thing I liked about working for Apple in those days. The engineers were the one’s driving the ship. As I said, I wrote an HSV picker because it was, I thought, a more intuitive color space for artists. I wrote the HTML color picker because of the advent of the web. And I wrote the crayon picker because it seemed to me to be the kind of thing Apple was all about: HSL, RGB — these were kind of nerdy color spaces — a box of crayons is how the rest of us picked colors.

Making software—especially web software—has matured since then, with product managers and designers now collaborating closely with engineers. But with AI coding assistants, the idea of an individual contributor making solo decisions and shipping code might become de rigueur again.

Man sitting outside 2 Infinite Loop, Apple’s former headquarters in Cupertino, holding a book with an ID badge clipped to his jeans.

Almost Fired

I was hired on at Apple in October of 1995. This was what I refer to as Apple’s circling the drain period. Maybe you remember all the doomsaying — speculation that Apple was going to be shuttering soon. It’s a little odd perhaps then that they were hiring at all but apparently Apple reasoned that they nonetheless needed another “graphics engineer” to work on the technology known as QuickdrawGX. I was then a thirty-one year old programmer who lived in Kansas and wrote games for the Macintosh — surely, Apple thought, I would be a good fit for the position.

engineersneedart.com

This is an amazing article and website by Marcin Wichary, the man behind the excellent Shift Happens book.

…I had a realization that the totemic 1984 Mac control panel, designed by Susan Kare, is still to this day perhaps the only settings screen ever brought up in casual conversation.

I kept wondering about that screen, and about what happened since then. Turns out, the Mac settings have lived a far more fascinating life than I imagined, have been redesigned many times, and can tell us a lot about the early history and the troubled upbringing of this interesting machine.

Indeed, Wichary goes through multiple versions of Mac operating systems and performs digital paleontology, uncovering long-lost Settings minutiae. It’s also a great lesson in UI along the way. Be sure to click around in the Mac screens.


Frame of preference

A story of early Mac settings told by 10 emulators.

aresluna.org

I remember the article from 2016 titled “Hamburger Menus and Hidden Navigation Hurt UX Metrics” where the conclusion from NN/g was:

Discoverability is cut almost in half by hiding a website’s main navigation. Also, task time is longer and perceived task difficulty increases.

Fast forward nearly 10 years later and NN/g says:

Hamburger menus are a more familiar pattern today than 10 years ago, but the same old best practices for hidden navigation still apply.

Kate Kaplan, revisiting her conclusion from nearly a decade ago:

Over the past decade, the hamburger menu — much like its namesake — has become a classic. As mobile-first design took hold, it offered a clean, space-saving solution, and when design leaders like Apple and Amazon adopted it, others followed. Its growing ubiquity helped standardize its meaning: Through repeated exposure, users learned to recognize and interpret the icon with increasing confidence.

I think the hamburger menu grew in popularity despite NN/g’s authoritative finger-wagging. As designers, most of the time we have to balance the needs of the project and client against known best practices. Many websites, especially e-commerce sites, have more than four main navigation links. We had to put the links somewhere, and the hamburger menu made sense.


The Hamburger-Menu Icon Today: Is it Recognizable?

Hamburger menus are a more familiar pattern today than 10 years ago, but the same old best practices for hidden navigation still apply.

nngroup.com

Christopher Butler writes a wonderful essay about the “best interfaces we never built,” exploring the UIs from sci-fi:

Science fiction, by the way, hasn’t just predicted our technological future. We all know the classic examples, particularly those from Star Trek: the communicator and tricorder anticipated the smartphone; the PADD anticipated the tablet; the ship’s computer anticipated Siri, Alexa, Google, and AI voice interfaces; the entire interior anticipated the Jony Ive glass filter on reality. It’s enough to make a case that Trek didn’t anticipate these things so much as those who watched it as young people matured in careers in design and engineering. But science fiction has also been a fertile ground for imagining very different ways for how humans and machines interact.

He goes on to namecheck 2001: A Space Odyssey, Quantum Leap, Inspector Gadget and others. I don’t know Butler personally, but I’d bet $1 he’s Gen X like me.

As UX designers, we can easily get stuck thinking that UI is just pixels rendered on a screen. But in fact, an interface is anything that translates our intentions into outcomes that technology can deliver.


The Best Interfaces We Never Built

Every piece of technology is an interface. Though the word has come to be a shorthand for what we see and use on a screen, an interface is anything

chrbutler.com

Vincent Nguyen writing for Yanko Design, interviewing Alan Dye, VP of Human Interface Design at Apple:

This technical challenge reveals the core problem Apple set out to solve: creating a digital material that maintains form-changing capabilities while preserving transparency. Traditional UI elements either block content or disappear entirely, but Apple developed a material that can exist in multiple states without compromising visibility of underlying content. Dye’s emphasis on “celebrating user content” exposes Apple’s hierarchy philosophy, where the interface serves content instead of competing with it. When you tap to magnify text, the interface doesn’t resize but stretches and flows like liquid responding to pressure, ensuring your photos, videos, and web content remain the focus while navigation elements adapt around them.

Since the Jony Ive days, Apple’s hardware has always been about celebrating the content. Bezels got smaller. Screens got bigger and brighter. Even the flat design brought on by iOS 7 and eventually adopted by the whole ecosystem was a way to strip away the noise and focus on the content.

Dye’s explanation of the “glass layer versus application layer” architecture provides insight into how Apple technically implements this philosophy. The company has created a distinct separation between functional controls (the glass layer) and user content (the application layer), allowing each to behave according to different rules while maintaining visual cohesion. This architectural decision enables the morphing behavior Dye described, where controls can adapt and change while content remains stable and prominent.

The Apple platform UI today sort of does that, but Liquid Glass seems to take it even further.
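As a sketch, the separation Dye describes could be modeled as two sets of rules, one per layer. These types are illustrative, not Apple’s actual architecture:

```typescript
type LayerKind = "glass" | "application";

interface LayerRules {
  canMorph: boolean;     // may stretch, flow, and merge in response to input
  adaptsTint: boolean;   // recolors based on what sits beneath it
  holdsContent: boolean; // the user's photos, mail, web pages
}

const RULES: Record<LayerKind, LayerRules> = {
  // Controls adapt around the content…
  glass: { canMorph: true, adaptsTint: true, holdsContent: false },
  // …while content stays stable and prominent.
  application: { canMorph: false, adaptsTint: false, holdsContent: true },
};
```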

Nguyen, on his experience using the Music app on the Mac:

The difference from current iOS becomes apparent in specific scenarios. In the current Music app, scrolling through your library feels like moving through flat, static layers. With Liquid Glass, scrolling creates a sense of depth. You can see your album artwork subtly shifting beneath the translucent controls, creating spatial awareness of where interface elements sit in relation to your content. The tab bar doesn’t just scroll with you; it creates gentle optical distortions that make the underlying content feel physically present beneath the glass surface.


Apple’s Liquid Glass Hands-On: Why Every Interface Element Now Behaves Like Physical Material

Liquid Glass represents more than an aesthetic update or surface-level polish. It functions as a complex behavioral system, precisely engineered to dictate how interface layers react to user input. In practical terms, this means Apple devices now interact with interface surfaces not as static, interchangeable panes, but as dynamic, adaptive materials that fluidly flex and

yankodesign.com
Collection of iOS interface elements showcasing Liquid Glass design system including keyboards, menus, buttons, toggles, and dialogs with translucent materials on dark background.

Breaking Down Apple’s Liquid Glass: The Tech, The Hype, and The Reality

I kind of expected it: a lot more ink was spilled on Liquid Glass—particularly on social media. In case you don’t remember, Liquid Glass is the new UI for all of Apple’s platforms. It was announced Monday at WWDC 2025, their annual developers conference.

The criticism is primarily around legibility and accessibility. Secondary reasons include aesthetics and power usage to animate all the bubbles.

How Liquid Glass Actually Works

Before I go and address the criticism, I think it would be great to break down the team’s design thinking and how Liquid Glass actually works. 

I watched two videos from Apple’s developer site. Much of the rest of the article is a summary of the videos. You can watch them and skip to the end of this piece.

First off is this video that explains Liquid Glass in detail.

As I watched the video, one thing stood out clearly to me: the design team at Apple did a lot of studying of the real world before digitizing it into UI.

The Core Innovation: Lensing

Instead of scattering light like previous materials, Liquid Glass dynamically bends and shapes light in real-time. Apple calls this “lensing.”

It’s their attempt to recreate how transparent objects work in the physical world. We all intuitively understand how warping and bending light communicates presence and motion. Liquid Glass uses these visual cues to provide separation while letting content shine through.

A Multi-Layer System That Adapts

Liquid Glass toolbar with pink tinted buttons (bookmark, refresh, more) floating over geometric green background, showing tinting capabilities.

This isn’t just a simple effect. It’s built from several layers working together:

  • Highlights respond to environmental lighting and device motion. When you unlock your phone, lights move through 3D space, causing illumination to travel around the material.
  • Shadows automatically adjust based on what’s behind them—darker over text for separation, lighter over solid backgrounds.
  • Tint layers continuously adapt. As content scrolls underneath, the material flips between light and dark modes for optimal legibility.
  • Interactive feedback spreads from your fingertip throughout the element, making it feel alive and responsive.

All of this happens automatically when developers apply Liquid Glass.
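To make the adaptive tint behavior concrete, here’s a sketch of the core idea: sample the content beneath a control and flip the material between light and dark. The threshold and sampling are my assumptions, not Apple’s implementation:

```typescript
type GlassMode = "light" | "dark";

function luminance(r: number, g: number, b: number): number {
  // quick luminance estimate on 0..255 channels
  return (0.2126 * r + 0.7152 * g + 0.0722 * b) / 255;
}

function modeForBackdrop(pixels: Array<[number, number, number]>): GlassMode {
  const avg = pixels.reduce((sum, [r, g, b]) => sum + luminance(r, g, b), 0) / pixels.length;
  // Bright content beneath → dark glass on top, and vice versa.
  return avg > 0.5 ? "dark" : "light";
}

console.log(modeForBackdrop([[250, 250, 245], [230, 235, 240]])); // "dark"
```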

Two Variants (Regular and Clear)

Liquid Glass comes in two types of material:

  • Regular is the workhorse—full adaptive behaviors, works anywhere.
  • Clear is more transparent but needs dimming layers for legibility.

Clear should only be used over media-rich content when the content layer won’t suffer from dimming. Otherwise, stick with Regular.

It’s like ice cubes—cloudy ones from your freezer versus clear ones at fancy bars that let you see your drink’s color.
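That guidance fits in a one-line decision, sketched here with predicate names of my own choosing:

```typescript
type Variant = "regular" | "clear";

function chooseVariant(opts: { mediaRichContent: boolean; dimmingAcceptable: boolean }): Variant {
  // Clear needs a dimming layer for legibility; only use it when the
  // content layer won't suffer from that dimming.
  return opts.mediaRichContent && opts.dimmingAcceptable ? "clear" : "regular";
}
```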

Four examples of regular Liquid Glass elements: audio controls, deletion dialog, text selection menu, and red toolbar, demonstrating various applications.

Regular is the workhorse—full adaptive behaviors, works anywhere.

Video player interface with Liquid Glass controls (pause, skip buttons) overlaying blue ocean scene with sea creature.

Clear should only be used over media-rich content when the content layer won’t suffer from dimming.

Smart Contextual Changes

When elements scale up (like expanding menus), the material simulates thicker glass with deeper shadows. On larger surfaces, ambient light from nearby content subtly influences the appearance.

Elements don’t fade—they materialize by gradually modulating light bending. The gel-like flexibility responds instantly to touch, making interactions feel satisfying.

This is something that’s hard to see in stills.

The New Tinting Approach

Red "Add" button with music note icon using Liquid Glass material over black and white checkered pattern background.

Instead of flat color overlays, Apple generates tone ranges mapped to content brightness underneath. It’s inspired by how colored glass actually works—changing hue and saturation based on what’s behind it.

Apple recommends sparing use of tinting. Only for primary actions that need emphasis. Makes sense.
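A sketch of what tone-mapped tinting might look like versus a flat overlay; the tone ramp here is invented for illustration:

```typescript
// Instead of compositing one flat color, pick a lighter or darker tone of the
// tint based on the brightness of the content beneath it.
function tintTone(baseHue: number, backdropBrightness: number): string {
  // Bright content under the glass → a deeper tone; dark content → a brighter
  // one, like colored glass held over a light table.
  const lightness = backdropBrightness > 0.5 ? 35 : 65;
  return `hsl(${baseHue}, 80%, ${lightness}%)`;
}

console.log(tintTone(350, 0.9)); // deep red over bright content
console.log(tintTone(350, 0.1)); // bright red over dark content
```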

Design Guidelines That Matter

Liquid Glass is for the navigation and controls layer floating above content—not for everything. Don’t apply Liquid Glass to content areas, and never stack glass on glass.

Liquid Glass button with a black border and overlapping windows icon floating over blurred green plant background, showing off its accessibility mode.

Accessibility features are built-in automatically—reduced transparency, increased contrast, and reduced motion modify the material without breaking functionality.

The Legibility Outcry (and Why It’s Overblown)

Apple devices (MacBook, iPad, iPhone, Apple Watch) displaying new Liquid Glass interface with translucent elements over blue gradient wallpapers.

“Legibility” was mentioned 13 times in the 19-minute video. Clearly it was a concern of theirs. Yes, the keynote showed clear-tinted device home screens, and many on social media took that to be an accessibility abomination. Which, yes, it is. But that’s not the default.

The fact that the system senses the type of content underneath it and adjusts accordingly—flipping from light to dark, increasing opacity, or adjusting shadow depth—means they’re making accommodations for legibility.

Maybe Apple needs to do some tweaking, but it’s evident that they care about this.

And as with the 18 macOS releases before Tahoe, this year’s version, accessibility settings and controls are built right in. Universal Access debuted with Mac OS X 10.2 Jaguar in 2002. Apple has had a long history of supporting customers with disabilities, dating all the way back to 1987.

So while the social media outcry about legibility is understandable, Apple’s track record suggests they’ll refine these features based on real user feedback, not just Twitter hot takes.

The Real Goal: Device Continuity

What is Liquid Glass actually meant to do? Unification. With the new design language, Apple has also come out with a new design system. This video, presented by Apple designer Maria Hristoforova, lays it out.

Hristoforova says that Apple’s new design system overhaul is fundamentally about creating seamless familiarity as users move between devices—ensuring that interface patterns learned on iPhone translate directly to Mac and iPad without requiring users to relearn how things work. The video points out that the company has systematically redesigned everything from typography (hooray for left alignment!) and shapes to navigation bars and sidebars around Liquid Glass as the unifying foundation, so that the same symbols, behaviors, and interactions feel consistent across all screen sizes and contexts. 

The Pattern of Promised Unity

This isn’t Apple’s first rodeo with “unified design language” promises.

Back in 2013, iOS 7’s flat design overhaul was supposed to create seamless consistency across Apple’s ecosystem. Jony Ive ditched skeuomorphism for minimalist interfaces with translucency and layering—the foundation for everything that followed.

OS X Yosemite (2014) brought those same principles to desktop. Flatter icons, cleaner lines, translucent elements. Same pitch: unified experience across devices.

macOS Big Sur (2020) pushed even further with iOS-like app icons and redesigned interfaces. Again, the promise was consistent visual language across all platforms.

And here we are in 2025 with Liquid Glass making the exact same promises. 

But maybe “goal” is a better word.

Consistency Makes the Brand

I’m OK with the goal of having a unified design language. As designers, we love consistency. Consistency is what makes a brand. As Apple has proven over and over again for decades now, it is one of the most valuable brands in the world. They maintain their position not only by making great products, but also by being incredibly disciplined about consistency.

San Francisco debuted 10 years ago as the system typeface for iOS 9 and OS X El Capitan. They’ve since extended it, and it works great in marketing and in interfaces.

iPhone Settings screen showing Liquid Glass grouped table cells with red outline highlighting the concentric shape design.

The rounded corners on their devices are all pretty much the same radii. Now that concentricity is being incorporated into the UI, screen elements will be harmonious with their physical surroundings. Only Apple can do that because they control the hardware and the software. And that is their magic.

Design Is Both How It Works and How It Looks

In 2003, two years after the iPod launched, Rob Walker of The New York Times did a profile on Apple. The now popular quote about design from Steve Jobs comes from this piece.

[The iPod] is, in short, an icon. A handful of familiar clichés have made the rounds to explain this — it’s about ease of use, it’s about Apple’s great sense of design. But what does that really mean? “Most people make the mistake of thinking design is what it looks like,” says Steve Jobs, Apple’s C.E.O. “People think it’s this veneer — that the designers are handed this box and told, ‘Make it look good!’ That’s not what we think design is. It’s not just what it looks like and feels like. Design is how it works.”

People misinterpret this quote all the time to mean design is only how it works. That is not what Steve meant. He meant, design is both what it looks like and how it works.

Steve did care about aesthetics. That’s why the Graphic Design team mocked up hundreds of Power Mac G5 box designs (the graphics on the box, not the construction). That’s why he obsessed over the materials used in Pixar’s Emeryville headquarters. From Walter Isaacson’s biography:

Because the building’s steel beams were going to be visible, Jobs pored over samples from manufacturers across the country to see which had the best color and texture. He chose a mill in Arkansas, told it to blast the steel to a pure color, and made sure the truckers used caution not to nick any of it.

Liquid Glass is a welcomed and much-needed visual refresh. It’s the natural evolution of Apple’s platforms, going from skeuomorphic so users knew they could use their fingers and tap on virtual buttons on a touchscreen, to flat as a response to the cacophony of visual noise in UIs at the time, and now to something kind of in-between.

Humans eventually tire of seeing the same thing. Carmakers refresh their vehicle designs every three or four years. Then they do complete redesigns every five to eight years. It gets consumers excited. 

Liquid Glass will help Apple sell a bunch more hardware.

I’ve been very interested in finding tools to close the design-to-code gap. Martina Sartor, writing in UX Planet, articulates why that is so important:

After fifteen years hopping between design systems, dev stand-ups, and last-minute launch scrambles, I’m convinced design-to-dev QA is still one of the most underestimated bottlenecks in digital product work. We pour weeks into meticulous Figma files, yet the last mile between mock-up and production code keeps tripping us up.

This is an honest autopsy of why QA hurts and how teams can start healing it — today — without buying more software (though new approaches are brewing).


Why Design-to-Dev QA Still Stings

(and Practical Ways to Ease the Pain)

uxplanet.org

I’ve relayed here before that I’ve been using Macs since 1985. It wasn’t the hardware that drew me in—it was MacPaint. I was always an artistic kid, so being able to paint on a digital canvas seemed thrilling to me. And of course it was back then.

Behind MacPaint was a man named Bill Atkinson. Atkinson died last Thursday, June 5, of pancreatic cancer. In a short remembrance, John Gruber said:

I say this with no hyperbole: Bill Atkinson may well have been the best computer programmer who ever lived. Without question, he’s on the short list. What a man, what a mind, what gifts to the world he left us.

I’m happy that Figma also remembered Atkinson and that they are standing on his shoulders.

Every day at Figma, we wrestle with the same challenges Atkinson faced: How do you make powerful tools feel effortless? How do you hide complexity behind intuitive interactions? His fingerprints are on every pixel we push, every selection we make, every moment of creative flow our users experience.


Bill Atkinson’s 10 Rules for Making Interfaces More Human

We commemorate the Apple pioneer whose QuickDraw and HyperCard programs made the Macintosh intuitive enough for nearly anyone to use.

figma.com
Abstract gradient design with flowing liquid glass elements in blue and pink colors against a gray background, showcasing Apple's new Liquid Glass design language.

Quick Notes About WWDC 2025

Apple’s annual developer conference kicked off today with a keynote that announced:

  • Unified Version 26 across all Apple platforms (iOS, iPadOS, macOS, watchOS, tvOS, visionOS)
  • “Liquid Glass” design system. A complete UI and UX overhaul, the first major redesign since iOS 7
  • Apple Intelligence. Continued small improvements, though not the deep integration promised a year ago
  • Full windowing system on iPadOS. Windows comes to iPad! Finally.

Of course, those are the very high-level highlights.

For designers, the headline is Liquid Glass. Sebastiaan de With’s predictive post and renderings from last week were spot-on.

I like it. I think iOS and macOS needed a fresh coat of paint and Liquid Glass delivers.

There’s already been some criticism—naturally, because we’re opinionated designers after all!—with some calling it over the top, a rehash of Windows Vista, or an accessibility nightmare.

Apple Music interface showing the new Liquid Glass design with translucent playback controls and navigation bar overlaying colorful album artwork, featuring "Blest" by Yuno in the player and navigation tabs for Home, New, Radio, Library, and Search.

The new Liquid Glass design language acts like real glass, refracting light and bending the image behind it accordingly.

In case you haven’t seen it, it’s a visual and—albeit less so—experience overhaul for the various flavors of Apple OSes. Imagine a transparent glass layer where controls sit. The layer has all the refractive qualities of glass, bending the light as images pass below it, and its edges catching highlights from a light source. This is all powered by a sophisticated 3D engine, I’m sure. It’s gorgeous.

It’s been 12 years since the last major refresh, when iOS 7 ushered in the era of so-called flat design. At the time, it was a natural extension of Jony Ive’s predilection for minimalism, to strip things to their core. What could be more pure than using only type? It certainly appealed to my sensibilities. But what it brought on was a universe of sameness in UI design.

Person using an iPad with a transparent glass interface overlay, demonstrating the new Liquid Glass design system with translucent app icons visible through the glass layer.


Hand interacting with a translucent glass interface displaying text on what appears to be a tablet or device, showing the new design's transparency effects.

The design team at Apple studied the physical properties of real glass to perfect the material in the new versions of the OSes.

With the release of Liquid Glass, led by Apple’s VP of Design, Alan Dye, I hope we’ll see designers add a little more personality, depth, and texture back into their UIs. No, we don’t need to return to the days of skeuomorphism—kicked off by Mac OS X’s Aqua interface design. I do think there’s been a movement away from flat design recently. Even at the latest Config conference, Figma showed off functionality to add noise and texture into our designs. We’ve been in a flat world for 12 years! Time to add a little spice back in.

Finally, it’s a beta. This is typical of Apple. The implementation will be iterated on, and by the time it ships later this year in September, it will have been further refined.

I do miss a good 4-minute video from Jony Ive talking about the virtues of software material design though…

In this short piece, Luke Wroblewski observes how the chat box is slowly giving way as agents and MCP give AI chatbots a little more autonomy.

When agents can use multiple tools, call other agents and run in the background, a person’s role moves to kicking things off, clarifying things when needed, and making use of the final output. There’s a lot less chatting back and forth. As such, the prominence of the chat interface can recede even further. It’s there if you want to check the steps an AI took to accomplish your task. But until then it’s out of your way so you can focus on the output.
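The interaction shape Wroblewski describes is easy to sketch: the person kicks things off, the agent emits events in the background, and chat surfaces only for clarification or the final output. All names below are illustrative:

```typescript
type AgentEvent =
  | { type: "step"; detail: string }          // logged, not shown by default
  | { type: "needs-input"; question: string } // the only time chat interrupts
  | { type: "done"; output: string };

async function runTask(
  events: AsyncIterable<AgentEvent>,
  ask: (q: string) => Promise<string>, // reply assumed to reach the agent out of band
): Promise<{ output: string; log: string[] } | undefined> {
  const log: string[] = [];
  for await (const ev of events) {
    if (ev.type === "step") log.push(ev.detail); // kept for auditing the steps the AI took
    else if (ev.type === "needs-input") await ask(ev.question);
    else return { output: ev.output, log };      // the user focuses on the output
  }
}
```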


The Receding Role of AI Chat

While chat interfaces to AI models aren't going away anytime soon, the increasing capabilities of AI agents are making the concept of chatting back and forth wi...

lukew.com

Sebastiaan de With, former designer at Apple and currently co-founder and designer at Lux (makers of Halide, Kino, Spectre, and Orion), imagined what the next era in iOS design might be. (WWDC, Apple’s developer conference, is next week. This is typically when they unveil the new operating systems that will launch in the fall. Rumors are flying as usual.)

But he starts with a history lesson:

Smart people study history to understand the future. If we were to categorize the epochs of iOS design, we could roughly separate them into the Shaded Age, the Adaptive Age, and the New Age.

The Shaded Age, or skeuomorphic age, took inspiration from the Dashboard feature of Mac OS X Tiger. Then came the Flat Age, brought on by the introduction of iOS 7.

de With’s concept mocks for the New Age are fantastic. Based on the physicality of visionOS, with specular highlights and reactive reflections, it’s luscious and reminds me of the first time I ever laid eyes on Aqua—the glossy, candy-like look of the original Mac OS X. Steve Jobs said at its introduction, “…one of the design goals was when you saw it you wanted to lick it.”

Close-up of a glass-rendered user interface

Sebastiaan de With: “Philosophically, if I was Apple, I’d describe this as finally having an interface that matches the beautiful material properties of its devices. All the surfaces of your devices have glass screens. This brings an interface of a matching material, giving the user a feeling of the glass itself coming alive.”


Physicality: the new age of UI

There’s a lot of rumors of a big impending UI redesign from Apple. Let’s imagine what’s (or what could be) next for the design of iPhones, Macs and iPads.

lux.camera

Following up on OpenAI’s acquisition of Jony Ive’s hardware startup, io, Mark Wilson, writing for Fast Company:

As Ive told me back in 2023, there have been only three significant modalities in the history of computing. After the original command line, we got the graphical user interface (the desktop, folders, and mouse of Xerox, Mac OS, and Windows), then voice (Alexa, Siri), and, finally, with the iPhone, multitouch (not just the ability to tap a screen, but to gesture and receive haptic feedback). When I brought up some other examples, Ive quickly nodded but dismissed them, acknowledging these as “tributaries” of experimentation. Then he said that to him the promise, and excitement, of building new AI hardware was that it might introduce a new breakthrough modality to interacting with a machine. A fourth modality.

Hmm, it hasn’t taken off yet because AR hasn’t really gained mainstream popularity, but I would argue that hand gestures in AR UIs are a fourth modality. But Ive thinks different. Wilson continues:

Ive’s fourth modality, as I gleaned, was about translating AI intuition into human sensation. And it’s the exact sort of technology we need to introduce ubiquitous computing, also called quiet computing and ambient computing. These are terms coined by the late UX researcher Mark Weiser, who in the 1990s began dreaming of a world that broke us free from our desktop computers to usher in devices that were one with our environment. Weiser did much of this work at Xerox PARC, the same R&D lab that developed the mouse and GUI technology that Steve Jobs would eventually adopt for the Macintosh. (I would also be remiss to ignore that ubiquitous computing is the foundation of the sci-fi film Her, one of Altman’s self-stated goalposts.)

Ah, essentially an always-on, always-watching AI that is ready to assist. But whatever form factor this device takes, it will likely depend on a smartphone:

The first io device seems to acknowledge the phone’s inertia. Instead of presenting itself as a smartphone-killer like the Ai Pin or as a fabled “second screen” like the Apple Watch, it’s been positioned as a third, er, um … thing next to your phone and laptop. Yeah, that’s confusing, and perhaps positions the io product as unessential. But it also appears to be a needed strategy: Rather than topple these screened devices, it will attempt to draft off them.

Wilson ends with the idea of a subjective computer, one that has personality and gives you opinions. He explains:

I think AI is shifting us from objective to subjective. When a Fitbit counts your steps and calories burned, that’s an objective interface. When you ask ChatGPT to gauge the tone of a conversation, or whether you should eat better, that’s a subjective interface. It offers perspective, bias, and, to some extent, personality. It’s not just serving facts; it’s offering interpretation.

The entire column is worth a read.


Can Jony Ive and Sam Altman build the fourth great interface? That's the question behind io

Where Meta, Google, and Apple zig, Ive and Altman are choosing to zag. Can they pull it off?

fastcompany.com

Nick Babich writing for UX Planet:

Because AI design and code generators quickly take an active part in the design process, it’s essential to understand how to make the most of these tools. If you’ve played with Cursor, Bolt, Lovable, or v0, you know the output is only as good as the input.

Well said, especially as prompting is the primary input for these AI tools. He goes on to enumerate his five parts to a good prompt. Worth a quick read.


How to write better prompts for AI design & code generators

Because AI design and code generators quickly take an active part in the design process, it’s essential to understand how to make the most…

uxplanet.org