
60 posts tagged with “user interface”

9 min read
Collection of iOS interface elements showcasing Liquid Glass design system including keyboards, menus, buttons, toggles, and dialogs with translucent materials on dark background.

Breaking Down Apple’s Liquid Glass: The Tech, The Hype, and The Reality

I kind of expected it: a lot more ink was spilled on Liquid Glass—particularly on social media. In case you don’t remember, Liquid Glass is the new UI for all of Apple’s platforms. It was announced Monday at WWDC 2025, their annual developers conference.

The criticism is primarily around legibility and accessibility. Secondary complaints include aesthetics and the power it takes to animate all the bubbles.

I’ve been very interested in finding tools to close the design-to-code gap. Martina Sartor, writing in UX Planet, articulates why that is so important:

After fifteen years hopping between design systems, dev stand-ups, and last-minute launch scrambles, I’m convinced design-to-dev QA is still one of the most underestimated bottlenecks in digital product work. We pour weeks into meticulous Figma files, yet the last mile between mock-up and production code keeps tripping us up.

This is an honest autopsy of why QA hurts and how teams can start healing it — today — without buying more software (though new approaches are brewing).


Why Design-to-Dev QA Still Stings

(and Practical Ways to Ease the Pain)

uxplanet.org

I have relayed here before the story that I’ve been using Macs since 1985. It wasn’t the hardware that drew me in—it was MacPaint. I was always an artistic kid, so being able to paint on a digital canvas seemed thrilling to me. And of course it was back then.

Behind MacPaint was a man named Bill Atkinson. Atkinson died last Thursday, June 5, of pancreatic cancer. In a short remembrance, John Gruber said:

I say this with no hyperbole: Bill Atkinson may well have been the best computer programmer who ever lived. Without question, he’s on the short list. What a man, what a mind, what gifts to the world he left us.

I’m happy that Figma also remembered Atkinson and acknowledged that they are standing on his shoulders.

Every day at Figma, we wrestle with the same challenges Atkinson faced: How do you make powerful tools feel effortless? How do you hide complexity behind intuitive interactions? His fingerprints are on every pixel we push, every selection we make, every moment of creative flow our users experience.


Bill Atkinson’s 10 Rules for Making Interfaces More Human

We commemorate the Apple pioneer whose QuickDraw and HyperCard programs made the Macintosh intuitive enough for nearly anyone to use.

figma.com
Abstract gradient design with flowing liquid glass elements in blue and pink colors against a gray background, showcasing Apple's new Liquid Glass design language.

Quick Notes About WWDC 2025

Apple’s annual developer conference kicked off today with a keynote that announced:

  • Unified Version 26 across all Apple platforms (iOS, iPadOS, macOS, watchOS, tvOS, visionOS)
  • “Liquid Glass” design system. A complete UI and UX overhaul, the first major redesign since iOS 7
  • Apple Intelligence. Continued small improvements, though not the deep integration promised a year ago
  • Full windowing system on iPadOS. Windows comes to iPad! Finally.

Of course, those are the very high-level highlights.

In this short piece, Luke Wroblewski observes how the chat box is slowly giving way as agents and MCP give AI chatbots a little more autonomy.

When agents can use multiple tools, call other agents and run in the background, a person’s role moves to kicking things off, clarifying things when needed, and making use of the final output. There’s a lot less chatting back and forth. As such, the prominence of the chat interface can recede even further. It’s there if you want to check the steps an AI took to accomplish your task. But until then it’s out of your way so you can focus on the output.
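
To make the shift Wroblewski describes a bit more concrete, here is a rough sketch in TypeScript—my own illustration, with hypothetical `planNextStep`, `runTool`, and `askUser` hooks, not any specific product’s API. The agent works through its tools on its own, keeps an inspectable log of steps, and only surfaces a chat prompt when it genuinely needs clarification:

```ts
// A minimal sketch of an agent loop where chat recedes into the background.
// planNextStep, runTool, and askUser are hypothetical stand-ins, not a real API.

type Step =
  | { kind: "tool"; name: string; input: string }
  | { kind: "clarify"; question: string }
  | { kind: "done"; output: string };

interface AgentHooks {
  planNextStep(task: string, log: string[]): Promise<Step>; // e.g. an LLM call
  runTool(name: string, input: string): Promise<string>;    // e.g. an MCP tool call
  askUser(question: string): Promise<string>;               // the only chat moment
}

async function runAgent(task: string, hooks: AgentHooks): Promise<string> {
  const log: string[] = []; // the "chat transcript" becomes a background trace

  while (true) {
    const step = await hooks.planNextStep(task, log);
    if (step.kind === "done") return step.output; // the person consumes the output
    if (step.kind === "clarify") {
      // Chat appears only when the agent needs the person to clarify something.
      log.push(`user: ${await hooks.askUser(step.question)}`);
    } else {
      // Otherwise the agent runs tools quietly and records the steps it took.
      log.push(`${step.name} → ${await hooks.runTool(step.name, step.input)}`);
    }
  }
}
```

The point isn’t this particular loop; it’s that the log is there if you want to check the steps the AI took, but the interface stays out of your way until the output is ready.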


The Receding Role of AI Chat

While chat interfaces to AI models aren't going away anytime soon, the increasing capabilities of AI agents are making the concept of chatting back and forth wi...

lukew.com

Sebastiaan de With, former designer at Apple and currently co-founder and designer at Lux (makers of Halide, Kino, Spectre, and Orion), imagined what the next era in iOS design might be. (WWDC, Apple’s developer conference, is next week. This is typically when they unveil the new operating systems that will launch in the fall. Rumors are flying as usual.)

But he starts with a history lesson:

Smart people study history to understand the future. If we were to categorize the epochs of iOS design, we could roughly separate them into the Shaded Age, the Adaptive Age, and the New Age.

The Shaded Age, or skeuomorphic age, took inspiration from the Dashboard feature of Mac OS X Tiger. Then came the Flat Age, brought on by the introduction of iOS 7.

de With’s concept mocks for the New Age are fantastic. Based on the physicality of visionOS, with specular highlights and reactive reflections, it’s luscious and reminds me of the first time I ever laid eyes on Aqua—the glossy, candy-like look of the original Mac OS X. Steve Jobs said at its introduction, “…one of the design goals was when you saw it you wanted to lick it.”

Close-up of a glass-rendered user interface

Sebastiaan de With: “Philosophically, if I was Apple, I’d describe this as finally having an interface that matches the beautiful material properties of its devices. All the surfaces of your devices have glass screens. This brings an interface of a matching material, giving the user a feeling of the glass itself coming alive.”


Physicality: the new age of UI

There’s a lot of rumors of a big impending UI redesign from Apple. Let’s imagine what’s (or what could be) next for the design of iPhones, Macs and iPads.

lux.camera

Following up on OpenAI’s acquisition of Jony Ive’s hardware startup, io, Mark Wilson, writing for Fast Company:

As Ive told me back in 2023, there have been only three significant modalities in the history of computing. After the original command line, we got the graphical user interface (the desktop, folders, and mouse of Xerox, Mac OS, and Windows), then voice (Alexa, Siri), and, finally, with the iPhone, multitouch (not just the ability to tap a screen, but to gesture and receive haptic feedback). When I brought up some other examples, Ive quickly nodded but dismissed them, acknowledging these as “tributaries” of experimentation. Then he said that to him the promise, and excitement, of building new AI hardware was that it might introduce a new breakthrough modality to interacting with a machine. A fourth modality.

Hmm, it hasn’t taken off yet because AR hasn’t really gained mainstream popularity, but I would argue that hand gestures in AR UIs are a fourth modality. But Ive thinks different. Wilson continues:

Ive’s fourth modality, as I gleaned, was about translating AI intuition into human sensation. And it’s the exact sort of technology we need to introduce ubiquitous computing, also called quiet computing and ambient computing. These are terms coined by the late UX researcher Mark Weiser, who in the 1990s began dreaming of a world that broke us free from our desktop computers to usher in devices that were one with our environment. Weiser did much of this work at Xerox PARC, the same R&D lab that developed the mouse and GUI technology that Steve Jobs would eventually adopt for the Macintosh. (I would also be remiss to ignore that ubiquitous computing is the foundation of the sci-fi film Her, one of Altman’s self-stated goalposts.)

Ah, essentially an always-on, always-watching AI that is ready to assist. But whatever form factor this device takes, it will likely depend on a smartphone:

The first io device seems to acknowledge the phone’s inertia. Instead of presenting itself as a smartphone-killer like the Ai Pin or as a fabled “second screen” like the Apple Watch, it’s been positioned as a third, er, um … thing next to your phone and laptop. Yeah, that’s confusing, and perhaps positions the io product as unessential. But it also appears to be a needed strategy: Rather than topple these screened devices, it will attempt to draft off them.

Wilson ends with the idea of a subjective computer, one that has personality and gives you opinions. He explains:

I think AI is shifting us from objective to subjective. When a Fitbit counts your steps and calories burned, that’s an objective interface. When you ask ChatGPT to gauge the tone of a conversation, or whether you should eat better, that’s a subjective interface. It offers perspective, bias, and, to some extent, personality. It’s not just serving facts; it’s offering interpretation.

The entire column is worth a read.


Can Jony Ive and Sam Altman build the fourth great interface? That's the question behind io

Where Meta, Google, and Apple zig, Ive and Altman are choosing to zag. Can they pull it off?

fastcompany.com

Nick Babich writing for UX Planet:

Because AI design and code generators quickly take an active part in the design process, it’s essential to understand how to make the most of these tools. If you’ve played with Cursor, Bolt, Lovable, or v0, you know the output is only as good as the input.

Well said, especially as prompting is the primary input for these AI tools. He goes on to enumerate the five parts of a good prompt. Worth a quick read.
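
His five parts are in the article itself, so I won’t restate them here. But as a rough illustration of “the output is only as good as the input,” a structured prompt for one of these tools might look something like this—the fields below are my own shorthand, not Babich’s list:

```ts
// An illustrative prompt for a prompt-to-code tool like v0 or Lovable.
// The structure is my own shorthand for "give the model enough context,"
// not a restatement of Babich's five parts.
const prompt = `
Goal: a pricing section with three tiers, the middle one visually emphasized.
Context: part of a marketing site built with React and Tailwind.
Constraints: semantic HTML, accessible labels, no extra UI libraries.
Style: follow our existing tokens (primary color, 8px spacing scale).
Output: a single self-contained component with sample data.
`.trim();

console.log(prompt); // paste into the tool's prompt box
```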


How to write better prompts for AI design & code generators

Because AI design and code generators quickly take an active part in the design process, it’s essential to understand how to make the most…

uxplanet.org

Josh Miller, writing in The Browser Company’s substack:

After a couple of years of building and shipping Arc, we started running into something we called the “novelty tax” problem. A lot of people loved Arc — if you’re here you might just be one of them — and we’d benefitted from consistent, organic growth since basically Day One. But for most people, Arc was simply too different, with too many new things to learn, for too little reward.

“Novelty tax” is another way of saying Arc used non-standard patterns that users just didn’t get. I love Arc. It’s my daily driver. But Miller is right that it does have a steep learning curve, so there is a natural ceiling to its market.

Miller’s conclusion is where things get really interesting:

Let me be even more clear: traditional browsers, as we know them, will die. Much in the same way that search engines and IDEs are being reimagined [by AI-first products like Perplexity and Cursor]. That doesn’t mean we’ll stop searching or coding. It just means the environments we do it in will look very different, in a way that makes traditional browsers, search engines, and IDEs feel like candles — however thoughtfully crafted. We’re getting out of the candle business. You should too.

“You should too.”

And finally, to bring it back to the novelty tax:

New interfaces start from familiar ones. In this new world, two opposing forces are simultaneously true. How we all use computers is changing much faster (due to AI) than most people acknowledge. Yet at the same time, we’re much farther from completely abandoning our old ways than AI insiders give credit for. Cursor proved this thesis in the coding space: the breakthrough AI app of the past year was an (old) IDE — designed to be AI-native. OpenAI confirmed this theory when they bought Windsurf (another AI IDE), despite having Codex working quietly in the background. We believe AI browsers are next.

Sad to see Arc’s slow death, but excited to try Dia soon.


Letter to Arc members 2025

On Arc, its future, and the arrival of AI browsers — a moment to answer the largest questions you've asked us this past year.

browsercompany.substack.com
A futuristic scene with a glowing, tech-inspired background showing a UI design tool interface for AI, displaying a flight booking project with options for editing and previewing details. The screen promotes the tool with a “Start for free” button.

Beyond the Prompt: Finding the AI Design Tool That Actually Works for Designers

There has been an explosion of AI-powered prompt-to-code tools within the last year. The space began with full-on integrated development environments (IDEs) like Cursor and Windsurf. These enabled developers to leverage AI assistants right inside their coding apps. Then came tools like v0, Lovable, and Replit, where users could prompt screens into existence at first and, before long, entire applications.

A couple weeks ago, I decided to test out as many of these tools as I could. My aim was to find the app that would combine AI assistance, design capabilities, and the ability to use an organization’s coded design system.

While my previous essay was about the future of product design, this article will dive deep into a head-to-head between all eight apps that I tried. I recorded the screen as I did my testing, so I’ve put together a video as well, in case you didn’t want to read this.

Karri Saarinen, writing for the Linear blog:

Unbounded AI, much like a river without banks, becomes powerful but directionless. Designers need to build the banks and bring shape to the direction of AI’s potential. But we face a fundamental tension in that AI sort of breaks our usual way of designing things, working back from function, and shaping the form.

I love the metaphor of AI as a river and designers as the banks. It feels very much in line with my notion that we need to become even better curators.

Saarinen continues, critiquing the generic chatbox as the primary form of interacting with AI:

One way I visualize this relationship between the form of traditional UI and the function of AI is through the metaphor of a ‘workbench’. Just as a carpenter’s workbench is familiar and purpose-built, providing an organized environment for tools and materials, a well-designed interface can create productive context for AI interactions. Rather than being a singular tool, the workbench serves as an environment that enhances the utility of other tools – including the ‘magic’ AI tools.

Software like Linear serves as this workbench. It provides structure, context, and a specialized environment for specific workflows. AI doesn’t replace the workbench, it’s a powerful new tool to place on top of it.

It’s interesting. I don’t know what Linear is telegraphing here, but if I had to guess, I wonder if it’s closer to being field-specific or workflow-specific, similar to Generative Fill in Photoshop. It’s a text field—not a textarea—limited to a single workflow.
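
If that reading is right, the difference from a generic chatbox is scope: the AI input is bound to one workflow and its context, the way Generative Fill is bound to a selection. Here is a rough sketch of the idea—my own illustration in TypeScript, not Linear’s actual implementation, with a hypothetical `runWorkflowAction` helper:

```ts
// A sketch of a workflow-scoped AI field (my illustration, not Linear's API):
// a single text field whose prompt is wrapped in the context of one specific
// workflow, the way Generative Fill is scoped to a selection in Photoshop.

interface WorkflowContext {
  workflow: "triage-issue" | "write-changelog" | "summarize-cycle";
  projectId: string;
  selection?: string; // e.g. the issue or text the user is acting on
}

async function runWorkflowAction(
  ctx: WorkflowContext,
  userInput: string, // the "text field, not textarea" input
  complete: (prompt: string) => Promise<string> // stand-in for an LLM call
): Promise<string> {
  // The model never sees an open-ended conversation, only one bounded task
  // framed by the workflow it sits inside.
  const prompt = [
    `Workflow: ${ctx.workflow}`,
    `Project: ${ctx.projectId}`,
    ctx.selection ? `Selection: ${ctx.selection}` : "",
    `Instruction: ${userInput}`,
  ]
    .filter(Boolean)
    .join("\n");

  return complete(prompt);
}
```

The workbench, in other words, supplies the structure and the context; the AI field is just one purpose-built tool sitting on top of it.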


Design for the AI age

For decades, interfaces have guided users along predefined roads. Think files and folders, buttons and menus, screens and flows. These familiar structures organize information and provide the comfort of knowing where you are and what's possible.

linear.app

Such a gorgeous visual essay from Amelia Wattenberger. Beyond being wonderful to look at, the content is just as thought-provoking. Her experiment towards the middle of the piece is interesting. In our world of flat design and design systems, Amelia is truly innovating.

People made of yarn working on room-sized computers

Our interfaces have lost their senses

With increasing amounts of AI chatbots, we're losing even more: texture, color, shape. Instead of interactive controls, we have a text input. Want to edit an image? Type a command. Adjust a setting? Type into a text box. Learn something? Read another block of text.

wattenberger.com

The New FOX Sports Scorebug

I was sitting on a barstool next to my wife in a packed restaurant in Little Italy. We were the lone Kansas City Chiefs supporters in a nest full of hipster Philadelphia Eagles fans. After Jon Batiste finished his fantastic rendition of the national anthem, and the teams took the field for kickoff, I noticed something. The scorebug—the broadcast industry’s term for the lower-third or chyron graphic at the bottom of the screen—was different, and in a good way.

A Bluesky post praising the minimalistic Super Bowl lower-thirds, with a photo of a TV showing the Chiefs vs. Eagles game and sleek on-screen graphics.

I posted about it seven minutes into the first quarter, saying I appreciated “the minimalistic lower-thirds for this Super Bowl broadcast.” It was indeed refreshing, a break from the usual over-the-top, 3D-animated sparkle. I thought the graphics were clear and utilitarian while being exquisitely designed. They weren’t distracting from the action. As with any good interface design, this new scorebug kept the focus on the players and the game, not itself. I also thought they were a long-delayed response to Apple’s Friday Night Baseball scorebug.

Surreal scene of a robotic chicken standing in the center of a dimly lit living room with retro furnishings, including leather couches and an old CRT television emitting a bright blue glow.

Chickens to Chatbots: Web Design’s Next Evolution

From the early 2000s to the mid-oughts, every designer I knew wanted to be featured on the FWA, a showcase for cutting-edge web design. While many of the earlier sites were Flash-based, it’s also where I discovered the first uses of parallax, Paper.js, and Three.js. Back then, websites were meant to be explored and their interfaces discovered.

Screenshot of The FWA website from 2009 displaying a dense grid of creative web design thumbnails.

A grid of winners from The FWA in 2009. Source: Rob Ford.

One of my favorite sites of that era was Burger King’s Subservient Chicken, where users could type free text into a chat box to command a man dressed in a chicken suit. In a full circle moment that perfectly captures where we are today, we now type commands into chat boxes to tell AI what to do.

Apple VR headset on a table

Thoughts on Apple Vision Pro

Apple finally launched its Vision Pro “spatial computing” device in early February. We immediately saw TikTok memes of influencers being ridiculous. I wrote about my hope for the Apple Vision Pro back in June 2023, when it was first announced. When preorders opened for Vision Pro in January, I told myself I wouldn’t buy it. I couldn’t justify the $3,500 price tag. Out of morbid curiosity, I would lurk in the AVP subreddits to live vicariously through those who did take the plunge.

After about a month of reading all the positives from users about the device, I impulsively bought an Apple Vision Pro. I placed my order online at noon and picked it up just two hours later at an Apple Store near me.

Many great articles and YouTube videos have already been produced, so this post won’t be a top-to-bottom review of the Apple Vision Pro. Instead, I’ll try to frame it from my standpoint as someone who has designed user experiences for VR.