In just about a year, Bluesky has doubled its user base from 20 million to 40 million. Last year, in “the wake of Donald Trump’s re-election as president, and Elon Musk’s continued degradation of X, Bluesky welcomed an exodus of liberals, leftists, journalists, and academic researchers, among other groups.” Writing in his Platformer newsletter, Casey Newton reflects on the year, surfacing the challenges Bluesky has tried to solve in reimagining a more “feel-good feed.”

It’s clear that you can build a nicer online environment than X has; in many ways Bluesky already did. What’s less clear is that you can build a Twitter clone that mostly makes people feel good. For as vital and hilarious as Twitter often was, it also accelerated the polarization of our politics and often left users feeling worse than they did before they opened it.

Bluesky’s ingenuity in reimagining feeds and moderation tools has been a boon to social networks, which have happily adopted some of its best ideas. (You can now find “starter packs” on both Threads and Mastodon.) Ultimately, though, it has the same shape and fundamental dynamics as a place that even its most active users called “the Hellsite.”

Bluesky began by rethinking many core assumptions about social networks. To realize its dream of a feel-good feed, though, it will likely need to rethink several more.

I agree with Newton. I’m not sure that in this day and age, building a friendlier, snark- and toxicity-free social media platform is possible. Users are too used to hiding behind keyboards. It’s not only the shitposters but also the online mobs who jump on anything that seems outside the norms of whatever community a user might be in.

Newton again:

Nate Silver opened the latest front in the Bluesky debate in September with a post about “Blueskyism,” which he defines as “not a political movement so much as a tribal affiliation, a niche set of attitudes and style of discursive norms that almost seem designed in a lab to be as unappealing as possible to anyone outside the clique.” Its hallmarks, he writes, are aggressively punishing dissent, credentialism, and a dedication to the proposition that we are all currently living through the end of the world.

Mobs, woke or otherwise, silence speech and freeze ideas into orthodoxy.

I miss the pre-Elon Musk Twitter. But I can’t help but think it would have become just as polarized and toxic regardless of Musk transforming it into X.

I think the form of text-based social media from the last 20 years is akin to manufacturing tobacco in the mid-1990s. We know it’s harmful. It may be time to slap a big warning label on these platforms and discourage use.

(Truth be told, I’m on the social networks—see the follow icons in the sidebar—but mainly to give visibility into my work here, though largely unsuccessfully.)

White rounded butterfly-shaped 3D icon with soft shadows centered on a bright blue background.

The Bluesky exodus, one year later

The company has 40 million users and big plans for the future. So why don’t its users seem happy? PLUS: The NEO Home Robot goes viral + Ilya Sutskever’s surprising deposition

platformer.news

Robin Sloan wrote a thought piece exploring what “extended thinking” and “reasoning” models actually mean.

…the models can only “think” by spooling out more text — while human thinking often does the opposite: retreats into silence, because it doesn’t have words yet to say what it wants to say.

That’s an interesting point Sloan makes. I believe there’s nuance though.

I’ve long felt that I do my best thinking by writing. When I work through a gnarly design problem, I’m writing first, then sketching, then maybe Figma-ing. But that could be after a walk, a shower, or doing the dishes.

Diagonal black comet-like streak across a pink-red sky with a pale blue planet and scattered stars.

Thinking modes

Floating in linguistic space.

robinsloan.com

I think the headline is a hard stance, but I appreciate the sentiment. All the best designers and creatives—including developers—I’ve ever worked with do things on the side. Or in Rohit Prakash’s words, they tinker. They’re always making something, learning along the way.

Prakash, writing in his blog:

Acquiring good taste comes through using various things, discarding the ones you don’t like and keeping the ones you do. if you never try various things, you will not acquire good taste.

It’s important for designers to see other designs and use other products, especially if you’re a software designer. It’s equally important to look up from Dribbble, Behance, Instagram, and even this blog and go experience something unrelated to design. Art, concerts, cooking. All of it gets synthesized through your POV and becomes your taste.

Large white text "@seatedro on x dot com" centered on a black background.

If you don’t tinker, you don’t have taste

programmer by day, programmer by night.

seated.ro

In a very gutsy move, Grammarly is rebranding to Superhuman. I was definitely scratching my head when the company acquired the eponymous email app back in June. Why is this spellcheck-on-steroids company buying an email product?

Turns out the company has been quietly acquiring other products too, like Coda, a collaborative document platform similar to Notion, building the company into an AI-powered productivity suite.

So the name Superhuman makes sense.

Grace Snelling, writing in Fast Company about the rebrand:

[Grammarly CEO Shishir] Mehrotra explains it like this: Grammarly has always run on the “AI superhighway,” meaning that, instead of living on its own platform, Grammarly travels with you to places like Google Docs, email, or your Notes app to help improve your writing. Superhuman will use that superhighway to bring a huge new range of productivity tools to wherever you’re working.

In shedding the Grammarly name, Mehrotra says:

“The trouble with the name ‘Grammarly’ is, like many names, its strength is its biggest weakness: it’s so precise,” Mehrotra says. “People’s expectations of what Grammarly can do for them are the reason it’s so popular. You need very little pitch for what it does, because the name explains the whole thing … As we went and looked at all the other things we wanted to be able to do for you, people scratch their heads a bit [saying], ‘Wait, I don’t really perceive Grammarly that way.’”

The company tapped branding agency Smith & Diction, the firm behind Perplexity’s brand identity.

Grammarly began briefing the Smith & Diction team on the rebrand in early 2025, but the company didn’t officially select its new name until late June, when the Superhuman acquisition was completed. For Chara and Mike Smith, the couple behind Smith & Diction, that meant there were only about three months to fully realize Superhuman’s branding.

Ouch, just three months for a complete rebrand. Ambitious indeed, but they hit a home run with the icon, an arrow cursor that also morphs into a human with a cape, lovingly called “Hero.”

In their case study writeup, one of the Smiths says:

I was working on logo concepts and I was just drawing the basic shapes, you know the ones: triangles, circles, squares, octagons, etc., to see if I could get a story to fall out of any of them. Then I drew this arrow and was like hmm, that kinda looks like a cursor, oh wow it also kinda looks like a cape. I wonder if I put a dot on top of tha…OH MY GOD IT’S A SUPERHERO.

Check out the full case study for example visuals from the rebrand and some behind-the-scenes sketches.

Large outdoor billboard with three colorful panels reading "The power to be more human." and "SUPERHUMAN", with abstract silhouetted figures.

Inside the Superhuman effort to rebrand Grammarly

(Gift link) CEO Shishir Mehrotra and the design firm behind Grammarly's name change explain how they took the company's newest product and made it the face for a brand of workplace AI agents.

fastcompany.com

Apologies for sharing back-to-back articles from NN/g, but this is a good comprehensive index of all the AI-related guides the firm has published. Start here if you’re just getting into it.

Highlights from my POV:

  • Your AI UX Intern: Meet Ari. AI tools in UX act like junior interns whose output serves as a starting draft needing review, specific instructions, and added context. Their work should be checked and not used for final products or decisions without supervision.
  • The Future-Proof Designer. AI speeds up product development and automates design tasks, but creates risks like design marginalization and information overload. Designers must focus on strategic thinking, outcomes, and critical judgment to ensure decisions benefit users and business value.
  • Design Taste vs. Technical Skills in the Era of AI. Generative AI has equalized access to design output, but quality depends on creative discernment and taste, which remain essential for impactful results.
Using AI for UX Work: Study Guide — profile head with magnifying glass, robot face, papers, speech bubble and vector-cursor icons; NN/G logo

Using AI for UX Work: Study Guide

Unsure where to start? Use this collection of links to our articles and videos to learn about the best ways to use artificial intelligence for UX work.

nngroup.com

Leave it to NN/g to evaluate the AI prompt-to-code tool landscape with some rigor. Huei-Hsin Wang and Megan Brown cover over a dozen tools, including ChatGPT, Claude, UX Pilot, Uizard, Relume, Stitch, Bolt, Lovable, v0, Replit, Figma Make, Magic Patterns, and Subframe. They use a human designer as the control.

Among their conclusions:

AI’s limited grasp of design nuances and inconsistent output make it best suited for ideation, concept exploration, and early-phase prototype testing, rather than later stages. While you likely won’t take an AI-generated prototype straight to production, these tools can help you break through creative blocks and explore new directions quickly.

I think the best part is they shared screenshots of outputs in a FigJam board.

Header "Good from Afar, But Far from Good: AI Prototyping in Real Design Contexts" with teal robot icon and dotted wireframe UI.

Good from Afar, But Far from Good: AI Prototyping in Real Design Contexts

AI prototyping tools follow general directions but lack the judgment and nuance of an experienced designer.

nngroup.com

I’ve been a big fan of node-based UIs since I first experimented with Shake in the early 2000s. It’s kind of weird to wrap your head around, especially if you’re used to layers in Photoshop or Figma. The easiest way to think about nodes is to rotate the layer stack 90 degrees. Each node takes inputs on the left, performs one distinct process on them, and emits outputs on the right. You connect multiple nodes together to process assets and form your final composition. Popular apps with node-based workflows today include Unreal Engine (Blueprints), DaVinci Resolve (Fusion and Color), and n8n.
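
To make that mental model concrete, here’s a minimal sketch in TypeScript (the node names and string placeholders are mine, purely illustrative; real tools pass image buffers around, not strings): each node is a function from inputs to an output, and the graph is just outputs wired into inputs.

```typescript
// A node maps named inputs to an output.
type Node<In, Out> = (input: In) => Out;

// Three toy nodes: load an image, blur it, composite one image over another.
const loadImage: Node<string, string> = (path) => `image(${path})`;
const blur: Node<string, string> = (img) => `blur(${img})`;
const composite: Node<{ fg: string; bg: string }, string> = ({ fg, bg }) =>
  `composite(${fg} over ${bg})`;

// Wiring the graph: two branches feed the composite node,
// just like connecting nodes left-to-right on a canvas.
const finalFrame = composite({
  fg: blur(loadImage("rowboat.png")), // branch 1: load, then blur
  bg: loadImage("canyon.png"),        // branch 2: load
});

console.log(finalFrame);
// composite(blur(image(rowboat.png)) over image(canyon.png))
```

Layers in Photoshop answer “what’s on top?”; a node graph answers “what feeds into what?”, which is why branching and reuse come naturally.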

ComfyUI is another open source tool that uses the same node graph architecture. Created in 2023 to put a UI on visual generative AI models like Stable Diffusion, which were appearing around that time, it’s become popular among artists for wielding the plethora of image and video gen AI models.

Fast-forward to last week, when Figma announced they had acquired Weavy, a much friendlier and cloud-based version of ComfyUI.

Weavy brings the world’s leading AI models together with professional editing tools on a single, browser-based canvas. With Weavy, you can choose the model you want for a task (e.g. Seedance, Sora, and Veo for cinematic video; Flux and Ideogram for realism; and Nano-Banana or Seedream for precision) and compose powerful primitives using generative AI outputs and hands-on edits (e.g. adjusting lighting, masking an object, color grading a shot). The end result is an inspiring environment for creative exploration and a flexible media pipeline where every output feeds the next.

This node-based approach brings a new level of craft and control to AI generation. Outputs can be branched, remixed, and refined, combining creative exploration with precision and craft. The Weavy team has inspired us with the balance they’ve struck between simplicity, approachability, and power. They’ve also created a tool that’s just a joy to use.

I must admit I had not heard about Weavy before the announcement. I had high hopes for Visual Electric, but it never quite lived up to its ambitions. I proceeded to watch all the official tutorial videos on YouTube, and I love it. It seems so much easier to use than ComfyUI. Let’s see what Figma does with the product.

Node-based image editor with connected panels showing a man in a rowboat on water then composited floating over a deep canyon.

Introducing Figma Weave: the next generation of AI-native creation at Figma

Figma has acquired Weavy, a platform that brings generative AI and professional editing tools into the open canvas.

figma.com

In graphic design news, a new version of the Affinity suite dropped last week, and it’s free. Canva purchased Serif, the company behind the Affinity products, last year. After about a year of engineering, they’ve combined all the products into a single app to offer maximum flexibility. And they made it free.

Of course, that sparks debate.

Joe Foley, writing for Creative Bloq explains:

…A natural suspicion of big corporations is causing some to worry about what the new Affinity will become. What’s in it for Canva?

Theories abound. Some think the app will start to show adverts like many free mobile apps do. Others think it will be used to train AI (something Canva denies). Some wonder if Canva’s just doing it to spite Adobe. “Their objective was to undermine Adobe, not provide for paying customers. Revenge instead of progress,” one person thinks.

Others fear Affinity’s tools will be left to stagnate. “If you depend on a software for your design work it needs to be regularly updated and developed. Free software never has that pressure and priority to be kept top notch,” one person writes.

AI features are gated behind Canva’s paid premium subscription plans. This makes sense, as AI features carry inference costs. And with Adobe going all out on AI, gen AI is now table stakes for creative and design programs.

Photo editor showing a man in a green jacket with gold chains against a purple gradient background, layers panel visible.

Is Affinity’s free Photoshop rival too good to be true?

Designers are torn over the new app.

creativebloq.com

Mnemonics—short pieces of audio branding—are one of those trends that come and go. The one for Intel was pretty well-known for a long time and spurred the creation of similar sonic identities (e.g., T-Mobile, Netflix). Apple announced in mid-October that they were dropping the “Plus” from their streaming service, which will be known simply as “Apple TV.” The news prompted a bunch of punditry around probable confusion with the Apple TV app and the Apple TV hardware device.

Yesterday, Apple shared the new identity work that drops the “+” from the streaming service name and logo. It takes the form of an opening logo and mnemonic that will appear in front of shows.

Here is a longer version that’ll appear before films.

In an interview, Finneas O’Connell (who goes by the stage name Finneas), brother of pop star Billie Eilish and her main collaborator, spilled the beans on how he came up with it. Chris Willman, writing for Variety:

Speaking via Zoom from his home studio, Finneas points to the piano behind him as a starting point for a fleeting piece of music whose instrumentation isn’t easy to pin down before it’s gone in one ear and out the other at the start of a viewing experience. “I have my upright piano back here, so I sat and started there. I’m always more able to make something quickly on a real instrument than I am with software. I played a chord that felt kind of hopeful and kind of optimistic, but had gravity to it and hopefully had a little bit of an enigmatic, mysterious quality. And so I had this chord thing happening and then I started building the sounds around it. I had these pieces of zinc and I was hitting them and then reversing the audio, and I was playing real piano and then reversing that, and playing these bass synthesizers and then pitching those up and gliding them down.”

Beyond the cool sound, I love how the logo sting seems to be inspired by early TV logo idents like this one for NBC from 1967.

Update November 6, 2025 3:00 PM PT:

Ad Age reports that the logo sting was filmed in-camera! Writing in the industry mag, Tim Nudd says the branding was done by Apple’s longtime agency TBWA\Media Arts Lab and production company Optical Arts, and that its “lush visuals are meant to capture the platform’s cinematic ambitions and remind viewers that Apple TV is where prestige storytelling lives.”

The report includes a link to a 33-second behind-the-scenes video:

Smiling man with shoulder-length red hair and beard in a black suit next to a black panel with iridescent Apple TV logo

Finneas on Creating a New Mnemonic Intro for Apple Originals — His Shortest Music Ever, but Possibly Soon to Be the Most Ubiquitous (EXCLUSIVE)

Finneas talks the assignment to do his shortest piece of music ever — the few seconds of sound that will precede every Apple TV program from now on.

variety.com

I’ve been on the receiving end of Layer 1226 before and it’s not fun. While I’m pretty good with my layer naming hygiene, I’m not perfect. So I welcome anything that can help rename my layers. Apparently, when Adobe showed off this new AI feature at their Adobe MAX user conference last week, it drew a big round of applause. (Figma’s had this feature since June 2024.)

There’s more than just renaming layers though. Adobe is leaning into conversational UI for editing too. For new users coming to editing tools, this makes a lot of sense because the learning curve for Photoshop is very steep. But as I’ve always said, professionals will also need fine-grained controls.

Writing for CNET, Katelyn Chedraoui:

Renaming layers is just one of many things Adobe’s new AI assistants will be able to do. These chatbot-like tools will be added to Photoshop and Express. They have an emphasis on “conversational, agentic” experiences — meaning you can ask the chatbot to make edits, and it can independently handle them.

Express’s AI assistant is similar to using a chatbot. Once you toggle on the tool in the upper left corner, a conversation window pops up. You can ask the AI to change the color of an object or remove an obtrusive element. While pro users might be comfortable making those edits manually, the AI assistant might be more appealing to its less experienced users and folks working under a time crunch.

A peek into Adobe’s future reveals more agentic experiences:

Also announced on Tuesday is Project Moonlight, a new platform in beta on Adobe’s AI hub, Firefly. It’s a new tool that hopes to act as a creative partner. With your permission, it uses your data from Adobe platforms and social media accounts to help you create content. For example, you can ask it to come up with 20 ideas for what to do with your newest Lightroom photos based on your most successful Instagram posts in the past. 

These AI efforts represent a range of what conversational editing can look like, Mike Polner, Adobe Firefly’s vice president of product marketing for creators said in an interview. 

“One end of the spectrum is [to] type in a prompt and say, ‘Make my hat blue.’ That’s very simplistic,” said Polner. “With Project Moonlight, it can understand your context, explore and help you come up with new ideas and then help you analyze the content that you already have,” Polner said.

Photoshop AI Assistant UI over stone church landscape with large 'haven' text and command bubbles like 'Increase saturation'.

Photoshop’s New AI Assistant Can Rename All Your Layers So You Don’t Have To

The chatbot-like AI assistant isn’t out yet, but there is at least one practical way to use it.

cnet.com

In thinking about the three current AI-native web browsers, Fanny on Medium looks at what lessons product designers can take from their different approaches.

On Perplexity Comet:

Design Insight: Comet succeeds by making AI feel like a natural extension of browsing, not an interruption. The sidecar model is brilliant because it respects the user’s primary task (reading, researching, shopping) while offering help exactly when context is fresh. But there’s a trade-off — Comet’s background assistant, which can handle multiple tasks simultaneously while you work, requires extensive permissions and introduces real security concerns.

On ChatGPT Atlas:

Design Insight: Atlas is making a larger philosophical statement — that the future of computing isn’t about better search, it’s about conversation as an interface. The key product decision here is making ChatGPT’s memory and context awareness central. Atlas remembers what sites you’ve visited, what you were working on, and uses that history to personalize responses. Ask “What was that doc I had my presentation plan in?” and it finds it.

On The Browser Company’s Dia:

Design Insight: Dia is asking the most interesting question — what happens when AI isn’t a sidebar or a search replacement, but a fundamental rethinking of input methods? The insertion cursor, the mouse, the address bar — these are the primitives of computing. Dia is making them intelligent.

She concludes that they “can’t all be right. But they’re probably all pointing at pieces of what comes next.”

I do think the answer is a combo of the three, and Atlas is likely headed in the right direction. For AI to be truly assistive, it has to have relevant context. Since a lot of our lives are increasingly on the internet via web apps—and nearly everything is a web app these days—ChatGPT’s profile of you will have the most context, including your chats with the chatbot.

I began using Perplexity because I appreciated its accuracy compared with ChatGPT; this was pre-web search. But even with web search built into ChatGPT 5, I still find Perplexity’s (and therefore Comet’s) approach to be more trustworthy.

My conclusion stands though: I’m still waiting on the Arc-Dia-Comet browser smoothie.

Three app icons on dock: blue flower with paper plane, rounded square with sunrise gradient, and dark circle with white arches.

The AI Browser Wars: What Comet, Atlas, and Dia Reveal About Designing for AI-First Experiences

Last week, I watched OpenAI’s Sam Altman announce Atlas with the kind of confidence usually reserved for iPhone launches. “Tabs were…

uxplanet.org

Did you know that Apple made Office before Microsoft made Office? It was called AppleWorks and launched in 1984 for the Apple II. They’d bring it to the Mac in 1991 and call it ClarisWorks, because Apple had spun off a software subsidiary for who knows what reason.

Howard Oakley recently wrote a brief history of AppleWorks and shared some nice visuals. Though I wish he had included an image from that original text-based Apple II AppleWorks as well.

AppleWorks screenshot: Certificate of Achievement for Marcia Marks, ornate black border, yellow seal, color palette panel

A brief history of AppleWorks

It took 7 years for it to become available for the Mac, changed names and hands twice, but somehow survived until 2007.

eclecticlight.co

The good folks at Linear have proven that a design-led company can carve out a space against an entrenched company like Atlassian. They do this by having very strong opinions about how things should work and then pixelfucking the hell out of their products. I truly admire their dedication to craft.

When Apple introduced Liquid Glass, Linear decided to write their own version of it for more control. Robb Böhnke, writing on Linear’s blog:

Liquid Glass is a beautiful successor to Aqua. Its primary purpose is to feel fluid and friendly for a broad consumer audience. Apple has to design for every kind of app—education, entertainment, banking, fitness—and build systems that adapt to all of them.

We have a different set of constraints. Our users come to Linear to do a specific kind of work in a focused environment. That gives us freedom to push the design in specific ways Apple can’t.

In that sense, we saw an opportunity to take Liquid Glass’s aesthetic qualities—its translucency, depth, and physicality—and apply them with a ProKit philosophy: purpose-built, disciplined, and designed for sustained focus.

ProKit—as Böhnke explained—was Apple’s “pro” theme, the slightly flatter, less flashy big brother to the lickable Aqua. It was “built for professional tools like Final Cut or Logic with complex, information-dense workflows where clarity and control are more important than visual flourish.”

Dark schematic: two circular 3×3 shaded-grid nodes linked by three rounded horizontal tracks with downward arrows.

A Linear spin on Liquid Glass

Earlier this year, we were ready to redesign our mobile app. The original version had served us well, but it was built with a narrow use case foremost in mind: individual contributors engaging with issues.

linear.app

To close us out on Halloween, here’s one more archive full of spooky UX called the Dark Patterns Hall of Shame. It’s managed by a team of designers and researchers, who have dedicated themselves to identifying and exposing dark patterns and unethical design examples on the internet. More than anything, I just love the names some of these dark patterns have, like Confirmshaming, Privacy Zuckering, and Roach Motel.

Small gold trophy above bold dark text "Hall of shame. design" on a pale beige background.

Collection of Dark Patterns and Unethical Design

Discover a variety of dark pattern examples, sorted by category, to better understand deceptive design practices.

hallofshame.design

’Tis the season for online archives. From GQ comes this archive of the work of Virgil Abloh, the multi-hyphenate creative powerhouse who started as an intern at Fendi and rose to found his own streetwear label Off-White, before becoming artistic director of Louis Vuitton’s menswear collection. He had collabs with Nike, IKEA, and artist Jenny Holzer.

I do think my favorite from this archive is his collection of LV bags. I’m used to seeing them in monochromatic colors, not these bright ones.

Inside the Top Secret Virgil Abloh Archive

In the years since the premature death of the former Off-White and Louis Vuitton creative director, a team of archivists has tirelessly catalogued one of the most remarkable private fashion collections ever assembled. We’re revealing it here for the first time.

gq.com

In a world where case studies dominate portfolios, explaining the problem and sharing the outcomes, a visuals-only gallery feels old-fashioned. But Pentagram has earned the right to compile its own online monograph. It’s one of the very few agencies in the world that could pull together an archive like this, featuring over 2,000 projects spanning its 53-year existence.

Try searches like: album covers, New York City, SNL, and Paula Scher.

The folks at Pentagram aren’t complete heretics. They have a more traditional case studies section here.

Dark gallery grid of small thumbnails with a centered translucent search box saying "Show me album covers".

Archive — Pentagram

A place where we’ve condensed over 50 years of our design prowess into an immersive exploration. Delve into 2,000+ projects, spanning from 1972 to the present and beyond, all empowered by Machine Learning.

pentagram.com

Celine Nguyen wrote a piece that connects directly to what Ethan Mollick calls “working with wizards” and what SAP’s Ellie Kemery describes as the “calibration of trust” problem. It’s about how the interfaces we design shape the relationships we have with technology.

The through-line is metaphor. For LLMs, that metaphor is conversation. And it’s working—maybe too well:

Our intense longing to be understood can make even a rudimentary program seem human. This desire predates today’s technologies—and it’s also what makes conversational AI so promising and problematic.

When the metaphor is this good, we forget it’s a metaphor at all:

When we interact with an LLM, we instinctively apply the same expectations that we have for humans: If an LLM offers us incorrect information, or makes something up because the correct information is unavailable, it is lying to us. …The problem, of course, is that it’s a little incoherent to accuse an LLM of lying. It’s not a person.

We’re so trapped inside the conversational metaphor that we accuse statistical models of having intent, of choosing to deceive. The interface has completely obscured the underlying technology.

Nguyen points to research showing frequent chatbot users “showed consistently worse outcomes” around loneliness and emotional dependence:

Participants who are more likely to feel hurt when accommodating others…showed more problematic AI use, suggesting a potential pathway where individuals turn to AI interactions to avoid the emotional labor required in human relationships.

However, replacing human interaction with AI may only exacerbate their anxiety and vulnerability when facing people.

This isn’t just about individual users making bad choices. It’s about an interface design that encourages those choices by making AI feel like a relationship rather than a tool.

The kicker is that we’ve been here before. In 1964, Joseph Weizenbaum created ELIZA, a simple chatbot that parodied a therapist:

I was startled to see how quickly and how very deeply people conversing with [ELIZA] became emotionally involved with the computer and how unequivocally they anthropomorphized it…What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.

Sixty years later, we’ve built vastly more sophisticated systems. But the fundamental problem remains unchanged.

The reality is we’re designing interfaces that make powerful tools feel like people. Susan Kare’s icons for the Macintosh helped millions understand computers. But they didn’t trick people into thinking their computers cared about them.

That’s the difference. And it matters.

Old instant-message window showing "MeowwwitsMadix3: heyyy" and "are you mad at me?" with typed reply "no i think im just kinda embarassed" and buttons Warn, Block, Expressions, Games, Send.

how to speak to a computer

against chat interfaces ✦ a brief history of artificial intelligence ✦ and the (worthwhile) problem of other minds

personalcanon.com

Speaking of trusting AI, in a recent episode of Design Observer’s Design As, Lee Moreau speaks with four industry leaders about trust and doubt in the age of AI.

We’ve linked to a story about Waymo before, so here’s Ryan Powell, head of UX at Waymo:

Safety is at the heart of everything that we do. We’ve been at this for a long time, over a decade, and we’ve taken a very cautious approach to how we scale up our technology. As designers, what we have really focused on is that idea that more people will use us as a serious transportation option if they trust us. We peel that back a little bit. Okay, well, How do we design for trust? What does it actually mean?

Ellie Kemery, principal research lead, advancing responsible AI at SAP, on maintaining critical thinking and transparency in AI-driven products:

We need to think about ethics as a part of this because the unintended consequences, especially at the scale that we operate, are just too big, right?

So we focus a lot of our energy on value, delivering the right value, but we also focus a lot of our energy on making sure that people are aware of how the technology came to that output,…making sure that people are in control of what’s happening at all times, because at the end of the day, they need to be the ones making the call.

Everybody’s aware that without trust, there is no adoption. But there is something that people aren’t talking about as much, which is that people should also not blindly trust a system, right? And there’s a huge risk there because, humans we tend to, you know, we’ll try something a couple of times and if it works it works. And then we lose that critical thinking. We stop checking those things and we simply aren’t in a space where we can do that yet. And so making sure that we’re focusing on the calibration of trust, like what is the right amount of trust that people should have to be able to benefit from the technology while at the same time making sure that they’re aware of the limitations.

Bold white letters in a 3x3 grid reading D E S / I G N / A S on a black background, with a right hand giving a thumbs-up over the right column.

Design as Trust | Design as Doubt

Explore how designers build trust, confront doubt, and center equity and empathy in the age of AI with leaders from Adobe, Waymo, RUSH, and SAP

designobserver.com

Ethan Mollick, a professor of entrepreneurship at the Wharton School, says that AI has gotten so good that our relationship with it is changing. “We’re moving from partners to audience, from collaboration to conjuring,” he says.

He fed NotebookLM his book and 140 Substack posts and asked for a video overview. AI famously hallucinates. But Mollick found no factual errors in the six-minute video.

We’re shifting from being collaborators who shape the process to being supplicants who receive the output. It is a transition from working with a co-intelligence to working with a wizard. Magic gets done, but we don’t always know what to do with the results. This pattern — impressive output, opaque process — becomes even more pronounced with research tasks.

Mollick believes that the most wizard-like model today is GPT-5 Pro. He uploaded an academic paper that took him a year to write, which was peer-reviewed, and was then published in a major journal…

Nine minutes and forty seconds later, I had a very detailed critique. This wasn’t just editorial criticism, GPT-5 Pro apparently ran its own experiments using code to verify my results, including doing Monte Carlo analysis and re-interpreting the fixed effects in my statistical models. It had many suggestions as a result (though it fortunately concluded that “the headline claim [of my paper] survives scrutiny”), but one stood out. It found a small error, previously unnoticed. The error involved two different sets of numbers in two tables that were linked in ways I did not explicitly spell out in my paper. The AI found the minor error, no one ever had before.

Later in his post, Mollick says that there’s a problem with this wizardry—it’s too opaque. So what can we do?

First, learn when to summon the wizard versus when to work with AI as a co-intelligence or to not use AI at all. AI is far from perfect, and in areas where it still falls short, humans often succeed. But for the increasing number of tasks where AI is useful, co-intelligence, and the back-and-forth it requires, is often superior to a machine alone. Yet, there are, increasingly, times when summoning a wizard is best, and just trusting what it conjures.

Second, we need to become connoisseurs of output rather than process. We need to curate and select among the outputs the AI provides, but more than that, we need to work with AI enough to develop instincts for when it succeeds and when it fails.

And lastly, trust it. Trust the technology, he suggests. “The question isn’t ‘Is this completely correct?’ but ‘Is this useful enough for this purpose?’”

I think we’re in that transition period. AI is indeed dastardly great at some things and constantly getting better at the tasks it’s not. But we all know where this is headed.

Witch hat hovering over a desktop monitor with circuit-like lines flowing into the screen, small coffee mug on the desk.

On Working with Wizards

Verifying magic on the jagged frontier

oneusefulthing.org

In this era of AI, we’ve been taught that LLMs are probabilistic, not deterministic, and that they will sometimes hallucinate. There’s a saying in AI circles that humans are right about 80% of the time, and so are AIs. But sometimes anything less than 100% accuracy is unacceptable. Accountants need to be 100% accurate, lest they lose track of money for their clients or businesses.

And that’s the problem Intuit had to solve to roll out their AI agent. Sean Michael Kerner, writing in VentureBeat:

Even when its accounting agent improved transaction categorization accuracy by 20 percentage points on average, they still received complaints about errors.

“The use cases that we’re trying to solve for customers include tax and finance; if you make a mistake in this world, you lose trust with customers in buckets and we only get it back in spoonfuls,” Joe Preston, Intuit’s VP of product and design, told VentureBeat.

So they built an agent that queries data from a multitude of sources and returns those exact results. But do users trust those results? It comes down to a design decision on being transparent:

Intuit has made explainability a core user experience across its AI agents. This goes beyond simply providing correct answers: It means showing users the reasoning behind automated decisions.

When Intuit’s accounting agent categorizes a transaction, it doesn’t just display the result; it shows the reasoning. This isn’t marketing copy about explainable AI, it’s actual UI displaying data points and logic.

“It’s about closing that trust loop and making sure customers understand the why,” Alastair Simpson, Intuit’s VP of design, told VentureBeat.
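
As an illustration of that design decision, here’s a hypothetical sketch in TypeScript (these types are mine, not Intuit’s actual API) of what “showing the reasoning” amounts to: the agent returns the evidence and rationale alongside the result, so the UI can render the why, not just the what.

```typescript
// Hypothetical shape for an explainable categorization result.
// Illustrative only — not Intuit's actual API.
interface CategorizationResult {
  category: string;   // the automated decision
  confidence: number; // a 0–1 score the UI can surface
  evidence: string[]; // the data points the decision rests on
  rationale: string;  // plain-language "why" shown to the user
}

const example: CategorizationResult = {
  category: "Office Supplies",
  confidence: 0.92,
  evidence: [
    "Vendor 'Staples' matched 14 prior transactions in this category",
    "Amount $42.17 is within the typical range for this vendor",
  ],
  rationale:
    "Categorized as Office Supplies because this vendor has been " +
    "consistently categorized that way in your books.",
};
```

Rendering the evidence and rationale is what turns a black-box answer into something a bookkeeper can actually audit.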

Rusty metal bucket tipped over pouring a glowing stream of blue binary digits (ones and zeros) onto a dark surface.

Intuit learned to build AI agents for finance the hard way: Trust lost in buckets, earned back in spoonfuls

The QuickBooks maker's approach to embedding AI agents reveals a critical lesson for enterprise AI adoption: in high-stakes domains like finance and tax, one mistake can erase months of user confidence.

venturebeat.com

We’ve been hearing a lot about AI agents, and now enough time has passed that we’re starting to see some learnings from industry. Writing in Harvard Business Review, Linda Mantia, Surojit Chatterjee, and Vivian S. Lee showcase three case studies of enterprises that have deployed AI agents.

They write about Hitachi Digital and how it deployed an AI agent as the first responder to the 90,000 questions employees send to its HR team annually.

Every year, employees put over 90,000 questions about everything from travel policies and remote work to training and IT support to the company’s HR team of 120 human responders. Answering these queries can be difficult, in part because of Hitachi’s complex infrastructure of over 20 systems of record, including multiple disparate HR systems, various payroll providers, and different IT environments.

Their system, called “Skye,” is actually a system of agents, coordinating with one another and firing off queries depending on the intent and task.

For example, the intent classifier agent sends a simple policy question like “What are allowed expenses for traveling overseas?” or “Does this holiday count in paid time off?” to a file search and respond agent, which provides immediate answers by examining the right knowledge base given the employee’s position and organization. A document generation agent can create employee verification letters (which verify individuals’ employment status) in seconds, with an option for human approval. When an employee files a request for vacation, the leave management agent uses the appropriate HR management system based on its understanding of the user’s identity, completes the necessary forms, waits for the approval of the employee’s manager, and reports back to the employee.
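
Here’s a minimal sketch of that routing pattern in TypeScript; the intents, agents, and keyword classifier are hypothetical stand-ins, not Hitachi’s actual implementation (in practice the classifier would be an LLM call, not a regex):

```typescript
// Hypothetical intent-based agent routing, loosely mirroring the quote above.
type Intent = "policy_question" | "document_request" | "leave_request";

interface Agent {
  handle(query: string, employeeId: string): Promise<string>;
}

// Each agent owns one kind of task, like the file-search-and-respond
// or document-generation agents described in the quote.
const agents: Record<Intent, Agent> = {
  policy_question: {
    handle: async (q) => `Answer from the policy knowledge base for: ${q}`,
  },
  document_request: {
    handle: async (_q, id) => `Verification letter generated for employee ${id}`,
  },
  leave_request: {
    handle: async (_q, id) => `Leave filed for ${id}; awaiting manager approval`,
  },
};

// Stub classifier; the real system would infer intent with a model.
async function classifyIntent(query: string): Promise<Intent> {
  if (/vacation|leave|time off/i.test(query)) return "leave_request";
  if (/letter|verification/i.test(query)) return "document_request";
  return "policy_question";
}

async function answer(query: string, employeeId: string): Promise<string> {
  const intent = await classifyIntent(query);
  return agents[intent].handle(query, employeeId);
}
```

The appeal of the pattern is that each agent can be built, permissioned, and audited separately, while the classifier keeps a single front door for employees.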

The authors see three essential imperatives when designing and deploying AI agents into companies.

  1. Design around outcomes and appoint accountable mission owners. Companies need to stop organizing around internal functions and start building teams around actual customer outcomes—which means putting someone in charge of the whole journey, not just pieces of it.
  2. Unlock data silos and clarify the business logic. Your data doesn’t need to be perfect or centralized, but you do need to map out how work actually gets done so AI agents know where to find things and what decisions to make.
  3. Develop the leaders and guardrails that intelligent systems require. You can’t just drop AI agents into your org and hope for the best—leaders need to understand how these systems work, build trust with their teams, and put real governance in place to keep things on track.
Top-down view of two people at a white desk with monitor, keyboard and mouse, overlaid by a multicolored translucent grid.

Designing a Successful Agentic AI System

Agentic AI systems can execute workflows, make decisions, and coordinate across departments. To realize its promise, companies must design workflows around outcomes and appoint mission owners who define the mission, steer both humans and AI agents, and own the outcome; unlock the data silos it needs to access and clarify the business logic underpinning it; and develop the leaders and guardrails that these intelligent systems require.

hbr.org

It’s interesting to me that Figma had to have a separate conference and set of announcements focused on design systems. In some sense it’s an indicator of how big and mature this part of design has become.

A few highlights from my point-of-view…

Slots seems to solve one of those small UX paper cuts—the niggly inconveniences we just lived with—but it’s a bigger deal than it sounds. You’ll be able to add layers within component instances without breaking the connection to your design system. No more pre-building hidden list items or forcing designers to detach components. Pretty advanced stuff.

On the code front, they’re making Code Connect actually approachable with a new UI that connects directly to GitHub and uses AI to map components. The Figma MCP server is out of beta and now supports design system guidelines—meaning your agentic coding tools can actually respect your design standards. Can’t wait to try these.

For teams like mine that are using Make, you’ll be able to pull in design systems through two routes: Make kits (generate React and CSS from Figma libraries) or npm package imports (bring in your existing code components). This is the part where AI-assisted design doesn’t have to mean throwing pixelcraft out the window.

Design systems have always been about maintaining quality at scale. These updates are very welcome.

Bright cobalt background with "schema" in a maroon bar and light-blue "by Figma" text, stepped columns of orange semicircles on pale-cyan blocks along right and bottom.

Schema 2025: Design Systems For A New Era

As AI accelerates product development, design systems keep the bar for craft and quality high. Here’s everything we announced at Schema to help teams design for the AI era.

figma.com

I will say that A-ha’s 1985 hit “Take On Me” and its accompanying video were incredibly influential on me as a kid. Hearing about the struggles the band endured and their constant retooling of the song is very inspiring. In an episode of Song Exploder, Hrishikesh Hirway interviews Paul Waaktaar-Savoy, who originally wrote the bones of the song as a teenager, about the creative journey the band took to realize the version we know and love.

Hirway:

Okay, so you have spent the whole budget and then this version of the song comes out in 1984, and it flops. How were you able to convince anybody to give you another chance? Or maybe even more so, I’m curious, for your own sake: How were you able to feel like that wasn’t the end of the road for the song? Like, it had its chance, it didn’t happen, and that was that.

Waaktaar-Savoy:

Yeah, that’s the good thing about being young. You don’t feel, (chuckles) you know, you just sort of, brush it off your shoulders, you know. We were a hundred percent confident. We were like, there’s not a doubt in our minds.

…it took some time, you know, it was very touch and go. ‘Cause the, you know, they’ve spent this much money on the half-finished album. Are they gonna pour more money into it and risk losing more money? So, from Norway? Hey, no one comes from Norway and makes it. And so it was a risk for people.

Having gone to England from their native Norway, A-ha released two versions of the song in the UK before it became a hit in the US. With the help of the music video, of course.

A new record exec at the US arm of Warner Bros. took a liking to the band and the album, as Waaktaar-Savoy recalls:

And there was a new guy on the company, Jeff Ayeroff. He fell in love with the, the album and the song. And he had been keeping this one particular idea sort of in the back of his head. There was this art film called Commuter, with animation. So, he was the one who put together that with Steve Barron, who was the director.

And they made the video. And the song slowly climbed the charts to become a number one hit.

Episode 301: A-ha

Explore the making of “Take On Me” by A-ha on Song Exploder. Listen as band member Paul Waaktaar-Savoy shares the origins, evolution, and creative process behind their iconic hit. This episode delves into the band’s journey, the song’s chart-topping success, and the inspiration behind the legendary music video. Find full episode audio, streaming links, a transcript, and behind-the-scenes stories from A-ha, the most successful Norwegian pop group of all time. Discover music history and artist insights only on Song Exploder’s in-depth podcast series.

songexploder.net

Circling back to Monday’s item on how caring is good design, Felix Haas has a subtly different take: build kindness into your products.

Kindness in design isn’t about adding smiley faces or writing cheerful copy. It’s deeper than tone. It’s about intent embedded in every interaction.

Kindness shows up in the patience of an empty state that doesn’t rush you. In the warmth of micro-interactions that acknowledge your actions without demanding attention. In error messages that guide rather than scold. In defaults that assume good intent rather than user incompetence.

These moments seem subtle, even trivial, in isolation. But they accumulate. They shape how we feel about a product over weeks and months. They turn interfaces into relationships. They build trust.

Kind Products Win

Why do so many products feel soulless?

designplusai.com