
Earlier I linked to Hardik Pandya’s piece on invisible work—the coordination, the docs, the one-on-ones that hold projects together but never show up in a performance review. Designers have their own version of this problem, and it’s getting worse.

Kai Wong, writing in his Data and Design Substack, puts it plainly. A design manager he interviewed told him:

“It’s always been a really hard thing for design to attribute their hard work to revenue… You can make the most amazingly satisfying user experience. But if you’re not bringing in any revenue out of that, you’re not going to have a job for very much longer. The company’s not going to succeed.”

That’s always been true, but AI made it urgent. When a PM can generate something that “looks okay” using an AI tool, the question is obvious: what do we need designers for? Wong’s answer is the strategic work—research, translation between user needs and business goals. The trouble is that this work is the hardest to see.

Wong’s practical advice is to stop presenting design decisions in design terms. Instead of explaining that Option A follows the Gestalt principle of proximity, say this:

“Option A reduces checkout from 5 to 3 steps, making it much easier for users to complete their purchase instead of abandoning their cart.”

You’re not asking “which looks better?” You’re showing that you understand the business problem and the user problem, and can predict outcomes based on behavioral patterns.

I left a comment on this article when it came out, asking how these techniques translate at the leadership level. It’s one thing to help individual designers frame their work in business terms. It’s another to make an entire design org’s contribution legible to the rest of the company. Product management talks to customers and GTM teams. Engineering delivers features. Design is in the messy middle making sense of it all—and that sense-making is exactly the kind of invisible work that’s hardest to put on a slide.

Figure draped in a white sheet like a ghost wearing dark sunglasses, standing among leafy shrubs with one hand visible.

Designers often do invisible work that matters. Here’s how to show it

What matters in an AI-integrated UX department? Highlighting invisible work

open.substack.com

Every team I’ve ever led has had one of these people. The person who writes the doc that gives the project its shape, who closes context gaps in one-on-ones before they turn into conflicts, who somehow keeps six workstreams from drifting apart. They rarely get the credit they deserve because the work, when it’s done well, looks like it just happened on its own.

Hardik Pandya writes about this on his blog. He shares a quote from a founder friend describing his most valuable employee:

“She’s the reason things actually work around here. She just… makes sure everything happens. She writes the docs. She runs the meetings that matter. She talks to people. Somehow everything she touches stays on track. I don’t know how I’d even describe what she does to a person outside the company. But if she left, we’d fall apart in a month. Maybe less.”

I’ve known people like this at every company I’ve worked at. And I’ve watched them get passed over because the performance system couldn’t see them. Pandya nails why:

When a project succeeds, credit flows to the people whose contributions are easy to describe. The person who presented to the board. The person whose name is on the launch email. The person who shipped the final feature. These contributions are real, I’m not diminishing them. But they’re not more real than the work that made them possible. They’re just easier to point at.

Most organizations try to fix this by telling the invisible workers to “be more visible”—present more, build your personal brand internally. Pandya’s advice goes the other direction, and I think he’s right:

If you’re good at the invisible work, the first move isn’t to get better at visibility. It’s to find the leader who doesn’t need you to be visible.

As a leader, I take this as a challenge. If someone on my team is doing the work that holds everything together, it’s my job to make sure the organization sees it too—especially when it doesn’t announce itself.

Sketch portrait, title "THE INVISIBLE WORK" and hvpandya.com/notes on pale blue left; stippled open book and stars on black right.

The Invisible Work

The coordination work that holds projects together disappears the moment it works. On the unfairness of recognition and finding leaders who see it anyway.

hvpandya.com

What happens to a designer when the tool starts doing the thinking? Yaheng Li poses this question in his MFA thesis, “Different Ways of Seeing.” The CCA grad published a writeup about his project in Slanted, explaining that he drew on embodiment research to make a point about how tools change who we are:

Whether they are tools, toys, or mirror reflections, external objects temporarily become part of who we are all the time. When I put my eyeglasses on, I am a being with 20/20 vision, not because my body can do that (it can’t), but because my body-with-augmented-vision-hardware can.

The eyeglasses example is simple but the logic extends further than you’d expect. Li takes it to the smartphone:

When you hold your smartphone in your hand, it’s not just the morphological computation happening at the surface of your skin that becomes part of who you are. As long as you have Wi-Fi or a phone signal, the information available all over the internet (both true and false information, real news and fabricated lies) is literally at your fingertips. Even when you’re not directly accessing it, the immediate availability of that vast maelstrom of information makes it part of who you are, lies and all. Be careful with that.

Now apply that same logic to a designer sitting in front of an AI tool. If the tool becomes an extension of the self, and the tool is doing the visual thinking and layout generation, what does the designer become? Li’s thesis argues that graphic design shapes perception, that it acts as “a form of visual poetry that can convey complex ideas and evoke emotional responses, thus influencing cognitive and cultural shifts.” If that’s true, and I think it is, then the tool the designer uses to make that poetry is shaping the poetry itself.

This is a philosophical piece, not a practical one. But the underlying question is practical for anyone designing with AI right now: if your tools become part of who you are, you should care a great deal about what those tools are doing to your thinking.

Left spread: cream page with text "DIFFERENT WAYS OF SEEING" and "A VISUAL NARRATIVE". Right spread: green hill under blue sky with two cows and a sheep.

Different Ways of Seeing

When I was a child, I once fell ill with a fever and felt as...

slanted.de

For as long as I’ve been in startups, execution speed has been the thing teams optimized for. The assumption was always that if you could just build faster, you’d win. That’s your moat. AI has mostly delivered on that promise—teams can now ship in weeks what used to take months (see Claude Cowork). And the result is that a lot of teams are building the wrong things faster than ever.

Gale Robins, writing for UX Collective, opens with a scene I’ve lived through from both sides of the table:

I watched a talented software team present three major features they’d shipped on time, hitting all velocity metrics. When I asked, “What problem do these features solve?” silence followed. They could describe what they’d built and how they’d built it. But they couldn’t articulate why any of it mattered to customers.

Robins argues that judgment has replaced execution as the real constraint on product teams. And AI is making this worse, not better:

What once took six months of misguided effort now takes six weeks, or with AI, six days.

Six days to build the wrong thing. The build cycle compressed but the thinking didn’t. Teams are still skipping the same discovery steps, still assuming they know what users want. They’re just doing it at a pace that makes the waste harder to catch.

Robins again:

AI doesn’t make bad judgment cheaper or less damaging — it just accelerates how quickly those judgment errors compound.

She illustrates this with a cascade example: a SaaS company interviews only enterprise clients despite SMBs making up 70% of revenue. That one bad call—who to talk to—ripples through problem framing, solution design, feature prioritization, and evidence interpretation, costing $315K over ten months. With AI-accelerated development, the same cascade plays out in five months at the same cost. You just fail twice as fast.

The article goes on to map 19 specific judgment points across the product discovery process. The framework itself is worth a read, but the underlying argument is the part I keep coming back to: as execution gets cheaper, the quality of your decisions is the only thing that scales.

Circle split in half: left teal circuit-board lines with tech icons, right orange hands pointing to a central flowchart.

The anatomy of product discovery judgment

The 19 critical decision moments where human judgment determines whether teams build the right things.

uxdesign.cc

I’ve watched this pattern play out more times than I can count: a team ships something genuinely better and users ignore it. They go back to the old thing. The spreadsheet. The manual process. And the team concludes that users “resist change,” which is the wrong diagnosis.

Tushar Deshmukh, writing for UX Magazine, frames it well:

Many teams assume users dislike change. In reality, users dislike cognitive disruption.

Deshmukh describes an enterprise team that built a predictive dashboard with dynamic tiles, smart filters, and smooth animations. It failed. Employees skipped it and went straight to the basic list view:

Not because the dashboard was bad. But because it disrupted 20 years of cognitive routine. The brain trusted the old list more than the new intelligence. When we merged both—familiar list first, followed by predictive insights—usage soared.

He tells a similar story about a logistics company that built an AI-powered route planner. Technically superior, visually polished, low adoption. Drivers had spent years building mental models around compass orientation, landmarks, and habitual map-reading patterns:

The AI’s “optimal route” felt psychologically incorrect. It was not wrong—it was unfamiliar. We added a simple “traditional route overlay,” showing older route patterns first. The AI suggestion was then followed as an enhancement. Adoption didn’t just improve—trust increased dramatically.

The fix was the same in both cases: layer the new on top of the familiar. Don’t replace the mental model—extend it. This is something I think about constantly as my team designs AI features into our product. The temptation is always to lead with the impressive new capability. But if users can’t find their footing in the interface, the capability doesn’t matter. Familiarity is the on-ramp.

Neon head outline with glowing brain and ghosted silhouettes on black; overlaid text: “UX doesn’t begin when users see your interface. It begins with what their minds expect to see.”

The Cortex-First Approach: Why UX Starts Before the Screen

The moment your interface loads, the user experience is already halfway over, shaped by years of digital memories, unconscious biases, and mental models formed long before they arrived. Most products fail not because of bad design, but because they violate the psychological expectations users can't even articulate. This is the Cortex-First approach: understanding that great UX begins in the mind, where emotion and familiarity decide whether users flow effortlessly or abandon in silent frustration.

uxmag.com

Fitts’s Law is one of those design principles everyone learns in school and then quietly stops thinking about. Target size, target distance, movement time. It’s a mouse-and-cursor concept, and once you’ve internalized the basics—make buttons big, put them close—it fades into the background. But with AI and voice becoming primary interaction models, the principle matters again. The friction just moved.

Julian Scaff, writing for Bootcamp, traces Fitts’s Law from desktop GUIs through touch, spatial computing, voice, and neural interfaces. His argument is that the law didn’t become obsolete—it became metaphorical:

With voice interfaces, the notion of physical distance disappears altogether, yet the underlying cognitive pattern persists. When a user says, “Turn off the lights,” there’s no target to touch or point at, but there is still a form of interaction distance, the mental and temporal gap between intention and response. Misrecognition, latency, or unclear feedback increase this gap, introducing friction analogous to a small or distant button.

“Friction analogous to a small or distant button” is a useful way to think about what’s happening with AI interfaces right now. When a user stares at a blank text field and doesn’t know what to type, that’s distance. When an agent misinterprets a prompt and the user has to rephrase three times, that’s a tiny target. The physics changed but the math didn’t.
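That math, for anyone who hasn’t thought about it since school, is a one-line model. Fitts’s 1954 formulation predicts movement time from two variables:

MT = a + b · log₂(2D/W)

where MT is the time to acquire the target, D is the distance to it, W is its width, and a and b are constants fitted empirically for each device and task. Scaff’s reframing keeps the equation and swaps the variables: D becomes the gap between intention and response, and W becomes the precision the system demands of you.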

Scaff extends this into AI and neural interfaces, where the friction gets even harder to see:

Every layer of mediation, from neural decoding errors to AI misinterpretations, adds new forms of interaction friction. The task for designers will be to minimize these invisible distances, not spatial or manual, but semantic and affective, so that the path from intention to effect feels seamless, trustworthy, and humane.

He then describes what he calls a “semantic interface,” one that interprets intent rather than waiting for explicit commands:

A semantic interface understands the why behind a user’s action, interpreting intent through context, language, and behavior rather than waiting for explicit commands. It bridges gaps in understanding by aligning system logic with human mental models, anticipating needs, and communicating in ways that feel natural and legible.

This connects to the current conversation about AI UX. The teams building chatbot-first products are, in Fitts’s terms, forcing users to cross enormous distances with tiny targets. Every blank prompt field with no guidance is a violation of the same principle that tells you to make a button bigger. We’ve known this for seventy years. We’re just ignoring it because the interface looks new.

Collage of UIs: vintage monochrome OS, classic Windows, modern Windows tiles and macOS dock, plus smartphone gesture demos

The shortest path from thought to action

Reassessing Fitts’ Law in the age of multimodal interfaces

medium.com

For years, the thing that made designers valuable was the thing that was hardest to fake: the ability to look at a spreadsheet of requirements and turn it into something visual that made sense. That skill got people hired and got them a seat at the table. And now a PM with access to Lovable or Figma Make can produce something that looks close enough to pass.

Kai Wong interviewed 22 design leaders and heard the same thing from multiple directions. One Global UX Director described the moment it clicked for his team:

“A designer on my team had a Miro session with a PM — wireframes, sketches, the usual. Then the PM went to Stitch by Google and created designs that looked pretty good. To an untrained eye, it looked finished. It obviously worried the team.”

It should worry teams. Not because the PM did anything wrong, but because designers aren’t always starting from a blank canvas anymore. They’re inheriting AI-generated drafts from people who don’t know what’s wrong with them.

Wong puts the commoditization bluntly:

Our superpower hasn’t been taken away: it’s more like anyone can buy something similar at the store.

The skill isn’t gone. It’s just no longer rare enough to carry your career on its own. What fills the gap, Wong argues, is the ability to articulate why—why this layout works, why that one doesn’t. One CEO he interviewed put it this way:

“I want the person who’s designing the thing from the start to understand the full business context.”

This resonates with me as a design leader. The designers on my teams who are hardest to replace are the ones who can walk into a room and explain why something needs to change, and tie that explanation to a user need or a business outcome. AI can’t do that yet. And the people generating those 90%-done drafts definitely can’t.

Hiker in blue shirt and cap standing on a rocky cliff edge, looking out over a sunlit forested valley and distant mountains

The 90% Problem: Why others’ AI designs may become your problem

The unfortunate reality of how many companies use AI

dataanddesign.substack.com

Every few years, the industry latches onto an interaction paradigm and tries to make it the answer to everything. A decade ago it was “make it an app.” Now it’s “just make it a chat.” The chatbot-as-default impulse is strong right now, and it’s leading teams to ship worse experiences than what they’re replacing.

Katya Korovkina, writing for UX Collective, calls this “chatbot-first thinking” and lays out a convincing case for why it’s a trap:

Many of the tasks we deal with in our personal life and at work require rich, multi-modal interaction patterns that conversational interfaces simply cannot support.

She walks through a series of validating questions product teams should ask before defaulting to a conversational UI, and the one that stuck with me is about discoverability. The food ordering example is a good one—if you don’t know what you want, listening to a menu read aloud is objectively worse than scanning one visually. But the real issue is who chat-first interfaces actually serve:

Prompt-based products work best for the users who already know how to ask the right question.

Jakob Nielsen has written about this as the “articulation barrier,” and Korovkina cites the stat that nearly half the population in wealthy countries struggles with complex texts. We’re building interfaces that require clear, precise written communication from people who don’t have that skill. And we’re acting like that’s fine because the technology is impressive.

Korovkina also makes a practical point that gets overlooked. She describes using a ChatGPT agent to get a YouTube transcript — a task that takes four clicks with a dedicated tool — and watching the agent spend minutes crawling the web, hitting paywalls, and retrying failures:

When an LLM agent spends five minutes crawling the web, calling tools, retrying failures, reasoning through intermediate steps, it is running on energy-intensive infrastructure, contributing to real data-center load, energy usage, and CO₂ emissions. For a task that could be solved with less energy by a specialised service, this is computational overkill.

The question she lands on—“was AI the right tool for this task at all?”—is the one product teams keep skipping. Sometimes a button, a dropdown, and a confirmation screen are the better answer.

Centered chat window with speech-bubble icon and text "How can I help you today?" plus a message input field; faded dashboard windows behind

Are we doing UX for AI the right way?

How chatbot-first thinking makes products harder for users

uxdesign.cc

My wife is a huge football fan—Kansas City Chiefs, if you must know—and I’m one too (go Niners!), but to a lesser degree. I just really hope the Seattle Seahawks don’t break out the super-ugly green highlighter-colored Action Green uniforms when they face off against the Patriots in Super Bowl LX.

Anyway, sports teams are some of the best examples of legacy brands: steeped in history, with legions of literal fans. It’s interesting how legacy brands evolve—especially ones where the audience feels genuine ownership. And sports teams are the extreme case. Mess with a logo that fans have tattooed on their bodies, and you’ll hear about it.

Natalie Fear talked to several designers about what makes NFL logo updates succeed or fail. Paul Woods, president of AIGA Los Angeles:

The updates that work tend to be quieter. The Chargers’ continued refinement of their iconic bolt or the Vikings’ measured refinements show that evolution can be about better execution, not louder ideas. Improved proportions, stronger typography, and systems that scale across digital, broadcast, and physical environments matter more than novelty.

Better execution, not louder ideas. That’s the whole thing, really. The temptation with any redesign is to justify the effort by making the change visible. But visible change and meaningful improvement aren’t the same thing.

Woods again:

Appeasing fans does not mean standing still. It means understanding what they actually care about.

This applies well beyond sports branding. Any time you’re working on a product or brand that people have history with, the job isn’t to make your mark—it’s to make the thing better without breaking what already works.

Michael Vamosy, founder of DEFIANT LA, puts a finer point on the challenge:

Fans are much more forgiving of poor design choices from the past than they are of design improvements built for the future.

That asymmetry is worth sitting with. Nostalgia protects old mistakes. New work gets no such grace period.

Green Bay Packers player in yellow helmet lifted and hugged by cheering fans in Packers hats as beer sprays in the background

What we can learn from NFL logo rebrand fails

Industry experts weigh in on the power of branding.

creativebloq.com

I write everything in Markdown now. These link posts start in Obsidian, which stores them as .md files. When I rebuilt my blog with Astro, I moved from a database to plain Markdown files. It felt like going backwards—and also exactly right.

Anil Dash has written a lovely history of how John Gruber’s simple text format quietly became the infrastructure of the modern internet:

The trillion-dollar AI industry’s system for controlling their most advanced platforms is a plain text format one guy made up for his blog and then bounced off of a 17-year-old kid [Aaron Swartz] before sharing it with the world for free.

The format was released in 2004, the same year blogs went mainstream. Twenty years later, it’s everywhere—Google Docs, GitHub, Slack, Apple Notes, and every AI prompt you’ve ever written.

Dash’s larger point is about how the internet actually gets built:

Smart people think of good things that are crazy enough that they just might work, and then they give them away, over and over, until they slowly take over the world and make things better for everyone.

Worth a full read.

White iMac on wooden desk with white keyboard, round speakers, colored pencils and lens holder; screen shows purple pattern.

How Markdown took over the world

Anil Dash. A blog about making culture. Since 1999.

anildash.com

I’ve been licensing fonts for my entire career. Hundreds of them over the years, whether it was Adobe’s Font Folio, or fonts from Emigre, House Industries, T-26, or Grilli Type. I always assumed that when I paid for a font license, I was paying for something with clear intellectual property protection—like software or music.

Turns out that assumption might be wrong.

Matthew Butterick, a copyright litigator who also happens to be a type designer and author of the great reference, Practical Typography, lays out why digital fonts probably aren’t protected by U.S. copyright law:

Fonts have traditionally been excluded from U.S. copyright protection. This principle was judicially affirmed in the 1978 case Eltra Corp. v. Ringer (“typeface has never been considered entitled to copyright”).

The type industry has operated for decades on a workaround: register fonts as software programs rather than as typefaces. A 1998 case, Adobe v. Southern Software, seemed to validate this approach. But a recent ruling in Laatz v. Zazzle pokes holes in that reasoning. Butterick on the implications:

To those in the type industry who have staked a lot on the Adobe case, that last sentence might be a doozy. The Laatz court’s perspective debunks decades of wishful thinking about the breadth of the Adobe opinion. Under the Laatz view, unless you “created the software that produced the font programs”, you don’t fall within the scope of the Adobe ruling.

That distinction is wild. If you designed a typeface using FontLab or Glyphs—as most type designers do nowadays—you might not have the copyright protection you thought you had, because you didn’t write the software that generated the final font file.

Given all this legal uncertainty, how does Butterick run his own type foundry? He’s refreshingly pragmatic:

My business necessarily runs on something more akin to the honor system. I try to make nice fonts, price my licenses fairly, and thereby make internet strangers enthusiastic about sending me money rather than going to pirate websites. Enough of them do. My business continues.

I don’t have a strong opinion on the legal questions here—I’m not a lawyer or a type designer. But as someone who’s spent a lot of money on fonts over the years, I find it fascinating that the whole edifice might be built on shakier ground than any of us realized. And yes, I absolutely want to support the type designers who sweat the details by giving them cash money.

Basket of white, pink, and red roses on a wooden table, several blossoms and leaves fallen in front

The copyrightability of fonts revisited

Recently some other participants in the type-design industry asked me to endorse a letter to the U.S. Copyright Office about copyright registrations for digital fonts. The impetus was a set of concerns arising from ongoing rejections of font-copyright registrations and a recent opinion in a case called Laatz v. Zazzle pertaining to the infringement of font copyrights.

matthewbutterick.com

Back in September, when Trump announced America by Design and appointed Joe Gebbia as Chief Design Officer, I wrote that it was “yet another illustration of this administration’s incompetence.” The executive order came months after DOGE gutted 18F and the US Digital Service, the agencies that had spent a decade building the expertise Gebbia now claims to be inventing.

Mark Wilson, writing for Fast Company, spoke to a dozen government designers about how Gebbia’s tenure has played out. When Wilson asked Gebbia about USDS and 18F—whether he thought these groups were overrated and needed to be rebuilt—here’s what he said:

“Without knowing too much about the groups you mentioned, I do know that the air cover and the urgency around design is in a place it’s [never] been before.”

He doesn’t know much about them. The agencies his administration destroyed. The hundreds of designers recruited from Google, Amazon, and Facebook who fixed healthcare.gov and built the COVID test ordering system. He doesn’t know much about them.

Mikey Dickerson, who founded USDS, on the opportunity Gebbia inherited:

“He’s inheriting the blank check kind of environment… [so] according to the laws of physics, he should be able to get a lot done. But if the things that he’s allowed to do, or the things that he wants to do, are harmful, then he’ll be able to do a lot of harm in a really short amount of time.”

And what has Gebbia done with that blank check? He’s built promotional websites for Trump initiatives: trumpaccounts.gov, trumpcard.gov, trumprx.com. Paula Scher of Pentagram looked at the work:

“The gold card’s embarrassing. The typeface is hackneyed.”

But Scher’s real critique goes beyond aesthetics.

“You can’t talk about people losing their Medicare and have a slick website,” says Paula Scher. “It just doesn’t go.”

That’s the contradiction at the center of America by Design. You can’t strip food stamps, gut healthcare subsidies, and purge the word “disability” from government sites, then turn around and promise to make government services “delightful.” The design isn’t the problem. The policy is.

Scher puts it plainly:

“[Trump] wants to make it look like a business. It’s not a business. The government is a place that creates laws and programs for society—it’s not selling shit.”

Wilson’s piece is long and worth reading in full. There’s more on what USDS and 18F actually accomplished, and on the designers who watched their work get demolished by people who didn’t understand it.

Man in a casual jacket and sneakers standing before a collage of large "AMERICA" and "DESIGN" text, US flag and architectural imagery.

From Airbnb to the White House: Joe Gebbia is reshaping the government in Trump’s image

The president decimated the U.S. government’s digital design agencies and replaced them with a personal propaganda czar.

fastcompany.com

Google’s design team is working on a hard problem: how do you create a visual identity for AI? It’s not a button or a menu. It doesn’t have a fixed set of functions. It’s a conversation partner that can do… well, a lot of things. That ambiguity is difficult to represent.

Daniel John, writing for Creative Bloq, reports on Google’s recent blog post about Gemini’s visual design:

“Consider designer Susan Kare, who pioneered the original Macintosh interface. Her icons weren’t just pixels; they were bridges between human understanding and machine logic. Gemini faces a similar challenge around accessibility, visibility, and alleviating potential concerns. What is Gemini’s equivalent of Kare’s smiling computer face?”

That’s a great question. Kare’s work on the original Mac made the computer feel approachable at a moment when most people had never touched one. She gave the machine a personality through icons that communicated function and friendliness at the same time. AI needs something similar: a visual language that builds trust while honestly representing what the technology can do.

Google’s answer? Gradients. They offer “an amorphous, adaptable approach,” one that “inspires a sense of discoverability.”

They think they’ve nailed it. I don’t think they have.

To their credit, Google seems to sense the comparison is a stretch. John quotes the Google blog again:

“Gradients might be much more about energy than ‘objectness,’ like Kare’s illustrations (a trash can is a thing, a gradient is a vibe), but they infuse a spirit and directionality into Gemini.”

Kare’s icons worked because they mapped to concrete actions and mental models people already had. A trash can means delete. A folder means storage. A smiling Mac means this thing is friendly and working. Gradients don’t map to anything. They just look nice. They’re aesthetic, not communicative. Google’s own word for it, “vibe,” is right. Will a user pick up on the subtleties of a concentrated gradient versus a diffuse one?

The design challenge Google identified is real. But gradients aren’t the Kare equivalent. They’re neither ownable nor iconic (pun intended). They’re a placeholder until someone figures out what is.

Rounded four-point rainbow-gradient star on left and black pixel-art vintage Macintosh-style computer with smiling face on right.

Did Google really just compare its design to Apple?

For rival tech brands, Google and Apple have seemed awfully cosy lately. Earlier this month it was announced that, in a huge blow to OpenAI, Google's Gemini will be powering the much awaited (and much delayed) enhanced Siri assistant on every iPhone. And now, Google has compared its UI design with that of Apple. Apple of 40 years ago, that is.

creativebloq.com

Daniel Kennett dug out his old Mac Pro to revisit Aperture, the photo app Apple discontinued in 2015:

It’s hard to overstate quite how revolutionary and smooth this flow is until you had it for multiple years before having it taken away. Nothing on the market—even over a decade later—is this good at meeting you where you are and not interrupting your flow.

Kennett’s observation: Aperture came to you. Most software makes you go to it. You could edit a photo right on the map view, or while laying out a book page. No separate editing mode. Press H for the adjustments HUD, make your changes, done.

The cruel twist was Apple suggesting Photos as a replacement. Ten years later, photographers are still grumbling about it in comment sections.

Aperture screenshot: map of Le Sauze-du-Lac with pins; left Library sidebar; right Adjustments panel; filmstrip thumbnails.

Daniel Kennett - A Lament For Aperture, The App We'll Never Get Over Losing

I’m an old Mac-head at heart, and I’ve been using Macs since the mid 1990s (the first Mac I used was an LC II with System 7.1 installed on it). I don’t tend to _genuinely_ think that the computing experience was better in the olden days — sure, there’s a thing to be said about the simplicity of older software, but most of my fondness for those days is nostalgia.

ikennd.ac

There’s a design principle I return to often: if everything is emphasized, nothing is. Bold every word in a paragraph and you’ve just made regular text harder to read. Highlight every line in a document and you’ve defeated the purpose of highlighting.

Nikita Prokopov applies this to Apple’s macOS Tahoe, which adds icons to nearly every menu item:

Perhaps counter-intuitively, adding an icon to everything is exactly the wrong thing to do. To stand out, things need to be different. But if everything has an icon, nothing stands out.

The whole article is a detailed teardown of the icon choices—inconsistent metaphors, icons reused for unrelated actions, details too small to parse at 12×12 effective pixels. But the core problem isn’t execution. It’s the premise.

Prokopov again:

It’s delusional to think that there’s a good icon for every action if you think hard enough. There isn’t. It’s a lost battle from the start.

What makes this such a burn is that Apple knew better. Prokopov pulls from the 1992 Macintosh Human Interface Guidelines, which warned that poorly used icons become “unpleasant, distracting, illegible, messy, cluttered, confusing, frustrating.” Thirty-three years later, Apple built exactly that.

This applies beyond icons. Any time you’re tempted to apply something universally—color, motion, badges, labels—ask whether you’re helping users find what matters or just adding visual noise. Emphasis only works through contrast.

Yellow banner with scattered black UI icons, retro Mac window at left, text: It's hard to justify Tahoe icons — tonsky.me

It’s hard to justify Tahoe icons

Looking at the first principles of icon design—and how Apple failed to apply all of them in macOS Tahoe

tonsky.me

When I worked at LEVEL Studios (which became Rosetta) in the early 2010s, we had a whole group dedicated to icon design. It was small but incredibly talented and led by Jon Delman, a master of this craft. And yes, Jon and team designed icons for Apple.

Those glory days are long gone, and the icons coming out of Cupertino now are pedestrian, to put it gently. The best observation about Apple’s icon decline comes from Héliographe, via John Gruber:

If you put the Apple icons in reverse it looks like the portfolio of someone getting really really good at icon design.

Row of seven pen-and-paper app icons showing design evolution from orange stylized pen to ink bottle with fountain pen.

Posted by @heliographe.studio on Threads

Seven Pages icons from newest to oldest, each one more artistically interesting than the last. The original is exquisite. The current one is a squircle with a pen on it.

This is even more cringe-inducing when you keep in mind something Gruber recalls from a product briefing with Jony Ive years ago:

Apple didn’t change things just for the sake of changing them. That Apple was insistent on only changing things if the change made things better. And that this was difficult, at times, because the urge to do something that looks new and different is strong, especially in tech.

Apple’s hardware team still operates this way. An M5 MacBook Pro looks like an M1 MacBook Pro. An Apple Watch Series 11 is hard to distinguish from a Series 0. These designs stay the same because they’re excellent.

The software team lost that discipline somewhere. Gruber again:

I know a lot of talented UI designers and a lot of insightful UI critics. All of them agree that MacOS’s UI has gotten drastically worse over the last 10 years, in ways that seem so obviously worse that it boggles the mind how it happened.

The icons are just the most visible symptom. The confidence to not change something—to trust that the current design is still the best design—requires knowing the difference between familiarity and complacency. Somewhere along the way, Apple’s software designers stopped being able to tell.

Centered pale gray circle with a dark five-pointed star against a muted blue-gray background

Thoughts and Observations Regarding Apple Creator Studio

Starting with a few words on the new app icons.

daringfireball.net

Brand guidelines have always been a compromise. You document the rules—colors, typography, spacing, logo usage—and hope people follow them. They don’t, or they follow the letter while missing the spirit. Every designer who’s inherited a brand system knows the drift: assets that are technically on-brand but feel wrong, or interpretations that stretch “flexibility” past recognition.

Luke Wroblewski is pointing at something different:

Design projects used to end when “final” assets were sent over to a client. If more assets were needed, the client would work with the same designer again or use brand guidelines to guide the work of others. But with today’s AI software development tools, there’s a third option: custom tools that create assets on demand, with brand guidelines encoded directly in.

The key word is encoded. Not documented. Not explained in a PDF that someone skims once. Built into software that enforces the rules automatically.

Wroblewski again:

So instead of handing over static assets and static guidelines, designers can deliver custom software. Tools that let clients create their own on-brand assets whenever they need them.

That is a super interesting way of looking at it.

He built a proof of concept—the LukeW Character Maker—where an LLM rewrites user requests to align with brand style before the image model generates anything. The guidelines aren’t a reference document; they’re guardrails in the code.
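To make the pattern concrete, here’s a minimal sketch of that rewrite-then-generate pipeline. Everything in it (the brand rules, the helper names, the client objects) is hypothetical, not Wroblewski’s actual implementation:

```python
# Brand guidelines encoded as guardrails: an LLM restates the user's
# request to conform to the rules before any image is generated.
# The `llm` and `image_model` clients are stand-ins for whatever APIs
# you're using; their method names are invented for this sketch.

BRAND_RULES = """\
Character style: rounded shapes, flat colors, thick outlines.
Palette: teal, coral, and golden yellow only.
Never: photorealism, gradients, or text inside the image.
"""

def rewrite_for_brand(llm, user_request: str) -> str:
    """Ask the LLM to restate the request as an on-brand image prompt."""
    return llm.complete(
        system="Rewrite the user's request as an image-generation prompt "
               "that strictly follows these brand guidelines:\n" + BRAND_RULES,
        user=user_request,
    )

def make_asset(llm, image_model, user_request: str):
    # The user never talks to the image model directly; the encoded
    # guidelines sit between intent and output.
    branded_prompt = rewrite_for_brand(llm, user_request)
    return image_model.generate(branded_prompt)
```

The point isn’t the few lines of Python; it’s where the guidelines live. A client can ask for anything, and the system bends the request toward the brand instead of trusting anyone to read a PDF.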

This isn’t purely theoretical. When Pentagram designed Performance.gov in 2024, they delivered a library of 1,500 AI-generated icons that any federal agency could use going forward. Paula Scher defended the approach by calling it “self-sustaining”—the deliverable wasn’t a fixed set of illustrations but a system that could produce more:

The problem that’s plagued government publishing is the inability to put together a program because of the interference of different people with different ideas. This solved that.

I think this is an interesting glimpse into the future. Brand guidelines might come with software attached. I can even see a day when AI can generate new design system components based on the guidelines.

Timeline showing three green construction-worker mascots growing larger from 2000 to 2006, final one with red hard hat reading a blueprint.

Design Tools Are The New Design Deliverables

Design projects used to end when "final" assets were sent over to a client. If more assets were needed, the client would work with the same designer again or us...

lukew.com

I spent all of last week linking to articles that say designers need to be more strategic. I still stand by that. But that doesn’t mean we shouldn’t understand the technical side of things.

Benhur Senabathi, writing for UX Collective, shipped 3 apps and 15+ working prototypes in 2025 using Claude Code and Cursor. His takeaway:

I didn’t learn to code this year. I learned to orchestrate. The difference matters. Coding is about syntax. Orchestration is about intent, systems, and knowing what ‘done’ looks like. Designers have been doing that for years. The tools finally caught up.

The skills that make someone good at design—defining outcomes, anticipating edge cases, communicating intent to people who don’t share your context—are exactly what AI-assisted building requires.

Senabathi again:

“Prompting well isn’t about knowing [how] to code. It’s about articulating the ‘what’ and ‘why’ clearly enough that the AI can handle the ‘how.’”

This echoes how Boris Cherny uses Claude Code. Cherny runs 10-15 parallel sessions, treating AI as capacity to orchestrate rather than a tool to use. Same insight, different vantage point: Cherny from engineering, Senabathi from design.

GitHub contributions heatmap reading "701 contributions in the last year" with Jan–Sep labels and varying green activity squares

Designers as agent orchestrators: what I learnt shipping with AI in 2025

Why shipping products matters in the age of AI and what designers can learn from it

uxdesign.cc

One of my favorite parts of shipping a product is finding out how people actually use it. Not how we intended them to use it—how they bend it, repurpose it, surprise us with it. That’s when you learn what you really built.

Karo Zieminski, writing for Product with Attitude, captures a great example of this in her breakdown of Anthropic’s Cowork launch. She quotes Anthropic engineer Boris Cherny:

Since we launched Claude Code, we saw people using it for all sorts of non-coding work: conducting vacation research, creating slide presentations, organizing emails, cancelling subscriptions, retrieving wedding photos from hard drives, tracking plant growth, and controlling ovens.

Controlling ovens. I love it. Users took a coding tool and turned it into a general-purpose assistant because that’s what they needed it to be.

Simon Willison had already spotted this:

Claude Code is a general agent disguised as a developer tool. What it really needs is a UI that doesn’t involve the terminal and a name that doesn’t scare away non-developers.

That’s exactly what Anthropic shipped in Cowork. Same engine, new packaging, name that doesn’t say “developers only.”

This is the beauty of what we do. Once you create something, it’s really up to users to show you how it should be used. Your job is to pay attention—and have the humility to build what the behavior is asking for, not what your roadmap says.

Cartoon girl with ponytail wearing an oversized graduation cap with yellow tassel, carrying books and walking while pointing ahead.

Anthropic Shipped Claude Cowork in 10 Days Using Its Own AI. Here’s Why That Changes Everything.

The acceleration that should make product leaders sit up.

open.substack.com

When I managed over 40 creatives at a digital agency, the hardest part wasn’t the work itself—it was resource allocation. Who’s got bandwidth? Who’s blocked waiting on feedback? Who’s deep in something and shouldn’t be interrupted? You learn to think of your team not as individuals you assign tasks to, but as capacity you orchestrate.

I was reminded of that when I read about Boris Cherny’s approach to Claude Code. Cherny is a Staff Engineer at Anthropic who helped build Claude Code. Karo Zieminski, writing in her Product with Attitude Substack, breaks down how Cherny actually uses his own tool:

He keeps ~10–15 concurrent Claude Code sessions alive: 5 in terminal (tabbed, numbered, with OS notifications). 5–10 in the browser. Plus mobile sessions he starts in the morning and checks in on later. He hands off sessions between environments and sometimes teleports them back and forth.

Zieminski’s analysis is sharp:

Boris doesn’t see AI as a tool you use, but as a capacity you schedule. He’s distributing cognition like compute: allocate it, queue it, keep it hot, switch contexts only when value is ready. The bottleneck isn’t generation; it’s attention allocation.

Most people treat AI assistants like a single very smart coworker. You give it a task, wait for the answer, evaluate, iterate. Cherny treats Claude like a team—multiple parallel workers, each holding different context, each making progress while he’s focused elsewhere.

Zieminski again:

Each session is a separate worker with its own context, not a single assistant that must hold everything. The “fleet” approach is basically: don’t make one brain do all jobs; run many partial brains.

I’ve been using Claude Code for months, but mostly one session at a time. Reading this, I realize I’ve been thinking too small. The parallel session model is about working efficiently. Start a research task in one session, let it run while you code in another, check back when it’s ready.

Looks like the new skill on the block is orchestration.
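If you want to feel the shape of that fleet model, here’s a toy sketch in Python, with asyncio standing in for the parallel sessions and a fake run_session where a real agent client would go. None of this is Claude Code’s actual API, which you drive from the terminal or browser:

```python
import asyncio

# Toy model of the "fleet" approach: several sessions, each holding its
# own context, making progress in parallel while your attention is
# elsewhere. run_session is a stand-in for a real agent client.

async def run_session(name: str, task: str) -> str:
    await asyncio.sleep(1)  # simulate the agent working
    return f"[{name}] finished: {task}"

async def main() -> None:
    sessions = {
        "research": "summarize competitor onboarding flows",
        "build": "prototype the settings page",
        "docs": "draft the release notes",
    }
    # Don't make one brain do all the jobs; run many partial brains
    # and wait for all of them to finish.
    results = await asyncio.gather(
        *(run_session(name, task) for name, task in sessions.items())
    )
    for line in results:
        print(line)

asyncio.run(main())
```

The orchestration skill lives in the task list, not the gather call: knowing what to parallelize, and what “done” looks like for each worker.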

Cartoon avatar in an orange cap beside text "I'm Boris and I created Claude Code." with "6.4M Views" in a sketched box.

How Boris Cherny Uses Claude Code

An in-depth analysis of how Boris Cherny, creator of Claude Code, uses it — and what it reveals about AI agents, responsibility, and product thinking.

open.substack.com

I started my career in print. I remember specifying designs in fractional inches and points, and expecting the printed piece to match the comp exactly. When I moved to the web in the late ’90s, I brought that same expectation with me because that’s how we worked back then. Our Photoshop files were precise. But if we’re being honest about what the web is—an interactive, endlessly malleable medium—that expectation is misplaced. I’ve long since changed my mind, of course.

Web developer Amit Sheen, writing for Smashing Magazine, articulates the problem with “pixel perfect” better than I’ve seen anyone do it:

When a designer asks for a “pixel-perfect” implementation, what are they actually asking for? Is it the colors, the spacing, the typography, the borders, the alignment, the shadows, the interactions? Take a moment to think about it. If your answer is “everything”, then you’ve just identified the core issue… When we say “make it pixel perfect,” we aren’t giving a directive; we’re expressing a feeling.

According to Sheen, “pixel perfect” sounds like a specification but functions as a vibe. It tells the developer nothing actionable.

He traces the problem back to print’s influence on early web design:

In the print industry, perfection was absolute. Once a design was sent to the press, every dot of ink had a fixed, unchangeable position on a physical page. When designers transitioned to the early web, they brought this “printed page” mentality with them. The goal was simple: The website must be an exact, pixel-for-pixel replica of the static mockup created in design applications like Photoshop and QuarkXPress.

Sheen doesn’t just tear down the old model. He offers replacement language. Instead of demanding “pixel perfect,” teams should ask for things like “visually consistent with the design system” or “preserves proportions and alignment logic.” These phrases describe actual requirements rather than feelings.

Sheen again, addressing designers directly:

When you hand over a design, don’t give us a fixed width, but a set of rules. Tell us what should stretch, what should stay fixed, and what should happen when the content inevitably overflows. Your “perfection” lies in the logic you define, not the pixels you draw.

I’m certain advanced designers and design teams know all of the above already. I just appreciated Sheen’s historical take. A Figma file is a hypothesis, a picture of what to build. The browser is the truth.

Smashing Magazine article header: "Rethinking 'Pixel Perfect' Web Design" with tags, author Amit Sheen and a red cat-and-bird illustration.

Rethinking “Pixel Perfect” Web Design — Smashing Magazine

Amit Sheen takes a hard look at the “Pixel Perfect” legacy concept, explaining why it’s failing us and redefining what “perfection” actually looks like in a multi-device, fluid world.

smashingmagazine.com

I became an associate creative director (ACD) in 2005, ten years after I started working professionally, when the digital agency Organic hired me into that role. I remember struggling mightily with trusting my team to do the work. In my previous job as an art director, I hated it when my ACD or CD would go into my files after I’d gone home and just redo stuff. I didn’t do that, but it was very difficult to fight the urge, or to avoid pushing my own design direction. (I failed on the latter.) That’s an intrinsic problem.

Sometimes, the issue is extrinsic, especially when you’re promoted into a leadership role from being an individual contributor (IC). The transition is a struggle. You get promoted because you were great at the work, and then the organization keeps pulling you back to do the work instead of leading at the level your new role demands.

Sabina Nawaz, writing for Harvard Business Review, explains why promotions grant potential but not always permission:

Research shows many midlevel and senior leaders still spend a disproportionate amount of time on tactical work rather than enterprise leadership. In my coaching work with senior leaders, I’ve found that while promotions provide the potential to lead strategically, they don’t always grant permission to do so. To gain that, you must do the hidden (and harder) work of redefining how you think, behave, and interact within the system.

That phrase, “potential but not permission,” is the whole problem in four words. You have the title, but the org’s muscle memory keeps treating you like your old self.

Nawaz identifies a common culprit: bosses who can’t let go of their former role:

Because the SVP had personally run my client’s division for years, he struggled to let go of intervening in the VP’s work. Six months into the transition, the SVP was still reviewing every decision, overriding calls, and re-engaging in tactical discussions he no longer needed to oversee. While he explained his involvement as giving feedback and advice, he was “overhelping,” a seemingly benign act that research suggests can ultimately erode trust, autonomy, and performance.

I’ve watched this pattern derail design organizations. A new creative director gets promoted, but the VP who used to hold that role keeps jumping into design reviews, redlining layouts, second-guessing type choices. The CD never develops their own judgment because their boss never leaves the room.

Nawaz’s advice for breaking the cycle is direct:

Take a quick glance at your calendar and ask yourself if it still reflects the activities, information flow, and ownership items of your prior role. Just as you need your boss to step back to empower you, you must redesign where you spend your time and which decisions to let your team fully own.

Your calendar doesn’t lie. If it’s packed with the same meetings you attended before your promotion, you haven’t actually made the transition. You’ve just added a new title to your old job.

Older person with short gray hair and glasses in profile, hand on chin, overlaid with orange dots and black swirling line.

Your New Role Requires Strategic Thinking…But You’re Stuck in the Weeds

Senior-level promotions are an opportunity for leaders to impact a company’s strategy, but it’s easy to get pulled back into the tactical weeds. A visibly higher spot on the organizational chart doesn’t guarantee time for strategic thinking. To gain that, you must do the hidden (and harder) work of redefining how you think, behave, and interact within the system, and be adaptable to the unpredictable needs of stakeholders you need to influence. Here’s how to protect your ability to lead at the altitude your new role requires—and that your team needs to succeed.

hbr.org

Nice mini-site from Figma showcasing the “iconic interactions” of the last 20 years. It explores how software has become inseparable from how we think and connect—and how AI is accelerating that shift toward adaptive, conversational interfaces. Made with Figma Make, of course.

Centered bold white text "Software is culture" on a soft pastel abstract gradient background (pink, purple, green, blue).

Software Is Culture

Yesterday's software has shaped today's generation. To understand what's next as software grows more intelligent, we look back on 20 years of interaction design.

figma.com

Every designer has noticed that specific seafoam green in photos of mid-century control rooms. It shows up in nuclear plants, NASA mission control, old hospitals. Wasn’t the hospital in 1975’s One Flew Over the Cuckoo’s Nest that color? It’s too consistent to be coincidence.

Beth Mathews traced the origin back to color theorist Faber Birren, who consulted for DuPont and created the industrial color safety codes still in use today. His reasoning:

“The importance of color in factories is first to control brightness in the general field of view for an efficient seeing condition. Interiors can then be conditioned for emotional pleasure and interest, using warm, cool, or luminous hues as working conditions suggest. Color should be functional and not merely decorative.”

Color should be functional and not merely decorative. These weren’t aesthetic choices—they were human factors engineering decisions, made in environments where one mistake could be catastrophic. The seafoam green was specifically chosen to reduce visual fatigue. Kinda cool.

Vintage teal industrial control room with wall-mounted analog gauges and switches, wooden swivel chair and yellow rope barrier.

Why So Many Control Rooms Were Seafoam Green

The Color Theory Behind Industrial Seafoam Green

open.substack.com