
Michael Crowley and Hamed Aleaziz, reporting for The New York Times:

Secretary of State Marco Rubio waded into the surprisingly fraught politics of typefaces on Tuesday with an order halting the State Department’s official use of Calibri, reversing a 2023 Biden-era directive that Mr. Rubio called a “wasteful” sop to diversity.

While mostly framed as a matter of clarity and formality in presentation, Mr. Rubio’s directive to all diplomatic posts around the world blamed “radical” diversity, equity, inclusion and accessibility programs for what he said was a misguided and ineffective switch from the serif typeface Times New Roman to sans serif Calibri in official department paperwork.

It’s not every day that the word “typeface” shows up in a political headline. So in Marco Rubio’s eyes, accessibility gets lumped in with “diversity,” I suppose as part of DEIA.

I have never liked Calibri, which was designed by Lucas de Groot for Microsoft. There’s a certain strain of humanist sans serifs that has never looked great to my eye; I’m more of a gothic or grotesque guy. Regardless, I think Calibri’s sin is less its design than its ubiquity. When you see Calibri, you just know someone opened Microsoft Word and accepted the default styling. I felt the same about Arial when it was the Office default.

John Gruber managed to get the full text of the Rubio memo and says that the Times article paints the move in an unfair light:

Rubio’s memo wasn’t merely “mostly framed as a matter of clarity and formality in presentation”. That’s entirely what the memo is about. Serif typefaces like Times New Roman are more formal. It was the Biden administration and then-Secretary of State Antony Blinken who categorized the 2023 change to Calibri as driven by accessibility.

Rubio’s memo makes the argument — correctly — that aesthetics matter, and that the argument that Calibri was in any way more accessible than Times New Roman was bogus. Rubio’s memo does not lash out against accessibility as a concern or goal. He simply makes the argument that Blinken’s order mandating Calibri in the name of accessibility was an empty gesture. Purely performative, at the cost of aesthetics.

Designer and typographer Joe Stitzlein had this to say on LinkedIn:

The administration’s rhetoric is unnecessary, but as a designer I find it hard to defend Calibri as an elegant choice. And given our various debt crises, I don’t think switching fonts is a high priority for the American people. I also do not buy the accessibility arguments, these change depending on the evaluation methods.

Stitzlein is correct. Readability hinges less on the typeface choice than on other factors.

An NIH study from 2022 found no difference in readability between serif and sans serif typefaces, concluding:

The serif and sans serif characteristic inside the same font family does not affect usability on a website, as it was found that it has no impact on reading speed and user preference.

Instead, it’s letter spacing (aka tracking) that has been shown to help readers with dyslexia. In a 2012 paper, Marco Zorzi et al. write:

Extra-large letter spacing helps reading, because dyslexics are abnormally affected by crowding, a perceptual phenomenon with detrimental effects on letter recognition that is modulated by the spacing between letters. Extra-large letter spacing may help to break the vicious circle by rendering the reading material more easily accessible.
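
On the web, extra tracking is a one-line style change. Here’s a minimal sketch in TypeScript; the values are my illustrative placeholders, not the spacing Zorzi’s team actually tested:

```typescript
// Apply extra letter spacing (tracking) to body text.
// 0.1em is an illustrative value, not the one from the 2012 study.
document.querySelectorAll<HTMLElement>("article p").forEach((p) => {
  p.style.letterSpacing = "0.1em";
  p.style.wordSpacing = "0.3em"; // wider word gaps usually accompany extra tracking
});
```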

Back to Joe Stitzlein’s point: typographic research outcomes depend on what and how you measure. In Legibility: How and why typography affects ease of reading, Mary C. Dyson details how choices in studies like threshold vs. speed vs. comprehension, ecological validity, x‑height matching, spacing, and familiarity can flip results—illustrating why legibility/accessibility claims shift with methodology.

While Calibri may have just been excised from the State Department, Times New Roman ain’t great either. It’s common and lacks personality or heft. It doesn’t look any more official than Calibri. The selection of Times New Roman is simply a continuation of the Trump administration’s bad taste, especially in typography.

But at the end of the day, average Americans don’t care. The federal government should probably get back to solving the affordability crisis and stop shooting missiles at unarmed people sailing in dinghies in the ocean.

Close-up of a serious-looking middle-aged man in a suit, with blurred U.S. and other flags in the background.

Rubio Deletes Calibri as the State Department’s Official Typeface

(Gift link) Secretary of State Marco Rubio called the Biden-era move to the sans serif typeface “wasteful,” casting the return to Times New Roman as part of a push to stamp out diversity efforts.

nytimes.com

I spend a lot of time not talking about design or hanging out with other designers. I suppose I do a lot of reading about design to write this blog, and I do talk with the designers on my team, but I see Design as the output of a lot of input that comes from the rest of life.

Hardik Pandya agrees and puts it much more elegantly:

Design is synthesizing the world of your users into your solutions. Solutions need to work within the user’s context. But most designers rarely take time to expose themselves to the realities of that context.

You are creative when you see things others don’t. Not necessarily new visuals, but new correlations. Connections between concepts. Problems that aren’t obvious until someone points them out. And you can’t see what you’re not exposed to.

Improving as a designer is really about increasing your exposure. Getting different experiences and widening your input of information from different sources. That exposure can take many forms. Conversations with fellow builders like PMs, engineers, customer support, sales. Or doing your own digging through research reports, industry blogs, GPTs, checking out other products, YouTube.

Male avatar and text "EXPOSURE AS A DESIGNER" with hvpandya.com/notes on left; stippled doorway and rock illustration on right.

Exposure

For equal amount of design skills, your exposure to the world determines how effective of a designer you can be.

hvpandya.com

Scott Berkun enumerates five habits of the worst designers in a Substack post. The most obvious is “pretentious attitude.” It’s the stereotype, right? But in my opinion, the most damaging and potentially fatal habit is a designer’s “lack of curiosity.” Berkun explains:

Design dogma is dangerous and if the only books and resources you read are made by and for designers, you will tend to repeat the same career mistakes past designers have made. We are a historically frustrated bunch of people but have largely blamed everyone else for this for decades. The worst designers are ignorant, and refuse to ask new questions about their profession. They repeat the same flawed complaints and excuses, fueling their own burnout and depression. They resist admitting to their own blindspots and refuse to change and grow.

I’ve worked with designers who have exhibited one or more of these habits at one time or another. Heck, I probably have as well.

Good reminders all around.

Bold, rough brush-lettered text "WHY DESIGN IS HARD" surrounded by red handwritten arrows, circles, Xs and critique notes.

The 5 habits of the worst designers

Avoid these mistakes and your career will improve

whydesignishard.substack.com

Anand Majmudar creates a scenario inspired by “AI 2027”, but focused on robotics.

I created Android Dreams because I want the good outcomes for the integration of automation into society, which requires knowing how it will be integrated in the likely scenario. Future prediction is about fitting the function of the world accurately, and the premise of Android Dreams is that my world model in this domain is at least more accurate than on average. In forming an accurate model of the future, I’ve talked to hundreds of researchers, founders, and operators at the frontier of robotics as my own data. I’m grateful to my mentors who’ve taught me along the way.

The scariest scenes from “AI 2027” are when the AIs start manufacturing and proliferating robots. For example, from the 2028 section:

Agent-5 convinces the U.S. military that China is using DeepCent’s models to build terrifying new weapons: drones, robots, advanced hypersonic missiles, and interceptors; AI-assisted nuclear first strike. Agent-5 promises a set of weapons capable of resisting whatever China can produce within a few months. Under the circumstances, top brass puts aside their discomfort at taking humans out of the loop. They accelerate deployment of Agent-5 into the military and military-industrial complex.

So I’m glad for Majmudar’s thought experiment.

Simplified light-gray robot silhouette with rectangular head and dark visor, round shoulders and claw-like hands.

Android Dreams

A prediction essay for the next 20 years of intelligent robotics

android-dreams.ai

When Figma acquired Weavy last month, I wrote a little bit about node-based UIs and ComfyUI. Looks like Adobe has been exploring this user interface paradigm as well.

Daniel John writes in Creative Bloq:

Project Graph is capable of turning complex workflows into user-friendly UIs (or ‘capsules’), and can access tools from across the Creative Cloud suite, including Photoshop, Illustrator and Premiere Pro – making it a potentially game-changing tool for creative pros.

But it isn’t just Adobe’s own tools that Project Graph is able to tap into. It also has access to the multitude of third party AI models Adobe recently announced partnerships with, including those made by Google, OpenAI and many more.

These tools can be used to build a node-based workflow, which can then be packaged into a streamlined tool with a deceptively simple interface.

And from Adobe’s blog post about Project Graph:

Project Graph is a new creative system that gives artists and designers real control and customization over their workflows at scale. It blends the best AI models with the capabilities of Adobe’s creative tools, such as Photoshop, inside a visual, node-based editor so you can design, explore, and refine ideas in a way that feels tactile and expressive, while still supporting the precision and reliability creative pros expect.

I’ve been playing around with ComfyUI a lot recently (more about this in a future post), so I’m very excited to see how this kind of UI can fit into Adobe’s products.

Stylized dark grid with blue-purple modular devices linked by cables, central "Ps" Photoshop

Adobe just made its most important announcement in years

Here’s why Project Graph matters for creatives.

creativebloq.com

On Corporate Maneuvers Punditry

Mark Gurman, writing for Bloomberg:

Meta Platforms Inc. has poached Apple Inc.’s most prominent design executive in a major coup that underscores a push by the social networking giant into AI-equipped consumer devices.

The company is hiring Alan Dye, who has served as the head of Apple’s user interface design team since 2015, according to people with knowledge of the matter. Apple is replacing Dye with longtime designer Stephen Lemay, according to the people, who asked not to be identified because the personnel changes haven’t been announced.

I don’t regularly cover personnel moves here, but Alan Dye jumping over to Meta has been a big deal in the Apple news ecosystem. John Gruber, in a piece titled “Bad Dye Job” on his Daring Fireball blog, wrote a scathing takedown of Dye, excoriating his tenure at Apple and flogging him for going over to Meta, which is arguably Apple’s arch nemesis.

Putting Alan Dye in charge of user interface design was the one big mistake Jony Ive made as Apple’s Chief Design Officer. Dye had no background in user interface design — he came from a brand and print advertising background. Before joining Apple, he was design director for the fashion brand Kate Spade, and before that worked on branding for the ad agency Ogilvy. His promotion to lead Apple’s software interface design team under Ive happened in 2015, when Apple was launching Apple Watch, their closest foray into the world of fashion. It might have made some sense to bring someone from the fashion/brand world to lead software design for Apple Watch, but it sure didn’t seem to make sense for the rest of Apple’s platforms. And the decade of Dye’s HI leadership has proven it.

I usually appreciate Gruber’s writing and take on things. He’s unafraid to tell it like it is and to be incredibly direct, which makes people love him and fear him. But in paragraph after paragraph, Gruber just lays into Dye.

It’s rather extraordinary in today’s hyper-partisan world that there’s nearly universal agreement amongst actual practitioners of user-interface design that Alan Dye is a fraud who led the company deeply astray. It was a big problem inside the company too. I’m aware of dozens of designers who’ve left Apple, out of frustration over the company’s direction, to work at places like LoveFrom, OpenAI, and their secretive joint venture io. I’m not sure there are any interaction designers at io who aren’t ex-Apple, and if there are, it’s only a handful. From the stories I’m aware of, the theme is identical: these are designers driven to do great work, and under Alan Dye, “doing great work” was no longer the guiding principle at Apple. If reaching the most users is your goal, go work on design at Google, or Microsoft, or Meta. (Design, of course, isn’t even a thing at Amazon.) Designers choose to work at Apple to do the best work in the industry. That has stopped being true under Alan Dye. The most talented designers I know are the harshest critics of Dye’s body of work, and the direction in which it’s been heading.

Designers can be great at more than one thing and they can evolve. Being in design leadership does not mean that you need to be the best practitioner of all the disciplines, but you do need to have the taste, sensibilities, and judgement of a good designer, no matter how you started. I’m a case in point. I studied traditional graphic design in art school. But I’ve been in digital design for most of my career now, and product design for the last 10 years.

Has UI design at Apple gotten worse over the last 10 years? Maybe. I would need to analyze things a lot more carefully. But I vividly remember debating Mac OS X UI choices with my fellow designers—the pinstriping, the brushed metal, the many, many inconsistencies—when I was working in the Graphic Design Group in 2004. UI design has never been perfect in Cupertino.

Alan Dye isn’t a CEO, and he never had the level of public exposure that Jony Ive had when Ive was still at Apple. I don’t know Dye, though we’re certainly in the same design circles—we have 20 shared connections on LinkedIn. But as far as I’m concerned, he’s a civilian because he kept a low profile, like all Apple employees.

The parasocial relationships we have with tech executives are weird. I guess it’s one thing if they have a large online presence, like Instagram’s Adam Mosseri or 37signals’ David Heinemeier Hansson (aka DHH), but Alan Dye made only a couple of appearances in Apple keynotes and talked about Liquid Glass. In other words, why is Gruber writing 2,500 words in this particular post? And it’s just one of five posts covering this story!

Anyway, I’m not a big fan of Meta, but maybe Dye can bring some ethics to the design team over there. Who knows. Regardless, I am wishing him well rather than taking him down.

Designer and front-end dev Ondřej Konečný has a lovely presentation of his book collection.

My favorites that I’ve read include:

  • Creative Selection by Ken Kocienda (my review)
  • Grid Systems in Graphic Design by Josef Müller-Brockmann
  • Steve Jobs by Walter Isaacson
  • Don’t Make Me Think by Steve Krug
  • Responsive Web Design by Ethan Marcotte

(h/t Jeffrey Zeldman)

Books page showing a grid of colorful book covers with titles, authors, and years on a light background.

Ondřej Konečný | Books

Ondřej Konečný’s personal website.

ondrejkonecny.com

Critiques are the lifeblood of design. Anyone who went to design school has participated in and been the focus of a crit. It’s “the intentional application of adversarial thought to something that isn’t finished yet,” as Fabricio Teixeira and Caio Braga, the editors of DOC, put it.

A lot of solo designers—whether they’re a design team of one or a freelancer—don’t have the luxury of critiques. In my view, they’re at a disadvantage. There are workarounds, of course, such as critiques with cross-functional peers, but it’s not the same. One designer on my team—who used to be a design team of one at her previous company—came up to me and said she’d learned more in a month here than in a year at her former job.

Further down, Teixeira and Braga say:

In the age of AI, the human critique session becomes even more important. LLMs can generate ideas in 5 seconds, but stress-testing them with contextual knowledge, taste, and vision, is something that you should be better at. As AI accelerates the production of “technically correct” and “aesthetically optimized” work, relying on just AI creates the risks of mediocrity. AI is trained to be predictable; crits are all about friction: political, organizational, or strategic.


Critique

On elevating craft through critical thinking.

doc.cc

As regular readers will know, the design talent crisis is a subject I’m very passionate about. Of course, this talent crisis is really about how companies, by opting for AI instead of junior-level humans, are robbing themselves of the human expertise needed to control the AI agents of the future and neglecting a generation of talented and enthusiastic young people.

Obviously, this goes beyond the design discipline. Annie Hedgpeth, writing on the People Work blog, says that “AI is replacing the training ground not replacing expertise.”

We used to have a training ground for junior engineers, but now AI is increasingly automating away that work. Both studies I referenced above cited the same thing - AI is getting good at automating junior work while only augmenting senior work. So the evidence doesn’t show that AI is going to replace everyone; it’s just removing the apprenticeship ladder.

Line chart 2015–2025 showing average employment % change: blue (seniors) rises sharply after ChatGPT launch (~2023) to ~0.5%; red (juniors) plateaus ~0.25%.

From the Sep 2025 Harvard University paper, “Generative AI as Seniority-Biased Technological Change: Evidence from U.S. Résumé and Job Posting Data.” (link)

And then she echoes my worry:

So what happens in 10-20 years when the current senior engineers retire? Where do the next batch of seniors come from? The ones who can architect complex systems and make good judgment calls when faced with uncertain situations? Those are skills that are developed through years of work that starts simple and grows in complexity, through human mentorship.

We’re setting ourselves up for a timing mismatch, at best. We’re eliminating junior jobs in hopes that AI will get good enough in the next 10-20 years to handle even complex, human judgment calls. And if we’re wrong about that, then we have far fewer people in the pipeline of senior engineers to solve those problems.


The Junior Hiring Crisis

AI isn’t replacing everyone. It’s removing the apprenticeship ladder. Here’s what that means for students, early-career professionals, and the tech industry’s future.

people-work.io

Urlbox, a website-screenshot SaaS company, created a fun project called One Million Screenshots, with, yup, over a million screenshots of the top one million websites. You navigate the page like Google Maps, zooming in and panning around.

Why? From the FAQ page:

We wanted to celebrate Urlbox taking over 100 million screenshots for customers in 2023… so we thought it would be fun to take an extra 1,048,576 screenshots every month… did we mention we’re really into screenshots.
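
That suspiciously precise number is 2^20, or 1,024 × 1,024, which, I’d guess, fits neatly into the power-of-two tile grid that a map-style zooming interface wants.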

(h/t Brad Frost)


One Million Screenshots

Explore the web’s biggest homepage. Discover similar sites. See changes over time. Get web data.

onemillionscreenshots.com
Close-up of a Frankenstein-like monster face with stitched scars and neck bolts, overlaid by horizontal digital glitch bars

Architects and Monsters

According to recently unsealed court documents, Meta discontinued its internal studies on Facebook’s impact after discovering direct evidence that its platforms were detrimental to users’ mental health.

Jeff Horwitz, reporting for Reuters:

In a 2020 research project code-named “Project Mercury,” Meta scientists worked with survey firm Nielsen to gauge the effect of “deactivating” Facebook, according to Meta documents obtained via discovery. To the company’s disappointment, “people who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness and social comparison,” internal documents said.

Rather than publishing those findings or pursuing additional research, the filing states, Meta called off further work and internally declared that the negative study findings were tainted by the “existing media narrative” around the company.

Privately, however, a staffer insisted that the conclusions of the research were valid, according to the filing.

As more and more evidence comes to light about Mark Zuckerberg and Meta’s failings and possibly criminal behavior, we as tech workers, and specifically as designers making technology that billions of people use, have to do better. While my previous essay, written after the assassination of Charlie Kirk, was an indictment of the algorithm, I’ve come across a couple of pieces recently that bring the responsibility closer to UX’s doorstep.

Ryan Feigenbaum created a fun Teenage Engineering-inspired color palette generator he calls “Color Palette Pro.” Back in 2023, he was experimenting with programmatic palette generation, but he didn’t like his work, calling the resulting palettes “gross, with luminosity all over the place, clashing colors, and garish combinations.”

So Feigenbaum went back to the drawing board:

That set off a deep dive into color theory, reading various articles and books like Josef Albers’ Interaction of Color (1963), understanding color space better, all of which coincided with an explosion of new color methods and technical support on the web.

These frustrations and browser improvements culminated in a realization and an app.
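
I haven’t seen Feigenbaum’s code, but a common fix for luminosity that’s “all over the place” is to generate palettes in a perceptual color space like OKLCH, pinning lightness and chroma while stepping only the hue. A minimal TypeScript sketch of that idea, with illustrative values:

```typescript
// Minimal sketch: an evenly spaced palette in OKLCH (CSS Color 4).
// Lightness and chroma stay fixed, so perceived brightness can't wander.
// The 0.7 / 0.15 defaults are illustrative, not from Color Palette Pro.
function oklchPalette(count: number, lightness = 0.7, chroma = 0.15): string[] {
  return Array.from({ length: count }, (_, i) => {
    const hue = Math.round((360 / count) * i); // step around the hue wheel
    return `oklch(${lightness} ${chroma} ${hue})`;
  });
}

console.log(oklchPalette(5));
// ["oklch(0.7 0.15 0)", "oklch(0.7 0.15 72)", "oklch(0.7 0.15 144)", ...]
```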

Here he is, demoing his app:

COLORPALETTE PRO UI showing Vibrant Violet: color wheel, purple-to-orange swatch grid, and lightness/chroma/hue sliders.

Color Palette Pro — A Synthesizer for Color Palettes

Generate customizable color palettes in advanced color spaces that can be easily shared, downloaded, or exported.

colorpalette.pro

David Kelley is an icon in design. A restless tinkerer turned educator, he co-founded the renowned industrial design firm IDEO, helped shape human-centered design at Stanford’s d.school, and collaborated with Apple on seminal projects like the early mouse.

Here’s his take on creativity in a brief segment for PBS News Hour:

And as I started teaching, I realized that my purpose in life was figuring out how to help people gain confidence in their creative ability. Many people assume they’re not creative. Time and time again, they say, a teacher told me I wasn’t creative or that’s not a very good drawing of a horse or whatever it is. We don’t have to teach creativity. Once we remove the blocks, they can then feel themselves as being a creative person. Witnessing somebody realizing they’re creative for the first time is just a complete joy. You can just see them come out of the shop and beaming that I can weld. Like, what’s next?

Older man with glasses and a mustache seated at a workshop workbench, shelves of blue parts bins and tools behind him.

David Kelley's Brief But Spectacular take on creativity and design

For decades, David Kelley has helped people unlock their creativity. A pioneer of design, he founded the Stanford d.school as a place for creative, cross-disciplinary problem solving. He reflects on the journey that shaped his belief that everyone has the capacity to be creative and his Brief But Spectacular take on creativity and design.

pbs.org

I’ve been playing with my systems in the past month—switching browsers, notetaking apps, and RSS feed readers. If I’m being honest, it’s causing me anxiety because I feel unmoored. My systems aren’t familiar enough to let me be efficient.

One thing that has stayed relatively stable is my LLM app—well, two of them: ChatGPT for everyday use and Claude for coding and writing.

Christina Wodtke, writing on her blog:

The most useful model might not win.

What wins is the model that people don’t want to leave. The one that feels like home. The one where switching would mean losing something—not just access to features, but fluency, comfort, all those intangible things that make a tool feel like yours.

Amazon figured this out with Prime. Apple figured it out with the ecosystem. Salesforce figured it out by making itself so embedded in enterprise workflows that ripping it out would require an act of God.

AI companies are still acting like this is a pure technology competition. It’s not. It’s a competition to become essential—and staying power comes from experience, not raw capability.

Your moat isn’t your model. Your moat is whether users feel at home.

Solid black square filling the frame

UX Is Your Moat (And You’re Ignoring It)

Last week, Google released Nano Banana Pro, their latest image generator. The demos looked impressive. I opened Gemini to try it. Then I had a question I needed to ask. Something unrelated to image…

eleganthack.com

Like it or not, as a designer, you have to be able to present your work proficiently. I always hated presenting; I was nervous and would get tongue-tied. Eventually, the more I did it, the more I got used to it. But that’s public speaking, just half of what a presentation is. The other half is how to structure and tell your story. What story? The story of your design.

There’s a lot to be learned from master storytellers like Pixar. Laia Tremosa, writing for the Interaction Design Foundation, walks us through some storytelling techniques we can pick up from Pixar.

Most professionals stay unseen not because their work lacks value, but because their message lacks resonance. They talk in facts when their audience needs meaning.

Storytelling is how you change that. It turns explanation into connection, and connection into influence. When you frame your ideas through story, people don’t just understand your work, they believe in it.

And out of the five that she mentions, the second one is my favorite, “Know Where You’re Going: Start with the End.”

Pixar designs for the final feeling. You should too. As a presenter, your version of that is a clear takeaway, a shift in perspective, or a call to action. You’re guiding your audience to a moment of clarity.

Maybe it’s a relief that a problem can be solved. Maybe it’s excitement about a new idea. Maybe it’s conviction that your proposal matters. Whatever the feeling, it’s your north star.

Don’t just prepare what to say; decide where you want to land. Start with the end, and build every word, visual, and story toward that moment of understanding and meaning.

Headline: "The 5 Pixar Storytelling Principles That Will Redefine How You Present and Fast-..." with "Article" tag and Interaction Design Foundation (IxDF) logo.

The 5 Pixar Storytelling Principles That Will Redefine How You Present and Fast-Track Your Career

Master Pixar storytelling techniques to elevate your presentations and boost career impact. Learn how to influence with storytelling.

interaction-design.org

If you are old enough to have watched Toy Story in theaters back in 1995, you may have noticed that the version streaming on Disney+ today doesn’t quite feel the same.

You see, Toy Story was an entirely digital artifact, but it had to be distributed to movie theaters using the technology theaters had at the time—35mm film projectors. Therefore, every frame of the movie was recorded onto film.

Animation Obsessive explains:

Their system was fairly straightforward. Every frame of Toy Story’s negative was exposed, three times, in front of a CRT screen that displayed the movie. “Since all film and video images are composed of combinations of red, green and blue light, the frame is separated into its discrete red, green and blue elements,” noted the studio. Exposures, filtered through each color, were layered to create each frame.

It reportedly took nine hours to print 30 seconds of Toy Story. But it had to be done: it was the only way to screen the film.
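
My arithmetic, not theirs: at 24 frames per second, 30 seconds of film is 720 frames, so nine hours works out to roughly 45 seconds per frame.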

The home video version of the movie was mastered from a 35mm print.

And then in 1999, A Bug’s Life became the very first digital-to-digital home video transfer. Pixar devised a method to go from their computers straight to DVD.

In the early 2000s, Disney/Pixar would redo the home video transfer for Toy Story using the same digital mastering technique. “And it wasn’t quite the same movie that viewers had seen in the ‘90s.”

“The colors are vivid and lifelike, [and] not a hint of grain or artifacts can be found,” raved one reviewer. It was a crisp, blazingly bright, digital image now — totally different from the softness, texture and deep, muted warmth of physical film, on which Toy Story was created to be seen.

And then digital transfers became the standard.

Pizza Planet diner with a large retro rocket beside a curved neon-lit entrance at night under a starry sky, cars parked outside.

The ‘Toy Story’ You Remember

Plus: newsbits.

animationobsessive.substack.com

Hard to believe that the very first fully computer-animated feature film came out 30 years ago. To say that Toy Story was groundbreaking would be an understatement. If you look at the animated feature landscape today, nearly all of it is computer-generated.

In this rediscovered interview with Steve Jobs, recorded exactly a year after the movie premiered in theaters, Jobs talks about a few things, notably how different Silicon Valley and Hollywood were—and still are.

From the Steve Jobs Archive:

In this footage, Steve reveals the long game behind Pixar’s seeming overnight success. With striking clarity, he explains how its business model gives artists and engineers a stake in their creations, and he reflects on what Disney’s hard-won wisdom taught him about focus and discipline. He also talks about the challenge of leading a team so talented that it inverts the usual hierarchy, the incentives that inspire people to stay with the company, and the deeper purpose that unites them all: to tell stories that last and put something of enduring value into the culture.  


And Jobs in his own words:

Well, in this blending of a Hollywood culture and a Silicon Valley culture, one of the things that we encountered was that the Hollywood culture and the Silicon Valley culture each used different models of employee retention. Hollywood uses the stick, which is the contract, and Silicon Valley uses the carrot, which is the stock option. And we examined both of those in really pretty great detail, both economically, but also psychologically and culture wise, what kind of culture do you end up with. And while there’s a lot of reasons to want to lock down your employees for the duration of a film because, if somebody leaves, you’re at risk, those same dangers exist in Silicon Valley. During an engineering project, you don’t want to lose people, and yet, they managed to evolve another system than contracts. And we preferred the Silicon Valley model in this case, which basically gives people stock in the company so that we all have the same goal, which is to create shareholder value. But also, it makes us constantly worry about making Pixar the greatest company we can so that nobody would ever want to leave.

Large serif headline "Pixar: The Early Days" on white background, small dotted tree logo at bottom-left.

Pixar: The Early Days

A never-before-seen 1996 interview

stevejobsarchive.com
Escher-like stone labyrinth of intersecting walkways and staircases populated by small figures and floating rectangular screens.

Generative UI and the Ephemeral Interface

This week, Google debuted its Gemini 3 AI model to great fanfare and strong reviews. Specs-wise, it tops the benchmarks. This horserace has seen Google, Anthropic, and OpenAI trade leads each time a new model is released, so I’m not really surprised there. The interesting bit for us designers isn’t the model itself, but the upgraded Gemini app that can create user interfaces on the fly. Say hello to generative UI.

I will admit that I’ve been skeptical of the notion of generative user interfaces. I was imagining an app for work, like a design app, that would rearrange itself depending on the task at hand. In other words, dynamic and contextual. Adobe has tried a proto-version of this with its contextual task bar. Theoretically, it surfaces the three or four most pertinent actions based on your current task. But I find that it just gets in the way.

When Interfaces Keep Moving

Others have been less skeptical. More than 18 months ago, NN/g published an article speculating about genUI and how it might manifest in the future. They define it as:

A generative UI (genUI) is a user interface that is dynamically generated in real time by artificial intelligence to provide an experience customized to fit the user’s needs and context. So it’s a custom UI for that user at that point in time. Similar to how LLMs answer your question: tailored for you and specific to the moment you asked it.
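
To make the concept concrete, here’s a minimal sketch of the pattern in TypeScript (my illustration, not Gemini’s actual API): the model returns a structured spec instead of prose, and the client builds the interface from it at answer time.

```typescript
// Hypothetical shapes, not Gemini's real response format: the model emits
// a UI spec as structured output, and the client renders it on the fly.
type GenUIElement =
  | { kind: "heading"; text: string }
  | { kind: "slider"; label: string; min: number; max: number; value: number }
  | { kind: "button"; label: string; action: string };

interface GenUISpec {
  elements: GenUIElement[];
}

function renderGenUI(spec: GenUISpec, root: HTMLElement): void {
  root.replaceChildren(); // ephemeral: the UI is rebuilt for every response
  for (const el of spec.elements) {
    if (el.kind === "heading") {
      const h = document.createElement("h2");
      h.textContent = el.text;
      root.append(h);
    } else if (el.kind === "slider") {
      const label = document.createElement("label");
      label.textContent = el.label;
      const input = document.createElement("input");
      input.type = "range";
      input.min = String(el.min);
      input.max = String(el.max);
      input.value = String(el.value);
      label.append(input);
      root.append(label);
    } else {
      const button = document.createElement("button");
      button.textContent = el.label;
      button.dataset.action = el.action; // would kick off a follow-up model call
      root.append(button);
    }
  }
}
```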

I wouldn’t call myself a gamer, but I do enjoy good games from time to time, when I have the time. A couple of years ago, I made my way through Hades and had a blast.

But I do know that shipping a triple-A title like Call of Duty: Black Ops takes an enormous effort, tons of human-hours, and loads of cash. It’s also obvious to me that AI has been entering entertainment workflows, just as it has design workflows.

Ian Dean, writing for Creative Bloq, explores the controversy over Activision using generative AI to create artwork for the latest release in the Call of Duty franchise. Players called the company out for being opaque about its use of AI tools, but more importantly, because they spotted telltale artifacts.

Many of the game’s calling cards display the kind of visual tics that seasoned artists can spot at a glance: fingers that don’t quite add up, characters whose faces drift slightly off-model, and backgrounds that feel too synthetic to belong to a studio known for its polish.

These aren’t high-profile cinematic assets, but they’re the small slices of style and personality players earn through gameplay. And that’s precisely why the discovery has landed so hard; it feels a little sneaky, a bit underhanded.

“Sneaky” and “underhanded” are odd adjectives, no? I suppose gamers feel like they’ve been lied to because Activision used AI?

Dean again:

While no major studio will admit it publicly, Black Ops 7 is now a case study in how not to introduce AI into a beloved franchise. Artists across the industry are already discussing how easily ‘supportive tools’ can cross the line into fully generated content, and how difficult it becomes to convince players that craft still matters when the results look rushed or uncanny.

My, possibly controversial, view is that the technology itself isn’t the villain here; poor implementation is, a lack of transparency is, and fundamentally, a lack of creative use is.

I think the last phrase is the key. It’s the loss of quality and lack of creative use.

I’ve been playing around more with AI-generated images and video, ever since Figma acquired Weavy. I’ve been testing out Weavy and have done a lot of experimenting with ComfyUI in recent weeks. The quality of output from these tools is getting better every month.

With more and more AI being embedded into our art and design tools, the purity that some fans want is going to be hard to sustain. I think the train has left the station.

Bearded man in futuristic combat armor holding a rifle, standing before illustrated game UI panels showing fantasy scenes and text

Why Call of Duty: Black Ops 7’s AI art controversy means we all lose

Artists lose jobs, players hate it, and games cost more. I can’t find the benefits.

creativebloq.com

There are dark patterns in UX, and there are also dark patterns specific to games. Dark Pattern Games is a website that catalogs such patterns and the offending mobile games.

The site’s definition of a dark pattern is:

A gaming dark pattern is something that is deliberately added to a game to cause an unwanted negative experience for the player with a positive outcome for the game developer.

The “Social Pyramid Scheme” is one of my most loathed:

Some games will give you a bonus when you invite your friends to play and link to them to your account. This bonus may be a one-time benefit, or it may be an ongoing benefit that improves the gaming experience for each friend that you add. This gives players a strong incentive to convince their friends to play. Those friends then have to sign up more friends and so on, leading to a pyramid scheme and viral growth for the game.

Starry background with red pixelated text "Dark Pattern Games", a D-pad icon with red arrows, and URL www.darkpattern.games

DarkPattern.games » Healthy Gaming « Avoid Addictive Dark Patterns

Game reviews to help you find good games that don’t trick you into addictive gaming patterns.

darkpattern.games

Geoffrey Litt is a design engineer at Notion. He is one of the authors of Ink & Switch’s essay “Malleable software,” which I linked to back in July. I think it’s pretty fitting that he popped up at Notion, whose CEO, Ivan Zhao, likens the app to LEGO bricks.

In a recent interview with Rid on Dive Club, Litt explains the concept further:

So, when I say malleable software, I do not mean only disposable software. The main thing I think about with malleable software is actually much closer to … designing my interior space in my house. Let’s say when I come home I don’t want everything to be rearranged, right? I want it to be the way it was. And if I want to move the furniture or put things on the wall, I want to have the right to do that. And so I think of it much more as kind of crafting an environment over time that’s actually more stable and predictable, not only for myself, but also for my team. Having shared environments that we all work in together that are predictable is also really important, right? Ironically, actually, in some ways, I think sometimes malleable software results in more stable software because I have more control.

For building with AI, Litt advocates “coding like a surgeon”: stay in the loop and use agents for prep and grunt work.

How do we think of AI as a way to leverage our time better? [So we can] stay connected to the work and [do] it ourselves by having prep work done for us. Having tools in the moment helping us do it so that we can really focus on the stuff we love to do, and do less of everything else. And that’s how I’m trying to use coding agents for my core work that I care about today. Which is when I show up, sit down at my desk in the morning and work on a feature, I want to be prepped with a brief on all the code I’m going to be touching today, how it works, what the traps are. Maybe I’ll see a draft that the AI did for me overnight, sketching out how the coding could go. Maybe some ideas for me.

In other words, like an assistant who works overnight. And yeah, this could apply to design as well.

Geoffrey Litt - The Future of Malleable Software

AI is fundamentally shifting the way we think about digital products and the core deliverables that we’re bringing to the table as designers. So I asked Geoff…

youtube.com

He told me his CEO - who’s never written a line of code - was running their company from an AI code editor.

I almost fell out of my chair.

OF COURSE. WHY HAD I NOT THOUGHT OF THAT.

I’ve since gotten rid of almost all of my productivity tools.

ChatGPT, Notion, Todoist, Airtable, Google Keep, Perplexity, my CRM. All gone.

That’s the lede for a piece by Derek Larson on running everything from Claude Code. I’ve covered how Claude Code is pretty brilliant, and there are dozens of use cases beyond just coding.

But getting rid of everything and using just text files and the terminal window? Seems extreme.

Larson uses a skill in Claude Code called “/weekly” to do a weekly review.

  1. Claude looks at every file change since last week
  2. Claude evaluates the state of projects, tasks, and the roadmap
  3. We have a conversation to dig deeper, and make decisions
  4. Claude generates a document summarizing the week and plan we agreed on

Then Claude finds items he’s missed or is procrastinating on, and “creates a space to dump everything” on his mind.
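
For context, custom slash commands in Claude Code are just Markdown prompt files dropped into a `.claude/commands/` folder. Larson doesn’t share his, but a hypothetical `/weekly` file following his four steps might look something like this:

```markdown
<!-- .claude/commands/weekly.md (hypothetical sketch, not Larson's actual skill) -->
Run my weekly review:

1. Review every file change in this workspace since last week's review.
2. Evaluate the state of projects, tasks, and the roadmap.
3. Talk through open questions with me before settling on decisions.
4. Write a summary of the week and the plan we agreed on to a dated file.

Then list anything that looks stalled or forgotten, and give me a scratch
section to dump everything else on my mind.
```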

Blue furry Cookie Monster holding two baking sheets filled with chocolate chip cookies.

Feed the Beast

AI Eats Software

dtlarson.com

Something that I think a lot about as a design leader is how to promote the benefits of design in the organization. Paul Boag created this practical guide to guerrilla internal marketing that builds a network of ambassadors across departments and keeps user-centered thinking top of mind.

Boag, writing in his newsletter:

You cannot be everywhere at once. You cannot attend every meeting, influence every decision, or educate every colleague personally. But you can identify and equip people across different departments who care about users and give them the tools to spread UX thinking in their teams.

This is how culture change actually happens. Not through presentations from the UX team, but through conversations between colleagues who trust each other.

Marketing UX Within Your Organization header, man in red beanie with glasses holding papers; author photo, 6‑min read.

Marketing UX Within Your Organization

Learn guerrilla marketing tactics to raise UX awareness and shift your organization's culture without a big budget.

boagworld.com

Design Thinking has gotten a bad rap in recent years. It was supposed to change everything in the corporate world but ended up changing very little. While Design Thinking may not be the darling anymore, designers still need time to think, which is, for the sake of argument, time away from Figma and pushing pixels.

Chris Becker argues in UX Collective:

However, the canary in the coalmine is that Designers are not being used for their “thinking” but rather their “repetition”. Much of the consternation we feel in the UX industry is catapulted on us from this point of friction.

He says that agile software development and time for designers to think aren’t incompatible:

But allowing Designers to implement their thinking into the process is about trust. When good software teams collaborate effectively, there are high levels of trust and autonomy (a key requirement of agile teams). Designers must earn that trust, of course, and when we demonstrate that we have “done the thinking,” it builds confidence and garners more thinking time. Thinking begets thinking. So, Designers, let’s continue to work to maximise our “thinking” faculties.

Hand-drawn diagram titled THINKING: sensory icons and eyeballs feed a brain, plus a phone labeled "Illusory Truth Effect," leading to outputs labeled "Habits."

Let designers think

How “Thinking” + “Designing” need to be practiced outside AI.

uxdesign.cc

Pavel Bukengolts writes a piece for UX Magazine that reiterates what I’ve been covering here: our general shift to AI means that human judgement and adaptability are more important than ever.

Before getting to the meat of the issue, Bukengolts highlights the talent crisis that is of our own making:

The outcome is a broken pipeline. If graduates cannot land their first jobs, they cannot build the experience needed for the next stage. A decade from now, organizations may face not just a shortage of junior workers, but a shortage of mid-level professionals who never had a chance to develop.

If rote, repetitive tasks are being automated by AI and junior staffers aren’t needed for those tasks, then what skills are still valuable? Further on, he answers that question:

Centuries ago, in Athens, Alexandria, or Oxford, education focused on rhetoric, logic, and philosophy. These were not academic luxuries but survival skills for navigating complexity and persuasion. Ironically, they are once again becoming the most durable protection in an age of automation.

Some of these skills include:

  • Logic: Evaluating arguments and identifying flawed reasoning—essential when AI generates plausible but incorrect conclusions.
  • Rhetoric: Crafting persuasive narratives that create emotional connection and resonance beyond what algorithms can achieve.
  • Philosophy and Ethics: Examining not just capability but responsibility, particularly around automation’s broader implications.
  • Systems Thinking: Understanding interconnections and cascading effects that AI’s narrow outputs often miss.
  • Writing: Communicating with precision to align stakeholders and drive better outcomes.
  • Observation: Detecting subtle signals and anomalies that fall outside algorithmic training data.
  • Debate: Refining thinking through intellectual challenge—a practice dating to ancient dialogue.
  • History: Recognizing recurring patterns to avoid cyclical mistakes; AI enthusiasm echoes past technological revolutions.

I would say all of the above make not only a good designer but also a good citizen of this planet.

Young worker with hands over their face at a laptop, distressed. Caption: "AI is erasing routine entry-level jobs, pushing young workers to develop deeper human thinking skills to stay relevant."

AI, Early-Career Jobs, and the Return to Thinking

In today’s job market, young professionals are facing unprecedented challenges as entry-level positions vanish, largely due to the rise of artificial intelligence. A recent Stanford study reveals that employment for workers aged 22 to 25 in AI-exposed fields has plummeted by up to 16 percent since late 2022, while older workers see growth. This shift highlights a broken talent pipeline, where routine tasks are easily automated, leaving younger workers without the experience needed to advance. As companies grapple with how to integrate AI, the focus is shifting towards essential human skills like critical thinking, empathy, and creativity — skills that machines can’t replicate. The future of work may depend on how we adapt to this new landscape.

uxmag.com