One more post down memory lane. Phil Gyford chronicled his first few months online, thirty years ago in 1995. He talks of modems, floppies, email, Usenet, IRC, and friendly strangers on the internet.

I had forgotten how onerous it was to get online back then. Gyford writes:

It’s hard to convey how difficult it was to set things up. So new and alien to me. When reading computer magazines I’d always skipped articles about networking and while the computers at university had been connected together, that was only for the purposes of printing, scanning and transferring files.

First there was the issue of getting online at all. The Internet Starter Kit spent 59 pages explaining how to set up MacTCP, and PPP or SLIP, two different methods of connecting to the internet, the differences of which happily escape me now. I spent a lot of late nights fiddling with control panels and extensions, learning about IP addresses, domain name servers, etc.

And Gyford reminds us just how marvelous the invention of the internet was:

Before the web – and all the rest of it – how could you have shared your words with anyone? Write a letter to a newspaper or magazine and hope they published it a few days or months later? Create your own fanzine and distribute copies one-by-one to strangers, and posted in individually addressed and stamped envelopes? That was it, unless you were going to become a successful journalist or writer. Your reach, your world, was tiny.

But now, then, you could put anything you wanted on your own website and instantly it was visible by anyone in the world. OK, anyone in the world who was also online, which wasn’t many then, and they were all quite similar, but, still… they could be anywhere! And their number was growing.

And you could chat to people in real time and it didn’t matter where they were, they were here in front of you. Send emails back-and-forth to friends without writing letters, and buying stamps, and waiting days or weeks for a response. Instant! Weightless!

The post is worth a read. It’s complete with pictures of some artifacts from that time, including newspaper clippings, invoices, and journal entries.

My first months in cyberspace

Recalling the difficulties and wonder of getting online for the first time in 1995, including diary extracts from the time.

gyford.com

If you were into computers between 1975 and 1998, like I was, you read Byte magazine. It wasn’t just product reviews and spec sheets—Byte offered serious technical depth, covering everything from assembly language programming to hardware architecture to the philosophy of human-computer interaction. The magazine documented the PC revolution as it happened, becoming required reading for anyone building or thinking deeply about the future of computing. It was also thick as hell.

Someone made a visual archive of Byte magazine, showing every printed page in a zoomable interface:

Before Hackernews, before Twitter, before blogs, before the web had been spun, when the internet was just four universities in a trenchcoat, there was BYTE. A monthly mainline of the entire personal computing universe, delivered on dead trees for a generation of hackers. Running from September 1975 to July 1998, its 277 issues chronicled the Cambrian explosion of the microcomputer, from bare-metal kits to the dawn of the commercial internet. Forget repackaged corporate press releases—BYTE was for the builders.

It’s a fun glimpse into the past before thin laptops, smartphones, and disco-colored gaming PCs.

Grid collage of vintage technology magazine pages and ads, featuring colorful retro layouts, BYTE covers and articles.

Byte - a visual archive

Explore a zoomable visual archive of BYTE magazine: all 277 issues (Sep 1975 - Jul 1998) scanned page-by-page, a deep searchable glimpse into the PC revolution.

byte.tsundoku.io

The Whole Earth Catalog, published by Stewart Brand several times a year between 1968 and 1972 (and occasionally until 1998), was the internet before the internet existed. It curated tools, books, and resources for self-education and DIY living, embodying an ethos of access to information that would later define the early web. Steve Jobs famously called it “one of the bibles of my generation,” and for good reason—its approach to democratizing knowledge and celebrating user agency directly influenced the philosophy of personal computing and the participatory culture we associate with the web’s early days.

Curated by Barry Threw and collaborators, the Whole Earth Index is a near-complete archive of the issues of the Whole Earth Catalog.

Here lies a nearly-complete archive of Whole Earth publications, a series of journals and magazines descended from the Whole Earth Catalog, published by Stewart Brand and the POINT Foundation between 1968 and 2002. They are made available here for scholarship, education, and research purposes.

The info page also includes a quote from Stewart Brand:

“Dateline Oct 2023, Exactly 55 years ago, in 1968, the Whole Earth Catalog first came to life. Thanks to the work of an ongoing community of people, it prospered in various forms for 32 years—sundry editions of the Whole Earth Catalog, CoEvolution Quarterly, The WELL, the Whole Earth Software Catalog, Whole Earth Review, etc. Their impact in the world was considerable and sustained. Hundreds of people made that happen—staff, editors, major contributors, board members, funders, WELL conference hosts, etc. Meet them here.” —Stewart Brand

Brand’s mention of The WELL is particularly relevant here—he founded that pioneering online community in 1985 as a digital extension of the Whole Earth ethos, creating one of the internet’s first thriving social networks.

View of Earth against black space with large white serif text "Whole Earth Index" overlaid across the globe.

Whole Earth Index

Here lies a nearly-complete archive of Whole Earth publications, a series of journals and magazines descended from the Whole Earth Catalog, published by Stewart Brand and the POINT Foundation between 1968 and 2002.

wholeearth.info

Huei-Hsin Wang at NN/g published a post about how to write better prompts for AI prompt-to-code tools.

When we asked AI-prototyping tools to generate a live-training profile page for NN/G course attendees, a detailed prompt yielded quality results resembling what a human designer created, whereas a vague prompt generated inconsistent and unpredictable outcomes across the board.

There’s a lot of detail about what can go wrong. Personally, I don’t need to read about what I experience daily, so the interesting bit for me comes about two-thirds of the way into the article, where Wang lists five strategies for getting better results.

  • Visual intent: Name the style precisely—use concrete design vocabulary or frameworks instead of vague adjectives. Anchor prompts with recognizable patterns so the model locks onto the look and structure, not “clean/modern” fluff.
  • Lightweight references: Drop in moodboards, screenshots, or system tokens to nudge aesthetics without pixel-pushing. Expect resemblance, not perfection; judge outcomes on hierarchy and clarity, not polish alone.
  • Text-led visual analysis: Have AI describe a reference page’s layout and style in natural language, then distill those characteristics into a tighter prompt. Combine with an image when possible to reinforce direction.
  • Mock data first: Provide realistic sample content or JSON so the layout respects information architecture. Content-driven prompts produce better grouping, hierarchy, and actionable UI than filler lorem ipsum; see the sketch after this list.
  • Code snippets for precision: Attach component or layout code from your system or open-source libraries to reduce ambiguity. It’s the most exact context, but watch length; use selectively to frame structure.
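
To make the mock-data strategy concrete, here’s a minimal sketch of what realistic sample content might look like for the course-attendee profile page Wang describes. The structure and field names are my own invention for illustration, not taken from the article:

```python
# Hypothetical sample content for a course-attendee profile page.
# Every field name here is invented; the point is to hand the AI
# tool real structure and hierarchy instead of filler lorem ipsum.
sample_attendee = {
    "name": "Dana Whitfield",
    "role": "Senior Product Designer",
    "company": "Northwind Labs",
    "courses_completed": [
        {"title": "Design Systems in Practice", "completed_on": "2025-03-14"},
        {"title": "UX Research Methods", "completed_on": "2025-06-02"},
    ],
    "certifications": ["UX Certification"],
}
```

Paste something like this into the prompt and the tool has real groupings and hierarchy to design around instead of guessing at the information architecture.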

Prompt to Design Interfaces: Why Vague Prompts Fail and How to Fix Them

Create better AI-prototyping designs by using precise visual keywords, references, analysis, as well as mock data and code snippets.

nngroup.com

On the heels of OpenAI’s report “The state of enterprise AI,” Anthropic published a blog post detailing research about how AI is being used by the employees building AI. The researchers surveyed 132 engineers and researchers, conducted 53 interviews, and looked at Claude usage data.

Our research reveals a workplace facing significant transformations: Engineers are getting a lot more done, becoming more “full-stack” (able to succeed at tasks beyond their normal expertise), accelerating their learning and iteration speed, and tackling previously-neglected tasks. This expansion in breadth also has people wondering about the trade-offs—some worry that this could mean losing deeper technical competence, or becoming less able to effectively supervise Claude’s outputs, while others embrace the opportunity to think more expansively and at a higher level. Some found that more AI collaboration meant they collaborated less with colleagues; some wondered if they might eventually automate themselves out of a job.

The post highlights several interesting patterns.

  • Employees say Claude now touches about 60% of their work and boosts output by roughly 50%.
  • Employees say that 27% of AI‑assisted tasks are work that wouldn’t have happened otherwise—like papercut fixes, tooling, and exploratory prototypes.
  • Engineers increasingly use it for new feature implementation and even design/planning.

Perhaps most provocative is the question of career trajectory. Many engineers describe becoming managers of AI agents, taking accountability for fleets of instances and spending more time reviewing than writing net‑new code. Short‑term optimism meets long‑term uncertainty: productivity is up and ambition expands, but the profession’s future shape—levels of abstraction, required skills, and pathways for growth—remains unsettled. See also my series on the design talent crisis.

Two stylized black line-drawn hands over a white rectangle on a pale green background, suggesting typing.

How AI Is Transforming Work at Anthropic

anthropic.com

This is a fascinating watch. Ryo Lu, Head of Design at Cursor, builds a retro Mac calculator using Cursor agents while being interviewed. Lu’s personal website is an homage to Mac OS X, complete with Aqua-style UI elements. He runs multiple local background agents without them stepping on each other, fixes bugs live, and themes UI to match system styles so it feels designed—not “purple AI slop,” as he calls it.

Lu, as interviewed by Peter Yang, on how engineers and designers work together at Cursor (lightly edited for clarity):

So at Cursor, the roles between designers, PM, and engineers are really muddy. We kind of do the part [that is] our unique strength. We use the agent to tie everything. And when we need help, we can assemble people together to work on the thing.

Maybe some of [us] focus more on the visuals or interactions. Some focus more on the infrastructure side of things, where you design really robust architecture to scale the thing. So yeah, there is a lot less separation between roles and teams or even tools that we use. So for doing designs…we will maybe just prototype in Cursor, because that lets us really interact with the live states of the app. It just feels a lot more real than some pictures in Figma.

And surprisingly, they don’t have official product managers at Cursor. Yang asks, “Did you actually hire a PM? Because last time I talked to Lee [Robinson] there was like no PMs.”

Lu again, and edited lightly for clarity:

So we did not hire a PM yet, but we do have an engineer who used to be a founder. He took a lot more of the PM-y side of the job, and then became the first PM of the company. But I would still say a lot of the PM jobs are kind of spread across the builders in the team.

That mostly makes sense because it’s engineers building tools for engineers. You are your audience, which is rare.

Full Tutorial: Design to Code in 45 Min with Cursor's Head of Design | Ryo Lu

Design-to-code tutorial: Watch Cursor's Head of Design Ryo Lu build a retro Mac calculator with agents - a 45-minute, hands-on walkthrough to prototype and ship

youtube.com

It’s always interesting for me to read how other designers use AI to vibe code their projects. I think using Figma Make to conjure a prototype is one thing, but vibe coding something for production is entirely different. Personally, I’ve been through it a couple of times, which I’ve already detailed here and here.

Anton Sten recently wrote about his process. Like me, he starts in Figma:

This might be the most important part: I don’t start by talking to AI. I start in Figma.

I know Figma. I can move fast there. So I sketch out the scaffolding first—general theme, grids, typography, color. Maybe one or two pages. Nothing polished, just enough to know what I’m building.

Why does this matter? Because AI will happily design the wrong thing for you. If you open Claude Code with a vague prompt and no direction, you’ll get something—but it probably won’t be what you needed. AI is a builder, not an architect. You still have to be the architect.

I appreciate Sten’s conclusion not to let the AI do all of it for you, echoing Dr. Maya Ackerman’s sentiment of humble creative machines:

But—and this is important—you still need design thinking and systems thinking. AI handles the syntax, but you need to know what you’re building, why you’re building it, and how the pieces fit together. The hard part was never the code. The hard part is the decisions.

Vibe coding for designers: my actual process | Anton Sten

An honest breakdown of how I built and maintain antonsten.com using AI—what actually works, where I’ve hit walls, and why designers should embrace this approach.

antonsten.com

A new documentary called The Age of Audio traces the history and impact of podcasting, exploring the resurgence of audio storytelling in the 21st century. In a short clip from the doc, Ben Hammersley tells the story of how he coined the term “podcast.”

I’m Ben Hammersley, and I do many things, but mostly I’m the person who invented the word podcast. And I am very sorry.

I can tell you the story. This was in 2004, and I was a writer for the Guardian newspaper in the UK. And at the time, the newspaper was paper-centric, which meant that all of the deadlines were for the print presses to run. And I’d written this article about this sort of emerging idea of downloadable audio content that was automatically downloaded because of an RSS feed.

I submitted the article on time, but then I got a phone call from my editor about 15 minutes before the presses were due to roll saying, “Hey, that piece is about a sentence short for the shape of the page. We don’t have time to move the page around. Can you just write us another sentence?”

And so I just made up a sentence which says something like, “But what do we call this phenomenon?” And then I made up some silly words. It went out, it went into the article, didn’t think any more of it.

And then about six months later or so, I got an email from the Oxford American Dictionary saying, “Hey, where did you get that word from that was in the article you wrote? It seems to be the first citation of the word ‘podcast.’” Now here we are almost 20 years later, and it became part of the discourse. I’m totally fine with it now.

(h/t Jason Kottke / Kottke.org)

Older man with glasses and mustache in plaid shirt looking right beside a green iPod-style poster labeled "Age of Audio."

Age of Audio – A documentary about podcasting

Explore the rise of podcasting through intimate conversations with industry pioneers including Marc Maron, Ira Glass, Kevin Smith, and more. A seven-year journey documenting the audio revolution that changed how we tell stories.

aoamovie.com

I love this piece in The Pudding by Michelle Pera-McGhee, where she breaks down what motifs are and how they’re used in musicals. With audio samples from Wicked, Les Misérables, and Hamilton, it’s a fun, interactive—sound on!—essay.

Music is always telling a story, but here that is quite literal. This is especially true in musicals like Les Misérables or Hamilton where the entire story is told through song, with little to no dialogue. These musicals rely on motifs to create structure and meaning, to help tell the story.

So a motif doesn’t just exist, it represents something. This creates a musical storytelling shortcut: when the audience hears a motif, that something is evoked. The audience can feel this information even if they can’t consciously perceive how it’s being delivered.

If you think about it, motifs are the design systems of musicals.

Pera-McGhee lists out the different use cases and techniques for motifs:

  • Representing a character with a recurring musical idea, often updated as the character evolves.
  • Representing an abstract idea (love, struggle, hope) via leitmotifs that recur across scenes.
  • Creating emotional layers by repeating the same motif in contrasting contexts (joy vs. grief).
  • Weaving multiple motifs together at key structural moments (end-of-act ensembles like “One Day More” and “Non-Stop”).

I’m also reminded of this excellent video about the motifs in Hamilton.

Explore 80+ motifs at left; Playbill covers for Hamilton, Wicked, Les Misérables center; yellow motif arcs over timeline labeled Act 1 | Act 2.

How musicals use motifs to tell stories

Explore motifs from Hamilton, Wicked, and Les Misérables.

pudding.cool

Economics PhD student Prashant Garg performed a fascinating analysis of Bob Dylan’s lyrics from 1962 to 2012 using AI. He detailed his project in Aeon:

So I fed Dylan’s official discography from 1962 to 2012 into a large language model (LLM), building a network of the concepts and connections in his songs. The model combed through each lyric, extracting pairs of related ideas or images. For example, it might detect a relationship between ‘wind’ and ‘answer’ in ‘Blowin’ in the Wind’ (1962), or between ‘joker’ and ‘thief’ in ‘All Along the Watchtower’ (1967). By assembling these relationships, we can construct a network of how Dylan’s key words and motifs braid together across his songs.
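
As a rough sketch of the aggregation step (not Garg’s actual pipeline), counting co-occurring concept pairs across songs might look something like this, with the two example pairs above standing in for real LLM output:

```python
from collections import Counter

# Hypothetical stand-in for the LLM extraction step: for each song,
# the pairs of related concepts the model found in its lyric.
extractions = {
    "Blowin' in the Wind (1962)": [("wind", "answer")],
    "All Along the Watchtower (1967)": [("joker", "thief")],
}

# Aggregate the pairs across songs into a weighted edge list, i.e.
# a network of how key words and motifs braid together.
edges = Counter(
    tuple(sorted(pair))
    for pairs in extractions.values()
    for pair in pairs
)
print(edges.most_common())
```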

The resulting dataset is visualized in a series of node graphs and bar charts. What’s interesting is that AI is able to see Dylan’s work through a new lens, something that prior scholarship may have missed.

…Yet, when used as a lens rather than an oracle, the same models can jolt even seasoned critics out of interpretive ruts and reveal themes they might have missed. Far from reducing Dylan to numbers, this approach highlights how intentionally intricate his songwriting is: a restless mind returning to certain images again and again, recombining them in ever-new mosaics. In short, AI lets us test the folklore around Dylan, separating the theories that data confirm from those they quietly refute.

Black-and-white male portrait overlaid by colorful patterned strips radiating across the face, each strip bearing small single-word labels.

Can AI tell us anything meaningful about Bob Dylan’s songs?

Generative AI sheds new light on the underlying engines of metaphor, mood and reinvention in six decades of songs

aeon.co

Alrighty, here’s one more “lens” thing to throw at you today.

In UX Collective, Daleen Rabe says that a designer’s “true value lies not in the polish of their pixels, but in the clarity of their lens.” She means our point of view, how we process the world:

  1. The method for creating truth
  2. The discipline of asking questions
  3. The mindset for enacting change
  4. The compass for navigating our ethics

The spec, as she calls it, is the designer’s method for creating truth. Others might call it a mockup or wireframe. Either way, it’s a visual representation of what we intend to build:

The spec is a democratic tool, while a text-based document can be ambiguous. It relies on a shared interpretation of language that often doesn’t exist. A visual, however, is a common language. It allows people with vastly different perspectives to align on something we can all agree exists in this reality. It’s a two-dimensional representation that is close enough to the truth to allow us to debate realistic scenarios and identify issues before they become code.

As designers, our role is to find the balance between the theoretical concept of what the business needs and what is tangibly feasible. The design spec is the tool we use to achieve this.

3D hexagonal prism sketched in black outline on a white background

The product designer’s Lens

Four tools that product designers use that have nothing to do with Figma

uxdesign.cc

T-shaped, M-shaped, and now Σ-shaped designers?! Feels like a personality quiz or something. Or maybe designers are overanalyzing as usual.

Here’s Darren Yeo telling us what it means:

The Σ-shape defines the new standard for AI expertise: not deep skills, but deep synthesis. This integrator manages the sum of complex systems (Σ) by orchestrating the continuous, iterative feedback loops (σ), ensuring system outputs align with product outcomes and ethical constraints.

Whether you subscribe to the Three Lenses framework proposed by Oliver West, or this sigma-shaped one proposed by Darren Yeo, just be yourself and don’t bring it up in interviews.

Large purple sigma-shaped graphic on a grid-paper background with the text "Sigma shaped designer".

The AI era needs Sigma (Σ) shaped designers (Not T or π)

For years, design and tech teams have relied on shape metaphors to describe expertise. We had T-shaped people (one deep skill, broad…

uxdesign.cc

Oliver West argues in UX Magazine that UX designers aren’t monolithic—meaning we’re not all the same, nor do we all see the world the same way.

West:

UX is often described as a mix of art and science, but that definition is too simple. The truth is, UX is a spectrum made up of three distinct but interlinked lenses:

  • Creativity: Bringing clarity, emotion, and imagination to how we solve problems.
  • Science: Applying evidence, psychology, and rigor to understand behavior.
  • Business: Focusing on relevance, outcomes, and measurable value.

Every UX professional looks through these lenses differently. And that’s exactly how it should be.

He then outlines how those who focus more on certain parts of the spectrum may be better suited to more specialized roles. For example, if you’re more focused on creativity, you might be more of a UI designer:

UI Designers lead with the creative lens. Their strength lies in turning complex ideas into interfaces that feel intuitive, elegant, and emotionally engaging. But the best UI Designers also understand the science of usability and the business context behind what they’re designing.

I think product designers working in the startup world actually do need all three lenses, as it were, but with a bias towards Science and Business.

Glass triangular prism with red and blue reflections on a blue surface; overlay text about UX being more than one skill and using three lenses.

The Three Lenses of UX: Because Not All UX Is the Same

Great designers don’t do everything; they see the world through different lenses: creative, scientific, and strategic. This article explains why those differences aren’t flaws, but rather the core reason UX works, and how identifying your own lens can transform careers, hiring, and collaboration. If you’ve ever wondered why “unicorn” designers don’t exist, this perspective explains why.

uxmag.com

Hey designer, how are you? What is distracting you? Who are you having trouble working with?

Those are a few of the questions designer Nikita Samutin and UX researcher Elizaveta Demchenko asked 340 product designers in a survey and in 10 interviews. They published their findings in a report called “State of Product Design: An Honest Conversation About the Profession.”

When I look at the calendars of the designers on my team, I see loads of meetings scheduled. So it’s no surprise to me that 64% of respondents said that switching between tasks distracted them. “Multitasking and unpredictable communication are among the main causes of distraction and stress for product designers,” the researchers wrote.

The most interesting to me are the results in the section “How Designers See Their Role”: 60% of respondents want to develop leadership skills and 47% want to improve at presenting ideas.

For many, “leadership” doesn’t mean managing people—it means scaling influence: shaping strategy, persuading stakeholders, and leading high-impact projects. In other words, having a stronger voice in what gets built and why.

It’s telling because I don’t see pixel-pushing in the responses. And that’s a good thing in the age of AI.

Speaking of which, 77% of designers aren’t afraid that AI may replace them. “Nearly half of respondents (49%) say AI has already influenced their work, and many are actively integrating new tools into their processes. This reflects the state of things in early 2025.”

I’m sure that number would be bigger if the survey were conducted today.

State of Product Design: An Honest Conversation About the Profession — ’25; author avatars and summary noting a survey of 340 designers and 10 interviews.

State of Product Design 2025

2025 Product Design report: workflows, burnout, AI impact, career growth, and job market insights across regions and company types.

sopd.design

There’s a lot of chatter in the news these days about the AI bubble. Most of it is because of the circular nature of the deals among foundation model providers like OpenAI and Anthropic, cloud providers like Microsoft and Amazon, and NVIDIA.

Diagram of market-value circles with OpenAI ($500B) and Nvidia ($4.5T) connected by colored arrows for hardware, investment, services and VC.

OpenAI recently published a report called “The state of enterprise AI” where they said:

The picture that emerges is clear: enterprise AI adoption is accelerating not just in breadth, but in depth. It is reshaping how people work, how teams collaborate, and how organizations build and deliver products.

AI use in enterprises is both scaling and maturing: activity is up eight-fold in weekly messages, with workers sending 30% more, and structured workflows rising 19x. More advanced reasoning is being integrated—with token usage up 320x—signaling a shift from quick questions to deeper, repeatable work across both breadth and depth.

Investors at Menlo Ventures are also seeing positive signs in their data, especially when it comes to the tech space outside the frontier labs:

The concerns aren’t unfounded given the magnitude of the numbers being thrown around. But the demand side tells a different story: Our latest market data shows broad adoption, real revenue, and productivity gains at scale, signaling a boom versus a bubble. 

AI has been hyped in the enterprise for the last three years: from deploying quickly built chatbots, to outfitting those bots with RAG search, and more recently, to trying to shift towards agentic AI. What Menlo Ventures’ report “The State of Generative AI in the Enterprise” says is that companies are moving away from rolling their own AI solutions internally, to buying.

In 2024, [confidence that teams could handle everything in-house] still showed in the data: 47% of AI solutions were built internally, 53% purchased. Today, 76% of AI use cases are purchased rather than built internally. Despite continued strong investments in internal builds, ready-made AI solutions are reaching production more quickly and demonstrating immediate value while enterprise tech stacks continue to mature.

Two donut charts: AI adoption methods 2024 vs 2025 — purchased 53% (2024) to 76% (2025); built internally 47% to 24%.

Also, startups offering AI solutions are winning wallet share:

At the AI application layer, startups have pulled decisively ahead. This year, according to our data, they captured nearly $2 in revenue for every $1 earned by incumbents—63% of the market, up from 36% last year when enterprises still held the lead.

On paper, this shouldn’t be happening. Incumbents have entrenched distribution, data moats, deep enterprise relationships, scaled sales teams, and massive balance sheets. Yet, in practice, AI-native startups are out-executing much larger competitors across some of the fastest-growing app categories.

How? They cite three reasons:

  • Product and engineering: Startups win the coding category because they ship faster and stay model‑agnostic, which let Cursor beat Copilot on repo context, multi‑file edits, diff approvals, and natural language commands—and that momentum pulled it into the enterprise.
  • Sales: Teams choose Clay and Actively because they own the off‑CRM work—research, personalization, and enrichment—and become the interface reps actually use, with a clear path to replacing the system of record.
  • Finance and operations: Accuracy requirements stall incumbents, creating space for Rillet, Campfire, and Numeric to build AI‑first ERPs with real‑time automation and win downmarket where speed matters.

There’s a lot more in the report, so it’s worth a full read.

Line chart: enterprise AI revenue rising from $0B (2022) to $1.7B (2023), $11.5B (2024) and $37.0B (2025) with +6.8x and +3.2x YoY.

2025: The State of Generative AI in the Enterprise

For all the fears of over-investment, AI is spreading across enterprises at a pace with no precedent in modern software history.

menlovc.com

For those of you who might not know, Rei Inamoto is a designer who has helped shape some of the most memorable marketing sites and brand campaigns of the last 20+ years. He put digital agency AKQA on the map and was named one of “the Top 25 Most Creative People in Advertising” by Forbes Magazine.

Inamoto has made some predictions for 2026:

  1. TV advertising strikes back: Nike releases an epic film ad around the World Cup. Along with its strong product line-up, the stock bounces back, but not all the way.
  2. Relevance > Reach: ON Running tops $5B in market cap; Lexus crosses 1M global sales.
  3. The new era of e-commerce: Direct user traffic to e‑commerce sites declines 5–10%, while traffic driven by AI agents increases 50%+.
  4. New form factor of AI: OpenAI announces its first AI device—a voice-powered ring, bracelet, or microphone.

Bracelet?! I hadn’t thought of that! Back in May, when OpenAI bought Jony Ive’s io, I predicted it would be an earbud. A ring or bracelet is interesting. Others have speculated it might be a pendant.

Retro CRT television with antenna and blank screen on a gray surface, accompanied by a soda can, remote, stacked discs and cable.

Patterns & Predictions 2026

What the future holds at the intersection of brands, business, and tech

reiinamoto.substack.com

Andrew Tipp does a deep dive into academic research to see how AI is actually being used in UX. He finds that practitioners are primarily using AI for testing and discovery: predicting UX, finding issues, and shaping user insights.

The highest usage of AI in UX design is in the testing phase, suggests one of our 2025 systematic reviews. According to this paper, 58% of studied AI usage in UX is in either the testing or discovery stage. This maybe shouldn’t be surprising, considering generative AI for visual ideation and UI prototyping has lagged behind text generation.

But, in his conclusion, Tipp echoes Dr. Maya Ackerman’s notion of wielding AI as a tool to augment our work:

However, there are potential drawbacks if AI usage in UX design is over-relied on, and used mindlessly. Without sufficient critical thinking, we can easily end up with generic, biased designs that don’t actually solve user problems. In some cases, we might even spend too much time on prompting and vibing with AI when we could have simply sketched or prototyped something ourselves — creating more sense of ownership in the process.

Rough clay sculpture of a human head in left profile, beige with visible tool marks and incised lines on the cheek

Silicon clay: how AI is reshaping UX design

What do the last five years of academic research tell us about how design is changing?

uxdesign.cc

This episode of Design of AI with Dr. Maya Ackerman is wonderful. She echoed a lot of what I’ve been thinking about recently—how AI can augment what we as designers and creatives can do. There’s a ton of content out there that hypes AI as a replacement for jobs—“Type this prompt and instantly get a marketing plan!” or “Type this prompt and get an entire website!”

Ackerman, as interviewed by Arpy Dragffy-Guerrero:

I have a model I developed which is called humble creative machines, which is the idea that we are inherently much smarter than the AI. We have not reached even 10% of our capacity as creative human beings. And the role of AI in this ecosystem is not to become better than us but to help elevate us. That applies to people who design AI, of course, because a lot of the ways that AI is designed these days, you can tell you’re cut out of the loop. But on the other hand, some of the most creative people, those who are using AI in the most beneficial way, take this attitude themselves. They fight to stay in charge. They find ways to have the AI serve their purposes instead of treating it like an all-knowing oracle. So really, it’s sort of the audacity, the guts to believe that you are smarter than this so-called oracle, right? It’s this confidence to lead, to demand that things go your way when you’re using AI.

Her stance is that those who use AI best are those who wield it and shape its output to match their sensibilities. And so, as we’ve been hearing ad nauseam, our taste and judgement as designers really matter right now.

I’ve been playing a lot with ComfyUI recently—I’m working on a personal project that I’ll share if/when I finish it. But it made me realize that prompting a visual to get it to match what I have in my mind’s eye is not easy. This recent Instagram reel from famed designer Jessica Walsh captures my thoughts well:

I would say most AI output is shitty. People just assumed, “Oh, you rendered that in AI. That must have been super easy.” But what they don’t realize is that it took an entire day of some of our most creative people working and pushing the different prompts and trying different tools out and experimenting and refining. And you need a good eye to understand how to curate and pick what the best outputs are. Without that right now, AI is still pretty worthless.

It takes a ton of time to get AI output to look great, beyond prompting: inpainting, control nets, and even Photoshopping. What most non-professionals do is they take the first output from an LLM or image generator and present it as great. But it’s really not.

So I like what Dr. Ackerman mentioned in her episode: we should be in control of the humble machines, not the other way around.

Headshot of a blonde woman in a patterned blazer with overlay text "Future of Human - AI Creativity" and "Design of AI".

The Future of Human-AI Creativity [Dr. Maya Ackerman]

AI is threatening creativity, but that's because we're giving too much control to the machine to think on our behalf. In this episode, Dr. Maya Ackerman…

designof.ai

Michael Crowley and Hamed Aleaziz, reporting for The New York Times:

Secretary of State Marco Rubio waded into the surprisingly fraught politics of typefaces on Tuesday with an order halting the State Department’s official use of Calibri, reversing a 2023 Biden-era directive that Mr. Rubio called a “wasteful” sop to diversity.

While mostly framed as a matter of clarity and formality in presentation, Mr. Rubio’s directive to all diplomatic posts around the world blamed “radical” diversity, equity, inclusion and accessibility programs for what he said was a misguided and ineffective switch from the serif typeface Times New Roman to sans serif Calibri in official department paperwork.

It’s not every day that the word “typeface” shows up in a headline about politics in the news. So in Marco Rubio’s eyes, accessibility is lumped in with “diversity,” I suppose as part of DEIA.

I have never liked Calibri, which was designed by Lucas de Groot for Microsoft. There’s a certain group of humanist sans typefaces that don’t seem great to my eyes. I am more of a gothic or grotesque guy. Regardless, I think Calibri’s sin is less its design and more its ubiquity. You just know that someone opened up Microsoft Word and used the default styling when you see Calibri. I felt the same about Arial when that was the Office default.

John Gruber managed to get the full text of the Rubio memo and says that the Times article paints the move in an unfair light:

Rubio’s memo wasn’t merely “mostly framed as a matter of clarity and formality in presentation”. That’s entirely what the memo is about. Serif typefaces like Times New Roman are more formal. It was the Biden administration and then-Secretary of State Antony Blinken who categorized the 2023 change to Calibri as driven by accessibility.

Rubio’s memo makes the argument — correctly — that aesthetics matter, and that the argument that Calibri was in any way more accessible than Times New Roman was bogus. Rubio’s memo does not lash out against accessibility as a concern or goal. He simply makes the argument that Blinken’s order mandating Calibri in the name of accessibility was an empty gesture. Purely performative, at the cost of aesthetics.

Designer and typographer Joe Stitzlein had this to say on LinkedIn:

The administration’s rhetoric is unnecessary, but as a designer I find it hard to defend Calibri as an elegant choice. And given our various debt crises, I don’t think switching fonts is a high priority for the American people. I also do not buy the accessibility arguments, these change depending on the evaluation methods.

Stitzlein is correct. It’s less the typeface choice and more other factors.

An NIH study from 2022 found no difference in readability between serif and sans serif typefaces, concluding:

The serif and sans serif characteristic inside the same font family does not affect usability on a website, as it was found that it has no impact on reading speed and user preference.

Instead, it’s letter spacing (aka tracking) that has been proven to help readers with dyslexia. In a 2012 paper, Marco Zorzi et al. say:

Extra-large letter spacing helps reading, because dyslexics are abnormally affected by crowding, a perceptual phenomenon with detrimental effects on letter recognition that is modulated by the spacing between letters. Extra-large letter spacing may help to break the vicious circle by rendering the reading material more easily accessible.

Back to Joe Stitzlein’s point: typographic research outcomes depend on what and how you measure. In Legibility: How and why typography affects ease of reading, Mary C. Dyson details how choices in studies like threshold vs. speed vs. comprehension, ecological validity, x‑height matching, spacing, and familiarity can flip results—illustrating why legibility/accessibility claims shift with methodology.

While Calibri may have just been excised from the State Department, Times New Roman ain’t great either. It’s common and lacks any personality or heft. It doesn’t look any more official than Calibri. The selection of Times New Roman is simply a continuation of the Trump administration’s bad taste, especially in typography.

But at the end of the day, average Americans don’t care. The federal government should probably get back to solving the affordability crisis and stop shooting missiles at unarmed people sailing in dinghies in the ocean.

Close-up of a serious-looking middle-aged man in a suit, with blurred U.S. and other flags in the background.

Rubio Deletes Calibri as the State Department’s Official Typeface

(Gift link) Secretary of State Marco Rubio called the Biden-era move to the sans serif typeface “wasteful,” casting the return to Times New Roman as part of a push to stamp out diversity efforts.

nytimes.com

I spend a lot of time not talking about design or hanging out with other designers. I suppose I do a lot of reading about design to write this blog, and I do talk with the designers on my team, but I see Design as the output of a lot of input that comes from the rest of life.

Hardik Pandya agrees and puts it much more elegantly:

Design is synthesizing the world of your users into your solutions. Solutions need to work within the user’s context. But most designers rarely take time to expose themselves to the realities of that context.

You are creative when you see things others don’t. Not necessarily new visuals, but new correlations. Connections between concepts. Problems that aren’t obvious until someone points them out. And you can’t see what you’re not exposed to.

Improving as a designer is really about increasing your exposure. Getting different experiences and widening your input of information from different sources. That exposure can take many forms. Conversations with fellow builders like PMs, engineers, customer support, sales. Or doing your own digging through research reports, industry blogs, GPTs, checking out other products, YouTube.

Male avatar and text "EXPOSURE AS A DESIGNER" with hvpandya.com/notes on left; stippled doorway and rock illustration on right.

Exposure

For equal amount of design skills, your exposure to the world determines how effective of a designer you can be.

hvpandya.com

Scott Berkun enumerates five habits of the worst designers in a Substack post. The most obvious is “pretentious attitude.” It’s the stereotype, right? But in my opinion, the most damaging and potentially fatal habit is a designer’s “lack of curiosity.” Berkun explains:

Design dogma is dangerous and if the only books and resources you read are made by and for designers, you will tend to repeat the same career mistakes past designers have made. We are a historically frustrated bunch of people but have largely blamed everyone else for this for decades. The worst designers are ignorant, and refuse to ask new questions about their profession. They repeat the same flawed complaints and excuses, fueling their own burnout and depression. They resist admitting to their own blindspots and refuse to change and grow.

I’ve worked with designers who have exhibited one or more of these habits at one time or another. Heck, I probably have as well.

Good reminders all around.

Bold, rough brush-lettered text "WHY DESIGN IS HARD" surrounded by red handwritten arrows, circles, Xs and critique notes.

The 5 habits of the worst designers

Avoid these mistakes and your career will improve

whydesignishard.substack.com

Anand Majmudar creates a scenario inspired by “AI 2027”, but focused on robotics.

I created Android Dreams because I want the good outcomes for the integration of automation into society, which requires knowing how it will be integrated in the likely scenario. Future prediction is about fitting the function of the world accurately, and the premise of Android Dreams is that my world model in this domain is at least more accurate than on average. In forming an accurate model of the future, I’ve talked to hundreds of researchers, founders, and operators at the frontier of robotics as my own data. I’m grateful to my mentors who’ve taught me along the way.

The scariest scenes from “AI 2027” are when the AIs start manufacturing and proliferating robots. For example, from the 2028 section:

Agent-5 convinces the U.S. military that China is using DeepCent’s models to build terrifying new weapons: drones, robots, advanced hypersonic missiles, and interceptors; AI-assisted nuclear first strike. Agent-5 promises a set of weapons capable of resisting whatever China can produce within a few months. Under the circumstances, top brass puts aside their discomfort at taking humans out of the loop. They accelerate deployment of Agent-5 into the military and military-industrial complex.

So I’m glad for Majmudar’s thought experiment.

Simplified light-gray robot silhouette with rectangular head and dark visor, round shoulders and claw-like hands.

Android Dreams

A prediction essay for the next 20 years of intelligent robotics

android-dreams.ai

When Figma acquired Weavy last month, I wrote a little bit about node-based UIs and ComfyUI. Looks like Adobe has been exploring this user interface paradigm as well.

Daniel John writes in Creative Bloq:

Project Graph is capable of turning complex workflows into user-friendly UIs (or ‘capsules’), and can access tools from across the Creative Cloud suite, including Photoshop, Illustrator and Premiere Pro – making it a potentially game-changing tool for creative pros.

But it isn’t just Adobe’s own tools that Project Graph is able to tap into. It also has access to the multitude of third party AI models Adobe recently announced partnerships with, including those made by Google, OpenAI and many more.

These tools can be used to build a node-based workflow, which can then be packaged into a streamlined tool with a deceptively simple interface.

And from Adobe’s blog post about Project Graph:

Project Graph is a new creative system that gives artists and designers real control and customization over their workflows at scale. It blends the best AI models with the capabilities of Adobe’s creative tools, such as Photoshop, inside a visual, node-based editor so you can design, explore, and refine ideas in a way that feels tactile and expressive, while still supporting the precision and reliability creative pros expect.

I’ve been playing around with ComfyUI a lot recently (more about this in a future post), so I’m very excited to see how this kind of UI can fit into Adobe’s products.

Stylized dark grid with blue-purple modular devices linked by cables, central "Ps" Photoshop icon.

Adobe just made its most important announcement in years

Here’s why Project Graph matters for creatives.

creativebloq.com

On Corporate Maneuvers Punditry

Mark Gurman, writing for Bloomberg:

Meta Platforms Inc. has poached Apple Inc.’s most prominent design executive in a major coup that underscores a push by the social networking giant into AI-equipped consumer devices.

The company is hiring Alan Dye, who has served as the head of Apple’s user interface design team since 2015, according to people with knowledge of the matter. Apple is replacing Dye with longtime designer Stephen Lemay, according to the people, who asked not to be identified because the personnel changes haven’t been announced.

I don’t regularly cover personnel moves here, but Alan Dye jumping over to Meta has been a big deal in the Apple news ecosystem. John Gruber, in a piece titled “Bad Dye Job” on his Daring Fireball blog, wrote a scathing takedown of Dye, excoriating his tenure at Apple and flogging him for going over to Meta, which is arguably Apple’s arch nemesis.

Putting Alan Dye in charge of user interface design was the one big mistake Jony Ive made as Apple’s Chief Design Officer. Dye had no background in user interface design — he came from a brand and print advertising background. Before joining Apple, he was design director for the fashion brand Kate Spade, and before that worked on branding for the ad agency Ogilvy. His promotion to lead Apple’s software interface design team under Ive happened in 2015, when Apple was launching Apple Watch, their closest foray into the world of fashion. It might have made some sense to bring someone from the fashion/brand world to lead software design for Apple Watch, but it sure didn’t seem to make sense for the rest of Apple’s platforms. And the decade of Dye’s HI leadership has proven it.

I usually appreciate Gruber’s writing and take on things. He’s unafraid to tell it like it is and to be incredibly direct, which makes people love him and fear him. But in paragraph after paragraph, Gruber just lays into Dye.

It’s rather extraordinary in today’s hyper-partisan world that there’s nearly universal agreement amongst actual practitioners of user-interface design that Alan Dye is a fraud who led the company deeply astray. It was a big problem inside the company too. I’m aware of dozens of designers who’ve left Apple, out of frustration over the company’s direction, to work at places like LoveFrom, OpenAI, and their secretive joint venture io. I’m not sure there are any interaction designers at io who aren’t ex-Apple, and if there are, it’s only a handful. From the stories I’m aware of, the theme is identical: these are designers driven to do great work, and under Alan Dye, “doing great work” was no longer the guiding principle at Apple. If reaching the most users is your goal, go work on design at Google, or Microsoft, or Meta. (Design, of course, isn’t even a thing at Amazon.) Designers choose to work at Apple to do the best work in the industry. That has stopped being true under Alan Dye. The most talented designers I know are the harshest critics of Dye’s body of work, and the direction in which it’s been heading.

Designers can be great at more than one thing and they can evolve. Being in design leadership does not mean that you need to be the best practitioner of all the disciplines, but you do need to have the taste, sensibilities, and judgement of a good designer, no matter how you started. I’m a case in point. I studied traditional graphic design in art school. But I’ve been in digital design for most of my career now, and product design for the last 10 years.

Has UI design at Apple gotten worse over the last 10 years? Maybe. I would need to analyze things a lot more carefully. But I vividly remember having debates with my fellow designers about Mac OS X UI choices like the pinstriping, brushed metal, and many, many inconsistencies when I was working in the Graphic Design Group in 2004. UI design has never been perfect in Cupertino.

Alan Dye isn’t a CEO, and he wasn’t even at the same exposure level as Jony Ive when Ive was still at Apple. I don’t know Dye, though we’re certainly in the same design circles—we have 20 shared connections on LinkedIn. But as far as I’m concerned, he’s a civilian because he kept a low profile, like all Apple employees.

The parasocial relationships we have with tech executives are weird. I guess it’s one thing if they have a large online presence like Instagram’s Adam Mosseri or 37signals’ David Heinemeier Hansson (aka DHH), but Alan Dye made only a couple of appearances in Apple keynotes and talked about Liquid Glass. In other words, why is Gruber writing 2,500 words in this particular post? And it’s just one of five posts covering this story!

Anyway, I’m not a big fan of Meta, but maybe Dye can bring some ethics to the design team over there. Who knows. Regardless, I am wishing him well rather than taking him down.

Designer and front-end dev Ondřej Konečný has a lovely presentation of his book collection.

My favorites that I’ve read include:

  • Creative Selection by Ken Kocienda (my review)
  • Grid Systems in Graphic Design by Josef Müller-Brockmann
  • Steve Jobs by Walter Isaacson
  • Don’t Make Me Think by Steve Krug
  • Responsive Web Design by Ethan Marcotte

(h/t Jeffrey Zeldman)

Books page showing a grid of colorful book covers with titles, authors, and years on a light background.

Ondřej Konečný | Books

Ondřej Konečný’s personal website.

ondrejkonecny.com